4.4 Article

Compressed suffix arrays and suffix trees with applications to text indexing and string matching

Journal

SIAM JOURNAL ON COMPUTING
Volume 35, Issue 2, Pages 378-407

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/S0097539702402354

Keywords

compression; text indexing; text retrieval; compressed data structures; suffix arrays; suffix trees; string searching; pattern matching

Abstract

The proliferation of online text, such as that found on the World Wide Web and in online databases, motivates the need for space-efficient text indexing methods that support fast string searching. We model this scenario as follows: consider a text T consisting of n symbols drawn from a fixed alphabet Sigma. The text T can be represented in n lg |Sigma| bits by encoding each symbol with lg |Sigma| bits. The goal is to support fast online queries for searching any string pattern P of m symbols, with T being fully scanned only once, namely, when the index is created at preprocessing time.

The text indexing schemes published in the literature are greedy in terms of space usage: they require O(n lg n) additional bits of space in the worst case. For example, in the standard unit-cost RAM, suffix trees and suffix arrays need Omega(n) memory words, each of Omega(lg n) bits. These indexes are larger than the text itself by a multiplicative factor of Omega(lg_|Sigma| n), which is significant when Sigma is of constant size, such as in ASCII or UNICODE. On the other hand, these indexes support fast searching, either in O(m lg |Sigma|) time or in O(m + lg n) time, plus an output-sensitive cost O(occ) for listing the occ pattern occurrences.

We present a new text index that is based upon compressed representations of suffix arrays and suffix trees. It achieves a fast O(m / lg_|Sigma| n + lg_|Sigma|^epsilon n) search time in the worst case, for any constant 0 < epsilon <= 1, using at most (epsilon^(-1) + O(1)) n lg |Sigma| bits of storage. Our result thus presents for the first time an efficient index whose size is provably linear in the size of the text in the worst case, and for many scenarios the space is actually sublinear in practice. As a concrete example, the compressed suffix array for a typical 100 MB ASCII file can require 30-40 MB or less, while the raw suffix array requires 500 MB. Our theoretical bounds improve both the time and the space of previous indexing schemes. Listing the pattern occurrences introduces a sublogarithmic slowdown factor in the output-sensitive cost, giving O(occ lg_|Sigma|^epsilon n) time as a result. When the patterns are sufficiently long, we can use auxiliary data structures in O(n lg |Sigma|) bits to obtain a total search bound of O(m / lg_|Sigma| n + occ) time, which is optimal.
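For orientation, the Python sketch below shows the classical, uncompressed baseline that the abstract compares against: a plain suffix array stored as n word-sized integer positions (Theta(n lg n) bits, i.e. the Omega(lg_|Sigma| n)-factor blowup over the n lg |Sigma|-bit text) and queried by binary search in O(m lg n) time plus O(occ) reporting. This is only an illustrative sketch of that baseline, not the compressed suffix array or suffix tree of the paper; the function names are hypothetical.

# Illustrative sketch (not from the paper): the classical uncompressed
# suffix-array baseline. It stores n integer positions, Theta(n lg n) bits,
# which the compressed index described in the abstract avoids.

def build_suffix_array(text: str) -> list[int]:
    """Suffix start positions, sorted lexicographically by suffix.
    O(n^2 lg n) comparison sort for brevity; linear-time builders exist."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text: str, sa: list[int], pattern: str) -> list[int]:
    """All start positions of `pattern` in `text`, via two binary searches
    over the suffix array: O(m lg n) comparison cost plus O(occ) reporting."""
    n, m = len(sa), len(pattern)
    # Lower bound: first suffix whose length-m prefix is >= pattern.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    # Upper bound: first suffix whose length-m prefix is > pattern.
    hi = n
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

if __name__ == "__main__":
    T = "mississippi"
    sa = build_suffix_array(T)
    print(find_occurrences(T, sa, "ssi"))   # [2, 5]
    print(find_occurrences(T, sa, "issi"))  # [1, 4]

The O(m + lg n) search time mentioned in the abstract additionally requires longest-common-prefix information alongside the suffix array; the sketch omits it for brevity.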
