feat: impl NgramIndex for FuseTable, improve like query performance #17852
base: main
Conversation
Kould force-pushed from 63f8325 to b107792 (Compare)
Kould force-pushed from b107792 to e330d25 (Compare)
Kould force-pushed from ba2213e to 8aabb9e (Compare)
@@ -476,7 +476,7 @@ idx2 INVERTED books(title, author, description)index_record='"basic"' tokenizer=
query III
select row_count, bloom_filter_size, inverted_index_size from fuse_block('test_index', 't1')
----
-10 438 2390
+10 439 2390
add a bit to distinguish whether the filter is Xor8 or Bloom
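To illustrate what such a discriminant could look like, here is a minimal hedged sketch in Rust. The enum name, tag values, and layout are hypothetical, not Databend's actual on-disk format, and linking the tag to the 438 → 439 size change in the test above is only one plausible reading.

```rust
/// Hypothetical: a one-byte tag written in front of the serialized filter body
/// so the reader knows which filter type follows. Not the actual format.
enum ColumnFilter {
    Xor8(Vec<u8>),
    NgramBloom(Vec<u8>),
}

fn serialize(filter: &ColumnFilter) -> Vec<u8> {
    let (tag, body) = match filter {
        ColumnFilter::Xor8(body) => (0u8, body),
        ColumnFilter::NgramBloom(body) => (1u8, body),
    };
    let mut out = Vec::with_capacity(1 + body.len());
    out.push(tag); // the extra byte distinguishing Xor8 from an ngram Bloom filter
    out.extend_from_slice(body);
    out
}

fn main() {
    // A 438-byte body grows to 439 bytes once the tag byte is added
    // (consistent with the size change in the test above, if this reading is right).
    let filter = ColumnFilter::Xor8(vec![0u8; 438]);
    assert_eq!(serialize(&filter).len(), 439);
    let _ngram = ColumnFilter::NgramBloom(vec![0u8; 64]);
}
```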
Kould force-pushed from 8aabb9e to c88200b (Compare)
@@ -21,7 +21,7 @@ insert into t values (1)
query III
select block_count, row_count, index_size from fuse_snapshot('db_09_0006', 't') order by row_count desc limit 1
----
1 1 0
Why is the index size 0? The bloom filter is created by default, so after inserting a row of data the index size should, in theory, always be greater than 1.
I hereby agree to the terms of the CLA available at: https://docs.databend.com/dev/policies/cla/
Summary
part of: #17724
Implement the Ngram Index to improve the retrieval speed of LIKE queries.
How it works: String-type values are split into overlapping substrings (n-grams), and each n-gram is inserted into a BloomFilter. When a LIKE query runs, the pattern is split into n-grams the same way; if any of those n-grams is definitely absent from a block's BloomFilter, that block cannot contain a match and is filtered out in advance.
As a result, inserts take longer when the Ngram Index is enabled, since every string has to be n-grammed (the cost depends on the length of each string and the total number of rows).
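To make the pruning idea concrete, here is a minimal self-contained Rust sketch, not Databend's implementation: the toy bloom filter, the hash scheme, the gram size of 3, and the sample strings are all illustrative assumptions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Split a string into character-level n-grams.
/// Strings shorter than `n` are kept whole (a simplification for the sketch).
fn ngrams(s: &str, n: usize) -> Vec<String> {
    let chars: Vec<char> = s.chars().collect();
    if chars.len() < n {
        return vec![s.to_string()];
    }
    chars.windows(n).map(|w| w.iter().collect()).collect()
}

/// Toy bloom filter: `m` bits, `k` probe positions derived by double hashing.
struct Bloom {
    bits: Vec<bool>,
    k: u64,
}

impl Bloom {
    fn new(m: usize, k: u64) -> Self {
        Bloom { bits: vec![false; m], k }
    }

    /// Probe positions for one item (two seeded hashes combined).
    fn positions(&self, item: &str) -> Vec<usize> {
        let m = self.bits.len() as u64;
        let mut h1 = DefaultHasher::new();
        item.hash(&mut h1);
        let h1 = h1.finish();
        let mut h2 = DefaultHasher::new();
        (item, 0x9e37_79b9_u64).hash(&mut h2);
        let h2 = h2.finish() | 1; // keep the step odd
        (0..self.k)
            .map(|i| (h1.wrapping_add(i.wrapping_mul(h2)) % m) as usize)
            .collect()
    }

    fn insert(&mut self, item: &str) {
        for i in self.positions(item) {
            self.bits[i] = true;
        }
    }

    /// False positives are possible; false negatives are not.
    fn maybe_contains(&self, item: &str) -> bool {
        self.positions(item).iter().all(|&i| self.bits[i])
    }
}

fn main() {
    // Index time: the n-grams of every row in a block go into that block's filter.
    let block_rows = ["great product, works well", "arrived late but intact"];
    let mut filter = Bloom::new(4096, 4);
    for row in block_rows {
        for gram in ngrams(row, 3) {
            filter.insert(&gram);
        }
    }

    // Query time: for `review_body LIKE '%broken%'`, split the pattern the same
    // way; if any n-gram is definitely absent, the block cannot match and is pruned.
    let pattern = "broken";
    let prune_block = ngrams(pattern, 3).iter().any(|g| !filter.maybe_contains(g));
    println!("prune this block for LIKE '%{pattern}%': {prune_block}");
}
```

The design point is that the filter can only prove absence: if every n-gram of the pattern might be present, the block still has to be scanned, so false positives cost time but never correctness.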
Storage
The Ngram Index is essentially the Bloom Index applied to n-gram segments of the data. It therefore shares its Meta with the Bloom Index and is written to the same storage file.
Benchmark
The benchmark uses the amazon_reviews dataset: 39.2 GB in total, of which the review_body column is 17 GB.
Using this SQL to test the Ngram Index, the total BloomFilter file size is 1.5 GB.
Query:
Ngram:
Not Ngram:
Insert:
Ngram:
Not Ngram:
Tips: The factors that affect the insertion time are as follows:
The parameters in this benchmark were therefore chosen to favor query performance; in real applications, users need to weigh insertion speed against filtering effectiveness.
Tests
Type of change
This change is