ES: delimit words before ngram, optimize tokens (#487)

Before, long.tokens.with.dots.or.dashes would get edge-ngrammed up to the
ngram limit, so we'd end up with long.tokens.wit, which would then be
split, discarding "with.dots.or.dashes" completely. The fullword index
would keep the complete long token, but without any ngramming, so
incomplete searches (like "tokens") would not match it; only the full
token would.
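
As a rough sketch of the old behavior (my_ngram is the edge-ngram filter
mentioned above; the 15-character limit is an assumed value chosen to
reproduce the long.tokens.wit example, not confirmed from the settings):

    # old chain: ... -> my_ngram -> word_delimit -> ...
    # input token:        long.tokens.with.dots.or.dashes
    # after my_ngram:     l, lo, lon, ..., long.tokens.wit   (cut off at the limit)
    # after word_delimit: long, tokens, wit, and shorter fragments
    # "with", "dots", "or" and "dashes" never reach the main index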

Now, we split words before ngramming them, so the main index will
properly handle words up to the ngram limit. The fullword index will
still handle the longer words for non-ngram matching.
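
For reference, the new filter order in the main analyzer looks roughly
like this (a sketch based on the diff below; surrounding keys and exact
indentation are approximate):

    filter:
      - resolution
      - lowercase
      - word_delimit   # split on dots, dashes, etc. first
      - my_ngram       # then edge-ngram each word up to the limit
      - trim_zero
      - unique         # drop duplicate tokens (see below)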

Also optimized away duplicate tokens from the indices (since we rely on
boolean matching, not scoring) to save a couple megabytes of space.
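
A small example of what the unique filter drops (hypothetical input,
assuming unique runs last as above and removes duplicates across the
whole token stream):

    # input:         "dots and dashes and dots"
    # before unique: dots, and, dashes, and, dots, plus all their edge
    #                ngrams, many of which repeat (e.g. "d", "do", "a", "an")
    # after unique:  each distinct token is kept once
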
Authored by Anna-Maria Meriniemi on 2018-04-29 04:09:40 +03:00, committed by Arylide
parent 8f4202c098
commit 59db958977
1 changed file with 3 additions and 1 deletion

@@ -20,9 +20,10 @@ settings:
 filter:
 - resolution
 - lowercase
-- my_ngram
 - word_delimit
+- my_ngram
 - trim_zero
+- unique
 # For exact matching - simple lowercase + whitespace delimiter
 exact_analyzer:
 tokenizer: whitespace
@@ -40,6 +41,7 @@ settings:
 # Skip tokens shorter than N characters,
 # since they're already indexed in the main field
 - fullword_min
+- unique
 filter:
 my_ngram:
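
The custom filters referenced above (my_ngram, word_delimit,
fullword_min) would presumably be defined along these lines; this is a
sketch with assumed types and values, not the actual contents of the
file:

    filter:
      my_ngram:
        type: edge_ngram       # assumed: prefix ngrams of each word
        min_gram: 1            # assumed values
        max_gram: 15
      word_delimit:
        type: word_delimiter   # assumed: splits on dots, dashes, etc.
      fullword_min:
        type: length           # assumed: drops tokens shorter than the ngram limit
        min: 16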