From 5dddda8aa53a3b09b5642020f06f1904164967d1 Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Tue, 3 Sep 2024 14:30:57 +0100 Subject: [PATCH 1/5] add lowercase token filter Signed-off-by: Anton Rubin --- _analyzers/token-filters/lowercase.md | 88 +++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 _analyzers/token-filters/lowercase.md diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md new file mode 100644 index 0000000000..041dff2e7b --- /dev/null +++ b/_analyzers/token-filters/lowercase.md @@ -0,0 +1,88 @@ +--- +layout: default +title: Lowercase +parent: Token filters +nav_order: 260 +--- + +# Lowercase token filter + +The `lowercase` token filter in OpenSearch is used to limit the number of tokens that are passed through the analysis chain. + +## Parameters + +The `lowercase` token filter in OpenSearch can be configured with the following parameters: + +- `max_token_count`: Maximum number of tokens that will be generated. Default is `1` (Integer, _Optional_) +- `consume_all_tokens`: Use all token, even if result exceeds `max_token_count`. Default is `false` (Boolean, _Optional_) + + +## Example + +The following example request creates a new index named `my_index` and configures an analyzer with `lowercase` filter: + +```json +PUT my_index +{ + "settings": { + "analysis": { + "analyzer": { + "three_token_limit": { + "tokenizer": "standard", + "filter": [ "custom_token_limit" ] + } + }, + "filter": { + "custom_token_limit": { + "type": "limit", + "max_token_count": 3 + } + } + } + } +} +``` +{% include copy-curl.html %} + +## Generated tokens + +Use the following request to examine the tokens generated using the created analyzer: + +```json +GET /my_index/_analyze +{ + "analyzer": "three_token_limit", + "text": "OpenSearch is a powerful and flexible search engine." +} +``` +{% include copy-curl.html %} + +The response contains the generated tokens: + +```json +{ + "tokens": [ + { + "token": "OpenSearch", + "start_offset": 0, + "end_offset": 10, + "type": "", + "position": 0 + }, + { + "token": "is", + "start_offset": 11, + "end_offset": 13, + "type": "", + "position": 1 + }, + { + "token": "a", + "start_offset": 14, + "end_offset": 15, + "type": "", + "position": 2 + } + ] +} +``` From 167d063f24c831625883fa45e230cb6e3d5f3bed Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Tue, 3 Sep 2024 15:42:08 +0100 Subject: [PATCH 2/5] adding examples in greek to lowercase token filter #8154 Signed-off-by: Anton Rubin --- _analyzers/token-filters/index.md | 2 +- _analyzers/token-filters/lowercase.md | 46 +++++++++++---------------- 2 files changed, 19 insertions(+), 29 deletions(-) diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index f4e9c434e7..303bcd4462 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -38,7 +38,7 @@ Token filter | Underlying Lucene token filter| Description `kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. 
For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins). `length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`. `limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count. -`lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)). +[`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)). `min_hash` | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially:
1. Hashes each token in the stream.
2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.
3. Outputs the smallest hash from each bucket as a token stream. `multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens. `ngram` | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`. diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md index 041dff2e7b..573f92e284 100644 --- a/_analyzers/token-filters/lowercase.md +++ b/_analyzers/token-filters/lowercase.md @@ -7,35 +7,32 @@ nav_order: 260 # Lowercase token filter -The `lowercase` token filter in OpenSearch is used to limit the number of tokens that are passed through the analysis chain. +The `lowercase` token filter in OpenSearch is used to convert all characters in the token stream to lowercase, making searches case-insensitive. ## Parameters -The `lowercase` token filter in OpenSearch can be configured with the following parameters: - -- `max_token_count`: Maximum number of tokens that will be generated. Default is `1` (Integer, _Optional_) -- `consume_all_tokens`: Use all token, even if result exceeds `max_token_count`. Default is `false` (Boolean, _Optional_) - +The `lowercase` token filter in OpenSearch can be configured with parameter `language`. The possible options are: [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html), [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html) and [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html). Default is [Lucene’s LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html). (String, _Optional_) ## Example -The following example request creates a new index named `my_index` and configures an analyzer with `lowercase` filter: +The following example request creates a new index named `custom_lowercase_example` and configures an analyzer with `lowercase` filter with greek `language`: ```json -PUT my_index +PUT /custom_lowercase_example { "settings": { "analysis": { "analyzer": { - "three_token_limit": { + "greek_lowercase_example": { + "type": "custom", "tokenizer": "standard", - "filter": [ "custom_token_limit" ] + "filter": ["greek_lowercase"] } }, "filter": { - "custom_token_limit": { - "type": "limit", - "max_token_count": 3 + "greek_lowercase": { + "type": "lowercase", + "language": "greek" } } } @@ -49,10 +46,10 @@ PUT my_index Use the following request to examine the tokens generated using the created analyzer: ```json -GET /my_index/_analyze +GET /custom_lowercase_example/_analyze { - "analyzer": "three_token_limit", - "text": "OpenSearch is a powerful and flexible search engine." 
+ "analyzer": "greek_lowercase_example", + "text": "Αθήνα ΕΛΛΑΔΑ" } ``` {% include copy-curl.html %} @@ -63,25 +60,18 @@ The response contains the generated tokens: { "tokens": [ { - "token": "OpenSearch", + "token": "αθηνα", "start_offset": 0, - "end_offset": 10, + "end_offset": 5, "type": "", "position": 0 }, { - "token": "is", - "start_offset": 11, - "end_offset": 13, + "token": "ελλαδα", + "start_offset": 6, + "end_offset": 12, "type": "", "position": 1 - }, - { - "token": "a", - "start_offset": 14, - "end_offset": 15, - "type": "", - "position": 2 } ] } From 9a46d9afe12ac40a51ffcaa99d1151633d7092b7 Mon Sep 17 00:00:00 2001 From: AntonEliatra Date: Thu, 12 Sep 2024 11:04:17 +0100 Subject: [PATCH 3/5] Update lowercase.md Signed-off-by: AntonEliatra --- _analyzers/token-filters/lowercase.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md index 573f92e284..6a3366ae0d 100644 --- a/_analyzers/token-filters/lowercase.md +++ b/_analyzers/token-filters/lowercase.md @@ -15,7 +15,7 @@ The `lowercase` token filter in OpenSearch can be configured with parameter `lan ## Example -The following example request creates a new index named `custom_lowercase_example` and configures an analyzer with `lowercase` filter with greek `language`: +The following example request creates a new index named `custom_lowercase_example` and configures an analyzer with `lowercase` filter with Greek `language`: ```json PUT /custom_lowercase_example @@ -43,7 +43,7 @@ PUT /custom_lowercase_example ## Generated tokens -Use the following request to examine the tokens generated using the created analyzer: +Use the following request to examine the tokens generated using the analyzer: ```json GET /custom_lowercase_example/_analyze From f39b925e51a50d5840ecc18acd17aa365dd40164 Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Fri, 15 Nov 2024 14:29:31 -0500 Subject: [PATCH 4/5] Doc review Signed-off-by: Fanit Kolchina --- _analyzers/token-filters/lowercase.md | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md index 6a3366ae0d..92190bfe4a 100644 --- a/_analyzers/token-filters/lowercase.md +++ b/_analyzers/token-filters/lowercase.md @@ -7,15 +7,19 @@ nav_order: 260 # Lowercase token filter -The `lowercase` token filter in OpenSearch is used to convert all characters in the token stream to lowercase, making searches case-insensitive. +The `lowercase` token filter is used to convert all characters in the token stream to lowercase, making searches case insensitive. ## Parameters -The `lowercase` token filter in OpenSearch can be configured with parameter `language`. The possible options are: [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html), [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html) and [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html). Default is [Lucene’s LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html). (String, _Optional_) +The `lowercase` token filter can be configured with the following parameter. 
Parameter | Required/Optional | Description
:--- | :--- | :---
 `language` | Optional | Specifies a language-specific token filter to use for lowercasing. Valid values are:<br>
- [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)
- [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)
- [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html).
Default is [Lucene’s LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html). ## Example -The following example request creates a new index named `custom_lowercase_example` and configures an analyzer with `lowercase` filter with Greek `language`: +The following example request creates a new index named `custom_lowercase_example`. It configures an analyzer with a `lowercase` filter and specifies `greek` as the `language`: ```json PUT /custom_lowercase_example From b0ce7c17441ffed4be2912a005a8d6abb42f38c3 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Mon, 2 Dec 2024 08:57:47 -0500 Subject: [PATCH 5/5] Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _analyzers/token-filters/index.md | 2 +- _analyzers/token-filters/lowercase.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index 303bcd4462..ed5e984fa8 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -38,7 +38,7 @@ Token filter | Underlying Lucene token filter| Description `kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins). `length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`. `limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count. -[`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)). 
+[`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) processes the English language. To process other languages, set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)). `min_hash` | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially:
1. Hashes each token in the stream.
2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.
3. Outputs the smallest hash from each bucket as a token stream.
`multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
`ngram` | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.

diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md
index 92190bfe4a..89f0f219fa 100644
--- a/_analyzers/token-filters/lowercase.md
+++ b/_analyzers/token-filters/lowercase.md
@@ -15,7 +15,7 @@ The `lowercase` token filter can be configured with the following parameter.
Parameter | Required/Optional | Description
:--- | :--- | :---
- `language` | Optional | Specifies a language-specific token filter to use for lowercasing. Valid values are:<br>
- [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)
- [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)
- [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html).
Default is [Lucene’s LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html).
+ `language` | Optional | Specifies a language-specific token filter. Valid values are:<br>
- [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)
- [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)
- [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html).
Default is the [Lucene LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html).

## Example
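
The `language` parameter only selects a language-specific variant; when it is omitted, the filter falls back to the default Lucene `LowerCaseFilter` described above. As a minimal sketch of checking that default behavior, the built-in `lowercase` filter can be passed directly to the `_analyze` API without creating an index (the request below assumes only the built-in `standard` tokenizer and the built-in `lowercase` filter):

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase" ],
  "text": "OpenSearch FILTERS Tokens"
}
```
{% include copy-curl.html %}

If the default filter behaves as documented, the response should contain the lowercased tokens `opensearch`, `filters`, and `tokens`.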