SearchingBySkuDetailled
The page Searching by SKU in the User's Guide section presents what you can expect when searching a product by its SKU with Elasticsuite, in terms of results and limitations.
This page details the technical aspects behind the scenes in case you plan to tweak Elasticsuite behavior to better suit your SKU searching needs.
Warning: this is not a recommended read unless you are already somewhat familiar with how ElasticSearch analysis works, particularly with how to build custom analyzers using tokenizers and token filters, and with how this is done in Elasticsuite.
The behaviors described in the Searching by SKU wiki page rely mostly on the mapping of the following product index fields:
- sku (the ElasticSearch field corresponding to the sku attribute)
- search (which collects the text data of all indexed product attributes and is the default target search field when performing an exact search)
- spelling (which collects the text data of all indexed attributes with spellcheck enabled and is the default target search field when performing a fuzzy search)
It is not possible to cover all usages and situations in the context of a single chapter of documentation, so we recommend reading this page with a browser tab opened on a Cerebro instance connected to your ElasticSearch instance.
This will allow you to test the behavior of the different analyzers against your own SKU and catalog data and search queries in the analysis screen of Cerebro, which provides "analyze by field type" and "analyze by analyzer" utilities.
You can also of course directly use the analyze API of ElasticSearch.
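For instance, the request below runs a sample SKU through the analyzer mapped on the sku field of the product index. This is a minimal sketch of the analyze API usage, assuming the magento2_default_catalog_product index name used in the Luma based examples later in this page; adapt the index, field and text to your own setup.
# Analyze a sample SKU with the analyzer mapped on the "sku" field of the product index.
curl -XPOST http://127.0.0.1:9200/magento2_default_catalog_product/_analyze -H 'Content-Type: application/json' -d'{
  "field": "sku",
  "text": "24-MB-03"
}
'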
If you are already familiar with custom analyzers, tokenizers and token filters on one hand, and with partial text matching in ElasticSearch on the other hand, you might expect to read about the edge n-gram tokenizer and token filter in this page. This will not be the case.
These mechanisms were used in our Magento 1 smile/elasticsearch module in the context of the autocomplete search. In ElasticSuite, they are not used anymore, both for performance and index size reasons, and for consistency between autocomplete and fulltext search results.
- Mappings
- Querying
- Improving partial SKU search on your site
This is the current definition of the sku field in the elasticsuite_indices.xml file:
<field name="sku" type="text">
<isSearchable>1</isSearchable>
<isUsedForSortBy>1</isUsedForSortBy>
<isUsedInSpellcheck>1</isUsedInSpellcheck>
<defaultSearchAnalyzer>reference</defaultSearchAnalyzer>
</field>
The resulting ElasticSearch mapping of the sku field is the following:
"sku": {
"copy_to": [
"search",
"spelling"
],
"analyzer": "reference",
"type": "text",
"fields": {
"shingle": {
"analyzer": "shingle",
"type": "text"
},
"sortable": {
"fielddata": true,
"analyzer": "sortable",
"type": "text"
},
"whitespace": {
"analyzer": "whitespace",
"type": "text"
}
}
},
The copy_to instruction ensures that any data indexed into the sku field is also copied into the general fields search and spelling.
The base sku field analyzer, named reference, is declared in the XML configuration file elasticsuite_analysis.xml:
<analyzer name="reference" tokenizer="standard" language="default">
<filters>
<filter ref="ascii_folding" />
<filter ref="trim" />
<filter ref="reference_word_delimiter" />
<filter ref="lowercase" />
<filter ref="elision" />
<filter ref="reference_shingle" />
</filters>
<char_filters>
<char_filter ref="html_strip" />
</char_filters>
</analyzer>
The three important settings of this analyzer are its standard tokenizer and the two token filters reference_word_delimiter and reference_shingle.
The standard tokenizer splits a search query string or a string to index into a stream of tokens (ie distinct words) by detecting any non-alphanumerical character.
So for instance, the sku 24-MB-03 is split into three tokens, 24, MB and 03, at respective positions 0, 1 and 2 in the tokens stream. On the other hand, a sku 24MB03 would not be split and would constitute a single token.
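If you want to check this behavior against your own SKUs, a transient analysis request using only the standard tokenizer can be used (plain ElasticSearch analyze API, nothing Elasticsuite specific; sample values to adapt):
# Tokenizer-only analysis: 24-MB-03 should come out as the three tokens 24, MB and 03,
# while 24MB03 should remain a single token.
curl -XPOST http://127.0.0.1:9200/_analyze -H 'Content-Type: application/json' -d'{
  "tokenizer": "standard",
  "text": ["24-MB-03", "24MB03"]
}
'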
After the tokenizer, the different token filters are applied sequentially, amongst them the reference_word_delimiter.
It is a word_delimiter type of token filter, which is used to split and/or regroup (catenate) existing tokens, according to its configuration.
Our reference_word_delimiter has the following configuration:
<filter name="reference_word_delimiter" type="word_delimiter" language="default">
<generate_word_parts>true</generate_word_parts>
<catenate_words>false</catenate_words>
<catenate_numbers>false</catenate_numbers>
<catenate_all>false</catenate_all>
<split_on_case_change>true</split_on_case_change>
<split_on_numerics>true</split_on_numerics>
<preserve_original>false</preserve_original>
</filter>
which instructs:
- to generate word parts (generate_word_parts = true) based on existing intra-word delimiters (all non alpha-numeric characters)
  - so a single token Star-Wars would be split into two tokens Star + Wars
  - note that this setting has no effect in the context of the reference analyzer because the standard tokenizer has already done this job
- not to catenate words (catenate_words = false) inside tokens
  - so a single token made of two words Star Wars (or Star-wars) would not be altered to hold StarWars
- not to catenate numbers (catenate_numbers = false) inside tokens
  - so a single token containing 24 03 (or 24-03) would not be altered to hold 2403
- not to catenate all (catenate_all = false)
  - so a single token containing 24 MB 03 (or 24-MB-03) would not be altered to hold 24MB03
- to split tokens when the case changes (split_on_case_change = true)
  - a single StarWars token would generate two tokens: Star + Wars
  - note that this setting has no effect in the context of the reference analyzer because the previous ascii_folding token filter would have transformed StarWars into starwars
- to split tokens on numerics inside a token (split_on_numerics = true)
  - a single MB03 token would generate two tokens, MB + 03
  - a single 24MB0 token would generate three tokens, 24 + MB + 0
- not to preserve the original token (preserve_original = false)
  - so for instance, after having split a position 0 token StarWars into Star + Wars at respective positions 0 and 1, StarWars is not kept in the tokens stream
As it stands, a sku of 24-MB-03 would not be impacted by this token filter because when it reaches it, it has already been separated into 3 separate tokens by the tokenizer and, as the catenate_ parameters are set to false, those 3 tokens are not regrouped into a new additional single token.
On the other hand:
- a sku of 24MB03, left unchanged by the tokenizer, would be split into 3 separate tokens (24, MB and 03) because split_on_numerics is set to true
- a sku of 24-MB03, already split into 2 separate tokens by the tokenizer (24 and MB03), would then again be split into 3 separate tokens (24, MB and 03) because of split_on_numerics
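This splitting behavior can be previewed with a transient analysis request combining the standard tokenizer with an inline word_delimiter filter reproducing the reference_word_delimiter settings. This is a simplified sketch of the analysis chain (the other filters are left out):
# Standard tokenizer + inline word_delimiter configured like reference_word_delimiter:
# both 24MB03 and 24-MB03 should end up as the three tokens 24, MB and 03.
curl -XPOST http://127.0.0.1:9200/_analyze -H 'Content-Type: application/json' -d'{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "word_delimiter",
      "generate_word_parts": true,
      "catenate_words": false,
      "catenate_numbers": false,
      "catenate_all": false,
      "split_on_case_change": true,
      "split_on_numerics": true,
      "preserve_original": false
    }
  ],
  "text": ["24MB03", "24-MB03"]
}
'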
The reference_shingle filter, applied after reference_word_delimiter, builds shingles, that is n-word tokens, out of a token stream.
Basically, its role is to combine existing isolated tokens into new tokens.
It has the following XML configuration:
<filter name="reference_shingle" type="shingle" language="default">
<min_shingle_size>2</min_shingle_size>
<max_shingle_size>10</max_shingle_size>
<output_unigrams>true</output_unigrams>
<token_separator></token_separator>
</filter>
which implies that:
- a shingle/new token must be made out of at minimum 2 tokens (min_shingle_size = 2)
- a shingle/new token cannot be made out of more than 10 tokens (max_shingle_size = 10)
  - this means a shingle can be made out of between 2 to 10 tokens, with the filter generating all the possible combinations
- the original "single word" tokens (unigrams) are to be kept in the tokens stream (output_unigrams = true)
- when building a shingle, the existing tokens will be glued without a separator character (token_separator = [empty])
For instance, considering a token stream of 24 + MB + 03 after reference_word_delimiter, the shingles generated will be 24MB, MB03 and 24MB03.
As the original unigram tokens are to be kept, the complete output would be the following, with concurrent token positioning:
- 24 + MB + 03
- 24 + MB03
- 24MB + 03
- 24MB03
The intent of this token filter is to enhance the accuracy of exact and partial sku matching.
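Since the reference analyzer is registered in the index settings, the complete token stream (word delimiting and shingling combined) can also be inspected with an "analyze by analyzer" request, for instance:
# Analyze by analyzer: run the reference analyzer of the product index on a sample SKU.
# For 24-MB-03, the output should contain the unigrams and shingles listed above (lowercased).
curl -XPOST http://127.0.0.1:9200/magento2_default_catalog_product/_analyze -H 'Content-Type: application/json' -d'{
  "analyzer": "reference",
  "text": "24-MB-03"
}
'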
Any data indexed into sku is copied to the search field, which uses a slightly different custom analyzer, named standard.
"search": {
"analyzer": "standard",
"type": "text",
"fields": {
"shingle": {
"analyzer": "shingle",
"type": "text"
},
"whitespace": {
"analyzer": "whitespace",
"type": "text"
}
}
}
<analyzer name="standard" tokenizer="standard" language="default">
<filters>
<filter ref="ascii_folding" />
<filter ref="trim" />
<filter ref="word_delimiter" />
<filter ref="lowercase" />
<filter ref="elision" />
<filter ref="standard" />
</filters>
<char_filters>
<char_filter ref="html_strip" />
</char_filters>
</analyzer>
The standard analyzer uses the same standard tokenizer as the reference analyzer, but with a different word delimiter, aptly named word_delimiter.
It also uses a token filter named standard, which will be either a noop token filter or, more probably, a language stemmer, according to your store configuration.
The word_delimiter definition below implies a broad spectrum of changes, generating as many tokens as possible (without relying on a later stage shingle token filter).
<filter name="word_delimiter" type="word_delimiter" language="default">
<generate_word_parts>true</generate_word_parts>
<catenate_words>true</catenate_words>
<catenate_numbers>true</catenate_numbers>
<catenate_all>true</catenate_all>
<split_on_case_change>true</split_on_case_change>
<split_on_numerics>true</split_on_numerics>
<preserve_original>true</preserve_original>
</filter>
Given a 24-MB03 sku, its output will be:
- 24 + MB03 (due to preserve_original = true)
- 24 + MB + 03
- 24 + MB03
- 24MB03
The typology of the standard token filter changes according to the language associated with the store (via the store locale) of the data being indexed.
By default, it is a "pass through" token filter which does nothing.
<filter name="standard" type="standard" language="default" />
For any language supported by ElasticSearch, it is a stemmer token filter configured for that language.
For instance, for English and French, the definitions are the following:
<filter name="standard" type="stemmer" language="en">
<language>english</language>
</filter>
...
<filter name="standard" type="stemmer" language="fr">
<language>french</language>
</filter>
On a typical supported SKU format, those stemmers do not have any impact.
Any data indexed into sku is also copied to the spelling field, which uses the same standard analyzer as search.
"spelling": {
"analyzer": "standard",
"type": "text",
"fields": {
"phonetic": {
"analyzer": "phonetic",
"type": "text"
},
"shingle": {
"analyzer": "shingle",
"type": "text"
},
"whitespace": {
"analyzer": "whitespace",
"type": "text"
}
}
},
So any data indexed into the spelling base field will be the same as the one indexed into the search base field.
This is also true:
- for the subfields sku.whitespace, search.whitespace and spelling.whitespace, which use a dedicated analyzer named whitespace
- for the subfields sku.shingle, search.shingle and spelling.shingle, which use a dedicated analyzer named shingle
The whitespace analyzer is very similar to the standard analyzer, the only difference being that it does not contain a stemmer component.
<analyzer name="whitespace" tokenizer="standard" language="default">
<filters>
<filter ref="ascii_folding" />
<filter ref="trim" />
<filter ref="word_delimiter" />
<filter ref="lowercase" />
<filter ref="elision" />
</filters>
<char_filters>
<char_filter ref="html_strip" />
</char_filters>
</analyzer>
Given a 24-MB03 sku, its output will be the same as standard:
- 24 + MB03 (due to preserve_original = true of word_delimiter)
- 24 + MB + 03
- 24 + MB03
- 24MB03
The shingle analyzer used for sku.shingle, search.shingle and spelling.shingle is a notable variation of the whitespace analyzer.
Not only does it contain an additional shingle token filter, it also uses a whitespace tokenizer, which impacts what its word_delimiter token filter works with.
<analyzer name="shingle" tokenizer="whitespace" language="default">
<filters>
<filter ref="ascii_folding" />
<filter ref="trim" />
<filter ref="word_delimiter" />
<filter ref="lowercase" />
<filter ref="elision" />
<filter ref="shingle" />
</filters>
<char_filters>
<char_filter ref="html_strip" />
</char_filters>
</analyzer>
The whitespace tokenizer splits a search query string or a string to index into a stream of tokens along all the whitespace characters.
This means that, contrary to the standard tokenizer, it will not split along dashes or punctuation characters.
For instance, given a 24-MB03 sku, it will generate a single token 24-MB03.
Note that this token will be split afterwards by the word_delimiter token filter as seen previously.
- 24-MB03 (due to preserve_original = true)
- 24 + MB + 03
- 24MB03
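The single-token behavior of the whitespace tokenizer described above can be checked on its own with a transient analysis request (no Elasticsuite filters involved):
# The whitespace tokenizer should keep 24-MB03 as a single token,
# where the standard tokenizer would have split it on the dash.
curl -XPOST http://127.0.0.1:9200/_analyze -H 'Content-Type: application/json' -d'{
  "tokenizer": "whitespace",
  "text": "24-MB03"
}
'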
The shingle token filter used in the shingle analyzer varies slightly from the reference_shingle token filter used in the sku base field: it uses the default value for token_separator, that is a space (" ") character, and a max_shingle_size of 2 instead of 10.
<filter name="shingle" type="shingle" language="default">
<min_shingle_size>2</min_shingle_size>
<max_shingle_size>2</max_shingle_size>
<output_unigrams>true</output_unigrams>
</filter>
At indexing time, given an expected SKU typology, there are only minor differences between the data indexed by sku, search and spelling on one hand, and sku.whitespace, search.whitespace and spelling.whitespace on the other hand.
The main issue is that search and spelling, being default search fields, do not contain only the SKU of a given indexed product, but also the rest of its text-based indexed attributes, which affects querying time behavior.
When a search is made, before any actual search request is sent to ElasticSearch, a pre-analysis request is made using the termVectors API of ElasticSearch.
This is to determine what kind of search request will best fit the user provided search query, according to the presence of its term(s) in the index.
The termVectors request targets the spelling and spelling.whitespace fields.
The request is initiated by \Smile\ElasticsuiteCore\Search\Adapter\Elasticsuite\Spellchecker::getSpellingType.
For instance, the termVectors request for a search query of "24-MB03" on the default store of a Luma install would be:
curl -XPOST http://127.0.0.1:9200/magento2_default_catalog_product/product/_termvectors -d'{
"term_statistics": true,
"fields": [
"spelling",
"spelling.whitespace"
],
"doc": {
"spelling": "24-MB03"
}
}
'
The term statistics obtained are used to determine how many of those terms are either:
- exactly present in the index (ie present in spelling.whitespace which uses the whitespace analyzer)
- or present in the index (ie present in spelling which uses the standard analyzer)
- or present in the index but to be considered as stop words due to their (relative) high frequency in the index
- or absent
This directly determines the "spelling type" of the user search query, which impacts the structure of the actual ElasticSearch search query [^1] to build to ensure the best relevancy:
- SPELLING_TYPE_EXACT or SPELLING_TYPE_MOST_EXACT will lead to what can be described as an "exact match" query
- SPELLING_TYPE_PURE_STOPWORDS to a "stop words" query
- SPELLING_TYPE_MOST_FUZZY and SPELLING_TYPE_FUZZY to a "spellchecked" query
[^1]: See \Smile\ElasticsuiteCore\Search\Request\Query\Fulltext\QueryBuilder::create().
For instance on Luma, when searching "24-MB03", the 3 generated terms/tokens 24, mb and 03 are considered as exactly present, which is the expected behavior.
However, when searching for "24-MB0", the 3 generated terms/tokens 24, mb and 0 are also considered as exactly present which is counter-intuitive as there are no products with a SKU capable of generating those exact tokens.
The token 0 actually exists in spelling and spelling.whitespace due to other indexed attributes.
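This can be checked by reusing the termVectors request shown above with the truncated search query and looking at the statistics (doc_freq, ttf) returned for each term, the 0 term included:
# Same pre-analysis request as above, but for the truncated 24-MB0 search query:
# on a Luma catalog, the statistics of the 0 term show it exists in the index on its own.
curl -XPOST http://127.0.0.1:9200/magento2_default_catalog_product/product/_termvectors -H 'Content-Type: application/json' -d'{
  "term_statistics": true,
  "fields": [
    "spelling",
    "spelling.whitespace"
  ],
  "doc": {
    "spelling": "24-MB0"
  }
}
'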
An exact match query consists of a multi_match query on weighted search fields for scoring with an additional multi_match query filter to enforce the exact matches.
The multi_match query has a hardcoded minimum_should_match of 1 while the multi_match query filter uses the configuration defined minimum_should_match.
Note that the cutoff_frequency and tie_breaker parameter values, as well as the actual list of weighted search fields, also depend on your global and attribute related relevance configuration.
"must": {
"bool": {
"filter": {
"multi_match": {
"query": "24-MB03",
"fields": [
"search^1",
"sku^1"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15
}
},
"must": {
"multi_match": {
"query": "24-MB03",
"fields": [
"search^1",
"name^5",
"sku^6",
"search.whitespace^10",
"name.whitespace^50",
"sku.whitespace^60",
"name.sortable^100",
"sku.sortable^120"
],
"minimum_should_match": 1,
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15
}
},
"boost": 1
}
}
The default minimum_should_match of 100% means every term/token must be present in the document. This does not pose a problem if an exact SKU is provided.
On Luma, given a search query of 24-MB0, however, the isolated token 0, though reported as present in documents, cannot be actually found alongside the other tokens generated by the analysis in any document. The catalog search results page will not display any results.
This is the limit of the termVectors approach when the configured minimum_should_match is set at 100%.
Reducing in configuration the minimum_should_match down to 90% or 99% ensures a 24-MB0 search query matches all the 24-MB0X SKUs, but it will also match any combination of two tokens in search coming from any indexed attribute (for instance the product description).
A stop words query is a multi_match query using exclusively the whitespace analyzer for a single word search query, or the shingle analyzer for a multiple terms search query.
Below is a fictitious stop words query for 24-MB0:
"must": {
"multi_match": {
"query": "24-MB0",
"fields": [
"search.whitespace^1",
"name.whitespace^5",
"sku.whitespace^6"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields"
}
},
The tie_breaker is the one defined in configuration, but the minimum_should_match at 100% is once again hardcoded.
Note that the query does not have a cutoff_frequency parameter, which is coherent with its aim.
According to the relevance configuration, a spellchecked query may use both fuzziness and phonetic matching.
Below is an example of a spellchecked query for 24-MB0 if the spelling type had been identified as SPELLING_TYPE_FUZZY.
See SPELLING_TYPE_FUZZY search request structure:
"query": {
"bool": {
"filter": { ... },
"must": {
"bool": {
"must": [],
"must_not": [],
"should": [
{
"multi_match": {
"query": "24-MB0",
"fields": [
"spelling.whitespace^10",
"name.whitespace^50",
"sku.whitespace^60"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15,
"fuzziness": "AUTO",
"prefix_length": 1,
"max_expansions": 10
}
},
{
"multi_match": {
"query": "24-MB0",
"fields": [
"spelling.phonetic^1"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15
}
}
],
"minimum_should_match": 1,
"boost": 1
}
},
"boost": 1
}
}
An additional bool query, which uses the whitespace analyzer, is applied if the computed spelling type is SPELLING_TYPE_MOST_FUZZY.
See SPELLING_TYPE_MOST_FUZZY request structure:
"query": {
"bool": {
"filter": { ... },
"must": {
"bool": {
"must": [
{
"multi_match": {
"query": "24-MB0",
"fields": [
"search^1",
"name^5",
"sku^6",
"search.whitespace^10",
"name.whitespace^50",
"sku.whitespace^60",
"name.sortable^100",
"sku.sortable^120"
],
"minimum_should_match": 1,
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15
}
}
],
"must_not": [],
"should": [
{
"multi_match": {
"query": "24-MB0",
"fields": [
"spelling.whitespace^10",
"name.whitespace^50",
"sku.whitespace^60"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15,
"fuzziness": "AUTO",
"prefix_length": 1,
"max_expansions": 10
}
},
{
"multi_match": {
"query": "24-MB0",
"fields": [
"spelling.phonetic^1"
],
"minimum_should_match": "100%",
"tie_breaker": 1,
"boost": 1,
"type": "best_fields",
"cutoff_frequency": 0.15
}
}
],
"minimum_should_match": 1,
"boost": 1
}
},
"boost": 1
}
}
When searching for a string that is an incomplete prefix of a SKU, neither the fuzziness nor the phonetic matching are in theory of any help:
- the fuzziness helps match terms of the same length as the one provided that are within an edit distance (character change or swap)
- phonetic matching helps match terms that sound the same as the one provided, but are written differently
But in some conditions, depending on the actual phonetic filter used (which depends on your store language), phonetic matching can occur when tokens are dropped by the filter.
For instance, on the Luma default catalog, the phonetic filter used by spelling.phonetic is metaphone.
If provided with 24-MB03, its output will be the following tokens:
- 24 + MB (twice)
- 24 + M + 03
If provided with 24-MB0:
- 24 + MB (twice)
- 24 + M + 0
In that context, considering only the tokens 24 + MB, the minimum_should_match of 100% is satisfied: 24-MB0 is considered a perfect match for 24-MB03.
24-MB0 would actually be considered an exact match for all "24-MB0X" SKUs.
This is highly theoretical since the termVectors pre-analysis phase would need to classify the spelling type of 24-MB0 as SPELLING_TYPE_FUZZY.
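The phonetic token streams above can be checked against your own index with an "analyze by field" request targeting spelling.phonetic (the actual output depends on the phonetic filter configured for your store language):
# Analyze by field: inspect the phonetic encoding of a full and a truncated SKU.
# On a default Luma catalog (metaphone), both should share the 24 + MB tokens listed above.
curl -XPOST http://127.0.0.1:9200/magento2_default_catalog_product/_analyze -H 'Content-Type: application/json' -d'{
  "field": "spelling.phonetic",
  "text": ["24-MB03", "24-MB0"]
}
'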
Reducing the configured minimum_should_match from its default value of 100% is a short term solution that needs to be weighed against its impact on the general fulltext search.
Since the configured minimum_should_match also impacts stop words and spellchecked queries, more search results per page are to be expected: the attributes' search weights and/or search optimizers would need to be carefully tuned to ensure the most relevant products, business wise, are still amongst the first few lines of results.
If your SKU typology involves a lot of letters/numbers/letters sequences or small isolated letter parts, limiting the splitting features of the word delimiter token filters word_delimiter and reference_word_delimiter might be an option.
This will limit the termVectors false positive problem that can be identified on Luma, which is inherent to generating a lot of small tokens from the SKUs.
For instance, in this reported issue, the user decided to:
- disable split_on_numerics, generate_word_parts and preserve_original in reference_word_delimiter
- disable split_on_numerics in word_delimiter
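Before altering the analyzers of your own index, the effect of such a change on the word delimiting stage can be previewed with a transient analysis request. Below is a simplified sketch showing only the split_on_numerics change applied to the reference_word_delimiter settings (the rest of the analysis chain is left out):
# Standard tokenizer + reference_word_delimiter-like filter with split_on_numerics disabled:
# 24MB03 and MB03 should no longer be split into smaller tokens.
curl -XPOST http://127.0.0.1:9200/_analyze -H 'Content-Type: application/json' -d'{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "word_delimiter",
      "generate_word_parts": true,
      "catenate_words": false,
      "catenate_numbers": false,
      "catenate_all": false,
      "split_on_case_change": true,
      "split_on_numerics": false,
      "preserve_original": false
    }
  ],
  "text": ["24MB03", "24-MB03"]
}
'
The same kind of request can be run against your own SKU samples to check that the tokens you rely on for exact matching are still produced.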
Then again, such a solution might have an impact on the classic fulltext search, for instance if your product descriptions contain size/dimensions information. It might also rely too much on the "metaphone quirks" described above.
The autocomplete search of ElasticSuite relies upon any past similar search queries which did in fact return results and were not flagged as spellchecked.
For instance, on Luma, if any user previously searched for 24-MB03 and was redirected to the corresponding "Crown Summit Backpack" product (because one result redirect is enabled), that search query would have been recorded in the search_query table and flagged as not spellchecked.
The first subsequent autocomplete search on 24-MB0 will
- display the 24-MB03 search term in the autocomplete search terms block
- display the "Crown Summit Backpack" (24-MB03 product) in the autocomplete search products block since the autocomplete search request sent to ElasticSearch would have actually searched for 24-MB03 instead of 24-MB0
The ElasticSuite autocomplete search products block actually always performs a search for all the previously validated (displayable as suggestions, with results and not leading to a spellchecked query) similar (based on a MySQL LIKE query) search terms, if some exist. If none exists, the autocomplete search request will use the user provided search terms as is.
If the user goes on and performs a fulltext search with 24-MB0, it will return 0 results but the recommendations block provided by magento/module-advanced-search will suggest the 24-MB03 related search term.
If the user goes on and performs a fulltext search with 24-MB, it will return the expected bag products but the magento/module-advanced-search recommendations block will also suggest the 24-MB03 search term.
If your site activity is essentially B2B, where users traditionally search by full or partial SKUs, and/or a lot of your search traffic consists of partial SKU searches leading to 0 results, then depending on the size of your catalog it is possible to leverage the behaviors described in the two previous sections.
A quick-and-dirty solution could be to pre-inject into the search_query database table:
- either all your SKUs
- or complete SKU prefixes that lead to actual search results
For instance, on Luma, considering the 24-MB* bag/backpack products, it would consist of inserting:
- either "24-MB01", "24-MB02", ..., "24-MB06" as separate pre-searched search terms
- or only the "24-MB" SKU prefix
This is of course probably not an option if your SKUs are in EAN13 or UPC-A format, or if your SKU prefixes are shared amongst completely different products.