issue (dict) | pr (dict) | pr_details (dict) |
---|---|---|
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): `6.2.2, Build: 10b1edd/2018-02-16T19:01:30.685723Z`\r\n\r\n**Plugins installed**: `[\"analysis-icu\"]`\r\n\r\n**JVM version** (`java -version`): `9.0.4`\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Mac OS Sierra 10.12.6\r\n`16.7.0 Darwin Kernel Version 16.7.0: Thu Jan 11 22:59:40 PST 2018; root:xnu-3789.73.8~1/RELEASE_X86_64 x86_64`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen doing a query_string query with the \"stop\" analyzer and wildcard analysis enabled, if the query contains stop words and wildcards (on the stop words or on other query terms), the expected behavior (at least, the behavior in Elasticsearch 5.5.0) is for the stop word to be removed from the token stream; the actual behavior is that it gets converted to MatchNoDocsQuery.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Start an elasticsearch process: `bin/elasticsearch`\r\n 2. Create a simple index:\r\n```\r\ncurl -X PUT \"localhost:9200/stop-wildcard-test\" -H \"Content-Type: application/json\" -d '{\r\n\t\"mappings\": {\r\n\t\t\"doc\": {\r\n\t\t\t\"properties\": {\r\n\t\t\t\t\"content\": { \"type\": \"text\" }\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n}'\r\n```\r\n 4. 
Request validation for `\"on the run*\"`: \r\n```\r\ncurl -X POST \"localhost:9200/stop-wildcard-test/_validate/query?explain&pretty\" -H \"Content-Type: application/json\" -d '{\r\n \"query\": {\r\n \"query_string\": {\r\n \"query\": \"on the run*\",\r\n \"analyzer\": \"stop\",\r\n \"analyze_wildcard\": true\r\n }\r\n }\r\n}'\r\n```\r\nResponse: \r\n```\r\n{\r\n \"valid\" : true,\r\n \"_shards\" : {\r\n \"total\" : 1,\r\n \"successful\" : 1,\r\n \"failed\" : 0\r\n },\r\n \"explanations\" : [\r\n {\r\n \"index\" : \"stop-wildcard-test\",\r\n \"valid\" : true,\r\n \"explanation\" : \"MatchNoDocsQuery(\\\"analysis was empty for content:on\\\") content:run*\"\r\n }\r\n ]\r\n}\r\n```\r\nI also tested the following queries:\r\n - `on the run` produces the expected `content:run`, without `MatchNoDocsQuery`\r\n - `on* the run` produces `MatchNoDocsQuery(\\\"analysis was empty for content:on\\\") content:run`\r\nAlso tested with wildcard analysis off:\r\n- `on the run*` produces `MatchNoDocsQuery(\\\"analysis was empty for content:on\\\") content:run*`\r\n- ` on the run` produces the expected `content:run`\r\n- `on* the run` produces `content:on* content:run` (unlike with wildcard analysis on)\r\n\r\nFor comparison, here is the response to the same query from a fresh installation of 5.5.0 (+ analysis_icu for parity, but probably not relevant here?):\r\n```\r\n{\r\n \"valid\": true,\r\n \"_shards\": {\r\n \"total\": 1,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"explanations\": [\r\n {\r\n \"index\": \"stop-wildcard-test\",\r\n \"valid\": true,\r\n \"explanation\": \"_all:run\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\nNot relevant.",
"comments": [
{
"body": "cc @elastic/es-search-aggs ",
"created_at": "2018-03-01T15:13:53Z"
}
],
"number": 28856,
"title": "MatchNoDocsQuery from stop words with wildcards in query_string"
} | {
"body": "This change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the part\r\nthat contain the stop word. For instance a query like \"the AND fox\" should ignore \"the\" if it is considered as a stop word instead of\r\nadding a match_no_docs query.\r\nThis change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such case if `analyze_wildcard` is true and `the`\r\nis considered as a stop word this part of the query is rewritten into a match_no_docs query. Since it's a prefix query this change forces the prefix query\r\non `the` even if it is removed from the analysis.\r\n\r\nFixes #28855\r\nFixes #28856",
"number": 28871,
"review_comments": [],
"title": "Fix (simple)_query_string to ignore removed terms"
} | {
"commits": [
{
"message": "Fix query_string and simple_query_string to ignore removed terms\n\nThis change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the part\nthat contain the stop word. For instance a query like \"the AND fox\" should ignore \"the\" if it is considered as a stop word instead of\nadding a match_no_docs query.\nThis change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such case if `analyze_wildcard` is true and `the`\nis considered as a stop word this part of the query is rewritten into a match_no_docs query. Since it's a prefix query this change forces the prefix query\non `the` even if it is removed from the analysis.\n\nFixes #28855\nFixes #28856"
},
{
"message": "add comment regarding the usage of ZeroTermsQuery.NULL"
}
],
"files": [
{
"diff": "@@ -102,7 +102,10 @@ public void writeTo(StreamOutput out) throws IOException {\n \n public enum ZeroTermsQuery implements Writeable {\n NONE(0),\n- ALL(1);\n+ ALL(1),\n+ // this is used internally to make sure that query_string and simple_query_string\n+ // ignores query part that removes all tokens.\n+ NULL(2);\n \n private final int ordinal;\n \n@@ -312,10 +315,16 @@ protected final Query termQuery(MappedFieldType fieldType, BytesRef value, boole\n }\n \n protected Query zeroTermsQuery() {\n- if (zeroTermsQuery == DEFAULT_ZERO_TERMS_QUERY) {\n- return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present.\");\n+ switch (zeroTermsQuery) {\n+ case NULL:\n+ return null;\n+ case NONE:\n+ return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present\");\n+ case ALL:\n+ return Queries.newMatchAllQuery();\n+ default:\n+ throw new IllegalStateException(\"unknown zeroTermsQuery \" + zeroTermsQuery);\n }\n- return Queries.newMatchAllQuery();\n }\n \n private class MatchQueryBuilder extends QueryBuilder {",
"filename": "server/src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -147,6 +147,7 @@ private QueryStringQueryParser(QueryShardContext context, String defaultField,\n this.context = context;\n this.fieldsAndWeights = Collections.unmodifiableMap(fieldsAndWeights);\n this.queryBuilder = new MultiMatchQuery(context);\n+ queryBuilder.setZeroTermsQuery(MatchQuery.ZeroTermsQuery.NULL);\n queryBuilder.setLenient(lenient);\n this.lenient = lenient;\n }\n@@ -343,7 +344,6 @@ protected Query getFieldQuery(String field, String queryText, int slop) throws P\n if (fields.isEmpty()) {\n return newUnmappedFieldQuery(field);\n }\n- final Query query;\n Analyzer oldAnalyzer = queryBuilder.analyzer;\n int oldSlop = queryBuilder.phraseSlop;\n try {\n@@ -353,7 +353,7 @@ protected Query getFieldQuery(String field, String queryText, int slop) throws P\n queryBuilder.setAnalyzer(forceAnalyzer);\n }\n queryBuilder.setPhraseSlop(slop);\n- query = queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, fields, queryText, null);\n+ Query query = queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, fields, queryText, null);\n return applySlop(query, slop);\n } catch (IOException e) {\n throw new ParseException(e.getMessage());\n@@ -555,7 +555,7 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n }\n \n if (tlist.size() == 0) {\n- return new MatchNoDocsQuery(\"analysis was empty for \" + field + \":\" + termStr);\n+ return super.getPrefixQuery(field, termStr);\n }\n \n if (tlist.size() == 1 && tlist.get(0).size() == 1) {\n@@ -763,7 +763,7 @@ private PhraseQuery addSlopToPhrase(PhraseQuery query, int slop) {\n @Override\n public Query parse(String query) throws ParseException {\n if (query.trim().isEmpty()) {\n- return queryBuilder.zeroTermsQuery();\n+ return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present\");\n }\n return super.parse(query);\n }",
"filename": "server/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -74,6 +74,7 @@ public SimpleQueryStringQueryParser(Analyzer analyzer, Map<String, Float> weight\n this.queryBuilder = new MultiMatchQuery(context);\n this.queryBuilder.setAutoGenerateSynonymsPhraseQuery(settings.autoGenerateSynonymsPhraseQuery());\n this.queryBuilder.setLenient(settings.lenient());\n+ this.queryBuilder.setZeroTermsQuery(MatchQuery.ZeroTermsQuery.NULL);\n if (analyzer != null) {\n this.queryBuilder.setAnalyzer(analyzer);\n }",
"filename": "server/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -111,7 +111,7 @@ protected MatchQueryBuilder doCreateTestQueryBuilder() {\n }\n \n if (randomBoolean()) {\n- matchQuery.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.values()));\n+ matchQuery.zeroTermsQuery(randomFrom(ZeroTermsQuery.ALL, ZeroTermsQuery.NONE));\n }\n \n if (randomBoolean()) {",
"filename": "server/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -129,7 +129,7 @@ protected MultiMatchQueryBuilder doCreateTestQueryBuilder() {\n query.cutoffFrequency((float) 10 / randomIntBetween(1, 100));\n }\n if (randomBoolean()) {\n- query.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.values()));\n+ query.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.NONE, MatchQuery.ZeroTermsQuery.ALL));\n }\n if (randomBoolean()) {\n query.autoGenerateSynonymsPhraseQuery(randomBoolean());",
"filename": "server/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -1052,6 +1052,33 @@ public void testToFuzzyQuery() throws Exception {\n assertEquals(expected, query);\n }\n \n+ public void testWithStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new QueryStringQueryBuilder(\"the quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n+ public void testWithPrefixStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new QueryStringQueryBuilder(\"the* quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new PrefixQuery(new Term(STRING_FIELD_NAME, \"the\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) {\n Settings build = Settings.builder().put(oldIndexSettings)\n .put(indexSettings)",
"filename": "server/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -625,6 +625,33 @@ public void testLenientToPrefixQuery() throws Exception {\n assertEquals(expected, query);\n }\n \n+ public void testWithStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new SimpleQueryStringBuilder(\"the quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), BooleanClause.Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n+ public void testWithPrefixStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new SimpleQueryStringBuilder(\"the* quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new PrefixQuery(new Term(STRING_FIELD_NAME, \"the\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), BooleanClause.Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) {\n Settings build = Settings.builder().put(oldIndexSettings)\n .put(indexSettings)",
"filename": "server/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java",
"status": "modified"
}
]
} |
{
"body": "ES 6.2.1\r\nI've noticed that the simple-query-string query type (https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html) doesn't seem to handle stopwords at all at analysis time, best exemplified with \"default_operator: AND\".\r\n\r\nConsider the below - We create an index and change the default analyzer to use English stopwords:\r\n\r\n```\r\nPUT /simp_idx\r\n{\r\n \"mappings\": {\r\n \"my_type\": {\r\n \"properties\": {\r\n \t\t\"field_1\": {\r\n \t\t\t\"type\": \"text\"\r\n \t\t}\r\n }\r\n }\r\n },\r\n \t\"settings\": {\r\n\t\t\"number_of_shards\": 1,\r\n\t\t\"number_of_replicas\": 0,\r\n\t\t\"analysis\": {\r\n\r\n\t\t\t\"filter\": {\r\n\t\t\t\t\"english_stop\": {\r\n\t\t\t\t\t\"type\": \"stop\",\r\n\t\t\t\t\t\"stopwords\": \"_english_\"\r\n\t\t\t\t}\r\n\t\t\t},\r\n\t\t\t\"analyzer\": {\r\n\t\t\t\t\"default\": {\r\n\t\t\t\t\t\"tokenizer\": \"standard\",\r\n\t\t\t\t\t\"filter\": [\r\n\t\t\t\t\t\t\"english_stop\"\r\n\t\t\t\t\t]\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\nAnd then populate it:\r\n\r\n```\r\nPUT /simp_idx/my_type/1\r\n{\r\n \"field_1\": \"place of beauty\"\r\n}\r\nPUT /simp_idx/my_type/2\r\n{\r\n \"field_1\": \"place and beauty\"\r\n}\r\n```\r\n\r\nNow, if we query this with the regular query_string, we get the expected two results:\r\n```\r\nGET /simp_idx/my_type/_search\r\n{\r\n \"query\": {\r\n \"query_string\" : {\r\n \"query\": \"place of\",\r\n \"default_operator\": \"and\"\r\n }\r\n }\r\n}\r\n```\r\n\r\nBut the same query using simple-query-string and the AND operator finds no results:\r\n```\r\nGET /simp_idx/my_type/_search\r\n{\r\n \"query\": {\r\n \"simple_query_string\" : {\r\n \"query\": \"place of\",\r\n\t\t\t\"fields\": [ \"field_1\"],\r\n \"default_operator\": \"and\"\r\n }\r\n }\r\n}\r\n```\r\nRemove the \"of\" from the query and it will work as expected.\r\n\r\nMaybe this is intentional because the SQS is \"simple\", but it's not documented on the SQS page - the only explicitly stated difference is no exception raising. Seems like a bug though, hence the report.",
"comments": [
{
"body": "I can reproduce this, and I don't think this is on purpose. cc @elastic/es-search-aggs ",
"created_at": "2018-03-01T14:44:05Z"
},
{
"body": "I am dealing with this problem and I intend to downgrade ES to a sound release.\r\nAnyone knows which previous version of ES is free of this bug ?",
"created_at": "2018-05-07T15:25:43Z"
},
{
"body": "ES 6.3.1\r\nThis bug is repeated if we have more than one field.\r\nStop words are not cut out in simple_query_string and finds no results.\r\n\r\nFields have the same type (\"text\")",
"created_at": "2018-08-21T04:21:26Z"
}
],
"number": 28855,
"title": "Stop-words not removed during simple-query query-time analysis"
} | {
"body": "This change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the part\r\nthat contain the stop word. For instance a query like \"the AND fox\" should ignore \"the\" if it is considered as a stop word instead of\r\nadding a match_no_docs query.\r\nThis change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such case if `analyze_wildcard` is true and `the`\r\nis considered as a stop word this part of the query is rewritten into a match_no_docs query. Since it's a prefix query this change forces the prefix query\r\non `the` even if it is removed from the analysis.\r\n\r\nFixes #28855\r\nFixes #28856",
"number": 28871,
"review_comments": [],
"title": "Fix (simple)_query_string to ignore removed terms"
} | {
"commits": [
{
"message": "Fix query_string and simple_query_string to ignore removed terms\n\nThis change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the part\nthat contain the stop word. For instance a query like \"the AND fox\" should ignore \"the\" if it is considered as a stop word instead of\nadding a match_no_docs query.\nThis change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such case if `analyze_wildcard` is true and `the`\nis considered as a stop word this part of the query is rewritten into a match_no_docs query. Since it's a prefix query this change forces the prefix query\non `the` even if it is removed from the analysis.\n\nFixes #28855\nFixes #28856"
},
{
"message": "add comment regarding the usage of ZeroTermsQuery.NULL"
}
],
"files": [
{
"diff": "@@ -102,7 +102,10 @@ public void writeTo(StreamOutput out) throws IOException {\n \n public enum ZeroTermsQuery implements Writeable {\n NONE(0),\n- ALL(1);\n+ ALL(1),\n+ // this is used internally to make sure that query_string and simple_query_string\n+ // ignores query part that removes all tokens.\n+ NULL(2);\n \n private final int ordinal;\n \n@@ -312,10 +315,16 @@ protected final Query termQuery(MappedFieldType fieldType, BytesRef value, boole\n }\n \n protected Query zeroTermsQuery() {\n- if (zeroTermsQuery == DEFAULT_ZERO_TERMS_QUERY) {\n- return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present.\");\n+ switch (zeroTermsQuery) {\n+ case NULL:\n+ return null;\n+ case NONE:\n+ return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present\");\n+ case ALL:\n+ return Queries.newMatchAllQuery();\n+ default:\n+ throw new IllegalStateException(\"unknown zeroTermsQuery \" + zeroTermsQuery);\n }\n- return Queries.newMatchAllQuery();\n }\n \n private class MatchQueryBuilder extends QueryBuilder {",
"filename": "server/src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -147,6 +147,7 @@ private QueryStringQueryParser(QueryShardContext context, String defaultField,\n this.context = context;\n this.fieldsAndWeights = Collections.unmodifiableMap(fieldsAndWeights);\n this.queryBuilder = new MultiMatchQuery(context);\n+ queryBuilder.setZeroTermsQuery(MatchQuery.ZeroTermsQuery.NULL);\n queryBuilder.setLenient(lenient);\n this.lenient = lenient;\n }\n@@ -343,7 +344,6 @@ protected Query getFieldQuery(String field, String queryText, int slop) throws P\n if (fields.isEmpty()) {\n return newUnmappedFieldQuery(field);\n }\n- final Query query;\n Analyzer oldAnalyzer = queryBuilder.analyzer;\n int oldSlop = queryBuilder.phraseSlop;\n try {\n@@ -353,7 +353,7 @@ protected Query getFieldQuery(String field, String queryText, int slop) throws P\n queryBuilder.setAnalyzer(forceAnalyzer);\n }\n queryBuilder.setPhraseSlop(slop);\n- query = queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, fields, queryText, null);\n+ Query query = queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, fields, queryText, null);\n return applySlop(query, slop);\n } catch (IOException e) {\n throw new ParseException(e.getMessage());\n@@ -555,7 +555,7 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n }\n \n if (tlist.size() == 0) {\n- return new MatchNoDocsQuery(\"analysis was empty for \" + field + \":\" + termStr);\n+ return super.getPrefixQuery(field, termStr);\n }\n \n if (tlist.size() == 1 && tlist.get(0).size() == 1) {\n@@ -763,7 +763,7 @@ private PhraseQuery addSlopToPhrase(PhraseQuery query, int slop) {\n @Override\n public Query parse(String query) throws ParseException {\n if (query.trim().isEmpty()) {\n- return queryBuilder.zeroTermsQuery();\n+ return Queries.newMatchNoDocsQuery(\"Matching no documents because no terms present\");\n }\n return super.parse(query);\n }",
"filename": "server/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -74,6 +74,7 @@ public SimpleQueryStringQueryParser(Analyzer analyzer, Map<String, Float> weight\n this.queryBuilder = new MultiMatchQuery(context);\n this.queryBuilder.setAutoGenerateSynonymsPhraseQuery(settings.autoGenerateSynonymsPhraseQuery());\n this.queryBuilder.setLenient(settings.lenient());\n+ this.queryBuilder.setZeroTermsQuery(MatchQuery.ZeroTermsQuery.NULL);\n if (analyzer != null) {\n this.queryBuilder.setAnalyzer(analyzer);\n }",
"filename": "server/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -111,7 +111,7 @@ protected MatchQueryBuilder doCreateTestQueryBuilder() {\n }\n \n if (randomBoolean()) {\n- matchQuery.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.values()));\n+ matchQuery.zeroTermsQuery(randomFrom(ZeroTermsQuery.ALL, ZeroTermsQuery.NONE));\n }\n \n if (randomBoolean()) {",
"filename": "server/src/test/java/org/elasticsearch/index/query/MatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -129,7 +129,7 @@ protected MultiMatchQueryBuilder doCreateTestQueryBuilder() {\n query.cutoffFrequency((float) 10 / randomIntBetween(1, 100));\n }\n if (randomBoolean()) {\n- query.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.values()));\n+ query.zeroTermsQuery(randomFrom(MatchQuery.ZeroTermsQuery.NONE, MatchQuery.ZeroTermsQuery.ALL));\n }\n if (randomBoolean()) {\n query.autoGenerateSynonymsPhraseQuery(randomBoolean());",
"filename": "server/src/test/java/org/elasticsearch/index/query/MultiMatchQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -1052,6 +1052,33 @@ public void testToFuzzyQuery() throws Exception {\n assertEquals(expected, query);\n }\n \n+ public void testWithStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new QueryStringQueryBuilder(\"the quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n+ public void testWithPrefixStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new QueryStringQueryBuilder(\"the* quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new PrefixQuery(new Term(STRING_FIELD_NAME, \"the\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) {\n Settings build = Settings.builder().put(oldIndexSettings)\n .put(indexSettings)",
"filename": "server/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -625,6 +625,33 @@ public void testLenientToPrefixQuery() throws Exception {\n assertEquals(expected, query);\n }\n \n+ public void testWithStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new SimpleQueryStringBuilder(\"the quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), BooleanClause.Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n+ public void testWithPrefixStopWords() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ Query query = new SimpleQueryStringBuilder(\"the* quick fox\")\n+ .field(STRING_FIELD_NAME)\n+ .analyzer(\"english\")\n+ .toQuery(createShardContext());\n+ BooleanQuery expected = new BooleanQuery.Builder()\n+ .add(new PrefixQuery(new Term(STRING_FIELD_NAME, \"the\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"quick\")), BooleanClause.Occur.SHOULD)\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"fox\")), BooleanClause.Occur.SHOULD)\n+ .build();\n+ assertEquals(expected, query);\n+ }\n+\n private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) {\n Settings build = Settings.builder().put(oldIndexSettings)\n .put(indexSettings)",
"filename": "server/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java",
"status": "modified"
}
]
} |
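
The two records above share one root cause: an analyzer that strips stop words can return zero tokens for a clause, and the query parsers previously turned that empty analysis into a `MatchNoDocsQuery` instead of dropping the clause (or, for `the*` with `analyze_wildcard`, falling back to a plain prefix query). The following standalone Lucene sketch only illustrates that zero-token condition; the `StopWordAnalysisDemo` class name is made up for this example, and it assumes the `lucene-analyzers-common` dependency that Elasticsearch already ships with.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;

public class StopWordAnalysisDemo {

    // Counts the tokens the analyzer produces for the given clause text.
    static int countTokens(Analyzer analyzer, String text) throws Exception {
        int count = 0;
        try (TokenStream ts = analyzer.tokenStream("content", text)) {
            ts.reset();
            while (ts.incrementToken()) {
                count++;
            }
            ts.end();
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        try (Analyzer english = new EnglishAnalyzer()) {
            // "the" is a stop word: analysis is empty, which the parsers used to map to MatchNoDocsQuery.
            System.out.println("the -> " + countTokens(english, "the")); // prints 0
            // "fox" survives analysis and becomes an ordinary term query.
            System.out.println("fox -> " + countTokens(english, "fox")); // prints 1
        }
    }
}
```

After the fix in the records above, a clause whose analysis is empty is simply dropped from the boolean query (via the internal `ZeroTermsQuery.NULL` mode), and a prefix clause such as `the*` is forced back into a regular prefix query even when its term is removed by analysis.
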
{
"body": "Use a fake similarity map that always returns a value in MetaDataIndexUpgradeService.checkMappingsCompatibility instead of an empty map.\r\n\r\nCloses #25350",
"comments": [
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-10-12T12:52:03Z"
},
{
"body": "Just to be clear, I'm waiting on further input here for both the requested changes.",
"created_at": "2017-10-30T12:49:31Z"
},
{
"body": "@xabbu42 Sorry for the delayed response. Could you please update this PR as per the new comments?",
"created_at": "2017-11-10T19:24:08Z"
},
{
"body": "@rjernst no problem, I just wanted to make sure nobody is waiting on me. I hope the updated PR is ok now.",
"created_at": "2017-11-11T11:59:28Z"
},
{
"body": "@elasticmachine test this please",
"created_at": "2017-11-27T22:27:22Z"
},
{
"body": "@xabbu42 Can you fix the tabs? They should be spaces:\r\n\r\n```\r\n23:25:22 Execution failed for task ':core:forbiddenPatterns'.\r\n23:25:22 > Found invalid patterns:\r\n23:25:22 - tab on line 159 of core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java\r\n23:25:22 - tab on line 160 of core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java\r\n23:25:22 - tab on line 181 of core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java\r\n23:25:22 - tab on line 182 of core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java\r\n```",
"created_at": "2017-11-28T20:56:56Z"
},
{
"body": "Ups sorry about that, its fixed now.",
"created_at": "2017-11-29T13:10:05Z"
},
{
"body": "@elasticmachine test this please",
"created_at": "2017-12-05T23:06:19Z"
},
{
"body": "@elasticmachine test this please",
"created_at": "2017-12-21T20:47:47Z"
},
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-12-22T19:32:15Z"
},
{
"body": "@elasticmachine test this please",
"created_at": "2017-12-22T19:33:25Z"
},
{
"body": "@elasticmachine test this please",
"created_at": "2017-12-22T21:09:05Z"
},
{
"body": "Thanks @xabbu42",
"created_at": "2018-01-09T01:27:38Z"
}
],
"number": 26985,
"title": "Fix upgrading indices which use a custom similarity plugin."
} | {
"body": "This is a straightforward port of #26985 to the 5.6 branch. Quoting the original:\r\n\r\n> Fix upgrading indices which use a custom similarity plugin.\r\n> Use a fake similarity map that always returns a value in MetaDataIndexUpgradeService.checkMappingsCompatibility instead of an empty map.\r\n",
"number": 28795,
"review_comments": [],
"title": "Backport PR 26985, upgrading indices with a custom similarity plugin"
} | {
"commits": [
{
"message": "Backport PR 26985 which fixes #25350.\n\nFix upgrading indices which use a custom similarity plugin.\nUse a fake similarity map that always returns a value in\nMetaDataIndexUpgradeService.checkMappingsCompatibility instead of\nan empty map."
}
],
"files": [
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.similarity.SimilarityService;\n+import org.elasticsearch.index.similarity.SimilarityProvider;\n import org.elasticsearch.indices.mapper.MapperRegistry;\n import org.elasticsearch.plugins.Plugin;\n \n@@ -42,6 +43,7 @@\n import java.util.Collections;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.BiFunction;\n import java.util.function.UnaryOperator;\n \n /**\n@@ -137,26 +139,51 @@ private static boolean isSupportedVersion(IndexMetaData indexMetaData, Version m\n */\n private void checkMappingsCompatibility(IndexMetaData indexMetaData) {\n try {\n- // We cannot instantiate real analysis server at this point because the node might not have\n- // been started yet. However, we don't really need real analyzers at this stage - so we can fake it\n+\n+ // We cannot instantiate real analysis server or similiarity service at this point because the node\n+ // might not have been started yet. However, we don't really need real analyzers or similarities at\n+ // this stage - so we can fake it using constant maps accepting every key.\n+ // This is ok because all used similarities and analyzers for this index were known before the upgrade.\n+ // Missing analyzers and similarities plugin will still trigger the apropriate error during the\n+ // actual upgrade.\n+\n IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings);\n- SimilarityService similarityService = new SimilarityService(indexSettings, Collections.emptyMap());\n+ final Map<String, BiFunction<String, Settings, SimilarityProvider>> similarityMap = new AbstractMap<String, BiFunction<String, Settings, SimilarityProvider>>() {\n+ @Override\n+ public boolean containsKey(Object key) {\n+ return true;\n+ }\n+\n+ @Override\n+ public BiFunction<String, Settings, SimilarityProvider> get(Object key) {\n+ assert key instanceof String : \"key must be a string but was: \" + key.getClass();\n+ return SimilarityService.BUILT_IN.get(SimilarityService.DEFAULT_SIMILARITY);\n+ }\n+\n+ // this entrySet impl isn't fully correct but necessary as SimilarityService will iterate\n+ // over all similarities\n+ @Override\n+ public Set<Entry<String, BiFunction<String, Settings, SimilarityProvider>>> entrySet() {\n+ return Collections.emptySet();\n+ }\n+ };\n+ SimilarityService similarityService = new SimilarityService(indexSettings, similarityMap);\n final NamedAnalyzer fakeDefault = new NamedAnalyzer(\"fake_default\", AnalyzerScope.INDEX, new Analyzer() {\n @Override\n protected TokenStreamComponents createComponents(String fieldName) {\n throw new UnsupportedOperationException(\"shouldn't be here\");\n }\n });\n- // this is just a fake map that always returns the same value for any possible string key\n- // also the entrySet impl isn't fully correct but we implement it since internally\n- // IndexAnalyzers will iterate over all analyzers to close them.\n+\n final Map<String, NamedAnalyzer> analyzerMap = new AbstractMap<String, NamedAnalyzer>() {\n @Override\n public NamedAnalyzer get(Object key) {\n assert key instanceof String : \"key must be a string but was: \" + key.getClass();\n return new NamedAnalyzer((String)key, AnalyzerScope.INDEX, fakeDefault.analyzer());\n }\n \n+ // this entrySet impl isn't fully correct but necessary as IndexAnalyzers will iterate\n+ // over all analyzers to close them\n @Override\n public Set<Entry<String, NamedAnalyzer>> 
entrySet() {\n return Collections.emptySet();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
}
]
} |
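
The heart of the diff above is a map that pretends to contain every similarity name, so that `checkMappingsCompatibility` can run before plugins (and their custom similarities) are loaded. As a rough illustration of that pattern only — a hypothetical `ConstantValueMap` class, not the actual Elasticsearch code — a minimal sketch might look like this:

```java
import java.util.AbstractMap;
import java.util.Collections;
import java.util.Set;

// Sketch of the "fake map" trick: report every key as present and return one constant default
// value, so lookups for plugin-provided names succeed during the upgrade check. The empty
// entrySet() mirrors the diff above, where code that iterates over all entries must see nothing.
final class ConstantValueMap<V> extends AbstractMap<String, V> {
    private final V defaultValue;

    ConstantValueMap(V defaultValue) {
        this.defaultValue = defaultValue;
    }

    @Override
    public boolean containsKey(Object key) {
        return true;
    }

    @Override
    public V get(Object key) {
        return defaultValue;
    }

    @Override
    public Set<Entry<String, V>> entrySet() {
        return Collections.emptySet();
    }
}
```

The real change wires an equivalent map of `SimilarityProvider` factories (always returning the built-in default similarity) into `SimilarityService`, alongside the analyzer map that already used the same trick.
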
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\n5.3.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\n1.8.0_111\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nCentOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nthe node.nodeId() should be primaryShard.currentNodeId()\r\n\r\n\r\n\r\n\r\n",
"comments": [
{
"body": "Hm, yes, that does look incorrect. Although this report is against 5.3.0, it looks like this is still the case in master:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/99f88f15c5febbca2d13b5b5fda27b844153bf1a/server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java#L149-L169\r\n\r\n@czjxy881 it looks from your colour scheme that you're using IntelliJ. If so, you may like to know that you can select some code and select 'Open in Github' from the context menu to get a URL that links to code, rather than using a screenshot.\r\n\r\n@elastic/es-distributed I'm marking this as adoptme.",
"created_at": "2018-02-22T07:50:53Z"
},
{
"body": "@czjxy881 would you like to make a PR?",
"created_at": "2018-02-22T08:19:43Z"
},
{
"body": "@DaveCTurner thank you , i really don't know yet",
"created_at": "2018-02-22T08:50:45Z"
},
{
"body": "@ywelsch already submit",
"created_at": "2018-02-22T08:50:58Z"
}
],
"number": 28777,
"title": "ThrottlingAllocationDecider outgoing message's nodeid is wrong"
} | {
"body": "fixes #28777\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\nyes\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\nyes\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\n\r\n\r\n",
"number": 28779,
"review_comments": [
{
"body": "Nit: please add spaces after the `for` and before the `{`.",
"created_at": "2018-02-22T10:50:06Z"
},
{
"body": "Nit: please add spaces after the `if` and before the `{`.",
"created_at": "2018-02-22T10:50:20Z"
},
{
"body": "This line is longer than 120 characters so CI will reject it.",
"created_at": "2018-02-22T10:50:53Z"
},
{
"body": "`ThrottlingAllocationDecider` is not imported, so this is a compile error.",
"created_at": "2018-02-22T10:51:15Z"
},
{
"body": "Nit: please add a space before the `,` separating the function arguments.",
"created_at": "2018-02-22T10:52:57Z"
},
{
"body": "Could we call this `foundThrottledMessage`?",
"created_at": "2018-02-22T15:10:22Z"
},
{
"body": "Could you assert the entire text of the message as you had done previously? It helps to clarify what this bit of code is testing.",
"created_at": "2018-02-22T15:11:41Z"
},
{
"body": "Could you use `assertTrue()` here?",
"created_at": "2018-02-22T15:12:18Z"
}
],
"title": "Fix outgoing NodeID"
} | {
"commits": [
{
"message": "Fix outgoing NodeID\n\nfix https://github.com/elastic/elasticsearch/issues/28777"
},
{
"message": "add test case\n\nadd test case"
},
{
"message": "revise according to response"
},
{
"message": "add import"
},
{
"message": "revise"
}
],
"files": [
{
"diff": "@@ -156,7 +156,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n return allocation.decision(THROTTLE, NAME,\n \"reached the limit of outgoing shard recoveries [%d] on the node [%s] which holds the primary, \" +\n \"cluster setting [%s=%d] (can also be set via [%s])\",\n- primaryNodeOutRecoveries, node.nodeId(),\n+ primaryNodeOutRecoveries, primaryShard.currentNodeId(),\n CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING.getKey(),\n concurrentOutgoingRecoveries,\n CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING.getKey());",
"filename": "server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ThrottlingAllocationDecider.java",
"status": "modified"
},
{
"diff": "@@ -39,6 +39,7 @@\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n@@ -299,6 +300,18 @@ public void testOutgoingThrottlesAllocation() {\n new MoveAllocationCommand(\"test\", 0, \"node2\", \"node4\")), true, false);\n assertEquals(commandsResult.explanations().explanations().size(), 1);\n assertEquals(commandsResult.explanations().explanations().get(0).decisions().type(), Decision.Type.THROTTLE);\n+ boolean foundThrottledMessage = false;\n+ for (Decision decision : commandsResult.explanations().explanations().get(0).decisions().getDecisions()) {\n+ if (decision.label().equals(ThrottlingAllocationDecider.NAME)) {\n+ assertEquals(\"reached the limit of outgoing shard recoveries [1] on the node [node1] which holds the primary, \" \n+ + \"cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=1] \" \n+ + \"(can also be set via [cluster.routing.allocation.node_concurrent_recoveries])\", \n+ decision.getExplanation());\n+ assertEquals(Decision.Type.THROTTLE, decision.type());\n+ foundThrottledMessage = true;\n+ }\n+ }\n+ assertTrue(foundThrottledMessage);\n // even though it is throttled, move command still forces allocation\n \n clusterState = commandsResult.getClusterState();",
"filename": "server/src/test/java/org/elasticsearch/cluster/routing/allocation/ThrottlingAllocationTests.java",
"status": "modified"
}
]
} |
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 6.2.1, Build: 7299dc3/2018-02-07T19:34:26.990113Z, JVM: 1.8.0_161\r\n\r\n**Plugins installed**: None\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_161\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_161-b12)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux node1 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\ngeo_shape query on geo_shape field of type point with points_only set to true does not work anymore in 6.x. It was working correctly in 5.x. By not working I mean when querying for points contained within a geo_shape (envelope, polygon etc.), no hits are returned. When points_only: true is removed from mapping, the queries work as expected, returning the points contained within the specified geo_shape.\r\n\r\n**Steps to reproduce**:\r\n\r\nBasically using the examples in https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-shape-query.html. The only difference is specifying points_only: true in the location field, as shown below:\r\n\r\n1. Create an index to store some points:\r\n```\r\nPUT /example\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"location\": {\r\n \"type\": \"geo_shape\",\r\n \"points_only\": true\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n2. Store a point\r\n```\r\nPOST /example/_doc?refresh\r\n{\r\n \"name\": \"Wind & Wetter, Berlin, Germany\",\r\n \"location\": {\r\n \"type\": \"point\",\r\n \"coordinates\": [13.400544, 52.530286]\r\n }\r\n}\r\n```\r\n3. The following query should find the point using the Elasticsearch’s envelope GeoJSON extension:\r\n```\r\nGET /example/_search\r\n{\r\n \"query\":{\r\n \"bool\": {\r\n \"must\": {\r\n \"match_all\": {}\r\n },\r\n \"filter\": {\r\n \"geo_shape\": {\r\n \"location\": {\r\n \"shape\": {\r\n \"type\": \"envelope\",\r\n \"coordinates\" : [[13.0, 53.0], [14.0, 52.0]]\r\n },\r\n \"relation\": \"within\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nBut in fact it returns no hits:\r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 1,\r\n \"successful\": 1,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": null,\r\n \"hits\": []\r\n }\r\n}\r\n```\r\n\r\nIf you delete the example index and create it without specifying points_only: true, as follows:\r\n\r\n```\r\nPUT /example\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"location\": {\r\n \"type\": \"geo_shape\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nand repeat steps 2 and 3 above, the query will return the point, as expected:\r\n\r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 1,\r\n \"successful\": 1,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"example\",\r\n \"_type\": \"_doc\",\r\n \"_id\": \"SZods2EBpb50aY-UDsK9\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"name\": \"Wind & Wetter, Berlin, Germany\",\r\n \"location\": {\r\n \"type\": \"point\",\r\n \"coordinates\": [\r\n 13.400544,\r\n 52.530286\r\n ]\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```",
"comments": [
{
"body": "@elastic/es-search-aggs",
"created_at": "2018-02-20T12:49:40Z"
},
{
"body": "@nknize could you take a look?",
"created_at": "2018-02-20T12:58:33Z"
},
{
"body": "I was just going to open the same bug.\r\n\r\nSome further info that might help: this is **not** an issue in `6.0.1` but I've reproduced it in subsequent versions including `6.1.0` and `6.2.2`.",
"created_at": "2018-02-21T10:32:43Z"
}
],
"number": 28744,
"title": "geo_shape query on geo_shape field of type point with points_only set to true does not work in 6.x"
} | {
"body": "This PR fixes a bug that was introduced in PR #27415 for 6.1 and 7.0 where a change to support MULTIPOINT types in a `geo_shape` index with `points_only` set to `true` mucked up indexing of standalone points.\r\n\r\ncloses #28744\r\n",
"number": 28774,
"review_comments": [],
"title": "[GEO] Fix points_only indexing failure for GeoShapeFieldMapper"
} | {
"commits": [
{
"message": "[GEO] Fix points_only indexing failure for GeoShapeFieldMapper\n\nThis commit fixes a bug that was introduced in PR #27415 for 6.1\nand 7.0 where a change to support MULTIPOINT shapes mucked up\nindexing of standalone points."
}
],
"files": [
{
"diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.spatial.prefix.tree.PackedQuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.geo.GeoUtils;\n@@ -474,13 +475,13 @@ public Mapper parse(ParseContext context) throws IOException {\n for (Shape s : shapes) {\n indexShape(context, s);\n }\n+ return null;\n } else if (shape instanceof Point == false) {\n throw new MapperParsingException(\"[{\" + fieldType().name() + \"}] is configured for points only but a \" +\n ((shape instanceof JtsGeometry) ? ((JtsGeometry)shape).getGeom().getGeometryType() : shape.getClass()) + \" was found\");\n }\n- } else {\n- indexShape(context, shape);\n }\n+ indexShape(context, shape);\n } catch (Exception e) {\n if (ignoreMalformed.value() == false) {\n throw new MapperParsingException(\"failed to parse [\" + fieldType().name() + \"]\", e);",
"filename": "server/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -464,7 +464,7 @@ public void testPointsOnly() throws Exception {\n \n // test that point was inserted\n SearchResponse response = client().prepareSearch(\"geo_points_only\").setTypes(\"type1\")\n- .setQuery(matchAllQuery())\n+ .setQuery(geoIntersectionQuery(\"location\", shape))\n .execute().actionGet();\n \n assertEquals(1, response.getHits().getTotalHits());",
"filename": "server/src/test/java/org/elasticsearch/search/geo/GeoShapeQueryTests.java",
"status": "modified"
}
]
} |
{
"body": "This PR fixes a bug where geo_shape indexes configured for `\"points_only\" : \"true\"` reject documents containing `multipoint` shape types.",
"comments": [],
"number": 27415,
"title": "[GEO] fix pointsOnly bug for MULTIPOINT"
} | {
"body": "This PR fixes a bug that was introduced in PR #27415 for 6.1 and 7.0 where a change to support MULTIPOINT types in a `geo_shape` index with `points_only` set to `true` mucked up indexing of standalone points.\r\n\r\ncloses #28744\r\n",
"number": 28774,
"review_comments": [],
"title": "[GEO] Fix points_only indexing failure for GeoShapeFieldMapper"
} | {
"commits": [
{
"message": "[GEO] Fix points_only indexing failure for GeoShapeFieldMapper\n\nThis commit fixes a bug that was introduced in PR #27415 for 6.1\nand 7.0 where a change to support MULTIPOINT shapes mucked up\nindexing of standalone points."
}
],
"files": [
{
"diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.spatial.prefix.tree.PackedQuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.SpatialPrefixTree;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.geo.GeoUtils;\n@@ -474,13 +475,13 @@ public Mapper parse(ParseContext context) throws IOException {\n for (Shape s : shapes) {\n indexShape(context, s);\n }\n+ return null;\n } else if (shape instanceof Point == false) {\n throw new MapperParsingException(\"[{\" + fieldType().name() + \"}] is configured for points only but a \" +\n ((shape instanceof JtsGeometry) ? ((JtsGeometry)shape).getGeom().getGeometryType() : shape.getClass()) + \" was found\");\n }\n- } else {\n- indexShape(context, shape);\n }\n+ indexShape(context, shape);\n } catch (Exception e) {\n if (ignoreMalformed.value() == false) {\n throw new MapperParsingException(\"failed to parse [\" + fieldType().name() + \"]\", e);",
"filename": "server/src/main/java/org/elasticsearch/index/mapper/GeoShapeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -464,7 +464,7 @@ public void testPointsOnly() throws Exception {\n \n // test that point was inserted\n SearchResponse response = client().prepareSearch(\"geo_points_only\").setTypes(\"type1\")\n- .setQuery(matchAllQuery())\n+ .setQuery(geoIntersectionQuery(\"location\", shape))\n .execute().actionGet();\n \n assertEquals(1, response.getHits().getTotalHits());",
"filename": "server/src/test/java/org/elasticsearch/search/geo/GeoShapeQueryTests.java",
"status": "modified"
}
]
} |
{
"body": "Hi!\r\n\r\nI'm using Elasticsearch to store test results of a software. I'm running Elasticsearch on Docker CE with the following configuration:\r\n\r\nDocker version 17.12.0-ce, build c97c6d6\r\nUbuntu 14.04.5 LTS 14.04 trusty\r\ndocker.elastic.co/elasticsearch/elasticsearch-oss:6.2.1\r\ndocker.elastic.co/kibana/kibana-oss:6.2.1\r\n\r\nThe documents look like the following:\r\n\r\n```\r\n { \r\n \"product\":{ \r\n \"software\":{ \r\n \"revision\":\"251ebc1ed721622f49c5485068f8b39c45005a20\",\r\n \"date\":\"2018-02-09T17:40:34+00:00\"\r\n },\r\n \"os\":{ \r\n \"name\":\"Ubuntu\",\r\n \"version\":\"14.04\"\r\n }\r\n },\r\n \"results\":{ \r\n \"summary\":{ \r\n \"Pass\":2,\r\n \"Fail\":3\r\n }\r\n }\r\n }\r\n```\r\n\r\nI want to compute the average Pass and Fail numbers for all combinations of os.name and os.version for their newest software version, so I use the following Composite query:\r\n\r\n```\r\n GET _search\r\n {\r\n \"size\": 0,\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"aggs\": {\r\n \"products\": {\r\n \"composite\": {\r\n \"sources\": [\r\n {\r\n \"osName\": {\r\n \"terms\": {\r\n \"field\": \"product.os.name.keyword\"\r\n }\r\n }\r\n },\r\n {\r\n \"osVer\": {\r\n \"terms\": {\r\n \"field\": \"product.os.version.keyword\"\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"aggs\": {\r\n \"revision\": {\r\n \"terms\": {\r\n \"field\": \"product.software.revision.keyword\",\r\n \"order\": {\r\n \"SWDate\": \"desc\"\r\n },\r\n \"size\": 1\r\n },\r\n \"aggs\": {\r\n \"SWDate\": {\r\n \"max\": {\r\n \"field\": \"product.software.date\"\r\n }\r\n },\r\n \"Pass\": {\r\n \"avg\": {\r\n \"field\": \"results.summary.Pass\",\r\n \"missing\": 0\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nHowever, this fails with:\r\n\r\n```\r\n \"_shards\": {\r\n \"total\": 6,\r\n \"successful\": 1,\r\n \"skipped\": 0,\r\n \"failed\": 5,\r\n \"failures\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test_results\",\r\n \"node\": \"<nodeId>\",\r\n \"reason\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Cannot replay yet, collection is not finished: postCollect() has not been called\"\r\n }\r\n }\r\n ]\r\n }\r\n```\r\n\r\nIf I remove the Pass average aggregation it does not fail but id does fail with other metric aggregations. Any ideas what I might do wrong? Thanks!\r\n",
"comments": [
{
"body": "@elastic/es-search-aggs ",
"created_at": "2018-03-13T12:02:52Z"
}
],
"number": 28688,
"title": "Metric aggregation fails in composite aggregation"
} | {
"body": "This change refactors the composite aggregation to add an execution mode that visits documents in the order of the values present in the leading source of the composite definition. This mode does not need to visit all documents since it can early terminate the collection when the leading source value is greater than the lowest value in the queue.\r\nInstead of collecting the documents in the order of their doc_id, this mode uses the inverted lists (or the bkd tree for numerics) to collect documents\r\nin the order of the values present in the leading source.\r\nFor instance the following aggregation:\r\n\r\n```\r\n\"composite\" : {\r\n \"sources\" : [\r\n { \"value1\": { \"terms\" : { \"field\": \"timestamp\", \"order\": \"asc\" } } }\r\n ],\r\n \"size\": 10\r\n}\r\n```\r\n... can use the field `timestamp` to collect the documents with the 10 lowest values for the field instead of visiting all documents.\r\nFor composite aggregation with more than one source the execution can early terminate as soon as one of the 10 lowest values produces enough composite buckets. For instance if visiting the first two lowest timestamp created 10 composite buckets we can early terminate the collection since it\r\nis guaranteed that the third lowest timestamp cannot create a composite key that compares lower than the one already visited.\r\n\r\nThis mode can execute iff:\r\n * The leading source in the composite definition uses an indexed field of type `date` (works also with `date_histogram` source), `integer`, `long` or `keyword`.\r\n * The query is a match_all query or a range query over the field that is used as the leading source in the composite definition.\r\n * The sort order of the leading source is the natural order (ascending since postings and numerics are sorted in ascending order only).\r\n\r\nIf these conditions are not met this aggregation visits each document like any other agg.\r\n\r\nCloses #28688",
"number": 28745,
"review_comments": [
{
"body": "It seems that this should really never occur? Should we make this an assertion instead?\r\n",
"created_at": "2018-02-20T13:39:32Z"
},
{
"body": "why do these need to be arrays if they only contain a single element?",
"created_at": "2018-02-20T14:22:12Z"
},
{
"body": "I wonder if it would be clearer for the SortedBucketProducer to throw the exception rather than throwing it here?",
"created_at": "2018-02-20T14:38:16Z"
},
{
"body": "Because the values are set in the anonymous class below, I find it nicer than using Atomic and I saw this pattern in other locations in the codebase.",
"created_at": "2018-02-20T18:52:09Z"
},
{
"body": "I pushed a commit that adds a comment regarding why we throw an exception here.",
"created_at": "2018-02-20T19:26:36Z"
},
{
"body": "++, I replaced it with an assert.",
"created_at": "2018-02-20T19:26:39Z"
},
{
"body": "how do we know it is not empty?",
"created_at": "2018-02-21T13:50:50Z"
},
{
"body": "missing text?",
"created_at": "2018-02-21T14:04:31Z"
},
{
"body": "do we need to accept both BytesRef and String?",
"created_at": "2018-03-12T15:38:37Z"
},
{
"body": "let's assert that bucket is 0?",
"created_at": "2018-03-12T15:57:03Z"
},
{
"body": "do we need to set it for every doc?",
"created_at": "2018-03-12T18:03:02Z"
},
{
"body": "this means currentValue will always be the higher value in case of a multi-valued field, is that ok?",
"created_at": "2018-03-12T18:03:52Z"
},
{
"body": "currentValue is only valid for the current composite bucket, `next.collect()` below will fill the other sources's currentValue and the last collector in the chain will check if the final composite bucket should be added in the queue. We don't use currentValue outside of these recursive calls.",
"created_at": "2018-03-12T18:33:45Z"
},
{
"body": "I see, thanks.",
"created_at": "2018-03-13T08:12:34Z"
},
{
"body": "maybe mention how it's used?",
"created_at": "2018-03-13T08:16:02Z"
},
{
"body": "maybe say how it's supposed to know about the current value?",
"created_at": "2018-03-13T08:16:31Z"
},
{
"body": "maybe say that the current value is the one from the last copyCurrent call?",
"created_at": "2018-03-13T08:17:00Z"
},
{
"body": "using `cost()==0` is a bit unsafe since cost() may be completely off",
"created_at": "2018-03-13T08:53:10Z"
},
{
"body": "doesn't look useful?",
"created_at": "2018-03-13T08:57:14Z"
},
{
"body": "This optimization is unsafe since cost may be inaccurate. I would be ok if it was only called with postings, but it looks like this method is sometimes called with the result of DocIdSetBuilder.finish which gives approximate costs in the dense case.",
"created_at": "2018-03-13T09:00:25Z"
},
{
"body": "Let's do a Math.min(cost, Integer.MAX_VALUE) rather than a blind cast?",
"created_at": "2018-03-21T21:00:02Z"
},
{
"body": "I don't think you need to count the number of remaining docs here, the BKD tree does it for you",
"created_at": "2018-03-21T21:07:08Z"
},
{
"body": "let's do min(Integer.MAX_VALUE, iterator.cost())",
"created_at": "2018-03-21T21:12:59Z"
},
{
"body": "I need it because we build one doc id set per bucket (not per bkd leaf) so if the values are different inside a leaf I need to know the number of remaining docs in that leaf to create the new doc id set builder.",
"created_at": "2018-03-23T15:30:37Z"
},
{
"body": "I see, thanks for explaining.",
"created_at": "2018-03-23T17:38:32Z"
}
],
"title": "Optimize the composite aggregation for match_all and range queries"
} | {
"commits": [
{
"message": "Optimize the composite aggregation for match_all and range queries\n\nThis change refactors the composite aggregation to add an execution mode that visits documents in the order of the values\npresent in the leading source of the composite definition. This mode does not need to visit all documents since it can early terminate\nthe collection when the leading source value is greater than the lowest value in the queue.\nInstead of collecting the documents in the order of their doc_id, this mode uses the inverted lists (or the bkd tree for numerics) to collect documents\nin the order of the values present in the leading source.\nFor instance the following aggregation:\n\n```\n\"composite\" : {\n \"sources\" : [\n { \"value1\": { \"terms\" : { \"field\": \"timestamp\", \"order\": \"asc\" } } }\n ],\n \"size\": 10\n}\n```\n... can use the field `timestamp` to collect the documents with the 10 lowest values for the field instead of visiting all documents.\nFor composite aggregation with more than one source the execution can early terminate as soon as one of the 10 lowest values produces enough\ncomposite buckets. For instance if visiting the first two lowest timestamp created 10 composite buckets we can early terminate the collection since it\nis guaranteed that the third lowest timestamp cannot create a composite key that compares lower than the one already visited.\n\nThis mode can execute iff:\n * The leading source in the composite definition uses an indexed field of type `date` (works also with `date_histogram` source), `integer`, `long` or `keyword`.\n * The query is a match_all query or a range query over the field that is used as the leading source in the composite definition.\n * The sort order of the leading source is the natural order (ascending since postings and numerics are sorted in ascending order only).\n\nIf these conditions are not met this aggregation visits each document like any other agg."
},
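{
"note": "As a usage-level illustration of when the mode described above can kick in: a composite aggregation whose leading source is an indexed `timestamp` field in the default ascending order, combined with a range query on that same field. This is a sketch assuming the 6.x Java API; the index/field names are made up."
},
```
import java.util.Collections;
import java.util.List;

import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;

final class CompositeExample {
    static SearchSourceBuilder sortedByLeadingSource() {
        // leading source on an indexed field, ascending (default) order
        List<CompositeValuesSourceBuilder<?>> sources =
            Collections.singletonList(new TermsValuesSourceBuilder("value1").field("timestamp"));
        // match_all or a range query over the leading source field are the cases
        // where the sorted execution mode can be used, per the commit message above
        return new SearchSourceBuilder()
            .size(0)
            .query(QueryBuilders.rangeQuery("timestamp").gte("2018-01-01"))
            .aggregation(new CompositeAggregationBuilder("my_buckets", sources).size(10));
    }
}
```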
{
"message": "fix checkstyle"
},
{
"message": "handle null point values"
},
{
"message": "restore global ord execution for normal execution"
},
{
"message": "add missing change"
},
{
"message": "fix checkstyle"
},
{
"message": "fix global ord comparaison"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
},
{
"message": "add tests for the composite queue and address review comments"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
},
{
"message": "cosmetics"
},
{
"message": "adapt heuristic to disable sorted docs producer"
},
{
"message": "protect against empty reader"
},
{
"message": "Add missing license"
},
{
"message": "refactor the composite source to create the sorted docs producer directly and adds tests"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
},
{
"message": "fail composite agg that contains an unmapped field and no missing value"
},
{
"message": "implement deferring collection directly in the collector"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
},
{
"message": "line len"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
},
{
"message": "more javadocs and cleanups"
},
{
"message": "make sure that the cost is within the integer range when building the doc id set builder"
},
{
"message": "Merge branch 'master' into composite_sort_optim"
}
],
"files": [
{
"diff": "@@ -545,88 +545,3 @@ GET /_search\n }\n --------------------------------------------------\n // TESTRESPONSE[s/\\.\\.\\.//]\n-\n-==== Index sorting\n-\n-By default this aggregation runs on every document that match the query.\n-Though if the index sort matches the composite sort this aggregation can optimize\n-the execution and can skip documents that contain composite buckets that would not\n-be part of the response.\n-\n-For instance the following aggregations:\n-\n-[source,js]\n---------------------------------------------------\n-GET /_search\n-{\n- \"aggs\" : {\n- \"my_buckets\": {\n- \"composite\" : {\n- \"size\": 2,\n- \"sources\" : [\n- { \"date\": { \"date_histogram\": { \"field\": \"timestamp\", \"interval\": \"1d\", \"order\": \"asc\" } } },\n- { \"product\": { \"terms\": { \"field\": \"product\", \"order\": \"asc\" } } }\n- ]\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-// CONSOLE\n-\n-\\... is much faster on an index that uses the following sort:\n-\n-[source,js]\n---------------------------------------------------\n-PUT twitter\n-{\n- \"settings\" : {\n- \"index\" : {\n- \"sort.field\" : [\"timestamp\", \"product\"],\n- \"sort.order\" : [\"asc\", \"asc\"]\n- }\n- },\n- \"mappings\": {\n- \"sales\": {\n- \"properties\": {\n- \"timestamp\": {\n- \"type\": \"date\"\n- },\n- \"product\": {\n- \"type\": \"keyword\"\n- }\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-// CONSOLE\n-\n-WARNING: The optimization takes effect only if the fields used for sorting are single-valued and follow\n-the same order as the aggregation (`desc` or `asc`).\n-\n-If only the aggregation results are needed it is also better to set the size of the query to 0\n-and `track_total_hits` to false in order to remove other slowing factors:\n-\n-[source,js]\n---------------------------------------------------\n-GET /_search\n-{\n- \"size\": 0,\n- \"track_total_hits\": false,\n- \"aggs\" : {\n- \"my_buckets\": {\n- \"composite\" : {\n- \"size\": 2,\n- \"sources\" : [\n- { \"date\": { \"date_histogram\": { \"field\": \"timestamp\", \"interval\": \"1d\" } } },\n- { \"product\": { \"terms\": { \"field\": \"product\" } } }\n- ]\n- }\n- }\n- }\n-}\n---------------------------------------------------\n-// CONSOLE\n-\n-See <<index-modules-index-sorting, index sorting>> for more details.",
"filename": "docs/reference/aggregations/bucket/composite-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -99,6 +99,7 @@ setup:\n - do:\n search:\n index: test\n+ allow_partial_search_results: false\n body:\n aggregations:\n test:",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,131 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A {@link SingleDimensionValuesSource} for binary source ({@link BytesRef}).\n+ */\n+class BinaryValuesSource extends SingleDimensionValuesSource<BytesRef> {\n+ private final CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc;\n+ private final BytesRef[] values;\n+ private BytesRef currentValue;\n+\n+ BinaryValuesSource(MappedFieldType fieldType, CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc,\n+ int size, int reverseMul) {\n+ super(fieldType, size, reverseMul);\n+ this.docValuesFunc = docValuesFunc;\n+ this.values = new BytesRef[size];\n+ }\n+\n+ @Override\n+ public void copyCurrent(int slot) {\n+ values[slot] = BytesRef.deepCopyOf(currentValue);\n+ }\n+\n+ @Override\n+ public int compare(int from, int to) {\n+ return compareValues(values[from], values[to]);\n+ }\n+\n+ @Override\n+ int compareCurrent(int slot) {\n+ return compareValues(currentValue, values[slot]);\n+ }\n+\n+ @Override\n+ int compareCurrentWithAfter() {\n+ return compareValues(currentValue, afterValue);\n+ }\n+\n+ int compareValues(BytesRef v1, BytesRef v2) {\n+ return v1.compareTo(v2) * reverseMul;\n+ }\n+\n+ @Override\n+ void setAfter(Comparable<?> value) {\n+ if (value.getClass() == BytesRef.class) {\n+ afterValue = (BytesRef) value;\n+ } else if (value.getClass() == String.class) {\n+ afterValue = new BytesRef((String) value);\n+ } else {\n+ throw new IllegalArgumentException(\"invalid value, expected string, got \" + value.getClass().getSimpleName());\n+ }\n+ }\n+\n+ @Override\n+ BytesRef toComparable(int slot) {\n+ return values[slot];\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector next) throws IOException {\n+ final SortedBinaryDocValues dvs = docValuesFunc.apply(context);\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ if (dvs.advanceExact(doc)) {\n+ int num = dvs.docValueCount();\n+ for (int i = 0; i < num; i++) {\n+ currentValue = dvs.nextValue();\n+ next.collect(doc, bucket);\n+ }\n+ }\n+ }\n+ 
};\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(Comparable<?> value, LeafReaderContext context, LeafBucketCollector next) {\n+ if (value.getClass() != BytesRef.class) {\n+ throw new IllegalArgumentException(\"Expected BytesRef, got \" + value.getClass());\n+ }\n+ currentValue = (BytesRef) value;\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ next.collect(doc, bucket);\n+ }\n+ };\n+ }\n+\n+ @Override\n+ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {\n+ if (checkIfSortedDocsIsApplicable(reader, fieldType) == false ||\n+ (query != null && query.getClass() != MatchAllDocsQuery.class)) {\n+ return null;\n+ }\n+ return new TermsSortedDocsProducer(fieldType.name());\n+ }\n+\n+ @Override\n+ public void close() {}\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/BinaryValuesSource.java",
"status": "added"
},
{
"diff": "@@ -19,16 +19,12 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.search.Sort;\n-import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.IndexSortConfig;\n-import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -154,16 +150,9 @@ protected AggregatorFactory<?> doBuild(SearchContext context, AggregatorFactory<\n if (parent != null) {\n throw new IllegalArgumentException(\"[composite] aggregation cannot be used with a parent aggregation\");\n }\n- final QueryShardContext shardContext = context.getQueryShardContext();\n CompositeValuesSourceConfig[] configs = new CompositeValuesSourceConfig[sources.size()];\n- SortField[] sortFields = new SortField[configs.length];\n- IndexSortConfig indexSortConfig = shardContext.getIndexSettings().getIndexSortConfig();\n- if (indexSortConfig.hasIndexSort()) {\n- Sort sort = indexSortConfig.buildIndexSort(shardContext::fieldMapper, shardContext::getForField);\n- System.arraycopy(sort.getSort(), 0, sortFields, 0, sortFields.length);\n- }\n for (int i = 0; i < configs.length; i++) {\n- configs[i] = sources.get(i).build(context, i, configs.length, sortFields[i]);\n+ configs[i] = sources.get(i).build(context);\n if (configs[i].valuesSource().needsScores()) {\n throw new IllegalArgumentException(\"[sources] cannot access _score\");\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,22 +19,29 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.CollectionTerminatedException;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.MultiCollector;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.RoaringDocIdSet;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n+import org.elasticsearch.search.aggregations.BucketCollector;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n import org.elasticsearch.search.aggregations.bucket.BucketsAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n+import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n@@ -43,97 +50,74 @@\n import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n-import java.util.TreeMap;\n import java.util.stream.Collectors;\n \n final class CompositeAggregator extends BucketsAggregator {\n private final int size;\n- private final CompositeValuesSourceConfig[] sources;\n+ private final SortedDocsProducer sortedDocsProducer;\n private final List<String> sourceNames;\n+ private final int[] reverseMuls;\n private final List<DocValueFormat> formats;\n- private final boolean canEarlyTerminate;\n \n- private final TreeMap<Integer, Integer> keys;\n- private final CompositeValuesComparator array;\n+ private final CompositeValuesCollectorQueue queue;\n \n- private final List<LeafContext> contexts = new ArrayList<>();\n- private LeafContext leaf;\n- private RoaringDocIdSet.Builder builder;\n+ private final List<Entry> entries;\n+ private LeafReaderContext currentLeaf;\n+ private RoaringDocIdSet.Builder docIdSetBuilder;\n+ private BucketCollector deferredCollectors;\n \n CompositeAggregator(String name, AggregatorFactories factories, SearchContext context, Aggregator parent,\n- List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData,\n- int size, CompositeValuesSourceConfig[] sources, CompositeKey rawAfterKey) throws IOException {\n+ List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData,\n+ int size, CompositeValuesSourceConfig[] sourceConfigs, CompositeKey rawAfterKey) throws IOException {\n super(name, factories, context, parent, pipelineAggregators, metaData);\n this.size = size;\n- this.sources = sources;\n- this.sourceNames = Arrays.stream(sources).map(CompositeValuesSourceConfig::name).collect(Collectors.toList());\n- this.formats = Arrays.stream(sources).map(CompositeValuesSourceConfig::format).collect(Collectors.toList());\n- // we use slot 0 to fill the current document (size+1).\n- this.array = new CompositeValuesComparator(context.searcher().getIndexReader(), sources, size+1);\n+ this.sourceNames = 
Arrays.stream(sourceConfigs).map(CompositeValuesSourceConfig::name).collect(Collectors.toList());\n+ this.reverseMuls = Arrays.stream(sourceConfigs).mapToInt(CompositeValuesSourceConfig::reverseMul).toArray();\n+ this.formats = Arrays.stream(sourceConfigs).map(CompositeValuesSourceConfig::format).collect(Collectors.toList());\n+ final SingleDimensionValuesSource<?>[] sources =\n+ createValuesSources(context.bigArrays(), context.searcher().getIndexReader(), context.query(), sourceConfigs, size);\n+ this.queue = new CompositeValuesCollectorQueue(sources, size);\n+ this.sortedDocsProducer = sources[0].createSortedDocsProducerOrNull(context.searcher().getIndexReader(), context.query());\n if (rawAfterKey != null) {\n- array.setTop(rawAfterKey.values());\n+ queue.setAfter(rawAfterKey.values());\n }\n- this.keys = new TreeMap<>(array::compare);\n- this.canEarlyTerminate = Arrays.stream(sources)\n- .allMatch(CompositeValuesSourceConfig::canEarlyTerminate);\n+ this.entries = new ArrayList<>();\n }\n \n- boolean canEarlyTerminate() {\n- return canEarlyTerminate;\n+ @Override\n+ protected void doClose() {\n+ Releasables.close(queue);\n+ }\n+\n+ @Override\n+ protected void doPreCollection() throws IOException {\n+ List<BucketCollector> collectors = Arrays.asList(subAggregators);\n+ deferredCollectors = BucketCollector.wrap(collectors);\n+ collectableSubAggregators = BucketCollector.NO_OP_COLLECTOR;\n }\n \n- private int[] getReverseMuls() {\n- return Arrays.stream(sources).mapToInt(CompositeValuesSourceConfig::reverseMul).toArray();\n+ @Override\n+ protected void doPostCollection() throws IOException {\n+ finishLeaf();\n }\n \n @Override\n public InternalAggregation buildAggregation(long zeroBucket) throws IOException {\n assert zeroBucket == 0L;\n- consumeBucketsAndMaybeBreak(keys.size());\n+ consumeBucketsAndMaybeBreak(queue.size());\n \n- // Replay all documents that contain at least one top bucket (collected during the first pass).\n- grow(keys.size()+1);\n- final boolean needsScores = needsScores();\n- Weight weight = null;\n- if (needsScores) {\n- Query query = context.query();\n- weight = context.searcher().createNormalizedWeight(query, true);\n- }\n- for (LeafContext context : contexts) {\n- DocIdSetIterator docIdSetIterator = context.docIdSet.iterator();\n- if (docIdSetIterator == null) {\n- continue;\n- }\n- final CompositeValuesSource.Collector collector =\n- array.getLeafCollector(context.ctx, getSecondPassCollector(context.subCollector));\n- int docID;\n- DocIdSetIterator scorerIt = null;\n- if (needsScores) {\n- Scorer scorer = weight.scorer(context.ctx);\n- // We don't need to check if the scorer is null\n- // since we are sure that there are documents to replay (docIdSetIterator it not empty).\n- scorerIt = scorer.iterator();\n- context.subCollector.setScorer(scorer);\n- }\n- while ((docID = docIdSetIterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {\n- if (needsScores) {\n- assert scorerIt.docID() < docID;\n- scorerIt.advance(docID);\n- // aggregations should only be replayed on matching documents\n- assert scorerIt.docID() == docID;\n- }\n- collector.collect(docID);\n- }\n+ if (deferredCollectors != NO_OP_COLLECTOR) {\n+ // Replay all documents that contain at least one top bucket (collected during the first pass).\n+ runDeferredCollections();\n }\n \n- int num = Math.min(size, keys.size());\n+ int num = Math.min(size, queue.size());\n final InternalComposite.InternalBucket[] buckets = new InternalComposite.InternalBucket[num];\n- final int[] reverseMuls = getReverseMuls();\n 
int pos = 0;\n- for (int slot : keys.keySet()) {\n- CompositeKey key = array.toCompositeKey(slot);\n+ for (int slot : queue.getSortedSlot()) {\n+ CompositeKey key = queue.toCompositeKey(slot);\n InternalAggregations aggs = bucketAggregations(slot);\n- int docCount = bucketDocCount(slot);\n+ int docCount = queue.getDocCount(slot);\n buckets[pos++] = new InternalComposite.InternalBucket(sourceNames, formats, key, reverseMuls, docCount, aggs);\n }\n CompositeKey lastBucket = num > 0 ? buckets[num-1].getRawKey() : null;\n@@ -143,125 +127,179 @@ public InternalAggregation buildAggregation(long zeroBucket) throws IOException\n \n @Override\n public InternalAggregation buildEmptyAggregation() {\n- final int[] reverseMuls = getReverseMuls();\n return new InternalComposite(name, size, sourceNames, formats, Collections.emptyList(), null, reverseMuls,\n pipelineAggregators(), metaData());\n }\n \n- @Override\n- protected LeafBucketCollector getLeafCollector(LeafReaderContext ctx, LeafBucketCollector sub) throws IOException {\n- if (leaf != null) {\n- leaf.docIdSet = builder.build();\n- contexts.add(leaf);\n+ private void finishLeaf() {\n+ if (currentLeaf != null) {\n+ DocIdSet docIdSet = docIdSetBuilder.build();\n+ entries.add(new Entry(currentLeaf, docIdSet));\n+ currentLeaf = null;\n+ docIdSetBuilder = null;\n }\n- leaf = new LeafContext(ctx, sub);\n- builder = new RoaringDocIdSet.Builder(ctx.reader().maxDoc());\n- final CompositeValuesSource.Collector inner = array.getLeafCollector(ctx, getFirstPassCollector());\n- return new LeafBucketCollector() {\n- @Override\n- public void collect(int doc, long zeroBucket) throws IOException {\n- assert zeroBucket == 0L;\n- inner.collect(doc);\n- }\n- };\n }\n \n @Override\n- protected void doPostCollection() throws IOException {\n- if (leaf != null) {\n- leaf.docIdSet = builder.build();\n- contexts.add(leaf);\n+ protected LeafBucketCollector getLeafCollector(LeafReaderContext ctx, LeafBucketCollector sub) throws IOException {\n+ finishLeaf();\n+ boolean fillDocIdSet = deferredCollectors != NO_OP_COLLECTOR;\n+ if (sortedDocsProducer != null) {\n+ /**\n+ * The producer will visit documents sorted by the leading source of the composite definition\n+ * and terminates when the leading source value is guaranteed to be greater than the lowest\n+ * composite bucket in the queue.\n+ */\n+ DocIdSet docIdSet = sortedDocsProducer.processLeaf(context.query(), queue, ctx, fillDocIdSet);\n+ if (fillDocIdSet) {\n+ entries.add(new Entry(ctx, docIdSet));\n+ }\n+\n+ /**\n+ * We can bypass search entirely for this segment, all the processing has been done in the previous call.\n+ * Throwing this exception will terminate the execution of the search for this root aggregation,\n+ * see {@link MultiCollector} for more details on how we handle early termination in aggregations.\n+ */\n+ throw new CollectionTerminatedException();\n+ } else {\n+ if (fillDocIdSet) {\n+ currentLeaf = ctx;\n+ docIdSetBuilder = new RoaringDocIdSet.Builder(ctx.reader().maxDoc());\n+ }\n+ final LeafBucketCollector inner = queue.getLeafCollector(ctx, getFirstPassCollector(docIdSetBuilder));\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long zeroBucket) throws IOException {\n+ assert zeroBucket == 0L;\n+ inner.collect(doc);\n+ }\n+ };\n }\n }\n \n /**\n- * The first pass selects the top N composite buckets from all matching documents.\n- * It also records all doc ids that contain a top N composite bucket in a {@link RoaringDocIdSet} in order to be\n- * able to replay the 
collection filtered on the best buckets only.\n+ * The first pass selects the top composite buckets from all matching documents.\n */\n- private CompositeValuesSource.Collector getFirstPassCollector() {\n- return new CompositeValuesSource.Collector() {\n+ private LeafBucketCollector getFirstPassCollector(RoaringDocIdSet.Builder builder) {\n+ return new LeafBucketCollector() {\n int lastDoc = -1;\n \n @Override\n- public void collect(int doc) throws IOException {\n-\n- // Checks if the candidate key in slot 0 is competitive.\n- if (keys.containsKey(0)) {\n- // This key is already in the top N, skip it for now.\n- if (doc != lastDoc) {\n+ public void collect(int doc, long bucket) throws IOException {\n+ int slot = queue.addIfCompetitive();\n+ if (slot != -1) {\n+ if (builder != null && lastDoc != doc) {\n builder.add(doc);\n lastDoc = doc;\n }\n- return;\n- }\n- if (array.hasTop() && array.compareTop(0) <= 0) {\n- // This key is greater than the top value collected in the previous round.\n- if (canEarlyTerminate) {\n- // The index sort matches the composite sort, we can early terminate this segment.\n- throw new CollectionTerminatedException();\n- }\n- // just skip this key for now\n- return;\n- }\n- if (keys.size() >= size) {\n- // The tree map is full, check if the candidate key should be kept.\n- if (array.compare(0, keys.lastKey()) > 0) {\n- // The candidate key is not competitive\n- if (canEarlyTerminate) {\n- // The index sort matches the composite sort, we can early terminate this segment.\n- throw new CollectionTerminatedException();\n- }\n- // just skip this key\n- return;\n- }\n }\n+ }\n+ };\n+ }\n \n- // The candidate key is competitive\n- final int newSlot;\n- if (keys.size() >= size) {\n- // the tree map is full, we replace the last key with this candidate.\n- int slot = keys.pollLastEntry().getKey();\n- // and we recycle the deleted slot\n- newSlot = slot;\n- } else {\n- newSlot = keys.size() + 1;\n+ /**\n+ * Replay the documents that might contain a top bucket and pass top buckets to\n+ * the {@link this#deferredCollectors}.\n+ */\n+ private void runDeferredCollections() throws IOException {\n+ final boolean needsScores = needsScores();\n+ Weight weight = null;\n+ if (needsScores) {\n+ Query query = context.query();\n+ weight = context.searcher().createNormalizedWeight(query, true);\n+ }\n+ deferredCollectors.preCollection();\n+ for (Entry entry : entries) {\n+ DocIdSetIterator docIdSetIterator = entry.docIdSet.iterator();\n+ if (docIdSetIterator == null) {\n+ continue;\n+ }\n+ final LeafBucketCollector subCollector = deferredCollectors.getLeafCollector(entry.context);\n+ final LeafBucketCollector collector = queue.getLeafCollector(entry.context, getSecondPassCollector(subCollector));\n+ DocIdSetIterator scorerIt = null;\n+ if (needsScores) {\n+ Scorer scorer = weight.scorer(entry.context);\n+ if (scorer != null) {\n+ scorerIt = scorer.iterator();\n+ subCollector.setScorer(scorer);\n }\n- // move the candidate key to its new slot.\n- array.move(0, newSlot);\n- keys.put(newSlot, newSlot);\n- if (doc != lastDoc) {\n- builder.add(doc);\n- lastDoc = doc;\n+ }\n+ int docID;\n+ while ((docID = docIdSetIterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {\n+ if (needsScores) {\n+ assert scorerIt != null && scorerIt.docID() < docID;\n+ scorerIt.advance(docID);\n+ // aggregations should only be replayed on matching documents\n+ assert scorerIt.docID() == docID;\n }\n+ collector.collect(docID);\n }\n- };\n+ }\n+ deferredCollectors.postCollection();\n }\n \n-\n /**\n- * The second pass 
delegates the collection to sub-aggregations but only if the collected composite bucket is a top bucket (selected\n- * in the first pass).\n+ * Replay the top buckets from the matching documents.\n */\n- private CompositeValuesSource.Collector getSecondPassCollector(LeafBucketCollector subCollector) throws IOException {\n- return doc -> {\n- Integer bucket = keys.get(0);\n- if (bucket != null) {\n- // The candidate key in slot 0 is a top bucket.\n- // We can defer the collection of this document/bucket to the sub collector\n- collectExistingBucket(subCollector, doc, bucket);\n+ private LeafBucketCollector getSecondPassCollector(LeafBucketCollector subCollector) {\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long zeroBucket) throws IOException {\n+ assert zeroBucket == 0;\n+ Integer slot = queue.compareCurrent();\n+ if (slot != null) {\n+ // The candidate key is a top bucket.\n+ // We can defer the collection of this document/bucket to the sub collector\n+ subCollector.collect(doc, slot);\n+ }\n }\n };\n }\n \n- static class LeafContext {\n- final LeafReaderContext ctx;\n- final LeafBucketCollector subCollector;\n- DocIdSet docIdSet;\n+ private static SingleDimensionValuesSource<?>[] createValuesSources(BigArrays bigArrays, IndexReader reader, Query query,\n+ CompositeValuesSourceConfig[] configs, int size) {\n+ final SingleDimensionValuesSource<?>[] sources = new SingleDimensionValuesSource[configs.length];\n+ for (int i = 0; i < sources.length; i++) {\n+ final int reverseMul = configs[i].reverseMul();\n+ if (configs[i].valuesSource() instanceof ValuesSource.Bytes.WithOrdinals && reader instanceof DirectoryReader) {\n+ ValuesSource.Bytes.WithOrdinals vs = (ValuesSource.Bytes.WithOrdinals) configs[i].valuesSource();\n+ sources[i] = new GlobalOrdinalValuesSource(bigArrays, configs[i].fieldType(), vs::globalOrdinalsValues, size, reverseMul);\n+ if (i == 0 && sources[i].createSortedDocsProducerOrNull(reader, query) != null) {\n+ // this the leading source and we can optimize it with the sorted docs producer but\n+ // we don't want to use global ordinals because the number of visited documents\n+ // should be low and global ordinals need one lookup per visited term.\n+ Releasables.close(sources[i]);\n+ sources[i] = new BinaryValuesSource(configs[i].fieldType(), vs::bytesValues, size, reverseMul);\n+ }\n+ } else if (configs[i].valuesSource() instanceof ValuesSource.Bytes) {\n+ ValuesSource.Bytes vs = (ValuesSource.Bytes) configs[i].valuesSource();\n+ sources[i] = new BinaryValuesSource(configs[i].fieldType(), vs::bytesValues, size, reverseMul);\n+ } else if (configs[i].valuesSource() instanceof ValuesSource.Numeric) {\n+ final ValuesSource.Numeric vs = (ValuesSource.Numeric) configs[i].valuesSource();\n+ if (vs.isFloatingPoint()) {\n+ sources[i] = new DoubleValuesSource(bigArrays, configs[i].fieldType(), vs::doubleValues, size, reverseMul);\n+ } else {\n+ if (vs instanceof RoundingValuesSource) {\n+ sources[i] = new LongValuesSource(bigArrays, configs[i].fieldType(), vs::longValues,\n+ ((RoundingValuesSource) vs)::round, configs[i].format(), size, reverseMul);\n+ } else {\n+ sources[i] = new LongValuesSource(bigArrays, configs[i].fieldType(), vs::longValues,\n+ (value) -> value, configs[i].format(), size, reverseMul);\n+ }\n+ }\n+ }\n+ }\n+ return sources;\n+ }\n+\n+ private static class Entry {\n+ final LeafReaderContext context;\n+ final DocIdSet docIdSet;\n \n- LeafContext(LeafReaderContext ctx, LeafBucketCollector subCollector) {\n- this.ctx = ctx;\n- 
this.subCollector = subCollector;\n+ Entry(LeafReaderContext context, DocIdSet docIdSet) {\n+ this.context = context;\n+ this.docIdSet = docIdSet;\n }\n }\n }\n+",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregator.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,247 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.Set;\n+import java.util.TreeMap;\n+\n+/**\n+ * A specialized queue implementation for composite buckets\n+ */\n+final class CompositeValuesCollectorQueue implements Releasable {\n+ // the slot for the current candidate\n+ private static final int CANDIDATE_SLOT = Integer.MAX_VALUE;\n+\n+ private final int maxSize;\n+ private final TreeMap<Integer, Integer> keys;\n+ private final SingleDimensionValuesSource<?>[] arrays;\n+ private final int[] docCounts;\n+ private boolean afterValueSet = false;\n+\n+ /**\n+ * Constructs a composite queue with the specified size and sources.\n+ *\n+ * @param sources The list of {@link CompositeValuesSourceConfig} to build the composite buckets.\n+ * @param size The number of composite buckets to keep.\n+ */\n+ CompositeValuesCollectorQueue(SingleDimensionValuesSource<?>[] sources, int size) {\n+ this.maxSize = size;\n+ this.arrays = sources;\n+ this.docCounts = new int[size];\n+ this.keys = new TreeMap<>(this::compare);\n+ }\n+\n+ void clear() {\n+ keys.clear();\n+ Arrays.fill(docCounts, 0);\n+ afterValueSet = false;\n+ }\n+\n+ /**\n+ * The current size of the queue.\n+ */\n+ int size() {\n+ return keys.size();\n+ }\n+\n+ /**\n+ * Whether the queue is full or not.\n+ */\n+ boolean isFull() {\n+ return keys.size() == maxSize;\n+ }\n+\n+ /**\n+ * Returns a sorted {@link Set} view of the slots contained in this queue.\n+ */\n+ Set<Integer> getSortedSlot() {\n+ return keys.keySet();\n+ }\n+\n+ /**\n+ * Compares the current candidate with the values in the queue and returns\n+ * the slot if the candidate is already in the queue or null if the candidate is not present.\n+ */\n+ Integer compareCurrent() {\n+ return keys.get(CANDIDATE_SLOT);\n+ }\n+\n+ /**\n+ * Returns the lowest value (exclusive) of the leading source.\n+ */\n+ Comparable<?> getLowerValueLeadSource() {\n+ return afterValueSet ? arrays[0].getAfter() : null;\n+ }\n+\n+ /**\n+ * Returns the upper value (inclusive) of the leading source.\n+ */\n+ Comparable<?> getUpperValueLeadSource() throws IOException {\n+ return size() >= maxSize ? 
arrays[0].toComparable(keys.lastKey()) : null;\n+ }\n+ /**\n+ * Returns the document count in <code>slot</code>.\n+ */\n+ int getDocCount(int slot) {\n+ return docCounts[slot];\n+ }\n+\n+ /**\n+ * Copies the current value in <code>slot</code>.\n+ */\n+ private void copyCurrent(int slot) {\n+ for (int i = 0; i < arrays.length; i++) {\n+ arrays[i].copyCurrent(slot);\n+ }\n+ docCounts[slot] = 1;\n+ }\n+\n+ /**\n+ * Compares the values in <code>slot1</code> with <code>slot2</code>.\n+ */\n+ int compare(int slot1, int slot2) {\n+ for (int i = 0; i < arrays.length; i++) {\n+ int cmp = (slot1 == CANDIDATE_SLOT) ? arrays[i].compareCurrent(slot2) :\n+ arrays[i].compare(slot1, slot2);\n+ if (cmp != 0) {\n+ return cmp;\n+ }\n+ }\n+ return 0;\n+ }\n+\n+ /**\n+ * Sets the after values for this comparator.\n+ */\n+ void setAfter(Comparable<?>[] values) {\n+ assert values.length == arrays.length;\n+ afterValueSet = true;\n+ for (int i = 0; i < arrays.length; i++) {\n+ arrays[i].setAfter(values[i]);\n+ }\n+ }\n+\n+ /**\n+ * Compares the after values with the values in <code>slot</code>.\n+ */\n+ private int compareCurrentWithAfter() {\n+ for (int i = 0; i < arrays.length; i++) {\n+ int cmp = arrays[i].compareCurrentWithAfter();\n+ if (cmp != 0) {\n+ return cmp;\n+ }\n+ }\n+ return 0;\n+ }\n+\n+ /**\n+ * Builds the {@link CompositeKey} for <code>slot</code>.\n+ */\n+ CompositeKey toCompositeKey(int slot) throws IOException {\n+ assert slot < maxSize;\n+ Comparable<?>[] values = new Comparable<?>[arrays.length];\n+ for (int i = 0; i < values.length; i++) {\n+ values[i] = arrays[i].toComparable(slot);\n+ }\n+ return new CompositeKey(values);\n+ }\n+\n+ /**\n+ * Creates the collector that will visit the composite buckets of the matching documents.\n+ * The provided collector <code>in</code> is called on each composite bucket.\n+ */\n+ LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector in) throws IOException {\n+ return getLeafCollector(null, context, in);\n+ }\n+ /**\n+ * Creates the collector that will visit the composite buckets of the matching documents.\n+ * If <code>forceLeadSourceValue</code> is not null, the leading source will use this value\n+ * for each document.\n+ * The provided collector <code>in</code> is called on each composite bucket.\n+ */\n+ LeafBucketCollector getLeafCollector(Comparable<?> forceLeadSourceValue,\n+ LeafReaderContext context, LeafBucketCollector in) throws IOException {\n+ int last = arrays.length - 1;\n+ LeafBucketCollector collector = in;\n+ while (last > 0) {\n+ collector = arrays[last--].getLeafCollector(context, collector);\n+ }\n+ if (forceLeadSourceValue != null) {\n+ collector = arrays[last].getLeafCollector(forceLeadSourceValue, context, collector);\n+ } else {\n+ collector = arrays[last].getLeafCollector(context, collector);\n+ }\n+ return collector;\n+ }\n+\n+ /**\n+ * Check if the current candidate should be added in the queue.\n+ * @return The target slot of the candidate or -1 is the candidate is not competitive.\n+ */\n+ int addIfCompetitive() {\n+ // checks if the candidate key is competitive\n+ Integer topSlot = compareCurrent();\n+ if (topSlot != null) {\n+ // this key is already in the top N, skip it\n+ docCounts[topSlot] += 1;\n+ return topSlot;\n+ }\n+ if (afterValueSet && compareCurrentWithAfter() <= 0) {\n+ // this key is greater than the top value collected in the previous round, skip it\n+ return -1;\n+ }\n+ if (keys.size() >= maxSize) {\n+ // the tree map is full, check if the candidate key should be kept\n+ if 
(compare(CANDIDATE_SLOT, keys.lastKey()) > 0) {\n+ // the candidate key is not competitive, skip it\n+ return -1;\n+ }\n+ }\n+\n+ // the candidate key is competitive\n+ final int newSlot;\n+ if (keys.size() >= maxSize) {\n+ // the tree map is full, we replace the last key with this candidate\n+ int slot = keys.pollLastEntry().getKey();\n+ // and we recycle the deleted slot\n+ newSlot = slot;\n+ } else {\n+ newSlot = keys.size();\n+ assert newSlot < maxSize;\n+ }\n+ // move the candidate key to its new slot\n+ copyCurrent(newSlot);\n+ keys.put(newSlot, newSlot);\n+ return newSlot;\n+ }\n+\n+\n+ @Override\n+ public void close() {\n+ Releasables.close(arrays);\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesCollectorQueue.java",
"status": "added"
},
{
"diff": "@@ -19,19 +19,13 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.index.DocValues;\n-import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.index.SortedNumericDocValues;\n-import org.apache.lucene.index.SortedSetDocValues;\n-import org.apache.lucene.search.SortField;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.xcontent.ToXContentFragment;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.IndexSortConfig;\n+import org.elasticsearch.index.query.QueryShardException;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n@@ -291,46 +285,18 @@ public String format() {\n *\n * @param context The search context for this source.\n * @param config The {@link ValuesSourceConfig} for this source.\n- * @param pos The position of this source in the composite key.\n- * @param numPos The total number of positions in the composite key.\n- * @param sortField The {@link SortField} of the index sort at this position or null if not present.\n */\n- protected abstract CompositeValuesSourceConfig innerBuild(SearchContext context,\n- ValuesSourceConfig<?> config,\n- int pos,\n- int numPos,\n- SortField sortField) throws IOException;\n+ protected abstract CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSourceConfig<?> config) throws IOException;\n \n- public final CompositeValuesSourceConfig build(SearchContext context, int pos, int numPos, SortField sortField) throws IOException {\n+ public final CompositeValuesSourceConfig build(SearchContext context) throws IOException {\n ValuesSourceConfig<?> config = ValuesSourceConfig.resolve(context.getQueryShardContext(),\n valueType, field, script, missing, null, format);\n- return innerBuild(context, config, pos, numPos, sortField);\n- }\n-\n- protected boolean checkCanEarlyTerminate(IndexReader reader,\n- String fieldName,\n- boolean reverse,\n- SortField sortField) throws IOException {\n- return sortField.getField().equals(fieldName) &&\n- sortField.getReverse() == reverse &&\n- isSingleValued(reader, sortField);\n- }\n-\n- private static boolean isSingleValued(IndexReader reader, SortField field) throws IOException {\n- SortField.Type type = IndexSortConfig.getSortFieldType(field);\n- for (LeafReaderContext context : reader.leaves()) {\n- if (type == SortField.Type.STRING) {\n- final SortedSetDocValues values = DocValues.getSortedSet(context.reader(), field.getField());\n- if (values.cost() > 0 && DocValues.unwrapSingleton(values) == null) {\n- return false;\n- }\n- } else {\n- final SortedNumericDocValues values = DocValues.getSortedNumeric(context.reader(), field.getField());\n- if (values.cost() > 0 && DocValues.unwrapSingleton(values) == null) {\n- return false;\n- }\n- }\n+ if (config.unmapped() && field != null && config.missing() == null) {\n+ // this source cannot produce any values so we refuse to build\n+ // since composite buckets are not created on null values\n+ throw new QueryShardException(context.getQueryShardContext(),\n+ \"failed to find field [\" + field + \"] and [missing] is not provided\");\n }\n- return true;\n+ return 
innerBuild(context, config);\n }\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,22 +19,25 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n+import org.elasticsearch.common.inject.internal.Nullable;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.sort.SortOrder;\n \n class CompositeValuesSourceConfig {\n private final String name;\n+ @Nullable\n+ private final MappedFieldType fieldType;\n private final ValuesSource vs;\n private final DocValueFormat format;\n private final int reverseMul;\n- private final boolean canEarlyTerminate;\n \n- CompositeValuesSourceConfig(String name, ValuesSource vs, DocValueFormat format, SortOrder order, boolean canEarlyTerminate) {\n+ CompositeValuesSourceConfig(String name, @Nullable MappedFieldType fieldType, ValuesSource vs, DocValueFormat format, SortOrder order) {\n this.name = name;\n+ this.fieldType = fieldType;\n this.vs = vs;\n this.format = format;\n- this.canEarlyTerminate = canEarlyTerminate;\n this.reverseMul = order == SortOrder.ASC ? 1 : -1;\n }\n \n@@ -45,6 +48,13 @@ String name() {\n return name;\n }\n \n+ /**\n+ * Returns the {@link MappedFieldType} for this config.\n+ */\n+ MappedFieldType fieldType() {\n+ return fieldType;\n+ }\n+\n /**\n * Returns the {@link ValuesSource} for this configuration.\n */\n@@ -67,11 +77,4 @@ int reverseMul() {\n assert reverseMul == -1 || reverseMul == 1;\n return reverseMul;\n }\n-\n- /**\n- * Returns whether this {@link ValuesSource} is used to sort the index.\n- */\n- boolean canEarlyTerminate() {\n- return canEarlyTerminate;\n- }\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesSourceConfig.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -29,17 +28,16 @@\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.DocValueFormat;\n-import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.support.FieldContext;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n import org.elasticsearch.search.internal.SearchContext;\n-import org.elasticsearch.search.sort.SortOrder;\n import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n@@ -217,11 +215,7 @@ private Rounding createRounding() {\n }\n \n @Override\n- protected CompositeValuesSourceConfig innerBuild(SearchContext context,\n- ValuesSourceConfig<?> config,\n- int pos,\n- int numPos,\n- SortField sortField) throws IOException {\n+ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSourceConfig<?> config) throws IOException {\n Rounding rounding = createRounding();\n ValuesSource orig = config.toValuesSource(context.getQueryShardContext());\n if (orig == null) {\n@@ -230,19 +224,10 @@ protected CompositeValuesSourceConfig innerBuild(SearchContext context,\n if (orig instanceof ValuesSource.Numeric) {\n ValuesSource.Numeric numeric = (ValuesSource.Numeric) orig;\n RoundingValuesSource vs = new RoundingValuesSource(numeric, rounding);\n- boolean canEarlyTerminate = false;\n- final FieldContext fieldContext = config.fieldContext();\n- if (sortField != null &&\n- pos == numPos-1 &&\n- fieldContext != null) {\n- canEarlyTerminate = checkCanEarlyTerminate(context.searcher().getIndexReader(),\n- fieldContext.field(), order() == SortOrder.ASC ? false : true, sortField);\n- }\n- // dates are returned as timestamp in milliseconds-since-the-epoch unless a specific date format\n // is specified in the builder.\n final DocValueFormat docValueFormat = format() == null ? DocValueFormat.RAW : config.format();\n- return new CompositeValuesSourceConfig(name, vs, docValueFormat,\n- order(), canEarlyTerminate);\n+ final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, docValueFormat, order());\n } else {\n throw new IllegalArgumentException(\"invalid source, expected numeric, got \" + orig.getClass().getSimpleName());\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/DateHistogramValuesSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,129 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.Query;\n+import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.DoubleArray;\n+import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A {@link SingleDimensionValuesSource} for doubles.\n+ */\n+class DoubleValuesSource extends SingleDimensionValuesSource<Double> {\n+ private final CheckedFunction<LeafReaderContext, SortedNumericDoubleValues, IOException> docValuesFunc;\n+ private final DoubleArray values;\n+ private double currentValue;\n+\n+ DoubleValuesSource(BigArrays bigArrays, MappedFieldType fieldType,\n+ CheckedFunction<LeafReaderContext, SortedNumericDoubleValues, IOException> docValuesFunc,\n+ int size, int reverseMul) {\n+ super(fieldType, size, reverseMul);\n+ this.docValuesFunc = docValuesFunc;\n+ this.values = bigArrays.newDoubleArray(size, false);\n+ }\n+\n+ @Override\n+ void copyCurrent(int slot) {\n+ values.set(slot, currentValue);\n+ }\n+\n+ @Override\n+ int compare(int from, int to) {\n+ return compareValues(values.get(from), values.get(to));\n+ }\n+\n+ @Override\n+ int compareCurrent(int slot) {\n+ return compareValues(currentValue, values.get(slot));\n+ }\n+\n+ @Override\n+ int compareCurrentWithAfter() {\n+ return compareValues(currentValue, afterValue);\n+ }\n+\n+ private int compareValues(double v1, double v2) {\n+ return Double.compare(v1, v2) * reverseMul;\n+ }\n+\n+ @Override\n+ void setAfter(Comparable<?> value) {\n+ if (value instanceof Number) {\n+ afterValue = ((Number) value).doubleValue();\n+ } else {\n+ afterValue = Double.parseDouble(value.toString());\n+ }\n+ }\n+\n+ @Override\n+ Double toComparable(int slot) {\n+ return values.get(slot);\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector next) throws IOException {\n+ final SortedNumericDoubleValues dvs = docValuesFunc.apply(context);\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ if (dvs.advanceExact(doc)) {\n+ int num = dvs.docValueCount();\n+ for (int i = 0; i < num; i++) {\n+ currentValue = dvs.nextValue();\n+ next.collect(doc, bucket);\n+ }\n+ }\n+ }\n+ };\n+ }\n+\n+ @Override\n+ LeafBucketCollector 
getLeafCollector(Comparable<?> value, LeafReaderContext context, LeafBucketCollector next) {\n+ if (value.getClass() != Double.class) {\n+ throw new IllegalArgumentException(\"Expected Double, got \" + value.getClass());\n+ }\n+ currentValue = (Double) value;\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ next.collect(doc, bucket);\n+ }\n+ };\n+ }\n+\n+ @Override\n+ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {\n+ return null;\n+ }\n+\n+ @Override\n+ public void close() {\n+ Releasables.close(values);\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/DoubleValuesSource.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,189 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.SortedSetDocValues;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.LongArray;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+\n+import static org.apache.lucene.index.SortedSetDocValues.NO_MORE_ORDS;\n+\n+/**\n+ * A {@link SingleDimensionValuesSource} for global ordinals.\n+ */\n+class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {\n+ private final CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc;\n+ private final LongArray values;\n+ private SortedSetDocValues lookup;\n+ private long currentValue;\n+ private Long afterValueGlobalOrd;\n+ private boolean isTopValueInsertionPoint;\n+\n+ private long lastLookupOrd = -1;\n+ private BytesRef lastLookupValue;\n+\n+ GlobalOrdinalValuesSource(BigArrays bigArrays,\n+ MappedFieldType type, CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc,\n+ int size, int reverseMul) {\n+ super(type, size, reverseMul);\n+ this.docValuesFunc = docValuesFunc;\n+ this.values = bigArrays.newLongArray(size, false);\n+ }\n+\n+ @Override\n+ void copyCurrent(int slot) {\n+ values.set(slot, currentValue);\n+ }\n+\n+ @Override\n+ int compare(int from, int to) {\n+ return Long.compare(values.get(from), values.get(to)) * reverseMul;\n+ }\n+\n+ @Override\n+ int compareCurrent(int slot) {\n+ return Long.compare(currentValue, values.get(slot)) * reverseMul;\n+ }\n+\n+ @Override\n+ int compareCurrentWithAfter() {\n+ int cmp = Long.compare(currentValue, afterValueGlobalOrd);\n+ if (cmp == 0 && isTopValueInsertionPoint) {\n+ // the top value is missing in this shard, the comparison is against\n+ // the insertion point of the top value so equality means that the value\n+ // is \"after\" the insertion point.\n+ return reverseMul;\n+ }\n+ return cmp * reverseMul;\n+ }\n+\n+ @Override\n+ void setAfter(Comparable<?> value) {\n+ if (value instanceof BytesRef) {\n+ afterValue = (BytesRef) value;\n+ } else if (value instanceof String) {\n+ afterValue = new BytesRef(value.toString());\n+ } else {\n+ throw new IllegalArgumentException(\"invalid value, expected string, got \" + 
value.getClass().getSimpleName());\n+ }\n+ }\n+\n+ @Override\n+ BytesRef toComparable(int slot) throws IOException {\n+ long globalOrd = values.get(slot);\n+ if (globalOrd == lastLookupOrd) {\n+ return lastLookupValue;\n+ } else {\n+ lastLookupOrd= globalOrd;\n+ lastLookupValue = BytesRef.deepCopyOf(lookup.lookupOrd(values.get(slot)));\n+ return lastLookupValue;\n+ }\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector next) throws IOException {\n+ final SortedSetDocValues dvs = docValuesFunc.apply(context);\n+ if (lookup == null) {\n+ initLookup(dvs);\n+ }\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ if (dvs.advanceExact(doc)) {\n+ long ord;\n+ while ((ord = dvs.nextOrd()) != NO_MORE_ORDS) {\n+ currentValue = ord;\n+ next.collect(doc, bucket);\n+ }\n+ }\n+ }\n+ };\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(Comparable<?> value, LeafReaderContext context, LeafBucketCollector next) throws IOException {\n+ if (value.getClass() != BytesRef.class) {\n+ throw new IllegalArgumentException(\"Expected BytesRef, got \" + value.getClass());\n+ }\n+ BytesRef term = (BytesRef) value;\n+ final SortedSetDocValues dvs = docValuesFunc.apply(context);\n+ if (lookup == null) {\n+ initLookup(dvs);\n+ }\n+ return new LeafBucketCollector() {\n+ boolean currentValueIsSet = false;\n+\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ if (!currentValueIsSet) {\n+ if (dvs.advanceExact(doc)) {\n+ long ord;\n+ while ((ord = dvs.nextOrd()) != NO_MORE_ORDS) {\n+ if (term.equals(lookup.lookupOrd(ord))) {\n+ currentValueIsSet = true;\n+ currentValue = ord;\n+ break;\n+ }\n+ }\n+ }\n+ }\n+ assert currentValueIsSet;\n+ next.collect(doc, bucket);\n+ }\n+ };\n+ }\n+\n+ @Override\n+ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {\n+ if (checkIfSortedDocsIsApplicable(reader, fieldType) == false ||\n+ (query != null && query.getClass() != MatchAllDocsQuery.class)) {\n+ return null;\n+ }\n+ return new TermsSortedDocsProducer(fieldType.name());\n+ }\n+\n+ @Override\n+ public void close() {\n+ Releasables.close(values);\n+ }\n+\n+ private void initLookup(SortedSetDocValues dvs) throws IOException {\n+ lookup = dvs;\n+ if (afterValue != null && afterValueGlobalOrd == null) {\n+ afterValueGlobalOrd = lookup.lookupTerm(afterValue);\n+ if (afterValueGlobalOrd < 0) {\n+ // convert negative insert position\n+ afterValueGlobalOrd = -afterValueGlobalOrd - 1;\n+ isTopValueInsertionPoint = true;\n+ }\n+ }\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/GlobalOrdinalValuesSource.java",
"status": "added"
},
{
"diff": "@@ -19,19 +19,17 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n-import org.elasticsearch.search.aggregations.support.FieldContext;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n import org.elasticsearch.search.internal.SearchContext;\n-import org.elasticsearch.search.sort.SortOrder;\n \n import java.io.IOException;\n import java.util.Objects;\n@@ -108,27 +106,16 @@ public HistogramValuesSourceBuilder interval(double interval) {\n }\n \n @Override\n- protected CompositeValuesSourceConfig innerBuild(SearchContext context,\n- ValuesSourceConfig<?> config,\n- int pos,\n- int numPos,\n- SortField sortField) throws IOException {\n+ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSourceConfig<?> config) throws IOException {\n ValuesSource orig = config.toValuesSource(context.getQueryShardContext());\n if (orig == null) {\n orig = ValuesSource.Numeric.EMPTY;\n }\n if (orig instanceof ValuesSource.Numeric) {\n ValuesSource.Numeric numeric = (ValuesSource.Numeric) orig;\n- HistogramValuesSource vs = new HistogramValuesSource(numeric, interval);\n- boolean canEarlyTerminate = false;\n- final FieldContext fieldContext = config.fieldContext();\n- if (sortField != null &&\n- pos == numPos-1 &&\n- fieldContext != null) {\n- canEarlyTerminate = checkCanEarlyTerminate(context.searcher().getIndexReader(),\n- fieldContext.field(), order() == SortOrder.ASC ? false : true, sortField);\n- }\n- return new CompositeValuesSourceConfig(name, vs, config.format(), order(), canEarlyTerminate);\n+ final HistogramValuesSource vs = new HistogramValuesSource(numeric, interval);\n+ final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order());\n } else {\n throw new IllegalArgumentException(\"invalid source, expected numeric, got \" + orig.getClass().getSimpleName());\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/HistogramValuesSourceBuilder.java",
"status": "modified"
},
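For context on the `HistogramValuesSourceBuilder` change above: the wrapped `HistogramValuesSource` maps each numeric value to the lower bound of its interval. The sketch below illustrates that bucketing idea only; the formula and class name are assumptions for illustration, not the exact Elasticsearch code.

```
// Rough illustration of fixed-interval histogram bucketing: each value maps to
// the lower bound of its interval. A sketch, assuming the usual
// floor(value / interval) * interval key computation.
public class HistogramBucketSketch {
    static double bucketKey(double value, double interval) {
        return Math.floor(value / interval) * interval;
    }

    public static void main(String[] args) {
        double interval = 5.0;
        for (double v : new double[] {0.4, 4.9, 5.0, 12.3, -1.2}) {
            System.out.println(v + " -> bucket " + bucketKey(v, interval));
        }
    }
}
```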
{
"diff": "@@ -0,0 +1,190 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.document.IntPoint;\n+import org.apache.lucene.document.LongPoint;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.SortedNumericDocValues;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.PointRangeQuery;\n+import org.apache.lucene.search.Query;\n+import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.LongArray;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+import java.util.function.LongUnaryOperator;\n+import java.util.function.ToLongFunction;\n+\n+/**\n+ * A {@link SingleDimensionValuesSource} for longs.\n+ */\n+class LongValuesSource extends SingleDimensionValuesSource<Long> {\n+ private final CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc;\n+ private final LongUnaryOperator rounding;\n+ // handles \"format\" for date histogram source\n+ private final DocValueFormat format;\n+\n+ private final LongArray values;\n+ private long currentValue;\n+\n+ LongValuesSource(BigArrays bigArrays, MappedFieldType fieldType,\n+ CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc,\n+ LongUnaryOperator rounding, DocValueFormat format, int size, int reverseMul) {\n+ super(fieldType, size, reverseMul);\n+ this.docValuesFunc = docValuesFunc;\n+ this.rounding = rounding;\n+ this.format = format;\n+ this.values = bigArrays.newLongArray(size, false);\n+ }\n+\n+ @Override\n+ void copyCurrent(int slot) {\n+ values.set(slot, currentValue);\n+ }\n+\n+ @Override\n+ int compare(int from, int to) {\n+ return compareValues(values.get(from), values.get(to));\n+ }\n+\n+ @Override\n+ int compareCurrent(int slot) {\n+ return compareValues(currentValue, values.get(slot));\n+ }\n+\n+ @Override\n+ int compareCurrentWithAfter() {\n+ return compareValues(currentValue, afterValue);\n+ }\n+\n+ private int compareValues(long v1, long v2) {\n+ return Long.compare(v1, v2) * reverseMul;\n+ }\n+\n+ @Override\n+ void setAfter(Comparable<?> value) {\n+ if (value instanceof Number) {\n+ afterValue = ((Number) value).longValue();\n+ } else {\n+ // for date histogram source with \"format\", the after value 
is formatted\n+ // as a string so we need to retrieve the original value in milliseconds.\n+ afterValue = format.parseLong(value.toString(), false, () -> {\n+ throw new IllegalArgumentException(\"now() is not supported in [after] key\");\n+ });\n+ }\n+ }\n+\n+ @Override\n+ Long toComparable(int slot) {\n+ return values.get(slot);\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector next) throws IOException {\n+ final SortedNumericDocValues dvs = docValuesFunc.apply(context);\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ if (dvs.advanceExact(doc)) {\n+ int num = dvs.docValueCount();\n+ for (int i = 0; i < num; i++) {\n+ currentValue = dvs.nextValue();\n+ next.collect(doc, bucket);\n+ }\n+ }\n+ }\n+ };\n+ }\n+\n+ @Override\n+ LeafBucketCollector getLeafCollector(Comparable<?> value, LeafReaderContext context, LeafBucketCollector next) {\n+ if (value.getClass() != Long.class) {\n+ throw new IllegalArgumentException(\"Expected Long, got \" + value.getClass());\n+ }\n+ currentValue = (Long) value;\n+ return new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ next.collect(doc, bucket);\n+ }\n+ };\n+ }\n+\n+ @Override\n+ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {\n+ if (checkIfSortedDocsIsApplicable(reader, fieldType) == false ||\n+ (query != null &&\n+ query.getClass() != MatchAllDocsQuery.class &&\n+ // if the query is a range query over the same field\n+ (query instanceof PointRangeQuery && fieldType.name().equals((((PointRangeQuery) query).getField()))) == false)) {\n+ return null;\n+ }\n+ final byte[] lowerPoint;\n+ final byte[] upperPoint;\n+ if (query instanceof PointRangeQuery) {\n+ final PointRangeQuery rangeQuery = (PointRangeQuery) query;\n+ lowerPoint = rangeQuery.getLowerPoint();\n+ upperPoint = rangeQuery.getUpperPoint();\n+ } else {\n+ lowerPoint = null;\n+ upperPoint = null;\n+ }\n+\n+ if (fieldType instanceof NumberFieldMapper.NumberFieldType) {\n+ NumberFieldMapper.NumberFieldType ft = (NumberFieldMapper.NumberFieldType) fieldType;\n+ final ToLongFunction<byte[]> toBucketFunction;\n+\n+ switch (ft.typeName()) {\n+ case \"long\":\n+ toBucketFunction = (value) -> rounding.applyAsLong(LongPoint.decodeDimension(value, 0));\n+ break;\n+\n+ case \"int\":\n+ case \"short\":\n+ case \"byte\":\n+ toBucketFunction = (value) -> rounding.applyAsLong(IntPoint.decodeDimension(value, 0));\n+ break;\n+\n+ default:\n+ return null;\n+ }\n+ return new PointsSortedDocsProducer(fieldType.name(), toBucketFunction, lowerPoint, upperPoint);\n+ } else if (fieldType instanceof DateFieldMapper.DateFieldType) {\n+ final ToLongFunction<byte[]> toBucketFunction = (value) -> rounding.applyAsLong(LongPoint.decodeDimension(value, 0));\n+ return new PointsSortedDocsProducer(fieldType.name(), toBucketFunction, lowerPoint, upperPoint);\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ @Override\n+ public void close() {\n+ Releasables.close(values);\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/LongValuesSource.java",
"status": "added"
},
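The `LongValuesSource` diff above (and the other single-dimension sources) compare values through a single comparator that is multiplied by `reverseMul`, so the same code path serves both ascending and descending order. A tiny standalone sketch of that pattern, with illustrative names:

```
import java.util.Arrays;

// Sketch of the reverseMul trick used by the values sources: one comparator
// handles both sort orders by multiplying the natural comparison by +1 (ASC)
// or -1 (DESC).
public class ReverseMulSketch {
    static int compareValues(long v1, long v2, int reverseMul) {
        return Long.compare(v1, v2) * reverseMul;
    }

    public static void main(String[] args) {
        Long[] values = {3L, 1L, 2L};
        Arrays.sort(values, (a, b) -> compareValues(a, b, -1)); // DESC
        System.out.println(Arrays.toString(values)); // [3, 2, 1]
    }
}
```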
{
"diff": "@@ -0,0 +1,181 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.PointValues;\n+import org.apache.lucene.search.CollectionTerminatedException;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.DocIdSetBuilder;\n+import org.apache.lucene.util.StringHelper;\n+\n+import java.io.IOException;\n+import java.util.function.ToLongFunction;\n+\n+/**\n+ * A {@link SortedDocsProducer} that can sort documents based on numerics indexed in the provided field.\n+ */\n+class PointsSortedDocsProducer extends SortedDocsProducer {\n+ private final ToLongFunction<byte[]> bucketFunction;\n+ private final byte[] lowerPointQuery;\n+ private final byte[] upperPointQuery;\n+\n+ PointsSortedDocsProducer(String field, ToLongFunction<byte[]> bucketFunction, byte[] lowerPointQuery, byte[] upperPointQuery) {\n+ super(field);\n+ this.bucketFunction = bucketFunction;\n+ this.lowerPointQuery = lowerPointQuery;\n+ this.upperPointQuery = upperPointQuery;\n+ }\n+\n+ @Override\n+ DocIdSet processLeaf(Query query, CompositeValuesCollectorQueue queue,\n+ LeafReaderContext context, boolean fillDocIdSet) throws IOException {\n+ final PointValues values = context.reader().getPointValues(field);\n+ if (values == null) {\n+ // no value for the field\n+ return DocIdSet.EMPTY;\n+ }\n+ long lowerBucket = Long.MIN_VALUE;\n+ Comparable<?> lowerValue = queue.getLowerValueLeadSource();\n+ if (lowerValue != null) {\n+ if (lowerValue.getClass() != Long.class) {\n+ throw new IllegalStateException(\"expected Long, got \" + lowerValue.getClass());\n+ }\n+ lowerBucket = (Long) lowerValue;\n+ }\n+\n+ long upperBucket = Long.MAX_VALUE;\n+ Comparable<?> upperValue = queue.getUpperValueLeadSource();\n+ if (upperValue != null) {\n+ if (upperValue.getClass() != Long.class) {\n+ throw new IllegalStateException(\"expected Long, got \" + upperValue.getClass());\n+ }\n+ upperBucket = (Long) upperValue;\n+ }\n+ DocIdSetBuilder builder = fillDocIdSet ? new DocIdSetBuilder(context.reader().maxDoc(), values, field) : null;\n+ Visitor visitor = new Visitor(context, queue, builder, values.getBytesPerDimension(), lowerBucket, upperBucket);\n+ try {\n+ values.intersect(visitor);\n+ visitor.flush();\n+ } catch (CollectionTerminatedException exc) {}\n+ return fillDocIdSet ? 
builder.build() : DocIdSet.EMPTY;\n+ }\n+\n+ private class Visitor implements PointValues.IntersectVisitor {\n+ final LeafReaderContext context;\n+ final CompositeValuesCollectorQueue queue;\n+ final DocIdSetBuilder builder;\n+ final int maxDoc;\n+ final int bytesPerDim;\n+ final long lowerBucket;\n+ final long upperBucket;\n+\n+ DocIdSetBuilder bucketDocsBuilder;\n+ DocIdSetBuilder.BulkAdder adder;\n+ int remaining;\n+ long lastBucket;\n+ boolean first = true;\n+\n+ Visitor(LeafReaderContext context, CompositeValuesCollectorQueue queue, DocIdSetBuilder builder,\n+ int bytesPerDim, long lowerBucket, long upperBucket) {\n+ this.context = context;\n+ this.maxDoc = context.reader().maxDoc();\n+ this.queue = queue;\n+ this.builder = builder;\n+ this.lowerBucket = lowerBucket;\n+ this.upperBucket = upperBucket;\n+ this.bucketDocsBuilder = new DocIdSetBuilder(maxDoc);\n+ this.bytesPerDim = bytesPerDim;\n+ }\n+\n+ @Override\n+ public void grow(int count) {\n+ remaining = count;\n+ adder = bucketDocsBuilder.grow(count);\n+ }\n+\n+ @Override\n+ public void visit(int docID) throws IOException {\n+ throw new IllegalStateException(\"should never be called\");\n+ }\n+\n+ @Override\n+ public void visit(int docID, byte[] packedValue) throws IOException {\n+ if (compare(packedValue, packedValue) != PointValues.Relation.CELL_CROSSES_QUERY) {\n+ remaining --;\n+ return;\n+ }\n+\n+ long bucket = bucketFunction.applyAsLong(packedValue);\n+ if (first == false && bucket != lastBucket) {\n+ final DocIdSet docIdSet = bucketDocsBuilder.build();\n+ if (processBucket(queue, context, docIdSet.iterator(), lastBucket, builder) &&\n+ // lower bucket is inclusive\n+ lowerBucket != lastBucket) {\n+ // this bucket does not have any competitive composite buckets,\n+ // we can early terminate the collection because the remaining buckets are guaranteed\n+ // to be greater than this bucket.\n+ throw new CollectionTerminatedException();\n+ }\n+ bucketDocsBuilder = new DocIdSetBuilder(maxDoc);\n+ assert remaining > 0;\n+ adder = bucketDocsBuilder.grow(remaining);\n+ }\n+ lastBucket = bucket;\n+ first = false;\n+ adder.add(docID);\n+ remaining --;\n+ }\n+\n+ @Override\n+ public PointValues.Relation compare(byte[] minPackedValue, byte[] maxPackedValue) {\n+ if ((upperPointQuery != null && StringHelper.compare(bytesPerDim, minPackedValue, 0, upperPointQuery, 0) > 0) ||\n+ (lowerPointQuery != null && StringHelper.compare(bytesPerDim, maxPackedValue, 0, lowerPointQuery, 0) < 0)) {\n+ // does not match the query\n+ return PointValues.Relation.CELL_OUTSIDE_QUERY;\n+ }\n+\n+ // check the current bounds\n+ if (lowerBucket != Long.MIN_VALUE) {\n+ long maxBucket = bucketFunction.applyAsLong(maxPackedValue);\n+ if (maxBucket < lowerBucket) {\n+ return PointValues.Relation.CELL_OUTSIDE_QUERY;\n+ }\n+ }\n+\n+ if (upperBucket != Long.MAX_VALUE) {\n+ long minBucket = bucketFunction.applyAsLong(minPackedValue);\n+ if (minBucket > upperBucket) {\n+ return PointValues.Relation.CELL_OUTSIDE_QUERY;\n+ }\n+ }\n+ return PointValues.Relation.CELL_CROSSES_QUERY;\n+ }\n+\n+ public void flush() throws IOException {\n+ if (first == false) {\n+ final DocIdSet docIdSet = bucketDocsBuilder.build();\n+ processBucket(queue, context, docIdSet.iterator(), lastBucket, builder);\n+ bucketDocsBuilder = null;\n+ }\n+ }\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/PointsSortedDocsProducer.java",
"status": "added"
},
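The `PointsSortedDocsProducer` above visits bucket keys in sorted order and throws `CollectionTerminatedException` once the queue is full and a bucket produces nothing competitive, since every remaining key can only be larger. The sketch below models that early-termination idea in plain Java; the `TreeSet` stands in for `CompositeValuesCollectorQueue` and the loop ignores details such as the inclusive lower bucket and the partially consumed first bucket.

```
import java.util.TreeSet;

// Simplified model of the early-termination behind the sorted docs producers:
// candidate bucket keys arrive in ascending order, the queue keeps only the
// `size` smallest keys, and once the queue is full a non-competitive bucket
// ends the collection because all remaining keys sort after it.
public class EarlyTerminationSketch {
    public static void main(String[] args) {
        int size = 3;
        long[] sortedBucketKeys = {1, 2, 3, 4, 5, 6};
        TreeSet<Long> queue = new TreeSet<>();
        for (long key : sortedBucketKeys) {
            boolean competitive = queue.size() < size || key < queue.last();
            if (competitive) {
                queue.add(key);
                if (queue.size() > size) {
                    queue.pollLast(); // evict the current worst key
                }
            } else if (queue.size() == size) {
                System.out.println("early terminate at key " + key);
                break;
            }
        }
        System.out.println("top keys: " + queue);
    }
}
```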
{
"diff": "@@ -51,13 +51,17 @@ public boolean isFloatingPoint() {\n return false;\n }\n \n+ public long round(long value) {\n+ return rounding.round(value);\n+ }\n+\n @Override\n public SortedNumericDocValues longValues(LeafReaderContext context) throws IOException {\n SortedNumericDocValues values = vs.longValues(context);\n return new SortedNumericDocValues() {\n @Override\n public long nextValue() throws IOException {\n- return rounding.round(values.nextValue());\n+ return round(values.nextValue());\n }\n \n @Override",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/RoundingValuesSource.java",
"status": "modified"
},
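The small `RoundingValuesSource` change above just exposes the rounding step as `round(long)`. As a rough illustration of what such a rounding does for a fixed interval (Elasticsearch's `Rounding` additionally handles time zones and calendar units), here is a hedged sketch; the method and class names are made up for the example.

```
import java.time.Instant;

// Illustrative fixed-interval rounding: map a timestamp down to the start of
// its interval. A sketch only; not the actual Rounding implementation.
public class RoundingSketch {
    static long round(long utcMillis, long intervalMillis) {
        return utcMillis - Math.floorMod(utcMillis, intervalMillis);
    }

    public static void main(String[] args) {
        long day = 24L * 60 * 60 * 1000;
        long ts = Instant.parse("2017-10-20T06:09:24Z").toEpochMilli();
        System.out.println(Instant.ofEpochMilli(round(ts, day))); // 2017-10-20T00:00:00Z
    }
}
```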
{
"diff": "@@ -0,0 +1,143 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.Query;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+import org.elasticsearch.search.sort.SortOrder;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A source that can record and compare values of similar type.\n+ */\n+abstract class SingleDimensionValuesSource<T extends Comparable<T>> implements Releasable {\n+ protected final int size;\n+ protected final int reverseMul;\n+ protected T afterValue;\n+ @Nullable\n+ protected MappedFieldType fieldType;\n+\n+ /**\n+ * Ctr\n+ *\n+ * @param fieldType The fieldType associated with the source.\n+ * @param size The number of values to record.\n+ * @param reverseMul -1 if the natural order ({@link SortOrder#ASC} should be reversed.\n+ */\n+ SingleDimensionValuesSource(@Nullable MappedFieldType fieldType, int size, int reverseMul) {\n+ this.fieldType = fieldType;\n+ this.size = size;\n+ this.reverseMul = reverseMul;\n+ this.afterValue = null;\n+ }\n+\n+ /**\n+ * The current value is filled by a {@link LeafBucketCollector} that visits all the\n+ * values of each document. This method saves this current value in a slot and should only be used\n+ * in the context of a collection.\n+ * See {@link this#getLeafCollector}.\n+ */\n+ abstract void copyCurrent(int slot);\n+\n+ /**\n+ * Compares the value in <code>from</code> with the value in <code>to</code>.\n+ */\n+ abstract int compare(int from, int to);\n+\n+ /**\n+ * The current value is filled by a {@link LeafBucketCollector} that visits all the\n+ * values of each document. This method compares this current value with the value present in\n+ * the provided slot and should only be used in the context of a collection.\n+ * See {@link this#getLeafCollector}.\n+ */\n+ abstract int compareCurrent(int slot);\n+\n+ /**\n+ * The current value is filled by a {@link LeafBucketCollector} that visits all the\n+ * values of each document. This method compares this current value with the after value\n+ * set on this source and should only be used in the context of a collection.\n+ * See {@link this#getLeafCollector}.\n+ */\n+ abstract int compareCurrentWithAfter();\n+\n+ /**\n+ * Sets the after value for this source. 
Values that compares smaller are filtered.\n+ */\n+ abstract void setAfter(Comparable<?> value);\n+\n+ /**\n+ * Returns the after value set for this source.\n+ */\n+ T getAfter() {\n+ return afterValue;\n+ }\n+\n+ /**\n+ * Transforms the value in <code>slot</code> to a {@link Comparable} object.\n+ */\n+ abstract T toComparable(int slot) throws IOException;\n+\n+ /**\n+ * Creates a {@link LeafBucketCollector} that extracts all values from a document and invokes\n+ * {@link LeafBucketCollector#collect} on the provided <code>next</code> collector for each of them.\n+ * The current value of this source is set on each call and can be accessed by <code>next</code> via\n+ * the {@link this#copyCurrent(int)} and {@link this#compareCurrent(int)} methods. Note that these methods\n+ * are only valid when invoked from the {@link LeafBucketCollector} created in this source.\n+ */\n+ abstract LeafBucketCollector getLeafCollector(LeafReaderContext context, LeafBucketCollector next) throws IOException;\n+\n+ /**\n+ * Creates a {@link LeafBucketCollector} that sets the current value for each document to the provided\n+ * <code>value</code> and invokes {@link LeafBucketCollector#collect} on the provided <code>next</code> collector.\n+ */\n+ abstract LeafBucketCollector getLeafCollector(Comparable<?> value,\n+ LeafReaderContext context, LeafBucketCollector next) throws IOException;\n+\n+ /**\n+ * Returns a {@link SortedDocsProducer} or null if this source cannot produce sorted docs.\n+ */\n+ abstract SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query);\n+\n+ /**\n+ * Returns true if a {@link SortedDocsProducer} should be used to optimize the execution.\n+ */\n+ protected boolean checkIfSortedDocsIsApplicable(IndexReader reader, MappedFieldType fieldType) {\n+ if (fieldType == null ||\n+ fieldType.indexOptions() == IndexOptions.NONE ||\n+ // inverse of the natural order\n+ reverseMul == -1) {\n+ return false;\n+ }\n+\n+ if (reader.hasDeletions() &&\n+ (reader.numDocs() == 0 || (double) reader.numDocs() / (double) reader.maxDoc() < 0.5)) {\n+ // do not use the index if it has more than 50% of deleted docs\n+ return false;\n+ }\n+ return true;\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/SingleDimensionValuesSource.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,108 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.Bits;\n+import org.apache.lucene.util.DocIdSetBuilder;\n+import org.elasticsearch.common.inject.internal.Nullable;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A producer that visits composite buckets in the order of the value indexed in the leading source of the composite\n+ * definition. It can be used to control which documents should be collected to produce the top composite buckets\n+ * without visiting all documents in an index.\n+ */\n+abstract class SortedDocsProducer {\n+ protected final String field;\n+\n+ SortedDocsProducer(String field) {\n+ this.field = field;\n+ }\n+\n+ /**\n+ * Visits all non-deleted documents in <code>iterator</code> and fills the provided <code>queue</code>\n+ * with the top composite buckets extracted from the collection.\n+ * Documents that contain a top composite bucket are added in the provided <code>builder</code> if it is not null.\n+ *\n+ * Returns true if the queue is full and the current <code>leadSourceBucket</code> did not produce any competitive\n+ * composite buckets.\n+ */\n+ protected boolean processBucket(CompositeValuesCollectorQueue queue, LeafReaderContext context, DocIdSetIterator iterator,\n+ Comparable<?> leadSourceBucket, @Nullable DocIdSetBuilder builder) throws IOException {\n+ final int[] topCompositeCollected = new int[1];\n+ final boolean[] hasCollected = new boolean[1];\n+ final LeafBucketCollector queueCollector = new LeafBucketCollector() {\n+ int lastDoc = -1;\n+\n+ // we need to add the matching document in the builder\n+ // so we build a bulk adder from the approximate cost of the iterator\n+ // and rebuild the adder during the collection if needed\n+ int remainingBits = (int) Math.min(iterator.cost(), Integer.MAX_VALUE);\n+ DocIdSetBuilder.BulkAdder adder = builder == null ? 
null : builder.grow(remainingBits);\n+\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ hasCollected[0] = true;\n+ int slot = queue.addIfCompetitive();\n+ if (slot != -1) {\n+ topCompositeCollected[0]++;\n+ if (adder != null && doc != lastDoc) {\n+ if (remainingBits == 0) {\n+ // the cost approximation was lower than the real size, we need to grow the adder\n+ // by some numbers (128) to ensure that we can add the extra documents\n+ adder = builder.grow(128);\n+ remainingBits = 128;\n+ }\n+ adder.add(doc);\n+ remainingBits --;\n+ lastDoc = doc;\n+ }\n+ }\n+ }\n+ };\n+ final Bits liveDocs = context.reader().getLiveDocs();\n+ final LeafBucketCollector collector = queue.getLeafCollector(leadSourceBucket, context, queueCollector);\n+ while (iterator.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ if (liveDocs == null || liveDocs.get(iterator.docID())) {\n+ collector.collect(iterator.docID());\n+ }\n+ }\n+ if (queue.isFull() &&\n+ hasCollected[0] &&\n+ topCompositeCollected[0] == 0) {\n+ return true;\n+ }\n+ return false;\n+ }\n+\n+ /**\n+ * Populates the queue with the composite buckets present in the <code>context</code>.\n+ * Returns the {@link DocIdSet} of the documents that contain a top composite bucket in this leaf or\n+ * {@link DocIdSet#EMPTY} if <code>fillDocIdSet</code> is false.\n+ */\n+ abstract DocIdSet processLeaf(Query query, CompositeValuesCollectorQueue queue,\n+ LeafReaderContext context, boolean fillDocIdSet) throws IOException;\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/SortedDocsProducer.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,79 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.PostingsEnum;\n+import org.apache.lucene.index.Terms;\n+import org.apache.lucene.index.TermsEnum;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.DocIdSetBuilder;\n+\n+import java.io.IOException;\n+\n+/**\n+ * A {@link SortedDocsProducer} that can sort documents based on terms indexed in the provided field.\n+ */\n+class TermsSortedDocsProducer extends SortedDocsProducer {\n+ TermsSortedDocsProducer(String field) {\n+ super(field);\n+ }\n+\n+ @Override\n+ DocIdSet processLeaf(Query query, CompositeValuesCollectorQueue queue,\n+ LeafReaderContext context, boolean fillDocIdSet) throws IOException {\n+ final Terms terms = context.reader().terms(field);\n+ if (terms == null) {\n+ // no value for the field\n+ return DocIdSet.EMPTY;\n+ }\n+ BytesRef lowerValue = (BytesRef) queue.getLowerValueLeadSource();\n+ BytesRef upperValue = (BytesRef) queue.getUpperValueLeadSource();\n+ final TermsEnum te = terms.iterator();\n+ if (lowerValue != null) {\n+ if (te.seekCeil(lowerValue) == TermsEnum.SeekStatus.END) {\n+ return DocIdSet.EMPTY ;\n+ }\n+ } else {\n+ if (te.next() == null) {\n+ return DocIdSet.EMPTY;\n+ }\n+ }\n+ DocIdSetBuilder builder = fillDocIdSet ? new DocIdSetBuilder(context.reader().maxDoc(), terms) : null;\n+ PostingsEnum reuse = null;\n+ boolean first = true;\n+ do {\n+ if (upperValue != null && upperValue.compareTo(te.term()) < 0) {\n+ break;\n+ }\n+ reuse = te.postings(reuse, PostingsEnum.NONE);\n+ if (processBucket(queue, context, reuse, te.term(), builder) && !first) {\n+ // this bucket does not have any competitive composite buckets,\n+ // we can early terminate the collection because the remaining buckets are guaranteed\n+ // to be greater than this bucket.\n+ break;\n+ }\n+ first = false;\n+ } while (te.next() != null);\n+ return fillDocIdSet ? builder.build() : DocIdSet.EMPTY;\n+ }\n+}",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/TermsSortedDocsProducer.java",
"status": "added"
},
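The `TermsSortedDocsProducer` above walks the terms dictionary in sorted order: it seeks to the lower bound of the lead source, stops once the upper bound is passed, and early-terminates when a bucket is no longer competitive (never on the first term, which may be only partially covered). The snippet below is a plain-Java model of that walk using a `TreeMap` in place of Lucene's `TermsEnum`; all names and the toy `processBucket` are illustrative.

```
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Plain-Java model of TermsSortedDocsProducer#processLeaf: iterate terms in
// sorted order from the lower bound, stop past the upper bound, and break out
// early when a bucket adds nothing competitive (but not on the first term).
public class TermsWalkSketch {
    public static void main(String[] args) {
        NavigableMap<String, List<Integer>> terms = new TreeMap<>();
        terms.put("apple", List.of(1, 4));
        terms.put("kiwi", List.of(2));
        terms.put("pear", List.of(3, 5));

        String lower = "a"; // stand-in for queue.getLowerValueLeadSource()
        String upper = "l"; // stand-in for queue.getUpperValueLeadSource()

        boolean first = true;
        for (Map.Entry<String, List<Integer>> entry : terms.tailMap(lower, true).entrySet()) { // seekCeil(lower)
            if (upper != null && upper.compareTo(entry.getKey()) < 0) {
                break; // past the upper bound of the lead source
            }
            boolean noCompetitiveBucket = processBucket(entry.getKey(), entry.getValue());
            if (noCompetitiveBucket && !first) {
                break; // remaining terms are guaranteed to sort after this one
            }
            first = false;
        }
    }

    static boolean processBucket(String term, List<Integer> docs) {
        System.out.println(term + " -> " + docs);
        return false; // pretend every bucket is still competitive
    }
}
```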
{
"diff": "@@ -19,18 +19,16 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.search.SortField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.search.aggregations.support.FieldContext;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.script.Script;\n-import org.elasticsearch.search.sort.SortOrder;\n \n import java.io.IOException;\n \n@@ -80,21 +78,12 @@ public String type() {\n }\n \n @Override\n- protected CompositeValuesSourceConfig innerBuild(SearchContext context,\n- ValuesSourceConfig<?> config,\n- int pos,\n- int numPos,\n- SortField sortField) throws IOException {\n+ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSourceConfig<?> config) throws IOException {\n ValuesSource vs = config.toValuesSource(context.getQueryShardContext());\n if (vs == null) {\n vs = ValuesSource.Numeric.EMPTY;\n }\n- boolean canEarlyTerminate = false;\n- final FieldContext fieldContext = config.fieldContext();\n- if (sortField != null && config.fieldContext() != null) {\n- canEarlyTerminate = checkCanEarlyTerminate(context.searcher().getIndexReader(),\n- fieldContext.field(), order() == SortOrder.ASC ? false : true, sortField);\n- }\n- return new CompositeValuesSourceConfig(name, vs, config.format(), order(), canEarlyTerminate);\n+ final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order());\n }\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/TermsValuesSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,42 +19,44 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.analysis.MockAnalyzer;\n import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.DoublePoint;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.IntPoint;\n+import org.apache.lucene.document.LongPoint;\n import org.apache.lucene.document.SortedNumericDocValuesField;\n import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.IndexWriterConfig;\n+import org.apache.lucene.index.MultiReader;\n import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.DocValuesFieldExistsQuery;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Sort;\n-import org.apache.lucene.search.SortField;\n-import org.apache.lucene.search.SortedNumericSortField;\n-import org.apache.lucene.search.SortedSetSortField;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.NumericUtils;\n-import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.mapper.ContentPath;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.max.InternalMax;\n+import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;\n import org.elasticsearch.search.aggregations.metrics.tophits.TopHits;\n import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.sort.SortOrder;\n-import org.elasticsearch.test.IndexSettingsModule;\n import org.joda.time.DateTimeZone;\n import org.junit.After;\n import org.junit.Before;\n@@ -64,12 +66,18 @@\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n+import java.util.Set;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicLong;\n import java.util.function.Consumer;\n+import java.util.function.Function;\n import java.util.function.Supplier;\n \n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n \n public 
class CompositeAggregatorTests extends AggregatorTestCase {\n@@ -79,7 +87,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {\n @Before\n public void setUp() throws Exception {\n super.setUp();\n- FIELD_TYPES = new MappedFieldType[5];\n+ FIELD_TYPES = new MappedFieldType[6];\n FIELD_TYPES[0] = new KeywordFieldMapper.KeywordFieldType();\n FIELD_TYPES[0].setName(\"keyword\");\n FIELD_TYPES[0].setHasDocValues(true);\n@@ -101,6 +109,10 @@ public void setUp() throws Exception {\n FIELD_TYPES[4] = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.INTEGER);\n FIELD_TYPES[4].setName(\"price\");\n FIELD_TYPES[4].setHasDocValues(true);\n+\n+ FIELD_TYPES[5] = new KeywordFieldMapper.KeywordFieldType();\n+ FIELD_TYPES[5].setName(\"terms\");\n+ FIELD_TYPES[5].setHasDocValues(true);\n }\n \n @Override\n@@ -110,6 +122,19 @@ public void tearDown() throws Exception {\n FIELD_TYPES = null;\n }\n \n+ public void testUnmappedField() throws Exception {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(randomAlphaOfLengthBetween(5, 10))\n+ .field(\"unknown\");\n+ CompositeAggregationBuilder builder = new CompositeAggregationBuilder(\"test\", Collections.singletonList(terms));\n+ IndexSearcher searcher = new IndexSearcher(new MultiReader());\n+ QueryShardException exc =\n+ expectThrows(QueryShardException.class, () -> createAggregatorFactory(builder, searcher));\n+ assertThat(exc.getMessage(), containsString(\"failed to find field [unknown] and [missing] is not provided\"));\n+ // should work when missing is provided\n+ terms.missing(\"missing\");\n+ createAggregatorFactory(builder, searcher);\n+ }\n+\n public void testWithKeyword() throws Exception {\n final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n dataset.addAll(\n@@ -121,8 +146,7 @@ public void testWithKeyword() throws Exception {\n createDocument(\"keyword\", \"c\")\n )\n );\n- final Sort sort = new Sort(new SortedSetSortField(\"keyword\", false));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -139,7 +163,7 @@ public void testWithKeyword() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -168,8 +192,7 @@ public void testWithKeywordMissingAfter() throws Exception {\n createDocument(\"keyword\", \"delta\")\n )\n );\n- final Sort sort = new Sort(new SortedSetSortField(\"keyword\", false));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -188,7 +211,7 @@ public void testWithKeywordMissingAfter() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -206,7 +229,7 @@ public void testWithKeywordMissingAfter() throws 
Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\").order(SortOrder.DESC);\n@@ -236,8 +259,7 @@ public void testWithKeywordDesc() throws Exception {\n createDocument(\"keyword\", \"c\")\n )\n );\n- final Sort sort = new Sort(new SortedSetSortField(\"keyword\", true));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\")\n@@ -255,7 +277,7 @@ public void testWithKeywordDesc() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\")\n@@ -285,7 +307,7 @@ public void testMultiValuedWithKeyword() throws Exception {\n )\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -307,7 +329,7 @@ public void testMultiValuedWithKeyword() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -339,7 +361,7 @@ public void testMultiValuedWithKeywordDesc() throws Exception {\n )\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\")\n@@ -362,7 +384,7 @@ public void testMultiValuedWithKeywordDesc() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\")\n@@ -394,11 +416,7 @@ public void testWithKeywordAndLong() throws Exception {\n createDocument(\"long\", 100L)\n )\n );\n- final Sort sort = new Sort(\n- new SortedSetSortField(\"keyword\", false),\n- new SortedNumericSortField(\"long\", SortField.Type.LONG)\n- );\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n new TermsValuesSourceBuilder(\"keyword\").field(\"keyword\"),\n@@ -419,7 +437,7 @@ public void testWithKeywordAndLong() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n new 
TermsValuesSourceBuilder(\"keyword\").field(\"keyword\"),\n@@ -451,11 +469,7 @@ public void testWithKeywordAndLongDesc() throws Exception {\n createDocument(\"long\", 100L)\n )\n );\n- final Sort sort = new Sort(\n- new SortedSetSortField(\"keyword\", true),\n- new SortedNumericSortField(\"long\", SortField.Type.LONG, true)\n- );\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -477,7 +491,7 @@ public void testWithKeywordAndLongDesc() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -510,7 +524,7 @@ public void testMultiValuedWithKeywordAndLong() throws Exception {\n )\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -543,7 +557,7 @@ public void testMultiValuedWithKeywordAndLong() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -580,11 +594,10 @@ public void testMultiValuedWithKeywordAndLongDesc() throws Exception {\n createDocument(\"keyword\", Arrays.asList(\"d\", \"d\"), \"long\", Arrays.asList(10L, 100L, 1000L)),\n createDocument(\"keyword\", \"c\"),\n createDocument(\"long\", 100L)\n-\n )\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -619,7 +632,7 @@ public void testMultiValuedWithKeywordAndLongDesc() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -653,7 +666,7 @@ public void testMultiValuedWithKeywordLongAndDouble() throws Exception {\n )\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -688,7 +701,7 @@ public void testMultiValuedWithKeywordLongAndDouble() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -723,7 +736,7 @@ public void testMultiValuedWithKeywordLongAndDouble() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -751,8 +764,12 @@ public void testWithDateHistogram() throws IOException {\n 
createDocument(\"long\", 4L)\n )\n );\n- final Sort sort = new Sort(new SortedNumericSortField(\"date\", SortField.Type.LONG));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\"),\n+ LongPoint.newRangeQuery(\n+ \"date\",\n+ asLong(\"2016-09-20T09:00:34\"),\n+ asLong(\"2017-10-20T06:09:24\")\n+ )), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -771,7 +788,12 @@ public void testWithDateHistogram() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\"),\n+ LongPoint.newRangeQuery(\n+ \"date\",\n+ asLong(\"2016-09-20T11:34:00\"),\n+ asLong(\"2017-10-20T06:09:24\")\n+ )), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -802,8 +824,7 @@ public void testWithDateHistogramAndFormat() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- final Sort sort = new Sort(new SortedNumericSortField(\"date\", SortField.Type.LONG));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -823,7 +844,7 @@ public void testWithDateHistogramAndFormat() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -845,7 +866,7 @@ public void testWithDateHistogramAndFormat() throws IOException {\n \n public void testThatDateHistogramFailsFormatAfter() throws IOException {\n ElasticsearchParseException exc = expectThrows(ElasticsearchParseException.class,\n- () -> testSearchCase(new MatchAllDocsQuery(), null, Collections.emptyList(),\n+ () -> testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), Collections.emptyList(),\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -860,7 +881,7 @@ public void testThatDateHistogramFailsFormatAfter() throws IOException {\n assertThat(exc.getCause().getMessage(), containsString(\"now() is not supported in [after] key\"));\n \n exc = expectThrows(ElasticsearchParseException.class,\n- () -> testSearchCase(new MatchAllDocsQuery(), null, Collections.emptyList(),\n+ () -> testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), Collections.emptyList(),\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -887,8 +908,7 @@ public void testWithDateHistogramAndTimeZone() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- final Sort sort = new Sort(new SortedNumericSortField(\"date\", SortField.Type.LONG));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new 
DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -908,7 +928,7 @@ public void testWithDateHistogramAndTimeZone() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\")), dataset,\n () -> {\n DateHistogramValuesSourceBuilder histo = new DateHistogramValuesSourceBuilder(\"date\")\n .field(\"date\")\n@@ -940,7 +960,12 @@ public void testWithDateHistogramAndKeyword() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\"),\n+ LongPoint.newRangeQuery(\n+ \"date\",\n+ asLong(\"2016-09-20T09:00:34\"),\n+ asLong(\"2017-10-20T06:09:24\")\n+ )), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -971,7 +996,12 @@ public void testWithDateHistogramAndKeyword() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"date\"),\n+ LongPoint.newRangeQuery(\n+ \"date\",\n+ asLong(\"2016-09-20T11:34:00\"),\n+ asLong(\"2017-10-20T06:09:24\")\n+ )), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1007,7 +1037,7 @@ public void testWithKeywordAndHistogram() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"price\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1035,7 +1065,7 @@ public void testWithKeywordAndHistogram() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"price\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1075,7 +1105,7 @@ public void testWithHistogramAndKeyword() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"double\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1105,7 +1135,7 @@ public void testWithHistogramAndKeyword() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"double\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1138,7 +1168,7 @@ public void testWithKeywordAndDateHistogram() throws IOException {\n createDocument(\"long\", 4L)\n )\n );\n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1167,7 +1197,7 @@ public void testWithKeywordAndDateHistogram() throws IOException {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), null, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () ->\n new CompositeAggregationBuilder(\"name\",\n Arrays.asList(\n@@ -1202,8 +1232,7 @@ public void testWithKeywordAndTopHits() 
throws Exception {\n createDocument(\"keyword\", \"c\")\n )\n );\n- final Sort sort = new Sort(new SortedSetSortField(\"keyword\", false));\n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -1232,7 +1261,7 @@ public void testWithKeywordAndTopHits() throws Exception {\n }\n );\n \n- testSearchCase(new MatchAllDocsQuery(), sort, dataset,\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n () -> {\n TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n .field(\"keyword\");\n@@ -1257,36 +1286,174 @@ public void testWithKeywordAndTopHits() throws Exception {\n );\n }\n \n- private void testSearchCase(Query query, Sort sort,\n+ public void testWithTermsSubAggExecutionMode() throws Exception {\n+ // test with no bucket\n+ for (Aggregator.SubAggCollectionMode mode : Aggregator.SubAggCollectionMode.values()) {\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")),\n+ Collections.singletonList(createDocument()),\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\");\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms))\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"terms\", ValueType.STRING)\n+ .field(\"terms\")\n+ .collectMode(mode)\n+ .subAggregation(new MaxAggregationBuilder(\"max\").field(\"long\"))\n+ );\n+ }, (result) -> {\n+ assertEquals(0, result.getBuckets().size());\n+ }\n+ );\n+ }\n+\n+ final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n+ dataset.addAll(\n+ Arrays.asList(\n+ createDocument(\"keyword\", \"a\", \"terms\", \"a\", \"long\", 50L),\n+ createDocument(\"keyword\", \"c\", \"terms\", \"d\", \"long\", 78L),\n+ createDocument(\"keyword\", \"a\", \"terms\", \"w\", \"long\", 78L),\n+ createDocument(\"keyword\", \"d\", \"terms\", \"y\", \"long\", 76L),\n+ createDocument(\"keyword\", \"c\", \"terms\", \"y\", \"long\", 70L)\n+ )\n+ );\n+ for (Aggregator.SubAggCollectionMode mode : Aggregator.SubAggCollectionMode.values()) {\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\");\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms))\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"terms\", ValueType.STRING)\n+ .field(\"terms\")\n+ .collectMode(mode)\n+ .subAggregation(new MaxAggregationBuilder(\"max\").field(\"long\"))\n+ );\n+ }, (result) -> {\n+ assertEquals(3, result.getBuckets().size());\n+\n+ assertEquals(\"{keyword=a}\", result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(0).getDocCount());\n+ StringTerms subTerms = result.getBuckets().get(0).getAggregations().get(\"terms\");\n+ assertEquals(2, subTerms.getBuckets().size());\n+ assertEquals(\"a\", subTerms.getBuckets().get(0).getKeyAsString());\n+ assertEquals(\"w\", subTerms.getBuckets().get(1).getKeyAsString());\n+ InternalMax max = subTerms.getBuckets().get(0).getAggregations().get(\"max\");\n+ assertEquals(50L, (long) max.getValue());\n+ max = subTerms.getBuckets().get(1).getAggregations().get(\"max\");\n+ assertEquals(78L, 
(long) max.getValue());\n+\n+ assertEquals(\"{keyword=c}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(1).getDocCount());\n+ subTerms = result.getBuckets().get(1).getAggregations().get(\"terms\");\n+ assertEquals(2, subTerms.getBuckets().size());\n+ assertEquals(\"d\", subTerms.getBuckets().get(0).getKeyAsString());\n+ assertEquals(\"y\", subTerms.getBuckets().get(1).getKeyAsString());\n+ max = subTerms.getBuckets().get(0).getAggregations().get(\"max\");\n+ assertEquals(78L, (long) max.getValue());\n+ max = subTerms.getBuckets().get(1).getAggregations().get(\"max\");\n+ assertEquals(70L, (long) max.getValue());\n+\n+ assertEquals(\"{keyword=d}\", result.getBuckets().get(2).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(2).getDocCount());\n+ subTerms = result.getBuckets().get(2).getAggregations().get(\"terms\");\n+ assertEquals(1, subTerms.getBuckets().size());\n+ assertEquals(\"y\", subTerms.getBuckets().get(0).getKeyAsString());\n+ max = subTerms.getBuckets().get(0).getAggregations().get(\"max\");\n+ assertEquals(76L, (long) max.getValue());\n+ }\n+ );\n+ }\n+ }\n+\n+ public void testRandomStrings() throws IOException {\n+ testRandomTerms(\"keyword\", () -> randomAlphaOfLengthBetween(5, 50), (v) -> (String) v);\n+ }\n+\n+ public void testRandomLongs() throws IOException {\n+ testRandomTerms(\"long\", () -> randomLong(), (v) -> (long) v);\n+ }\n+\n+ public void testRandomInts() throws IOException {\n+ testRandomTerms(\"price\", () -> randomInt(), (v) -> ((Number) v).intValue());\n+ }\n+\n+ private <T extends Comparable<T>, V extends Comparable<T>> void testRandomTerms(String field,\n+ Supplier<T> randomSupplier,\n+ Function<Object, V> transformKey) throws IOException {\n+ int numTerms = randomIntBetween(10, 500);\n+ List<T> terms = new ArrayList<>();\n+ for (int i = 0; i < numTerms; i++) {\n+ terms.add(randomSupplier.get());\n+ }\n+ int numDocs = randomIntBetween(100, 200);\n+ List<Map<String, List<Object>>> dataset = new ArrayList<>();\n+\n+ Set<T> valuesSet = new HashSet<>();\n+ Map<Comparable<?>, AtomicLong> expectedDocCounts = new HashMap<> ();\n+ for (int i = 0; i < numDocs; i++) {\n+ int numValues = randomIntBetween(1, 5);\n+ Set<Object> values = new HashSet<>();\n+ for (int j = 0; j < numValues; j++) {\n+ int rand = randomIntBetween(0, terms.size() - 1);\n+ if (values.add(terms.get(rand))) {\n+ AtomicLong count = expectedDocCounts.computeIfAbsent(terms.get(rand),\n+ (k) -> new AtomicLong(0));\n+ count.incrementAndGet();\n+ valuesSet.add(terms.get(rand));\n+ }\n+ }\n+ dataset.add(Collections.singletonMap(field, new ArrayList<>(values)));\n+ }\n+ List<T> expected = new ArrayList<>(valuesSet);\n+ Collections.sort(expected);\n+\n+ List<Comparable<T>> seen = new ArrayList<>();\n+ AtomicBoolean finish = new AtomicBoolean(false);\n+ int size = randomIntBetween(1, expected.size());\n+ while (finish.get() == false) {\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(field)), dataset,\n+ () -> {\n+ Map<String, Object> afterKey = null;\n+ if (seen.size() > 0) {\n+ afterKey = Collections.singletonMap(field, seen.get(seen.size()-1));\n+ }\n+ TermsValuesSourceBuilder source = new TermsValuesSourceBuilder(field).field(field);\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(source))\n+ .subAggregation(new TopHitsAggregationBuilder(\"top_hits\").storedField(\"_none_\"))\n+ .aggregateAfter(afterKey)\n+ .size(size);\n+ }, (result) -> {\n+ if (result.getBuckets().size() == 
0) {\n+ finish.set(true);\n+ }\n+ for (InternalComposite.InternalBucket bucket : result.getBuckets()) {\n+ V term = transformKey.apply(bucket.getKey().get(field));\n+ seen.add(term);\n+ assertThat(bucket.getDocCount(), equalTo(expectedDocCounts.get(term).get()));\n+ }\n+ });\n+ }\n+ assertEquals(expected, seen);\n+ }\n+\n+ private void testSearchCase(List<Query> queries,\n List<Map<String, List<Object>>> dataset,\n Supplier<CompositeAggregationBuilder> create,\n Consumer<InternalComposite> verify) throws IOException {\n- executeTestCase(false, null, query, dataset, create, verify);\n- executeTestCase(true, null, query, dataset, create, verify);\n- if (sort != null) {\n- executeTestCase(false, sort, query, dataset, create, verify);\n- executeTestCase(true, sort, query, dataset, create, verify);\n+ for (Query query : queries) {\n+ executeTestCase(false, query, dataset, create, verify);\n+ executeTestCase(true, query, dataset, create, verify);\n }\n }\n \n private void executeTestCase(boolean reduced,\n- Sort sort,\n Query query,\n List<Map<String, List<Object>>> dataset,\n Supplier<CompositeAggregationBuilder> create,\n Consumer<InternalComposite> verify) throws IOException {\n- IndexSettings indexSettings = createIndexSettings(sort);\n try (Directory directory = newDirectory()) {\n- IndexWriterConfig config = LuceneTestCase.newIndexWriterConfig(random(), new MockAnalyzer(random()));\n- if (sort != null) {\n- config.setIndexSort(sort);\n- /**\n- * Forces the default codec because {@link CompositeValuesSourceBuilder#checkCanEarlyTerminate}\n- * cannot detect single-valued field with the asserting-codec.\n- **/\n- config.setCodec(TestUtil.getDefaultCodec());\n- }\n- try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory, config)) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n Document document = new Document();\n for (Map<String, List<Object>> fields : dataset) {\n addToDocument(document, fields);\n@@ -1295,12 +1462,8 @@ private void executeTestCase(boolean reduced,\n }\n }\n try (IndexReader indexReader = DirectoryReader.open(directory)) {\n- IndexSearcher indexSearcher = newSearcher(indexReader, sort == null, sort == null);\n+ IndexSearcher indexSearcher = new IndexSearcher(indexReader);\n CompositeAggregationBuilder aggregationBuilder = create.get();\n- if (sort != null) {\n- CompositeAggregator aggregator = createAggregator(query, aggregationBuilder, indexSearcher, indexSettings, FIELD_TYPES);\n- assertTrue(aggregator.canEarlyTerminate());\n- }\n final InternalComposite composite;\n if (reduced) {\n composite = searchAndReduce(indexSearcher, query, aggregationBuilder, FIELD_TYPES);\n@@ -1312,31 +1475,22 @@ private void executeTestCase(boolean reduced,\n }\n }\n \n- private static IndexSettings createIndexSettings(Sort sort) {\n- Settings.Builder builder = Settings.builder();\n- if (sort != null) {\n- String[] fields = Arrays.stream(sort.getSort())\n- .map(SortField::getField)\n- .toArray(String[]::new);\n- String[] orders = Arrays.stream(sort.getSort())\n- .map((o) -> o.getReverse() ? 
\"desc\" : \"asc\")\n- .toArray(String[]::new);\n- builder.putList(\"index.sort.field\", fields);\n- builder.putList(\"index.sort.order\", orders);\n- }\n- return IndexSettingsModule.newIndexSettings(new Index(\"_index\", \"0\"), builder.build());\n- }\n-\n private void addToDocument(Document doc, Map<String, List<Object>> keys) {\n for (Map.Entry<String, List<Object>> entry : keys.entrySet()) {\n final String name = entry.getKey();\n for (Object value : entry.getValue()) {\n- if (value instanceof Long) {\n+ if (value instanceof Integer) {\n+ doc.add(new SortedNumericDocValuesField(name, (int) value));\n+ doc.add(new IntPoint(name, (int) value));\n+ } else if (value instanceof Long) {\n doc.add(new SortedNumericDocValuesField(name, (long) value));\n+ doc.add(new LongPoint(name, (long) value));\n } else if (value instanceof Double) {\n doc.add(new SortedNumericDocValuesField(name, NumericUtils.doubleToSortableLong((double) value)));\n+ doc.add(new DoublePoint(name, (double) value));\n } else if (value instanceof String) {\n doc.add(new SortedSetDocValuesField(name, new BytesRef((String) value)));\n+ doc.add(new StringField(name, new BytesRef((String) value), Field.Store.NO));\n } else {\n throw new AssertionError(\"invalid object: \" + value.getClass().getSimpleName());\n }",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregatorTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,330 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.analysis.core.KeywordAnalyzer;\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.LongPoint;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.TextField;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexOptions;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.Bits;\n+import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.NumericUtils;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.index.fielddata.FieldData;\n+import org.elasticsearch.index.mapper.KeywordFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.LeafBucketCollector;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n+\n+import static org.elasticsearch.index.mapper.NumberFieldMapper.NumberType.DOUBLE;\n+import static org.elasticsearch.index.mapper.NumberFieldMapper.NumberType.LONG;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class CompositeValuesCollectorQueueTests extends AggregatorTestCase {\n+ static class ClassAndName {\n+ final MappedFieldType fieldType;\n+ final Class<? extends Comparable<?>> clazz;\n+\n+ ClassAndName(MappedFieldType fieldType, Class<? 
extends Comparable<?>> clazz) {\n+ this.fieldType = fieldType;\n+ this.clazz = clazz;\n+ }\n+ }\n+\n+ public void testRandomLong() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"long\", LONG) , Long.class));\n+ }\n+\n+ public void testRandomDouble() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"double\", DOUBLE) , Double.class));\n+ }\n+\n+ public void testRandomDoubleAndLong() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"double\", DOUBLE), Double.class),\n+ new ClassAndName(createNumber(\"long\", LONG), Long.class));\n+ }\n+\n+ public void testRandomDoubleAndKeyword() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"double\", DOUBLE), Double.class),\n+ new ClassAndName(createKeyword(\"keyword\"), BytesRef.class));\n+ }\n+\n+ public void testRandomKeyword() throws IOException {\n+ testRandomCase(new ClassAndName(createKeyword(\"keyword\"), BytesRef.class));\n+ }\n+\n+ public void testRandomLongAndKeyword() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"long\", LONG), Long.class),\n+ new ClassAndName(createKeyword(\"keyword\"), BytesRef.class));\n+ }\n+\n+ public void testRandomLongAndDouble() throws IOException {\n+ testRandomCase(new ClassAndName(createNumber(\"long\", LONG), Long.class),\n+ new ClassAndName(createNumber(\"double\", DOUBLE) , Double.class));\n+ }\n+\n+ public void testRandomKeywordAndLong() throws IOException {\n+ testRandomCase(new ClassAndName(createKeyword(\"keyword\"), BytesRef.class),\n+ new ClassAndName(createNumber(\"long\", LONG), Long.class));\n+ }\n+\n+ public void testRandomKeywordAndDouble() throws IOException {\n+ testRandomCase(new ClassAndName(createKeyword(\"keyword\"), BytesRef.class),\n+ new ClassAndName(createNumber(\"double\", DOUBLE), Double.class));\n+ }\n+\n+ public void testRandom() throws IOException {\n+ int numTypes = randomIntBetween(3, 8);\n+ ClassAndName[] types = new ClassAndName[numTypes];\n+ for (int i = 0; i < numTypes; i++) {\n+ int rand = randomIntBetween(0, 2);\n+ switch (rand) {\n+ case 0:\n+ types[i] = new ClassAndName(createNumber(Integer.toString(i), LONG), Long.class);\n+ break;\n+ case 1:\n+ types[i] = new ClassAndName(createNumber(Integer.toString(i), DOUBLE), Double.class);\n+ break;\n+ case 2:\n+ types[i] = new ClassAndName(createKeyword(Integer.toString(i)), BytesRef.class);\n+ break;\n+ default:\n+ assert(false);\n+ }\n+ }\n+ testRandomCase(true, types);\n+ }\n+\n+ private void testRandomCase(ClassAndName... types) throws IOException {\n+ testRandomCase(true, types);\n+ testRandomCase(false, types);\n+ }\n+\n+ private void testRandomCase(boolean forceMerge, ClassAndName... 
types) throws IOException {\n+ final BigArrays bigArrays = BigArrays.NON_RECYCLING_INSTANCE;\n+ int numDocs = randomIntBetween(50, 100);\n+ List<Comparable<?>[]> possibleValues = new ArrayList<>();\n+ for (ClassAndName type : types) {\n+ int numValues = randomIntBetween(1, numDocs*2);\n+ Comparable<?>[] values = new Comparable[numValues];\n+ if (type.clazz == Long.class) {\n+ for (int i = 0; i < numValues; i++) {\n+ values[i] = randomLong();\n+ }\n+ } else if (type.clazz == Double.class) {\n+ for (int i = 0; i < numValues; i++) {\n+ values[i] = randomDouble();\n+ }\n+ } else if (type.clazz == BytesRef.class) {\n+ for (int i = 0; i < numValues; i++) {\n+ values[i] = new BytesRef(randomAlphaOfLengthBetween(5, 50));\n+ }\n+ } else {\n+ assert(false);\n+ }\n+ possibleValues.add(values);\n+ }\n+\n+ Set<CompositeKey> keys = new HashSet<>();\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory, new KeywordAnalyzer())) {\n+ for (int i = 0; i < numDocs; i++) {\n+ Document document = new Document();\n+ List<List<Comparable<?>>> docValues = new ArrayList<>();\n+ boolean hasAllField = true;\n+ for (int j = 0; j < types.length; j++) {\n+ int numValues = randomIntBetween(0, 5);\n+ if (numValues == 0) {\n+ hasAllField = false;\n+ }\n+ List<Comparable<?>> values = new ArrayList<>();\n+ for (int k = 0; k < numValues; k++) {\n+ values.add(possibleValues.get(j)[randomIntBetween(0, possibleValues.get(j).length-1)]);\n+ if (types[j].clazz == Long.class) {\n+ long value = (Long) values.get(k);\n+ document.add(new SortedNumericDocValuesField(types[j].fieldType.name(), value));\n+ document.add(new LongPoint(types[j].fieldType.name(), value));\n+ } else if (types[j].clazz == Double.class) {\n+ document.add(new SortedNumericDocValuesField(types[j].fieldType.name(),\n+ NumericUtils.doubleToSortableLong((Double) values.get(k))));\n+ } else if (types[j].clazz == BytesRef.class) {\n+ BytesRef value = (BytesRef) values.get(k);\n+ document.add(new SortedSetDocValuesField(types[j].fieldType.name(), (BytesRef) values.get(k)));\n+ document.add(new TextField(types[j].fieldType.name(), value.utf8ToString(), Field.Store.NO));\n+ } else {\n+ assert(false);\n+ }\n+ }\n+ docValues.add(values);\n+ }\n+ if (hasAllField) {\n+ List<CompositeKey> comb = createListCombinations(docValues);\n+ keys.addAll(comb);\n+ }\n+ indexWriter.addDocument(document);\n+ }\n+ if (forceMerge) {\n+ indexWriter.forceMerge(1);\n+ }\n+ }\n+ IndexReader reader = DirectoryReader.open(directory);\n+ int size = randomIntBetween(1, keys.size());\n+ SingleDimensionValuesSource<?>[] sources = new SingleDimensionValuesSource[types.length];\n+ for (int i = 0; i < types.length; i++) {\n+ final MappedFieldType fieldType = types[i].fieldType;\n+ if (types[i].clazz == Long.class) {\n+ sources[i] = new LongValuesSource(bigArrays, fieldType,\n+ context -> context.reader().getSortedNumericDocValues(fieldType.name()), value -> value,\n+ DocValueFormat.RAW, size, 1);\n+ } else if (types[i].clazz == Double.class) {\n+ sources[i] = new DoubleValuesSource(bigArrays, fieldType,\n+ context -> FieldData.sortableLongBitsToDoubles(context.reader().getSortedNumericDocValues(fieldType.name())),\n+ size, 1);\n+ } else if (types[i].clazz == BytesRef.class) {\n+ if (forceMerge) {\n+ // we don't create global ordinals but we test this mode when the reader has a single segment\n+ // since ordinals are global in this case.\n+ sources[i] = new GlobalOrdinalValuesSource(bigArrays, fieldType,\n+ context -> 
context.reader().getSortedSetDocValues(fieldType.name()), size, 1);\n+ } else {\n+ sources[i] = new BinaryValuesSource(fieldType,\n+ context -> FieldData.toString(context.reader().getSortedSetDocValues(fieldType.name())), size, 1);\n+ }\n+ } else {\n+ assert(false);\n+ }\n+ }\n+ CompositeKey[] expected = keys.toArray(new CompositeKey[0]);\n+ Arrays.sort(expected, (a, b) -> compareKey(a, b));\n+ CompositeValuesCollectorQueue queue = new CompositeValuesCollectorQueue(sources, size);\n+ final SortedDocsProducer docsProducer = sources[0].createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery());\n+ for (boolean withProducer : new boolean[] {true, false}) {\n+ if (withProducer && docsProducer == null) {\n+ continue;\n+ }\n+ int pos = 0;\n+ CompositeKey last = null;\n+ while (pos < size) {\n+ queue.clear();\n+ if (last != null) {\n+ queue.setAfter(last.values());\n+ }\n+\n+ for (LeafReaderContext leafReaderContext : reader.leaves()) {\n+ final LeafBucketCollector leafCollector = new LeafBucketCollector() {\n+ @Override\n+ public void collect(int doc, long bucket) throws IOException {\n+ queue.addIfCompetitive();\n+ }\n+ };\n+ if (withProducer) {\n+ assertEquals(DocIdSet.EMPTY,\n+ docsProducer.processLeaf(new MatchAllDocsQuery(), queue, leafReaderContext, false));\n+ } else {\n+ final LeafBucketCollector queueCollector = queue.getLeafCollector(leafReaderContext, leafCollector);\n+ final Bits liveDocs = leafReaderContext.reader().getLiveDocs();\n+ for (int i = 0; i < leafReaderContext.reader().maxDoc(); i++) {\n+ if (liveDocs == null || liveDocs.get(i)) {\n+ queueCollector.collect(i);\n+ }\n+ }\n+ }\n+ }\n+ assertEquals(size, Math.min(queue.size(), expected.length - pos));\n+ int ptr = 0;\n+ for (int slot : queue.getSortedSlot()) {\n+ CompositeKey key = queue.toCompositeKey(slot);\n+ assertThat(key, equalTo(expected[ptr++]));\n+ last = key;\n+ }\n+ pos += queue.size();\n+ }\n+ }\n+ reader.close();\n+ }\n+ }\n+\n+ private static MappedFieldType createNumber(String name, NumberFieldMapper.NumberType type) {\n+ MappedFieldType fieldType = new NumberFieldMapper.NumberFieldType(type);\n+ fieldType.setIndexOptions(IndexOptions.DOCS);\n+ fieldType.setName(name);\n+ fieldType.setHasDocValues(true);\n+ fieldType.freeze();\n+ return fieldType;\n+ }\n+\n+ private static MappedFieldType createKeyword(String name) {\n+ MappedFieldType fieldType = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType.setIndexOptions(IndexOptions.DOCS);\n+ fieldType.setName(name);\n+ fieldType.setHasDocValues(true);\n+ fieldType.freeze();\n+ return fieldType;\n+ }\n+\n+ private static int compareKey(CompositeKey key1, CompositeKey key2) {\n+ assert key1.size() == key2.size();\n+ for (int i = 0; i < key1.size(); i++) {\n+ Comparable<Object> cmp1 = (Comparable<Object>) key1.get(i);\n+ int cmp = cmp1.compareTo(key2.get(i));\n+ if (cmp != 0) {\n+ return cmp;\n+ }\n+ }\n+ return 0;\n+ }\n+\n+ private static List<CompositeKey> createListCombinations(List<List<Comparable<?>>> values) {\n+ List<CompositeKey> keys = new ArrayList<>();\n+ createListCombinations(new Comparable[values.size()], values, 0, values.size(), keys);\n+ return keys;\n+ }\n+\n+ private static void createListCombinations(Comparable<?>[] key, List<List<Comparable<?>>> values,\n+ int pos, int maxPos, List<CompositeKey> keys) {\n+ if (pos == maxPos) {\n+ keys.add(new CompositeKey(key.clone()));\n+ } else {\n+ for (Comparable<?> val : values.get(pos)) {\n+ key[pos] = val;\n+ createListCombinations(key, values, pos + 1, maxPos, keys);\n+ }\n+ }\n+ }\n+}",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesCollectorQueueTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,106 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.apache.lucene.document.LongPoint;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.TermQuery;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.index.mapper.KeywordFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+\n+public class SingleDimensionValuesSourceTests extends ESTestCase {\n+ public void testBinarySorted() {\n+ MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();\n+ keyword.setName(\"keyword\");\n+ BinaryValuesSource source = new BinaryValuesSource(keyword, context -> null, 1, 1);\n+ assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));\n+ IndexReader reader = mockIndexReader(1, 1);\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, null));\n+ assertNull(source.createSortedDocsProducerOrNull(reader,\n+ new TermQuery(new Term(\"keyword\", \"toto)\"))));\n+ source = new BinaryValuesSource(keyword, context -> null, 0, -1);\n+ assertNull(source.createSortedDocsProducerOrNull(reader, null));\n+ }\n+\n+ public void testGlobalOrdinalsSorted() {\n+ MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();\n+ keyword.setName(\"keyword\");\n+ BinaryValuesSource source = new BinaryValuesSource(keyword, context -> null, 1, 1);\n+ assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));\n+ IndexReader reader = mockIndexReader(1, 1);\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, null));\n+ assertNull(source.createSortedDocsProducerOrNull(reader,\n+ new TermQuery(new Term(\"keyword\", \"toto)\"))));\n+ source = new BinaryValuesSource(keyword, context -> null, 1, -1);\n+ assertNull(source.createSortedDocsProducerOrNull(reader, null));\n+ }\n+\n+ public void testNumericSorted() {\n+ for (NumberFieldMapper.NumberType numberType : NumberFieldMapper.NumberType.values()) {\n+ MappedFieldType number = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ number.setName(\"number\");\n+ final SingleDimensionValuesSource<?> source;\n+ if (numberType == 
NumberFieldMapper.NumberType.BYTE ||\n+ numberType == NumberFieldMapper.NumberType.SHORT ||\n+ numberType == NumberFieldMapper.NumberType.INTEGER ||\n+ numberType == NumberFieldMapper.NumberType.LONG) {\n+ source = new LongValuesSource(BigArrays.NON_RECYCLING_INSTANCE,\n+ number, context -> null, value -> value, DocValueFormat.RAW, 1, 1);\n+ assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));\n+ IndexReader reader = mockIndexReader(1, 1);\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, null));\n+ assertNotNull(source.createSortedDocsProducerOrNull(reader, LongPoint.newRangeQuery(\"number\", 0, 1)));\n+ assertNull(source.createSortedDocsProducerOrNull(reader, new TermQuery(new Term(\"keyword\", \"toto)\"))));\n+ LongValuesSource sourceRev =\n+ new LongValuesSource(BigArrays.NON_RECYCLING_INSTANCE,\n+ number, context -> null, value -> value, DocValueFormat.RAW, 1, -1);\n+ assertNull(sourceRev.createSortedDocsProducerOrNull(reader, null));\n+ } else if (numberType == NumberFieldMapper.NumberType.HALF_FLOAT ||\n+ numberType == NumberFieldMapper.NumberType.FLOAT ||\n+ numberType == NumberFieldMapper.NumberType.DOUBLE) {\n+ source = new DoubleValuesSource(BigArrays.NON_RECYCLING_INSTANCE,\n+ number, context -> null, 1, 1);\n+ } else{\n+ throw new AssertionError (\"missing type:\" + numberType.typeName());\n+ }\n+ assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));\n+ }\n+ }\n+\n+ private static IndexReader mockIndexReader(int maxDoc, int numDocs) {\n+ IndexReader reader = mock(IndexReader.class);\n+ when(reader.hasDeletions()).thenReturn(maxDoc - numDocs > 0);\n+ when(reader.maxDoc()).thenReturn(maxDoc);\n+ when(reader.numDocs()).thenReturn(numDocs);\n+ return reader;\n+ }\n+}",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/SingleDimensionValuesSourceTests.java",
"status": "added"
},
{
"diff": "@@ -291,7 +291,6 @@ protected <A extends InternalAggregation, C extends Aggregator> A search(IndexSe\n A internalAgg = (A) a.buildAggregation(0L);\n InternalAggregationTestCase.assertMultiBucketConsumer(internalAgg, bucketConsumer);\n return internalAgg;\n-\n }\n \n protected <A extends InternalAggregation, C extends Aggregator> A searchAndReduce(IndexSearcher searcher,",
"filename": "test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java",
"status": "modified"
}
]
} |
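The `testRandomTerms` helper in the diff above pages through the composite aggregation by feeding the last key it saw back in via `aggregateAfter`. A minimal sketch of that paging loop, assuming the surrounding test harness from the diff (`runPage` and `pageSize` are hypothetical stand-ins for `testSearchCase` and the randomly chosen size; the builder calls are the ones shown in the diff):

```java
// Sketch only: mirrors the after-key paging pattern from testRandomTerms above.
Map<String, Object> afterKey = null;                          // null -> start from the first bucket
List<Object> seen = new ArrayList<>();
while (true) {
    TermsValuesSourceBuilder source = new TermsValuesSourceBuilder("field").field("field");
    CompositeAggregationBuilder builder =
        new CompositeAggregationBuilder("name", Collections.singletonList(source))
            .aggregateAfter(afterKey)                         // resume after the last key already consumed
            .size(pageSize);
    InternalComposite page = runPage(builder);                // hypothetical helper standing in for testSearchCase
    if (page.getBuckets().size() == 0) {
        break;                                                // an empty page means every bucket has been consumed
    }
    for (InternalComposite.InternalBucket bucket : page.getBuckets()) {
        seen.add(bucket.getKey().get("field"));               // collect keys in order; doc counts can be checked here too
    }
    afterKey = Collections.singletonMap("field", seen.get(seen.size() - 1));
}
```

The loop terminates because each page returns at most `size` buckets strictly after the previous `afterKey`, so an empty page can only mean the terms are exhausted.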
{
"body": "**Elasticsearch version (bin/elasticsearch --version)**:\r\nVersion: 6.2.1, Build: 7299dc3/2018-02-07T19:34:26.990113Z, JVM: 1.8.0_131\r\n\r\n**Plugins installed**: []\r\nanalysis-icu\r\n\r\n**JVM version (java -version)**:\r\nopenjdk version \"1.8.0_131\"\r\nOpenJDK Runtime Environment (build 1.8.0_131-b11)\r\nOpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version (uname -a if on a Unix-like system)**:\r\nFreeBSD 11.1\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAfter upgrading to 6.2.1, one of the data (and also client) nodes reject all bulk operations. In the reject response queued tasks grow infinitely while completed tasks doesn't change (see attached logs).\r\n\r\n**Steps to reproduce**:\r\nWe have a lot of different data and indices spread over 40 nodes. So far I could observe this error only on one node. When I try to restart the node with kill, it doesn't stop. Below is the stacktrace.\r\n\r\n**Provide logs (if relevant)**:\r\nThis is just about a minute (logged by an application, uses python elasticsearch client). queued tasks grow, while completed tasks doesn't.\r\n``\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@3684362a on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14926, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@11961ec on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14939, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@573952db on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14951, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@573952db on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14951, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@4127dfd2 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14960, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@4c6147ea on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14961, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@4958c58e on 
EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14960, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@7acb9dba on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14967, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@75af7186 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14967, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@7830996e on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14972, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@513cdd87 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14979, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@1166d220 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14983, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@498b99a2 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 14998, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@64e61a2f on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15005, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@372ab4a0 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15006, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@4381386b on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, 
org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15006, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@2c661b6d on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15025, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@5fcb36db on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15025, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@21834e56 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15038, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@58444a25 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15038, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@25da450 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15047, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@6f3c0966 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15047, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@27485a8e on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15050, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\nwith: {u'reason': u'rejected execution of org.elasticsearch.transport.TransportService$7@170e6b30 on EsThreadPoolExecutor[name = fmfe16/bulk, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@12006f74[Running, pool size = 24, active threads = 24, queued tasks = 15056, completed tasks = 253961]]', u'type': u'es_rejected_execution_exception'}\r\n``\r\n\r\nAnd this is the stacktrace after I tried to kill the node and it didn't stop:\r\nhttps://pastebin.com/rnzESu5B",
"comments": [
{
"body": "Hi @bra-fsn, we reserve Github for bug reports and feature requests only. Please ask questions like these in the [Elasticsearch forum](https://discuss.elastic.co/c/elasticsearch) instead. Thank you!\r\n\r\nSome hints to get you started:\r\n\r\n* The stack traces indicate that the cluster is doing a couple of update and delete operations when you took them.\r\n* You should check the [documentation on upgrades](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html).\r\n* You can use the [hot threads API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) to which operations take a significant amount of time (in your case it might make sense to set `ignore_idle_threads` to `false`)",
"created_at": "2018-02-19T08:56:43Z"
},
{
"body": "There's a deadlock in `InternalEngine` (the stack dump even says so). /cc: @s1monw \r\n\r\nThe stack trace below should be pretty self-explanatory of what's going on:\r\n\r\n```\r\nFound one Java-level deadlock:\r\n\r\n=============================\"\r\n\r\n\r\n\"elasticsearch[fmfe16][index][T#24]\":\r\n\r\n waiting for ownable synchronizer 0x00000009df4b02c0, (a java.util.co\\\r\n\tncurrent.locks.ReentrantLock$NonfairSync)\"\r\n,\r\n\t which is held by \"elasticsearch[fmfe16][index][T#9]\\\r\n\t\"\"\r\n\r\n\r\n\"elasticsearch[fmfe16][index][T#9]\":\r\n\r\n waiting to lock monitor 0x0000000afc30c4e8\"\r\n (object 0x00000008b40309c8, a org.elasticsearch.index.engine.LiveVers\\\r\n\tionMap)\"\r\n,\r\n\t which is held by \"elasticsearch[fmfe16][get][T#14]\"\"\r\n\r\n\r\n\"elasticsearch[fmfe16][get][T#14]\":\r\n\r\n waiting for ownable synchronizer 0x00000009df4b02c0, (a java.util.co\\\r\n\tncurrent.locks.ReentrantLock$NonfairSync)\"\r\n,\r\n\t which is held by \"elasticsearch[fmfe16][index][T#9]\\\r\n\t\"\"\r\n\r\n\r\n\r\n\r\nJava stack information for the threads listed above:\r\n\r\n===================================================\r\n\r\n\"elasticsearch[fmfe16][index][T#24]\":\r\n\r\n\tat sun.misc.Unsafe.park(Native Method)\r\n\r\n\t- parking to wait for <0x00000009df4b02c0> (a java.util.concu\\\r\n\trrent.locks.ReentrantLock$NonfairSync)\r\n\r\n\tat java.util.concurrent.locks.LockSupport.park(LockSupport.jav\\\r\n\ta:175)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.parkA\\\r\n\tndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.acqui\\\r\n\treQueued(AbstractQueuedSynchronizer.java:870)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.acqui\\\r\n\tre(AbstractQueuedSynchronizer.java:1199)\r\n\r\n\tat java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(R\\\r\n\teentrantLock.java:209)\r\n\r\n\tat java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock\\\r\n\t.java:285)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.KeyedLock.acquire(\\\r\n\tKeyedLock.java:76)\r\n\r\n\tat org.elasticsearch.index.engine.LiveVersionMap.acquireLock(L\\\r\n\tiveVersionMap.java:431)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.pruneDeletedT\\\r\n\tombstones(InternalEngine.java:1661)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.maybePruneDel\\\r\n\tetedTombstones(InternalEngine.java:1348)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.refresh(Inter\\\r\n\tnalEngine.java:1427)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.get(InternalE\\\r\n\tngine.java:637)\r\n\r\n\tat org.elasticsearch.index.shard.IndexShard.get(IndexShard.jav\\\r\n\ta:850)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.innerGet(ShardG\\\r\n\tetService.java:153)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.get(ShardGetSer\\\r\n\tvice.java:81)\r\n\r\n\tat org.elasticsearch.action.update.UpdateHelper.prepare(Update\\\r\n\tHelper.java:74)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:174)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:167)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:67)\r\n\r\n\tat 
org.elasticsearch.action.support.single.instance.TransportI\\\r\n\tnstanceSingleOperationAction$ShardTransportHandler.messageReceived(Tra\\\r\n\tnsportInstanceSingleOperationAction.java:243)\r\n\r\n\tat org.elasticsearch.action.support.single.instance.TransportI\\\r\n\tnstanceSingleOperationAction$ShardTransportHandler.messageReceived(Tra\\\r\n\tnsportInstanceSingleOperationAction.java:239)\r\n\r\n\tat org.elasticsearch.transport.TransportRequestHandler.message\\\r\n\tReceived(TransportRequestHandler.java:30)\r\n\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processM\\\r\n\tessageReceived(RequestHandlerRegistry.java:66)\r\n\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doR\\\r\n\tun(TcpTransport.java:1555)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$Cont\\\r\n\textPreservingAbstractRunnable.doRun(ThreadContext.java:635)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.r\\\r\n\tun(AbstractRunnable.java:37)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoo\\\r\n\tlExecutor.java:1149)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPo\\\r\n\tolExecutor.java:624)\r\n\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n\r\n\"elasticsearch[fmfe16][index][T#9]\":\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.getVersionFro\\\r\n\tmMap(InternalEngine.java:732)\r\n\r\n\t- waiting to lock <0x00000008b40309c8> \"\r\n(a org.elasticsearch.index.engine.LiveVersionMap)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.get(InternalE\\\r\n\tngine.java:627)\r\n\r\n\tat org.elasticsearch.index.shard.IndexShard.get(IndexShard.jav\\\r\n\ta:850)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.innerGet(ShardG\\\r\n\tetService.java:153)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.get(ShardGetSer\\\r\n\tvice.java:81)\r\n\r\n\tat org.elasticsearch.action.update.UpdateHelper.prepare(Update\\\r\n\tHelper.java:74)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:174)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:167)\r\n\r\n\tat org.elasticsearch.action.update.TransportUpdateAction.shard\\\r\n\tOperation(TransportUpdateAction.java:67)\r\n\r\n\tat org.elasticsearch.action.support.single.instance.TransportI\\\r\n\tnstanceSingleOperationAction$ShardTransportHandler.messageReceived(Tra\\\r\n\tnsportInstanceSingleOperationAction.java:243)\r\n\r\n\tat org.elasticsearch.action.support.single.instance.TransportI\\\r\n\tnstanceSingleOperationAction$ShardTransportHandler.messageReceived(Tra\\\r\n\tnsportInstanceSingleOperationAction.java:239)\r\n\r\n\tat org.elasticsearch.transport.TransportRequestHandler.message\\\r\n\tReceived(TransportRequestHandler.java:30)\r\n\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processM\\\r\n\tessageReceived(RequestHandlerRegistry.java:66)\r\n\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doR\\\r\n\tun(TcpTransport.java:1555)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$Cont\\\r\n\textPreservingAbstractRunnable.doRun(ThreadContext.java:635)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.r\\\r\n\tun(AbstractRunnable.java:37)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoo\\\r\n\tlExecutor.java:1149)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPo\\\r\n\tolExecutor.java:624)\r\n\r\n\tat 
java.lang.Thread.run(Thread.java:748)\r\n\r\n\"elasticsearch[fmfe16][get][T#14]\":\r\n\r\n\tat sun.misc.Unsafe.park(Native Method)\r\n\r\n\t- parking to wait for <0x00000009df4b02c0> (a java.util.concu\\\r\n\trrent.locks.ReentrantLock$NonfairSync)\r\n\r\n\tat java.util.concurrent.locks.LockSupport.park(LockSupport.jav\\\r\n\ta:175)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.parkA\\\r\n\tndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.acqui\\\r\n\treQueued(AbstractQueuedSynchronizer.java:870)\r\n\r\n\tat java.util.concurrent.locks.AbstractQueuedSynchronizer.acqui\\\r\n\tre(AbstractQueuedSynchronizer.java:1199)\r\n\r\n\tat java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(R\\\r\n\teentrantLock.java:209)\r\n\r\n\tat java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock\\\r\n\t.java:285)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.KeyedLock.acquire(\\\r\n\tKeyedLock.java:76)\r\n\r\n\tat org.elasticsearch.index.engine.LiveVersionMap.acquireLock(L\\\r\n\tiveVersionMap.java:431)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.pruneDeletedT\\\r\n\tombstones(InternalEngine.java:1661)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.maybePruneDel\\\r\n\tetedTombstones(InternalEngine.java:1348)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.refresh(Inter\\\r\n\tnalEngine.java:1427)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.getVersionFro\\\r\n\tmMap(InternalEngine.java:737)\r\n\r\n\t- locked <0x00000008b40309c8> \"\r\n(a org.elasticsearch.index.engine.LiveVersionMap)\r\n\r\n\tat org.elasticsearch.index.engine.InternalEngine.get(InternalE\\\r\n\tngine.java:627)\r\n\r\n\tat org.elasticsearch.index.shard.IndexShard.get(IndexShard.jav\\\r\n\ta:850)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.innerGet(ShardG\\\r\n\tetService.java:153)\r\n\r\n\tat org.elasticsearch.index.get.ShardGetService.get(ShardGetSer\\\r\n\tvice.java:81)\r\n\r\n\tat org.elasticsearch.action.get.TransportGetAction.shardOperat\\\r\n\tion(TransportGetAction.java:88)\r\n\r\n\tat org.elasticsearch.action.get.TransportGetAction.shardOperat\\\r\n\tion(TransportGetAction.java:44)\r\n\r\n\tat org.elasticsearch.action.support.single.shard.TransportSing\\\r\n\tleShardAction$ShardTransportHandler.messageReceived(TransportSingleSha\\\r\n\trdAction.java:293)\r\n\r\n\tat org.elasticsearch.action.support.single.shard.TransportSing\\\r\n\tleShardAction$ShardTransportHandler.messageReceived(TransportSingleSha\\\r\n\trdAction.java:286)\r\n\r\n\tat org.elasticsearch.transport.TransportRequestHandler.message\\\r\n\tReceived(TransportRequestHandler.java:30)\r\n\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processM\\\r\n\tessageReceived(RequestHandlerRegistry.java:66)\r\n\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doR\\\r\n\tun(TcpTransport.java:1555)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$Cont\\\r\n\textPreservingAbstractRunnable.doRun(ThreadContext.java:635)\r\n\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.r\\\r\n\tun(AbstractRunnable.java:37)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoo\\\r\n\tlExecutor.java:1149)\r\n\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPo\\\r\n\tolExecutor.java:624)\r\n\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n\r\n\r\n\tFound 1 deadlock.\r\n```\r\n\r\n\r\n\r\n\r\n",
"created_at": "2018-02-19T10:21:19Z"
},
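For readers who do not want to untangle the wrapped thread dump above, the cycle it reports comes down to two code paths taking the same pair of locks in opposite order: one path holds the per-uid lock and then needs the `LiveVersionMap` monitor, while a concurrent get holds the monitor and, via a refresh that prunes tombstones, needs the same per-uid lock. A minimal, self-contained Java sketch of that shape (illustrative names only, not the actual Elasticsearch classes):

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustration of the lock inversion, not the real engine code: uidLock stands in for the
// per-uid KeyedLock and the synchronized block for the LiveVersionMap monitor.
public class LockOrderSketch {
    private final ReentrantLock uidLock = new ReentrantLock(); // per-uid lock (taken first on path A)
    private final Object versionMapMonitor = new Object();     // version-map monitor (taken first on path B)

    // Path A: holds the uid lock, then needs the monitor.
    void readVersionUnderUidLock() {
        uidLock.lock();
        try {
            synchronized (versionMapMonitor) {
                // look up the version for this uid
            }
        } finally {
            uidLock.unlock();
        }
    }

    // Path B: holds the monitor, then a refresh prunes tombstones and needs the uid lock,
    // i.e. the opposite acquisition order.
    void refreshAndPruneUnderMonitor() {
        synchronized (versionMapMonitor) {
            uidLock.lock();          // a blocking acquire here can deadlock against path A
            try {
                // prune the tombstone entry for this uid
            } finally {
                uidLock.unlock();
            }
        }
    }
}
```

Run concurrently on the same uid, path A can end up holding the uid lock while waiting for the monitor and path B holding the monitor while waiting for the uid lock, which is exactly the cycle in the dump; the fix in the PR below removes the blocking acquire on path B.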
{
"body": "@danielmitterdorfer: this is a bug report, not a question...\r\n",
"created_at": "2018-02-19T10:33:46Z"
},
{
"body": "@ywelsch thanks for double-checking. I did not spot the deadlock in the thread dump originally.",
"created_at": "2018-02-19T11:10:22Z"
},
{
"body": "I am looking into it",
"created_at": "2018-02-19T16:54:05Z"
},
{
"body": "@bra-fsn thanks for reporting this. @danielmitterdorfer @ywelsch I opened a pr for this.",
"created_at": "2018-02-19T21:04:13Z"
},
{
"body": "@s1monw : great, thanks for fixing it!\r\nAlso thanks @ywelsch for not letting this down into the sink. :)",
"created_at": "2018-02-20T08:53:25Z"
}
],
"number": 28714,
"title": "Bulk task queue grows infinitely after upgrading to 6.2.1 (from 5.6.4)"
} | {
"body": "Pruning tombstones is best effort and should not block if a key is currently\r\nlocked. This can cause a deadlock in rare situations if we switch of append\r\nonly optimization while heavily updating the same key in the engine\r\nwhile the LiveVersionMap is locked. This is very rare since this code\r\npatch only executed every 15 seconds by default since that is the interval\r\nwe try to prune the deletes in the version map.\r\n\r\nCloses #28714\r\n",
"number": 28736,
"review_comments": [
{
"body": "Is this necessary? Shouldn't we be able to return the `ReleasableLock` once we have acquired `perNodeLock`? ",
"created_at": "2018-02-20T08:01:28Z"
},
{
"body": "++",
"created_at": "2018-02-20T08:02:20Z"
},
{
"body": "Does it make sense to call `tryAcquire` in the try-with-resources block and do the null check in the block? i.e.:\r\n\r\n```java\r\ntry (Releasable lock = keyedLock.tryAcquire(uid)) {\r\n if (lock != null) {\r\n DeleteVersionValue versionValue = tombstones.get(uid);\r\n // ...\r\n }\r\n}\r\n```\r\n\r\nThat implementation would not need a dummy `ignored` variable and instead actually use the returned value (i.e. `lock`).",
"created_at": "2018-02-20T08:10:22Z"
},
{
"body": "Nit: Empty line",
"created_at": "2018-02-20T08:11:18Z"
},
{
"body": "I have to admit that it took me a moment to understand *why* this is avoiding the deadlock as there need to be two locks involved that are acquired by two threads in opposite order. The reason why this fix works is that this is the inner of the two involved locks. Then there are two cases:\r\n\r\n1. We get this lock, remove the tombstone and return the lock. If another thread is trying to acquire this lock, it can do so after we have left the try-with-resources block.\r\n2. We do not get the lock. Then we will not wait but rather give up (this time).\r\n\r\n(just wrote this down for my own reference.)",
"created_at": "2018-02-20T08:16:39Z"
},
{
"body": "correct",
"created_at": "2018-02-20T08:38:01Z"
},
{
"body": "it does I didn't know you can do that with a null value.",
"created_at": "2018-02-20T08:41:37Z"
},
{
"body": "I left a comment regarding this",
"created_at": "2018-02-20T08:53:33Z"
},
{
"body": "Nit: `awaitStarted`",
"created_at": "2018-02-20T11:42:51Z"
},
{
"body": "Nit: should be named `document3` for consistency?",
"created_at": "2018-02-20T11:43:24Z"
}
],
"title": "Never block on key in `LiveVersionMap#pruneTombstones`"
} | {
"commits": [
{
"message": "Never block on key in `LiveVersionMap#pruneTombstones`\n\nPruning tombstones is best effort and should not block if a key is currently\nlocked. This can cause a deadlock in rare situations if we switch of append\nonly optimization while heavily updating the same key in the engine\nwhile the LiveVersionMap is locked. This is very rare since this code\npatch only executed every 15 seconds by default since that is the interval\nwe try to prune the deletes in the version map.\n\nCloses #28714"
},
{
"message": "Merge branch 'master' into issues/28714"
},
{
"message": "apply feedback from @danielmitterdorfer"
},
{
"message": "add engine test to reproduce the issue"
},
{
"message": "apply comments"
}
],
"files": [
{
"diff": "@@ -63,20 +63,52 @@ public Releasable acquire(T key) {\n while (true) {\n KeyLock perNodeLock = map.get(key);\n if (perNodeLock == null) {\n- KeyLock newLock = new KeyLock(fair);\n- perNodeLock = map.putIfAbsent(key, newLock);\n- if (perNodeLock == null) {\n- newLock.lock();\n- return new ReleasableLock(key, newLock);\n+ ReleasableLock newLock = tryCreateNewLock(key);\n+ if (newLock != null) {\n+ return newLock;\n+ }\n+ } else {\n+ assert perNodeLock != null;\n+ int i = perNodeLock.count.get();\n+ if (i > 0 && perNodeLock.count.compareAndSet(i, i + 1)) {\n+ perNodeLock.lock();\n+ return new ReleasableLock(key, perNodeLock);\n }\n }\n- assert perNodeLock != null;\n- int i = perNodeLock.count.get();\n- if (i > 0 && perNodeLock.count.compareAndSet(i, i + 1)) {\n- perNodeLock.lock();\n- return new ReleasableLock(key, perNodeLock);\n+ }\n+ }\n+\n+ /**\n+ * Tries to acquire the lock for the given key and returns it. If the lock can't be acquired null is returned.\n+ */\n+ public Releasable tryAcquire(T key) {\n+ final KeyLock perNodeLock = map.get(key);\n+ if (perNodeLock == null) {\n+ return tryCreateNewLock(key);\n+ }\n+ if (perNodeLock.tryLock()) { // ok we got it - make sure we increment it accordingly otherwise release it again\n+ int i;\n+ while ((i = perNodeLock.count.get()) > 0) {\n+ // we have to do this in a loop here since even if the count is > 0\n+ // there could be a concurrent blocking acquire that changes the count and then this CAS fails. Since we already got\n+ // the lock we should retry and see if we can still get it or if the count is 0. If that is the case and we give up.\n+ if (perNodeLock.count.compareAndSet(i, i + 1)) {\n+ return new ReleasableLock(key, perNodeLock);\n+ }\n }\n+ perNodeLock.unlock(); // make sure we unlock and don't leave the lock in a locked state\n+ }\n+ return null;\n+ }\n+\n+ private ReleasableLock tryCreateNewLock(T key) {\n+ KeyLock newLock = new KeyLock(fair);\n+ newLock.lock();\n+ KeyLock keyLock = map.putIfAbsent(key, newLock);\n+ if (keyLock == null) {\n+ return new ReleasableLock(key, newLock);\n }\n+ return null;\n }\n \n /**\n@@ -92,11 +124,12 @@ public boolean isHeldByCurrentThread(T key) {\n \n private void release(T key, KeyLock lock) {\n assert lock == map.get(key);\n+ final int decrementAndGet = lock.count.decrementAndGet();\n lock.unlock();\n- int decrementAndGet = lock.count.decrementAndGet();\n if (decrementAndGet == 0) {\n map.remove(key, lock);\n }\n+ assert decrementAndGet >= 0 : decrementAndGet + \" must be >= 0 but wasn't\";\n }\n \n ",
"filename": "server/src/main/java/org/elasticsearch/common/util/concurrent/KeyedLock.java",
"status": "modified"
},
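The `tryAcquire` added above returns `null` instead of blocking when the key is contended, and `tryCreateNewLock` keeps the invariant that a returned lock is already held and counted. A sketch of the caller-side pattern this enables, mirroring the try-with-resources-plus-null-check idea from the review discussion above (`BestEffortWorker` and `maybeDoWork` are hypothetical names; `KeyedLock` and `Releasable` are the real classes from this diff):

```java
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.common.util.concurrent.KeyedLock;

// Hypothetical caller showing the best-effort locking pattern the fix relies on.
class BestEffortWorker {
    private final KeyedLock<String> keyedLock = new KeyedLock<>();

    void maybeDoWork(String key) {
        try (Releasable lock = keyedLock.tryAcquire(key)) {
            if (lock != null) {
                // got the lock: do the optional, skippable work for this key
            }
            // lock == null: another thread holds it; give up this round instead of waiting
        } // try-with-resources skips close() for a null resource, so no extra null check is needed to release
    }
}
```

Either the work runs under the lock and is released when the block exits, or the caller backs off without blocking, which is why a pruning pass written this way can no longer participate in the deadlock described above.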
{
"diff": "@@ -32,7 +32,6 @@\n import java.util.Collections;\n import java.util.Map;\n import java.util.concurrent.atomic.AtomicLong;\n-import java.util.function.Function;\n \n /** Maps _uid value to its version information. */\n final class LiveVersionMap implements ReferenceManager.RefreshListener, Accountable {\n@@ -378,20 +377,25 @@ void removeTombstoneUnderLock(BytesRef uid) {\n \n void pruneTombstones(long currentTime, long pruneInterval) {\n for (Map.Entry<BytesRef, DeleteVersionValue> entry : tombstones.entrySet()) {\n- BytesRef uid = entry.getKey();\n- try (Releasable ignored = acquireLock(uid)) { // can we do it without this lock on each value? maybe batch to a set and get\n- // the lock once per set?\n- // Must re-get it here, vs using entry.getValue(), in case the uid was indexed/deleted since we pulled the iterator:\n- DeleteVersionValue versionValue = tombstones.get(uid);\n- if (versionValue != null) {\n- // check if the value is old enough to be removed\n- final boolean isTooOld = currentTime - versionValue.time > pruneInterval;\n- if (isTooOld) {\n- // version value can't be removed it's\n- // not yet flushed to lucene ie. it's part of this current maps object\n- final boolean isNotTrackedByCurrentMaps = versionValue.time < maps.getMinDeleteTimestamp();\n- if (isNotTrackedByCurrentMaps) {\n- removeTombstoneUnderLock(uid);\n+ final BytesRef uid = entry.getKey();\n+ try (Releasable lock = keyedLock.tryAcquire(uid)) {\n+ // we use tryAcquire here since this is a best effort and we try to be least disruptive\n+ // this method is also called under lock in the engine under certain situations such that this can lead to deadlocks\n+ // if we do use a blocking acquire. see #28714\n+ if (lock != null) { // did we get the lock?\n+ // can we do it without this lock on each value? maybe batch to a set and get the lock once per set?\n+ // Must re-get it here, vs using entry.getValue(), in case the uid was indexed/deleted since we pulled the iterator:\n+ final DeleteVersionValue versionValue = tombstones.get(uid);\n+ if (versionValue != null) {\n+ // check if the value is old enough to be removed\n+ final boolean isTooOld = currentTime - versionValue.time > pruneInterval;\n+ if (isTooOld) {\n+ // version value can't be removed it's\n+ // not yet flushed to lucene ie. it's part of this current maps object\n+ final boolean isNotTrackedByCurrentMaps = versionValue.time < maps.getMinDeleteTimestamp();\n+ if (isNotTrackedByCurrentMaps) {\n+ removeTombstoneUnderLock(uid);\n+ }\n }\n }\n }",
"filename": "server/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.common.util.concurrent;\n \n import org.elasticsearch.common.lease.Releasable;\n-import org.elasticsearch.common.util.concurrent.KeyedLock;\n import org.elasticsearch.test.ESTestCase;\n import org.hamcrest.Matchers;\n \n@@ -31,6 +30,7 @@\n import java.util.Set;\n import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -79,6 +79,34 @@ public void testHasLockedKeys() {\n assertFalse(lock.hasLockedKeys());\n }\n \n+ public void testTryAcquire() throws InterruptedException {\n+ KeyedLock<String> lock = new KeyedLock<>();\n+ Releasable foo = lock.tryAcquire(\"foo\");\n+ Releasable second = lock.tryAcquire(\"foo\");\n+ assertTrue(lock.hasLockedKeys());\n+ foo.close();\n+ assertTrue(lock.hasLockedKeys());\n+ second.close();\n+ assertFalse(lock.hasLockedKeys());\n+ // lock again\n+ Releasable acquire = lock.tryAcquire(\"foo\");\n+ assertNotNull(acquire);\n+ final AtomicBoolean check = new AtomicBoolean(false);\n+ CountDownLatch latch = new CountDownLatch(1);\n+ Thread thread = new Thread(() -> {\n+ latch.countDown();\n+ try (Releasable ignore = lock.acquire(\"foo\")) {\n+ assertTrue(check.get());\n+ }\n+ });\n+ thread.start();\n+ latch.await();\n+ check.set(true);\n+ acquire.close();\n+ foo.close();\n+ thread.join();\n+ }\n+\n public void testLockIsReentrant() throws InterruptedException {\n KeyedLock<String> lock = new KeyedLock<>();\n Releasable foo = lock.acquire(\"foo\");\n@@ -137,7 +165,24 @@ public void run() {\n for (int i = 0; i < numRuns; i++) {\n String curName = names[randomInt(names.length - 1)];\n assert connectionLock.isHeldByCurrentThread(curName) == false;\n- try (Releasable ignored = connectionLock.acquire(curName)) {\n+ Releasable lock;\n+ if (randomIntBetween(0, 10) < 4) {\n+ int tries = 0;\n+ boolean stepOut = false;\n+ while ((lock = connectionLock.tryAcquire(curName)) == null) {\n+ assertFalse(connectionLock.isHeldByCurrentThread(curName));\n+ if (tries++ == 10) {\n+ stepOut = true;\n+ break;\n+ }\n+ }\n+ if (stepOut) {\n+ break;\n+ }\n+ } else {\n+ lock = connectionLock.acquire(curName);\n+ }\n+ try (Releasable ignore = lock) {\n assert connectionLock.isHeldByCurrentThread(curName);\n assert connectionLock.isHeldByCurrentThread(curName + \"bla\") == false;\n if (randomBoolean()) {",
"filename": "server/src/test/java/org/elasticsearch/common/util/concurrent/KeyedLockTests.java",
"status": "modified"
},
{
"diff": "@@ -4517,4 +4517,60 @@ public void testShouldPeriodicallyFlush() throws Exception {\n assertThat(engine.getLastCommittedSegmentInfos(), not(sameInstance(lastCommitInfo)));\n assertThat(engine.getTranslog().uncommittedOperations(), equalTo(0));\n }\n+\n+\n+ public void testStressUpdateSameDocWhileGettingIt() throws IOException, InterruptedException {\n+ final int iters = randomIntBetween(1, 15);\n+ for (int i = 0; i < iters; i++) {\n+ // this is a reproduction of https://github.com/elastic/elasticsearch/issues/28714\n+ try (Store store = createStore(); InternalEngine engine = createEngine(store, createTempDir())) {\n+ final IndexSettings indexSettings = engine.config().getIndexSettings();\n+ final IndexMetaData indexMetaData = IndexMetaData.builder(indexSettings.getIndexMetaData())\n+ .settings(Settings.builder().put(indexSettings.getSettings())\n+ .put(IndexSettings.INDEX_GC_DELETES_SETTING.getKey(), TimeValue.timeValueMillis(1))).build();\n+ engine.engineConfig.getIndexSettings().updateIndexMetaData(indexMetaData);\n+ engine.onSettingsChanged();\n+ ParsedDocument document = testParsedDocument(Integer.toString(0), null, testDocumentWithTextField(), SOURCE, null);\n+ final Engine.Index doc = new Engine.Index(newUid(document), document, SequenceNumbers.UNASSIGNED_SEQ_NO, 0,\n+ Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), 0, false);\n+ // first index an append only document and then delete it. such that we have it in the tombstones\n+ engine.index(doc);\n+ engine.delete(new Engine.Delete(doc.type(), doc.id(), doc.uid()));\n+\n+ // now index more append only docs and refresh so we re-enabel the optimization for unsafe version map\n+ ParsedDocument document1 = testParsedDocument(Integer.toString(1), null, testDocumentWithTextField(), SOURCE, null);\n+ engine.index(new Engine.Index(newUid(document1), document1, SequenceNumbers.UNASSIGNED_SEQ_NO, 0,\n+ Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), 0, false));\n+ engine.refresh(\"test\");\n+ ParsedDocument document2 = testParsedDocument(Integer.toString(2), null, testDocumentWithTextField(), SOURCE, null);\n+ engine.index(new Engine.Index(newUid(document2), document2, SequenceNumbers.UNASSIGNED_SEQ_NO, 0,\n+ Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), 0, false));\n+ engine.refresh(\"test\");\n+ ParsedDocument document3 = testParsedDocument(Integer.toString(3), null, testDocumentWithTextField(), SOURCE, null);\n+ final Engine.Index doc3 = new Engine.Index(newUid(document3), document3, SequenceNumbers.UNASSIGNED_SEQ_NO, 0,\n+ Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), 0, false);\n+ engine.index(doc3);\n+ engine.engineConfig.setEnableGcDeletes(true);\n+ // once we are here the version map is unsafe again and we need to do a refresh inside the get calls to ensure we\n+ // de-optimize. We also enabled GCDeletes which now causes pruning tombstones inside that refresh that is done internally\n+ // to ensure we de-optimize. 
One get call will purne and the other will try to lock the version map concurrently while\n+ // holding the lock that pruneTombstones needs and we have a deadlock\n+ CountDownLatch awaitStarted = new CountDownLatch(1);\n+ Thread thread = new Thread(() -> {\n+ awaitStarted.countDown();\n+ try (Engine.GetResult getResult = engine.get(new Engine.Get(true, doc3.type(), doc3.id(), doc3.uid()),\n+ engine::acquireSearcher)) {\n+ assertTrue(getResult.exists());\n+ }\n+ });\n+ thread.start();\n+ awaitStarted.await();\n+ try (Engine.GetResult getResult = engine.get(new Engine.Get(true, doc.type(), doc.id(), doc.uid()),\n+ engine::acquireSearcher)) {\n+ assertFalse(getResult.exists());\n+ }\n+ thread.join();\n+ }\n+ }\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -348,4 +348,27 @@ public void testAddAndDeleteRefreshConcurrently() throws IOException, Interrupte\n }\n }\n }\n+\n+ public void testPruneTombstonesWhileLocked() throws InterruptedException, IOException {\n+ LiveVersionMap map = new LiveVersionMap();\n+ BytesRef uid = uid(\"1\");\n+ ;\n+ try (Releasable ignore = map.acquireLock(uid)) {\n+ map.putUnderLock(uid, new DeleteVersionValue(0, 0, 0, 0));\n+ map.beforeRefresh(); // refresh otherwise we won't prune since it's tracked by the current map\n+ map.afterRefresh(false);\n+ Thread thread = new Thread(() -> {\n+ map.pruneTombstones(Long.MAX_VALUE, 0);\n+ });\n+ thread.start();\n+ thread.join();\n+ assertEquals(1, map.getAllTombstones().size());\n+ }\n+ Thread thread = new Thread(() -> {\n+ map.pruneTombstones(Long.MAX_VALUE, 0);\n+ });\n+ thread.start();\n+ thread.join();\n+ assertEquals(0, map.getAllTombstones().size());\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/engine/LiveVersionMapTests.java",
"status": "modified"
}
]
} |
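The `pruneTombstones` change in the diff above rests on one idea: the background pruner must never block on a per-uid lock, because a caller that already holds such a lock can itself be waiting for the pruner, which is the deadlock described in #28714. The following is a minimal, self-contained sketch of that try-lock pruning pattern under simplified assumptions: a plain `ReentrantLock` per key and a timestamp map stand in for the real `KeyedLock`, `LiveVersionMap` and `DeleteVersionValue` classes, so this illustrates the idiom rather than the actual Elasticsearch code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical stand-in for the per-uid lock plus tombstone map used by the real fix.
final class TryLockPruneSketch {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final Map<String, Long> tombstones = new ConcurrentHashMap<>();

    void putTombstone(String uid, long deleteTime) {
        tombstones.put(uid, deleteTime);
    }

    // Best-effort pruning: skip any entry whose lock is currently held instead of
    // blocking on it, so a thread that already holds a per-uid lock can never
    // deadlock against the pruner; skipped entries are retried on the next run.
    void pruneTombstones(long currentTime, long pruneInterval) {
        for (Map.Entry<String, Long> entry : tombstones.entrySet()) {
            final String uid = entry.getKey();
            final ReentrantLock lock = locks.computeIfAbsent(uid, k -> new ReentrantLock());
            if (lock.tryLock()) { // did we get the lock?
                try {
                    // re-read under the lock, in case the uid was re-added since we pulled the iterator
                    final Long deleteTime = tombstones.get(uid);
                    if (deleteTime != null && currentTime - deleteTime > pruneInterval) {
                        tombstones.remove(uid);
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
    }
}
```

The trade-off, visible in the `testPruneTombstonesWhileLocked` test added above, is that a prune attempt made while the lock is held simply leaves the tombstone in place for a later pass.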
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 6.2.1, Build: 7299dc3/2018-02-07T19:34:26.990113Z, JVM: 1.8.0_131\r\n\r\n**Plugins installed**: []\r\nanalysis-icu\r\n\r\n**JVM version** (`java -version`):\r\nopenjdk version \"1.8.0_131\"\r\nOpenJDK Runtime Environment (build 1.8.0_131-b11)\r\nOpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nFreeBSD 11.1\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAfter upgrading to 6.2.1 from 5.6.4 I get these transport errors between nodes (in this case between the master (10.6.145.102) and a data node (10.6.145.235).\r\n\r\n**Steps to reproduce**:\r\nNever saw this on previous elastic versions, it started to appear after the 6.2.1 upgrade. We have several nodes (three distinct masters, 40 mixed data and client nodes and some client only nodes, between them the java versions differ, masters: 1.8.0_51, data nodes: 1.8.0_144, client nodes: 1.8.0_131).\r\nWe have indices created with various 5.x versions and with a lot of different data.\r\nPlease tell me what exact info should I provide.\r\n\r\n**Provide logs (if relevant)**:\r\n``\r\n[2018-02-17T09:45:45,474][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [fmesm02] failed to execute on node [VuFL_GfOQau3I-ZYHbiW5g]\r\norg.elasticsearch.transport.RemoteTransportException: [Failed to deserialize response from handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler]]\r\nCaused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response from handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler]\r\n at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1441) [elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1400) [elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64) [transport-netty4-6.2.1.jar:6.2.1]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 249 hex: f9\r\n at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:375) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.node.ResponseCollectorService$ComputedNodeStats.<init>(ResponseCollectorService.java:145) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.common.io.stream.StreamInput.readMap(StreamInput.java:460) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.node.AdaptiveSelectionStats.<init>(AdaptiveSelectionStats.java:56) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.common.io.stream.StreamInput.readOptionalWriteable(StreamInput.java:733) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.action.admin.cluster.node.stats.NodeStats.readFrom(NodeStats.java:239) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.TransportResponseHandler.read(TransportResponseHandler.java:47) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1085) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1437) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at 
org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1400) [elasticsearch-6.2.1.jar:6.2.1]\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[?:?]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) ~[?:?]\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) ~[?:?]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) ~[?:?]\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) ~[?:?]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[?:?]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]\r\n at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]\r\n[2018-02-17T09:45:45,487][WARN ][o.e.t.n.Netty4Transport ] [fmesm02] exception caught on transport layer [NettyTcpChannel{localAddress=/10.6.145.102:13476, remoteAddress=10.6.145.235/10.6.145.235:9300}], closing connection\r\njava.lang.IllegalStateException: Message not fully read (response) for requestId [3185238], handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1@3c93bec2], error [false]; resetting\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1407) ~[elasticsearch-6.2.1.jar:6.2.1]\r\n at 
org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64) ~[transport-netty4-6.2.1.jar:6.2.1]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) 
[netty-transport-4.1.16.Final.jar:4.1.16.Final]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2018-02-17T09:45:45,548][INFO ][o.e.c.r.a.AllocationService] [fmesm02] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{fmfe30}{VuFL_GfOQau3I-ZYHbiW5g}{VGhsT_tCRpCngX3cTFSZ0w}{10.6.145.235}{10.6.145.235:9300} transport disconnected]).\r\n``",
"comments": [
{
"body": "The unexpected character varies wildly:\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 146 hex: 92\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 159 hex: 9f\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 242 hex: f2\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 137 hex: 89\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 186 hex: ba\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 251 hex: fb\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 249 hex: f9\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 189 hex: bd\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 153 hex: 99\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 254 hex: fe\r\nCaused by: java.io.IOException: Invalid string; unexpected character: 139 hex: 8b\r\n",
"created_at": "2018-02-17T14:31:02Z"
},
{
"body": "Are all nodes on 6.2.1? Is this issue only occurring between certain nodes? Is the response different if you fire off a `GET /_nodes/stats` request on the master-eligible, the data, or the client nodes? Are all nodes but one reporting a failure in that case, or only a subset of the nodes?\r\nBefore we get into tcp dump territory, can you maybe reproduce this in a simple 2 node cluster (same hardware/OS)? ",
"created_at": "2018-02-17T14:48:47Z"
},
{
"body": "Yes, all nodes are 6.2.1:\r\n``# curl -s 'http://localhost:9200/_cat/nodes?h=node.role,version' | sort | uniq -c\r\n 40 di 6.2.1\r\n 9 i 6.2.1\r\n 3 mi 6.2.1``\r\nThey only differ in their roles and JDK versions.\r\n\r\nI can only see these log entries on the three master(eliglible) nodes. BTW, we query only those for statistics.\r\nThere are many nodes involved, until now 29 of them, each of them are data (and client, mixed) nodes:\r\n``egrep 'exception caught on transport layer' all_masters.log | egrep -o 'remoteAddress.*}' | sort | uniq -c\r\n 2 remoteAddress=10.6.145.193/10.6.145.193:9300}\r\n 1 remoteAddress=10.6.145.195/10.6.145.195:9300}\r\n 2 remoteAddress=10.6.145.197/10.6.145.197:9300}\r\n 1 remoteAddress=10.6.145.199/10.6.145.199:9300}\r\n 4 remoteAddress=10.6.145.200/10.6.145.200:9300}\r\n 1 remoteAddress=10.6.145.201/10.6.145.201:9300}\r\n 1 remoteAddress=10.6.145.202/10.6.145.202:9300}\r\n 1 remoteAddress=10.6.145.203/10.6.145.203:9300}\r\n 1 remoteAddress=10.6.145.204/10.6.145.204:9300}\r\n 1 remoteAddress=10.6.145.206/10.6.145.206:9300}\r\n 1 remoteAddress=10.6.145.210/10.6.145.210:9300}\r\n 1 remoteAddress=10.6.145.211/10.6.145.211:9300}\r\n 1 remoteAddress=10.6.145.223/10.6.145.223:9300}\r\n 2 remoteAddress=10.6.145.225/10.6.145.225:9300}\r\n 1 remoteAddress=10.6.145.226/10.6.145.226:9300}\r\n 3 remoteAddress=10.6.145.227/10.6.145.227:9300}\r\n 3 remoteAddress=10.6.145.228/10.6.145.228:9300}\r\n 1 remoteAddress=10.6.145.232/10.6.145.232:9300}\r\n 2 remoteAddress=10.6.145.233/10.6.145.233:9300}\r\n 1 remoteAddress=10.6.145.234/10.6.145.234:9300}\r\n 3 remoteAddress=10.6.145.235/10.6.145.235:9300}\r\n 3 remoteAddress=10.6.145.236/10.6.145.236:9300}\r\n 1 remoteAddress=10.6.145.237/10.6.145.237:9300}\r\n 1 remoteAddress=10.6.145.239/10.6.145.239:9300}\r\n 3 remoteAddress=10.6.145.240/10.6.145.240:9300}\r\n 2 remoteAddress=10.6.145.241/10.6.145.241:9300}\r\n 2 remoteAddress=10.6.145.242/10.6.145.242:9300}\r\n 2 remoteAddress=10.6.145.243/10.6.145.243:9300}\r\n 2 remoteAddress=10.6.145.244/10.6.145.244:9300}\r\n``\r\nIt happens more often with some of them as you can see, but I can't point one as guilty.\r\nIt also happens on master-eliglible, but non-master nodes, the above contains their logs too.\r\n\r\nToday the log dates 
are:\r\n``[2018-02-17T00:15:46,519][WARN\r\n[2018-02-17T00:21:08,014][WARN\r\n[2018-02-17T02:16:04,699][WARN\r\n[2018-02-17T02:22:18,086][WARN\r\n[2018-02-17T02:25:28,047][WARN\r\n[2018-02-17T02:31:26,304][WARN\r\n[2018-02-17T02:35:43,636][WARN\r\n[2018-02-17T02:50:40,803][WARN\r\n[2018-02-17T02:55:27,373][WARN\r\n[2018-02-17T03:30:54,778][WARN\r\n[2018-02-17T03:55:15,757][WARN\r\n[2018-02-17T05:40:33,378][WARN\r\n[2018-02-17T05:40:48,091][WARN\r\n[2018-02-17T06:10:37,117][WARN\r\n[2018-02-17T06:10:49,796][WARN\r\n[2018-02-17T06:56:14,371][WARN\r\n[2018-02-17T07:11:15,700][WARN\r\n[2018-02-17T07:26:03,577][WARN\r\n[2018-02-17T07:30:39,050][WARN\r\n[2018-02-17T07:36:26,556][WARN\r\n[2018-02-17T08:06:04,656][WARN\r\n[2018-02-17T08:45:43,094][WARN\r\n[2018-02-17T08:55:47,759][WARN\r\n[2018-02-17T09:00:32,387][WARN\r\n[2018-02-17T09:45:45,487][WARN\r\n[2018-02-17T10:15:53,099][WARN\r\n[2018-02-17T10:20:38,481][WARN\r\n[2018-02-17T10:30:29,252][WARN\r\n[2018-02-17T11:00:27,803][WARN\r\n[2018-02-17T11:10:28,640][WARN\r\n[2018-02-17T11:50:43,091][WARN\r\n[2018-02-17T12:00:48,460][WARN\r\n[2018-02-17T12:35:25,549][WARN\r\n[2018-02-17T12:35:32,719][WARN\r\n[2018-02-17T12:50:30,927][WARN\r\n[2018-02-17T13:00:51,557][WARN\r\n[2018-02-17T13:35:25,834][WARN\r\n[2018-02-17T14:15:54,408][WARN\r\n[2018-02-17T14:25:36,535][WARN\r\n[2018-02-17T14:25:46,726][WARN\r\n[2018-02-17T15:10:28,747][WARN\r\n[2018-02-17T15:15:29,193][WARN\r\n[2018-02-17T15:50:16,142][WARN\r\n[2018-02-17T16:00:48,446][WARN\r\n[2018-02-17T16:00:57,826][WARN\r\n[2018-02-17T16:10:23,455][WARN\r\n[2018-02-17T16:20:25,079][WARN\r\n[2018-02-17T16:35:34,404][WARN\r\n[2018-02-17T16:40:28,779][WARN\r\n[2018-02-17T17:05:34,549][WARN\r\n``\r\nMost of them are close to 0 and 5 minutes which correlates with out statistics collection cronjob.\r\nIt doesn't happen on every _nodes/stats query. I've tried to fire some queries by hand, but nothing happened. Will try running the stats collector more often along with a tcpdump.",
"created_at": "2018-02-17T19:04:27Z"
},
{
"body": "I could capture an error, I sent the dump and log in email to you.",
"created_at": "2018-02-17T19:40:57Z"
},
{
"body": "Thanks for the packet capture. I've had a look and I think I know what's going on.\r\n\r\nThe `AdaptiveSelectionStats` object (introduced in 6.1) serializes the `clientOutgoingConnections` map that's concurrently updated in `SearchTransportService`. Serializing the map consists of first writing the size of the map and then serializing the entries. If the number of entries changes while the map is being serialized, the size and number of entries go out of sync. The deserialization routine expects those to be in sync though. I've opened #28718 as a fix.",
"created_at": "2018-02-18T10:37:56Z"
},
{
"body": "The cluster is fine since yesterday with your latest patch:\r\nhttps://github.com/elastic/elasticsearch/pull/28718/commits/ef03a0e84e3309fe30d9b9238e7886b22e1d116c\r\nThanks for the quick fix!",
"created_at": "2018-02-19T09:16:03Z"
},
{
"body": "Thanks for the detailed report and the packet dump. For anyone else running into this bug:\r\nThis can manifest when using the node stats call (`GET /_nodes/stats`), which queries adaptive replica selection stats by default, even if the feature is not used. The only workaround is to explicitly specify (all) the desired metrics `os,jvm,thread_pool,...` on the `GET /_nodes/stats` request to implicitly exclude the collection of the adaptive replica selection stats.",
"created_at": "2018-02-19T09:24:05Z"
}
],
"number": 28713,
"title": "Getting \"Failed to deserialize response from handler\" after upgrading from 5.6.4 to 6.2.1"
} | {
"body": "The `AdaptiveSelectionStats` object serializes the `clientOutgoingConnections` map that's concurrently updated in `SearchTransportService`. Serializing the map consists of first writing the size of the map and then serializing the entries. If the number of entries changes while the map is being serialized, the size and number of entries go out of sync. The deserialization routine expects those to be in sync though.\r\n\r\nCloses #28713",
"number": 28718,
"review_comments": [
{
"body": "I think this is subject to the same race, concurrent modification during the copy? It won’t manifest as an inconsistent snapshot of size and number of entries when iterating for serialization, but instead a concurrent modification exception thrown during the copy?",
"created_at": "2018-02-18T19:24:57Z"
},
{
"body": "See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentHashMap.html\r\n\r\n> Similarly, Iterators, Spliterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration. They do not throw ConcurrentModificationException",
"created_at": "2018-02-18T19:30:31Z"
},
{
"body": "The problem is I did not realize the backing map was a concurrent hash map (the perils of reviewing on mobile).",
"created_at": "2018-02-18T19:40:21Z"
}
],
"title": "Fix AdaptiveSelectionStats serialization bug"
} | {
"commits": [
{
"message": "Fix AdaptiveSelectionStats serialization bug"
},
{
"message": "There looks to be already a second \"snapshot\" method in this class further down"
},
{
"message": "oops"
}
],
"files": [
{
"diff": "@@ -95,10 +95,6 @@ public SearchTransportService(Settings settings, TransportService transportServi\n this.responseWrapper = responseWrapper;\n }\n \n- public Map<String, Long> getClientConnections() {\n- return Collections.unmodifiableMap(clientConnections);\n- }\n-\n public void sendFreeContext(Transport.Connection connection, final long contextId, OriginalIndices originalIndices) {\n transportService.sendRequest(connection, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(originalIndices, contextId),\n TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(new ActionListener<SearchFreeContextResponse>() {",
"filename": "server/src/main/java/org/elasticsearch/action/search/SearchTransportService.java",
"status": "modified"
},
{
"diff": "@@ -121,7 +121,7 @@ public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, bo\n script ? scriptService.stats() : null,\n discoveryStats ? discovery.stats() : null,\n ingest ? ingestService.getPipelineExecutionService().stats() : null,\n- adaptiveSelection ? responseCollectorService.getAdaptiveStats(searchTransportService.getClientConnections()) : null\n+ adaptiveSelection ? responseCollectorService.getAdaptiveStats(searchTransportService.getPendingSearchRequests()) : null\n );\n }\n ",
"filename": "server/src/main/java/org/elasticsearch/node/NodeService.java",
"status": "modified"
}
]
} |
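The failure mode described in the comments and in the PR description above is generic: a size prefix and the entries that follow it must come from the same view of the map, otherwise a concurrent put or remove makes them disagree and the reader ends up misaligned, which surfaces as the "Invalid string" and "Message not fully read" errors in the logs. The sketch below illustrates the racy shape and a snapshot-first alternative, using `java.io.DataOutputStream` as a stand-in for the transport stream; the actual fix in #28718 instead switched the node stats call over to an existing snapshot of pending search requests, as the diff shows.

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class SnapshotSerializationSketch {
    // Concurrently updated, like the clientOutgoingConnections map in the report above.
    private final ConcurrentHashMap<String, Long> liveCounters = new ConcurrentHashMap<>();

    void onRequestSent(String nodeId) {
        liveCounters.merge(nodeId, 1L, Long::sum);
    }

    // Racy shape: the size is taken from the live map, then the entries are iterated
    // from the same live map; a concurrent update can make the two disagree.
    void writeRacy(DataOutputStream out) throws IOException {
        out.writeInt(liveCounters.size());
        for (Map.Entry<String, Long> e : liveCounters.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }

    // Safe shape: copy first, so the size and the entries come from one snapshot.
    // ConcurrentHashMap iterators are weakly consistent and never throw
    // ConcurrentModificationException, so taking the copy is safe too.
    void writeSnapshot(DataOutputStream out) throws IOException {
        final Map<String, Long> snapshot = new HashMap<>(liveCounters);
        out.writeInt(snapshot.size());
        for (Map.Entry<String, Long> e : snapshot.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }
}
```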
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.0.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen the cat API is used on APIs that support URL parameters like index names, then calling those endpoints with `&h` to get help results in an error\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\n# works\r\nGET _cat/shards?help\r\nGET _cat/shards/test\r\n# returns an exception\r\nGET _cat/shards/test?help\r\n```",
"comments": [
{
"body": "I would like to work on this.",
"created_at": "2017-11-23T05:51:48Z"
},
{
"body": "hey @spinscale \r\nI would like to have a go at this. I figured I will ask before doing the work, as the issue does not have an `adoptme` tag.\r\nThanks!",
"created_at": "2017-11-23T06:29:07Z"
},
{
"body": "@jyoti0208 oh crap! I just now saw your earlier comment ( browser did not refresh properly ). Sorry for jumping up the queue :)",
"created_at": "2017-11-23T06:58:56Z"
},
{
"body": "feel free to go ahead! Thanks a lot for working on this!",
"created_at": "2017-11-23T09:33:24Z"
},
{
"body": "I'm getting some broken cat urls when i use \r\n\r\n`http://thecatapi.com/api/images/get?format=xml&results_per_page=20`",
"created_at": "2018-01-23T22:22:41Z"
},
{
"body": "This was explored in #27598 and I do not think that there is much that we [should](https://github.com/elastic/elasticsearch/pull/27598#pullrequestreview-80061309) do here. The behavior would be odd if the index does not exist, or odd if there are unrecognized parameters in the request (do we ignore them? do we throw 404s? do we fail on the unrecognized parameters?). I think we should leave this as-is.",
"created_at": "2018-01-30T11:25:53Z"
},
{
"body": "@jasontedor what about adding a short note in the docs that `help` should not be used with any other url params?",
"created_at": "2018-01-30T19:10:49Z"
},
{
"body": "@olcbean That would be good.",
"created_at": "2018-01-30T19:14:02Z"
}
],
"number": 27424,
"title": "cat API help is broken when url parameters are specified"
} | {
"body": "Add a note to the docs that using the `help` option with cat API, which provide an optional url param, results in an error\r\n\r\nRemoved a duplicate word\r\n\r\nRelates to #27424",
"number": 28686,
"review_comments": [],
"title": "[docs] Add a note to the cat API for the `help` option"
} | {
"commits": [
{
"message": "Add a note to the docs that _cat api `help` option cannot be used if an optional url param is used"
}
],
"files": [
{
"diff": "@@ -55,7 +55,7 @@ GET /_cat/master?help\n --------------------------------------------------\n // CONSOLE\n \n-Might respond respond with:\n+Might respond with:\n \n [source,txt]\n --------------------------------------------------\n@@ -66,6 +66,11 @@ node | n | node name\n --------------------------------------------------\n // TESTRESPONSE[s/[|]/[|]/ _cat]\n \n+NOTE: `help` is not supported if any optional url parameter is used.\n+For example `GET _cat/shards/twitter?help` or `GET _cat/indices/twi*?help`\n+results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help`\n+instead.\n+\n [float]\n [[headers]]\n === Headers",
"filename": "docs/reference/cat.asciidoc",
"status": "modified"
}
]
} |
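For completeness, the behaviour documented in the note above is easy to observe from the Java low-level REST client: `help` on a bare cat endpoint returns the column listing, while combining it with an optional url parameter such as an index pattern yields an error response. The snippet below is a hypothetical illustration that assumes a node running on `localhost:9200`; the endpoints and the error behaviour come from the issue, and the client code itself is not part of the PR.

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

public class CatHelpExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Works: help on the bare endpoint lists the available columns.
            Response ok = client.performRequest("GET", "/_cat/shards?help");
            System.out.println(EntityUtils.toString(ok.getEntity()));

            // Fails: help combined with an optional url parameter (an index pattern here).
            try {
                client.performRequest("GET", "/_cat/shards/twitter?help");
            } catch (ResponseException e) {
                System.out.println("help + index pattern is rejected: " + e.getMessage());
            }
        }
    }
}
```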
{
"body": "elasticsearch-rest-high-level-client Version: 6.2.1\r\n\r\norg.elasticsearch.client.Request#endpoint (line 461) does not URL-encode document ids - this leads to an invalid request if document id contains special characters like \"#\" (e.g. via index(IndexRequest indexRequest))",
"comments": [
{
"body": "That's right, the client leaves ids untouched. Much better than modifying them and leaving users wondering why they can't retrieve those docs back by id. Users can encode their invalid ids as they wish though.",
"created_at": "2018-02-12T09:18:57Z"
},
{
"body": "Elasticsearch supports characters which need URL-encoding like \"#\" in ids - so to my knowledge those ids are perfectly valid ES-ids - right? Thus I need the work-around to build IndexRequests like this:\r\n\r\nnew IndexRequest(index, type, urlEncode(id))\r\n\r\nand retrieve/delete/update like this:\r\n\r\nnew GetRequest(index, type, urlEncode(id))\r\nnew DeleteRequest(index, type, urlEncode(id))\r\nnew UpdateRequest(index, type, urlEncode(id))\r\n\r\nright? In the database they are then stored as unencoded ids.\r\n\r\nFrom my point of view URL-encoding belongs to the final http-request building and not to RestHighLevelClient-Request building? RestHighLevelClient shouldn't worry about how data is sent and that they might need URL-encoding? They could be consistently encoded in the Request#endPoint method which all request builders use.",
"created_at": "2018-02-12T10:02:03Z"
},
{
"body": "@chrbau thanks for the additional info, I think you are right :) my initial response was incorrect, encoding ids doesn't modify their representation in Elasticsearch.",
"created_at": "2018-02-12T10:15:53Z"
}
],
"number": 28625,
"title": "RestHighLevelClient endpoint construction does not URL-encode ids"
} | {
"body": "The REST high-level client supports now encoding of path parts, so that for instance documents with valid ids, but containing characters that need to be encoded as part of urls (`#` etc.), are properly supported. We also make sure that each path part can contain `/` by encoding them properly too.\r\n\r\nCloses #28625",
"number": 28663,
"review_comments": [
{
"body": "Could we use a static encode() method for this? We'll have to reuse it in some URL parameters like routing or parent, right?",
"created_at": "2018-02-13T15:20:47Z"
},
{
"body": "it's complicated :) querystring params should already be encoded by the rest low-level client. but the low-level client expects paths to be externally encoded, as they are provided altogether and we can't encode each part safely anymore. In the high-level client though, it makes sense to encode each part separately as they are provided separately. I think what we do here is all we need.",
"created_at": "2018-02-13T15:23:19Z"
},
{
"body": "I will add tests for this.",
"created_at": "2018-02-13T15:24:02Z"
},
{
"body": "Of course, I just forgot that the low-level client already encodes parameters. Thanks.",
"created_at": "2018-02-13T15:28:01Z"
},
{
"body": "pate -> path\r\n\r\nCan we do something like \r\nif (path.startsWith(\"/\") == false) {\r\npath = \"/\" + path\r\n}\r\n",
"created_at": "2018-02-14T16:32:01Z"
},
{
"body": "I don't think so. parts are not supposed to start with '/'. If they do, like a document with id `/id`, that `/` is part of the id and needs to be encoded, which we do manually. That first slash is added otherwise URI does crazy things in some cases, like trying to extract the scheme when an index name contains `:` (e.g. `cluster:index`). That doesn't happen if the url is absolute, meaning it starts with `/`. That is why we add it, yet when retrieving the encoded url we ignore it as we do substring(1). This stuff is extremely tricky, and buggy, and there isn't a clean way to do what we need in any of the available method utils that I know of/tried.",
"created_at": "2018-02-14T16:40:29Z"
}
],
"title": "REST high-level client: encode path parts"
} | {
"commits": [
{
"message": "REST high-level client: encode path parts\n\nThe REST high-level client supports now encoding of path parts, so that for instance documents with valid ids, but containing characters that need to be encoded as part of urls (`#` etc.), are properly supported. We also make sure that each path part can contain `/` by encoding them properly too.\n\nCloses #28625"
},
{
"message": "add tests for params encoding"
},
{
"message": "fix ccs test failure"
},
{
"message": "fix test failures"
},
{
"message": "Merge branch 'master' into fix/rest-hl-client-encode-parts"
}
],
"files": [
{
"diff": "@@ -73,6 +73,8 @@\n \n import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n+import java.net.URI;\n+import java.net.URISyntaxException;\n import java.nio.charset.Charset;\n import java.util.Collections;\n import java.util.HashMap;\n@@ -568,7 +570,16 @@ static String buildEndpoint(String... parts) {\n StringJoiner joiner = new StringJoiner(\"/\", \"/\", \"\");\n for (String part : parts) {\n if (Strings.hasLength(part)) {\n- joiner.add(part);\n+ try {\n+ //encode each part (e.g. index, type and id) separately before merging them into the path\n+ //we prepend \"/\" to the path part to make this pate absolute, otherwise there can be issues with\n+ //paths that start with `-` or contain `:`\n+ URI uri = new URI(null, null, null, -1, \"/\" + part, null, null);\n+ //manually encode any slash that each part may contain\n+ joiner.add(uri.getRawPath().substring(1).replaceAll(\"/\", \"%2F\"));\n+ } catch (URISyntaxException e) {\n+ throw new IllegalArgumentException(\"Path part [\" + part + \"] couldn't be encoded\", e);\n+ }\n }\n }\n return joiner.toString();",
"filename": "client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.ElasticsearchStatusException;\n import org.elasticsearch.action.DocWriteRequest;\n import org.elasticsearch.action.DocWriteResponse;\n+import org.elasticsearch.action.admin.indices.get.GetIndexRequest;\n import org.elasticsearch.action.bulk.BulkItemResponse;\n import org.elasticsearch.action.bulk.BulkProcessor;\n import org.elasticsearch.action.bulk.BulkRequest;\n@@ -52,6 +53,9 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptType;\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n+import org.joda.time.DateTime;\n+import org.joda.time.DateTimeZone;\n+import org.joda.time.format.DateTimeFormat;\n \n import java.io.IOException;\n import java.util.Collections;\n@@ -648,7 +652,7 @@ public void testBulk() throws IOException {\n validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest);\n }\n \n- public void testBulkProcessorIntegration() throws IOException, InterruptedException {\n+ public void testBulkProcessorIntegration() throws IOException {\n int nbItems = randomIntBetween(10, 100);\n boolean[] errors = new boolean[nbItems];\n \n@@ -762,4 +766,69 @@ private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse b\n }\n }\n }\n+\n+ public void testUrlEncode() throws IOException {\n+ String indexPattern = \"<logstash-{now/M}>\";\n+ String expectedIndex = \"logstash-\" +\n+ DateTimeFormat.forPattern(\"YYYY.MM.dd\").print(new DateTime(DateTimeZone.UTC).monthOfYear().roundFloorCopy());\n+ {\n+ IndexRequest indexRequest = new IndexRequest(indexPattern, \"type\", \"id#1\");\n+ indexRequest.source(\"field\", \"value\");\n+ IndexResponse indexResponse = highLevelClient().index(indexRequest);\n+ assertEquals(expectedIndex, indexResponse.getIndex());\n+ assertEquals(\"type\", indexResponse.getType());\n+ assertEquals(\"id#1\", indexResponse.getId());\n+ }\n+ {\n+ GetRequest getRequest = new GetRequest(indexPattern, \"type\", \"id#1\");\n+ GetResponse getResponse = highLevelClient().get(getRequest);\n+ assertTrue(getResponse.isExists());\n+ assertEquals(expectedIndex, getResponse.getIndex());\n+ assertEquals(\"type\", getResponse.getType());\n+ assertEquals(\"id#1\", getResponse.getId());\n+ }\n+\n+ String docId = \"this/is/the/id/中文\";\n+ {\n+ IndexRequest indexRequest = new IndexRequest(\"index\", \"type\", docId);\n+ indexRequest.source(\"field\", \"value\");\n+ IndexResponse indexResponse = highLevelClient().index(indexRequest);\n+ assertEquals(\"index\", indexResponse.getIndex());\n+ assertEquals(\"type\", indexResponse.getType());\n+ assertEquals(docId, indexResponse.getId());\n+ }\n+ {\n+ GetRequest getRequest = new GetRequest(\"index\", \"type\", docId);\n+ GetResponse getResponse = highLevelClient().get(getRequest);\n+ assertTrue(getResponse.isExists());\n+ assertEquals(\"index\", getResponse.getIndex());\n+ assertEquals(\"type\", getResponse.getType());\n+ assertEquals(docId, getResponse.getId());\n+ }\n+\n+ assertTrue(highLevelClient().indices().exists(new GetIndexRequest().indices(indexPattern, \"index\")));\n+ }\n+\n+ public void testParamsEncode() throws IOException {\n+ //parameters are encoded by the low-level client but let's test that everything works the same when we use the high-level one\n+ String routing = \"routing/中文value#1?\";\n+ {\n+ IndexRequest indexRequest = new IndexRequest(\"index\", \"type\", \"id\");\n+ indexRequest.source(\"field\", \"value\");\n+ indexRequest.routing(routing);\n+ IndexResponse indexResponse = 
highLevelClient().index(indexRequest);\n+ assertEquals(\"index\", indexResponse.getIndex());\n+ assertEquals(\"type\", indexResponse.getType());\n+ assertEquals(\"id\", indexResponse.getId());\n+ }\n+ {\n+ GetRequest getRequest = new GetRequest(\"index\", \"type\", \"id\").routing(routing);\n+ GetResponse getResponse = highLevelClient().get(getRequest);\n+ assertTrue(getResponse.isExists());\n+ assertEquals(\"index\", getResponse.getIndex());\n+ assertEquals(\"type\", getResponse.getType());\n+ assertEquals(\"id\", getResponse.getId());\n+ assertEquals(routing, getResponse.getField(\"_routing\").getValue());\n+ }\n+ }\n }",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java",
"status": "modified"
},
{
"diff": "@@ -1178,6 +1178,22 @@ public void testBuildEndpoint() {\n assertEquals(\"/a/_create\", Request.buildEndpoint(\"a\", null, null, \"_create\"));\n }\n \n+ public void testBuildEndPointEncodeParts() {\n+ assertEquals(\"/-%23index1,index%232/type/id\", Request.buildEndpoint(\"-#index1,index#2\", \"type\", \"id\"));\n+ assertEquals(\"/index/type%232/id\", Request.buildEndpoint(\"index\", \"type#2\", \"id\"));\n+ assertEquals(\"/index/type/this%2Fis%2Fthe%2Fid\", Request.buildEndpoint(\"index\", \"type\", \"this/is/the/id\"));\n+ assertEquals(\"/index/type/this%7Cis%7Cthe%7Cid\", Request.buildEndpoint(\"index\", \"type\", \"this|is|the|id\"));\n+ assertEquals(\"/index/type/id%231\", Request.buildEndpoint(\"index\", \"type\", \"id#1\"));\n+ assertEquals(\"/%3Clogstash-%7Bnow%2FM%7D%3E/_search\", Request.buildEndpoint(\"<logstash-{now/M}>\", \"_search\"));\n+ assertEquals(\"/中文\", Request.buildEndpoint(\"中文\"));\n+ assertEquals(\"/foo%20bar\", Request.buildEndpoint(\"foo bar\"));\n+ assertEquals(\"/foo+bar\", Request.buildEndpoint(\"foo+bar\"));\n+ assertEquals(\"/foo%2Fbar\", Request.buildEndpoint(\"foo/bar\"));\n+ assertEquals(\"/foo%5Ebar\", Request.buildEndpoint(\"foo^bar\"));\n+ assertEquals(\"/cluster1:index1,index2/_search\", Request.buildEndpoint(\"cluster1:index1,index2\", \"_search\"));\n+ assertEquals(\"/*\", Request.buildEndpoint(\"*\"));\n+ }\n+\n public void testEndpoint() {\n assertEquals(\"/index/type/id\", Request.endpoint(\"index\", \"type\", \"id\"));\n assertEquals(\"/index/type/id/_endpoint\", Request.endpoint(\"index\", \"type\", \"id\", \"_endpoint\"));",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java",
"status": "modified"
},
{
"diff": "@@ -74,7 +74,7 @@ public class RestClientSingleHostIntegTests extends RestClientTestCase {\n \n @BeforeClass\n public static void startHttpServer() throws Exception {\n- pathPrefix = randomBoolean() ? \"/testPathPrefix/\" + randomAsciiOfLengthBetween(1, 5) : \"\";\n+ pathPrefix = randomBoolean() ? \"/testPathPrefix/\" + randomAsciiAlphanumOfLengthBetween(1, 5) : \"\";\n httpServer = createHttpServer();\n defaultHeaders = RestClientTestUtil.randomHeaders(getRandom(), \"Header-default\");\n restClient = createRestClient(false, true);\n@@ -101,6 +101,7 @@ private static class ResponseHandler implements HttpHandler {\n \n @Override\n public void handle(HttpExchange httpExchange) throws IOException {\n+ //copy request body to response body so we can verify it was sent\n StringBuilder body = new StringBuilder();\n try (InputStreamReader reader = new InputStreamReader(httpExchange.getRequestBody(), Consts.UTF_8)) {\n char[] buffer = new char[256];\n@@ -109,6 +110,7 @@ public void handle(HttpExchange httpExchange) throws IOException {\n body.append(buffer, 0, read);\n }\n }\n+ //copy request headers to response headers so we can verify they were sent\n Headers requestHeaders = httpExchange.getRequestHeaders();\n Headers responseHeaders = httpExchange.getResponseHeaders();\n for (Map.Entry<String, List<String>> header : requestHeaders.entrySet()) {\n@@ -214,6 +216,41 @@ public void testGetWithBody() throws IOException {\n bodyTest(\"GET\");\n }\n \n+ public void testEncodeParams() throws IOException {\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"this/is/the/routing\"));\n+ assertEquals(pathPrefix + \"/200?routing=this%2Fis%2Fthe%2Frouting\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"this|is|the|routing\"));\n+ assertEquals(pathPrefix + \"/200?routing=this%7Cis%7Cthe%7Crouting\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"routing#1\"));\n+ assertEquals(pathPrefix + \"/200?routing=routing%231\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"中文\"));\n+ assertEquals(pathPrefix + \"/200?routing=%E4%B8%AD%E6%96%87\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"foo bar\"));\n+ assertEquals(pathPrefix + \"/200?routing=foo+bar\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"foo+bar\"));\n+ assertEquals(pathPrefix + \"/200?routing=foo%2Bbar\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"foo/bar\"));\n+ assertEquals(pathPrefix + \"/200?routing=foo%2Fbar\", response.getRequestLine().getUri());\n+ }\n+ {\n+ Response response = restClient.performRequest(\"PUT\", \"/200\", Collections.singletonMap(\"routing\", \"foo^bar\"));\n+ assertEquals(pathPrefix + \"/200?routing=foo%5Ebar\", response.getRequestLine().getUri());\n+ }\n+ }\n+\n /**\n * Verify that credentials are sent on the first request with preemptive auth enabled (default when provided with credentials).\n */",
"filename": "client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.test.rest.yaml;\n \n import com.carrotsearch.randomizedtesting.RandomizedTest;\n-\n import org.apache.http.Header;\n import org.apache.http.HttpEntity;\n import org.apache.http.HttpHost;\n@@ -85,9 +84,9 @@ public ClientYamlTestResponse callApi(String apiName, Map<String, String> params\n Map<String, String> pathParts = new HashMap<>();\n Map<String, String> queryStringParams = new HashMap<>();\n \n- Set<String> apiRequiredPathParts = restApi.getPathParts().entrySet().stream().filter(e -> e.getValue() == true).map(Entry::getKey)\n+ Set<String> apiRequiredPathParts = restApi.getPathParts().entrySet().stream().filter(Entry::getValue).map(Entry::getKey)\n .collect(Collectors.toSet());\n- Set<String> apiRequiredParameters = restApi.getParams().entrySet().stream().filter(e -> e.getValue() == true).map(Entry::getKey)\n+ Set<String> apiRequiredParameters = restApi.getParams().entrySet().stream().filter(Entry::getValue).map(Entry::getKey)\n .collect(Collectors.toSet());\n \n for (Map.Entry<String, String> entry : params.entrySet()) {\n@@ -151,7 +150,7 @@ public ClientYamlTestResponse callApi(String apiName, Map<String, String> params\n for (String pathPart : restPath.getPathParts()) {\n try {\n finalPath.append('/');\n- // We append \"/\" to the path part to handle parts that start with - or other invalid characters\n+ // We prepend \"/\" to the path part to handle parts that start with - or other invalid characters\n URI uri = new URI(null, null, null, -1, \"/\" + pathPart, null, null);\n //manually escape any slash that each part may contain\n finalPath.append(uri.getRawPath().substring(1).replaceAll(\"/\", \"%2F\"));",
"filename": "test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java",
"status": "modified"
}
]
} |
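To make the encoding trick in the `Request.buildEndpoint` diff above easier to reuse and reason about, here is a small standalone sketch of the same `java.net.URI` approach: each path part is wrapped in an absolute, query-less URI so that illegal characters get percent-encoded and a leading `-` or an embedded `:` cannot be misparsed, and any `/` inside the part is then encoded by hand because it is data rather than a separator. The class and method names are invented for illustration; the expected outputs mirror the assertions in `RequestTests`.

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.StringJoiner;

final class PathPartEncodingSketch {

    // Encode a single path part (index, type or id) with the URI trick from the PR.
    static String encodePart(String part) {
        try {
            // prepend "/" so the path is absolute; otherwise URI may try to parse a
            // scheme out of parts containing ':' such as "cluster1:index1"
            URI uri = new URI(null, null, null, -1, "/" + part, null, null);
            // drop the "/" we added and encode any slash inside the part by hand,
            // since a slash here is part of the id, not a path separator
            return uri.getRawPath().substring(1).replaceAll("/", "%2F");
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Path part [" + part + "] couldn't be encoded", e);
        }
    }

    static String buildEndpoint(String... parts) {
        StringJoiner joiner = new StringJoiner("/", "/", "");
        for (String part : parts) {
            if (part != null && part.isEmpty() == false) {
                joiner.add(encodePart(part));
            }
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildEndpoint("index", "type", "id#1"));              // /index/type/id%231
        System.out.println(buildEndpoint("index", "type", "this/is/the/id"));    // /index/type/this%2Fis%2Fthe%2Fid
        System.out.println(buildEndpoint("cluster1:index1,index2", "_search"));  // /cluster1:index1,index2/_search
    }
}
```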
{
"body": "Integrations tests frequently take more than 20 seconds to start up an Elasticsearch node on an empty or small cluster state, which is a lot of time for a computer. Take the console output from any build and search for `#wait (Thread[Task worker for ':',5,main]) completed`, the time after `Took` on the same line is the time that the build had to wait for the node to be available. This is an immediate problem for testing but might also be a problem for users if this boils down to an issue that could make things even worse in some adversarial scenarii.\r\n\r\nRelates #28640",
"comments": [
{
"body": "Here is an excerpt from the output, including timestamps, of a test run that seems to have failed because it took more than 30 seconds to bring a node up:\r\n\r\n```\r\n13:18:33 :x-pack-elasticsearch:qa:rolling-upgrade:with-system-key:v6.3.0-SNAPSHOT#upgradedClusterTestCluster#wait\r\n13:18:33 Task ':x-pack-elasticsearch:qa:rolling-upgrade:with-system-key:v6.3.0-SNAPSHOT#upgradedClusterTestCluster#wait' is not up-to-date because:\r\n13:18:33 Task has not declared any outputs.\r\n13:19:03 Node 0 output:\r\n13:19:03 |-----------------------------------------\r\n13:19:03 | failure marker exists: false\r\n13:19:03 | pid file exists: true\r\n13:19:03 | http ports file exists: true\r\n13:19:03 | transport ports file exists: true\r\n13:19:03 |\r\n13:19:03 | [ant output]\r\n13:19:03 |\r\n13:19:03 | [log]\r\n13:19:03 | warning: ignoring JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8\r\n13:19:03 | [2018-02-12T13:18:49,206][INFO ][o.e.n.Node ] [node-0] initializing ...\r\n13:19:03 | [2018-02-12T13:18:49,267][INFO ][o.e.e.NodeEnvironment ] [node-0] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [104.5gb], net total_space [992.6gb], types [ext4]\r\n13:19:03 | [2018-02-12T13:18:49,267][INFO ][o.e.e.NodeEnvironment ] [node-0] heap size [494.9mb], compressed ordinary object pointers [true]\r\n13:19:03 | [2018-02-12T13:18:49,371][INFO ][o.e.n.Node ] [node-0] node name [node-0], node ID [aD7I2WYuQSOXvdOri3TVcQ]\r\n13:19:03 | [2018-02-12T13:18:49,372][INFO ][o.e.n.Node ] [node-0] version[7.0.0-alpha1-SNAPSHOT], pid[2130], build[37e938f/2018-02-12T11:11:56.823Z], OS[Linux/4.4.0-1032-aws/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_162/25.162-b12]\r\n13:19:03 | [2018-02-12T13:18:49,372][INFO ][o.e.n.Node ] [node-0] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.bgRdl4sQ, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Xms512m, -Xmx512m, -ea, -esa, -Des.path.home=/var/lib/jenkins/workspace/elastic+x-pack-elasticsearch+master+multijob-unix-compatibility/elasticsearch-extra/x-pack-elasticsearch/qa/rolling-upgrade/with-system-key/build/cluster/v6.3.0-SNAPSHOT#upgradedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT, -Des.path.conf=/var/lib/jenkins/workspace/elastic+x-pack-elasticsearch+master+multijob-unix-compatibility/elasticsearch-extra/x-pack-elasticsearch/qa/rolling-upgrade/with-system-key/build/cluster/v6.3.0-SNAPSHOT#upgradedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/config]\r\n13:19:03 | [2018-02-12T13:18:49,372][WARN ][o.e.n.Node ] [node-0] version [7.0.0-alpha1-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [aggs-matrix-stats]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [analysis-common]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO 
][o.e.p.PluginsService ] [node-0] loaded module [ingest-common]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [lang-expression]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [lang-mustache]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [lang-painless]\r\n13:19:03 | [2018-02-12T13:18:51,770][INFO ][o.e.p.PluginsService ] [node-0] loaded module [mapper-extras]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [parent-join]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [percolator]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [rank-eval]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [reindex]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [repository-url]\r\n13:19:03 | [2018-02-12T13:18:51,771][INFO ][o.e.p.PluginsService ] [node-0] loaded module [transport-netty4]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-core]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-deprecation]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-graph]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-logstash]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-ml]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-monitoring]\r\n13:19:03 | [2018-02-12T13:18:51,772][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-security]\r\n13:19:03 | [2018-02-12T13:18:51,773][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-sql]\r\n13:19:03 | [2018-02-12T13:18:51,773][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-upgrade]\r\n13:19:03 | [2018-02-12T13:18:51,773][INFO ][o.e.p.PluginsService ] [node-0] loaded plugin [x-pack-watcher]\r\n13:19:03 | [2018-02-12T13:18:55,244][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/2193] [Main.cc@129] controller (64 bit): Version 7.0.0-alpha1-SNAPSHOT (Build 91f906f0147062) Copyright (c) 2018 Elasticsearch BV\r\n13:19:03 | [2018-02-12T13:18:56,588][DEBUG][o.e.a.ActionModule ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security\r\n13:19:03 | [2018-02-12T13:18:57,137][INFO ][o.e.d.DiscoveryModule ] [node-0] using discovery type [zen]\r\n13:19:03 | [2018-02-12T13:18:57,914][INFO ][o.e.n.Node ] [node-0] initialized\r\n13:19:03 | [2018-02-12T13:18:57,914][INFO ][o.e.n.Node ] [node-0] starting ...\r\n13:19:03 | [2018-02-12T13:18:58,049][INFO ][o.e.t.TransportService ] [node-0] publish_address {127.0.0.1:38864}, bound_addresses {[::1]:45891}, {127.0.0.1:38864}\r\n13:19:03 | [2018-02-12T13:18:58,084][WARN ][o.e.b.BootstrapChecks ] [node-0] HTTPS is required in order to use the token service; please enable HTTPS using the [xpack.security.http.ssl.enabled] setting or disable the token service using the [xpack.security.authc.token.enabled] setting\r\n13:19:03 | [2018-02-12T13:19:02,757][INFO ][o.e.c.s.ClusterApplierService] [node-0] master node changed {previous [], current 
[{node-0}{Jhp_A43OR9KOW2UvUo7OnQ}{t9GoosVjStSO0Qu49OY70A}{127.0.0.1}{127.0.0.1:42114}{testattr=test, upgraded=first, ml.machine_memory=16825872384, ml.max_open_jobs=20, ml.enabled=true}]}, added {{node-0}{Jhp_A43OR9KOW2UvUo7OnQ}{t9GoosVjStSO0Qu49OY70A}{127.0.0.1}{127.0.0.1:42114}{testattr=test, upgraded=first, ml.machine_memory=16825872384, ml.max_open_jobs=20, ml.enabled=true},}, reason: apply cluster state (from master [master {node-0}{Jhp_A43OR9KOW2UvUo7OnQ}{t9GoosVjStSO0Qu49OY70A}{127.0.0.1}{127.0.0.1:42114}{testattr=test, upgraded=first, ml.machine_memory=16825872384, ml.max_open_jobs=20, ml.enabled=true} committed version [197]])\r\n13:19:03 | [2018-02-12T13:19:03,095][INFO ][o.e.x.s.a.TokenService ] [node-0] refresh keys\r\n13:19:03 | [2018-02-12T13:19:03,492][INFO ][o.e.x.s.a.TokenService ] [node-0] refreshed keys\r\n13:19:03 | [2018-02-12T13:19:03,514][INFO ][o.e.l.LicenseService ] [node-0] license [d7c217c8-e7a6-4dfd-99f4-84f446ff9a6a] mode [trial] - valid\r\n13:19:03 | [2018-02-12T13:19:03,542][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-0] publish_address {127.0.0.1:42608}, bound_addresses {[::1]:39088}, {127.0.0.1:42608}\r\n13:19:03 | [2018-02-12T13:19:03,543][INFO ][o.e.n.Node ] [node-0] started\r\n```\r\n\r\nIn particular, more than half of the 30-second timeout elapsed before the timestamp on the first log line saying `[node-0] initializing ...` log line: the task apparently started at 13:18:33, but the first timestamp in the log is 13:18:49, 16 seconds later.",
"created_at": "2018-02-13T08:30:20Z"
},
{
"body": "To look at the variation in the durations of `*ClusterTestCluster#wait` tasks, I downloaded the logs from a selection of recent builds and extracted the times using the following.\r\n\r\n cat log.txt | sed -e '/ClusterTestCluster#wait.*Took/!d; s/.* Took //; s/ .*//'\r\n\r\nThe distribution looks like this, broken down by OS:\r\n\r\n<img width=\"969\" alt=\"screen shot 2018-02-13 at 09 53 15\" src=\"https://user-images.githubusercontent.com/5058284/36143712-c445c01a-10a3-11e8-8fa3-64a74e4fd8f6.png\">\r\n\r\n",
"created_at": "2018-02-13T09:55:24Z"
},
{
"body": "I just ran some xpack rolling upgrade tests with a profiler to see where time is spent. About 70% out of the ~51 seconds of CPU time that were spent on the Elasticsearch node were spent inside the JVM, mostly runnning compilation (~29 seconds).",
"created_at": "2018-02-13T13:48:22Z"
},
{
"body": "Compilation executes concurrently with application execution. While this is an indication that CPU time is being spent on compilation, it is not necessarily indicative of where the real time during startup is going.",
"created_at": "2018-02-13T18:14:08Z"
},
{
"body": "#28659 has added more logging to the node startup, but that 16 second startup time is still there, and is prior to the first log message:\r\n\r\n```\r\n03:27:18 Task has not declared any outputs.\r\n03:27:48 Node 0 output:\r\n03:27:48 |-----------------------------------------\r\n03:27:48 | failure marker exists: false\r\n03:27:48 | pid file exists: true\r\n03:27:48 | http ports file exists: true\r\n03:27:48 | transport ports file exists: true\r\n03:27:48 |\r\n03:27:48 | [ant output]\r\n03:27:48 |\r\n03:27:48 | [log]\r\n03:27:48 | warning: ignoring JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8\r\n03:27:48 | [2018-02-14T03:27:34,575][DEBUG][o.e.b.SystemCallFilter ] Linux seccomp filter installation successful, threads: [all]\r\n```\r\n// snip \r\n```\r\n03:27:48 | [2018-02-14T03:27:34,804][INFO ][o.e.n.Node ] [node-0] initializing …\r\n03:27:48 | [2018-02-14T03:27:34,871][INFO ][o.e.e.NodeEnvironment ] [node-0] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [340.7gb], net total_space [992.6gb], types [ext4]\r\n03:27:48 | [2018-02-14T03:27:34,871][INFO ][o.e.e.NodeEnvironment ] [node-0] heap size [494.9mb], compressed ordinary object pointers [true]\r\n03:27:48 | [2018-02-14T03:27:34,942][INFO ][o.e.n.Node ] [node-0] node name [node-0], node ID [NcAMsKj8Tc-e55eiJciNNQ]\r\n```\r\n// snip\r\n```\r\n03:27:50 * What went wrong:\r\n03:27:50 Execution failed for task ':x-pack-elasticsearch:qa:rolling-upgrade:with-system-key:v6.3.0-SNAPSHOT#upgradedClusterTestCluster#wait'.\r\n03:27:50 > Failed to start elasticsearch: timed out after 30 seconds\r\n```\r\n\r\n\r\n",
"created_at": "2018-02-14T04:01:46Z"
},
{
"body": "Similar behaviour to what @tvernum reported: 13 seconds startup time before the first log message.\r\n\r\n```\r\n06:29:21 :x-pack-elasticsearch:qa:rolling-upgrade:with-system-key:v5.6.8-SNAPSHOT#upgradedClusterTestCluster#start\r\n[...]\r\n06:29:51 | [log]\r\n06:29:51 | warning: ignoring JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8\r\n06:29:51 | [2018-02-14T06:29:34,804][DEBUG][o.e.b.SystemCallFilter ] Linux seccomp filter installation successful, threads: [all]\r\n06:29:51 | [2018-02-14T06:29:34,811][DEBUG][o.e.b.JarHell ] java.class.path:[...]\r\n```",
"created_at": "2018-02-14T08:26:19Z"
},
{
"body": "That last one was 15 sec from task start to first log message:\r\n\r\n```\r\n22:37:55 Task ':x-pack-elasticsearch:qa:rolling-upgrade:without-system-key:v6.3.0-SNAPSHOT#upgradedClusterTestCluster#wait' is not up-to-date because:\r\n22:37:55 Task has not declared any outputs.\r\n22:38:25 Node 0 output:\r\n22:38:25 |-----------------------------------------\r\n22:38:25 | failure marker exists: false\r\n22:38:25 | pid file exists: true\r\n22:38:25 | http ports file exists: true\r\n22:38:25 | transport ports file exists: true\r\n22:38:25 |\r\n22:38:25 | [ant output]\r\n22:38:25 |\r\n22:38:25 | [log]\r\n22:38:25 | warning: ignoring JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8\r\n22:38:25 | Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.\r\n22:38:25 | [2018-02-13T22:38:10,750][DEBUG][o.e.b.SystemCallFilter ] Linux seccomp filter installation successful, threads: [all]\r\n```\r\n\r\nThe extra logging in #28659 has helped to clarify that the bulk of the delay is occurring prior to the bootstrap phase. @jasontedor can you suggest what to do next?",
"created_at": "2018-02-14T08:48:47Z"
},
{
"body": "The issue is only reproducible on our CI systems so I think we need to find out what's characteristic about them. So the first step is IMHO to get some system metrics (e.g. kernel activity). Knowing more about the machine activity / state at the point of the failure will hopefully help us to reproduce this reliably.\r\n\r\nAdditional supporting data may be:\r\n\r\n* kernel version\r\n* JVM version\r\n* How long the build ran until the failure has occurred\r\n\r\nFurthermore, we need to break down which parts of the startup take how long:\r\n\r\n* JVM initialization (e.g. time from process start until the first log line in the GC log)\r\n* Elasticsearch initialization (time from first log line (`[node-X] initializing ...`) until `[node-X] started`).\r\n\r\n",
"created_at": "2018-02-14T09:36:47Z"
},
{
"body": "> The extra logging in #28659 has helped to clarify that the bulk of the delay is occurring prior to the bootstrap phase.\r\n\r\nThat's an interesting data point. We do expect a delay here from `-XX:+AlwaysPreTouch` but we run these nodes with such small heaps (`512m`) that it seems unlikely to be the (sole) explanation.\r\n\r\n> @jasontedor can you suggest what to do next?\r\n\r\nAs you can see by the comment preceding mine, @danielmitterdorfer will take ownership of this one.",
"created_at": "2018-02-14T13:58:33Z"
},
{
"body": "> That's an interesting data point. We do expect a delay here from -XX:+AlwaysPreTouch but we run these nodes with such small heaps (512m) that it seems unlikely to be the (sole) explanation.\r\n\r\nI agree here.\r\n\r\nAs a next step, we will repeatedly run one of the affected builds on a dedicated worker node in order to expose this issue as often as possible. We will also gather additional system data (like paging activity) and try to correlate system behaviour during the time a build has failed.\r\n\r\nWe could also add more JVM logging to see what the JVM is actually doing during startup (I'd leverage JDK 9 unified logging for this).",
"created_at": "2018-02-14T14:39:57Z"
},
{
"body": "> Additional supporting data may be:\r\n>\r\n> * kernel version\r\n> * JVM version\r\n> * How long the build ran until the failure has occurred\r\n\r\nAn analysis of our build statistics has shown that all these variables are irrelevant. Among others, I also analysed the worker's uptime and the build duration. Uptime is irrelevant but most of the builds failed between one and two hours into the build.\r\n\r\nThe only data point that stands out that all builds have been run on `m4.xlarge` worker instances (i.e. *not* bare metal).",
"created_at": "2018-02-14T14:47:54Z"
},
{
"body": "I did a more detailed investigation on one affected CI node and it turns out that `-XX:+AlwaysPreTouch` seems to be the problem (or rather amplifying the underlying problem).\r\n\r\n### Test scenario\r\n\r\nStart Elasticsearch 5.6.7 on an affected node, once with default settings (i.e. `-XX:+AlwaysPreTouch`) and once without pretouching (i.e. `-XX:-AlwaysPreTouch`). We measure the time from process start until the first log line:\r\n\r\n| `AlwaysPreTouch` | Time to First Log Message | \r\n|------------------|---------------------------| \r\n| enabled | 80 seconds | \r\n| disabled | 3 seconds | \r\n\r\n### Analysis\r\n\r\nHowever, this is not the whole story. I also ran `perf` while starting Elasticsearch and this shows up highest in `perf report`:\r\n\r\n```\r\nOverhead Command Shared Object Symbol \r\n 67.45% java [kernel.kallsyms] [k] isolate_freepages_block\r\n```\r\n\r\nThis function is called when the kernel tries to free up a large enough contiguous block of memory (see also [its source code](https://elixir.bootlin.com/linux/v4.4/source/mm/compaction.c#L389)) which leads me to the assumption that memory in our CI systems is fragmented due to the volume of builds and the kernel is [compacting memory](https://lwn.net/Articles/368869/). Also quoting [Memory Compaction v8](https://lwn.net/Articles/384150/) (note: that source is almost 8 years old now so the information may or may not correct as of today):\r\n\r\n> Memory compaction can be triggered in one of three ways. It may be triggered\r\nexplicitly by writing any value to /proc/sys/vm/compact_memory and compacting\r\nall of memory. It can be triggered on a per-node basis by writing any\r\nvalue to /sys/devices/system/node/nodeN/compact where N is the node ID to\r\nbe compacted. When a process fails to allocate a high-order page, it may\r\ncompact memory in an attempt to satisfy the allocation instead of entering\r\ndirect reclaim.\r\n\r\n\r\n### Next steps\r\n\r\nWe will now record further data to actually back up that assumption.",
"created_at": "2018-02-15T11:52:30Z"
},
{
"body": "Initial test with the same test setup as above on a machine where we have seen timeouts again:\r\n\r\n| `AlwaysPreTouch` | Time to First Log Message [s] | Time until started [s] | \r\n|------------------|-------------------------------|------------------------| \r\n| enabled | 29 | 37 | \r\n| disabled | 3 | 16 | \r\n\r\n\r\nAfter explicitly compacting memory (`echo 1 > /proc/sys/vm/compact_memory`):\r\n\r\n(I waited for 60 seconds with the next test after compacting memory).\r\n\r\n| `AlwaysPreTouch` | Time to First Log Message [s] | Time until started [s] | \r\n|------------------|-------------------------------|------------------------| \r\n| enabled | 29 | 37 | \r\n| disabled | 3 | 16 | \r\n\r\nThe times were identical. Furthermore, there were no noticeable differences in `/proc/pagetypeinfo` or `/proc/vmstat` so I would consider the request for memory compaction ineffective.\r\n\r\nAfter dropping the page cache and Slab objects (`sync && echo 3 > /proc/sys/vm/drop_caches`):\r\n\r\n| `AlwaysPreTouch` | Time to First Log Message [s] | Time until started [s] | \r\n|------------------|-------------------------------|------------------------| \r\n| enabled | 3 | 10 | \r\n| disabled | 1 | 8 | \r\n\r\nThis is a significant improvement so I suggest that we drop the page cache before each build as a first step.",
"created_at": "2018-02-16T14:59:33Z"
},
{
"body": "Hurrah!\r\n\r\nJust to check - you're proposing dropping the page cache at the start of the whole build and not at the start of each individual integration test? If so, this means that dirty pages will accumulate throughout the test run. This might well be fine: I'm just checking I understand.",
"created_at": "2018-02-19T08:07:09Z"
},
{
"body": "Yes, your understanding of my proposal is correct. IMHO this is the most straightforward change that we can make in the first step. You are also right that the situation may get worse over the course of a single build. In that case we would probably need to modify kernel parameters to write back dirty pages more aggressively but I'd rather stick to the stock configuration first and only tune if it should turn out be necessary.",
"created_at": "2018-02-19T08:14:34Z"
},
{
"body": "We have implemented changes in CI yesterday to drop the page cache as well as request memory compaction (it turned out that dropping caches provided the most benefit, and requesting memory compaction afterwards improved the situation even more).\r\n\r\nBefore this change we have seen around 16 build failures caused by ES startup timeouts per day. In the last 24 hours it was only 2 build failures. So while this has improved the situation significantly, we are not quite there yet.\r\n\r\nI'd still want to avoid fiddling with kernel parameters (it's not that I have no idea which knobs to turn, it's rather that I think this should really be our very last resort). I currently analyse the memory requirements of our build and will try to reduce our memory requirements (i.e. reducing compiler memory, test memory, etc.). For example, I already found out that we (IMHO unnecessarily) store geoip data on-heap thus increasing our heap requirements in the build as well (see #28782 for details).",
"created_at": "2018-02-22T13:08:22Z"
},
{
"body": "Two weeks ago we had a problem in our CI infrastructure so the script that dropped the page cache and requested memory compaction was not called (it was called initially as I noted in my previous comment but then we added another check which made it completely ineffective). After this was fixed, we did not see a single failure due to these timeouts within two weeks. Hence, closing.",
"created_at": "2018-03-13T09:12:45Z"
},
{
"body": "Removing unnecessary modules (like, the x-pack ones) can reduce startup time by a few seconds.\r\n\r\n(I still can't get it below 8s which roughly one bajillion CPU cycles, which is really disappointing.)",
"created_at": "2022-03-19T21:29:01Z"
}
],
"number": 28650,
"title": "Investigate startup times"
} | {
"body": "We need to investigate why startup is taking so long in CI for standalone tests. This commit adds logging for bootstrap and network code that is executed before the node starts initializing in case this is the source of the trouble.\r\n\r\nRelates #28650",
"number": 28659,
"review_comments": [],
"title": "Add startup logging for standalone tests"
} | {
"commits": [
{
"message": "Add startup logging for standalone tests\n\nWe need to investigate why startup is taking so long in CI for\nstandalone tests. This commit adds logging for bootstrap and network\ncode that is executed before the node starts initializing in case this\nis the source of the trouble."
}
],
"files": [
{
"diff": "@@ -163,6 +163,7 @@ class NodeInfo {\n }\n \n env = ['JAVA_HOME': project.runtimeJavaHome]\n+ args.addAll(\"-E\", \"logger.org.elasticsearch.bootstrap=debug\", \"-E\", \"logger.org.elasticsearch.common.network=debug\")\n args.addAll(\"-E\", \"node.portsfile=true\")\n String collectedSystemProperties = config.systemProperties.collect { key, value -> \"-D${key}=${value}\" }.join(\" \")\n String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + \" \" + config.jvmArgs",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy",
"status": "modified"
}
]
} |
{
"body": "Today when offering an item to a size blocking queue that is at capacity, we first increment the size of the queue and then check if the capacity is exceeded or not. If the capacity is indeed exceeded, we do not add the item to the queue and immediately decrement the size of the queue. However, this incremented size is exposed externally even though the offered item was never added to the queue (this is effectively a race on the size of the queue). This can lead to misleading statistics such as the size of a queue backing a thread pool. This commit fixes this issue so that such a size is never exposed. To do this, we replace the hidden CAS loop that increments the size of the queue with a CAS loop that only increments the size of the queue if we are going to be successful in adding the item to the queue.\r\n\r\nRelates #28547",
"comments": [
{
"body": "Thanks @dakrone and @bleskes.",
"created_at": "2018-02-08T13:42:55Z"
}
],
"number": 28557,
"title": "Fix size blocking queue to not lie about its weight"
} | {
"body": "The maxSize won't be updated if an unlucky schedule happens as follows\r\n1. The `queueSizeThread` starts and gets paused before the spin loop\r\n2. The `queueOfferThread` starts and finishes, thus the `spin` flag is set to false\r\n3. The `queueSizeThread` is scheduled but the `spin` is false already, therefore it won't pull the queue size\r\n\r\nThis commit makes the `queueSizeThread` keep polling the queue size until the spin flag is off and the max size is assigned at least one.\r\n\r\nRelates #28557",
"number": 28582,
"review_comments": [],
"title": "TEST: Fix unassigned maxSize in testQueueSize"
} | {
"commits": [
{
"message": "TEST: Fix unassigned maxSize in testQueueSize\n\nThe maxSize won't be updated if an unlucky schedule happens as follows\n1. The `queueSizeThread` starts and gets paused before the spin loop\n2. The `queueOfferThread` starts and finishes, thus the `spin` flag is set to false\n3. The `queueSizeThread` is scheduled but the `spin` is false already, therefore it won't pull the queue size\n\nThis commit makes the `queueSizeThread` keep polling the queue size until the spin flag is off and max size is assigned at least one.\n\nRelates #28557"
}
],
"files": [
{
"diff": "@@ -58,7 +58,7 @@ public void testQueueSize() throws InterruptedException {\n } catch (final InterruptedException e) {\n throw new RuntimeException(e);\n }\n- while (spin.get()) {\n+ while (spin.get() || maxSize.get() == 0) {\n maxSize.set(Math.max(maxSize.get(), sizeBlockingQueue.size()));\n }\n });",
"filename": "server/src/test/java/org/elasticsearch/common/util/concurrent/SizeBlockingQueueTests.java",
"status": "modified"
}
]
} |
{
"body": "We now read the plugin descriptor when removing an old plugin. This is to check if we are removing a plugin that is extended by another plugin. However, when reading the descriptor we enforce that it is of the same version that we are. This is not the case when a user has upgraded Elasticsearch and is now trying to remove an old plugin. This commit fixes this by skipping the version enforcement when reading the plugin descriptor only when removing a plugin.\r\n\r\nCloses #28538",
"comments": [
{
"body": "> This looks ok, but I would rather not have 2 variants of readFromProperties. What about moving the enforcement out to another method in install/startup, so reading the properties is just reading properties?\r\n\r\nI have mixed feelings about this. Currently we validate everything in `readFromProperties`. We have never had a need to skip a portion of the validation (because we did not previously need to read the properties descriptor except when we expected it to be perfect). I am okay reconsidering this, but I would prefer that to be in a follow-up?",
"created_at": "2018-02-06T21:57:29Z"
}
],
"number": 28540,
"title": "Fix the ability to remove old plugin"
} | {
"body": "This commit moves the semantic validation (like which version a plugin\r\nwas built for or which java version it is compatible with) from reading\r\na plugin descriptor, leaving the checks on the format of the descriptor\r\nintact.\r\n\r\nrelates #28540\r\n",
"number": 28581,
"review_comments": [],
"title": "Plugins: Separate plugin semantic validation from properties format validation"
} | {
"commits": [
{
"message": "Plugins: Separate plugin semantic validation from properties format validation\n\nThis commit moves the semantic validation (like which version a plugin\nwas built for or which java version it is compatible with) from reading\na plugin descriptor, leaving the checks on the format of the descriptor\nintact.\n\nrelates #28540"
}
],
"files": [
{
"diff": "@@ -569,6 +569,7 @@ private void verifyPluginName(Path pluginPath, String pluginName, Path candidate\n /** Load information about the plugin, and verify it can be installed with no errors. */\n private PluginInfo loadPluginInfo(Terminal terminal, Path pluginRoot, boolean isBatch, Environment env) throws Exception {\n final PluginInfo info = PluginInfo.readFromProperties(pluginRoot);\n+ PluginsService.verifyCompatibility(info);\n \n // checking for existing version of the plugin\n verifyPluginName(env.pluginsFile(), info.getName(), pluginRoot);\n@@ -653,6 +654,7 @@ private void installMetaPlugin(Terminal terminal, boolean isBatch, Path tmpRoot,\n continue;\n }\n final PluginInfo info = PluginInfo.readFromProperties(plugin);\n+ PluginsService.verifyCompatibility(info);\n verifyPluginName(env.pluginsFile(), info.getName(), plugin);\n pluginPaths.add(plugin);\n }",
"filename": "distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.plugins;\n \n import joptsimple.OptionSet;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cli.EnvironmentAwareCommand;\n import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.common.Nullable;\n@@ -84,15 +85,11 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th\n \n private void printPlugin(Environment env, Terminal terminal, Path plugin, String prefix) throws IOException {\n terminal.println(Terminal.Verbosity.SILENT, prefix + plugin.getFileName().toString());\n- try {\n- PluginInfo info = PluginInfo.readFromProperties(env.pluginsFile().resolve(plugin.toAbsolutePath()));\n- terminal.println(Terminal.Verbosity.VERBOSE, info.toString(prefix));\n- } catch (IllegalArgumentException e) {\n- if (e.getMessage().contains(\"incompatible with version\")) {\n- terminal.println(\"WARNING: \" + e.getMessage());\n- } else {\n- throw e;\n- }\n+ PluginInfo info = PluginInfo.readFromProperties(env.pluginsFile().resolve(plugin.toAbsolutePath()));\n+ terminal.println(Terminal.Verbosity.VERBOSE, info.toString(prefix));\n+ if (info.getElasticsearchVersion().equals(Version.CURRENT) == false) {\n+ terminal.println(\"WARNING: plugin [\" + info.getName() + \"] was built for Elasticsearch version \" + info.getVersion() +\n+ \" but version \" + Version.CURRENT + \" is required\");\n }\n }\n }",
"filename": "distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,7 @@ void execute(Terminal terminal, Environment env, String pluginName, boolean purg\n \n // first make sure nothing extends this plugin\n List<String> usedBy = new ArrayList<>();\n- Set<PluginsService.Bundle> bundles = PluginsService.getPluginBundles(env.pluginsFile(), false);\n+ Set<PluginsService.Bundle> bundles = PluginsService.getPluginBundles(env.pluginsFile());\n for (PluginsService.Bundle bundle : bundles) {\n for (String extendedPlugin : bundle.plugin.getExtendedPlugins()) {\n if (extendedPlugin.equals(pluginName)) {",
"filename": "distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/RemovePluginCommand.java",
"status": "modified"
},
{
"diff": "@@ -362,11 +362,7 @@ public void testExistingIncompatiblePlugin() throws Exception {\n buildFakePlugin(env, \"fake desc 2\", \"fake_plugin2\", \"org.fake2\");\n \n MockTerminal terminal = listPlugins(home);\n- final String message = String.format(Locale.ROOT,\n- \"plugin [%s] is incompatible with version [%s]; was designed for version [%s]\",\n- \"fake_plugin1\",\n- Version.CURRENT.toString(),\n- \"1.0.0\");\n+ String message = \"plugin [fake_plugin1] was built for Elasticsearch version 1.0 but version \" + Version.CURRENT + \" is required\";\n assertEquals(\n \"fake_plugin1\\n\" + \"WARNING: \" + message + \"\\n\" + \"fake_plugin2\\n\",\n terminal.getOutput());\n@@ -388,11 +384,7 @@ public void testExistingIncompatibleMetaPlugin() throws Exception {\n buildFakePlugin(env, \"fake desc 2\", \"fake_plugin2\", \"org.fake2\");\n \n MockTerminal terminal = listPlugins(home);\n- final String message = String.format(Locale.ROOT,\n- \"plugin [%s] is incompatible with version [%s]; was designed for version [%s]\",\n- \"fake_plugin1\",\n- Version.CURRENT.toString(),\n- \"1.0.0\");\n+ String message = \"plugin [fake_plugin1] was built for Elasticsearch version 1.0 but version \" + Version.CURRENT + \" is required\";\n assertEquals(\n \"fake_plugin2\\nmeta_plugin\\n\\tfake_plugin1\\n\" + \"WARNING: \" + message + \"\\n\",\n terminal.getOutput());",
"filename": "distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/ListPluginsCommandTests.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.http.HttpTransportSettings;\n import org.elasticsearch.plugins.PluginInfo;\n+import org.elasticsearch.plugins.PluginsService;\n import org.elasticsearch.secure_sm.SecureSM;\n import org.elasticsearch.transport.TcpTransport;\n \n@@ -161,7 +162,7 @@ static Map<String, URL> getCodebaseJarMap(Set<URL> urls) {\n static Map<String,Policy> getPluginPermissions(Environment environment) throws IOException, NoSuchAlgorithmException {\n Map<String,Policy> map = new HashMap<>();\n // collect up set of plugins and modules by listing directories.\n- Set<Path> pluginsAndModules = new LinkedHashSet<>(PluginInfo.extractAllPlugins(environment.pluginsFile()));\n+ Set<Path> pluginsAndModules = new LinkedHashSet<>(PluginsService.findPluginDirs(environment.pluginsFile()));\n \n if (Files.exists(environment.modulesFile())) {\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(environment.modulesFile())) {",
"filename": "server/src/main/java/org/elasticsearch/bootstrap/Security.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.plugins.Platforms;\n import org.elasticsearch.plugins.PluginInfo;\n+import org.elasticsearch.plugins.PluginsService;\n \n import java.io.Closeable;\n import java.io.IOException;\n@@ -70,7 +71,7 @@ void spawnNativePluginControllers(final Environment environment) throws IOExcept\n * For each plugin, attempt to spawn the controller daemon. Silently ignore any plugin that\n * don't include a controller for the correct platform.\n */\n- List<Path> paths = PluginInfo.extractAllPlugins(pluginsFile);\n+ List<Path> paths = PluginsService.findPluginDirs(pluginsFile);\n for (Path plugin : paths) {\n final PluginInfo info = PluginInfo.readFromProperties(plugin);\n final Path spawnPath = Platforms.nativeControllerPath(plugin);",
"filename": "server/src/main/java/org/elasticsearch/bootstrap/Spawner.java",
"status": "modified"
},
{
"diff": "@@ -150,67 +150,13 @@ public void writeTo(final StreamOutput out) throws IOException {\n }\n \n /**\n- * Extracts all {@link PluginInfo} from the provided {@code rootPath} expanding meta plugins if needed.\n- * @param rootPath the path where the plugins are installed\n- * @return A list of all plugin paths installed in the {@code rootPath}\n- * @throws IOException if an I/O exception occurred reading the plugin descriptors\n- */\n- public static List<Path> extractAllPlugins(final Path rootPath) throws IOException {\n- final List<Path> plugins = new LinkedList<>(); // order is already lost, but some filesystems have it\n- final Set<String> seen = new HashSet<>();\n- if (Files.exists(rootPath)) {\n- try (DirectoryStream<Path> stream = Files.newDirectoryStream(rootPath)) {\n- for (Path plugin : stream) {\n- if (FileSystemUtils.isDesktopServicesStore(plugin) ||\n- plugin.getFileName().toString().startsWith(\".removing-\")) {\n- continue;\n- }\n- if (seen.add(plugin.getFileName().toString()) == false) {\n- throw new IllegalStateException(\"duplicate plugin: \" + plugin);\n- }\n- if (MetaPluginInfo.isMetaPlugin(plugin)) {\n- try (DirectoryStream<Path> subStream = Files.newDirectoryStream(plugin)) {\n- for (Path subPlugin : subStream) {\n- if (MetaPluginInfo.isPropertiesFile(subPlugin) ||\n- FileSystemUtils.isDesktopServicesStore(subPlugin)) {\n- continue;\n- }\n- if (seen.add(subPlugin.getFileName().toString()) == false) {\n- throw new IllegalStateException(\"duplicate plugin: \" + subPlugin);\n- }\n- plugins.add(subPlugin);\n- }\n- }\n- } else {\n- plugins.add(plugin);\n- }\n- }\n- }\n- }\n- return plugins;\n- }\n-\n- /**\n- * Reads and validates the plugin descriptor file.\n- *\n- * @param path the path to the root directory for the plugin\n- * @return the plugin info\n- * @throws IOException if an I/O exception occurred reading the plugin descriptor\n- */\n- public static PluginInfo readFromProperties(final Path path) throws IOException {\n- return readFromProperties(path, true);\n- }\n-\n- /**\n- * Reads and validates the plugin descriptor file. 
If {@code enforceVersion} is false then version enforcement for the plugin descriptor\n- * is skipped.\n+ * Reads the plugin descriptor file.\n *\n * @param path the path to the root directory for the plugin\n- * @param enforceVersion whether or not to enforce the version when reading plugin descriptors\n * @return the plugin info\n * @throws IOException if an I/O exception occurred reading the plugin descriptor\n */\n- static PluginInfo readFromProperties(final Path path, final boolean enforceVersion) throws IOException {\n+ public static PluginInfo readFromProperties(final Path path) throws IOException {\n final Path descriptor = path.resolve(ES_PLUGIN_PROPERTIES);\n \n final Map<String, String> propsMap;\n@@ -244,22 +190,12 @@ static PluginInfo readFromProperties(final Path path, final boolean enforceVersi\n \"property [elasticsearch.version] is missing for plugin [\" + name + \"]\");\n }\n final Version esVersion = Version.fromString(esVersionString);\n- if (enforceVersion && esVersion.equals(Version.CURRENT) == false) {\n- final String message = String.format(\n- Locale.ROOT,\n- \"plugin [%s] is incompatible with version [%s]; was designed for version [%s]\",\n- name,\n- Version.CURRENT.toString(),\n- esVersionString);\n- throw new IllegalArgumentException(message);\n- }\n final String javaVersionString = propsMap.remove(\"java.version\");\n if (javaVersionString == null) {\n throw new IllegalArgumentException(\n \"property [java.version] is missing for plugin [\" + name + \"]\");\n }\n JarHell.checkVersionFormat(javaVersionString);\n- JarHell.checkJavaVersion(name, javaVersionString);\n final String classname = propsMap.remove(\"classname\");\n if (classname == null) {\n throw new IllegalArgumentException(",
"filename": "server/src/main/java/org/elasticsearch/plugins/PluginInfo.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.component.LifecycleComponent;\n import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n@@ -56,6 +57,7 @@\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.LinkedHashSet;\n+import java.util.LinkedList;\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n@@ -278,6 +280,59 @@ public int hashCode() {\n }\n }\n \n+ /**\n+ * Extracts all installed plugin directories from the provided {@code rootPath} expanding meta plugins if needed.\n+ * @param rootPath the path where the plugins are installed\n+ * @return A list of all plugin paths installed in the {@code rootPath}\n+ * @throws IOException if an I/O exception occurred reading the directories\n+ */\n+ public static List<Path> findPluginDirs(final Path rootPath) throws IOException {\n+ final List<Path> plugins = new ArrayList<>();\n+ final Set<String> seen = new HashSet<>();\n+ if (Files.exists(rootPath)) {\n+ try (DirectoryStream<Path> stream = Files.newDirectoryStream(rootPath)) {\n+ for (Path plugin : stream) {\n+ if (FileSystemUtils.isDesktopServicesStore(plugin) ||\n+ plugin.getFileName().toString().startsWith(\".removing-\") ||\n+ plugin.getFileName().toString().startsWith(\".installing-\")) {\n+ continue;\n+ }\n+ if (seen.add(plugin.getFileName().toString()) == false) {\n+ throw new IllegalStateException(\"duplicate plugin: \" + plugin);\n+ }\n+ if (MetaPluginInfo.isMetaPlugin(plugin)) {\n+ try (DirectoryStream<Path> subStream = Files.newDirectoryStream(plugin)) {\n+ for (Path subPlugin : subStream) {\n+ if (MetaPluginInfo.isPropertiesFile(subPlugin) ||\n+ FileSystemUtils.isDesktopServicesStore(subPlugin)) {\n+ continue;\n+ }\n+ if (seen.add(subPlugin.getFileName().toString()) == false) {\n+ throw new IllegalStateException(\"duplicate plugin: \" + subPlugin);\n+ }\n+ plugins.add(subPlugin);\n+ }\n+ }\n+ } else {\n+ plugins.add(plugin);\n+ }\n+ }\n+ }\n+ }\n+ return plugins;\n+ }\n+\n+ /**\n+ * Verify the given plugin is compatible with the current Elasticsearch installation.\n+ */\n+ static void verifyCompatibility(PluginInfo info) {\n+ if (info.getElasticsearchVersion().equals(Version.CURRENT) == false) {\n+ throw new IllegalArgumentException(\"Plugin [\" + info.getName() + \"] was built for Elasticsearch version \"\n+ + info.getElasticsearchVersion() + \" but version \" + Version.CURRENT + \" is running\");\n+ }\n+ JarHell.checkJavaVersion(info.getName(), info.getJavaVersion());\n+ }\n+\n // similar in impl to getPluginBundles, but DO NOT try to make them share code.\n // we don't need to inherit all the leniency, and things are different enough.\n static Set<Bundle> getModuleBundles(Path modulesDirectory) throws IOException {\n@@ -326,28 +381,15 @@ static void checkForFailedPluginRemovals(final Path pluginsDirectory) throws IOE\n * @throws IOException if an I/O exception occurs reading the plugin bundles\n */\n static Set<Bundle> getPluginBundles(final Path pluginsDirectory) throws IOException {\n- return getPluginBundles(pluginsDirectory, true);\n- }\n-\n- /**\n- * Get the plugin bundles from the specified directory. 
If {@code enforceVersion} is true, then the version in each plugin descriptor\n- * must match the current version.\n- *\n- * @param pluginsDirectory the directory\n- * @param enforceVersion whether or not to enforce the version when reading plugin descriptors\n- * @return the set of plugin bundles in the specified directory\n- * @throws IOException if an I/O exception occurs reading the plugin bundles\n- */\n- static Set<Bundle> getPluginBundles(final Path pluginsDirectory, final boolean enforceVersion) throws IOException {\n Logger logger = Loggers.getLogger(PluginsService.class);\n Set<Bundle> bundles = new LinkedHashSet<>();\n \n- List<Path> infos = PluginInfo.extractAllPlugins(pluginsDirectory);\n+ List<Path> infos = findPluginDirs(pluginsDirectory);\n for (Path plugin : infos) {\n logger.trace(\"--- adding plugin [{}]\", plugin.toAbsolutePath());\n final PluginInfo info;\n try {\n- info = PluginInfo.readFromProperties(plugin, enforceVersion);\n+ info = PluginInfo.readFromProperties(plugin);\n } catch (IOException e) {\n throw new IllegalStateException(\"Could not load plugin descriptor for existing plugin [\"\n + plugin.getFileName() + \"]. Was the plugin built before 2.0?\", e);\n@@ -480,6 +522,8 @@ static void checkBundleJarHell(Bundle bundle, Map<String, Set<URL>> transitiveUr\n private Plugin loadBundle(Bundle bundle, Map<String, Plugin> loaded) {\n String name = bundle.plugin.getName();\n \n+ verifyCompatibility(bundle.plugin);\n+\n // collect loaders of extended plugins\n List<ClassLoader> extendedLoaders = new ArrayList<>();\n for (String extendedPluginName : bundle.plugin.getExtendedPlugins()) {",
"filename": "server/src/main/java/org/elasticsearch/plugins/PluginsService.java",
"status": "modified"
},
{
"diff": "@@ -113,7 +113,7 @@ public void testExtractAllPluginsWithDuplicates() throws Exception {\n \"classname\", \"FakePlugin\");\n \n IllegalStateException exc =\n- expectThrows(IllegalStateException.class, () -> PluginInfo.extractAllPlugins(pluginDir));\n+ expectThrows(IllegalStateException.class, () -> PluginsService.findPluginDirs(pluginDir));\n assertThat(exc.getMessage(), containsString(\"duplicate plugin\"));\n assertThat(exc.getMessage(), endsWith(\"plugin1\"));\n }",
"filename": "server/src/test/java/org/elasticsearch/plugins/MetaPluginInfoTests.java",
"status": "modified"
},
{
"diff": "@@ -103,20 +103,6 @@ public void testReadFromPropertiesJavaVersionMissing() throws Exception {\n assertThat(e.getMessage(), containsString(\"[java.version] is missing\"));\n }\n \n- public void testReadFromPropertiesJavaVersionIncompatible() throws Exception {\n- String pluginName = \"fake-plugin\";\n- Path pluginDir = createTempDir().resolve(pluginName);\n- PluginTestUtil.writePluginProperties(pluginDir,\n- \"description\", \"fake desc\",\n- \"name\", pluginName,\n- \"elasticsearch.version\", Version.CURRENT.toString(),\n- \"java.version\", \"1000000.0\",\n- \"classname\", \"FakePlugin\",\n- \"version\", \"1.0\");\n- IllegalStateException e = expectThrows(IllegalStateException.class, () -> PluginInfo.readFromProperties(pluginDir));\n- assertThat(e.getMessage(), containsString(pluginName + \" requires Java\"));\n- }\n-\n public void testReadFromPropertiesBadJavaVersionFormat() throws Exception {\n String pluginName = \"fake-plugin\";\n Path pluginDir = createTempDir().resolve(pluginName);\n@@ -143,17 +129,6 @@ public void testReadFromPropertiesBogusElasticsearchVersion() throws Exception {\n assertThat(e.getMessage(), containsString(\"version needs to contain major, minor, and revision\"));\n }\n \n- public void testReadFromPropertiesOldElasticsearchVersion() throws Exception {\n- Path pluginDir = createTempDir().resolve(\"fake-plugin\");\n- PluginTestUtil.writePluginProperties(pluginDir,\n- \"description\", \"fake desc\",\n- \"name\", \"my_plugin\",\n- \"version\", \"1.0\",\n- \"elasticsearch.version\", Version.V_5_0_0.toString());\n- IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> PluginInfo.readFromProperties(pluginDir));\n- assertThat(e.getMessage(), containsString(\"was designed for version [5.0.0]\"));\n- }\n-\n public void testReadFromPropertiesJvmMissingClassname() throws Exception {\n Path pluginDir = createTempDir().resolve(\"fake-plugin\");\n PluginTestUtil.writePluginProperties(pluginDir,",
"filename": "server/src/test/java/org/elasticsearch/plugins/PluginInfoTests.java",
"status": "modified"
},
{
"diff": "@@ -590,4 +590,18 @@ public void testNonExtensibleDep() throws Exception {\n IllegalStateException e = expectThrows(IllegalStateException.class, () -> newPluginsService(settings));\n assertEquals(\"Plugin [myplugin] cannot extend non-extensible plugin [nonextensible]\", e.getMessage());\n }\n+\n+ public void testIncompatibleElasticsearchVersion() throws Exception {\n+ PluginInfo info = new PluginInfo(\"my_plugin\", \"desc\", \"1.0\", Version.V_5_0_0,\n+ \"1.8\", \"FakePlugin\", Collections.emptyList(), false, false);\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> PluginsService.verifyCompatibility(info));\n+ assertThat(e.getMessage(), containsString(\"was built for Elasticsearch version 5.0.0\"));\n+ }\n+\n+ public void testIncompatibleJavaVersion() throws Exception {\n+ PluginInfo info = new PluginInfo(\"my_plugin\", \"desc\", \"1.0\", Version.CURRENT,\n+ \"1000000.0\", \"FakePlugin\", Collections.emptyList(), false, false);\n+ IllegalStateException e = expectThrows(IllegalStateException.class, () -> PluginsService.verifyCompatibility(info));\n+ assertThat(e.getMessage(), containsString(\"my_plugin requires Java\"));\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java",
"status": "modified"
}
]
} |
{
"body": "Today we are lenient and we open an index if it has broken settings. This can happen if a user installs a plugin that registers an index setting, creates an index with that setting, stop their node, removes the plugin, and then restarts the node. In this case, the index will have a setting that we do not recognize yet we open the index anyway. This leniency is dangerous so this commit removes it. Note that we still are lenient on upgrades and we should really reconsider this in a follow-up.\r\n",
"comments": [
{
"body": "@s1monw I am opening this PR for discussion.",
"created_at": "2017-10-12T19:27:10Z"
},
{
"body": "@jasontedor why didn't you merge this?",
"created_at": "2017-12-05T09:11:17Z"
},
{
"body": "I forgot about this one, sorry @s1monw. I will bring up to date with master and integrate tomorrow.",
"created_at": "2017-12-06T02:26:17Z"
}
],
"number": 26995,
"title": "Do not open indices with broken settings"
} | {
"body": "- remove the leniency on opening indices with unrecognized settings and\r\ninstead such an index will be closed with the unrecognized settings archived\r\n- do not open any index with archived settings\r\n- users can remove archived settings via the wildcard archived\r\n- return \"index_open_exception\" 400 status code when trying to open an\r\nindex with broken settings\r\n\r\nRelates to #26995\r\nCloses #26998\r\n",
"number": 28574,
"review_comments": [
{
"body": "I do not think we need a new exception type? Is there an existing one that we can reuse?",
"created_at": "2018-03-14T13:21:37Z"
},
{
"body": "Instead of a string comparison, can we push a boolean down?",
"created_at": "2018-03-14T13:21:56Z"
},
{
"body": "@jasontedor Thanks Jason for the review! \r\n\r\nI have addressed your other comment about passing boolean.\r\n\r\nAbout the exception type - I could not find any relevant exception. `ILLEGAL_INDEX_SHARD_STATE_EXCEPTION` - 404, `INDEX_CLOSED_EXCEPTION` - 400? `ElasticsearchException` that was used in the code before produces 500, while I think we want to produce 400, as it is a user's error to open an index with broken settings. What would you suggest?",
"created_at": "2018-03-15T23:50:52Z"
},
{
"body": "Why not `IllegalArgumentException`?",
"created_at": "2018-03-16T00:03:36Z"
}
],
"title": "Do not open indices with broken index settings"
} | {
"commits": [
{
"message": "Do not open indices with broken index settings\n\n- remove the leniency on opening indices with unrecognized settings and\ninstead such an index will be closed with the unrecognized settings archived\n- do not open any index with archived settings\n- users can remove archived settings via the wildcard archived\n- return \"index_open_exception\" 400 status code when trying to open an\nindex with broken settings\n\nRelates to #26995\n\nCloses #26998"
},
{
"message": "Do not open indices with broken index settings\n\n- remove the leniency on opening indices with unrecognized settings and\ninstead such an index will be closed\nwith the unrecognized settings archived\n- do not open any index with archived settings\n- users can remove archived settings via the wildcard archived\n- return \"index_open_exception\" 400 status code when trying to open an\nindex with broken settings\n\nRelates to #26995\n\nCloses #26998"
},
{
"message": "Do not open indices with broken index settings\n\n- remove the leniency on opening indices with unrecognized settings and\ninstead such an index will be closed\nwith the unrecognized settings archived\n- do not open any index with archived settings\n- users can remove archived settings via the wildcard archived\n- return illegal_argument_exception 400 status code when trying\nto open an index with broken/archived settings\n\nRelates to #26995\n\nCloses #26998"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into do-not-open-indexes-with-broken-settings"
},
{
"message": "Do not open indices with broken index settings\n\n- remove the leniency on opening indices with unrecognized settings and\ninstead such an index will be closed\nwith the unrecognized settings archived\n- do not open any index with archived settings\n- users can remove archived settings via the wildcard archived\n- return illegal_argument_exception 400 status code when trying\nto open an index with broken/archived settings\n\nRelates to #26995\n\nCloses #26998"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.cluster.metadata;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.close.CloseIndexClusterStateUpdateRequest;\n@@ -193,9 +192,9 @@ public ClusterState execute(ClusterState currentState) {\n // We need to check that this index can be upgraded to the current version\n indexMetaData = metaDataIndexUpgradeService.upgradeIndexMetaData(indexMetaData, minIndexCompatibilityVersion);\n try {\n- indicesService.verifyIndexMetadata(indexMetaData, indexMetaData);\n+ indicesService.verifyIndexMetadata(indexMetaData, indexMetaData, false);\n } catch (Exception e) {\n- throw new ElasticsearchException(\"Failed to verify index \" + indexMetaData.getIndex(), e);\n+ throw new IllegalArgumentException(\"Failed to open index! Failed to verify index \" + indexMetaData.getIndex(), e);\n }\n \n mdBuilder.put(indexMetaData, true);",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexStateService.java",
"status": "modified"
},
{
"diff": "@@ -276,16 +276,18 @@ public ClusterState execute(ClusterState currentState) {\n for (Index index : openIndices) {\n final IndexMetaData currentMetaData = currentState.getMetaData().getIndexSafe(index);\n final IndexMetaData updatedMetaData = updatedState.metaData().getIndexSafe(index);\n- indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData);\n+ indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData, false);\n }\n for (Index index : closeIndices) {\n final IndexMetaData currentMetaData = currentState.getMetaData().getIndexSafe(index);\n final IndexMetaData updatedMetaData = updatedState.metaData().getIndexSafe(index);\n // Verifies that the current index settings can be updated with the updated dynamic settings.\n- indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData);\n+ // Ignore archived settings during verification, as closed indexes can have archived settings\n+ indicesService.verifyIndexMetadata(currentMetaData, updatedMetaData, true);\n // Now check that we can create the index with the updated settings (dynamic and non-dynamic).\n // This step is mandatory since we allow to update non-dynamic settings on closed indices.\n- indicesService.verifyIndexMetadata(updatedMetaData, updatedMetaData);\n+ // Ignore archived settings during verification, , as closed indexes can have archived settings\n+ indicesService.verifyIndexMetadata(updatedMetaData, updatedMetaData, true);\n }\n } catch (IOException ex) {\n throw ExceptionsHelper.convertToElastic(ex);",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@ public void performStateRecovery(final GatewayStateRecoveredListener listener) t\n try {\n if (electedIndexMetaData.getState() == IndexMetaData.State.OPEN) {\n // verify that we can actually create this index - if not we recover it as closed with lots of warn logs\n- indicesService.verifyIndexMetadata(electedIndexMetaData, electedIndexMetaData);\n+ indicesService.verifyIndexMetadata(electedIndexMetaData, electedIndexMetaData, false);\n }\n } catch (Exception e) {\n final Index electedIndex = electedIndexMetaData.getIndex();",
"filename": "server/src/main/java/org/elasticsearch/gateway/Gateway.java",
"status": "modified"
},
{
"diff": "@@ -406,6 +406,7 @@ public void onStoreClosed(ShardId shardId) {\n final IndexService indexService =\n createIndexService(\n \"create index\",\n+ true,\n indexMetaData,\n indicesQueryCache,\n indicesFieldDataCache,\n@@ -428,14 +429,15 @@ public void onStoreClosed(ShardId shardId) {\n * This creates a new IndexService without registering it\n */\n private synchronized IndexService createIndexService(final String reason,\n+ final boolean ignoreArchivedSettings,\n IndexMetaData indexMetaData,\n IndicesQueryCache indicesQueryCache,\n IndicesFieldDataCache indicesFieldDataCache,\n List<IndexEventListener> builtInListeners,\n IndexingOperationListener... indexingOperationListeners) throws IOException {\n final IndexSettings idxSettings = new IndexSettings(indexMetaData, this.settings, indexScopedSettings);\n // we ignore private settings since they are not registered settings\n- indexScopedSettings.validate(indexMetaData.getSettings(), true, true, true);\n+ indexScopedSettings.validate(indexMetaData.getSettings(), true, true, ignoreArchivedSettings);\n logger.debug(\"creating Index [{}], shards [{}]/[{}] - reason [{}]\",\n indexMetaData.getIndex(),\n idxSettings.getNumberOfShards(),\n@@ -485,16 +487,17 @@ public synchronized MapperService createIndexMapperService(IndexMetaData indexMe\n * This method will throw an exception if the creation or the update fails.\n * The created {@link IndexService} will not be registered and will be closed immediately.\n */\n- public synchronized void verifyIndexMetadata(IndexMetaData metaData, IndexMetaData metaDataUpdate) throws IOException {\n+ public synchronized void verifyIndexMetadata(IndexMetaData metaData, IndexMetaData metaDataUpdate, boolean ignoreArchivedSettings)\n+ throws IOException {\n final List<Closeable> closeables = new ArrayList<>();\n try {\n IndicesFieldDataCache indicesFieldDataCache = new IndicesFieldDataCache(settings, new IndexFieldDataCache.Listener() {});\n closeables.add(indicesFieldDataCache);\n IndicesQueryCache indicesQueryCache = new IndicesQueryCache(settings);\n closeables.add(indicesQueryCache);\n // this will also fail if some plugin fails etc. which is nice since we can verify that early\n- final IndexService service =\n- createIndexService(\"metadata verification\", metaData, indicesQueryCache, indicesFieldDataCache, emptyList());\n+ final IndexService service = createIndexService(\"metadata verification\", ignoreArchivedSettings, metaData,\n+ indicesQueryCache, indicesFieldDataCache, emptyList());\n closeables.add(() -> service.close(\"metadata verification\", false));\n service.mapperService().merge(metaData, MapperService.MergeReason.MAPPING_RECOVERY);\n if (metaData.equals(metaDataUpdate) == false) {",
"filename": "server/src/main/java/org/elasticsearch/indices/IndicesService.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.action.admin.indices.create;\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.UnavailableShardsException;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n@@ -59,7 +58,6 @@\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.hasToString;\n-import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n@@ -392,12 +390,11 @@ public Settings onNodeStopped(String nodeName) throws Exception {\n assertThat(stateAfterRestart.getMetaData().index(metaData.getIndex()).getState(), equalTo(IndexMetaData.State.CLOSE));\n \n // try to open the index\n- final ElasticsearchException e =\n- expectThrows(ElasticsearchException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n+ final Exception e =\n+ expectThrows(IllegalArgumentException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n assertThat(e, hasToString(containsString(\"Failed to verify index \" + metaData.getIndex())));\n assertNotNull(e.getCause());\n- assertThat(e.getCause(), instanceOf(IllegalArgumentException.class));\n- assertThat(e, hasToString(containsString(\"unknown setting [index.foo]\")));\n+ assertThat(e.getCause().getMessage(), hasToString(containsString(\"unknown setting [index.foo]\")));\n }\n \n }",
"filename": "server/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,6 @@\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.discovery.zen.ElectMasterService;\n import org.elasticsearch.env.NodeEnvironment;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.node.Node;\n import org.elasticsearch.test.ESIntegTestCase;\n@@ -55,8 +54,8 @@\n \n import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDIATE;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.hamcrest.Matchers.startsWith;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n@@ -374,12 +373,15 @@ public Settings onNodeStopped(final String nodeName) throws Exception {\n });\n }\n \n+\n /**\n- * This test really tests worst case scenario where we have a broken setting or any setting that prevents an index from being\n- * allocated in our metadata that we recover. In that case we now have the ability to check the index on local recovery from disk\n- * if it is sane and if we can successfully create an IndexService. This also includes plugins etc.\n+ * This test tests that we don't open indices with unknown index settings.\n+ * - when a node starts an IndexService for the index with unknown settings will not be created.\n+ * Then index will be closed with the unknown settings archived.\n+ * - an index with archived settings can not be opened with index open API.\n+ * - to open this index, archived settings must be removed via the wildcard archived.*\n */\n- public void testRecoverBrokenIndexMetadata() throws Exception {\n+ public void testDoNotOpenIndexWithUnknownOrAchivedSettings() throws Exception {\n logger.info(\"--> starting one node\");\n internalCluster().startNode();\n logger.info(\"--> indexing a simple document\");\n@@ -400,10 +402,7 @@ public void testRecoverBrokenIndexMetadata() throws Exception {\n for (NodeEnvironment services : internalCluster().getInstances(NodeEnvironment.class)) {\n IndexMetaData brokenMeta = IndexMetaData.builder(metaData).settings(Settings.builder().put(metaData.getSettings())\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT.minimumIndexCompatibilityVersion().id)\n- // this is invalid but should be archived\n- .put(\"index.similarity.BM25.type\", \"classic\")\n- // this one is not validated ahead of time and breaks allocation\n- .put(\"index.analysis.filter.myCollator.type\", \"icu_collation\")\n+ .put(\"index.unknown.setting\", \"true\")\n ).build();\n IndexMetaData.FORMAT.write(brokenMeta, services.indexPaths(brokenMeta.getIndex()));\n }\n@@ -412,14 +411,24 @@ public void testRecoverBrokenIndexMetadata() throws Exception {\n // this is crucial otherwise the state call below might not contain the index yet\n ensureGreen(metaData.getIndex().getName());\n state = client().admin().cluster().prepareState().get().getState();\n+ // assert that the index can't be opened\n assertEquals(IndexMetaData.State.CLOSE, state.getMetaData().index(metaData.getIndex()).getState());\n- assertEquals(\"classic\", state.getMetaData().index(metaData.getIndex()).getSettings().get(\"archived.index.similarity.BM25.type\"));\n- // try to open it with the broken setting - fail again!\n- 
ElasticsearchException ex = expectThrows(ElasticsearchException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n- assertEquals(ex.getMessage(), \"Failed to verify index \" + metaData.getIndex());\n+ // assert that the unrecognized setting got archived\n+ assertEquals(\"true\", state.getMetaData().index(metaData.getIndex()).getSettings().get(\"archived.index.unknown.setting\"));\n+\n+ // try to open it with the archived setting - fail again with IndexOpenException\n+ Exception ex = expectThrows(IllegalArgumentException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n+ assertThat(ex.getMessage(), startsWith(\"Failed to open index! Failed to verify index \" + metaData.getIndex()));\n assertNotNull(ex.getCause());\n assertEquals(IllegalArgumentException.class, ex.getCause().getClass());\n- assertEquals(ex.getCause().getMessage(), \"Unknown filter type [icu_collation] for [myCollator]\");\n+ assertThat(ex.getCause().getMessage(), startsWith(\"unknown setting [archived.index.unknown.setting]\"));\n+\n+ // delete archived settings and try to open index again - this time successful\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(Settings.builder().putNull(\"archived.*\")).get();\n+ client().admin().indices().prepareOpen(\"test\").get();\n+ state = client().admin().cluster().prepareState().get().getState();\n+ assertNull(state.getMetaData().index(metaData.getIndex()).getSettings().get(\"archived.index.unknown.setting\"));\n+ assertEquals(IndexMetaData.State.OPEN, state.getMetaData().index(metaData.getIndex()).getState());\n }\n \n /**\n@@ -472,10 +481,9 @@ public void testRecoverMissingAnalyzer() throws Exception {\n assertEquals(IndexMetaData.State.CLOSE, state.getMetaData().index(metaData.getIndex()).getState());\n \n // try to open it with the broken setting - fail again!\n- ElasticsearchException ex = expectThrows(ElasticsearchException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n- assertEquals(ex.getMessage(), \"Failed to verify index \" + metaData.getIndex());\n+ Exception ex = expectThrows(IllegalArgumentException.class, () -> client().admin().indices().prepareOpen(\"test\").get());\n+ assertThat(ex.getMessage(), startsWith(\"Failed to open index! Failed to verify index \" + metaData.getIndex()));\n assertNotNull(ex.getCause());\n- assertEquals(MapperParsingException.class, ex.getCause().getClass());\n assertThat(ex.getCause().getMessage(), containsString(\"analyzer [test] not found for field [field1]\"));\n }\n ",
"filename": "server/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java",
"status": "modified"
}
]
} |
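The test in the diff above removes archived settings with the `archived.*` wildcard before reopening the index. A minimal client-side sketch of that sequence, assuming a transport-style `Client` handle and the index name `test` used in the test (both are illustrative assumptions):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

final class ReopenArchivedIndex {
    /** Drop every archived.* setting, then reopen the index, mirroring the test above. */
    static void reopen(Client client, String index) {
        client.admin().indices().prepareUpdateSettings(index)
                .setSettings(Settings.builder().putNull("archived.*")) // wipe the unknown settings that were archived
                .get();
        client.admin().indices().prepareOpen(index).get();             // the open call now succeeds
    }
}
```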
{
"body": "This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metaData.\r\nThis method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.\r\n\r\nRelates #27782",
"comments": [
{
"body": "@colings86 sorry ;), I asked for help because I thought that each builder would need a specific test for the added method but BaseAggregationTestCase can \r\ntest it simply so I decided to do it in a single pr. I'll add a specific test for #27782 in a follow up.",
"created_at": "2018-01-29T22:21:13Z"
},
{
"body": "Thanks for reviewing @colings86 .\r\nI pushed a commit after yesterday's discussion that changes the method name (`shallowCopy`) and ensures that we also recreate mutable objects in the builders.\r\nCan you take another look ?",
"created_at": "2018-01-31T10:05:52Z"
}
],
"number": 28430,
"title": "Add a shallow copy method to aggregation builders"
} | {
"body": "This commit adds a test to check that the rewrite of a sub-aggregation triggers a copy of the parent aggregation.\r\n\r\nRelates #28430\r\nCloses #27782",
"number": 28491,
"review_comments": [],
"title": "Add a test for sub-aggregations rewrite"
} | {
"commits": [
{
"message": "Add a test for sub-aggregations rewrite\n\nThis commit adds a test to check that the rewrite of a sub-aggregation triggers a copy of the parent aggregation.\n\nRelates #28430\nCloses #27782"
}
],
"files": [
{
"diff": "@@ -33,10 +33,13 @@\n import org.elasticsearch.search.aggregations.BaseAggregationTestCase;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n \n import java.io.IOException;\n import java.util.Collections;\n \n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n \n public class FiltersTests extends BaseAggregationTestCase<FiltersAggregationBuilder> {\n@@ -154,5 +157,22 @@ public void testRewrite() throws IOException {\n assertEquals(\"my-filter\", ((FiltersAggregationBuilder) rewritten).filters().get(0).key());\n assertThat(((FiltersAggregationBuilder) rewritten).filters().get(0).filter(), instanceOf(MatchAllQueryBuilder.class));\n assertTrue(((FiltersAggregationBuilder) rewritten).isKeyed());\n+\n+ // test sub-agg filter that does rewrite\n+ original = new TermsAggregationBuilder(\"terms\", ValueType.BOOLEAN)\n+ .subAggregation(\n+ new FiltersAggregationBuilder(\"my-agg\", new KeyedFilter(\"my-filter\", new BoolQueryBuilder()))\n+ );\n+ rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertNotSame(original, rewritten);\n+ assertNotEquals(original, rewritten);\n+ assertThat(rewritten, instanceOf(TermsAggregationBuilder.class));\n+ assertThat(rewritten.getSubAggregations().size(), equalTo(1));\n+ AggregationBuilder subAgg = rewritten.getSubAggregations().get(0);\n+ assertThat(subAgg, instanceOf(FiltersAggregationBuilder.class));\n+ assertNotSame(original.getSubAggregations().get(0), subAgg);\n+ assertEquals(\"my-agg\", subAgg.getName());\n+ assertSame(rewritten,\n+ rewritten.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L)));\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersTests.java",
"status": "modified"
}
]
} |
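The assertions in the test above check that rewriting a sub-aggregation yields a new parent instance instead of mutating the original. A generic sketch of that copy-on-child-rewrite idea, using a hypothetical builder class rather than the Elasticsearch `AggregationBuilder` API (the real `shallowCopy` additions appear in a later diff in this document):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical stand-in for a builder tree; not the Elasticsearch AggregationBuilder API. */
final class TreeBuilder {
    final String name;
    final List<TreeBuilder> children;

    TreeBuilder(String name, List<TreeBuilder> children) {
        this.name = name;
        this.children = children;
    }

    /** Rewrite children first; return a shallow copy of this node only if a child changed. */
    TreeBuilder rewrite() {
        List<TreeBuilder> rewritten = new ArrayList<>(children.size());
        boolean changed = false;
        for (TreeBuilder child : children) {
            TreeBuilder next = child.rewrite();
            changed |= next != child;
            rewritten.add(next);
        }
        // Returning `this` when nothing changed lets callers use identity (assertSame) to
        // detect that no further rewrite rounds are needed, as the test above does.
        return changed ? new TreeBuilder(name, rewritten) : this;
    }
}
```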
{
"body": "**Elasticsearch version**\r\n6.0.0, 6.0.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** :\r\nopenjdk version \"1.8.0_151\"\r\nOpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.17.10.2-b12)\r\nOpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n\r\n**OS version** :\r\nUbuntu 17.10 with kernel 4.13.0-19-generic\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nQuery with **filter aggregation** based on **terms filter lookup** **nested below other aggregation** terminates with error **\"query must be rewritten first\"**. The same query executed on version 5.6.5 finishes with success. \r\n\r\n**This error is very similar to already reported and closed: #21301**\r\n\r\n**The only difference is that now query fails when nested aggregation is used.** \r\n\r\n\r\n**Steps to reproduce**:\r\nRecreation steps are based on those provided in issue #21301:\r\n\r\n\r\n1. create index test_posts\r\n\r\n```\r\nPUT test_posts\r\n{\r\n \"mappings\": {\r\n \"post\": {\r\n \"properties\": {\r\n \"mentionIDs\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n2. create index test_users\r\n\r\n```\r\nPUT test_users\r\n{\r\n \"mappings\": {\r\n \"user\": {\r\n \"properties\": {\r\n \"notifications\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n3. index some user\r\n\r\n```\r\nPUT test_users/user/USR|CLIENTID|1234\r\n{\r\n \"notifications\": [\"abc\"]\r\n}\r\n```\r\n\r\n\r\n4. insert some post\r\n\r\n```\r\nPUT test_posts/post/POST|4321\r\n{\r\n \"mentionIDs\": [\"abc\"]\r\n}\r\n```\r\n\r\n5. execute search (this search finishes with success - it is similar to that from #21301)\r\n\r\n```\r\nGET test_posts/_search\r\n{\r\n \"aggs\": {\r\n \"itemsNotify\": {\r\n \"filter\": {\r\n \"terms\": {\r\n \"mentionIDs\": {\r\n \"index\": \"test_users\",\r\n \"type\": \"user\",\r\n \"id\": \"USR|CLIENTID|1234\",\r\n \"path\": \"notifications\",\r\n \"routing\": \"CLIENTID\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**6. This search fails with error on ES 6.0.0 and ES 6.0.1:**\r\n\r\n```\r\nGET /test_posts/_search\r\n{\r\n \"aggs\": {\r\n \"facets\": {\r\n \"global\": {},\r\n \"aggs\": {\r\n \"filteredFacets\": {\r\n \"filter\": {\r\n \"terms\": {\r\n \"mentionIDs\": {\r\n \"index\": \"test_users\",\r\n \"type\": \"user\",\r\n \"id\": \"USR|CLIENTID|1234\",\r\n \"path\": \"notifications\",\r\n \"routing\": \"CLIENTID\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n\r\n\r\nQuery result is:\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"unsupported_operation_exception\",\r\n \"reason\": \"query must be rewritten first\"\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test_posts\",\r\n \"node\": \"5ipdZJUdSqKf5gnOkwlkDg\",\r\n \"reason\": {\r\n \"type\": \"unsupported_operation_exception\",\r\n \"reason\": \"query must be rewritten first\"\r\n }\r\n }\r\n ]\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\n**Above query finishes with success on ES 5.6.5**\r\n\r\n**Fragment of logs from server**:\r\n[es-query-must-be-rewritten.log](https://github.com/elastic/elasticsearch/files/1552226/es-query-must-be-rewritten.log)\r\n\r\n",
"comments": [
{
"body": "Thanks @mrszg it's a bug in the rewrite phase of aggregations, only the root aggregation gets rewritten and sub aggregations are skipped. I'll take a look.",
"created_at": "2017-12-13T08:40:15Z"
},
{
"body": "It seems that version 6.1.0 is affected too.",
"created_at": "2017-12-14T08:24:05Z"
},
{
"body": "@jimczi Do you know when this bug will be fixed?",
"created_at": "2017-12-18T09:56:31Z"
},
{
"body": "Unfortunately this bug requires a larger fix than I initially thought. The rewriting of aggregations has a bug when multiple rewrite of the query is needed, the `terms` query is rewritten in multiple rounds (create the fetch query to get the document, retrieve the document and creates the final query) but currently only one rewrite is called. We know how to fix this but it will require some time, I cannot give any date or version for the fix though @lbrzekowski , I'll update this issue when I have a pr ready for it.",
"created_at": "2017-12-18T10:04:25Z"
},
{
"body": "@jimczi Can you provide any update on this issue? ",
"created_at": "2018-01-07T19:02:20Z"
},
{
"body": "Hi, \r\n\r\nIs this fixed in elasticsearch 6.2 ? I tried using 6.2 but I still see the same error. which release this will get fixed ?\r\n",
"created_at": "2018-05-02T21:45:20Z"
},
{
"body": "Same question as @bbansal - is there a chance of getting this fix in 6.2, or will we need to wait for 6.3 to become current & migrate?",
"created_at": "2018-05-16T16:09:30Z"
},
{
"body": "@bbansal , @jgnieuwhof sorry the fix is targeted for 6.3 only so you'll need to wait for the release and migrate to the new version.",
"created_at": "2018-05-16T16:23:12Z"
},
{
"body": "@jimczi - noted, thanks for the update :)",
"created_at": "2018-05-16T16:25:12Z"
},
{
"body": "Hi,\r\n\r\nI am using v6.7.1 but found that this is still not fixed. Please let us know when will this be fixed?",
"created_at": "2019-04-08T10:40:13Z"
},
{
"body": "Hi @hrsvrma , this issue is fixed since version 6.3 so if you can reproduce a similar failure please open a new issue with a clear recreation.",
"created_at": "2019-04-08T10:47:45Z"
},
{
"body": "HI @jimczi , Thanks for your response. As per your suggestion, I'll open a new issue with clear recreation steps.",
"created_at": "2019-04-08T10:59:56Z"
}
],
"number": 27782,
"title": "ES 6.0 regression: \"query must be rewritten first\" exception when using terms lookup filter in nested aggregation"
} | {
"body": "This commit adds a test to check that the rewrite of a sub-aggregation triggers a copy of the parent aggregation.\r\n\r\nRelates #28430\r\nCloses #27782",
"number": 28491,
"review_comments": [],
"title": "Add a test for sub-aggregations rewrite"
} | {
"commits": [
{
"message": "Add a test for sub-aggregations rewrite\n\nThis commit adds a test to check that the rewrite of a sub-aggregation triggers a copy of the parent aggregation.\n\nRelates #28430\nCloses #27782"
}
],
"files": [
{
"diff": "@@ -33,10 +33,13 @@\n import org.elasticsearch.search.aggregations.BaseAggregationTestCase;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n \n import java.io.IOException;\n import java.util.Collections;\n \n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n \n public class FiltersTests extends BaseAggregationTestCase<FiltersAggregationBuilder> {\n@@ -154,5 +157,22 @@ public void testRewrite() throws IOException {\n assertEquals(\"my-filter\", ((FiltersAggregationBuilder) rewritten).filters().get(0).key());\n assertThat(((FiltersAggregationBuilder) rewritten).filters().get(0).filter(), instanceOf(MatchAllQueryBuilder.class));\n assertTrue(((FiltersAggregationBuilder) rewritten).isKeyed());\n+\n+ // test sub-agg filter that does rewrite\n+ original = new TermsAggregationBuilder(\"terms\", ValueType.BOOLEAN)\n+ .subAggregation(\n+ new FiltersAggregationBuilder(\"my-agg\", new KeyedFilter(\"my-filter\", new BoolQueryBuilder()))\n+ );\n+ rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertNotSame(original, rewritten);\n+ assertNotEquals(original, rewritten);\n+ assertThat(rewritten, instanceOf(TermsAggregationBuilder.class));\n+ assertThat(rewritten.getSubAggregations().size(), equalTo(1));\n+ AggregationBuilder subAgg = rewritten.getSubAggregations().get(0);\n+ assertThat(subAgg, instanceOf(FiltersAggregationBuilder.class));\n+ assertNotSame(original.getSubAggregations().get(0), subAgg);\n+ assertEquals(\"my-agg\", subAgg.getName());\n+ assertSame(rewritten,\n+ rewritten.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L)));\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersTests.java",
"status": "modified"
}
]
} |
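The comment above about the `terms` lookup needing several rewrite rounds boils down to rewriting until a fixed point is reached. A minimal sketch of that loop, with a hypothetical `Rewriteable` interface rather than the real Elasticsearch one:

```java
import java.io.IOException;

final class FixedPointRewriter {
    /** Hypothetical interface for illustration; not org.elasticsearch.index.query.Rewriteable. */
    interface Rewriteable<T extends Rewriteable<T>> {
        T rewrite() throws IOException;
    }

    /** Rewrite repeatedly until the object stops changing (or a round limit is hit). */
    static <T extends Rewriteable<T>> T rewriteToFixedPoint(T original, int maxRounds) throws IOException {
        T current = original;
        for (int round = 0; round < maxRounds; round++) {
            T next = current.rewrite();
            if (next == current) {
                return current; // fixed point: e.g. the terms lookup has fetched its document and built the final query
            }
            current = next;
        }
        throw new IllegalStateException("too many rewrite rounds");
    }
}
```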
{
"body": "In #22948 we added a deprecation warning when using `.date` on a long field. However, since scripts run in a reduced permission environment, the underlying call to log the message results in a security manager exception. We need to do this in a doPrivileged block.\r\n\r\nSee https://discuss.elastic.co/t/painless-script-appender-deprecation-rolling-java-security-accesscontaccess-denied/117313",
"comments": [
{
"body": "cc @nik9000 ",
"created_at": "2018-01-28T05:37:11Z"
},
{
"body": "Eep. This is a pretty terrible bug. I'll take a look at it soon-ish. There are 5 or 6 things ahead of it in my TODO list but I'll see about bumping it up.",
"created_at": "2018-01-30T13:27:29Z"
},
{
"body": "I had a look at this. I can't reproduce it, but from the stacktrace in the discuss post it *looks* like this isn't an \"all the time\" thing. It *looks* like this is triggered when log4j goes digging around in the filesystem which it doesn't do all the time. I'm going to open a PR that goes under the assumption that this is caused by that.",
"created_at": "2018-01-31T18:06:46Z"
},
{
"body": "OK. I can reproduce \"something\" happening locally. If I leave the code as is it can break file rolling. But instead of failing the request the line fails to log. I can fix that with the fix that, I think, should fix this issue.",
"created_at": "2018-01-31T20:10:30Z"
}
],
"number": 28408,
"title": "ScriptDocValues getDate() on long field triggers AccessControlException"
} | {
"body": "If you call `getDates()` on a long or date type field add a deprecation\r\nwarning to the response and log something to the deprecation logger.\r\nThis *mostly* worked just fine but if the deprecation logger happens to\r\nroll then the roll will be performed with the script's permissions\r\nrather than the permissions of the server. And scripts don't have\r\npermissions to, say, open files. So the rolling failed. This fixes that\r\nby wrapping the call the deprecation logger in `doPriviledged`.\r\n\r\nThis is a strange `doPrivileged` call because it doens't check\r\nElasticsearch's `SpecialPermission`. `SpecialPermission` is a permission\r\nthat no-script code has and that scripts never have. Usually all\r\n`doPrivileged` calls check `SpecialPermission` to make sure that they\r\nare not accidentally acting on behalf of a script. But in this case we\r\nare *intentionally* acting on behalf of a script.\r\n\r\nCloses #28408",
"number": 28485,
"review_comments": [
{
"body": "Why do we need this abstracted? Can't the Longs.deprecated method just call deprecationLogger.deprecated directly?",
"created_at": "2018-02-01T22:25:07Z"
},
{
"body": "It could but then I couldn't test this. There isn't any consistent way to make it fail without this funny unit test dance that I do because it usually only fails when you rotate the log files. At least, that is the only time I could get it to fail locally.",
"created_at": "2018-02-01T22:57:28Z"
}
],
"title": "Scripts: Fix security for deprecation warning"
} | {
"commits": [
{
"message": "Scripts: Fix security for deprecation warning\n\nIf you call `getDates()` on a long or date type field add a deprecation\nwarning to the response and log something to the deprecation logger.\nThis *mostly* worked just fine but if the deprecation logger happens to\nroll then the roll will be performed with the script's permissions\nrather than the permissions of the server. And scripts don't have\npermissions to, say, open files. So the rolling failed. This fixes that\nby wrapping the call the deprecation logger in `doPriviledged`.\n\nThis is a strange `doPrivileged` call because it doens't check\nElasticsearch's `SpecialPermission`. `SpecialPermission` is a permission\nthat no-script code has and that scripts never have. Usually all\n`doPrivileged` calls check `SpecialPermission` to make sure that they\nare not accidentally acting on behalf of a script. But in this case we\nare *intentionally* acting on behalf of a script.\n\nCloses #28408"
},
{
"message": "Merge branch 'master' into script_date"
},
{
"message": "Explain"
}
],
"files": [
{
"diff": "@@ -83,6 +83,9 @@ setup:\n \n ---\n \"date\":\n+ - skip:\n+ features: \"warnings\"\n+\n - do:\n search:\n body:\n@@ -101,6 +104,28 @@ setup:\n source: \"doc.date.value\"\n - match: { hits.hits.0.fields.field.0: '2017-01-01T12:11:12.000Z' }\n \n+ - do:\n+ warnings:\n+ - getDate is no longer necessary on date fields as the value is now a date.\n+ search:\n+ body:\n+ script_fields:\n+ field:\n+ script:\n+ source: \"doc['date'].date\"\n+ - match: { hits.hits.0.fields.field.0: '2017-01-01T12:11:12.000Z' }\n+\n+ - do:\n+ warnings:\n+ - getDates is no longer necessary on date fields as the values are now dates.\n+ search:\n+ body:\n+ script_fields:\n+ field:\n+ script:\n+ source: \"doc['date'].dates.get(0)\"\n+ - match: { hits.hits.0.fields.field.0: '2017-01-01T12:11:12.000Z' }\n+\n ---\n \"geo_point\":\n - do:\n@@ -165,6 +190,9 @@ setup:\n \n ---\n \"long\":\n+ - skip:\n+ features: \"warnings\"\n+\n - do:\n search:\n body:\n@@ -183,6 +211,28 @@ setup:\n source: \"doc['long'].value\"\n - match: { hits.hits.0.fields.field.0: 12348732141234 }\n \n+ - do:\n+ warnings:\n+ - getDate on numeric fields is deprecated. Use a date field to get dates.\n+ search:\n+ body:\n+ script_fields:\n+ field:\n+ script:\n+ source: \"doc['long'].date\"\n+ - match: { hits.hits.0.fields.field.0: '2361-04-26T03:22:21.234Z' }\n+\n+ - do:\n+ warnings:\n+ - getDates on numeric fields is deprecated. Use a date field to get dates.\n+ search:\n+ body:\n+ script_fields:\n+ field:\n+ script:\n+ source: \"doc['long'].dates.get(0)\"\n+ - match: { hits.hits.0.fields.field.0: '2361-04-26T03:22:21.234Z' }\n+\n ---\n \"integer\":\n - do:",
"filename": "modules/lang-painless/src/test/resources/rest-api-spec/test/painless/50_script_doc_values.yml",
"status": "modified"
},
{
"diff": "@@ -35,18 +35,21 @@\n import org.joda.time.ReadableDateTime;\n \n import java.io.IOException;\n+import java.security.AccessController;\n+import java.security.PrivilegedAction;\n import java.util.AbstractList;\n import java.util.Arrays;\n import java.util.Comparator;\n import java.util.List;\n+import java.util.function.Consumer;\n import java.util.function.UnaryOperator;\n \n \n /**\n * Script level doc values, the assumption is that any implementation will\n * implement a <code>getValue</code> and a <code>getValues</code> that return\n * the relevant type that then can be used in scripts.\n- * \n+ *\n * Implementations should not internally re-use objects for the values that they\n * return as a single {@link ScriptDocValues} instance can be reused to return\n * values form multiple documents.\n@@ -94,14 +97,30 @@ public static final class Longs extends ScriptDocValues<Long> {\n protected static final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger(Longs.class));\n \n private final SortedNumericDocValues in;\n+ /**\n+ * Callback for deprecated fields. In production this should always point to\n+ * {@link #deprecationLogger} but tests will override it so they can test that\n+ * we use the required permissions when calling it.\n+ */\n+ private final Consumer<String> deprecationCallback;\n private long[] values = new long[0];\n private int count;\n private Dates dates;\n private int docId = -1;\n \n+ /**\n+ * Standard constructor.\n+ */\n public Longs(SortedNumericDocValues in) {\n- this.in = in;\n+ this(in, deprecationLogger::deprecated);\n+ }\n \n+ /**\n+ * Constructor for testing the deprecation callback.\n+ */\n+ Longs(SortedNumericDocValues in, Consumer<String> deprecationCallback) {\n+ this.in = in;\n+ this.deprecationCallback = deprecationCallback;\n }\n \n @Override\n@@ -142,7 +161,7 @@ public long getValue() {\n \n @Deprecated\n public ReadableDateTime getDate() throws IOException {\n- deprecationLogger.deprecated(\"getDate on numeric fields is deprecated. Use a date field to get dates.\");\n+ deprecated(\"getDate on numeric fields is deprecated. Use a date field to get dates.\");\n if (dates == null) {\n dates = new Dates(in);\n dates.setNextDocId(docId);\n@@ -152,7 +171,7 @@ public ReadableDateTime getDate() throws IOException {\n \n @Deprecated\n public List<ReadableDateTime> getDates() throws IOException {\n- deprecationLogger.deprecated(\"getDates on numeric fields is deprecated. Use a date field to get dates.\");\n+ deprecated(\"getDates on numeric fields is deprecated. Use a date field to get dates.\");\n if (dates == null) {\n dates = new Dates(in);\n dates.setNextDocId(docId);\n@@ -169,6 +188,22 @@ public Long get(int index) {\n public int size() {\n return count;\n }\n+\n+ /**\n+ * Log a deprecation log, with the server's permissions, not the permissions of the\n+ * script calling this method. 
We need to do this to prevent errors when rolling\n+ * the log file.\n+ */\n+ private void deprecated(String message) {\n+ // Intentionally not calling SpecialPermission.check because this is supposed to be called by scripts\n+ AccessController.doPrivileged(new PrivilegedAction<Void>() {\n+ @Override\n+ public Void run() {\n+ deprecationCallback.accept(message);\n+ return null;\n+ }\n+ });\n+ }\n }\n \n public static final class Dates extends ScriptDocValues<ReadableDateTime> {\n@@ -177,15 +212,32 @@ public static final class Dates extends ScriptDocValues<ReadableDateTime> {\n private static final ReadableDateTime EPOCH = new DateTime(0, DateTimeZone.UTC);\n \n private final SortedNumericDocValues in;\n+ /**\n+ * Callback for deprecated fields. In production this should always point to\n+ * {@link #deprecationLogger} but tests will override it so they can test that\n+ * we use the required permissions when calling it.\n+ */\n+ private final Consumer<String> deprecationCallback;\n /**\n * Values wrapped in {@link MutableDateTime}. Null by default an allocated on first usage so we allocate a reasonably size. We keep\n * this array so we don't have allocate new {@link MutableDateTime}s on every usage. Instead we reuse them for every document.\n */\n private MutableDateTime[] dates;\n private int count;\n \n+ /**\n+ * Standard constructor.\n+ */\n public Dates(SortedNumericDocValues in) {\n+ this(in, deprecationLogger::deprecated);\n+ }\n+\n+ /**\n+ * Constructor for testing deprecation logging.\n+ */\n+ Dates(SortedNumericDocValues in, Consumer<String> deprecationCallback) {\n this.in = in;\n+ this.deprecationCallback = deprecationCallback;\n }\n \n /**\n@@ -204,7 +256,7 @@ public ReadableDateTime getValue() {\n */\n @Deprecated\n public ReadableDateTime getDate() {\n- deprecationLogger.deprecated(\"getDate is no longer necessary on date fields as the value is now a date.\");\n+ deprecated(\"getDate is no longer necessary on date fields as the value is now a date.\");\n return getValue();\n }\n \n@@ -213,7 +265,7 @@ public ReadableDateTime getDate() {\n */\n @Deprecated\n public List<ReadableDateTime> getDates() {\n- deprecationLogger.deprecated(\"getDates is no longer necessary on date fields as the values are now dates.\");\n+ deprecated(\"getDates is no longer necessary on date fields as the values are now dates.\");\n return this;\n }\n \n@@ -274,6 +326,22 @@ void refreshArray() throws IOException {\n dates[i] = new MutableDateTime(in.nextValue(), DateTimeZone.UTC);\n }\n }\n+\n+ /**\n+ * Log a deprecation log, with the server's permissions, not the permissions of the\n+ * script calling this method. We need to do this to prevent errors when rolling\n+ * the log file.\n+ */\n+ private void deprecated(String message) {\n+ // Intentionally not calling SpecialPermission.check because this is supposed to be called by scripts\n+ AccessController.doPrivileged(new PrivilegedAction<Void>() {\n+ @Override\n+ public Void run() {\n+ deprecationCallback.accept(message);\n+ return null;\n+ }\n+ });\n+ }\n }\n \n public static final class Doubles extends ScriptDocValues<Double> {",
"filename": "server/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,17 @@\n import org.joda.time.ReadableDateTime;\n \n import java.io.IOException;\n+import java.security.AccessControlContext;\n+import java.security.AccessController;\n+import java.security.PermissionCollection;\n+import java.security.Permissions;\n+import java.security.PrivilegedAction;\n+import java.security.ProtectionDomain;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.function.Consumer;\n+\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n \n public class ScriptDocValuesDatesTests extends ESTestCase {\n public void test() throws IOException {\n@@ -39,12 +50,19 @@ public void test() throws IOException {\n values[d][i] = expectedDates[d][i].getMillis();\n }\n }\n- Dates dates = wrap(values);\n+ Set<String> warnings = new HashSet<>();\n+ Dates dates = wrap(values, deprecationMessage -> {\n+ warnings.add(deprecationMessage);\n+ /* Create a temporary directory to prove we are running with the\n+ * server's permissions. */\n+ createTempDir();\n+ });\n \n for (int round = 0; round < 10; round++) {\n int d = between(0, values.length - 1);\n dates.setNextDocId(d);\n assertEquals(expectedDates[d].length > 0 ? expectedDates[d][0] : new DateTime(0, DateTimeZone.UTC), dates.getValue());\n+ assertEquals(expectedDates[d].length > 0 ? expectedDates[d][0] : new DateTime(0, DateTimeZone.UTC), dates.getDate());\n \n assertEquals(values[d].length, dates.size());\n for (int i = 0; i < values[d].length; i++) {\n@@ -54,9 +72,33 @@ public void test() throws IOException {\n Exception e = expectThrows(UnsupportedOperationException.class, () -> dates.add(new DateTime()));\n assertEquals(\"doc values are unmodifiable\", e.getMessage());\n }\n+\n+ /*\n+ * Invoke getDates without any privileges to verify that\n+ * it still works without any. In particularly, this\n+ * verifies that the callback that we've configured\n+ * above works. That callback creates a temporary\n+ * directory which is not possible with \"noPermissions\".\n+ */\n+ PermissionCollection noPermissions = new Permissions();\n+ AccessControlContext noPermissionsAcc = new AccessControlContext(\n+ new ProtectionDomain[] {\n+ new ProtectionDomain(null, noPermissions)\n+ }\n+ );\n+ AccessController.doPrivileged(new PrivilegedAction<Void>() {\n+ public Void run() {\n+ dates.getDates();\n+ return null;\n+ }\n+ }, noPermissionsAcc);\n+\n+ assertThat(warnings, containsInAnyOrder(\n+ \"getDate is no longer necessary on date fields as the value is now a date.\",\n+ \"getDates is no longer necessary on date fields as the values are now dates.\"));\n }\n \n- private Dates wrap(long[][] values) {\n+ private Dates wrap(long[][] values, Consumer<String> deprecationHandler) {\n return new Dates(new AbstractSortedNumericDocValues() {\n long[] current;\n int i;\n@@ -75,6 +117,6 @@ public int docValueCount() {\n public long nextValue() {\n return current[i++];\n }\n- });\n+ }, deprecationHandler);\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/fielddata/ScriptDocValuesDatesTests.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,17 @@\n import org.joda.time.ReadableDateTime;\n \n import java.io.IOException;\n+import java.security.AccessControlContext;\n+import java.security.AccessController;\n+import java.security.PermissionCollection;\n+import java.security.Permissions;\n+import java.security.PrivilegedAction;\n+import java.security.ProtectionDomain;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.function.Consumer;\n+\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n \n public class ScriptDocValuesLongsTests extends ESTestCase {\n public void testLongs() throws IOException {\n@@ -36,7 +47,7 @@ public void testLongs() throws IOException {\n values[d][i] = randomLong();\n }\n }\n- Longs longs = wrap(values);\n+ Longs longs = wrap(values, deprecationMessage -> {fail(\"unexpected deprecation: \" + deprecationMessage);});\n \n for (int round = 0; round < 10; round++) {\n int d = between(0, values.length - 1);\n@@ -66,7 +77,13 @@ public void testDates() throws IOException {\n values[d][i] = dates[d][i].getMillis();\n }\n }\n- Longs longs = wrap(values);\n+ Set<String> warnings = new HashSet<>();\n+ Longs longs = wrap(values, deprecationMessage -> {\n+ warnings.add(deprecationMessage);\n+ /* Create a temporary directory to prove we are running with the\n+ * server's permissions. */\n+ createTempDir();\n+ });\n \n for (int round = 0; round < 10; round++) {\n int d = between(0, values.length - 1);\n@@ -82,12 +99,36 @@ public void testDates() throws IOException {\n assertEquals(\"doc values are unmodifiable\", e.getMessage());\n }\n \n- assertWarnings(\n+ /*\n+ * Invoke getDates without any privileges to verify that\n+ * it still works without any. In particularly, this\n+ * verifies that the callback that we've configured\n+ * above works. That callback creates a temporary\n+ * directory which is not possible with \"noPermissions\".\n+ */\n+ PermissionCollection noPermissions = new Permissions();\n+ AccessControlContext noPermissionsAcc = new AccessControlContext(\n+ new ProtectionDomain[] {\n+ new ProtectionDomain(null, noPermissions)\n+ }\n+ );\n+ AccessController.doPrivileged(new PrivilegedAction<Void>() {\n+ public Void run() {\n+ try {\n+ longs.getDates();\n+ } catch (IOException e) {\n+ throw new RuntimeException(\"unexpected\", e);\n+ }\n+ return null;\n+ }\n+ }, noPermissionsAcc);\n+\n+ assertThat(warnings, containsInAnyOrder(\n \"getDate on numeric fields is deprecated. Use a date field to get dates.\",\n- \"getDates on numeric fields is deprecated. Use a date field to get dates.\");\n+ \"getDates on numeric fields is deprecated. Use a date field to get dates.\"));\n }\n \n- private Longs wrap(long[][] values) {\n+ private Longs wrap(long[][] values, Consumer<String> deprecationCallback) {\n return new Longs(new AbstractSortedNumericDocValues() {\n long[] current;\n int i;\n@@ -106,6 +147,6 @@ public int docValueCount() {\n public long nextValue() {\n return current[i++];\n }\n- });\n+ }, deprecationCallback);\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/fielddata/ScriptDocValuesLongsTests.java",
"status": "modified"
}
]
} |
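The fix above wraps the deprecation callback in `AccessController.doPrivileged` so that log rolling runs with the server's permissions even when the caller is a script. A minimal, generic sketch of that wrapping; the class and method names here are made up, while the real change lives in `ScriptDocValues`:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

final class PrivilegedCallbacks {
    /** Run an action with this class's own permissions instead of the restricted caller's. */
    static void runWithOwnPermissions(Runnable action) {
        // Deliberately no SpecialPermission.check here: per the PR description, this path is
        // meant to be reached from scripts, which never hold SpecialPermission.
        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
            action.run();
            return null;
        });
    }
}
```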
{
"body": "**Elasticsearch version**: 6.1.1\r\n\r\n**OS version**: CentOS 7\r\n\r\n**JVM version (java -version)**:\r\n```\r\n$ java -version\r\njava version \"1.8.0_141\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_141-b15)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior:**\r\nI use post_filter in scroll API. Elasticsearch returns correct hits.total.\r\nHowever, when hits.hits array is empty, Elasticsearch returns incorrect the total actual number of retrieved documents.\r\n\r\n**Steps to reproduce**:\r\n*Template*\r\n```json\r\nPUT _template/scroll_test_template\r\n{\r\n \"index_patterns\": \"scroll-test\",\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\":\r\n {\r\n \"keyword_field\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n*Document*\r\n```json\r\nPOST scroll-test/doc/_bulk\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value00000\" }\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value00001\" }\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value00002\" }\r\n .\r\n .\r\n .\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value49997\" }\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value49998\" }\r\n{ \"index\": {} }\r\n{ \"keyword_field\" : \"value49999\" }\r\n```\r\n*Query*\r\n```json\r\nGET scroll-test/_search?scroll=1m\r\n{\r\n \"size\": 50,\r\n \"post_filter\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"wildcard\": {\r\n \"keyword_field\": \"value*010*\"\r\n }\r\n } \r\n ]\r\n }\r\n }\r\n}\r\n\r\nGET /_search/scroll/\r\n{\r\n \"scroll\" : \"1m\",\r\n \"scroll_id\": \"{_scroll_id}\"\r\n}\r\n```\r\nhits.total is 199. However, when hits.hits array is empty, the total actual number of retrieved documents is 111.\r\n",
"comments": [
{
"body": "@jimczi could you take a look at this?",
"created_at": "2018-01-29T13:49:08Z"
},
{
"body": "Thanks for reporting and for providing a simple recreation @Nao-Mk2 .\r\nThe issue in your case appears only if you use a non-scoring query (wildcard query uses a constant score) and `post_filter` in a `scroll` context. I opened https://github.com/elastic/elasticsearch/pull/28459 to fix this.",
"created_at": "2018-02-01T15:51:08Z"
},
{
"body": "Awesome, thanks for the quick response and fix!",
"created_at": "2018-02-02T01:13:07Z"
}
],
"number": 28411,
"title": "Incorrect the number of hits return when use post_filter in scroll"
} | {
"body": "This change fixes the handling of the `terminate_after` option when post_filters (or min_score) are used.\r\n`post_filter` should be applied before `terminate_after` in order to terminate the query when enough document are accepted by the filters.\r\nThis commit also changes the type of exception thrown by `terminate_after` in order to ensure that multi collectors (aggregations) do not try to continue the collection when enough documents have been collected.\r\n\r\nCloses #28411",
"number": 28459,
"review_comments": [],
"title": "Search option terminate_after does not handle post_filters and aggregations correctly"
} | {
"commits": [
{
"message": "Search option terminate_after does not handle post_filters and aggregations correctly\n\nThis change fixes the handling of the `terminate_after` option when post_filters (or min_score) are used.\n`post_filter` should be applied before `terminate_after` in order to terminate the query when enough document are accepted\nby the post_filters.\nThis commit also changes the type of exception thrown by `terminate_after` in order to ensure that multi collectors (aggregations)\ndo not try to continue the collection when enough documents have been collected.\n\nCloses #28411"
},
{
"message": "add a note regarding terminate_after used with post_filter and aggs"
}
],
"files": [
{
"diff": "@@ -120,6 +120,12 @@ all clients support GET with body, POST is allowed as well.\n [float]\n === Fast check for any matching docs\n \n+NOTE: `terminate_after` is always applied **after** the `post_filter` and stops\n+ the query as well as the aggregation executions when enough hits have been\n+ collected on the shard. Though the doc count on aggregations may not reflect\n+ the `hits.total` in the response since aggregations are applied **before** the\n+ post filtering.\n+\n In case we only want to know if there are any documents matching a\n specific query, we can set the `size` to `0` to indicate that we are not\n interested in the search results. Also we can set `terminate_after` to `1`\n@@ -128,7 +134,7 @@ matching document was found (per shard).\n \n [source,js]\n --------------------------------------------------\n-GET /_search?q=message:elasticsearch&size=0&terminate_after=1\n+GET /_search?q=message:number&size=0&terminate_after=1\n --------------------------------------------------\n // CONSOLE\n // TEST[setup:twitter]",
"filename": "docs/reference/search/request-body.asciidoc",
"status": "modified"
},
{
"diff": "@@ -27,39 +27,55 @@\n import org.apache.lucene.search.LeafCollector;\n \n import java.io.IOException;\n-import java.util.concurrent.atomic.AtomicBoolean;\n \n /**\n * A {@link Collector} that early terminates collection after <code>maxCountHits</code> docs have been collected.\n */\n public class EarlyTerminatingCollector extends FilterCollector {\n+ static final class EarlyTerminationException extends RuntimeException {\n+ EarlyTerminationException(String msg) {\n+ super(msg);\n+ }\n+ }\n+\n private final int maxCountHits;\n private int numCollected;\n- private boolean terminatedEarly = false;\n+ private boolean forceTermination;\n \n- EarlyTerminatingCollector(final Collector delegate, int maxCountHits) {\n+ /**\n+ * Ctr\n+ * @param delegate The delegated collector.\n+ * @param maxCountHits The number of documents to collect before termination.\n+ * @param forceTermination Whether the collection should be terminated with an exception ({@link EarlyTerminationException})\n+ * that is not caught by other {@link Collector} or with a {@link CollectionTerminatedException} otherwise.\n+ */\n+ EarlyTerminatingCollector(final Collector delegate, int maxCountHits, boolean forceTermination) {\n super(delegate);\n this.maxCountHits = maxCountHits;\n+ this.forceTermination = forceTermination;\n }\n \n @Override\n public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {\n if (numCollected >= maxCountHits) {\n- throw new CollectionTerminatedException();\n+ if (forceTermination) {\n+ throw new EarlyTerminationException(\"early termination [CountBased]\");\n+ } else {\n+ throw new CollectionTerminatedException();\n+ }\n }\n return new FilterLeafCollector(super.getLeafCollector(context)) {\n @Override\n public void collect(int doc) throws IOException {\n- super.collect(doc);\n- if (++numCollected >= maxCountHits) {\n- terminatedEarly = true;\n- throw new CollectionTerminatedException();\n+ if (++numCollected > maxCountHits) {\n+ if (forceTermination) {\n+ throw new EarlyTerminationException(\"early termination [CountBased]\");\n+ } else {\n+ throw new CollectionTerminatedException();\n+ }\n }\n+ super.collect(doc);\n };\n };\n }\n-\n- public boolean terminatedEarly() {\n- return terminatedEarly;\n- }\n }",
"filename": "server/src/main/java/org/elasticsearch/search/query/EarlyTerminatingCollector.java",
"status": "modified"
},
{
"diff": "@@ -171,16 +171,9 @@ static QueryCollectorContext createEarlyTerminationCollectorContext(int numHits)\n @Override\n Collector create(Collector in) throws IOException {\n assert collector == null;\n- this.collector = new EarlyTerminatingCollector(in, numHits);\n+ this.collector = new EarlyTerminatingCollector(in, numHits, true);\n return collector;\n }\n-\n- @Override\n- void postProcess(QuerySearchResult result) throws IOException {\n- if (collector.terminatedEarly()) {\n- result.terminatedEarly(true);\n- }\n- }\n };\n }\n }",
"filename": "server/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java",
"status": "modified"
},
{
"diff": "@@ -177,6 +177,13 @@ static boolean execute(SearchContext searchContext,\n final LinkedList<QueryCollectorContext> collectors = new LinkedList<>();\n // whether the chain contains a collector that filters documents\n boolean hasFilterCollector = false;\n+ if (searchContext.terminateAfter() != SearchContext.DEFAULT_TERMINATE_AFTER) {\n+ // add terminate_after before the filter collectors\n+ // it will only be applied on documents accepted by these filter collectors\n+ collectors.add(createEarlyTerminationCollectorContext(searchContext.terminateAfter()));\n+ // this collector can filter documents during the collection\n+ hasFilterCollector = true;\n+ }\n if (searchContext.parsedPostFilter() != null) {\n // add post filters before aggregations\n // it will only be applied to top hits\n@@ -194,12 +201,6 @@ static boolean execute(SearchContext searchContext,\n // this collector can filter documents during the collection\n hasFilterCollector = true;\n }\n- if (searchContext.terminateAfter() != SearchContext.DEFAULT_TERMINATE_AFTER) {\n- // apply terminate after after all filters collectors\n- collectors.add(createEarlyTerminationCollectorContext(searchContext.terminateAfter()));\n- // this collector can filter documents during the collection\n- hasFilterCollector = true;\n- }\n \n boolean timeoutSet = scrollContext == null && searchContext.timeout() != null &&\n searchContext.timeout().equals(SearchService.NO_TIMEOUT) == false;\n@@ -263,6 +264,8 @@ static boolean execute(SearchContext searchContext,\n \n try {\n searcher.search(query, queryCollector);\n+ } catch (EarlyTerminatingCollector.EarlyTerminationException e) {\n+ queryResult.terminatedEarly(true);\n } catch (TimeExceededException e) {\n assert timeoutSet : \"TimeExceededException thrown even though timeout wasn't set\";\n ",
"filename": "server/src/main/java/org/elasticsearch/search/query/QueryPhase.java",
"status": "modified"
},
{
"diff": "@@ -103,11 +103,11 @@ private EmptyTopDocsCollectorContext(IndexReader reader, Query query,\n this.collector = hitCountCollector;\n this.hitCountSupplier = hitCountCollector::getTotalHits;\n } else {\n- this.collector = new EarlyTerminatingCollector(hitCountCollector, 0);\n+ this.collector = new EarlyTerminatingCollector(hitCountCollector, 0, false);\n this.hitCountSupplier = () -> hitCount;\n }\n } else {\n- this.collector = new EarlyTerminatingCollector(new TotalHitCountCollector(), 0);\n+ this.collector = new EarlyTerminatingCollector(new TotalHitCountCollector(), 0, false);\n // for bwc hit count is set to 0, it will be converted to -1 by the coordinating node\n this.hitCountSupplier = () -> 0;\n }",
"filename": "server/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java",
"status": "modified"
},
{
"diff": "@@ -181,6 +181,37 @@ public void testPostFilterDisablesCountOptimization() throws Exception {\n dir.close();\n }\n \n+ public void testTerminateAfterWithFilter() throws Exception {\n+ Directory dir = newDirectory();\n+ final Sort sort = new Sort(new SortField(\"rank\", SortField.Type.INT));\n+ IndexWriterConfig iwc = newIndexWriterConfig()\n+ .setIndexSort(sort);\n+ RandomIndexWriter w = new RandomIndexWriter(random(), dir, iwc);\n+ Document doc = new Document();\n+ for (int i = 0; i < 10; i++) {\n+ doc.add(new StringField(\"foo\", Integer.toString(i), Store.NO));\n+ }\n+ w.addDocument(doc);\n+ w.close();\n+\n+ IndexReader reader = DirectoryReader.open(dir);\n+ IndexSearcher contextSearcher = new IndexSearcher(reader);\n+ TestSearchContext context = new TestSearchContext(null, indexShard);\n+ context.setTask(new SearchTask(123L, \"\", \"\", \"\", null, Collections.emptyMap()));\n+ context.parsedQuery(new ParsedQuery(new MatchAllDocsQuery()));\n+ context.terminateAfter(1);\n+ context.setSize(10);\n+ for (int i = 0; i < 10; i++) {\n+ context.parsedPostFilter(new ParsedQuery(new TermQuery(new Term(\"foo\", Integer.toString(i)))));\n+ QueryPhase.execute(context, contextSearcher, checkCancelled -> {});\n+ assertEquals(1, context.queryResult().topDocs().totalHits);\n+ assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n+ }\n+ reader.close();\n+ dir.close();\n+ }\n+\n+\n public void testMinScoreDisablesCountOptimization() throws Exception {\n Directory dir = newDirectory();\n final Sort sort = new Sort(new SortField(\"rank\", SortField.Type.INT));\n@@ -346,6 +377,8 @@ public void testTerminateAfterEarlyTermination() throws Exception {\n assertTrue(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().totalHits, equalTo(1L));\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n+ assertThat(collector.getTotalHits(), equalTo(1));\n+ context.queryCollectors().clear();\n }\n {\n context.setSize(0);",
"filename": "server/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -236,7 +236,7 @@ public void testSimpleTerminateAfterCount() throws Exception {\n refresh();\n \n SearchResponse searchResponse;\n- for (int i = 1; i <= max; i++) {\n+ for (int i = 1; i < max; i++) {\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max))\n .setTerminateAfter(i).execute().actionGet();",
"filename": "server/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
}
]
} |
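With the fix above, only documents accepted by the `post_filter` count toward the `terminate_after` limit. An illustrative transport-client call in the style of `SimpleSearchIT` from the diff; the index and field names are placeholders:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

final class TerminateAfterExample {
    /** Return as soon as one document passing the post_filter has been collected per shard. */
    static SearchResponse firstFilteredHit(Client client, String index, String field, String value) {
        return client.prepareSearch(index)
                .setQuery(QueryBuilders.matchAllQuery())
                .setPostFilter(QueryBuilders.termQuery(field, value)) // applied before terminate_after
                .setTerminateAfter(1)                                 // stop collecting after one accepted hit
                .get();
    }
}
```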
{
"body": "For repro steps, please see:\r\n\r\nhttps://discuss.elastic.co/t/error-using-multiple-geospan-queries-as-bool-should-clauses/113945\r\n\r\nThe http 500 error occur sporadically under high concurrency (ES queries run as part of an EMR Spark job).\r\n\r\nThe same job without code changes run in ES 5.5 without issues.\r\n",
"comments": [
{
"body": "Hi @INRIX-Trang-Nguyen, thank you for your report. We would appreciate if you can format your issue according to the template and offer clear steps for reproduction, as this would allow us to better triage this and offer a solution if necessary. ",
"created_at": "2018-01-31T14:13:01Z"
},
{
"body": "This is a bug introduced in 6.0 that appears when more than one `geo_shape` query need to fetch pre-indexed shapes in an index. There is a race condition that can lead to an AIOOBE. I opened https://github.com/elastic/elasticsearch/pull/28458 to fix it.",
"created_at": "2018-01-31T15:16:03Z"
},
{
"body": "Awesome, thanks for the quick response and fix! Will this make it into ES 6.3?",
"created_at": "2018-02-01T09:18:26Z"
},
{
"body": "Yes it will.",
"created_at": "2018-02-01T09:23:18Z"
},
{
"body": "Hi, guys!\r\nWhat would be a good way to query points inside multiple pre-indexed shapes (eg.: querying for all points inside all shapes)?\r\n\r\nAlso, can you please advise what are the ways to get all points in shapes / shapes in shapes with Geo query geo_shape (multiple pre-indexed shape) on big geo-spatial data in ElasticSearch?\r\n\r\nWhen you have some time, please, I put this question to [StackOverflow](https://stackoverflow.com/questions/66448137/ways-to-get-all-points-in-shapes-shapes-in-shapes-with-geo-query-geo-shape-mu) and also to [ElasticSearch discuss forum](https://discuss.elastic.co/t/ways-to-get-all-points-in-shapes-shapes-in-shapes-with-geo-query-geo-shape-multiple-pre-indexed-shape-on-big-geo-spatial-data-in-elasticsearch/266024).\r\n\r\nThank you in advance!\r\n\r\ncc: @jimczi @jkakavas @nknize @russcam @thomasneirynck ",
"created_at": "2021-03-03T09:45:51Z"
}
],
"number": 28456,
"title": "Boolean queries with indexed geoshapes break on Elastic Search 6.0"
} | {
"body": "This change fixes a possible AIOOB during the parsing of the document that contains the indexed shape.\r\nThis change ensures that the parsing does not continue when the field that contains the shape has been found.\r\n\r\nCloses #28456",
"number": 28458,
"review_comments": [],
"title": "Fix AIOOB on indexed geo_shape query"
} | {
"commits": [
{
"message": "Fix AIOOB on indexed geo_shape query\n\nThis change fixes a possible AIOOB during the parsing of the document that contains the indexed shape.\nThis change ensures that the parsing does not continue when the field that contains the shape has been found.\n\nCloses #28456"
}
],
"files": [
{
"diff": "@@ -409,6 +409,7 @@ public void onResponse(GetResponse response) {\n parser.nextToken();\n if (++currentPathSlot == pathElements.length) {\n listener.onResponse(ShapeParser.parse(parser));\n+ return;\n }\n } else {\n parser.nextToken();",
"filename": "server/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -119,6 +119,7 @@ protected GetResponse executeGet(GetRequest getRequest) {\n XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint();\n builder.startObject();\n builder.field(expectedShapePath, indexedShapeToReturn);\n+ builder.field(randomAlphaOfLengthBetween(10, 20), \"something\");\n builder.endObject();\n json = builder.string();\n } catch (IOException ex) {\n@@ -227,13 +228,7 @@ public void testFromJson() throws IOException {\n \n @Override\n public void testMustRewrite() throws IOException {\n- GeoShapeQueryBuilder sqb;\n- do {\n- sqb = doCreateTestQueryBuilder();\n- // do this until we get one without a shape\n- } while (sqb.shape() != null);\n-\n- GeoShapeQueryBuilder query = sqb;\n+ GeoShapeQueryBuilder query = doCreateTestQueryBuilder(true);\n \n UnsupportedOperationException e = expectThrows(UnsupportedOperationException.class, () -> query.toQuery(createShardContext()));\n assertEquals(\"query must be rewritten first\", e.getMessage());\n@@ -244,6 +239,23 @@ public void testMustRewrite() throws IOException {\n assertEquals(geoShapeQueryBuilder, rewrite);\n }\n \n+ public void testMultipleRewrite() throws IOException {\n+ GeoShapeQueryBuilder shape = doCreateTestQueryBuilder(true);\n+ QueryBuilder builder = new BoolQueryBuilder()\n+ .should(shape)\n+ .should(shape);\n+\n+ builder = rewriteAndFetch(builder, createShardContext());\n+\n+ GeoShapeQueryBuilder expectedShape = new GeoShapeQueryBuilder(GEO_SHAPE_FIELD_NAME, indexedShapeToReturn);\n+ expectedShape.strategy(shape.strategy());\n+ expectedShape.relation(shape.relation());\n+ QueryBuilder expected = new BoolQueryBuilder()\n+ .should(expectedShape)\n+ .should(expectedShape);\n+ assertEquals(expected, builder);\n+ }\n+\n public void testIgnoreUnmapped() throws IOException {\n ShapeType shapeType = ShapeType.randomType(random());\n ShapeBuilder shape = RandomShapeGenerator.createShapeWithin(random(), null, shapeType);",
"filename": "server/src/test/java/org/elasticsearch/index/query/GeoShapeQueryBuilderTests.java",
"status": "modified"
}
]
} |
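The one-line `return` above stops the parsing once the field containing the indexed shape has been read, which is what matters when more than one indexed-shape clause appears in a single bool query, as in the bug report. A hedged usage sketch of that query shape; the indexed-shape overload of `geoShapeQuery` and the index/id/path values are assumptions for illustration:

```java
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

final class IndexedShapeBoolQuery {
    /** Two should clauses that each fetch a pre-indexed shape: the setup that used to trip the AIOOB. */
    static BoolQueryBuilder build() {
        return QueryBuilders.boolQuery()
                .should(QueryBuilders.geoShapeQuery("location", "region-a", "doc")
                        .indexedShapeIndex("shapes").indexedShapePath("shape").relation(ShapeRelation.WITHIN))
                .should(QueryBuilders.geoShapeQuery("location", "region-b", "doc")
                        .indexedShapeIndex("shapes").indexedShapePath("shape").relation(ShapeRelation.WITHIN));
    }
}
```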
{
"body": "**Elasticsearch version**\r\n6.0.0, 6.0.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** :\r\nopenjdk version \"1.8.0_151\"\r\nOpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.17.10.2-b12)\r\nOpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n\r\n**OS version** :\r\nUbuntu 17.10 with kernel 4.13.0-19-generic\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nQuery with **filter aggregation** based on **terms filter lookup** **nested below other aggregation** terminates with error **\"query must be rewritten first\"**. The same query executed on version 5.6.5 finishes with success. \r\n\r\n**This error is very similar to already reported and closed: #21301**\r\n\r\n**The only difference is that now query fails when nested aggregation is used.** \r\n\r\n\r\n**Steps to reproduce**:\r\nRecreation steps are based on those provided in issue #21301:\r\n\r\n\r\n1. create index test_posts\r\n\r\n```\r\nPUT test_posts\r\n{\r\n \"mappings\": {\r\n \"post\": {\r\n \"properties\": {\r\n \"mentionIDs\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n2. create index test_users\r\n\r\n```\r\nPUT test_users\r\n{\r\n \"mappings\": {\r\n \"user\": {\r\n \"properties\": {\r\n \"notifications\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n3. index some user\r\n\r\n```\r\nPUT test_users/user/USR|CLIENTID|1234\r\n{\r\n \"notifications\": [\"abc\"]\r\n}\r\n```\r\n\r\n\r\n4. insert some post\r\n\r\n```\r\nPUT test_posts/post/POST|4321\r\n{\r\n \"mentionIDs\": [\"abc\"]\r\n}\r\n```\r\n\r\n5. execute search (this search finishes with success - it is similar to that from #21301)\r\n\r\n```\r\nGET test_posts/_search\r\n{\r\n \"aggs\": {\r\n \"itemsNotify\": {\r\n \"filter\": {\r\n \"terms\": {\r\n \"mentionIDs\": {\r\n \"index\": \"test_users\",\r\n \"type\": \"user\",\r\n \"id\": \"USR|CLIENTID|1234\",\r\n \"path\": \"notifications\",\r\n \"routing\": \"CLIENTID\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**6. This search fails with error on ES 6.0.0 and ES 6.0.1:**\r\n\r\n```\r\nGET /test_posts/_search\r\n{\r\n \"aggs\": {\r\n \"facets\": {\r\n \"global\": {},\r\n \"aggs\": {\r\n \"filteredFacets\": {\r\n \"filter\": {\r\n \"terms\": {\r\n \"mentionIDs\": {\r\n \"index\": \"test_users\",\r\n \"type\": \"user\",\r\n \"id\": \"USR|CLIENTID|1234\",\r\n \"path\": \"notifications\",\r\n \"routing\": \"CLIENTID\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n\r\n\r\nQuery result is:\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"unsupported_operation_exception\",\r\n \"reason\": \"query must be rewritten first\"\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test_posts\",\r\n \"node\": \"5ipdZJUdSqKf5gnOkwlkDg\",\r\n \"reason\": {\r\n \"type\": \"unsupported_operation_exception\",\r\n \"reason\": \"query must be rewritten first\"\r\n }\r\n }\r\n ]\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\n**Above query finishes with success on ES 5.6.5**\r\n\r\n**Fragment of logs from server**:\r\n[es-query-must-be-rewritten.log](https://github.com/elastic/elasticsearch/files/1552226/es-query-must-be-rewritten.log)\r\n\r\n",
"comments": [
{
"body": "Thanks @mrszg it's a bug in the rewrite phase of aggregations, only the root aggregation gets rewritten and sub aggregations are skipped. I'll take a look.",
"created_at": "2017-12-13T08:40:15Z"
},
{
"body": "It seems that version 6.1.0 is affected too.",
"created_at": "2017-12-14T08:24:05Z"
},
{
"body": "@jimczi Do you know when this bug will be fixed?",
"created_at": "2017-12-18T09:56:31Z"
},
{
"body": "Unfortunately this bug requires a larger fix than I initially thought. The rewriting of aggregations has a bug when multiple rewrite of the query is needed, the `terms` query is rewritten in multiple rounds (create the fetch query to get the document, retrieve the document and creates the final query) but currently only one rewrite is called. We know how to fix this but it will require some time, I cannot give any date or version for the fix though @lbrzekowski , I'll update this issue when I have a pr ready for it.",
"created_at": "2017-12-18T10:04:25Z"
},
{
"body": "@jimczi Can you provide any update on this issue? ",
"created_at": "2018-01-07T19:02:20Z"
},
{
"body": "Hi, \r\n\r\nIs this fixed in elasticsearch 6.2 ? I tried using 6.2 but I still see the same error. which release this will get fixed ?\r\n",
"created_at": "2018-05-02T21:45:20Z"
},
{
"body": "Same question as @bbansal - is there a chance of getting this fix in 6.2, or will we need to wait for 6.3 to become current & migrate?",
"created_at": "2018-05-16T16:09:30Z"
},
{
"body": "@bbansal , @jgnieuwhof sorry the fix is targeted for 6.3 only so you'll need to wait for the release and migrate to the new version.",
"created_at": "2018-05-16T16:23:12Z"
},
{
"body": "@jimczi - noted, thanks for the update :)",
"created_at": "2018-05-16T16:25:12Z"
},
{
"body": "Hi,\r\n\r\nI am using v6.7.1 but found that this is still not fixed. Please let us know when will this be fixed?",
"created_at": "2019-04-08T10:40:13Z"
},
{
"body": "Hi @hrsvrma , this issue is fixed since version 6.3 so if you can reproduce a similar failure please open a new issue with a clear recreation.",
"created_at": "2019-04-08T10:47:45Z"
},
{
"body": "HI @jimczi , Thanks for your response. As per your suggestion, I'll open a new issue with clear recreation steps.",
"created_at": "2019-04-08T10:59:56Z"
}
],
"number": 27782,
"title": "ES 6.0 regression: \"query must be rewritten first\" exception when using terms lookup filter in nested aggregation"
} | {
"body": "This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metaData.\r\nThis method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.\r\n\r\nRelates #27782",
"number": 28430,
"review_comments": [
{
"body": "if this is supposed to produce and exact clone of the passed in builder then should we not set the factoriesBuilder here too?",
"created_at": "2018-01-30T12:15:55Z"
},
{
"body": "I think we might want to copy the list here rather than share it between the old and new copies? Otherwise if the builder is cloned and then `addRange` is called on the copy it will also change the original?",
"created_at": "2018-01-30T12:20:36Z"
},
{
"body": "I think we should copy the ranges list?",
"created_at": "2018-01-30T12:21:12Z"
},
{
"body": "I think we should copy the ranges list?",
"created_at": "2018-01-30T12:21:27Z"
},
{
"body": "It seems that BucketCountThresholds is mutable so I think we need to clone it as well?",
"created_at": "2018-01-30T12:22:41Z"
},
{
"body": "It seems that BucketCountThresholds is mutable so I think we need to clone it as well?",
"created_at": "2018-01-30T12:23:42Z"
},
{
"body": "Is this filterBuilder potentially mutable too?",
"created_at": "2018-01-30T12:24:10Z"
},
{
"body": "It seems that BucketCountThresholds is mutable so I think we need to clone it as well?",
"created_at": "2018-01-30T12:24:24Z"
},
{
"body": "Are any of these object mutable?",
"created_at": "2018-01-30T12:26:07Z"
}
],
"title": "Add a shallow copy method to aggregation builders"
} | {
"commits": [
{
"message": "Add a shallowCopy method to aggregation builders\n\nThis change adds a shallow copy method for aggregation builders. This method returns a copy of the builder replacing the factoriesBuilder and metaDada\nThis method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that\nwe create a new instance of the parent builder when sub aggregations are rewritten.\n\nRelates #27782"
},
{
"message": "fix check style in aggs module"
}
],
"files": [
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.MultiValueMode;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.MultiValuesSourceAggregationBuilder;\n@@ -46,6 +47,17 @@ public MatrixStatsAggregationBuilder(String name) {\n super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC);\n }\n \n+ protected MatrixStatsAggregationBuilder(MatrixStatsAggregationBuilder clone,\n+ AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.multiValueMode = clone.multiValueMode;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new MatrixStatsAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "modules/aggs-matrix-stats/src/main/java/org/elasticsearch/search/aggregations/matrix/stats/MatrixStatsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -56,6 +56,14 @@ protected LeafOnly(String name, ValuesSourceType valuesSourceType, ValueType tar\n super(name, valuesSourceType, targetValueType);\n }\n \n+ protected LeafOnly(LeafOnly<VS, AB> clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ if (factoriesBuilder.count() > 0) {\n+ throw new AggregationInitializationException(\"Aggregator [\" + name + \"] of type [\"\n+ + getType() + \"] cannot accept sub-aggregations\");\n+ }\n+ }\n+\n /**\n * Read from a stream that does not serialize its targetValueType. This should be used by most subclasses.\n */\n@@ -95,6 +103,18 @@ protected MultiValuesSourceAggregationBuilder(String name, ValuesSourceType valu\n this.targetValueType = targetValueType;\n }\n \n+ protected MultiValuesSourceAggregationBuilder(MultiValuesSourceAggregationBuilder<VS, AB> clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.valuesSourceType = clone.valuesSourceType;\n+ this.targetValueType = clone.targetValueType;\n+ this.fields = new ArrayList<>(clone.fields);\n+ this.valueType = clone.valueType;\n+ this.format = clone.format;\n+ this.missingMap = new HashMap<>(clone.missingMap);\n+ this.missing = clone.missing;\n+ }\n+\n protected MultiValuesSourceAggregationBuilder(StreamInput in, ValuesSourceType valuesSourceType, ValueType targetValueType)\n throws IOException {\n super(in);",
"filename": "modules/aggs-matrix-stats/src/main/java/org/elasticsearch/search/aggregations/support/MultiValuesSourceAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,8 @@\n import org.elasticsearch.index.mapper.ParentFieldMapper;\n import org.elasticsearch.join.mapper.ParentIdFieldMapper;\n import org.elasticsearch.join.mapper.ParentJoinFieldMapper;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.FieldContext;\n@@ -43,6 +45,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class ChildrenAggregationBuilder\n@@ -68,6 +71,19 @@ public ChildrenAggregationBuilder(String name, String childType) {\n this.childType = childType;\n }\n \n+ protected ChildrenAggregationBuilder(ChildrenAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.childType = clone.childType;\n+ this.childFilter = clone.childFilter;\n+ this.parentFilter = clone.parentFilter;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new ChildrenAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/aggregations/ChildrenAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -45,6 +45,13 @@ public AbstractAggregationBuilder(String name) {\n super(name);\n }\n \n+ protected AbstractAggregationBuilder(AbstractAggregationBuilder<AB> clone,\n+ AggregatorFactories.Builder factoriesBuilder,\n+ Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder);\n+ this.metaData = metaData;\n+ }\n+\n /**\n * Read from a stream.\n */\n@@ -149,9 +156,7 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n if (factoriesBuilder != null && (factoriesBuilder.count()) > 0) {\n builder.field(\"aggregations\");\n factoriesBuilder.toXContent(builder, params);\n-\n }\n-\n return builder.endObject();\n }\n ",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/AbstractAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.slice.SliceBuilder;\n \n import java.io.IOException;\n import java.util.List;\n@@ -52,6 +53,11 @@ protected AggregationBuilder(String name) {\n this.name = name;\n }\n \n+ protected AggregationBuilder(AggregationBuilder clone, AggregatorFactories.Builder factoriesBuilder) {\n+ this.name = clone.name;\n+ this.factoriesBuilder = factoriesBuilder;\n+ }\n+\n /** Return this aggregation's name. */\n public String getName() {\n return name;\n@@ -96,15 +102,22 @@ public List<PipelineAggregationBuilder> getPipelineAggregations() {\n @Override\n public abstract AggregationBuilder subAggregations(AggregatorFactories.Builder subFactories);\n \n+ /**\n+ * Create a shallow copy of this builder and replacing {@link #factoriesBuilder} and <code>metaData</code>.\n+ * Used by {@link #rewrite(QueryRewriteContext)}.\n+ */\n+ protected abstract AggregationBuilder shallowCopy(AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData);\n+\n public final AggregationBuilder rewrite(QueryRewriteContext context) throws IOException {\n AggregationBuilder rewritten = doRewrite(context);\n- if (rewritten == this) {\n- return rewritten;\n- }\n- rewritten.setMetaData(getMetaData());\n AggregatorFactories.Builder rewrittenSubAggs = factoriesBuilder.rewrite(context);\n- rewritten.subAggregations(rewrittenSubAggs);\n- return rewritten;\n+ if (rewritten != this) {\n+ return rewritten.setMetaData(getMetaData()).subAggregations(rewrittenSubAggs);\n+ } else if (rewrittenSubAggs != factoriesBuilder) {\n+ return shallowCopy(rewrittenSubAggs, getMetaData());\n+ } else {\n+ return this;\n+ }\n }\n \n /**",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n@@ -116,6 +117,18 @@ public AdjacencyMatrixAggregationBuilder(String name, Map<String, QueryBuilder>\n this(name, DEFAULT_SEPARATOR, filters);\n }\n \n+ protected AdjacencyMatrixAggregationBuilder(AdjacencyMatrixAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.filters = new ArrayList<>(clone.filters);\n+ this.separator = clone.separator;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new AdjacencyMatrixAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * @param name\n * the name of this aggregation",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.index.IndexSortConfig;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -67,11 +68,25 @@ private CompositeAggregationBuilder(String name) {\n this(name, null);\n }\n \n+\n public CompositeAggregationBuilder(String name, List<CompositeValuesSourceBuilder<?>> sources) {\n super(name);\n this.sources = sources;\n }\n \n+ protected CompositeAggregationBuilder(CompositeAggregationBuilder clone,\n+ AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.sources = new ArrayList<>(clone.sources);\n+ this.after = clone.after;\n+ this.size = clone.size;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new CompositeAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n public CompositeAggregationBuilder(StreamInput in) throws IOException {\n super(in);\n int num = in.readVInt();",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n@@ -58,6 +59,17 @@ public FilterAggregationBuilder(String name, QueryBuilder filter) {\n this.filter = filter;\n }\n \n+ protected FilterAggregationBuilder(FilterAggregationBuilder clone,\n+ AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.filter = clone.filter;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new FilterAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n@@ -41,6 +42,7 @@\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n@@ -96,6 +98,19 @@ public FiltersAggregationBuilder(String name, QueryBuilder... filters) {\n this.keyed = false;\n }\n \n+ public FiltersAggregationBuilder(FiltersAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.filters = new ArrayList<>(clone.filters);\n+ this.keyed = clone.keyed;\n+ this.otherBucket = clone.otherBucket;\n+ this.otherBucketKey = clone.otherBucketKey;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new FiltersAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,8 @@\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.BucketUtils;\n@@ -49,6 +51,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.GeoPoint, GeoGridAggregationBuilder>\n@@ -100,6 +103,18 @@ public GeoGridAggregationBuilder(String name) {\n super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT);\n }\n \n+ protected GeoGridAggregationBuilder(GeoGridAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.precision = clone.precision;\n+ this.requiredSize = clone.requiredSize;\n+ this.shardSize = clone.shardSize;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new GeoGridAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,11 +24,14 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class GlobalAggregationBuilder extends AbstractAggregationBuilder<GlobalAggregationBuilder> {\n public static final String NAME = \"global\";\n@@ -37,6 +40,15 @@ public GlobalAggregationBuilder(String name) {\n super(name);\n }\n \n+ protected GlobalAggregationBuilder(GlobalAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new GlobalAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/global/GlobalAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,8 @@\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.BucketOrder;\n@@ -136,6 +138,23 @@ public DateHistogramAggregationBuilder(String name) {\n super(name, ValuesSourceType.NUMERIC, ValueType.DATE);\n }\n \n+ protected DateHistogramAggregationBuilder(DateHistogramAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.interval = clone.interval;\n+ this.dateHistogramInterval = clone.dateHistogramInterval;\n+ this.offset = clone.offset;\n+ this.extendedBounds = clone.extendedBounds;\n+ this.order = clone.order;\n+ this.keyed = clone.keyed;\n+ this.minDocCount = clone.minDocCount;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new DateHistogramAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /** Read from a stream, for internal use only. */\n public DateHistogramAggregationBuilder(StreamInput in) throws IOException {\n super(in, ValuesSourceType.NUMERIC, ValueType.DATE);",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.BucketOrder;\n@@ -43,6 +45,7 @@\n \n import java.io.IOException;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n /**\n@@ -98,6 +101,22 @@ public HistogramAggregationBuilder(String name) {\n super(name, ValuesSourceType.NUMERIC, ValueType.DOUBLE);\n }\n \n+ protected HistogramAggregationBuilder(HistogramAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.interval = clone.interval;\n+ this.offset = clone.offset;\n+ this.minBound = clone.minBound;\n+ this.maxBound = clone.maxBound;\n+ this.order = clone.order;\n+ this.keyed = clone.keyed;\n+ this.minDocCount = clone.minDocCount;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new HistogramAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /** Read from a stream, for internal use only. */\n public HistogramAggregationBuilder(StreamInput in) throws IOException {\n super(in, ValuesSourceType.NUMERIC, ValueType.DOUBLE);",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -36,6 +38,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class MissingAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, MissingAggregationBuilder> {\n public static final String NAME = \"missing\";\n@@ -54,6 +57,15 @@ public MissingAggregationBuilder(String name, ValueType targetValueType) {\n super(name, ValuesSourceType.ANY, targetValueType);\n }\n \n+ protected MissingAggregationBuilder(MissingAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new MissingAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/missing/MissingAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,12 +26,15 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.mapper.ObjectMapper;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class NestedAggregationBuilder extends AbstractAggregationBuilder<NestedAggregationBuilder> {\n@@ -54,6 +57,16 @@ public NestedAggregationBuilder(String name, String path) {\n this.path = path;\n }\n \n+ protected NestedAggregationBuilder(NestedAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.path = clone.path;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new NestedAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -28,12 +28,15 @@\n import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class ReverseNestedAggregationBuilder extends AbstractAggregationBuilder<ReverseNestedAggregationBuilder> {\n@@ -45,6 +48,17 @@ public ReverseNestedAggregationBuilder(String name) {\n super(name);\n }\n \n+ public ReverseNestedAggregationBuilder(ReverseNestedAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> map) {\n+ super(clone, factoriesBuilder, map);\n+ this.path = clone.path;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new ReverseNestedAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n@@ -32,6 +34,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n import java.util.function.Function;\n \n@@ -47,6 +50,14 @@ protected AbstractRangeBuilder(String name, InternalRange.Factory<?, ?> rangeFac\n this.rangeFactory = rangeFactory;\n }\n \n+ protected AbstractRangeBuilder(AbstractRangeBuilder<AB, R> clone,\n+ AggregatorFactories.Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.rangeFactory = clone.rangeFactory;\n+ this.ranges = new ArrayList<>(clone.ranges);\n+ this.keyed = clone.keyed;\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;\n@@ -33,6 +34,7 @@\n import org.joda.time.DateTime;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class DateRangeAggregationBuilder extends AbstractRangeBuilder<DateRangeAggregationBuilder, RangeAggregator.Range> {\n public static final String NAME = \"date_range\";\n@@ -62,6 +64,15 @@ public DateRangeAggregationBuilder(String name) {\n super(name, InternalDateRange.FACTORY);\n }\n \n+ protected DateRangeAggregationBuilder(DateRangeAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new DateRangeAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/range/DateRangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -32,8 +32,10 @@\n import org.elasticsearch.common.xcontent.XContentParser.Token;\n import org.elasticsearch.common.xcontent.XContentParserUtils;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory;\n@@ -44,6 +46,7 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n import static org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range.FROM_FIELD;\n@@ -253,6 +256,20 @@ public GeoDistanceAggregationBuilder(StreamInput in) throws IOException {\n this(name, null, InternalGeoDistance.FACTORY);\n }\n \n+ protected GeoDistanceAggregationBuilder(GeoDistanceAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.origin = clone.origin;\n+ this.distanceType = clone.distanceType;\n+ this.unit = clone.unit;\n+ this.keyed = clone.keyed;\n+ this.ranges = new ArrayList<>(clone.ranges);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new GeoDistanceAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n GeoDistanceAggregationBuilder origin(GeoPoint origin) {\n this.origin = origin;\n return this;",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/range/GeoDistanceAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.common.xcontent.XContentParser.Token;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -50,6 +51,7 @@\n import java.util.ArrayList;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n \n@@ -222,6 +224,17 @@ public IpRangeAggregationBuilder(String name) {\n super(name, ValuesSourceType.BYTES, ValueType.IP);\n }\n \n+ protected IpRangeAggregationBuilder(IpRangeAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.ranges = new ArrayList<>(clone.ranges);\n+ this.keyed = clone.keyed;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new IpRangeAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n @Override\n public String getType() {\n return NAME;",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range;\n@@ -33,6 +34,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class RangeAggregationBuilder extends AbstractRangeBuilder<RangeAggregationBuilder, Range> {\n public static final String NAME = \"range\";\n@@ -69,6 +71,15 @@ public RangeAggregationBuilder(StreamInput in) throws IOException {\n super(in, InternalRange.FACTORY, Range::new);\n }\n \n+ protected RangeAggregationBuilder(RangeAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new RangeAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Add a new range to this aggregation.\n *",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n@@ -36,6 +37,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class DiversifiedAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, DiversifiedAggregationBuilder> {\n@@ -64,6 +66,18 @@ public DiversifiedAggregationBuilder(String name) {\n super(name, ValuesSourceType.ANY, null);\n }\n \n+ protected DiversifiedAggregationBuilder(DiversifiedAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.shardSize = clone.shardSize;\n+ this.maxDocsPerValue = clone.maxDocsPerValue;\n+ this.executionHint = clone.executionHint;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new DiversifiedAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/DiversifiedAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,11 +25,14 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class SamplerAggregationBuilder extends AbstractAggregationBuilder<SamplerAggregationBuilder> {\n@@ -43,6 +46,16 @@ public SamplerAggregationBuilder(String name) {\n super(name);\n }\n \n+ protected SamplerAggregationBuilder(SamplerAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.shardSize = clone.shardSize;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new SamplerAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/sampler/SamplerAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.Aggregator;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n@@ -48,6 +49,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n@@ -129,6 +131,21 @@ public SignificantTermsAggregationBuilder(StreamInput in) throws IOException {\n significanceHeuristic = in.readNamedWriteable(SignificanceHeuristic.class);\n }\n \n+ protected SignificantTermsAggregationBuilder(SignificantTermsAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.bucketCountThresholds = new BucketCountThresholds(clone.bucketCountThresholds);\n+ this.executionHint = clone.executionHint;\n+ this.filterBuilder = clone.filterBuilder;\n+ this.includeExclude = clone.includeExclude;\n+ this.significanceHeuristic = clone.significanceHeuristic;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new SignificantTermsAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n @Override\n protected void innerWriteTo(StreamOutput out) throws IOException {\n bucketCountThresholds.writeTo(out);",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationInitializationException;\n import org.elasticsearch.search.aggregations.Aggregator;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;\n@@ -45,6 +46,7 @@\n import java.io.IOException;\n import java.util.Arrays;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n public class SignificantTextAggregationBuilder extends AbstractAggregationBuilder<SignificantTextAggregationBuilder> {\n@@ -123,6 +125,23 @@ public AggregationBuilder parse(String aggregationName, XContentParser parser)\n };\n }\n \n+ protected SignificantTextAggregationBuilder(SignificantTextAggregationBuilder clone,\n+ Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.bucketCountThresholds = new BucketCountThresholds(clone.bucketCountThresholds);\n+ this.fieldName = clone.fieldName;\n+ this.filterBuilder = clone.filterBuilder;\n+ this.filterDuplicateText = clone.filterDuplicateText;\n+ this.includeExclude = clone.includeExclude;\n+ this.significanceHeuristic = clone.significanceHeuristic;\n+ this.sourceFieldNames = clone.sourceFieldNames;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new SignificantTextAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n protected TermsAggregator.BucketCountThresholds getBucketCountThresholds() {\n return new TermsAggregator.BucketCountThresholds(bucketCountThresholds);\n }",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTextAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.BucketOrder;\n@@ -44,6 +45,7 @@\n \n import java.io.IOException;\n import java.util.List;\n+import java.util.Map;\n import java.util.Objects;\n \n public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, TermsAggregationBuilder>\n@@ -109,6 +111,21 @@ public TermsAggregationBuilder(String name, ValueType valueType) {\n super(name, ValuesSourceType.ANY, valueType);\n }\n \n+ protected TermsAggregationBuilder(TermsAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.order = clone.order;\n+ this.executionHint = clone.executionHint;\n+ this.includeExclude = clone.includeExclude;\n+ this.collectMode = clone.collectMode;\n+ this.bucketCountThresholds = new BucketCountThresholds(clone.bucketCountThresholds);\n+ this.showTermDocCountError = clone.showTermDocCountError;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new TermsAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class AvgAggregationBuilder extends ValuesSourceAggregationBuilder.LeafOnly<ValuesSource.Numeric, AvgAggregationBuilder> {\n public static final String NAME = \"avg\";\n@@ -55,13 +57,22 @@ public AvgAggregationBuilder(String name) {\n super(name, ValuesSourceType.NUMERIC, ValueType.NUMERIC);\n }\n \n+ public AvgAggregationBuilder(AvgAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */\n public AvgAggregationBuilder(StreamInput in) throws IOException {\n super(in, ValuesSourceType.NUMERIC, ValueType.NUMERIC);\n }\n \n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new AvgAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n @Override\n protected void innerWriteTo(StreamOutput out) {\n // Do nothing, no extra state to write to stream",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/avg/AvgAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public final class CardinalityAggregationBuilder\n@@ -65,6 +67,11 @@ public CardinalityAggregationBuilder(String name, ValueType targetValueType) {\n super(name, ValuesSourceType.ANY, targetValueType);\n }\n \n+ public CardinalityAggregationBuilder(CardinalityAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.precisionThreshold = clone.precisionThreshold;\n+ }\n+\n /**\n * Read from a stream.\n */\n@@ -75,6 +82,11 @@ public CardinalityAggregationBuilder(StreamInput in) throws IOException {\n }\n }\n \n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new CardinalityAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n @Override\n protected void innerWriteTo(StreamOutput out) throws IOException {\n boolean hasPrecisionThreshold = precisionThreshold != null;",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -36,6 +37,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n import java.util.Objects;\n \n public class GeoBoundsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.GeoPoint, GeoBoundsAggregationBuilder> {\n@@ -58,6 +60,16 @@ public GeoBoundsAggregationBuilder(String name) {\n super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT);\n }\n \n+ protected GeoBoundsAggregationBuilder(GeoBoundsAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ this.wrapLongitude = clone.wrapLongitude;\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new GeoBoundsAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.support.ValueType;\n@@ -36,6 +37,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Map;\n \n public class GeoCentroidAggregationBuilder\n extends ValuesSourceAggregationBuilder.LeafOnly<ValuesSource.GeoPoint, GeoCentroidAggregationBuilder> {\n@@ -55,6 +57,15 @@ public GeoCentroidAggregationBuilder(String name) {\n super(name, ValuesSourceType.GEOPOINT, ValueType.GEOPOINT);\n }\n \n+ protected GeoCentroidAggregationBuilder(GeoCentroidAggregationBuilder clone, Builder factoriesBuilder, Map<String, Object> metaData) {\n+ super(clone, factoriesBuilder, metaData);\n+ }\n+\n+ @Override\n+ protected AggregationBuilder shallowCopy(Builder factoriesBuilder, Map<String, Object> metaData) {\n+ return new GeoCentroidAggregationBuilder(this, factoriesBuilder, metaData);\n+ }\n+\n /**\n * Read from a stream.\n */",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/geocentroid/GeoCentroidAggregationBuilder.java",
"status": "modified"
}
]
} |
{
"body": "This was first raised on the forum: https://discuss.elastic.co/t/failed-shards-with-top-hits/116407/5\r\n\r\nThe details for reproducing this are still not fully known but I will update the issue with more information as I investigate it.\r\n\r\nThe query that caused the bug is a deeply nested aggregation with a top_hits leaf aggregation:\r\n```\r\n },\r\n \"aggs\":{ \r\n \"attributes_products\":{ \r\n \"filter\":{ \r\n \"bool\":{ \r\n \"must\":[ \r\n \tsome filters\r\n ]\r\n }\r\n },\r\n \"aggs\":{ \r\n \"attributes\":{ \r\n \"nested\":{ \r\n \"path\":\"attributes\"\r\n },\r\n \"aggs\":{ \r\n \"code\":{ \r\n \"terms\":{ \r\n \"field\":\"attributes.code\",\r\n \"size\":40\r\n },\r\n \"aggs\":{ \r\n \"translations\":{ \r\n \"nested\":{ \r\n \"path\":\"attributes.translated_fields\"\r\n },\r\n \"aggs\":{ \r\n \"sk\":{ \r\n \"nested\":{ \r\n \"path\":\"attributes.translated_fields.sk\"\r\n },\r\n \"aggs\":{ \r\n \"value\":{ \r\n \"terms\":{ \r\n \"field\":\"attributes.translated_fields.sk.value\"\r\n \"size\":40\r\n },\r\n \"aggs\":{ \r\n \"source\":{ \r\n \"aggs\":{ \r\n \"top_attributes\":{ \r\n \"top_hits\":{ \r\n \"size\":1\r\n }\r\n },\r\n \"products\":{ \r\n \"reverse_nested\":{ \r\n\r\n },\r\n \"aggs\":{ \r\n \"cardinality\":{ \r\n \"cardinality\":{ \r\n \"field\":\"parent_id\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"reverse_nested\":{ \r\n \"path\":\"attributes\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nOn 4 of the 5 shards involved in the request a NullPointerException was thrown:\r\n```\r\norg.elasticsearch.transport.RemoteTransportException: [EWaVA2I][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.NullPointerException\r\n at org.elasticsearch.search.aggregations.bucket.BestBucketsDeferringCollector.prepareSelectedBuckets(BestBucketsDeferringCollector.java:160) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.DeferringBucketCollector.replay(DeferringBucketCollector.java:44) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.AggregatorBase.runDeferredCollections(AggregatorBase.java:206) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.terms.LongTermsAggregator.buildAggregation(LongTermsAggregator.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.AggregatorFactory$MultiBucketAggregatorWrapper.buildAggregation(AggregatorFactory.java:147) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:116) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.nested.NestedAggregator.buildAggregation(NestedAggregator.java:97) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.AggregatorFactory$MultiBucketAggregatorWrapper.buildAggregation(AggregatorFactory.java:147) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:116) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregator.buildAggregation(FilterAggregator.java:72) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:116) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at 
org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator.buildAggregation(GlobalAggregator.java:59) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:129) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:114) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:248) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:263) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]\r\n```\r\nThe code that this exception references is:\r\nhttps://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java#L160\r\n\r\nWhich has a comment stating that we do not expect the scorer to ever be null here.\r\n\r\nAlthough this bug was found on 5.5 and I have not been able to reproduce it yet, I suspect it might still exist on recent versions, as on master we make the same assumption:\r\nhttps://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java#L168",
"comments": [
{
"body": "The example in the forum is not correctly formatted but the aggregation that fails with an NPE starts with:\r\n\r\n````\r\n\"attributes\":{\r\n \"global\":{\r\n````\r\n\r\nThe `breadth_first` mode of the `terms` aggregation doesn't handle the `global` context when scores are needed by a sub aggregation. This is the case here because of the `top_hits` aggregation that by default uses the `_score` as the `sort` criteria. This is resolved in 6x by https://github.com/elastic/elasticsearch/pull/27942 but a simple fix for 5x is to use a different sort on the `top_hits` aggregation:\r\n\r\n````\r\n \"top_attributes\":{\r\n \"top_hits\":{\r\n \"size\":1,\r\n \"sort\": [\"_doc\"]\r\n }\r\n````\r\n\r\nThis will prevent the `terms` aggregation to rebuild a scorer to replay matching documents on the top buckets. In 5x this scorer is always built from the query and this leads to NPE when executed in a `global` context (which by definition ignores the query and match all document). All document have the same `score` in the `global` context so switching to `_doc` should yield the same result.\r\n\r\n\r\n \r\n\r\n\r\n\r\n\r\n",
"created_at": "2018-01-26T22:27:08Z"
},
{
"body": "I tested on 6x and it fails for another reason. We cannot access the score of children (`nested`) documents in the `breadth_first` mode. The `terms` aggregation is not aware that the execution is done in a `nested` context (where only the parent document matches the query) so we should force the `depth_first` mode when a `nested` sub-aggregation needs to access the score. I'll work on a pr and link to that issue.",
"created_at": "2018-01-26T22:44:59Z"
}
],
"number": 28394,
"title": "Scorer can be unexpectedly null in BestBucketsDeferringCollector when there is a sub agg which uses the score"
} | {
"body": "This commit forces the depth_first mode for `terms` aggregation that contain a sub-aggregation that need to access the score of the document\r\nin a nested context (the `terms` aggregation is a child of a `nested` aggregation). The score of children documents is not accessible in\r\nbreadth_first mode because the `terms` aggregation cannot access the nested context.\r\n\r\nClose #28394\r\n",
"number": 28421,
"review_comments": [
{
"body": "maybe randomly force breadth_first mode? ",
"created_at": "2018-02-02T13:49:09Z"
}
],
"title": "Force depth_first mode execution for terms aggregation under a nested context"
} | {
"commits": [
{
"message": "Force depth_first mode execution for terms aggregation under a nested context\n\nThis commit forces the depth_first mode for `terms` aggregation that contain a sub-aggregation that need to access the score of the document\nin a nested context (the `terms` aggregation is a child of a `nested` aggregation). The score of children documents is not accessible in\nbreadth_first mode because the `terms` aggregation cannot access the nested context.\n\nClose #28394"
},
{
"message": "add missing change"
},
{
"message": "add test for depth_first execution mode on nested aggregation context"
}
],
"files": [
{
"diff": "@@ -47,7 +47,7 @@\n import java.util.List;\n import java.util.Map;\n \n-class NestedAggregator extends BucketsAggregator implements SingleBucketAggregator {\n+public class NestedAggregator extends BucketsAggregator implements SingleBucketAggregator {\n \n static final ParseField PATH_FIELD = new ParseField(\"path\");\n ",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.search.aggregations.bucket.DeferableBucketAggregator;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation.Bucket;\n import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;\n+import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregator;\n import org.elasticsearch.search.aggregations.metrics.NumericMetricsAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.support.AggregationPath;\n@@ -187,7 +188,16 @@ public TermsAggregator(String name, AggregatorFactories factories, SearchContext\n this.bucketCountThresholds = bucketCountThresholds;\n this.order = InternalOrder.validate(order, this);\n this.format = format;\n- this.collectMode = collectMode;\n+ if (subAggsNeedScore() && descendsFromNestedAggregator(parent)) {\n+ /**\n+ * Force the execution to depth_first because we need to access the score of\n+ * nested documents in a sub-aggregation and we are not able to generate this score\n+ * while replaying deferred documents.\n+ */\n+ this.collectMode = SubAggCollectionMode.DEPTH_FIRST;\n+ } else {\n+ this.collectMode = collectMode;\n+ }\n // Don't defer any child agg if we are dependent on it for pruning results\n if (order instanceof Aggregation){\n AggregationPath path = ((Aggregation) order).path();\n@@ -203,6 +213,25 @@ public TermsAggregator(String name, AggregatorFactories factories, SearchContext\n }\n }\n \n+ static boolean descendsFromNestedAggregator(Aggregator parent) {\n+ while (parent != null) {\n+ if (parent.getClass() == NestedAggregator.class) {\n+ return true;\n+ }\n+ parent = parent.parent();\n+ }\n+ return false;\n+ }\n+\n+ private boolean subAggsNeedScore() {\n+ for (Aggregator subAgg : subAggregators) {\n+ if (subAgg.needsScores()) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n /**\n * Internal Optimization for ordering {@link InternalTerms.Bucket}s by a sub aggregation.\n * <p>",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -30,8 +30,11 @@\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.DocValuesFieldExistsQuery;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.NumericUtils;\n@@ -44,6 +47,9 @@\n import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.index.mapper.SeqNoFieldMapper;\n+import org.elasticsearch.index.mapper.TypeFieldMapper;\n+import org.elasticsearch.index.mapper.UidFieldMapper;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.elasticsearch.search.SearchHit;\n@@ -59,9 +65,14 @@\n import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.global.InternalGlobal;\n+import org.elasticsearch.search.aggregations.bucket.nested.InternalNested;\n+import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregator;\n import org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits;\n import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n+import org.elasticsearch.search.sort.FieldSortBuilder;\n+import org.elasticsearch.search.sort.ScoreSortBuilder;\n \n import java.io.IOException;\n import java.net.InetAddress;\n@@ -74,6 +85,7 @@\n import java.util.function.BiFunction;\n import java.util.function.Function;\n \n+import static org.elasticsearch.index.mapper.SeqNoFieldMapper.PRIMARY_TERM_NAME;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -999,6 +1011,81 @@ public void testGlobalAggregationWithScore() throws IOException {\n }\n }\n \n+ public void testWithNestedAggregations() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ for (int i = 0; i < 10; i++) {\n+ int[] nestedValues = new int[i];\n+ for (int j = 0; j < i; j++) {\n+ nestedValues[j] = j;\n+ }\n+ indexWriter.addDocuments(generateDocsWithNested(Integer.toString(i), i, nestedValues));\n+ }\n+ indexWriter.commit();\n+ for (Aggregator.SubAggCollectionMode mode : Aggregator.SubAggCollectionMode.values()) {\n+ for (boolean withScore : new boolean[]{true, false}) {\n+ NestedAggregationBuilder nested = new NestedAggregationBuilder(\"nested\", \"nested_object\")\n+ .subAggregation(new TermsAggregationBuilder(\"terms\", ValueType.LONG)\n+ .field(\"nested_value\")\n+ // force the breadth_first mode\n+ .collectMode(mode)\n+ .order(BucketOrder.key(true))\n+ .subAggregation(\n+ new TopHitsAggregationBuilder(\"top_hits\")\n+ .sort(withScore ? 
new ScoreSortBuilder() : new FieldSortBuilder(\"_doc\"))\n+ .storedField(\"_none_\")\n+ )\n+ );\n+ MappedFieldType fieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ fieldType.setHasDocValues(true);\n+ fieldType.setName(\"nested_value\");\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n+ InternalNested result = search(newSearcher(indexReader, false, true),\n+ // match root document only\n+ new DocValuesFieldExistsQuery(PRIMARY_TERM_NAME), nested, fieldType);\n+ InternalMultiBucketAggregation<?, ?> terms = result.getAggregations().get(\"terms\");\n+ assertThat(terms.getBuckets().size(), equalTo(9));\n+ int ptr = 9;\n+ for (MultiBucketsAggregation.Bucket bucket : terms.getBuckets()) {\n+ InternalTopHits topHits = bucket.getAggregations().get(\"top_hits\");\n+ assertThat(topHits.getHits().totalHits, equalTo((long) ptr));\n+ if (withScore) {\n+ assertThat(topHits.getHits().getMaxScore(), equalTo(1f));\n+ } else {\n+ assertThat(topHits.getHits().getMaxScore(), equalTo(Float.NaN));\n+ }\n+ --ptr;\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ private final SeqNoFieldMapper.SequenceIDFields sequenceIDFields = SeqNoFieldMapper.SequenceIDFields.emptySeqID();\n+ private List<Document> generateDocsWithNested(String id, int value, int[] nestedValues) {\n+ List<Document> documents = new ArrayList<>();\n+\n+ for (int nestedValue : nestedValues) {\n+ Document document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"docs#\" + id, UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_object\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedNumericDocValuesField(\"nested_value\", nestedValue));\n+ documents.add(document);\n+ }\n+\n+ Document document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"docs#\" + id, UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"docs\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedNumericDocValuesField(\"value\", value));\n+ document.add(sequenceIDFields.primaryTerm);\n+ documents.add(document);\n+\n+ return documents;\n+ }\n+\n+\n private IndexReader createIndexWithLongs() throws IOException {\n Directory directory = newDirectory();\n RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory);",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java",
"status": "modified"
}
]
} |
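
As a minimal sketch of the request shape this change targets, reassembled from the builders used in the PR's test above (and assuming an index with a nested `nested_object` field and a numeric `nested_value` sub-field), the following Java snippet builds a `terms` aggregation under a `nested` aggregation whose `top_hits` sub-aggregation sorts by score. After this PR, the `TermsAggregator` constructor silently overrides the requested `breadth_first` mode with `depth_first` because the sub-aggregation needs scores inside the nested context:

```java
import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;
import org.elasticsearch.search.aggregations.BucketOrder;
import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;
import org.elasticsearch.search.aggregations.support.ValueType;
import org.elasticsearch.search.sort.ScoreSortBuilder;

public class NestedTermsWithScoreExample {

    public static NestedAggregationBuilder buildAggregation() {
        // The terms aggregation explicitly asks for breadth_first...
        return new NestedAggregationBuilder("nested", "nested_object")
            .subAggregation(new TermsAggregationBuilder("terms", ValueType.LONG)
                .field("nested_value")
                .collectMode(SubAggCollectionMode.BREADTH_FIRST)
                .order(BucketOrder.key(true))
                // ...but top_hits sorted by score needs child-document scores, which
                // cannot be recomputed when replaying deferred buckets in a nested
                // context, so the TermsAggregator constructor forces depth_first.
                .subAggregation(new TopHitsAggregationBuilder("top_hits")
                    .sort(new ScoreSortBuilder())
                    .storedField("_none_")));
    }
}
```

This is the same shape, with the explicit `breadth_first` hint, that the PR's randomized test exercises to assert that scores remain available to `top_hits` under either requested collect mode.
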
{
"body": "`InternalEngineTests.testRefreshScopedSearcher` threw an `AssertionError` in https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+intake/858/console\r\n\r\n```\r\nFAILURE 0.12s J0 | InternalEngineTests.testRefreshScopedSearcher <<< FAILURES!\r\n > Throwable #1: java.lang.AssertionError: expected same:<IndexSearcher(ElasticsearchDirectoryReader(FilterLeafReader(_0(7.1.0):c10) FilterLeafReader(_1(7.1.0):c1)); executor=null)> was not:<IndexSearcher(ElasticsearchDirectoryReader(FilterLeafReader(_0(7.1.0):c10) FilterLeafReader(_1(7.1.0):c1)); executor=null)>\r\n > \tat __randomizedtesting.SeedInfo.seed([168988A8C61979AC:7115CA98C7335368]:0)\r\n > \tat org.elasticsearch.index.engine.InternalEngineTests.testRefreshScopedSearcher(InternalEngineTests.java:3950)\r\n > \tat java.lang.Thread.run(Thread.java:745)\r\n```\r\n\r\n```\r\nREPRODUCE WITH: gradle :core:test \\\r\n -Dtests.seed=168988A8C61979AC \\\r\n -Dtests.class=org.elasticsearch.index.engine.InternalEngineTests \\\r\n -Dtests.method=\"testRefreshScopedSearcher\" \\\r\n -Dtests.security.manager=true \\\r\n -Dtests.locale=pt \\\r\n -Dtests.timezone=JST\r\n```\r\n\r\nThis did not reproduce for me locally.",
"comments": [
{
"body": "Another instance on master but did not reproduce locally. https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+g1gc/5547/console\r\n\r\n```\r\n./gradlew :server:test \\\r\n -Dtests.seed=8A11D983DCE84E35 \\\r\n -Dtests.class=org.elasticsearch.index.engine.InternalEngineTests \\\r\n -Dtests.method=\"testRefreshScopedSearcher\" \\\r\n -Dtests.security.manager=true \\\r\n -Dtests.jvm.argline=\"-XX:-UseConcMarkSweepGC -XX:+UseG1GC\" \\\r\n -Dtests.locale=es-PA \\\r\n -Dtests.timezone=Pacific/Guadalcanal\r\n```\r\n\r\nLog: [testRefreshScopedSearcher.txt](https://github.com/elastic/elasticsearch/files/1669749/testRefreshScopedSearcher.txt)\r\n",
"created_at": "2018-01-27T03:52:55Z"
}
],
"number": 27514,
"title": "AssertionError in InternalEngineTests.testRefreshScopedSearcher"
} | {
"body": "This change switches the merge policy to none (for this specific test) in order to make sure that refreshes are always triggered by a change in the writer.\r\n\r\n Closes #27514",
"number": 28417,
"review_comments": [],
"title": "Fix intermittent failure in InternalEngineTest#testRefreshScopedSearcher"
} | {
"commits": [
{
"message": "Fix intermittent failure in InternalEngineTest#testRefreshScopedSearcher\n\nThis change switches the merge policy to none (for this specific test) in order to make sure that refreshes are always triggered\n by a change in the writer.\n\n Closes #27514"
},
{
"message": "Merge branch 'master' into tests/refresh_scope"
}
],
"files": [
{
"diff": "@@ -4095,61 +4095,67 @@ public void assertNotSameReader(Searcher left, Searcher right) {\n }\n \n public void testRefreshScopedSearcher() throws IOException {\n- try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n- Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- assertSameReader(getSearcher, searchSearcher);\n- }\n- for (int i = 0; i < 10; i++) {\n- final String docId = Integer.toString(i);\n+ try (Store store = createStore();\n+ InternalEngine engine =\n+ // disable merges to make sure that the reader doesn't change unexpectedly during the test\n+ createEngine(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE)) {\n+\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ assertSameReader(getSearcher, searchSearcher);\n+ }\n+ for (int i = 0; i < 10; i++) {\n+ final String docId = Integer.toString(i);\n+ final ParsedDocument doc =\n+ testParsedDocument(docId, null, testDocumentWithTextField(), SOURCE, null);\n+ Engine.Index primaryResponse = indexForDoc(doc);\n+ engine.index(primaryResponse);\n+ }\n+ assertTrue(engine.refreshNeeded());\n+ engine.refresh(\"test\", Engine.SearcherScope.INTERNAL);\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ assertEquals(10, getSearcher.reader().numDocs());\n+ assertEquals(0, searchSearcher.reader().numDocs());\n+ assertNotSameReader(getSearcher, searchSearcher);\n+ }\n+ engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n+\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ assertEquals(10, getSearcher.reader().numDocs());\n+ assertEquals(10, searchSearcher.reader().numDocs());\n+ assertSameReader(getSearcher, searchSearcher);\n+ }\n+\n+ // now ensure external refreshes are reflected on the internal reader\n+ final String docId = Integer.toString(10);\n final ParsedDocument doc =\n testParsedDocument(docId, null, testDocumentWithTextField(), SOURCE, null);\n Engine.Index primaryResponse = indexForDoc(doc);\n engine.index(primaryResponse);\n- }\n- assertTrue(engine.refreshNeeded());\n- engine.refresh(\"test\", Engine.SearcherScope.INTERNAL);\n- try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n- Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- assertEquals(10, getSearcher.reader().numDocs());\n- assertEquals(0, searchSearcher.reader().numDocs());\n- assertNotSameReader(getSearcher, searchSearcher);\n- }\n- engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n-\n- try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n- Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- assertEquals(10, getSearcher.reader().numDocs());\n- assertEquals(10, searchSearcher.reader().numDocs());\n- assertSameReader(getSearcher, searchSearcher);\n- }\n \n- // now ensure external refreshes are reflected on the internal reader\n- final String docId = Integer.toString(10);\n- final ParsedDocument doc =\n- testParsedDocument(docId, null, testDocumentWithTextField(), 
SOURCE, null);\n- Engine.Index primaryResponse = indexForDoc(doc);\n- engine.index(primaryResponse);\n-\n- engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n+ engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n \n- try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n- Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- assertEquals(11, getSearcher.reader().numDocs());\n- assertEquals(11, searchSearcher.reader().numDocs());\n- assertSameReader(getSearcher, searchSearcher);\n- }\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ assertEquals(11, getSearcher.reader().numDocs());\n+ assertEquals(11, searchSearcher.reader().numDocs());\n+ assertSameReader(getSearcher, searchSearcher);\n+ }\n \n- try (Searcher searcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL)){\n- engine.refresh(\"test\", Engine.SearcherScope.INTERNAL);\n- try (Searcher nextSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL)){\n- assertSame(searcher.searcher(), nextSearcher.searcher());\n+ try (Searcher searcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL)) {\n+ engine.refresh(\"test\", Engine.SearcherScope.INTERNAL);\n+ try (Searcher nextSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL)) {\n+ assertSame(searcher.searcher(), nextSearcher.searcher());\n+ }\n }\n- }\n \n- try (Searcher searcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n- try (Searcher nextSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n- assertSame(searcher.searcher(), nextSearcher.searcher());\n+ try (Searcher searcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n+ try (Searcher nextSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)) {\n+ assertSame(searcher.searcher(), nextSearcher.searcher());\n+ }\n }\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
}
]
} |
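
For context on the fix above: with a default merge policy, a background segment merge can swap in a different reader even though the test has not indexed anything, which is what made the `assertSameReader`/`assertNotSameReader` assertions flaky. The following is a generic Lucene sketch, not Elasticsearch test code (the directory location and field names are made up for illustration), showing the design choice of pinning `NoMergePolicy.INSTANCE` so the reader only changes when the writer does:

```java
import java.nio.file.Files;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.NoMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class NoMergePolicyDemo {
    public static void main(String[] args) throws Exception {
        try (Directory dir = FSDirectory.open(Files.createTempDirectory("no-merge-demo"))) {
            IndexWriterConfig config = new IndexWriterConfig()
                // Disable background merges so the segment set (and thus the reader)
                // only changes when this code explicitly writes something.
                .setMergePolicy(NoMergePolicy.INSTANCE);
            try (IndexWriter writer = new IndexWriter(dir, config)) {
                DirectoryReader before = DirectoryReader.open(writer);

                Document doc = new Document();
                doc.add(new StringField("id", "1", Field.Store.NO));
                writer.addDocument(doc);

                // With merges disabled, openIfChanged returns a new reader only
                // because of the document we just added, never because of a
                // concurrent merge finishing in the background.
                DirectoryReader after = DirectoryReader.openIfChanged(before);
                System.out.println("reader changed: " + (after != null));

                before.close();
                if (after != null) {
                    after.close();
                }
            }
        }
    }
}
```
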
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nElasticsearch Version: `Version: 6.1.2, Build: 5b1fea5/2018-01-10T02:35:59.208Z, JVM: 1.8.0_151`\r\nREST High Level Java Client Verstion: 6.1.2\r\n\r\n**Plugins installed**: X-Pack\r\n\r\n**JVM version** (`java -version`):\r\nClient Version:\r\n```\r\njava version \"1.8.0_161\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_161-b12)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)\r\n```\r\nServer Version:\r\n```\r\nopenjdk version \"1.8.0_151\"\r\nOpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)\r\nOpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nClient Version:\r\n```\r\nLinux wrk-it-303 4.13.0-31-generic #34~16.04.1-Ubuntu SMP Fri Jan 19 17:11:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\nServer Version:\r\n```\r\nLinux rd-es-02 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen using the high level rest client to query and retrieve script field values containing an array of anything other than strings, the HTTP response is 200, but an exception is thrown parsing the response. A typical error log is:\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create or use any index with at least one document,\r\n 2. Construct a query which will produce at least one result,\r\n 3. Execute that query include a scripted field which produces an object, an array, or null.\r\n\r\nThis sample class shows the exceptions encountered using several trivial scripts; modify the constants at the top so that any document is found.\r\n```java\r\nimport java.io.IOException;\r\nimport java.util.HashMap;\r\n\r\nimport org.apache.http.HttpHost;\r\nimport org.elasticsearch.action.search.SearchRequest;\r\nimport org.elasticsearch.client.RestClient;\r\nimport org.elasticsearch.client.RestHighLevelClient;\r\nimport org.elasticsearch.index.query.QueryBuilders;\r\nimport org.elasticsearch.script.Script;\r\nimport org.elasticsearch.script.ScriptType;\r\n\r\npublic final class HighLevelClientScripts {\r\n private static final String ELASTICSEARCH_HOST = \"localhost\";\r\n private static final String TARGET_INDEX = \"time-series\";\r\n private static final String DOCUMENT_ID = \"NDcnKHrpsDNKN5pbJZGl6SOidqlrtDQ5IEKjdAd2p1E=\";\r\n private static final String[] TEST_SCRIPTS = new String[]{\r\n \"null\",\r\n \"new HashMap()\",\r\n \"new String[]{}\"\r\n };\r\n\r\n public static void main(String[] args) throws IOException {\r\n try (final RestHighLevelClient client = new RestHighLevelClient(\r\n RestClient.builder(new HttpHost(ELASTICSEARCH_HOST, 9200))\r\n )) {\r\n for (final String script : TEST_SCRIPTS) {\r\n final SearchRequest request = new SearchRequest(TARGET_INDEX);\r\n\r\n request.source()\r\n .query(QueryBuilders.idsQuery().addIds(DOCUMENT_ID))\r\n .scriptField(\"result\", new Script(ScriptType.INLINE, \"painless\", script, new HashMap<>()));\r\n\r\n try {\r\n client.search(request);\r\n System.out.println(\"Script '\" + script + \"' succeeded!\");\r\n }\r\n catch (IOException e) {\r\n System.out.println(\"Script '\" + script + \"' caused an exception\");\r\n e.printStackTrace(System.out);\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Output of the sample class**:\r\n```\r\nScript 'null' caused an exception\r\njava.io.IOException: Unable to parse response body for Response{requestLine=GET 
/time-series/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&search_type=query_then_fetch&batched_reduce_size=512 HTTP/1.1, host=http://rd-es-01:9200, response=HTTP/1.1 200 OK}\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:462)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)\r\n\tat com.ze.zemart.elasticsearchclient.HighLevelClientScripts.main(HighLevelClientScripts.java:36)\r\nCaused by: ParsingException[[innerHitParser] failed to parse field [fields]]; nested: ParsingException[Failed to parse object: unexpected token [VALUE_NULL] found];\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:316)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:325)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:169)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183)\r\n\tat org.elasticsearch.search.SearchHit.fromXContent(SearchHit.java:500)\r\n\tat org.elasticsearch.search.SearchHits.fromXContent(SearchHits.java:150)\r\n\tat org.elasticsearch.action.search.SearchResponse.fromXContent(SearchResponse.java:281)\r\n\tat org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)\r\n\tat org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$2(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:460)\r\n\t... 3 more\r\nCaused by: ParsingException[Failed to parse object: unexpected token [VALUE_NULL] found]\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken(XContentParserUtils.java:67)\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue(XContentParserUtils.java:108)\r\n\tat org.elasticsearch.common.document.DocumentField.fromXContent(DocumentField.java:142)\r\n\tat org.elasticsearch.search.SearchHit.parseFields(SearchHit.java:610)\r\n\tat org.elasticsearch.search.SearchHit.lambda$declareInnerHitsParseFields$13(SearchHit.java:522)\r\n\tat org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObject$1(AbstractObjectParser.java:148)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$1(ObjectParser.java:214)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:314)\r\n\t... 
12 more\r\nScript 'new HashMap()' caused an exception\r\njava.io.IOException: Unable to parse response body for Response{requestLine=GET /time-series/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&search_type=query_then_fetch&batched_reduce_size=512 HTTP/1.1, host=http://rd-es-01:9200, response=HTTP/1.1 200 OK}\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:462)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)\r\n\tat com.ze.zemart.elasticsearchclient.HighLevelClientScripts.main(HighLevelClientScripts.java:36)\r\nCaused by: ParsingException[[innerHitParser] failed to parse field [fields]]; nested: ParsingException[Failed to parse object: unexpected token [START_OBJECT] found];\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:316)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:325)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:169)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183)\r\n\tat org.elasticsearch.search.SearchHit.fromXContent(SearchHit.java:500)\r\n\tat org.elasticsearch.search.SearchHits.fromXContent(SearchHits.java:150)\r\n\tat org.elasticsearch.action.search.SearchResponse.fromXContent(SearchResponse.java:281)\r\n\tat org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)\r\n\tat org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$2(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:460)\r\n\t... 3 more\r\nCaused by: ParsingException[Failed to parse object: unexpected token [START_OBJECT] found]\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken(XContentParserUtils.java:67)\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue(XContentParserUtils.java:108)\r\n\tat org.elasticsearch.common.document.DocumentField.fromXContent(DocumentField.java:142)\r\n\tat org.elasticsearch.search.SearchHit.parseFields(SearchHit.java:610)\r\n\tat org.elasticsearch.search.SearchHit.lambda$declareInnerHitsParseFields$13(SearchHit.java:522)\r\n\tat org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObject$1(AbstractObjectParser.java:148)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$1(ObjectParser.java:214)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:314)\r\n\t... 
12 more\r\nScript 'new String[]{}' caused an exception\r\njava.io.IOException: Unable to parse response body for Response{requestLine=GET /time-series/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&search_type=query_then_fetch&batched_reduce_size=512 HTTP/1.1, host=http://rd-es-01:9200, response=HTTP/1.1 200 OK}\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:462)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)\r\n\tat com.ze.zemart.elasticsearchclient.HighLevelClientScripts.main(HighLevelClientScripts.java:36)\r\nCaused by: ParsingException[[innerHitParser] failed to parse field [fields]]; nested: ParsingException[Failed to parse object: unexpected token [START_ARRAY] found];\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:316)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:325)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:169)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183)\r\n\tat org.elasticsearch.search.SearchHit.fromXContent(SearchHit.java:500)\r\n\tat org.elasticsearch.search.SearchHits.fromXContent(SearchHits.java:150)\r\n\tat org.elasticsearch.action.search.SearchResponse.fromXContent(SearchResponse.java:281)\r\n\tat org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)\r\n\tat org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$2(RestHighLevelClient.java:429)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:460)\r\n\t... 3 more\r\nCaused by: ParsingException[Failed to parse object: unexpected token [START_ARRAY] found]\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken(XContentParserUtils.java:67)\r\n\tat org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue(XContentParserUtils.java:108)\r\n\tat org.elasticsearch.common.document.DocumentField.fromXContent(DocumentField.java:142)\r\n\tat org.elasticsearch.search.SearchHit.parseFields(SearchHit.java:610)\r\n\tat org.elasticsearch.search.SearchHit.lambda$declareInnerHitsParseFields$13(SearchHit.java:522)\r\n\tat org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObject$1(AbstractObjectParser.java:148)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$1(ObjectParser.java:214)\r\n\tat org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:314)\r\n\t... 12 more\r\n```",
"comments": [
{
"body": "thanks a lot for raising this and for the clear recreation @wfhartford !",
"created_at": "2018-01-31T10:33:47Z"
}
],
"number": 28380,
"title": "High Level REST client fails to parse query results with certain script fields"
} | {
"body": "Script fields can get a bit more complicated than just stored fields. A script can return null, an object and also an array. Extended parsing to support such valid values. Also renamed util method from `parseStoredFieldsValue` to `parseFieldsValue` given that it can parse stored fields but also script fields, anything that's returned as `fields`.\r\n\r\nCloses #28380",
"number": 28395,
"review_comments": [],
"title": "Fix parsing of script fields"
} | {
"commits": [
{
"message": "REST high-level client: Fix parsing of script fields\n\nScript fields can get a bit more complicated than just stored fields. A script can return null, an object and also an array. Extended parsing to support such valid values. Also renamed util method from `parseStoredFieldsValue` to `parseFieldsValue` given that it can parse stored fields but also script fields, anything that's returned as `fields`.\n\nCloses #28380"
}
],
"files": [
{
"diff": "@@ -23,7 +23,6 @@\n import org.apache.http.entity.ContentType;\n import org.apache.http.entity.StringEntity;\n import org.apache.http.nio.entity.NStringEntity;\n-import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchStatusException;\n import org.elasticsearch.action.search.ClearScrollRequest;\n@@ -35,9 +34,7 @@\n import org.elasticsearch.action.search.SearchScrollRequest;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.MatchQueryBuilder;\n-import org.elasticsearch.index.query.NestedQueryBuilder;\n import org.elasticsearch.index.query.ScriptQueryBuilder;\n import org.elasticsearch.index.query.TermsQueryBuilder;\n import org.elasticsearch.join.aggregations.Children;\n@@ -66,6 +63,8 @@\n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Collections;\n+import java.util.List;\n+import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.both;\n@@ -432,6 +431,47 @@ public void testSearchWithSuggest() throws IOException {\n }\n }\n \n+ public void testSearchWithWeirdScriptFields() throws Exception {\n+ HttpEntity entity = new NStringEntity(\"{ \\\"field\\\":\\\"value\\\"}\", ContentType.APPLICATION_JSON);\n+ client().performRequest(\"PUT\", \"test/type/1\", Collections.emptyMap(), entity);\n+ client().performRequest(\"POST\", \"/test/_refresh\");\n+\n+ {\n+ SearchRequest searchRequest = new SearchRequest(\"test\").source(SearchSourceBuilder.searchSource()\n+ .scriptField(\"result\", new Script(\"null\")));\n+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);\n+ SearchHit searchHit = searchResponse.getHits().getAt(0);\n+ List<Object> values = searchHit.getFields().get(\"result\").getValues();\n+ assertNotNull(values);\n+ assertEquals(1, values.size());\n+ assertNull(values.get(0));\n+ }\n+ {\n+ SearchRequest searchRequest = new SearchRequest(\"test\").source(SearchSourceBuilder.searchSource()\n+ .scriptField(\"result\", new Script(\"new HashMap()\")));\n+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);\n+ SearchHit searchHit = searchResponse.getHits().getAt(0);\n+ List<Object> values = searchHit.getFields().get(\"result\").getValues();\n+ assertNotNull(values);\n+ assertEquals(1, values.size());\n+ assertThat(values.get(0), instanceOf(Map.class));\n+ Map<?, ?> map = (Map<?, ?>) values.get(0);\n+ assertEquals(0, map.size());\n+ }\n+ {\n+ SearchRequest searchRequest = new SearchRequest(\"test\").source(SearchSourceBuilder.searchSource()\n+ .scriptField(\"result\", new Script(\"new String[]{}\")));\n+ SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync);\n+ SearchHit searchHit = searchResponse.getHits().getAt(0);\n+ List<Object> values = searchHit.getFields().get(\"result\").getValues();\n+ assertNotNull(values);\n+ assertEquals(1, values.size());\n+ assertThat(values.get(0), instanceOf(List.class));\n+ List<?> list = (List<?>) values.get(0);\n+ assertEquals(0, list.size());\n+ }\n+ }\n+\n public void testSearchScroll() throws Exception {\n \n for (int i = 0; i < 100; i++) {",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java",
"status": "modified"
},
{
"diff": "@@ -36,7 +36,7 @@\n import java.util.Objects;\n \n import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken;\n-import static org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue;\n+import static org.elasticsearch.common.xcontent.XContentParserUtils.parseFieldsValue;\n \n /**\n * A single field name and values part of {@link SearchHit} and {@link GetResult}.\n@@ -139,7 +139,7 @@ public static DocumentField fromXContent(XContentParser parser) throws IOExcepti\n ensureExpectedToken(XContentParser.Token.START_ARRAY, token, parser::getTokenLocation);\n List<Object> values = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- values.add(parseStoredFieldsValue(parser));\n+ values.add(parseFieldsValue(parser));\n }\n return new DocumentField(fieldName, values);\n }",
"filename": "server/src/main/java/org/elasticsearch/common/document/DocumentField.java",
"status": "modified"
},
{
"diff": "@@ -39,8 +39,8 @@ private XContentParserUtils() {\n }\n \n /**\n- * Makes sure that current token is of type {@link XContentParser.Token#FIELD_NAME} and the field name is equal to the provided one\n- * @throws ParsingException if the token is not of type {@link XContentParser.Token#FIELD_NAME} or is not equal to the given field name\n+ * Makes sure that current token is of type {@link Token#FIELD_NAME} and the field name is equal to the provided one\n+ * @throws ParsingException if the token is not of type {@link Token#FIELD_NAME} or is not equal to the given field name\n */\n public static void ensureFieldName(XContentParser parser, Token token, String fieldName) throws IOException {\n ensureExpectedToken(Token.FIELD_NAME, token, parser::getTokenLocation);\n@@ -62,7 +62,7 @@ public static void throwUnknownField(String field, XContentLocation location) {\n /**\n * @throws ParsingException with a \"unknown token found\" reason\n */\n- public static void throwUnknownToken(XContentParser.Token token, XContentLocation location) {\n+ public static void throwUnknownToken(Token token, XContentLocation location) {\n String message = \"Failed to parse object: unexpected token [%s] found\";\n throw new ParsingException(location, String.format(Locale.ROOT, message, token));\n }\n@@ -83,27 +83,36 @@ public static void ensureExpectedToken(Token expected, Token actual, Supplier<XC\n * Parse the current token depending on its token type. The following token types will be\n * parsed by the corresponding parser methods:\n * <ul>\n- * <li>XContentParser.Token.VALUE_STRING: parser.text()</li>\n- * <li>XContentParser.Token.VALUE_NUMBER: parser.numberValue()</li>\n- * <li>XContentParser.Token.VALUE_BOOLEAN: parser.booleanValue()</li>\n- * <li>XContentParser.Token.VALUE_EMBEDDED_OBJECT: parser.binaryValue()</li>\n+ * <li>{@link Token#VALUE_STRING}: {@link XContentParser#text()}</li>\n+ * <li>{@link Token#VALUE_NUMBER}: {@link XContentParser#numberValue()} ()}</li>\n+ * <li>{@link Token#VALUE_BOOLEAN}: {@link XContentParser#booleanValue()} ()}</li>\n+ * <li>{@link Token#VALUE_EMBEDDED_OBJECT}: {@link XContentParser#binaryValue()} ()}</li>\n+ * <li>{@link Token#VALUE_NULL}: returns null</li>\n+ * <li>{@link Token#START_OBJECT}: {@link XContentParser#mapOrdered()} ()}</li>\n+ * <li>{@link Token#START_ARRAY}: {@link XContentParser#listOrderedMap()} ()}</li>\n * </ul>\n *\n- * @throws ParsingException if the token none of the allowed values\n+ * @throws ParsingException if the token is none of the allowed values\n */\n- public static Object parseStoredFieldsValue(XContentParser parser) throws IOException {\n- XContentParser.Token token = parser.currentToken();\n+ public static Object parseFieldsValue(XContentParser parser) throws IOException {\n+ Token token = parser.currentToken();\n Object value = null;\n- if (token == XContentParser.Token.VALUE_STRING) {\n+ if (token == Token.VALUE_STRING) {\n //binary values will be parsed back and returned as base64 strings when reading from json and yaml\n value = parser.text();\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n+ } else if (token == Token.VALUE_NUMBER) {\n value = parser.numberValue();\n- } else if (token == XContentParser.Token.VALUE_BOOLEAN) {\n+ } else if (token == Token.VALUE_BOOLEAN) {\n value = parser.booleanValue();\n- } else if (token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) {\n+ } else if (token == Token.VALUE_EMBEDDED_OBJECT) {\n //binary values will be parsed back and returned as BytesArray when reading from cbor and 
smile\n value = new BytesArray(parser.binaryValue());\n+ } else if (token == Token.VALUE_NULL) {\n+ value = null;\n+ } else if (token == Token.START_OBJECT) {\n+ value = parser.mapOrdered();\n+ } else if (token == Token.START_ARRAY) {\n+ value = parser.listOrderedMap();\n } else {\n throwUnknownToken(token, parser.getTokenLocation());\n }\n@@ -132,7 +141,7 @@ public static Object parseStoredFieldsValue(XContentParser parser) throws IOExce\n */\n public static <T> void parseTypedKeysObject(XContentParser parser, String delimiter, Class<T> objectClass, Consumer<T> consumer)\n throws IOException {\n- if (parser.currentToken() != XContentParser.Token.START_OBJECT && parser.currentToken() != XContentParser.Token.START_ARRAY) {\n+ if (parser.currentToken() != Token.START_OBJECT && parser.currentToken() != Token.START_ARRAY) {\n throwUnknownToken(parser.currentToken(), parser.getTokenLocation());\n }\n String currentFieldName = parser.currentName();",
"filename": "server/src/main/java/org/elasticsearch/common/xcontent/XContentParserUtils.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@\n import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg;\n import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken;\n import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureFieldName;\n-import static org.elasticsearch.common.xcontent.XContentParserUtils.parseStoredFieldsValue;\n+import static org.elasticsearch.common.xcontent.XContentParserUtils.parseFieldsValue;\n import static org.elasticsearch.search.fetch.subphase.highlight.HighlightField.readHighlightField;\n \n /**\n@@ -604,7 +604,7 @@ private static void declareMetaDataFields(ObjectParser<Map<String, Object>, Void\n fieldMap.put(field.getName(), field);\n }, (p, c) -> {\n List<Object> values = new ArrayList<>();\n- values.add(parseStoredFieldsValue(p));\n+ values.add(parseFieldsValue(p));\n return new DocumentField(metadatafield, values);\n }, new ParseField(metadatafield), ValueType.VALUE);\n }\n@@ -649,15 +649,15 @@ private static Explanation parseExplanation(XContentParser parser) throws IOExce\n String description = null;\n List<Explanation> details = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n- ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, () -> parser.getTokenLocation());\n+ ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation);\n String currentFieldName = parser.currentName();\n token = parser.nextToken();\n if (Fields.VALUE.equals(currentFieldName)) {\n value = parser.floatValue();\n } else if (Fields.DESCRIPTION.equals(currentFieldName)) {\n description = parser.textOrNull();\n } else if (Fields.DETAILS.equals(currentFieldName)) {\n- ensureExpectedToken(XContentParser.Token.START_ARRAY, token, () -> parser.getTokenLocation());\n+ ensureExpectedToken(XContentParser.Token.START_ARRAY, token, parser::getTokenLocation);\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n details.add(parseExplanation(parser));\n }",
"filename": "server/src/main/java/org/elasticsearch/search/SearchHit.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.common.CheckedBiConsumer;\n+import org.elasticsearch.common.CheckedConsumer;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -32,12 +33,14 @@\n import java.util.ArrayList;\n import java.util.Base64;\n import java.util.List;\n+import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;\n import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken;\n import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureFieldName;\n import static org.elasticsearch.common.xcontent.XContentParserUtils.parseTypedKeysObject;\n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.instanceOf;\n \n public class XContentParserUtilsTests extends ESTestCase {\n \n@@ -54,39 +57,39 @@ public void testEnsureExpectedToken() throws IOException {\n }\n }\n \n- public void testParseStoredFieldsValueString() throws IOException {\n+ public void testStoredFieldsValueString() throws IOException {\n final String value = randomAlphaOfLengthBetween(0, 10);\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, result));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, result));\n }\n \n- public void testParseStoredFieldsValueInt() throws IOException {\n+ public void testStoredFieldsValueInt() throws IOException {\n final Integer value = randomInt();\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, result));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, result));\n }\n \n- public void testParseStoredFieldsValueLong() throws IOException {\n+ public void testStoredFieldsValueLong() throws IOException {\n final Long value = randomLong();\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, result));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, result));\n }\n \n- public void testParseStoredFieldsValueDouble() throws IOException {\n+ public void testStoredFieldsValueDouble() throws IOException {\n final Double value = randomDouble();\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, ((Number) result).doubleValue(), 0.0d));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, ((Number) result).doubleValue(), 0.0d));\n }\n \n- public void testParseStoredFieldsValueFloat() throws IOException {\n+ public void testStoredFieldsValueFloat() throws IOException {\n final Float value = randomFloat();\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, ((Number) result).floatValue(), 0.0f));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, ((Number) result).floatValue(), 0.0f));\n }\n \n- public void testParseStoredFieldsValueBoolean() throws IOException {\n+ public void testStoredFieldsValueBoolean() throws IOException {\n final Boolean value = randomBoolean();\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> assertEquals(value, result));\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> assertEquals(value, result));\n }\n \n- public void testParseStoredFieldsValueBinary() throws IOException {\n+ public void testStoredFieldsValueBinary() throws 
IOException {\n final byte[] value = randomUnicodeOfLength(scaledRandomIntBetween(10, 1000)).getBytes(\"UTF-8\");\n- assertParseStoredFieldsValue(value, (xcontentType, result) -> {\n+ assertParseFieldsSimpleValue(value, (xcontentType, result) -> {\n if (xcontentType == XContentType.JSON || xcontentType == XContentType.YAML) {\n //binary values will be parsed back and returned as base64 strings when reading from json and yaml\n assertArrayEquals(value, Base64.getDecoder().decode((String) result));\n@@ -97,27 +100,50 @@ public void testParseStoredFieldsValueBinary() throws IOException {\n });\n }\n \n- public void testParseStoredFieldsValueUnknown() throws IOException {\n+ public void testStoredFieldsValueNull() throws IOException {\n+ assertParseFieldsSimpleValue(null, (xcontentType, result) -> assertNull(result));\n+ }\n+\n+ public void testStoredFieldsValueObject() throws IOException {\n+ assertParseFieldsValue((builder) -> builder.startObject().endObject(),\n+ (xcontentType, result) -> assertThat(result, instanceOf(Map.class)));\n+ }\n+\n+ public void testStoredFieldsValueArray() throws IOException {\n+ assertParseFieldsValue((builder) -> builder.startArray().endArray(),\n+ (xcontentType, result) -> assertThat(result, instanceOf(List.class)));\n+ }\n+\n+ public void testParseFieldsValueUnknown() {\n ParsingException e = expectThrows(ParsingException.class, () ->\n- assertParseStoredFieldsValue(null, (x, r) -> fail(\"Should have thrown a parsing exception\")));\n+ assertParseFieldsValue((builder) -> {}, (x, r) -> fail(\"Should have thrown a parsing exception\")));\n assertThat(e.getMessage(), containsString(\"unexpected token\"));\n }\n \n- private void assertParseStoredFieldsValue(final Object value, final CheckedBiConsumer<XContentType, Object, IOException> consumer)\n+ private void assertParseFieldsSimpleValue(final Object value, final CheckedBiConsumer<XContentType, Object, IOException> assertConsumer)\n throws IOException {\n+ assertParseFieldsValue((builder) -> builder.value(value), assertConsumer);\n+ }\n+\n+ private void assertParseFieldsValue(final CheckedConsumer<XContentBuilder, IOException> fieldBuilder,\n+ final CheckedBiConsumer<XContentType, Object, IOException> assertConsumer) throws IOException {\n final XContentType xContentType = randomFrom(XContentType.values());\n try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) {\n final String fieldName = randomAlphaOfLengthBetween(0, 10);\n \n builder.startObject();\n- builder.field(fieldName, value);\n+ builder.startArray(fieldName);\n+ fieldBuilder.accept(builder);\n+ builder.endArray();\n builder.endObject();\n \n try (XContentParser parser = createParser(builder)) {\n ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation);\n ensureFieldName(parser, parser.nextToken(), fieldName);\n+ ensureExpectedToken(XContentParser.Token.START_ARRAY, parser.nextToken(), parser::getTokenLocation);\n assertNotNull(parser.nextToken());\n- consumer.accept(xContentType, XContentParserUtils.parseStoredFieldsValue(parser));\n+ assertConsumer.accept(xContentType, XContentParserUtils.parseFieldsValue(parser));\n+ ensureExpectedToken(XContentParser.Token.END_ARRAY, parser.nextToken(), parser::getTokenLocation);\n ensureExpectedToken(XContentParser.Token.END_OBJECT, parser.nextToken(), parser::getTokenLocation);\n assertNull(parser.nextToken());\n }",
"filename": "server/src/test/java/org/elasticsearch/common/xcontent/XContentParserUtilsTests.java",
"status": "modified"
},
{
"diff": "@@ -56,6 +56,7 @@\n import static org.elasticsearch.test.XContentTestUtils.insertRandomFields;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXContentEquivalent;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n \n@@ -258,7 +259,7 @@ public void testSerializeShardTarget() throws Exception {\n assertThat(results.getAt(1).getShard(), equalTo(target));\n }\n \n- public void testNullSource() throws Exception {\n+ public void testNullSource() {\n SearchHit searchHit = new SearchHit(0, \"_id\", new Text(\"_type\"), null);\n \n assertThat(searchHit.getSourceAsMap(), nullValue());\n@@ -277,6 +278,73 @@ public void testHasSource() {\n assertTrue(searchHit.hasSource());\n }\n \n+ public void testWeirdScriptFields() throws Exception {\n+ {\n+ XContentParser parser = createParser(XContentType.JSON.xContent(), \"{\\n\" +\n+ \" \\\"_index\\\": \\\"twitter\\\",\\n\" +\n+ \" \\\"_type\\\": \\\"tweet\\\",\\n\" +\n+ \" \\\"_id\\\": \\\"1\\\",\\n\" +\n+ \" \\\"_score\\\": 1.0,\\n\" +\n+ \" \\\"fields\\\": {\\n\" +\n+ \" \\\"result\\\": [null]\\n\" +\n+ \" }\\n\" +\n+ \"}\");\n+ SearchHit searchHit = SearchHit.fromXContent(parser);\n+ Map<String, DocumentField> fields = searchHit.getFields();\n+ assertEquals(1, fields.size());\n+ DocumentField result = fields.get(\"result\");\n+ assertNotNull(result);\n+ assertEquals(1, result.getValues().size());\n+ assertNull(result.getValues().get(0));\n+ }\n+ {\n+ XContentParser parser = createParser(XContentType.JSON.xContent(), \"{\\n\" +\n+ \" \\\"_index\\\": \\\"twitter\\\",\\n\" +\n+ \" \\\"_type\\\": \\\"tweet\\\",\\n\" +\n+ \" \\\"_id\\\": \\\"1\\\",\\n\" +\n+ \" \\\"_score\\\": 1.0,\\n\" +\n+ \" \\\"fields\\\": {\\n\" +\n+ \" \\\"result\\\": [{}]\\n\" +\n+ \" }\\n\" +\n+ \"}\");\n+\n+ SearchHit searchHit = SearchHit.fromXContent(parser);\n+ Map<String, DocumentField> fields = searchHit.getFields();\n+ assertEquals(1, fields.size());\n+ DocumentField result = fields.get(\"result\");\n+ assertNotNull(result);\n+ assertEquals(1, result.getValues().size());\n+ Object value = result.getValues().get(0);\n+ assertThat(value, instanceOf(Map.class));\n+ Map<?, ?> map = (Map<?, ?>) value;\n+ assertEquals(0, map.size());\n+ }\n+ {\n+ XContentParser parser = createParser(JsonXContent.jsonXContent, \"{\\n\" +\n+ \" \\\"_index\\\": \\\"twitter\\\",\\n\" +\n+ \" \\\"_type\\\": \\\"tweet\\\",\\n\" +\n+ \" \\\"_id\\\": \\\"1\\\",\\n\" +\n+ \" \\\"_score\\\": 1.0,\\n\" +\n+ \" \\\"fields\\\": {\\n\" +\n+ \" \\\"result\\\": [\\n\" +\n+ \" []\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\");\n+\n+ SearchHit searchHit = SearchHit.fromXContent(parser);\n+ Map<String, DocumentField> fields = searchHit.getFields();\n+ assertEquals(1, fields.size());\n+ DocumentField result = fields.get(\"result\");\n+ assertNotNull(result);\n+ assertEquals(1, result.getValues().size());\n+ Object value = result.getValues().get(0);\n+ assertThat(value, instanceOf(List.class));\n+ List<?> list = (List<?>) value;\n+ assertEquals(0, list.size());\n+ }\n+ }\n+\n private static Explanation createExplanation(int depth) {\n String description = randomAlphaOfLengthBetween(5, 20);\n float value = randomFloat();",
"filename": "server/src/test/java/org/elasticsearch/search/SearchHitTests.java",
"status": "modified"
}
]
} |
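
To see the fix above from the client side, here is a minimal sketch based on the issue's reproduction and the PR's `SearchIT` test. It assumes a 6.x-era high-level client, a node reachable on localhost:9200, and an existing index named `test` with at least one document; it requests a Painless script field that returns an array, which previously failed on the client with a `ParsingException` and now parses into an empty `List`:

```java
import java.util.List;

import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.script.Script;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class ScriptFieldParsingExample {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200)))) {

            // Script field returning an (empty) array; "null" or "new HashMap()"
            // used to trigger the same client-side ParsingException.
            SearchRequest request = new SearchRequest("test").source(
                SearchSourceBuilder.searchSource()
                    .scriptField("result", new Script("new String[]{}")));

            SearchResponse response = client.search(request); // 6.x signature, no RequestOptions
            SearchHit hit = response.getHits().getAt(0);

            // With the fix, the array value comes back as an empty List instead of
            // the response body failing to parse.
            List<Object> values = hit.getFields().get("result").getValues();
            System.out.println("parsed script field value: " + values.get(0));
        }
    }
}
```

Before the fix, the same request returned HTTP 200 but the client failed while parsing the `fields` section, as shown in the issue's stack traces.
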
{
"body": "Today after writing an operation to an engine, we will call `IndexShard#afterWriteOperation` to flush a new commit if needed. The `shouldFlush` condition is purely based on the uncommitted translog size and the translog flush threshold size setting. However this can cause a replica execute an infinite loop of flushing in the following situation.\r\n\r\n1. Primary has a fully baked index commit with its local checkpoint equals to max_seqno\r\n2. Primary sends that fully baked commit, then replays all retained translog operations to the replica\r\n3. No operations are added to Lucence on the replica as seqno of these operations are at most the local checkpoint\r\n4. Once translog operations are replayed, the target calls `IndexShard#afterWriteOperation` to flush. If the total size of the replaying operations exceeds the flush threshold size, this call will `Engine#flush`. However the engine won't flush as its index writer does not have any uncommitted operations. The method `IndexShard#afterWriteOperation` will keep flushing as the condition `shouldFlush` is still true.\r\n\r\nThis issue can be avoided if we always flush if the `shouldFlush` condition is true.",
"comments": [
{
"body": "@dnhatn great find.\r\n\r\nI think that we would actually want the flush to happen in this case so that the translog can be cleaned up. The current approach here says: There's more than 500mb worth of uncommitted data (which is actually all committed), but no uncommitted change to Lucene, so let's ignore this. If we would forcibly flush even though there are no changes to Lucene, this would allow us to free the translog.\r\nIt also shows a broader issue: when the local checkpoint is stuck, there's a possibility for every newly added operation to cause a flush (incl. rolling of translog generations).",
"created_at": "2018-01-24T08:18:25Z"
},
{
"body": "This is a great find. I'm not sure though that this is the right fix. The main problem is that the uncommitted bytes stats is off. All ops in the translog are actually in lucene. The problem is that uncommitted bytes is calculated based on the translog gen file, which is indeed pointed to by lucene. This is amplified by the fact that we now ship more of the translog to create a history on the replica, which is not relevant for the flushing logic. \r\n\r\nI wonder if we should always force flush at the end of recovery as an easy fix. Another option is to flush when lucene doesn't point to the right generation, even if there are no pending ops.\r\n\r\nI want to think about this some more.\r\n\r\n> It also shows a broader issue: when the local checkpoint is stuck, there's a possibility for every newly added operation to cause a flush (incl. rolling of translog generations).\r\n\r\nAgreed. It is a broader issue that has implication for the entire replication group. Last we talked about it we thought of having a fail safe of in line with \"if a specific in sync shard lags behind with more than x ops, fail it\". x can be something large like 10K ops or something. The downside of course is that it will hide bugs.\r\n",
"created_at": "2018-01-24T08:35:14Z"
},
{
"body": "I agreed. I am not sure if this is a right approach either. I was trying to fix this by only sending translog operations after the local checkpoint in peer-recovery. However, this can happen in other cases hence I switched to this approach.",
"created_at": "2018-01-24T13:48:08Z"
},
{
"body": "> I was trying to fix this by only sending translog operations after the local checkpoint in peer-recovery. \r\n\r\nWe don't do this by design - we need to build a translog with history on the replica.",
"created_at": "2018-01-24T14:34:05Z"
},
{
"body": "@bleskes and @ywelsch I've updated the PR according to our discussion. Could you please take another look? Thank you!",
"created_at": "2018-01-24T19:30:24Z"
},
{
"body": "I've addressed your feedbacks. Could you please take another look? Thank you!",
"created_at": "2018-01-25T16:19:15Z"
},
{
"body": "Thanks @ywelsch @bleskes and @simonw for helpful reviews.",
"created_at": "2018-01-25T19:29:20Z"
},
{
"body": "good change and catch @dnhatn quite some insight into the system needed to get there, the dark side of the force is strong down there ;)",
"created_at": "2018-01-26T11:27:53Z"
}
],
"number": 28350,
"title": "Replica recovery could go into an endless flushing loop"
} | {
"body": "If the translog flush threshold is too small (eg. smaller than the\r\ntranslog header), we may repeatedly flush even there is no uncommitted\r\noperation because the shouldFlush condition can still be true after\r\nflushing. This is currently avoided by adding an extra guard against the\r\nuncommitted operations. However, this extra guard makes the shouldFlush\r\ncomplicated. This commit replaces that extra guard by a lower bound for\r\ntranslog flush threshold. We keep the lower bound small for convenience\r\nin testing.\r\n\r\nRelates #28350\r\nRelates #23779",
"number": 28382,
"review_comments": [
{
"body": "can we assert somewhere that this is correct?",
"created_at": "2018-01-31T17:48:11Z"
},
{
"body": "can we add an assertion that the uncommitOps is >0 ?",
"created_at": "2018-02-01T16:20:23Z"
},
{
"body": "I think we need to adapt the comment above?",
"created_at": "2018-02-01T16:21:49Z"
}
],
"title": "Add lower bound for translog flush threshold"
} | {
"commits": [
{
"message": "Add lower bound for translog flush threshold\n\nIf the translog flush threshold is too small (eg. smaller than the\ntranslog header), we may repeatedly flush even there is no uncommitted\noperation because the shouldFlush condition can still be true after\nflushing. This is currently avoided by adding an extra guard against the\nuncommitted operations. However, this extra guard makes the shouldFlush\ncomplicated. This commit replaces that extra guard by a lower bound for\ntranslog flush threshold. We keep the lower bound small for convenience\nin testing.\n\nRelates #28350\nRelates #23606"
},
{
"message": "Merge branch 'master' into min-flush-threshold"
},
{
"message": "Merge branch 'master' into min-flush-threshold"
},
{
"message": "Add assertions"
}
],
"files": [
{
"diff": "@@ -183,8 +183,15 @@ public final class IndexSettings {\n Setting.timeSetting(\"index.refresh_interval\", DEFAULT_REFRESH_INTERVAL, new TimeValue(-1, TimeUnit.MILLISECONDS),\n Property.Dynamic, Property.IndexScope);\n public static final Setting<ByteSizeValue> INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING =\n- Setting.byteSizeSetting(\"index.translog.flush_threshold_size\", new ByteSizeValue(512, ByteSizeUnit.MB), Property.Dynamic,\n- Property.IndexScope);\n+ Setting.byteSizeSetting(\"index.translog.flush_threshold_size\", new ByteSizeValue(512, ByteSizeUnit.MB),\n+ /*\n+ * An empty translog occupies 43 bytes on disk. If the flush threshold is below this, the flush thread\n+ * can get stuck in an infinite loop as the shouldPeriodicallyFlush can still be true after flushing.\n+ * However, small thresholds are useful for testing so we do not add a large lower bound here.\n+ */\n+ new ByteSizeValue(Translog.DEFAULT_HEADER_SIZE_IN_BYTES + 1, ByteSizeUnit.BYTES),\n+ new ByteSizeValue(Long.MAX_VALUE, ByteSizeUnit.BYTES),\n+ Property.Dynamic, Property.IndexScope);\n \n /**\n * Controls how long translog files that are no longer needed for persistence reasons\n@@ -219,9 +226,9 @@ public final class IndexSettings {\n * generation threshold. However, small thresholds are useful for testing so we\n * do not add a large lower bound here.\n */\n- new ByteSizeValue(64, ByteSizeUnit.BYTES),\n+ new ByteSizeValue(Translog.DEFAULT_HEADER_SIZE_IN_BYTES + 1, ByteSizeUnit.BYTES),\n new ByteSizeValue(Long.MAX_VALUE, ByteSizeUnit.BYTES),\n- new Property[]{Property.Dynamic, Property.IndexScope});\n+ Property.Dynamic, Property.IndexScope);\n \n /**\n * Index setting to enable / disable deletes garbage collection.",
"filename": "server/src/main/java/org/elasticsearch/index/IndexSettings.java",
"status": "modified"
},
{
"diff": "@@ -1470,21 +1470,16 @@ public boolean shouldPeriodicallyFlush() {\n if (uncommittedSizeOfCurrentCommit < flushThreshold) {\n return false;\n }\n+ assert translog.uncommittedOperations() > 0 : \"translog required to flush periodically but not contain any uncommitted operation; \"\n+ + \"uncommitted translog size [\" + uncommittedSizeOfCurrentCommit + \"], flush threshold [\" + flushThreshold + \"]\";\n /*\n * We should only flush ony if the shouldFlush condition can become false after flushing.\n * This condition will change if the `uncommittedSize` of the new commit is smaller than\n * the `uncommittedSize` of the current commit. This method is to maintain translog only,\n * thus the IndexWriter#hasUncommittedChanges condition is not considered.\n */\n final long uncommittedSizeOfNewCommit = translog.sizeOfGensAboveSeqNoInBytes(localCheckpointTracker.getCheckpoint() + 1);\n- /*\n- * If flushThreshold is too small, we may repeatedly flush even there is no uncommitted operation\n- * as #sizeOfGensAboveSeqNoInByte and #uncommittedSizeInBytes can return different values.\n- * An empty translog file has non-zero `uncommittedSize` (the translog header), and method #sizeOfGensAboveSeqNoInBytes can\n- * return 0 now(no translog gen contains ops above local checkpoint) but method #uncommittedSizeInBytes will return an actual\n- * non-zero value after rolling a new translog generation. This can be avoided by checking the actual uncommitted operations.\n- */\n- return uncommittedSizeOfNewCommit < uncommittedSizeOfCurrentCommit && translog.uncommittedOperations() > 0;\n+ return uncommittedSizeOfNewCommit < uncommittedSizeOfCurrentCommit;\n }\n \n @Override",
"filename": "server/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -108,6 +108,7 @@ public class Translog extends AbstractIndexShardComponent implements IndexShardC\n public static final String CHECKPOINT_FILE_NAME = \"translog\" + CHECKPOINT_SUFFIX;\n \n static final Pattern PARSE_STRICT_ID_PATTERN = Pattern.compile(\"^\" + TRANSLOG_FILE_PREFIX + \"(\\\\d+)(\\\\.tlog)$\");\n+ public static final int DEFAULT_HEADER_SIZE_IN_BYTES = TranslogWriter.getHeaderLength(UUIDs.randomBase64UUID());\n \n // the list of translog readers is guaranteed to be in order of translog generation\n private final List<TranslogReader> readers = new ArrayList<>();\n@@ -451,7 +452,10 @@ public long sizeOfGensAboveSeqNoInBytes(long minSeqNo) {\n * @throws IOException if creating the translog failed\n */\n TranslogWriter createWriter(long fileGeneration) throws IOException {\n- return createWriter(fileGeneration, getMinFileGeneration(), globalCheckpointSupplier.getAsLong());\n+ final TranslogWriter writer = createWriter(fileGeneration, getMinFileGeneration(), globalCheckpointSupplier.getAsLong());\n+ assert writer.sizeInBytes() == DEFAULT_HEADER_SIZE_IN_BYTES : \"Mismatch translog header size; \" +\n+ \"empty translog size [\" + writer.sizeInBytes() + \", header size [\" + DEFAULT_HEADER_SIZE_IN_BYTES + \"]\";\n+ return writer;\n }\n \n /**",
"filename": "server/src/main/java/org/elasticsearch/index/translog/Translog.java",
"status": "modified"
},
{
"diff": "@@ -350,7 +350,7 @@ public void testMaybeFlush() throws Exception {\n });\n assertEquals(0, translog.uncommittedOperations());\n translog.sync();\n- long size = translog.uncommittedSizeInBytes();\n+ long size = Math.max(translog.uncommittedSizeInBytes(), Translog.DEFAULT_HEADER_SIZE_IN_BYTES + 1);\n logger.info(\"--> current translog size: [{}] num_ops [{}] generation [{}]\", translog.uncommittedSizeInBytes(),\n translog.uncommittedOperations(), translog.getGeneration());\n client().admin().indices().prepareUpdateSettings(\"test\").setSettings(Settings.builder().put(",
"filename": "server/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java",
"status": "modified"
}
]
} |
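
To make the flushing loop described in the PR above easier to follow, here is a minimal standalone sketch, not Elasticsearch source: it assumes `shouldPeriodicallyFlush` is reduced to its first gate, comparing the uncommitted translog size against the configured threshold, and it takes the 43-byte size of an empty translog from the comment added in the IndexSettings diff. It shows why a threshold at or below that size keeps the flush condition true after every flush, and why bounding the threshold at header size + 1 breaks the loop.

```java
// Minimal sketch (not Elasticsearch code) of the endless-flush condition fixed by the PR above.
public class FlushThresholdSketch {

    // Size of an empty translog on disk, per the comment in the IndexSettings diff.
    static final long EMPTY_TRANSLOG_SIZE_IN_BYTES = 43;

    // Simplified stand-in for the engine's periodic-flush check (assumption: only the size gate).
    static boolean shouldPeriodicallyFlush(long uncommittedSizeInBytes, long flushThresholdInBytes) {
        return uncommittedSizeInBytes >= flushThresholdInBytes;
    }

    public static void main(String[] args) {
        long tooSmallThreshold = 10;                              // below the header size
        long boundedThreshold = EMPTY_TRANSLOG_SIZE_IN_BYTES + 1; // the lower bound added by the PR

        // Right after a flush the translog is empty but still occupies the header bytes on disk,
        // so a threshold at or below that size makes the condition true again immediately.
        System.out.println(shouldPeriodicallyFlush(EMPTY_TRANSLOG_SIZE_IN_BYTES, tooSmallThreshold)); // true  -> endless flushing
        System.out.println(shouldPeriodicallyFlush(EMPTY_TRANSLOG_SIZE_IN_BYTES, boundedThreshold));  // false -> loop broken
    }
}
```

The actual change enforces this bound declaratively through the minimum value on `index.translog.flush_threshold_size`, rather than guarding the flush condition itself.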
{
"body": "**Bug Report**\r\n\r\n**Elasticsearch version:** 6.1.0 (it appears as though the behavior is the same in 6.0.1, too)\r\n\r\n**Plugins installed**: None that don't come out of the box.\r\n\r\n**JVM version:** 1.8.0_131\r\n\r\n**OS version:** macOS Sierra\r\n\r\n**Description of the problem including expected versus actual behavior**\r\n\r\nPercolator queries are not matching in instances that I expect them to. Playing around with the debugger, if I force it to skip the extract fields step for a query before it saves (which results in query.extraction_result = failed), subsequent attempts to use that percolated query work as expected. So, it seems possible that this is related to something in the path where queries are ruled out before being fully parsed/executed.\r\n\r\nI will include a query and a document below - when I index the document and then run the provided query against it, the document is found. When I add the same query to a percolator document and query the user against it, it is not found (unless I use a debugger and force it to not extract terms).\r\n\r\nI've had a tough time making sense of some of the code related to the extracted fields/terms, but I'm still trying to see if I can figure out where it is going wrong. Any pointers would be appreciated.\r\n\r\n**Steps to reproduce**\r\n\r\nThe first 3 steps are just meant to show that the percolator query should match by querying the document from the other direction first.\r\n\r\n1. Create a user index\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPUT http://localhost:9200/targeting-users -d '{\r\n \"mappings\": {\r\n \"user\": {\r\n \"properties\": {\r\n \"address\": {\r\n \"properties\": {\r\n \"city\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"country\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"countryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"postalCode\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivision\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivisionId\": {\r\n \"type\": \"long\"\r\n }\r\n }\r\n },\r\n \"affiliations\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"expirationDate\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"location\": {\r\n \"properties\": {\r\n \"blockingBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"city\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"country\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"countryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"description\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"locationId\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"parentId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"postalCode\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivision\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivisionId\": {\r\n \"type\": \"long\"\r\n },\r\n \"targetingLabels\": {\r\n \"type\": \"long\"\r\n },\r\n \"teamId\": {\r\n \"type\": \"long\"\r\n },\r\n \"unverifiedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"verifiedBrands\": {\r\n \"type\": \"long\"\r\n }\r\n }\r\n },\r\n \"org\": {\r\n \"properties\": {\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"indexId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgName\": 
{\r\n \"type\": \"keyword\"\r\n },\r\n \"type\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"status\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"team\": {\r\n \"properties\": {\r\n \"approvedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"blockedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"blockingBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"industries\": {\r\n \"type\": \"long\"\r\n },\r\n \"manualBrandApproval\": {\r\n \"type\": \"boolean\"\r\n },\r\n \"memberSegments\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"parentId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"positions\": {\r\n \"type\": \"long\"\r\n },\r\n \"privateToImplicitTargeting\": {\r\n \"type\": \"boolean\"\r\n },\r\n \"teamId\": {\r\n \"type\": \"long\"\r\n },\r\n \"teamName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"type\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"age\": {\r\n \"type\": \"integer\"\r\n },\r\n \"attributes\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"name\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"values\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"expertRanks\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"categoryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"expirationDate\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"rank\": {\r\n \"type\": \"integer\"\r\n }\r\n }\r\n },\r\n \"firstName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"gender\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"lastLogin\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"lastName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"recommendedTo\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"rating\": {\r\n \"type\": \"double\"\r\n }\r\n }\r\n },\r\n \"userId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"userUuid\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"query\": {\r\n \"type\": \"percolator\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n2. 
Index the user\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPUT http://localhost:9200/targeting-users/user/1 -d '{\r\n \"userId\": 1,\r\n \"userUuid\": \"XXXXXXXXXXXXXXXXX\",\r\n \"firstName\": \"Test\",\r\n \"lastName\": \"User\",\r\n \"age\": 20,\r\n \"gender\": \"MALE\",\r\n \"address\": {\r\n \"countryId\": 232,\r\n \"country\": \"US\",\r\n \"subdivisionId\": 4077,\r\n \"subdivision\": \"UT\",\r\n \"postalCode\": null,\r\n \"city\": null\r\n },\r\n \"categories\": [\r\n 46,\r\n 185\r\n ],\r\n \"expertRanks\": [],\r\n \"recommendedTo\": [],\r\n \"affiliations\": [\r\n {\r\n \"org\": {\r\n \"type\": \"R\",\r\n \"orgId\": 1,\r\n \"orgName\": \"Test Company\",\r\n \"categories\": [\r\n 49,\r\n 177\r\n ],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"team\": {\r\n \"type\": \"RETAIL\",\r\n \"teamId\": 5,\r\n \"teamName\": \"Test Team\",\r\n \"orgId\": 1,\r\n \"privateToImplicitTargeting\": false,\r\n \"manualBrandApproval\": false,\r\n \"approvedBrands\": [],\r\n \"blockedBrands\": [],\r\n \"blockingBrands\": [],\r\n \"positions\": [],\r\n \"categories\": [],\r\n \"memberSegments\": [],\r\n \"industries\": [],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"location\": {\r\n \"locationId\": 10,\r\n \"teamId\": 5,\r\n \"orgId\": 1,\r\n \"description\": \"Test Location\",\r\n \"countryId\": 232,\r\n \"country\": \"US\",\r\n \"subdivisionId\": 4077,\r\n \"subdivision\": \"UT\",\r\n \"postalCode\": \"84111\",\r\n \"city\": \"Salt Lake City\",\r\n \"blockingBrands\": [\r\n 2\r\n ],\r\n \"verifiedBrands\": [\r\n 7\r\n ],\r\n \"unverifiedBrands\": [\r\n 55,\r\n 80\r\n ],\r\n \"targetingLabels\": [],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"status\": \"ACTIVE\",\r\n \"expirationDate\": null\r\n }\r\n ],\r\n \"attributes\": [],\r\n \"lastLogin\": null,\r\n \"indexTime\": \"2018-01-04T06:00:26.180\"\r\n}'\r\n```\r\n\r\n3. 
Run the following query against the user index\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPOST http://localhost:9200/targeting-users/_search -d '{\r\n\"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"exists\": {\r\n \"field\": \"affiliations.expirationDate\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"affiliations.expirationDate\": {\r\n \"from\": \"now\",\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.status\": [\r\n \"ACTIVE\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.countryId\": [\r\n 232\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.verifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.unverifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"must_not\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"affiliations.team.manualBrandApproval\": {\r\n \"value\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.approvedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 
1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockedBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.location.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n}'\r\n```\r\n\r\n**Result:** 1 result\r\n**Expected Results:** 1 result\r\n\r\n4. Now, create the percolator index\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPUT http://localhost:9200/tag-queries -d '{\r\n \"mappings\": {\r\n \"tag\": {\r\n \"properties\": {\r\n \"address\": {\r\n \"properties\": {\r\n \"city\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"country\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"countryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"postalCode\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivision\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivisionId\": {\r\n \"type\": \"long\"\r\n }\r\n }\r\n },\r\n \"affiliations\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"expirationDate\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"location\": {\r\n \"properties\": {\r\n \"blockingBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"city\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"country\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"countryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"description\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"locationId\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"parentId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"postalCode\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivision\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"subdivisionId\": {\r\n \"type\": \"long\"\r\n },\r\n \"targetingLabels\": {\r\n \"type\": \"long\"\r\n },\r\n \"teamId\": {\r\n \"type\": \"long\"\r\n },\r\n \"unverifiedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"verifiedBrands\": {\r\n \"type\": \"long\"\r\n }\r\n }\r\n },\r\n \"org\": {\r\n \"properties\": {\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"indexId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"type\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"status\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"team\": {\r\n \"properties\": {\r\n 
\"approvedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"blockedBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"blockingBrands\": {\r\n \"type\": \"long\"\r\n },\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"industries\": {\r\n \"type\": \"long\"\r\n },\r\n \"manualBrandApproval\": {\r\n \"type\": \"boolean\"\r\n },\r\n \"memberSegments\": {\r\n \"type\": \"long\"\r\n },\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"parentId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"positions\": {\r\n \"type\": \"long\"\r\n },\r\n \"privateToImplicitTargeting\": {\r\n \"type\": \"boolean\"\r\n },\r\n \"teamId\": {\r\n \"type\": \"long\"\r\n },\r\n \"teamName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"type\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"age\": {\r\n \"type\": \"integer\"\r\n },\r\n \"attributes\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"name\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"values\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"categories\": {\r\n \"type\": \"long\"\r\n },\r\n \"expertRanks\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"categoryId\": {\r\n \"type\": \"long\"\r\n },\r\n \"expirationDate\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"rank\": {\r\n \"type\": \"integer\"\r\n }\r\n }\r\n },\r\n \"firstName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"gender\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"indexTime\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"lastLogin\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\\u0027T\\u0027HH:mm:ss.SSS\"\r\n },\r\n \"lastName\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"recommendedTo\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"orgId\": {\r\n \"type\": \"long\"\r\n },\r\n \"rating\": {\r\n \"type\": \"double\"\r\n }\r\n }\r\n },\r\n \"userId\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"userUuid\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"query\": {\r\n \"type\": \"percolator\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n5. 
Create a percolator record with the query from step 3\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPUT http://localhost:9200/tag-queries/tag/1 -d '{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"exists\": {\r\n \"field\": \"affiliations.expirationDate\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"affiliations.expirationDate\": {\r\n \"from\": \"now\",\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.status\": [\r\n \"ACTIVE\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.countryId\": [\r\n 232\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.verifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.unverifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"must_not\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"affiliations.team.manualBrandApproval\": {\r\n \"value\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.approvedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 
1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockedBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.location.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n}'\r\n```\r\n\r\n6. Use the user from step 2 to do a percolator query\r\n\r\n```\r\ncurl -H \"Content-Type: application/json\" -XPOST http://localhost:9200/tag-queries/_search -d '{\r\n \"query\": {\r\n \"percolate\": {\r\n \"field\": \"query\",\r\n \"document\": {\r\n \"userId\": 1,\r\n \"userUuid\": \"XXXXXXXXXXXXXXXXX\",\r\n \"firstName\": \"Test\",\r\n \"lastName\": \"User\",\r\n \"age\": 20,\r\n \"gender\": \"MALE\",\r\n \"address\": {\r\n \"countryId\": 232,\r\n \"country\": \"US\",\r\n \"subdivisionId\": 4077,\r\n \"subdivision\": \"UT\",\r\n \"postalCode\": null,\r\n \"city\": null\r\n },\r\n \"categories\": [\r\n 46,\r\n 185\r\n ],\r\n \"expertRanks\": [],\r\n \"recommendedTo\": [],\r\n \"affiliations\": [\r\n {\r\n \"org\": {\r\n \"type\": \"R\",\r\n \"orgId\": 1,\r\n \"orgName\": \"Test Company\",\r\n \"categories\": [\r\n 49,\r\n 177\r\n ],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"team\": {\r\n \"type\": \"RETAIL\",\r\n \"teamId\": 5,\r\n \"teamName\": \"Test Team\",\r\n \"orgId\": 1,\r\n \"privateToImplicitTargeting\": false,\r\n \"manualBrandApproval\": false,\r\n \"approvedBrands\": [],\r\n \"blockedBrands\": [],\r\n \"blockingBrands\": [],\r\n \"positions\": [],\r\n \"categories\": [],\r\n \"memberSegments\": [],\r\n \"industries\": [],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"location\": {\r\n \"locationId\": 10,\r\n \"teamId\": 5,\r\n \"orgId\": 1,\r\n \"description\": \"Test Location\",\r\n \"countryId\": 232,\r\n \"country\": \"US\",\r\n \"subdivisionId\": 4077,\r\n \"subdivision\": \"UT\",\r\n \"postalCode\": \"84111\",\r\n \"city\": \"Salt Lake City\",\r\n \"blockingBrands\": [\r\n 2\r\n ],\r\n \"verifiedBrands\": [\r\n 7\r\n ],\r\n \"unverifiedBrands\": [\r\n 55,\r\n 80\r\n ],\r\n \"targetingLabels\": [],\r\n \"indexTime\": \"2018-01-04T06:00:26.212\"\r\n },\r\n \"status\": \"ACTIVE\",\r\n \"expirationDate\": null\r\n }\r\n ],\r\n \"attributes\": [],\r\n \"lastLogin\": null,\r\n \"indexTime\": \"2018-01-04T06:00:26.180\"\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n**Result:** 0 results\r\n**Expected Result:** 1 result",
"comments": [
{
"body": "Hey @suresk, I'm looking at the last 3 snippets here and in step 5 you're trying to store a percolator query with a `percolate` query. I'm not sure what you expect in step 6, but I don't expect that to match with the percolator query you stored in step 5. What would you expect in this case? A `percolate` query is used to matched stored percolator queries and not meant to be used inside a percolator query.",
"created_at": "2018-01-22T08:05:52Z"
},
{
"body": "Hi @martijnvg - Sorry! It was just a copy/paste error. I had the right description for that step, but grabbed the wrong JSON block. I've updated my issue - does it make more sense now?",
"created_at": "2018-01-22T08:51:38Z"
},
{
"body": "@suresk I see what you mean, but the percolator query in step 5 hasn't been updated yet. Just to be sure that percolator should look like:\r\n\r\n```\r\nPUT /tag-queries/tag/1\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"exists\": {\r\n \"field\": \"affiliations.expirationDate\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"affiliations.expirationDate\": {\r\n \"from\": \"now\",\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.status\": [\r\n \"ACTIVE\"\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.type\": [\r\n \"RETAIL\",\r\n \"MANUFACTURER\",\r\n \"PARTNER\"\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.countryId\": [\r\n 232\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.verifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"terms\": {\r\n \"affiliations.location.unverifiedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"must_not\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"affiliations.team.manualBrandApproval\": {\r\n \"value\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"terms\": {\r\n \"affiliations.team.approvedBrands\": [\r\n 55\r\n ],\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n 
\"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockedBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.team.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"term\": {\r\n \"affiliations.location.blockingBrands\": {\r\n \"value\": 55,\r\n \"boost\": 1\r\n }\r\n }\r\n },\r\n \"path\": \"affiliations\",\r\n \"ignore_unmapped\": false,\r\n \"score_mode\": \"none\",\r\n \"boost\": 1\r\n }\r\n }\r\n ],\r\n \"disable_coord\": false,\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n}\r\n```\r\n\r\nIf that is the case then I can reproduce what you mean.",
"created_at": "2018-01-22T19:13:03Z"
},
{
"body": "Yes, that is what I mean.. I'm not sure why it didn't update before, but it looks like it is updated now.",
"created_at": "2018-01-22T19:18:09Z"
},
{
"body": "I investigated this bug and this caused by the fact that minimum should matches are extracted incorrectly from nested queries. I will work on a fix. @suresk thanks for reporting!",
"created_at": "2018-01-23T10:36:23Z"
},
{
"body": "Hey guys, I'm facing a similar problem with nested queries and percolator.\r\n\r\nJust to be clear, this problem happens with any nested query, right? Because I have a simple percolate nested query that when executed against the index, it finds the document, but when used with the percolate query, no query is found for the document.\r\n\r\nI'll leave my scenario here, so you guys can confirm it.\r\n\r\nElasticsearch version: 5.5.0\r\nDocker Image: docker.elastic.co/elasticsearch/elasticsearch:5.5.0\r\n\r\nCreating the index\r\n```\r\nPUT my_index\r\n{\r\n \"mappings\": {\r\n \"percolator\": {\r\n \"properties\": {\r\n \"query\": {\r\n \"type\": \"percolator\"\r\n }\r\n }\r\n },\r\n \"my_type\": {\r\n \"properties\": {\r\n \"tags\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"tag\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nAdding a new document\r\n```\r\nPOST my_index/my_type/1\r\n{\r\n \"tags\": [\r\n {\"tag\": \"first_tag\"},\r\n {\"tag\": \"second_tag\"}\r\n ]\r\n}\r\n```\r\n\r\nTesting the query we're gonna save in the percolator type\r\n```\r\nGET my_index/my_type/_search\r\n{\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"tags\",\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"match\": {\r\n \"tags.tag\": \"first_tag\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nThe query works as expected.\r\n\r\nAdding the query to the percolator type\r\n```\r\nPUT my_index/percolator/1?refresh\r\n{\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"tags\",\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"match\": {\r\n \"tags.tag\": \"first_tag\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nPercolating the document\r\n```\r\nGET my_index/_search\r\n{\r\n \"query\": {\r\n \"percolate\": {\r\n \"field\": \"query\",\r\n \"document_type\": \"percolator\",\r\n \"document\": {\r\n \"tags\": [\r\n {\r\n \"tag\": \"first_tag\"\r\n },\r\n {\r\n \"tag\": \"second_tag\"\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n```\r\nHere nothing is found!\r\n\r\nI really hope to be doing something wrong, because for me this is a very important feature and my app depends on it.",
"created_at": "2018-01-26T13:45:03Z"
},
{
"body": "Hey guys!\r\n\r\nAfter running some more tests I realized what I was doing wrong. I confused the parameter \"document_type\" in the percolate query, thinking that I should inform the type which holds the queries, when in fact I should inform the type which holds the documents.\r\n\r\nSo please disregard my previous comment.\r\n\r\nThanks!",
"created_at": "2018-01-29T14:03:41Z"
},
{
"body": "@leonardocaldas Thanks for testing some more. You are right about the fact that this bug can occur with other queries too. Basically if a percolator query contains duplicated clauses this error can occur. The fix in #28353 will prevent this bug from happening.",
"created_at": "2018-01-30T08:19:29Z"
},
{
"body": "I've just noticed this bug with `must` condition. I'm executing the following query:\r\n```\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"term\": {\r\n \"stories.storyId\": 2\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"path\": \"stories\"\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"term\": {\r\n \"stories.storyId\": 1\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"stories.storyName\": \"story1\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"path\": \"stories\"\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"range\": {\r\n \"stories.executedAt\": {\r\n \"gte\": \"now-1w\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"path\": \"stories\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nI get back the document: \r\n```\r\n\r\n{\r\n \"took\": 3,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 4.568616,\r\n \"hits\": [\r\n {\r\n \"_index\": \"user-180306\",\r\n \"_type\": \"user\",\r\n \"_id\": \"kpF7-mEBH3oCPpbkErLB\",\r\n \"_score\": 4.568616,\r\n \"_source\": {\r\n \"stories\": [\r\n {\r\n \"storyId\": 1,\r\n \"storyName\": \"story1\",\r\n \"executedAt\": \"2018-03-06T07:45:00.000+01:00\"\r\n },\r\n {\r\n \"storyId\": 2,\r\n \"storyName\": \"story2\",\r\n \"executedAt\": \"2018-03-06T07:55:00.000+01:00\"\r\n }\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nBut when i percolate that search, and run that document from result against the percolator, I don't get the percolated query in result list (i get other valid queries).",
"created_at": "2018-03-06T09:37:02Z"
}
],
"number": 28315,
"title": "Percolator query not returning document that the query matches"
} | {
"body": "If a percolator query contains duplicate query clauses somewhere in the query tree then\r\nwhen these clauses are extracted then they should not affect the msm.\r\n\r\nThis can lead a percolator query that should be a valid match not become a candidate match,\r\nbecause at query time, the msm that is being used by the CoveringQuery would never match with\r\nthe msm used at index time.\r\n\r\nPR for #28315",
"number": 28353,
"review_comments": [],
"title": "Do not take duplicate query extractions into account for minimum_should_match attribute"
} | {
"commits": [
{
"message": "percolator: Do not take duplicate query extractions into account for minimum_should_match attribute\n\nIf a percolator query contains duplicate query clauses somewhere in the query tree then\nwhen these clauses are extracted then they should not affect the msm.\n\nThis can lead a percolator query that should be a valid match not become a candidate match,\nbecause at query time, the msm that is being used by the CoveringQuery would never match with\nthe msm used at index time.\n\nCloses #28315"
}
],
"files": [
{
"diff": "@@ -380,7 +380,21 @@ private static BiFunction<Query, Version, Result> booleanQuery() {\n msm += 1;\n }\n } else {\n- msm += result.minimumShouldMatch;\n+ // In case that there are duplicate query extractions we need to be careful with incrementing msm,\n+ // because that could lead to valid matches not becoming candidate matches:\n+ // query: (field:val1 AND field:val2) AND (field:val2 AND field:val3)\n+ // doc: field: val1 val2 val3\n+ // So lets be protective and decrease the msm:\n+ int resultMsm = result.minimumShouldMatch;\n+ for (QueryExtraction queryExtraction : result.extractions) {\n+ if (extractions.contains(queryExtraction)) {\n+ // To protect against negative msm:\n+ // (sub results could consist out of disjunction and conjunction and\n+ // then we do not know which extraction contributed to msm)\n+ resultMsm = Math.max(0, resultMsm - 1);\n+ }\n+ }\n+ msm += resultMsm;\n }\n verified &= result.verified;\n matchAllDocs &= result.matchAllDocs;\n@@ -519,10 +533,16 @@ private static Result handleDisjunction(List<Query> disjunctions, int requiredSh\n if (subResult.matchAllDocs) {\n numMatchAllClauses++;\n }\n+ int resultMsm = subResult.minimumShouldMatch;\n+ for (QueryExtraction extraction : subResult.extractions) {\n+ if (terms.contains(extraction)) {\n+ resultMsm = Math.max(1, resultMsm - 1);\n+ }\n+ }\n+ msmPerClause[i] = resultMsm;\n terms.addAll(subResult.extractions);\n \n QueryExtraction[] t = subResult.extractions.toArray(new QueryExtraction[1]);\n- msmPerClause[i] = subResult.minimumShouldMatch;\n if (subResult.extractions.size() == 1 && t[0].range != null) {\n rangeFieldNames[i] = t[0].range.fieldName;\n }",
"filename": "modules/percolator/src/main/java/org/elasticsearch/percolator/QueryAnalyzer.java",
"status": "modified"
},
{
"diff": "@@ -72,6 +72,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.common.CheckedFunction;\n import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -622,6 +623,55 @@ public void testPercolateSmallAndLargeDocument() throws Exception {\n }\n }\n \n+ public void testDuplicatedClauses() throws Exception {\n+ List<ParseContext.Document> docs = new ArrayList<>();\n+\n+ BooleanQuery.Builder builder = new BooleanQuery.Builder();\n+ BooleanQuery.Builder builder1 = new BooleanQuery.Builder();\n+ builder1.add(new TermQuery(new Term(\"field\", \"value1\")), BooleanClause.Occur.MUST);\n+ builder1.add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST);\n+ builder.add(builder1.build(), BooleanClause.Occur.MUST);\n+ BooleanQuery.Builder builder2 = new BooleanQuery.Builder();\n+ builder2.add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST);\n+ builder2.add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST);\n+ builder.add(builder2.build(), BooleanClause.Occur.MUST);\n+ addQuery(builder.build(), docs);\n+\n+ builder = new BooleanQuery.Builder()\n+ .setMinimumNumberShouldMatch(2);\n+ builder1 = new BooleanQuery.Builder();\n+ builder1.add(new TermQuery(new Term(\"field\", \"value1\")), BooleanClause.Occur.MUST);\n+ builder1.add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST);\n+ builder.add(builder1.build(), BooleanClause.Occur.SHOULD);\n+ builder2 = new BooleanQuery.Builder();\n+ builder2.add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST);\n+ builder2.add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST);\n+ builder.add(builder2.build(), BooleanClause.Occur.SHOULD);\n+ BooleanQuery.Builder builder3 = new BooleanQuery.Builder();\n+ builder3.add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST);\n+ builder3.add(new TermQuery(new Term(\"field\", \"value4\")), BooleanClause.Occur.MUST);\n+ builder.add(builder3.build(), BooleanClause.Occur.SHOULD);\n+ addQuery(builder.build(), docs);\n+\n+ indexWriter.addDocuments(docs);\n+ indexWriter.close();\n+ directoryReader = DirectoryReader.open(directory);\n+ IndexSearcher shardSearcher = newSearcher(directoryReader);\n+ shardSearcher.setQueryCache(null);\n+\n+ Version v = Version.CURRENT;\n+ List<BytesReference> sources = Collections.singletonList(new BytesArray(\"{}\"));\n+\n+ MemoryIndex memoryIndex = new MemoryIndex();\n+ memoryIndex.addField(\"field\", \"value1 value2 value3\", new WhitespaceAnalyzer());\n+ IndexSearcher percolateSearcher = memoryIndex.createSearcher();\n+ PercolateQuery query = (PercolateQuery) fieldType.percolateQuery(\"_name\", queryStore, sources, percolateSearcher, v);\n+ TopDocs topDocs = shardSearcher.search(query, 10, new Sort(SortField.FIELD_DOC), true, true);\n+ assertEquals(2L, topDocs.totalHits);\n+ assertEquals(0, topDocs.scoreDocs[0].doc);\n+ assertEquals(1, topDocs.scoreDocs[1].doc);\n+ }\n+\n private void duelRun(PercolateQuery.QueryStore queryStore, MemoryIndex memoryIndex, IndexSearcher shardSearcher) throws IOException {\n boolean requireScore = randomBoolean();\n IndexSearcher percolateSearcher = memoryIndex.createSearcher();",
"filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -100,8 +100,10 @@\n import java.util.List;\n import java.util.Map;\n import java.util.function.Function;\n+import java.util.stream.Collectors;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchPhraseQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n@@ -850,6 +852,79 @@ public void testEncodeRange() {\n }\n }\n \n+ public void testDuplicatedClauses() throws Exception {\n+ addQueryFieldMappings();\n+\n+ QueryBuilder qb = boolQuery()\n+ .must(boolQuery().must(termQuery(\"field\", \"value1\")).must(termQuery(\"field\", \"value2\")))\n+ .must(boolQuery().must(termQuery(\"field\", \"value2\")).must(termQuery(\"field\", \"value3\")));\n+ ParsedDocument doc = mapperService.documentMapper(\"doc\").parse(SourceToParse.source(\"test\", \"doc\", \"1\",\n+ XContentFactory.jsonBuilder().startObject()\n+ .field(fieldName, qb)\n+ .endObject().bytes(),\n+ XContentType.JSON));\n+\n+ List<String> values = Arrays.stream(doc.rootDoc().getFields(fieldType.queryTermsField.name()))\n+ .map(f -> f.binaryValue().utf8ToString())\n+ .sorted()\n+ .collect(Collectors.toList());\n+ assertThat(values.size(), equalTo(3));\n+ assertThat(values.get(0), equalTo(\"field\\0value1\"));\n+ assertThat(values.get(1), equalTo(\"field\\0value2\"));\n+ assertThat(values.get(2), equalTo(\"field\\0value3\"));\n+ int msm = doc.rootDoc().getFields(fieldType.minimumShouldMatchField.name())[0].numericValue().intValue();\n+ assertThat(msm, equalTo(3));\n+\n+ qb = boolQuery()\n+ .must(boolQuery().must(termQuery(\"field\", \"value1\")).must(termQuery(\"field\", \"value2\")))\n+ .must(boolQuery().must(termQuery(\"field\", \"value2\")).must(termQuery(\"field\", \"value3\")))\n+ .must(boolQuery().must(termQuery(\"field\", \"value3\")).must(termQuery(\"field\", \"value4\")))\n+ .must(boolQuery().should(termQuery(\"field\", \"value4\")).should(termQuery(\"field\", \"value5\")));\n+ doc = mapperService.documentMapper(\"doc\").parse(SourceToParse.source(\"test\", \"doc\", \"1\",\n+ XContentFactory.jsonBuilder().startObject()\n+ .field(fieldName, qb)\n+ .endObject().bytes(),\n+ XContentType.JSON));\n+\n+ values = Arrays.stream(doc.rootDoc().getFields(fieldType.queryTermsField.name()))\n+ .map(f -> f.binaryValue().utf8ToString())\n+ .sorted()\n+ .collect(Collectors.toList());\n+ assertThat(values.size(), equalTo(5));\n+ assertThat(values.get(0), equalTo(\"field\\0value1\"));\n+ assertThat(values.get(1), equalTo(\"field\\0value2\"));\n+ assertThat(values.get(2), equalTo(\"field\\0value3\"));\n+ assertThat(values.get(3), equalTo(\"field\\0value4\"));\n+ assertThat(values.get(4), equalTo(\"field\\0value5\"));\n+ msm = doc.rootDoc().getFields(fieldType.minimumShouldMatchField.name())[0].numericValue().intValue();\n+ assertThat(msm, equalTo(4));\n+\n+ qb = boolQuery()\n+ .minimumShouldMatch(3)\n+ .should(boolQuery().should(termQuery(\"field\", \"value1\")).should(termQuery(\"field\", \"value2\")))\n+ .should(boolQuery().should(termQuery(\"field\", \"value2\")).should(termQuery(\"field\", \"value3\")))\n+ .should(boolQuery().should(termQuery(\"field\", \"value3\")).should(termQuery(\"field\", \"value4\")))\n+ .should(boolQuery().should(termQuery(\"field\", \"value4\")).should(termQuery(\"field\", \"value5\")));\n+ doc = 
mapperService.documentMapper(\"doc\").parse(SourceToParse.source(\"test\", \"doc\", \"1\",\n+ XContentFactory.jsonBuilder().startObject()\n+ .field(fieldName, qb)\n+ .endObject().bytes(),\n+ XContentType.JSON));\n+\n+ values = Arrays.stream(doc.rootDoc().getFields(fieldType.queryTermsField.name()))\n+ .map(f -> f.binaryValue().utf8ToString())\n+ .sorted()\n+ .collect(Collectors.toList());\n+ assertThat(values.size(), equalTo(5));\n+ assertThat(values.get(0), equalTo(\"field\\0value1\"));\n+ assertThat(values.get(1), equalTo(\"field\\0value2\"));\n+ assertThat(values.get(2), equalTo(\"field\\0value3\"));\n+ assertThat(values.get(3), equalTo(\"field\\0value4\"));\n+ assertThat(values.get(4), equalTo(\"field\\0value5\"));\n+ msm = doc.rootDoc().getFields(fieldType.minimumShouldMatchField.name())[0].numericValue().intValue();\n+ assertThat(msm, equalTo(3));\n+ }\n+\n private static byte[] subByteArray(byte[] source, int offset, int length) {\n return Arrays.copyOfRange(source, offset, offset + length);\n }",
"filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -1108,6 +1108,66 @@ public void testPointRangeQuerySelectRanges() {\n assertEquals(\"_field1\", new ArrayList<>(result.extractions).get(1).range.fieldName);\n }\n \n+ public void testExtractQueryMetadata_duplicatedClauses() {\n+ BooleanQuery.Builder builder = new BooleanQuery.Builder();\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value1\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.MUST\n+ );\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.MUST\n+ );\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value4\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.MUST\n+ );\n+ Result result = analyze(builder.build(), Version.CURRENT);\n+ assertThat(result.verified, is(true));\n+ assertThat(result.matchAllDocs, is(false));\n+ assertThat(result.minimumShouldMatch, equalTo(4));\n+ assertTermsEqual(result.extractions, new Term(\"field\", \"value1\"), new Term(\"field\", \"value2\"),\n+ new Term(\"field\", \"value3\"), new Term(\"field\", \"value4\"));\n+\n+ builder = new BooleanQuery.Builder().setMinimumNumberShouldMatch(2);\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value1\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.SHOULD\n+ );\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value2\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.SHOULD\n+ );\n+ builder.add(\n+ new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"field\", \"value3\")), BooleanClause.Occur.MUST)\n+ .add(new TermQuery(new Term(\"field\", \"value4\")), BooleanClause.Occur.MUST)\n+ .build(),\n+ BooleanClause.Occur.SHOULD\n+ );\n+ result = analyze(builder.build(), Version.CURRENT);\n+ assertThat(result.verified, is(true));\n+ assertThat(result.matchAllDocs, is(false));\n+ assertThat(result.minimumShouldMatch, equalTo(2));\n+ assertTermsEqual(result.extractions, new Term(\"field\", \"value1\"), new Term(\"field\", \"value2\"),\n+ new Term(\"field\", \"value3\"), new Term(\"field\", \"value4\"));\n+ }\n+\n private static void assertDimension(byte[] expected, Consumer<byte[]> consumer) {\n byte[] dest = new byte[expected.length];\n consumer.accept(dest);",
"filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java",
"status": "modified"
}
]
} |
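
To illustrate the minimum_should_match over-counting that the PR above fixes, here is a small standalone Java sketch; the class and method names are invented for illustration and this is not the actual QueryAnalyzer code. It walks through the example from the fix's comment, the query (field:val1 AND field:val2) AND (field:val2 AND field:val3) against a document containing val1 val2 val3: summing the clause msm values naively yields 4, which the document can never cover, while discounting duplicate extractions caps it at 3.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Standalone sketch of the msm over-counting; names are hypothetical, not the QueryAnalyzer API.
public class MsmSketch {

    // Naive msm for a conjunction: every sub-clause contributes its full extraction count.
    static int naiveMsm(List<Set<String>> conjunctionClauses) {
        return conjunctionClauses.stream().mapToInt(Set::size).sum();
    }

    // De-duplicated msm, mirroring the idea of the fix: an extraction already contributed
    // by an earlier clause must not raise the msm again.
    static int dedupedMsm(List<Set<String>> conjunctionClauses) {
        Set<String> seen = new HashSet<>();
        int msm = 0;
        for (Set<String> clause : conjunctionClauses) {
            int clauseMsm = clause.size();
            for (String extraction : clause) {
                if (seen.contains(extraction)) {
                    clauseMsm = Math.max(0, clauseMsm - 1); // protect against going negative
                }
            }
            seen.addAll(clause);
            msm += clauseMsm;
        }
        return msm;
    }

    public static void main(String[] args) {
        // query: (field:val1 AND field:val2) AND (field:val2 AND field:val3)
        List<Set<String>> clauses = Arrays.asList(
            new HashSet<>(Arrays.asList("field:val1", "field:val2")),
            new HashSet<>(Arrays.asList("field:val2", "field:val3")));

        // The document "val1 val2 val3" can cover at most 3 distinct extractions, so an
        // indexed msm of 4 makes the CoveringQuery skip a query that actually matches.
        System.out.println(naiveMsm(clauses));   // 4 -> valid match never becomes a candidate
        System.out.println(dedupedMsm(clauses)); // 3 -> candidate selection works again
    }
}
```

In the actual fix the same discount is applied per boolean clause inside QueryAnalyzer, both for conjunctions and, with a floor of 1, for disjunctions, as the diff above shows.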
{
"body": "**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen combining an affix setting with a group setting, the affix setting does not correctly find the namespaces anymore due to a non-working regex in [Setting.AffixKey](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/settings/Setting.java#L1300-L1314), when any setting inside that group setting is supplied\r\n\r\n**Steps to reproduce**:\r\n\r\n```java\r\npublic void testAffixNamespacesWithGroupSetting() {\r\n final Setting.AffixSetting<Settings> affixSetting =\r\n Setting.affixKeySetting(\"prefix.\",\"suffix\",\r\n (key) -> Setting.groupSetting(key + \".\", Setting.Property.Dynamic, Setting.Property.NodeScope));\r\n\r\n // works\r\n assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix\", \"anything\").build()), hasSize(1));\r\n // breaks, has size 0\r\n assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix.anything\", \"anything\").build()), hasSize(1));\r\n}\r\n```",
"comments": [
{
"body": "Side note: I think the best workaround here would be to get rid of group settings all together. They just add leniency.",
"created_at": "2018-01-03T14:54:39Z"
},
{
"body": "Hello,\r\nIt seems that AffixSetting accept only list settings (among settings who has a complex pattern), so I modified the regex pattern to make it also support group settings.",
"created_at": "2018-01-07T15:34:13Z"
},
{
"body": "@PnPie Thanks for the interest @PnPie but please note that this issue does not have the adoptme label; there is work on this issue in-flight already.",
"created_at": "2018-01-07T15:40:13Z"
},
{
"body": "Okay, I just see that it's not assigned, there is no problem :)",
"created_at": "2018-01-07T15:58:11Z"
}
],
"number": 28047,
"title": "Settings: Combining affix setting with group settings results in namespace issues"
} | {
"body": "This introduces a settings updater that allows to specify a list of\r\nsettings. Whenever one of those settings changes, the whole block of\r\nsettings is passed to the consumer.\r\n\r\nThis also fixes an issue with affix settings, when used in combination\r\nwith group settings, which could result in no found settings when used\r\nto get a setting for a namespace.\r\n\r\nLastly logging has been slightly changed, so that filtered settings now\r\nonly log the setting key.\r\n\r\nAnother bug has been fixed for the mock log appender, which did not\r\nwork, when checking for the exact message.\r\n\r\nCloses #28047",
"number": 28338,
"review_comments": [],
"title": "Settings: Introduce settings updater for a list of settings"
} | {
"commits": [
{
"message": "Settings: Introduce settings updater for a list of settings\n\nThis introduces a settings updater that allows to specify a list of\nsettings. Whenever one of those settings changes, the whole block of\nsettings is passed to the consumer.\n\nThis also fixes an issue with affix settings, when used in combination\nwith group settings, which could result in no found settings when used\nto get a setting for a namespace.\n\nLastly logging has been slightly changed, so that filtered settings now\nonly log the setting key.\n\nAnother bug has been fixed for the mock log appender, which did not\nwork, when checking for the exact message.\n\nCloses #28047"
}
],
"files": [
{
"diff": "@@ -194,6 +194,16 @@ public synchronized <T> void addSettingsUpdateConsumer(Setting<T> setting, Consu\n addSettingsUpdater(setting.newUpdater(consumer, logger, validator));\n }\n \n+ /**\n+ * Adds a settings consumer that is only executed if any setting in the supplied list of settings is changed. In that case all the\n+ * settings are specified in the argument are returned.\n+ *\n+ * Also automatically adds empty consumers for all settings in order to activate logging\n+ */\n+ public synchronized void addSettingsUpdateConsumer(Consumer<Settings> consumer, List<? extends Setting<?>> settings) {\n+ addSettingsUpdater(Setting.groupedSettingsUpdater(consumer, logger, settings));\n+ }\n+\n /**\n * Adds a settings consumer for affix settings. Affix settings have a namespace associated to it that needs to be available to the\n * consumer in order to be processed correctly.",
"filename": "server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
{
"diff": "@@ -509,10 +509,10 @@ public Tuple<A, B> getValue(Settings current, Settings previous) {\n @Override\n public void apply(Tuple<A, B> value, Settings current, Settings previous) {\n if (aSettingUpdater.hasChanged(current, previous)) {\n- logger.info(\"updating [{}] from [{}] to [{}]\", aSetting.key, aSetting.getRaw(previous), aSetting.getRaw(current));\n+ logSettingUpdate(aSetting, current, previous, logger);\n }\n if (bSettingUpdater.hasChanged(current, previous)) {\n- logger.info(\"updating [{}] from [{}] to [{}]\", bSetting.key, bSetting.getRaw(previous), bSetting.getRaw(current));\n+ logSettingUpdate(bSetting, current, previous, logger);\n }\n consumer.accept(value.v1(), value.v2());\n }\n@@ -524,6 +524,46 @@ public String toString() {\n };\n }\n \n+ static AbstractScopedSettings.SettingUpdater<Settings> groupedSettingsUpdater(Consumer<Settings> consumer, Logger logger,\n+ final List<? extends Setting<?>> configuredSettings) {\n+\n+ return new AbstractScopedSettings.SettingUpdater<Settings>() {\n+\n+ private Settings get(Settings settings) {\n+ return settings.filter(s -> {\n+ for (Setting<?> setting : configuredSettings) {\n+ if (setting.key.match(s)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ });\n+ }\n+\n+ @Override\n+ public boolean hasChanged(Settings current, Settings previous) {\n+ Settings currentSettings = get(current);\n+ Settings previousSettings = get(previous);\n+ return currentSettings.equals(previousSettings) == false;\n+ }\n+\n+ @Override\n+ public Settings getValue(Settings current, Settings previous) {\n+ return get(current);\n+ }\n+\n+ @Override\n+ public void apply(Settings value, Settings current, Settings previous) {\n+ consumer.accept(value);\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"Updater grouped: \" + configuredSettings.stream().map(Setting::getKey).collect(Collectors.joining(\", \"));\n+ }\n+ };\n+ }\n+\n public static class AffixSetting<T> extends Setting<T> {\n private final AffixKey key;\n private final Function<String, Setting<T>> delegateFactory;\n@@ -541,7 +581,7 @@ boolean isGroupSetting() {\n }\n \n private Stream<String> matchStream(Settings settings) {\n- return settings.keySet().stream().filter((key) -> match(key)).map(settingKey -> key.getConcreteString(settingKey));\n+ return settings.keySet().stream().filter(this::match).map(key::getConcreteString);\n }\n \n public Set<String> getSettingsDependencies(String settingsKey) {\n@@ -812,9 +852,7 @@ public Settings getValue(Settings current, Settings previous) {\n \n @Override\n public void apply(Settings value, Settings current, Settings previous) {\n- if (logger.isInfoEnabled()) { // getRaw can create quite some objects\n- logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n- }\n+ Setting.logSettingUpdate(GroupSetting.this, current, previous, logger);\n consumer.accept(value);\n }\n \n@@ -902,7 +940,7 @@ public T getValue(Settings current, Settings previous) {\n \n @Override\n public void apply(T value, Settings current, Settings previous) {\n- logger.info(\"updating [{}] from [{}] to [{}]\", key, getRaw(previous), getRaw(current));\n+ logSettingUpdate(Setting.this, current, previous, logger);\n consumer.accept(value);\n }\n }\n@@ -1138,6 +1176,16 @@ private static String arrayToParsableString(List<String> array) {\n }\n }\n \n+ static void logSettingUpdate(Setting setting, Settings current, Settings previous, Logger logger) {\n+ if (logger.isInfoEnabled()) {\n+ if (setting.isFiltered()) {\n+ logger.info(\"updating [{}]\", 
setting.key);\n+ } else {\n+ logger.info(\"updating [{}] from [{}] to [{}]\", setting.key, setting.getRaw(previous), setting.getRaw(current));\n+ }\n+ }\n+ }\n+\n public static Setting<Settings> groupSetting(String key, Property... properties) {\n return groupSetting(key, (s) -> {}, properties);\n }\n@@ -1308,8 +1356,8 @@ public static final class AffixKey implements Key {\n if (suffix == null) {\n pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"((?:[-\\\\w]+[.])*[-\\\\w]+$))\");\n } else {\n- // the last part of this regexp is for lists since they are represented as x.${namespace}.y.1, x.${namespace}.y.2\n- pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"([-\\\\w]+)\\\\.\" + Pattern.quote(suffix) + \")(?:\\\\.\\\\d+)?\");\n+ // the last part of this regexp is to support both list and group keys\n+ pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"([-\\\\w]+)\\\\.\" + Pattern.quote(suffix) + \")(?:\\\\..*)?\");\n }\n }\n ",
"filename": "server/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n \n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.hasToString;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n@@ -712,4 +713,79 @@ public void testTimeValue() {\n assertThat(setting.get(Settings.EMPTY).getMillis(), equalTo(random.getMillis() * factor));\n }\n \n+ public void testSettingsGroupUpdater() {\n+ Setting<Integer> intSetting = Setting.intSetting(\"prefix.foo\", 1, Property.NodeScope, Property.Dynamic);\n+ Setting<Integer> intSetting2 = Setting.intSetting(\"prefix.same\", 1, Property.NodeScope, Property.Dynamic);\n+ AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,\n+ Arrays.asList(intSetting, intSetting2));\n+\n+ Settings current = Settings.builder().put(\"prefix.foo\", 123).put(\"prefix.same\", 5555).build();\n+ Settings previous = Settings.builder().put(\"prefix.foo\", 321).put(\"prefix.same\", 5555).build();\n+ assertTrue(updater.apply(current, previous));\n+ }\n+\n+ public void testSettingsGroupUpdaterRemoval() {\n+ Setting<Integer> intSetting = Setting.intSetting(\"prefix.foo\", 1, Property.NodeScope, Property.Dynamic);\n+ Setting<Integer> intSetting2 = Setting.intSetting(\"prefix.same\", 1, Property.NodeScope, Property.Dynamic);\n+ AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,\n+ Arrays.asList(intSetting, intSetting2));\n+\n+ Settings current = Settings.builder().put(\"prefix.same\", 5555).build();\n+ Settings previous = Settings.builder().put(\"prefix.foo\", 321).put(\"prefix.same\", 5555).build();\n+ assertTrue(updater.apply(current, previous));\n+ }\n+\n+ public void testSettingsGroupUpdaterWithAffixSetting() {\n+ Setting<Integer> intSetting = Setting.intSetting(\"prefix.foo\", 1, Property.NodeScope, Property.Dynamic);\n+ Setting.AffixSetting<String> prefixKeySetting =\n+ Setting.prefixKeySetting(\"prefix.foo.bar.\", key -> Setting.simpleString(key, Property.NodeScope, Property.Dynamic));\n+ Setting.AffixSetting<String> affixSetting =\n+ Setting.affixKeySetting(\"prefix.foo.\", \"suffix\", key -> Setting.simpleString(key,Property.NodeScope, Property.Dynamic));\n+\n+ AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,\n+ Arrays.asList(intSetting, prefixKeySetting, affixSetting));\n+\n+ Settings.Builder currentSettingsBuilder = Settings.builder()\n+ .put(\"prefix.foo.bar.baz\", \"foo\")\n+ .put(\"prefix.foo.infix.suffix\", \"foo\");\n+ Settings.Builder previousSettingsBuilder = Settings.builder()\n+ .put(\"prefix.foo.bar.baz\", \"foo\")\n+ .put(\"prefix.foo.infix.suffix\", \"foo\");\n+ boolean removePrefixKeySetting = randomBoolean();\n+ boolean changePrefixKeySetting = randomBoolean();\n+ boolean removeAffixKeySetting = randomBoolean();\n+ boolean changeAffixKeySetting = randomBoolean();\n+ boolean removeAffixNamespace = randomBoolean();\n+\n+ if (removePrefixKeySetting) {\n+ previousSettingsBuilder.remove(\"prefix.foo.bar.baz\");\n+ }\n+ if (changePrefixKeySetting) {\n+ currentSettingsBuilder.put(\"prefix.foo.bar.baz\", \"bar\");\n+ }\n+ if (removeAffixKeySetting) {\n+ previousSettingsBuilder.remove(\"prefix.foo.infix.suffix\");\n+ }\n+ if (changeAffixKeySetting) {\n+ currentSettingsBuilder.put(\"prefix.foo.infix.suffix\", \"bar\");\n+ }\n+ if (removeAffixKeySetting == false && 
changeAffixKeySetting == false && removeAffixNamespace) {\n+ currentSettingsBuilder.remove(\"prefix.foo.infix.suffix\");\n+ currentSettingsBuilder.put(\"prefix.foo.infix2.suffix\", \"bar\");\n+ previousSettingsBuilder.put(\"prefix.foo.infix2.suffix\", \"bar\");\n+ }\n+\n+ boolean expectedChange = removeAffixKeySetting || removePrefixKeySetting || changeAffixKeySetting || changePrefixKeySetting\n+ || removeAffixNamespace;\n+ assertThat(updater.apply(currentSettingsBuilder.build(), previousSettingsBuilder.build()), is(expectedChange));\n+ }\n+\n+ public void testAffixNamespacesWithGroupSetting() {\n+ final Setting.AffixSetting<Settings> affixSetting =\n+ Setting.affixKeySetting(\"prefix.\",\"suffix\",\n+ (key) -> Setting.groupSetting(key + \".\", Setting.Property.Dynamic, Setting.Property.NodeScope));\n+\n+ assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix\", \"anything\").build()), hasSize(1));\n+ assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix.anything\", \"anything\").build()), hasSize(1));\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/common/settings/SettingTests.java",
"status": "modified"
},
{
"diff": "@@ -18,16 +18,22 @@\n */\n package org.elasticsearch.common.settings;\n \n-import org.elasticsearch.common.Strings;\n+import org.apache.logging.log4j.Level;\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.logging.ServerLoggers;\n+import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.MockLogAppender;\n import org.elasticsearch.test.rest.FakeRestRequest;\n \n import java.io.IOException;\n import java.util.Arrays;\n import java.util.HashSet;\n+import java.util.function.Consumer;\n \n import static org.hamcrest.CoreMatchers.equalTo;\n \n@@ -100,7 +106,43 @@ public void testSettingsFiltering() throws IOException {\n .build(),\n \"a.b.*.d\"\n );\n+ }\n+\n+ public void testFilteredSettingIsNotLogged() throws Exception {\n+ Settings oldSettings = Settings.builder().put(\"key\", \"old\").build();\n+ Settings newSettings = Settings.builder().put(\"key\", \"new\").build();\n+\n+ Setting<String> filteredSetting = Setting.simpleString(\"key\", Property.Filtered);\n+ assertExpectedLogMessages((testLogger) -> Setting.logSettingUpdate(filteredSetting, newSettings, oldSettings, testLogger),\n+ new MockLogAppender.SeenEventExpectation(\"secure logging\", \"org.elasticsearch.test\", Level.INFO, \"updating [key]\"),\n+ new MockLogAppender.UnseenEventExpectation(\"unwanted old setting name\", \"org.elasticsearch.test\", Level.INFO, \"*old*\"),\n+ new MockLogAppender.UnseenEventExpectation(\"unwanted new setting name\", \"org.elasticsearch.test\", Level.INFO, \"*new*\")\n+ );\n+ }\n+\n+ public void testRegularSettingUpdateIsFullyLogged() throws Exception {\n+ Settings oldSettings = Settings.builder().put(\"key\", \"old\").build();\n+ Settings newSettings = Settings.builder().put(\"key\", \"new\").build();\n+\n+ Setting<String> regularSetting = Setting.simpleString(\"key\");\n+ assertExpectedLogMessages((testLogger) -> Setting.logSettingUpdate(regularSetting, newSettings, oldSettings, testLogger),\n+ new MockLogAppender.SeenEventExpectation(\"regular logging\", \"org.elasticsearch.test\", Level.INFO,\n+ \"updating [key] from [old] to [new]\"));\n+ }\n \n+ private void assertExpectedLogMessages(Consumer<Logger> consumer,\n+ MockLogAppender.LoggingExpectation ... expectations) throws IllegalAccessException {\n+ Logger testLogger = Loggers.getLogger(\"org.elasticsearch.test\");\n+ MockLogAppender appender = new MockLogAppender();\n+ ServerLoggers.addAppender(testLogger, appender);\n+ try {\n+ appender.start();\n+ Arrays.stream(expectations).forEach(appender::addExpectation);\n+ consumer.accept(testLogger);\n+ appender.assertAllExpectationsMatched();\n+ } finally {\n+ ServerLoggers.removeAppender(testLogger, appender);\n+ }\n }\n \n private void testFiltering(Settings source, Settings filtered, String... patterns) throws IOException {",
"filename": "server/src/test/java/org/elasticsearch/common/settings/SettingsFilterTests.java",
"status": "modified"
},
{
"diff": "@@ -92,7 +92,7 @@ public void match(LogEvent event) {\n saw = true;\n }\n } else {\n- if (event.getMessage().toString().contains(message)) {\n+ if (event.getMessage().getFormattedMessage().contains(message)) {\n saw = true;\n }\n }",
"filename": "test/framework/src/main/java/org/elasticsearch/test/MockLogAppender.java",
"status": "modified"
}
]
} |
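The affix-key fix in #28338 comes down to one character class in the regular expression quoted in the diff above: the old optional tail `(?:\.\d+)?` only tolerates the numeric suffixes used by list settings, while group settings append arbitrary keys after the suffix. The following standalone sketch (plain JDK regex, not the actual `Setting.AffixKey` class) reproduces both patterns and shows why only the patched one still matches a group-style key:

```java
import java.util.regex.Pattern;

public class AffixKeyRegexDemo {
    public static void main(String[] args) {
        String prefix = "prefix.";
        String suffix = "suffix";
        // Old pattern: the optional tail only accepts numeric list parts such as ".0", ".1"
        Pattern oldPattern = Pattern.compile(
            "(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\.\\d+)?");
        // Patched pattern from the PR: the optional tail accepts any trailing group key
        Pattern newPattern = Pattern.compile(
            "(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\..*)?");

        for (String key : new String[] {"prefix.infix.suffix", "prefix.infix.suffix.anything"}) {
            // matches() == true means the namespace ("infix") can be extracted from group 2
            System.out.println(key
                + " -> old: " + oldPattern.matcher(key).matches()
                + ", new: " + newPattern.matcher(key).matches());
        }
        // prefix.infix.suffix          -> old: true, new: true
        // prefix.infix.suffix.anything -> old: false, new: true
    }
}
```

Run against the keys from the reproduction test above, the group-style key only matches with the relaxed tail, which is why `getNamespaces` came back empty before the fix.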
{
"body": "Given the following scenario in a multi-node cluster:\r\n\r\n- apply some set of transient settings\r\n- ensure settings are applied\r\n- apply addition persistent setting\r\n- transient settings are ignored but still available in cluster state\r\n\r\nThis is reproducible with the following:\r\n\r\n- Create 6 nodes\r\n- Create a \"test\" index (default settings)\r\n- Apply transient settings to exclude 3 of the nodes by `_name` with allocation filtering\r\n- Ensure shards have moved off of the 3 excluded nodes\r\n- Apply a persistent setting (I used `cluster.info.update.interval` but any setting will work)\r\n- Shards no longer conform to the transient allocation filtering settings\r\n- Settings are still shown in the cluster state\r\n\r\nThis reproduces on master, 6.x, and 6.1, but does not on 6.0 and earlier.\r\n\r\nAttached is the (very quickly hacked together) test that reproduces this:\r\n[ClusterBugIT.java.txt](https://github.com/elastic/elasticsearch/files/1647618/ClusterBugIT.java.txt)",
"comments": [
{
"body": "Looking into the root cause for this, it appears that the `affixMapUpdateConsumer` that's added in `FilterAllocationDecider` is called with an empty map when the persistent setting is added, so it's not passing the existing transient settings to the consumer",
"created_at": "2018-01-19T19:47:24Z"
}
],
"number": 28316,
"title": "Setting persistent settings causes transient settings not to be applied"
} | {
"body": "Previously if an affixMap setting was registered, and then a completely\r\ndifferent setting was applied, the affixMap update consumer would be notified\r\nwith an empty map. This caused settings that were previously set to be unset in\r\nlocal state in a consumer that assumed it would only be called when the affixMap\r\nsetting was changed.\r\n\r\nThis commit changes the behavior so if a prefix `foo.` is registered, any\r\nsetting under the prefix will have the update consumer notified if there are\r\nchanges starting with `foo.`.\r\n\r\nResolves #28316",
"number": 28317,
"review_comments": [
{
"body": "Do you think we should test the patch for persistent settings as well, or a mix of the two? We observed that issue effects persistent and transient settings.",
"created_at": "2018-01-22T17:27:15Z"
},
{
"body": "@jhalterman the unit test for this is outside the scope of persistent or transient (the fix is to the settings themselves, it actually doesn't matter whether the settings are transient or persistent), so I don't think a test for that is necessary.",
"created_at": "2018-01-22T17:33:04Z"
}
],
"title": " Fix setting notification for complex setting (affixMap settings) that could cause transient settings to be ignored"
} | {
"commits": [
{
"message": "Notify affixMap settings when any under the registered prefix matches\n\nPreviously if an affixMap setting was registered, and then a completely\ndifferent setting was applied, the affixMap update consumer would be notified\nwith an empty map. This caused settings that were previously set to be unset in\nlocal state in a consumer that assumed it would only be called when the affixMap\nsetting was changed.\n\nThis commit changes the behavior so if a prefix `foo.` is registered, any\nsetting under the prefix will have the update consumer notified if there are\nchanges starting with `foo.`.\n\nResolves #28316"
},
{
"message": "Add unit test"
},
{
"message": "Address feedback"
},
{
"message": "Merge remote-tracking branch 'origin/master' into affix-setting-fix"
}
],
"files": [
{
"diff": "@@ -597,7 +597,7 @@ AbstractScopedSettings.SettingUpdater<Map<String, T>> newAffixMapUpdater(Consume\n \n @Override\n public boolean hasChanged(Settings current, Settings previous) {\n- return Stream.concat(matchStream(current), matchStream(previous)).findAny().isPresent();\n+ return current.filter(k -> match(k)).equals(previous.filter(k -> match(k))) == false;\n }\n \n @Override\n@@ -612,7 +612,7 @@ public Map<String, T> getValue(Settings current, Settings previous) {\n if (updater.hasChanged(current, previous)) {\n // only the ones that have changed otherwise we might get too many updates\n // the hasChanged above checks only if there are any changes\n- T value = updater.getValue(current, previous);\n+ T value = updater.getValue(current, previous);\n if ((omitDefaults && value.equals(concreteSetting.getDefault(current))) == false) {\n result.put(namespace, value);\n }",
"filename": "server/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -21,11 +21,14 @@\n \n import org.apache.logging.log4j.Logger;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n@@ -34,7 +37,9 @@\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n \n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Set;\n \n import static org.hamcrest.Matchers.equalTo;\n \n@@ -156,5 +161,58 @@ public void testInvalidIPFilterClusterSettings() {\n .execute().actionGet());\n assertEquals(\"invalid IP address [192.168.1.1.] for [\" + filterSetting.getKey() + ipKey + \"]\", e.getMessage());\n }\n+\n+ public void testTransientSettingsStillApplied() throws Exception {\n+ List<String> nodes = internalCluster().startNodes(6);\n+ Set<String> excludeNodes = new HashSet<>(nodes.subList(0, 3));\n+ Set<String> includeNodes = new HashSet<>(nodes.subList(3, 6));\n+ logger.info(\"--> exclude: [{}], include: [{}]\",\n+ Strings.collectionToCommaDelimitedString(excludeNodes),\n+ Strings.collectionToCommaDelimitedString(includeNodes));\n+ ensureStableCluster(6);\n+ client().admin().indices().prepareCreate(\"test\").get();\n+ ensureGreen(\"test\");\n+\n+ Settings exclude = Settings.builder().put(\"cluster.routing.allocation.exclude._name\",\n+ Strings.collectionToCommaDelimitedString(excludeNodes)).build();\n+\n+ logger.info(\"--> updating settings\");\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(exclude).get();\n+\n+ logger.info(\"--> waiting for relocation\");\n+ waitForRelocation(ClusterHealthStatus.GREEN);\n+\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+\n+ for (ShardRouting shard : state.getRoutingTable().shardsWithState(ShardRoutingState.STARTED)) {\n+ String node = state.getRoutingNodes().node(shard.currentNodeId()).node().getName();\n+ logger.info(\"--> shard on {} - {}\", node, shard);\n+ assertTrue(\"shard on \" + node + \" but should only be on the include node list: \" +\n+ Strings.collectionToCommaDelimitedString(includeNodes),\n+ includeNodes.contains(node));\n+ }\n+\n+ Settings other = Settings.builder().put(\"cluster.info.update.interval\", \"45s\").build();\n+\n+ logger.info(\"--> updating settings with random persistent setting\");\n+ client().admin().cluster().prepareUpdateSettings()\n+ .setPersistentSettings(other).setTransientSettings(exclude).get();\n+\n+ logger.info(\"--> waiting for relocation\");\n+ waitForRelocation(ClusterHealthStatus.GREEN);\n+\n+ state = client().admin().cluster().prepareState().get().getState();\n+\n+ // The transient settings still exist in the state\n+ assertThat(state.metaData().transientSettings(), equalTo(exclude));\n+\n+ for (ShardRouting shard : state.getRoutingTable().shardsWithState(ShardRoutingState.STARTED)) {\n+ String node = 
state.getRoutingNodes().node(shard.currentNodeId()).node().getName();\n+ logger.info(\"--> shard on {} - {}\", node, shard);\n+ assertTrue(\"shard on \" + node + \" but should only be on the include node list: \" +\n+ Strings.collectionToCommaDelimitedString(includeNodes),\n+ includeNodes.contains(node));\n+ }\n+ }\n }\n ",
"filename": "server/src/test/java/org/elasticsearch/cluster/allocation/FilteringAllocationIT.java",
"status": "modified"
},
{
"diff": "@@ -261,6 +261,21 @@ public void testAddConsumerAffixMap() {\n assertEquals(2, listResults.size());\n assertEquals(2, intResults.size());\n \n+ service.applySettings(Settings.builder()\n+ .put(\"foo.test.bar\", 2)\n+ .put(\"foo.test_1.bar\", 7)\n+ .putList(\"foo.test_list.list\", \"16\", \"17\")\n+ .putList(\"foo.test_list_1.list\", \"18\", \"19\", \"20\")\n+ .build());\n+\n+ assertEquals(2, intResults.get(\"test\").intValue());\n+ assertEquals(7, intResults.get(\"test_1\").intValue());\n+ assertEquals(Arrays.asList(16, 17), listResults.get(\"test_list\"));\n+ assertEquals(Arrays.asList(18, 19, 20), listResults.get(\"test_list_1\"));\n+ assertEquals(2, listResults.size());\n+ assertEquals(2, intResults.size());\n+\n+\n listResults.clear();\n intResults.clear();\n \n@@ -286,6 +301,35 @@ public void testAddConsumerAffixMap() {\n \n }\n \n+ public void testAffixMapConsumerNotCalledWithNull() {\n+ Setting.AffixSetting<Integer> prefixSetting = Setting.prefixKeySetting(\"eggplant.\",\n+ (k) -> Setting.intSetting(k, 1, Property.Dynamic, Property.NodeScope));\n+ Setting.AffixSetting<Integer> otherSetting = Setting.prefixKeySetting(\"other.\",\n+ (k) -> Setting.intSetting(k, 1, Property.Dynamic, Property.NodeScope));\n+ AbstractScopedSettings service = new ClusterSettings(Settings.EMPTY,new HashSet<>(Arrays.asList(prefixSetting, otherSetting)));\n+ Map<String, Integer> affixResults = new HashMap<>();\n+\n+ Consumer<Map<String,Integer>> consumer = (map) -> {\n+ logger.info(\"--> consuming settings {}\", map);\n+ affixResults.clear();\n+ affixResults.putAll(map);\n+ };\n+ service.addAffixMapUpdateConsumer(prefixSetting, consumer, (s, k) -> {}, randomBoolean());\n+ assertEquals(0, affixResults.size());\n+ service.applySettings(Settings.builder()\n+ .put(\"eggplant._name\", 2)\n+ .build());\n+ assertThat(affixResults.size(), equalTo(1));\n+ assertThat(affixResults.get(\"_name\"), equalTo(2));\n+\n+ service.applySettings(Settings.builder()\n+ .put(\"eggplant._name\", 2)\n+ .put(\"other.thing\", 3)\n+ .build());\n+\n+ assertThat(affixResults.get(\"_name\"), equalTo(2));\n+ }\n+\n public void testApply() {\n Setting<Integer> testSetting = Setting.intSetting(\"foo.bar\", 1, Property.Dynamic, Property.NodeScope);\n Setting<Integer> testSetting2 = Setting.intSetting(\"foo.bar.baz\", 1, Property.Dynamic, Property.NodeScope);",
"filename": "server/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java",
"status": "modified"
}
]
} |
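The behavioural change in #28317 is easiest to see in isolation: the old `hasChanged` fired whenever any key under the registered prefix was merely present, so an update touching only an unrelated setting still notified the affix-map consumer (with an empty map of changes), while the patched check compares the filtered views of the old and new settings. A simplified, standalone illustration using plain JDK collections (not the real `Settings`/`Setting` classes), assuming an allocation-exclude setting is already applied and only an unrelated setting is added:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Predicate;

public class AffixChangeCheckDemo {
    // Keep only the entries whose key matches the registered prefix.
    static Map<String, String> filter(Map<String, String> settings, Predicate<String> match) {
        Map<String, String> filtered = new TreeMap<>();
        settings.forEach((k, v) -> { if (match.test(k)) filtered.put(k, v); });
        return filtered;
    }

    public static void main(String[] args) {
        Predicate<String> match = k -> k.startsWith("cluster.routing.allocation.exclude.");

        Map<String, String> previous = Map.of(
            "cluster.routing.allocation.exclude._name", "node-1,node-2,node-3");
        // An unrelated persistent setting is added; the exclude setting itself is untouched.
        Map<String, String> current = Map.of(
            "cluster.routing.allocation.exclude._name", "node-1,node-2,node-3",
            "cluster.info.update.interval", "45s");

        // Old check (simplified): "is anything under the prefix present at all?"
        boolean oldHasChanged = current.keySet().stream().anyMatch(match)
            || previous.keySet().stream().anyMatch(match);

        // New check from the PR: "did the filtered view of the settings actually change?"
        boolean newHasChanged = !filter(current, match).equals(filter(previous, match));

        System.out.println("old hasChanged = " + oldHasChanged); // true  -> consumer notified spuriously
        System.out.println("new hasChanged = " + newHasChanged); // false -> consumer left alone
    }
}
```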
{
"body": "I think the behavior specified in [`IndexAliasesIT#testIndicesGetAliases`](https://github.com/elastic/elasticsearch/blob/5b2ab96364335539affe99151546552423700f6e/core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java#L571-L583) is wrong. Namely:\r\n - create indices foobar, test, test123, foobarbaz, bazbar\r\n - add an alias alias1 -> foobar\r\n - add an alias alias2 -> foobar\r\n - execute get aliases on the transport layer specifying alias1 as the only alias to get\r\n - the response includes foobar, test, test123, foobarbaz, bazbar albeit with empty alias metadata for all indices except foobar which contains alias1 only\r\n - previously the response would only contain foobar with alias metadata for alias1\r\n\r\nThis was a breaking change resulting from #25114, the specific change in behavior arising from a [change to MetaData](https://github.com/elastic/elasticsearch/commit/5b2ab96364335539affe99151546552423700f6e#diff-d6d141c41772a9088a29c2b838e5d8c4).\r\n\r\nI am opening this personally considering it a bug but for discussion where we might decide to only document this behavior (not my preference, I think this behavior is weird and not intuitive).\r\n\r\nRelates #27743",
"comments": [
{
"body": "It's worth noting that this only affects the transport layer. The REST layer doesn't include the extra indices. The transport layer should probably be changed to return only indices that contain the alias when an alias name is specified.",
"created_at": "2017-12-13T03:06:17Z"
},
{
"body": "The transport client is exactly what this issue is being reported for.",
"created_at": "2017-12-13T04:51:23Z"
},
{
"body": "@jasontedor @dakrone \r\nYes Transport client got affected. Now we are explicitly filtering the indices which contains AliasMetaData. Its performance hit too for our application. \r\n\r\nIn which release /version we can expect its fix.? \r\n\r\n",
"created_at": "2017-12-13T05:12:13Z"
},
{
"body": "> The transport client is exactly what this issue is being reported for.\n\nYes I understand, I was only clarifying for the sake of other people\nreading.\n\nOn Dec 12, 2017 9:51 PM, \"Jason Tedor\" <notifications@github.com> wrote:\n\n> The transport client is exactly what this issue is being reported for.\n>\n> —\n> You are receiving this because you were assigned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/27763#issuecomment-351281851>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AABKdIlsoCe_HDhcnutz3ym833dV9H3Nks5s_1fNgaJpZM4Q-Cu4>\n> .\n>\n",
"created_at": "2017-12-13T05:14:37Z"
},
{
"body": "> In which release /version we can expect its fix.?\r\n\r\nRight now you can not have any expectation, no decision has been reached. ",
"created_at": "2017-12-13T11:30:20Z"
},
{
"body": "+1",
"created_at": "2017-12-15T14:09:58Z"
},
{
"body": "We discussed this in Fix-it-Friday and agree that this is a bug. Would you take care of this @dakrone.",
"created_at": "2017-12-15T14:40:13Z"
},
{
"body": "@antitech I want to be clear that while we agree this is bug, it is not a high priority bug so there is no expectation about the timeline for a fix.",
"created_at": "2017-12-15T14:40:59Z"
},
{
"body": "@jasontedor its ok. I'll apply any temporary fix for now . Just a request can you please attach this bug with aliases tag.",
"created_at": "2017-12-15T16:52:44Z"
},
{
"body": "+1",
"created_at": "2018-01-19T10:09:38Z"
},
{
"body": "Reopening this issue, because a qa test failed in another project, in this case the aliases were being expanded (via AliasesRequest#aliases(...)) before arriving in the transport action.",
"created_at": "2018-01-23T08:08:36Z"
},
{
"body": "@martijnvg Is it possible to resolve this issue?",
"created_at": "2018-04-13T02:01:27Z"
}
],
"number": 27763,
"title": "Get aliases for specific aliases returns all indices"
} | {
"body": "PR for #27763",
"number": 28294,
"review_comments": [
{
"body": "nit: than -> then (or just leave it out)",
"created_at": "2018-01-18T14:27:59Z"
},
{
"body": "I tried to find a test that covers this branch, at least in IndexAliasesIT I couldn't find any. Is there a test anywhere else? If no, maybe it would make sense to add one. ",
"created_at": "2018-01-18T14:29:17Z"
},
{
"body": "Yes, see my other comment.",
"created_at": "2018-01-18T15:41:13Z"
}
],
"title": "Do not return all indices if a specific alias is requested via get aliases api"
} | {
"commits": [
{
"message": "Do not return all indices if a specific alias is requested via get aliases api.\n\nIf a get alias api call requests a specific alias pattern then\nindices not having any matching aliases should not be included in the response.\n\nCloses #27763"
}
],
"files": [
{
"diff": "@@ -62,8 +62,7 @@ protected GetAliasesResponse newResponse() {\n @Override\n protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) {\n String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request);\n- @SuppressWarnings(\"unchecked\")\n- ImmutableOpenMap<String, List<AliasMetaData>> result = (ImmutableOpenMap) state.metaData().findAliases(request.aliases(), concreteIndices);\n+ ImmutableOpenMap<String, List<AliasMetaData>> result = state.metaData().findAliases(request.aliases(), concreteIndices);\n listener.onResponse(new GetAliasesResponse(result));\n }\n ",
"filename": "server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java",
"status": "modified"
},
{
"diff": "@@ -275,14 +275,12 @@ public ImmutableOpenMap<String, List<AliasMetaData>> findAliases(final String[]\n \n if (!filteredValues.isEmpty()) {\n // Make the list order deterministic\n- CollectionUtil.timSort(filteredValues, new Comparator<AliasMetaData>() {\n- @Override\n- public int compare(AliasMetaData o1, AliasMetaData o2) {\n- return o1.alias().compareTo(o2.alias());\n- }\n- });\n+ CollectionUtil.timSort(filteredValues, Comparator.comparing(AliasMetaData::alias));\n+ mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n+ } else if (matchAllAliases) {\n+ // in case all aliases are requested then it is desired to return the concrete index with no aliases (#25114):\n+ mapBuilder.put(index, Collections.emptyList());\n }\n- mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n }\n return mapBuilder.build();\n }",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.rest.action.admin.indices;\n \n-import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest;\n import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;",
"filename": "server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java",
"status": "modified"
},
{
"diff": "@@ -570,24 +570,20 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting alias1\");\n GetAliasesResponse getResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(5));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n AliasesExistResponse existsResponse = admin().indices().prepareAliasesExist(\"alias1\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n logger.info(\"--> getting all aliases that start with alias*\");\n getResponse = admin().indices().prepareGetAliases(\"alias*\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(5));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(2));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n@@ -599,10 +595,6 @@ public void testIndicesGetAliases() throws Exception {\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"alias*\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n@@ -687,13 +679,12 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bar\");\n getResponse = admin().indices().prepareGetAliases(\"f*\").addIndices(\"*bar\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(2));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = 
admin().indices().prepareAliasesExist(\"f*\")\n .addIndices(\"*bar\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n@@ -702,14 +693,13 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bac\");\n getResponse = admin().indices().prepareGetAliases(\"foo\").addIndices(\"*bac\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(2));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"foo\")\n .addIndices(\"*bac\").get();\n assertThat(existsResponse.exists(), equalTo(true));",
"filename": "server/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java",
"status": "modified"
}
]
} |
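The gist of the #28294 change to `MetaData#findAliases` is that an index only appears in the response when it actually has a matching alias, except when all aliases are requested, in which case indices without aliases are still listed with an empty alias list (the #25114 behaviour). A minimal stand-in sketch using plain maps and exact alias names (the real implementation also handles wildcard patterns and AliasMetaData objects):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FindAliasesDemo {
    // Hypothetical stand-in for MetaData#findAliases: index -> aliases matching the request.
    static Map<String, List<String>> findAliases(Map<String, List<String>> indexToAliases,
                                                 List<String> requestedAliases) {
        boolean matchAllAliases = requestedAliases.isEmpty();
        Map<String, List<String>> result = new LinkedHashMap<>();
        indexToAliases.forEach((index, aliases) -> {
            List<String> filtered = new ArrayList<>(aliases);
            if (!matchAllAliases) {
                filtered.retainAll(requestedAliases);
            }
            if (!filtered.isEmpty()) {
                Collections.sort(filtered); // deterministic order, as in the real implementation
                result.put(index, filtered);
            } else if (matchAllAliases) {
                // when all aliases are requested, still report the index, with no aliases
                result.put(index, Collections.emptyList());
            }
            // otherwise: drop the index entirely -- this is the behavioural change of the PR
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> metaData = new LinkedHashMap<>();
        metaData.put("foobar", List.of("alias1", "alias2"));
        metaData.put("test", List.of());
        metaData.put("bazbar", List.of());

        System.out.println(findAliases(metaData, List.of("alias1"))); // {foobar=[alias1]}
        System.out.println(findAliases(metaData, List.of()));         // {foobar=[alias1, alias2], test=[], bazbar=[]}
    }
}
```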
{
"body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**:\r\n5.2.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nResult is IllegalArgumentException. Expected empty strings to be ignored.\r\n\r\nIs this an issue? If not, is there a recommended resolution?\r\n\r\n**Steps to reproduce**:\r\nIn Sense:\r\n```\r\nPUT twitter\r\n{\r\n \"mappings\": {\r\n \"tweet\": {\r\n \"properties\": {\r\n \"message\": {\r\n \"type\": \"completion\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT twitter/tweet/3\r\n{\r\n \"message\" : \"\"\r\n}\r\n```\r\nEdit: Updated description to simpler example.",
"comments": [
{
"body": "I agree - empty string should be ignored.\r\n\r\nSimpler recreation:\r\n\r\n```\r\nPUT twitter\r\n{\r\n \"mappings\": {\r\n \"tweet\": {\r\n \"properties\": {\r\n \"message\": {\r\n \"type\": \"completion\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT twitter/tweet/3\r\n{\r\n \"message\" : \"\"\r\n}\r\n```",
"created_at": "2017-02-12T12:16:24Z"
},
{
"body": "Have the same issue. Does anyone have a solution for that?",
"created_at": "2018-01-01T15:18:10Z"
},
{
"body": "@elastic/es-search-aggs \r\n\r\nJim has a PR open for this #28289",
"created_at": "2018-03-22T23:19:36Z"
}
],
"number": 23121,
"title": "Empty string with completion type results in IllegalArgumentException"
} | {
"body": "This change makes sure that an empty completion input does not throw an IAE when indexing.\r\nInstead the input is simply ignored.\r\n\r\nCloses #23121",
"number": 28289,
"review_comments": [],
"title": "Ignore empty completion input"
} | {
"commits": [
{
"message": "Ignore empty completion input\n\nThis change makes sure that an empty completion input does not throw an IAE when indexing.\nInstead the input is simply ignored.\n\nCloses #23121"
}
],
"files": [
{
"diff": "@@ -457,6 +457,10 @@ public Mapper parse(ParseContext context) throws IOException {\n }\n input = input.substring(0, len);\n }\n+ if (input.length() == 0) {\n+ // Ignore empty inputs\n+ continue;\n+ }\n CompletionInputMetaData metaData = completionInput.getValue();\n if (fieldType().hasContextMappings()) {\n fieldType().getContextMappings().addField(context.doc(), fieldType().name(),",
"filename": "server/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -395,6 +395,16 @@ public void testFieldValueValidation() throws Exception {\n assertThat(cause, instanceOf(IllegalArgumentException.class));\n assertThat(cause.getMessage(), containsString(\"[0x1e]\"));\n }\n+\n+ // empty inputs are ignored\n+ ParsedDocument doc = defaultMapper.parse(SourceToParse.source(\"test\", \"type1\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"completion\", \"\")\n+ .endObject()\n+ .bytes(),\n+ XContentType.JSON));\n+ assertThat(doc.docs().size(), equalTo(1));\n+ assertNull(doc.docs().get(0).get(\"completion\"));\n }\n \n public void testPrefixQueryType() throws Exception {",
"filename": "server/src/test/java/org/elasticsearch/index/mapper/CompletionFieldMapperTests.java",
"status": "modified"
}
]
} |
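The #28289 fix is a single guard in the parse loop of `CompletionFieldMapper`: empty inputs are skipped rather than passed on, which is what previously ended in the `IllegalArgumentException`. A trivial standalone sketch of that behaviour, assuming the inputs have already been extracted as strings (the real mapper also handles truncation and context mappings):

```java
import java.util.ArrayList;
import java.util.List;

public class CompletionInputDemo {
    // Hypothetical stand-in for the accept/skip decision in CompletionFieldMapper#parse:
    // empty inputs are silently ignored instead of being rejected with an exception.
    static List<String> acceptedInputs(List<String> inputs) {
        List<String> accepted = new ArrayList<>();
        for (String input : inputs) {
            if (input.isEmpty()) {
                continue; // ignore empty inputs (the behaviour introduced by the PR)
            }
            accepted.add(input);
        }
        return accepted;
    }

    public static void main(String[] args) {
        // A document with an empty completion value simply produces no suggestion input.
        System.out.println(acceptedInputs(List.of("", "new york", ""))); // [new york]
    }
}
```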
{
"body": "When parsing the control groups to which the Elasticsearch process belongs, we extract a map from subsystems to paths by parsing /proc/self/cgroup. This file contains colon-delimited entries of the\r\nform hierarchy-ID:subsystem-list:cgroup-path. For control group version 1 hierarchies, the subsystem-list is a comma-delimited list of the subsystems for that hierarchy. For control group version 2 hierarchies (which can only exist on Linux kernels since version 4.5), the subsystem-list is an empty string. The previous parsing of /proc/self/cgroup incorrectly accounted for this possibility (a + instead of a * in a regular expression). This commit addresses this issue, adds a test case that covers this possibility, and simplifies the code that parses /proc/self/cgroup.\r\n\r\nCloses #23486",
"comments": [
{
"body": "Thank you for reviewing @tlrx.",
"created_at": "2017-03-06T22:15:45Z"
},
{
"body": "@jasontedor will be this backported to 5.2.x?",
"created_at": "2017-08-22T09:37:36Z"
},
{
"body": "@jalberto No, that branch is unmaintained now. ",
"created_at": "2017-08-22T10:03:33Z"
},
{
"body": "@jalberto Cherry-picking the patch worked for me: https://github.com/elastic/elasticsearch/issues/23486#issuecomment-369090398",
"created_at": "2018-02-28T01:38:21Z"
}
],
"number": 23493,
"title": "Handle existence of cgroup version 2 hierarchy"
} | {
"body": "Versions 5.1.1 to 5.3.0 of Elasticsearch had a problem where these versions did not handle new kernels properly due to improper handling of cgroup v2. On Linux kernels that support cgroup2 and the unified hierarchy is mounted, Elasticsearch would never start. This means full-cluster restart upgrade tests on such systems will never succeed. This commit skips these tests on OS that have the unified cgroup v2 hierarchy.\r\n\r\nRelates #26968, relates #23493\r\n",
"number": 28268,
"review_comments": [],
"title": "Skip restart upgrades for buggy cgroup2 handling"
} | {
"commits": [
{
"message": "Skip restart upgrades for buggy cgroup2 handling\n\nVersions 5.1.1 to 5.3.0 of Elasticsearch had a problem where these\nversions did not handle new kernels properly due to improper handling of\ncgroup v2. On Linux kernels that support cgroup2 and the unified\nhierarchy is mounted, Elasticsearch would never start. This means\nfull-cluster restart upgrade tests on such systems will never\nsucceed. This commit skips these tests on OS that have the unified\ncgroup v2 hierarchy."
}
],
"files": [
{
"diff": "@@ -146,6 +146,11 @@ task verifyVersions {\n */\n allprojects {\n ext.bwc_tests_enabled = true\n+ /*\n+ * Versions of Elasticsearch 5.1.1 through 5.3.0 inclusive did not start on versions of Linux with cgroups v2 enabled (kernel >= 4.5).\n+ * This property is provided to all projects that need to check conditionally if they should skip a BWC test task.\n+ */\n+ ext.cgroupsV2Enabled = Os.isFamily(Os.FAMILY_UNIX) && \"mount\".execute().text.readLines().any { it =~ /.*type cgroup2.*/ }\n }\n \n task verifyBwcTestsEnabled {",
"filename": "build.gradle",
"status": "modified"
},
{
"diff": "@@ -97,6 +97,16 @@ public class Version {\n return otherVersion.suffix == '' || suffix < otherVersion.suffix\n }\n \n+ /**\n+ * Elasticsearch versions 5.1.1 through 5.3.0 fail to start on versions of Linux that support cgroups v2 (kernel >= 4.5). This is a\n+ * convenience method for checking if the current version falls into that range.\n+ *\n+ * @return true if the version is one impacted by the cgroups v2 bug, otherwise false\n+ */\n+ public boolean isVersionBrokenIfCgroupsV2Enabled() {\n+ return onOrAfter(\"5.1.1\") && onOrBefore(\"5.3.0\")\n+ }\n+\n boolean equals(o) {\n if (this.is(o)) return true\n if (getClass() != o.class) return false",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy",
"status": "modified"
},
{
"diff": "@@ -17,7 +17,6 @@\n * under the License.\n */\n \n-\n import org.elasticsearch.gradle.Version\n import org.elasticsearch.gradle.test.RestIntegTestTask\n \n@@ -31,6 +30,11 @@ task bwcTest {\n }\n \n for (Version version : versionCollection.versionsIndexCompatibleWithCurrent) {\n+ // the BWC version under test will fail to start in this situation so we skip creating the test task\n+ if (project.cgroupsV2Enabled && version.isVersionBrokenIfCgroupsV2Enabled()) {\n+ continue\n+ }\n+\n String baseName = \"v${version}\"\n \n Task oldClusterTest = tasks.create(name: \"${baseName}#oldClusterTest\", type: RestIntegTestTask) {",
"filename": "qa/full-cluster-restart/build.gradle",
"status": "modified"
}
]
} |
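The Gradle-side detection in the diff above shells out to `mount` and looks for a `cgroup2` mount entry. A standalone Java sketch of the same check (an illustration, not the build's Groovy code), assuming a Unix-like host where `mount` is on the PATH:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class CgroupV2Check {
    // Mirrors the Gradle-side check from the PR: run `mount` and look for a cgroup2 entry.
    static boolean cgroupsV2Enabled() throws IOException {
        Process mount = new ProcessBuilder("mount").start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(mount.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().anyMatch(line -> line.contains("type cgroup2"));
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("cgroup v2 unified hierarchy mounted: " + cgroupsV2Enabled());
    }
}
```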
{
"body": "When parsing the control groups to which the Elasticsearch process belongs, we extract a map from subsystems to paths by parsing /proc/self/cgroup. This file contains colon-delimited entries of the\r\nform hierarchy-ID:subsystem-list:cgroup-path. For control group version 1 hierarchies, the subsystem-list is a comma-delimited list of the subsystems for that hierarchy. For control group version 2 hierarchies (which can only exist on Linux kernels since version 4.5), the subsystem-list is an empty string. The previous parsing of /proc/self/cgroup incorrectly accounted for this possibility (a + instead of a * in a regular expression). This commit addresses this issue, adds a test case that covers this possibility, and simplifies the code that parses /proc/self/cgroup.\r\n\r\nCloses #23486",
"comments": [
{
"body": "Thank you for reviewing @tlrx.",
"created_at": "2017-03-06T22:15:45Z"
},
{
"body": "@jasontedor will be this backported to 5.2.x?",
"created_at": "2017-08-22T09:37:36Z"
},
{
"body": "@jalberto No, that branch is unmaintained now. ",
"created_at": "2017-08-22T10:03:33Z"
},
{
"body": "@jalberto Cherry-picking the patch worked for me: https://github.com/elastic/elasticsearch/issues/23486#issuecomment-369090398",
"created_at": "2018-02-28T01:38:21Z"
}
],
"number": 23493,
"title": "Handle existence of cgroup version 2 hierarchy"
} | {
"body": "Versions 5.1.1 to 5.3.0 of Elasticsearch had a problem where these versions did not handle new kernels properly due to improper handling of cgroup v2. On Linux kernels that support cgroup2 and the unified hierarchy is mounted, Elasticsearch would never start. This means rolling upgrade tests on such systems will never succeed. This commit skips these tests on OS that have the unified cgroup v2 hierarchy.\r\n\r\nRelates #26968, relates #23493\r\n",
"number": 28267,
"review_comments": [],
"title": "Skip rolling upgrades for buggy cgroup2 handling"
} | {
"commits": [
{
"message": "Skip rolling upgrades for buggy cgroup2 handling\n\nVersions 5.1.1 to 5.3.0 of Elasticsearch had a problem where these\nversions did not handle new kernels properly due to improper handling of\ncgroup v2. On Linux kernels that support cgroup2 and the unified\nhierarchy is mounted, Elasticsearch would never start. This means\nrolling upgrade tests on such systems will never succeed. This commit\nskips these tests on OS that have the unified cgroup v2 hierarchy."
}
],
"files": [
{
"diff": "@@ -169,6 +169,11 @@ task verifyVersions {\n */\n allprojects {\n ext.bwc_tests_enabled = true\n+ /*\n+ * Versions of Elasticsearch 5.1.1 through 5.3.0 inclusive did not start on versions of Linux with cgroups v2 enabled (kernel >= 4.5).\n+ * This property is provided to all projects that need to check conditionally if they should skip a BWC test task.\n+ */\n+ ext.cgroupsV2Enabled = Os.isFamily(Os.FAMILY_UNIX) && \"mount\".execute().text.readLines().any { it =~ /.*type cgroup2.*/ }\n }\n \n task verifyBwcTestsEnabled {",
"filename": "build.gradle",
"status": "modified"
},
{
"diff": "@@ -81,4 +81,15 @@ public class Version {\n public boolean after(String compareTo) {\n return id > fromString(compareTo).id\n }\n+\n+ /**\n+ * Elasticsearch versions 5.1.1 through 5.3.0 fail to start on versions of Linux that support cgroups v2 (kernel >= 4.5). This is a\n+ * convenience method for checking if the current version falls into that range.\n+ *\n+ * @return true if the version is one impacted by the cgroups v2 bug, otherwise false\n+ */\n+ public boolean isVersionBrokenIfCgroupsV2Enabled() {\n+ return onOrAfter(\"5.1.1\") && onOrBefore(\"5.3.0\")\n+ }\n+\n }",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy",
"status": "modified"
},
{
"diff": "@@ -17,8 +17,9 @@\n * under the License.\n */\n \n-import org.elasticsearch.gradle.test.RestIntegTestTask\n+\n import org.elasticsearch.gradle.Version\n+import org.elasticsearch.gradle.test.RestIntegTestTask\n \n apply plugin: 'elasticsearch.standalone-test'\n \n@@ -29,6 +30,10 @@ task bwcTest {\n }\n \n for (Version version : wireCompatVersions) {\n+ if (project.cgroupsV2Enabled && version.isVersionBrokenIfCgroupsV2Enabled()) {\n+ continue\n+ }\n+\n String baseName = \"v${version}\"\n \n Task oldClusterTest = tasks.create(name: \"${baseName}#oldClusterTest\", type: RestIntegTestTask) {",
"filename": "qa/rolling-upgrade/build.gradle",
"status": "modified"
}
]
} |
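The companion check, `Version#isVersionBrokenIfCgroupsV2Enabled`, is just an inclusive range test on the version. A small standalone sketch of the same idea; the numeric version-id scheme used here is an assumption for illustration, while the 5.1.1 through 5.3.0 bounds come from the PR:

```java
public class CgroupV2BrokenVersionCheck {
    // Hypothetical numeric version id: major * 10000 + minor * 100 + patch,
    // e.g. 5.1.1 -> 50101 and 5.3.0 -> 50300.
    static int id(int major, int minor, int patch) {
        return major * 10_000 + minor * 100 + patch;
    }

    // Mirrors Version#isVersionBrokenIfCgroupsV2Enabled from the PR:
    // versions 5.1.1 through 5.3.0 inclusive fail to start when cgroup v2 is mounted.
    static boolean isBrokenIfCgroupsV2Enabled(int versionId) {
        return versionId >= id(5, 1, 1) && versionId <= id(5, 3, 0);
    }

    public static void main(String[] args) {
        System.out.println(isBrokenIfCgroupsV2Enabled(id(5, 0, 2))); // false
        System.out.println(isBrokenIfCgroupsV2Enabled(id(5, 2, 0))); // true
        System.out.println(isBrokenIfCgroupsV2Enabled(id(5, 3, 1))); // false
    }
}
```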
{
"body": "It looks like if you:\r\n1. Start 5.x\r\n2. Add a persistent cluster setting that is unsupported by 6.x\r\n3. Upgrade to 6.x\r\n4. Attempt to update another setting\r\n\r\nThen you get an error back about the archived setting not being a valid setting. You can clear the archived setting with `PUT _cluster/settings { \"persistent\": { \"archived.*\": null } }` but you *must* do this before updating any other settings. It feels like you should be able to deal with the archived settings at your leisure.\r\n\r\nI put together a test that reproduces this by adding [this](https://gist.github.com/nik9000/84320047e828c773ec381eb433d64573) to FullClusterRestartIT.",
"comments": [
{
"body": "We discussed this during Fix-it-Friday and agreed that we should not archive unknown and broken cluster settings. Instead, we should fail to recover the cluster state. The solution for users in an upgrade case would be to rollback to the previous version, address the settings that would be unknown or broken in the next major version, and then proceed with the upgrade.",
"created_at": "2018-01-05T14:39:11Z"
},
{
"body": "The solution does not seem to apply for transient settings. I'm getting acknowledgement from ES, but the invalid setting stays. (in my case `indices.store.throttle.type`)",
"created_at": "2018-01-29T11:43:07Z"
},
{
"body": "@otrosien how were you able to keep transient settings between versions? Did you do a rolling upgrade from 5.6 to 6.x?",
"created_at": "2018-01-29T19:45:24Z"
},
{
"body": "@otrosien 's teammate here. @mayya-sharipova Yes, we did a rolling upgrade of Elasticsearch. after the upgrade, the transient settings remained, but trying to either remove the unsupported setting or change any other setting in the transient set throws the error:\r\n```\r\ncurl -XPUT -H\"Content-Type: application/json\" -s localhost:9200/_cluster/settings -d '{\"transient\": { \"indices.*\":null } }'\r\n\r\n> {\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[1Mwia6T][172.31.164.55:9300][cluster:admin/settings/update]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"unknown setting [indices.store.throttle.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings\"},\"status\":400}\r\n```\r\n\r\nFor us the problem is not \"archival\" of bad settings, but the complete inability to edit transient settings now that they contain one unsupported setting.\r\n\r\nWe can update any persistent settings because those were empty before the upgrade, but for the settings that exist in our transient settings, the transient versions take precedence according to documentation: https://www.elastic.co/guide/en/elasticsearch/reference/6.1/cluster-update-settings.html#_precedence_of_settings\r\nso we cannot effectively change any of those settings now.\r\n\r\nWe would expect to have a bugfix release of Elasticsearch, which allows this cleanup without requiring a full cluster restart.\r\n\r\nAt this point, the only option we have is to create a new cluster in parallel, index to it, and change DNS settings. This is extremely expensive, because our cluster is large(ish), with 100s of data nodes. service disruption by way of a full-cluster restart is not an option for us.",
"created_at": "2018-01-30T05:57:47Z"
},
{
"body": "@adichad which exact version are you using? I'm asking because as far as I can tell from glancing at the code, https://github.com/elastic/elasticsearch/pull/27671 should allow you to remove that setting.",
"created_at": "2018-01-30T08:20:28Z"
},
{
"body": "@bleskes the masters are on 6.1.1, the data nodes still on 6.1.0. `indices.store.throttle.type` is still a cluster-wide setting, so from my understanding #27671 doesn't apply.",
"created_at": "2018-01-30T11:00:22Z"
},
{
"body": "@otrosien @adichad \r\n\r\n`indices.store.throttle.type` setting was deprecated in 6.0 [1] , so after the upgrade it should have `archived` prefix added to this setting. Did you try to remove the archived version of this setting:\r\n\r\n```\r\ncurl -XPUT -H \"Content-Type: application/json\" -s localhost:9200/_cluster/settings -d '{\"transient\": { \"archived.indices.*\":null } }'\r\n```\r\n\r\n[1]https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking_60_settings_changes.html#_store_throttling_settings",
"created_at": "2018-01-30T17:39:42Z"
},
{
"body": "@mayya-sharipova we tried all variations of removing that setting. Apparently it was not moved to `archived` when we upgraded. Is it somehow possible to trigger this?",
"created_at": "2018-01-31T09:46:42Z"
},
{
"body": "Having the same issue in #28524 \r\n\r\nWere unable to rollback, so a force reset solution would be nice. \r\n\r\nSince its a production cluster we also dont want to shutdown for this...",
"created_at": "2018-02-07T14:10:29Z"
},
{
"body": "If the official solution is what @jasontedor said, this should really make it to the [documentation on rolling upgrade procedure](https://www.elastic.co/guide/en/elasticsearch/reference/master/rolling-upgrades.html)",
"created_at": "2018-02-07T14:15:35Z"
},
{
"body": "This should not be the official solution for this.\r\n\r\nGetting hell lot of errors downgrading / rollbacking ending in:\r\n\r\n> \r\n> nested: IllegalStateException[index [products_37_es/mZ1tmbEdTaeNYSpCquAWGA] version not supported: 6.1.3 the node version is: 6.0.0]; ]\r\n> \r\n> org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];",
"created_at": "2018-02-07T14:36:51Z"
},
{
"body": "There is a misunderstanding here. This [comment](https://github.com/elastic/elasticsearch/issues/28026#issuecomment-355569742) that is being referred to as the \"official solution\" is not a solution. It is a proposal for how we should change Elasticsearch so that users can not end up in the situation that is causing so many problems here. It requires code changes to implement that solution and a new release carrying that solution.",
"created_at": "2018-02-07T14:55:08Z"
},
{
"body": "Thanks @jasontedor for the clarification.\r\nIs there a workaround for @scratchy who has a cluster with newly created indices, and who therefore cannot rollback?",
"created_at": "2018-02-07T15:00:09Z"
},
{
"body": "@faxm0dem The current workaround is to remove archived settings by `PUT _cluster/settings { \"persistent\": { \"archived.*\": null } } `. But it looks like deprecated settings have not been added `archived` prefix. We will discuss in our next meeting possible workarounds from this.",
"created_at": "2018-02-07T16:45:27Z"
},
{
"body": "If you have dedicated master nodes, we were able to workaround this by downgrading them to a previous version (5.6.1 in our case) and then removing the offending settings, then re-upgrading.",
"created_at": "2018-02-09T17:55:29Z"
},
{
"body": "Oh very cool thanks! @scratchy can you try this?",
"created_at": "2018-02-11T13:21:13Z"
},
{
"body": "I work with @waltrinehart. We were able to apply the downgrade master workaround for `transient` settings, but not for `permanent` settings. We cannot upgrade from 6.1.1 to 6.2.2 because of the stuck permanent settings. The only way forward that we see is to downgrade to 5.x and do a `full cluster restart` to remove the `permanent` setting, which is not really a viable option for us. In the current state we cannot modify cluster settings at all. This also implies that we cannot disable shard allocation before doing a rolling upgrade.\r\n\r\nThe only real solution that we see right now is a software patch allowing us to remove this setting and move forward. ",
"created_at": "2018-02-28T21:11:48Z"
},
{
"body": "We found that shutting down all of our master nodes simultaneously and starting them back up was sufficient to clear the `persistent` setting. \r\n\r\nThe cluster still required initializing all the shards even though the data nodes stayed up. This isn't possible for everyone though, so I think an alternative path without such disruption is still needed.\r\n\r\nFollow cluster recovery we saw the setting was properly archived and could be removed. Confirms that it is an issue that crops up during rolling upgrades.",
"created_at": "2018-03-06T17:05:17Z"
},
{
"body": "We integrated a change (#28888) that will automatically archive any unknown or invalid settings on any settings update. This prevents their presence for failing the request and once archived they can be deleted.",
"created_at": "2018-03-17T20:49:23Z"
},
{
"body": "@jasontedor do you know when this will be released?",
"created_at": "2018-03-25T10:04:53Z"
},
{
"body": "@dorony The change #28888 will be in the next 6.2 patch release (6.2.4) which is not yet released although we do not provide release dates. ",
"created_at": "2018-03-25T12:13:34Z"
},
{
"body": "I'm no expert, but I'm suffering from this bug/situation right now and, if you're looking for QA feedback: this has put our *production* deployment in a very precarious state.",
"created_at": "2018-05-04T02:16:47Z"
},
{
"body": "I am running ES 6.3.0 and I executed:\r\n\r\n`curl -H \"Content-Type: application/json\" -XPUT 'localhost:9200/_cluster/settings' -d '{ \"persistent\" : { \"archived.*\":null }}'`\r\n\r\nand restarted the full cluster. That did it for me.",
"created_at": "2018-11-23T14:59:30Z"
},
{
"body": "The situation described in the OP is still true today (e.g. for upgrades from snapshots built from `7.x` to `master`) but the other points raised in this thread seem to have been addressed by #28888.\r\n\r\nDo we still consider this a bug? We could say that if you upgrade your cluster without addressing all the deprecation warnings first then there is a risk that some things may not work for you. In this case it's `PUT _cluster/settings` that doesn't work, and it's [fixable](https://github.com/elastic/elasticsearch/issues/28026#issuecomment-441261706). If we let a cluster carry on without taking explicit action to remove these broken settings then I expect they'll never get removed. I'm raising this for discussion again.",
"created_at": "2019-06-10T13:30:32Z"
},
{
"body": "We discussed this today and agreed that we are happy with the behaviour as it stands, so this can be closed.",
"created_at": "2019-06-12T14:23:18Z"
},
{
"body": "Hey team, sorry to dig up an old issue but we just hit this during cloud-observability upgrade (from 6.8 to 7.8). Some of our clusters have setting\r\n```\r\nxpack.notification.slack.account.<account_name>.url\r\n```\r\nwhich is apparently not supported in 7.x and hence got `archived.*`. I wonder why there wouldn't be an additional check/action in 7.x upgrade assistant to warn about unsupported settings? Or even check and remove them if they have no effect. \r\n\r\nWhen upgrade succeeds, those settings leave cluster basically unusable (at least, on Elastic Cloud)",
"created_at": "2020-06-26T15:12:20Z"
},
{
"body": "@chingis-elastic that this was not caught ahead of the upgrade sounds like it might be a bug somewhere in the deprecation or upgrade assistance areas. Would you open a new issue for it to make sure that gets investigated? Closed issues like this don't normally see any further activity.",
"created_at": "2020-06-29T07:13:12Z"
}
],
"number": 28026,
"title": "Archived settings prevent updating other settings"
} | {
"body": "Currently unknown or invalid cluster settings get archived.\r\n\r\nFor a better user experience, we stop archving broken cluster settings.\r\nInstead, we will fail to recover the cluster state.\r\nThe solution for users in an upgrade case would be to rollback\r\nto the previous version, address the settings that would be unknown\r\nor invalid the next major version, and then proceed with the upgrade.\r\n\r\nCloses #28026",
"number": 28253,
"review_comments": [
{
"body": "Can you please mention the exact type that is thrown (`IllegalStateException`)?",
"created_at": "2018-09-21T06:10:51Z"
},
{
"body": "nit: We usually use the `x == false` idiom instead of `!x`.",
"created_at": "2018-09-21T06:11:44Z"
},
{
"body": "I'm not saying that this is wrong but it seems we don't use it (so far?) in our code base. For consistency I'd probably check for expected exceptions as we did before but maybe we want to use `ExpectedException` more broadly in the future?",
"created_at": "2018-09-21T06:16:30Z"
},
{
"body": "I think we should do a deeper inspection of the underlying cause and maybe even the error message (at least whether it matches a certain pattern)?",
"created_at": "2018-09-21T06:17:59Z"
},
{
"body": "Given that this error will occur during an upgrade which is a high stress scenario anyway and the mitigation is not entirely straightforward, I think it would be good if we could provide specific guidance how to address this problem? Otherwise, people would need to turn to the migration guide for 7.0 and check the changes to find out how to handle this?",
"created_at": "2018-09-21T06:20:46Z"
}
],
"title": "Discontinue archiving broken cluster settings"
} | {
"commits": [
{
"message": "Discontinue archiving broken cluster settings\n\nCurrently unknown or invalid cluster settings get archived.\n\nFor a better user experience, we stop archving broken cluster settings.\nInstead, we will fail to recover the cluster state.\nThe solution for users in an upgrade case would be to rollback\nto the previous version, address the settings that would be unknown\nor invalid the next major version, and then proceed with the upgrade.\n\nCloses #28026"
}
],
"files": [
{
"diff": "@@ -14,3 +14,9 @@ primary shards of the opened index to be allocated.\n \n ==== Shard preferences `_primary`, `_primary_first`, `_replica`, and `_replica_first` are removed\n These shard preferences are removed in favour of the `_prefer_nodes` and `_only_nodes` preferences.\n+\n+==== Discontinue archiving broken cluster settings\n+We will no longer archive unknown or invalid cluster settings (prepending \"archived.\" to a broken setting's name).\n+Instead, we will fail to recover a cluster state with broken cluster settings.\n+The solution for users in an upgrade case would be to rollback to the previous version,\n+address the settings that would be unknown or invalid in the next major version, and then proceed with the upgrade.",
"filename": "docs/reference/migration/migrate_7_0/cluster.asciidoc",
"status": "modified"
},
{
"diff": "@@ -681,6 +681,44 @@ public Settings archiveUnknownOrInvalidSettings(\n }\n }\n \n+\n+ /**\n+ * Checks invalid or unknown settings. Any setting that is not recognized or fails validation\n+ * will be processed by consumers.\n+ * An exception will be thrown if any invalid or unknown setting is found.\n+ *\n+ * @param settings the {@link Settings} instance to scan for unknown or invalid settings\n+ * @param unknownConsumer callback on unknown settings (consumer receives unknown key and its\n+ * associated value)\n+ * @param invalidConsumer callback on invalid settings (consumer receives invalid key, its\n+ * associated value and an exception)\n+ */\n+ public void checkUnknownOrInvalidSettings(\n+ final Settings settings,\n+ final Consumer<Map.Entry<String, String>> unknownConsumer,\n+ final BiConsumer<Map.Entry<String, String>, IllegalArgumentException> invalidConsumer) {\n+ List<String> failedKeys = new ArrayList<>();\n+ for (String key : settings.keySet()) {\n+ try {\n+ Setting<?> setting = get(key);\n+ if (setting != null) {\n+ setting.get(settings);\n+ } else {\n+ if (!isPrivateSetting(key)) {\n+ failedKeys.add(key);\n+ unknownConsumer.accept(new Entry(key, settings));\n+ }\n+ }\n+ } catch (IllegalArgumentException ex) {\n+ failedKeys.add(key);\n+ invalidConsumer.accept(new Entry(key, settings), ex);\n+ }\n+ }\n+ if (failedKeys.size() > 0) {\n+ throw new IllegalStateException(\"Invalid or unknown settings: \" + String.join(\", \", failedKeys));\n+ }\n+ }\n+\n private static final class Entry implements Map.Entry<String, String> {\n \n private final String key;",
"filename": "server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
{
"diff": "@@ -137,27 +137,25 @@ public void performStateRecovery(final GatewayStateRecoveredListener listener) t\n }\n }\n final ClusterSettings clusterSettings = clusterService.getClusterSettings();\n- metaDataBuilder.persistentSettings(\n- clusterSettings.archiveUnknownOrInvalidSettings(\n- metaDataBuilder.persistentSettings(),\n- e -> logUnknownSetting(\"persistent\", e),\n- (e, ex) -> logInvalidSetting(\"persistent\", e, ex)));\n- metaDataBuilder.transientSettings(\n- clusterSettings.archiveUnknownOrInvalidSettings(\n- metaDataBuilder.transientSettings(),\n- e -> logUnknownSetting(\"transient\", e),\n- (e, ex) -> logInvalidSetting(\"transient\", e, ex)));\n+ clusterSettings.checkUnknownOrInvalidSettings(\n+ metaDataBuilder.persistentSettings(),\n+ e -> logUnknownSetting(\"persistent\", e),\n+ (e, ex) -> logInvalidSetting(\"persistent\", e, ex));\n+ clusterSettings.checkUnknownOrInvalidSettings(\n+ metaDataBuilder.transientSettings(),\n+ e -> logUnknownSetting(\"transient\", e),\n+ (e, ex) -> logInvalidSetting(\"transient\", e, ex));\n ClusterState.Builder builder = clusterService.newClusterStateBuilder();\n builder.metaData(metaDataBuilder);\n listener.onSuccess(builder.build());\n }\n \n private void logUnknownSetting(String settingType, Map.Entry<String, String> e) {\n- logger.warn(\"ignoring unknown {} setting: [{}] with value [{}]; archiving\", settingType, e.getKey(), e.getValue());\n+ logger.warn(\"unknown {} setting: [{}] with value [{}]\", settingType, e.getKey(), e.getValue());\n }\n \n private void logInvalidSetting(String settingType, Map.Entry<String, String> e, IllegalArgumentException ex) {\n- logger.warn(() -> new ParameterizedMessage(\"ignoring invalid {} setting: [{}] with value [{}]; archiving\",\n+ logger.warn(() -> new ParameterizedMessage(\"invalid {} setting: [{}] with value [{}]\",\n settingType, e.getKey(), e.getValue()), ex);\n }\n ",
"filename": "server/src/main/java/org/elasticsearch/gateway/Gateway.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,8 @@\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.elasticsearch.test.InternalTestCluster.RestartCallback;\n+import org.junit.Rule;\n+import org.junit.rules.ExpectedException;\n \n import java.io.IOException;\n import java.util.List;\n@@ -64,6 +66,8 @@\n @ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n public class GatewayIndexStateIT extends ESIntegTestCase {\n \n+ @Rule\n+ public ExpectedException expectedException = ExpectedException.none();\n private final Logger logger = Loggers.getLogger(GatewayIndexStateIT.class);\n \n public void testMappingMetaDataParsed() throws Exception {\n@@ -479,7 +483,7 @@ public void testRecoverMissingAnalyzer() throws Exception {\n assertThat(ex.getCause().getMessage(), containsString(\"analyzer [test] not found for field [field1]\"));\n }\n \n- public void testArchiveBrokenClusterSettings() throws Exception {\n+ public void testFailBrokenClusterSettings() throws Exception {\n logger.info(\"--> starting one node\");\n internalCluster().startNode();\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", \"value1\").setRefreshPolicy(IMMEDIATE).get();\n@@ -502,20 +506,9 @@ public void testArchiveBrokenClusterSettings() throws Exception {\n .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey(), \"broken\").build()).build();\n MetaData.FORMAT.write(brokenMeta, nodeEnv.nodeDataPaths());\n }\n- internalCluster().fullRestart();\n- ensureYellow(\"test\"); // wait for state recovery\n- state = client().admin().cluster().prepareState().get().getState();\n- assertEquals(\"true\", state.metaData().persistentSettings().get(\"archived.this.is.unknown\"));\n- assertEquals(\"broken\", state.metaData().persistentSettings().get(\"archived.\"\n- + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()));\n \n- // delete these settings\n- client().admin().cluster().prepareUpdateSettings().setPersistentSettings(Settings.builder().putNull(\"archived.*\")).get();\n-\n- state = client().admin().cluster().prepareState().get().getState();\n- assertNull(state.metaData().persistentSettings().get(\"archived.this.is.unknown\"));\n- assertNull(state.metaData().persistentSettings().get(\"archived.\"\n- + ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES_SETTING.getKey()));\n- assertHitCount(client().prepareSearch().setQuery(matchAllQuery()).get(), 1L);\n+ expectedException.expect(ElasticsearchException.class);\n+ internalCluster().fullRestart();\n }\n+\n }",
"filename": "server/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java",
"status": "modified"
}
]
} |
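The comment thread above keeps coming back to the `PUT _cluster/settings` workaround for clearing archived settings once they have actually been given the `archived.` prefix. As a minimal illustration only (not part of the PR, and assuming a locally reachable cluster on `localhost:9200`), the sketch below sends that exact request from plain Java; the endpoint and request body are taken from the comments, everything else is illustrative.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of the workaround discussed above: clear archived cluster settings
// by sending PUT _cluster/settings with "archived.*" set to null.
public class ClearArchivedSettings {
    public static void main(String[] args) throws Exception {
        String body = "{ \"persistent\": { \"archived.*\": null } }";
        URL url = new URL("http://localhost:9200/_cluster/settings"); // assumed local cluster
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```

Note that, per the discussion above, this only helps once the broken settings have been archived (which #28888 ensures on any settings update); it does nothing for settings that were never prefixed.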
{
"body": "Fix for #25534 \r\n\r\n@colings86 Please take a look.\r\n",
"comments": [
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-07-21T08:00:56Z"
},
{
"body": "jenkins test this",
"created_at": "2017-07-21T08:23:47Z"
},
{
"body": "@fred84 the build failed but the failure doesn't seem to be related to your change. It also doesn't reproduce for me locally so I'm going to kick off another build and see if we get the error again.",
"created_at": "2017-07-21T08:58:44Z"
},
{
"body": "jenkins retest this",
"created_at": "2017-07-21T08:58:53Z"
},
{
"body": "@colings86 I updated PR. There some code duplication left, I'm not sure where to put Triple class used in 2 test cases.",
"created_at": "2017-07-23T17:30:25Z"
},
{
"body": "@colings86 PR is updated. I also found that validation for min/max values in short/integer/long works not as intended. I will provide more details a bit later).",
"created_at": "2017-07-27T12:46:38Z"
},
{
"body": "@colings86 I moved proposed changes for byte/short/int/long validation to separate PR: https://github.com/fred84/elasticsearch/pull/2/",
"created_at": "2017-07-28T08:26:17Z"
},
{
"body": "@fred84 I'm a little confused, are you saying that https://github.com/fred84/elasticsearch/pull/2 replaces this PR? IF so, could you open that PR against this repo instead of against your fork?",
"created_at": "2017-08-01T12:11:00Z"
},
{
"body": "@colings86 I think we should solve half_float/float/double validation in this PR and then I will create separate PR for byte/short/int/double. ",
"created_at": "2017-08-01T14:06:39Z"
},
{
"body": "jenkins please test this",
"created_at": "2017-08-03T09:15:40Z"
},
{
"body": "jenkins please test this",
"created_at": "2017-08-04T08:01:10Z"
},
{
"body": "@jpountz I updated PR. Please take a look.",
"created_at": "2017-08-08T07:00:38Z"
},
{
"body": "jenkins please test this",
"created_at": "2017-08-08T08:57:58Z"
},
{
"body": "@fred84 thanks for the PR, its now merged and backported to 6.x and 6.0.",
"created_at": "2017-08-09T11:47:18Z"
},
{
"body": "@colings86 @jpountz Thanks for review!",
"created_at": "2017-08-09T12:33:22Z"
}
],
"number": 25826,
"title": "Reject out of range numbers for float, double and half_float"
} | {
"body": "Since #25826 we reject infinite values for float, double and half_float\r\ndatatypes. This change adds this restriction to the documentation for the\r\nsupported datatypes.\r\n\r\nCloses #27653",
"number": 28240,
"review_comments": [],
"title": "[Docs] Clarify numeric datatype ranges"
} | {
"commits": [
{
"message": "[Docs] Clarify numeric datatype ranges\n\nSince #25826 we reject infinite values for float, double and half_float\ndatatypes. This change adds this restriction to the documentation for the\nsupported datatypes.\n\nCloses #27653"
}
],
"files": [
{
"diff": "@@ -8,9 +8,9 @@ The following numeric types are supported:\n `integer`:: A signed 32-bit integer with a minimum value of +-2^31^+ and a maximum value of +2^31^-1+.\n `short`:: A signed 16-bit integer with a minimum value of +-32,768+ and a maximum value of +32,767+.\n `byte`:: A signed 8-bit integer with a minimum value of +-128+ and a maximum value of +127+.\n-`double`:: A double-precision 64-bit IEEE 754 floating point number.\n-`float`:: A single-precision 32-bit IEEE 754 floating point number.\n-`half_float`:: A half-precision 16-bit IEEE 754 floating point number.\n+`double`:: A double-precision 64-bit IEEE 754 floating point number, restricted to finite values.\n+`float`:: A single-precision 32-bit IEEE 754 floating point number, restricted to finite values.\n+`half_float`:: A half-precision 16-bit IEEE 754 floating point number, restricted to finite values.\n `scaled_float`:: A floating point number that is backed by a `long`, scaled by a fixed `double` scaling factor.\n \n Below is an example of configuring a mapping with numeric fields:",
"filename": "docs/reference/mapping/types/numeric.asciidoc",
"status": "modified"
}
]
} |
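The documentation change above restricts `double`, `float` and `half_float` to finite values, following the validation added in #25826. Below is a minimal sketch of what that restriction means in practice, using only standard Java; the class and method names here are illustrative and are not taken from the Elasticsearch codebase.

```java
// Illustrative sketch (not ES source): the documented restriction means that
// non-finite values such as Infinity and NaN are rejected for floating point fields.
public class FiniteValueCheck {
    static double parseFiniteDouble(String value) {
        double d = Double.parseDouble(value);
        if (Double.isFinite(d) == false) {   // rejects NaN, +Infinity and -Infinity
            throw new IllegalArgumentException("[" + value + "] is out of range: only finite values are accepted");
        }
        return d;
    }

    public static void main(String[] args) {
        System.out.println(parseFiniteDouble("3.14"));   // accepted
        parseFiniteDouble("Infinity");                    // throws IllegalArgumentException
    }
}
```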
{
"body": "Executing an atomic alias operations on the **same index** and **same alias** returns inconsistent results:\r\n\r\n```\r\n# 1) Put an index\r\nPUT some-concrete-index\r\n\r\n# 2) no alias exists yet\r\nGET some-concrete-index/_alias\r\n\r\n# 3) add an alias, then remove it\r\nPOST /_aliases\r\n{\r\n \"actions\": [\r\n {\r\n \"add\": {\r\n \"index\": \"some-concrete-index\",\r\n \"alias\": \"oci-cmdb_service_members\"\r\n }\r\n },\r\n {\r\n \"remove\": {\r\n \"index\": \"some-concrete-index\",\r\n \"alias\": \"oci-cmdb_service_members\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\n# 4) Order shouldn't matter, and it does not seem to, as the alias now exists\r\nGET some-concrete-index/_alias\r\n\r\n#5) Execute the same operation again:\r\nPOST /_aliases\r\n{\r\n \"actions\": [\r\n {\r\n \"add\": {\r\n \"index\": \"some-concrete-index\",\r\n \"alias\": \"oci-cmdb_service_members\"\r\n }\r\n },\r\n {\r\n \"remove\": {\r\n \"index\": \"some-concrete-index\",\r\n \"alias\": \"oci-cmdb_service_members\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\n#6) The alias is removed now\r\nGET some-concrete-index/_alias\r\n```\r\n\r\nTo sum up:\r\nThe same request will produce different result:\r\n1) If the alias doesn’t exist before the request, it will be created \r\n2) If the alias doest exists, it will be removed\r\n\r\n",
"comments": [
{
"body": "I did some digging and this has to do with the fact that we treat aliases in the remove command as wild card expressions and [resolve them in advance](https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java#L142) against the cluster state before we run the command. This means that if the alias was not there when we started, the remove command will never be executed when the cluster state update. I think this is trappy as there is no correlation between the cluster state that this pre filtering is run on and the cluster state which actually serves as base for the commands execution. On top of it it doesn't account for relationship between commands. I think we should not prefilter and resolve wild cards when we execute the commands on the cluster state thread.",
"created_at": "2017-12-07T17:59:18Z"
},
{
"body": "I added the [discuss] label because I would like to discuss this in the context of eagerness to validate. This issue comes up with regards to #28231, and #30195. This also influences whether certain data-structures that are useful for early validation have value in being created as metadata is being changed and built by alias actions, rather than being re-created and validated all at once after all actions (whether legal or not, depending on the order) when the final MetaData is being built and validated (re: #29575).\r\n\r\n\r\nto highlight some examples that all behave differently:\r\n\r\n```\r\nPUT foo\r\n\r\n# add alias, then remove alias\r\n# 1st call: alias created. 2nd call: alias removed\r\nPOST _aliases\r\n{\r\n \"actions\": [\r\n { \"add\": { \"index\": \"foo\", \"alias\": \"logs\" } },\r\n { \"remove\": {\"index\": \"foo\", \"alias\": \"logs\" } }\r\n ]\r\n}\r\n\r\n# remove alias, then add alias\r\n# after all calls: alias created\r\nPOST _aliases\r\n{\r\n \"actions\": [\r\n { \"remove\": {\"index\": \"foo\", \"alias\": \"logs\" } },\r\n { \"add\": { \"index\": \"foo\", \"alias\": \"logs\" } }\r\n ]\r\n}\r\n\r\n# remove_index, then add alias\r\n# after all calls: exception thrown when trying to add alias, index remains.\r\nPOST _aliases\r\n{\r\n \"actions\": [\r\n { \"remove_index\": {\"index\": \"foo\"} },\r\n { \"add\": { \"index\": \"foo\", \"alias\": \"logs\" } }\r\n ]\r\n}\r\n\r\n# add alias, twice, updated\r\n# after all calls: last alias metadata definition wins\r\nPOST _aliases\r\n{\r\n \"actions\": [\r\n { \"add\": { \"index\": \"foo\",\"alias\": \"logs\", \"filter\": { \"exists\": { \"field\": \"FIELD_NAME\" } }}},\r\n { \"add\": { \"index\": \"foo\",\"alias\": \"logs\" }}\r\n ]\r\n}\r\n```",
"created_at": "2018-04-27T18:02:34Z"
},
{
"body": "@bleskes what would you like to do on this issue?",
"created_at": "2018-07-27T13:48:50Z"
},
{
"body": "@tomcallahan we discussed it in the core infra sync a while ago and agree that the actions should be applied one by one, modifying things as they go, based on the output of the previous command. If I recall correctly, the conversation went with validating correctness on every step. That said, is write index validation ended being done once in the end, so we might need to revisit it.\r\n\r\nBottom line - it's a valid issue and someone needs to pick it up. I expect more work here as the alias code needs some cleanup IMO. I believe that is also what made @mayya-sharipova fight an up hill battle with her PR. ",
"created_at": "2018-07-31T14:11:35Z"
},
{
"body": "@bleskes To confirm, you were saying that we need to take another approach for this. So, I am going to close my previous PR on this, and somebody else can work on it.",
"created_at": "2018-07-31T18:25:12Z"
},
{
"body": "@mayya-sharipova I think your PR is an improvement but indeed, this should be picked up in a more structural way. I'm OK with you just closing it, if you're not going to spend more time in that area (and otherwise we need to finish that discussion we started)",
"created_at": "2018-08-05T14:12:23Z"
},
{
"body": "Does that mean that renaming an alias is not working properly? \r\n\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/6.2/indices-aliases.html\r\n```\r\nPOST /_aliases\r\n{\r\n \"actions\" : [\r\n { \"remove\" : { \"index\" : \"test1\", \"alias\" : \"alias1\" } },\r\n { \"add\" : { \"index\" : \"test2\", \"alias\" : \"alias1\" } }\r\n ]\r\n}\r\n```",
"created_at": "2019-09-19T18:49:43Z"
},
{
"body": "@ddreonomy The renaming of alias should work well. If you experience any problem, please create a separate issue.\r\nThis PR is about adding/removing alias for the same index.",
"created_at": "2019-09-19T18:58:36Z"
}
],
"number": 27689,
"title": "Order of atomic alias operations seems inconsistent"
} | {
"body": "Currently aliases' names are resolved in advance against the cluster state\r\nat the point of time before executing an alias update command command.\r\nThis means that if an alias was not there when we started this command,\r\nbut created as a part of this command, the subsequent remove operation within\r\nthe same command on the same alias will not be able to find this alias,\r\nand will not be executed.\r\n\r\nThis is not correct as there is no correlation between the cluster state\r\nthat the alias resolution is run on and the cluster state which actually serves\r\nas base for the commands execution.\r\nOn top of this, the current situation doesn't account\r\nfor relationship between commands.\r\n\r\nThis commit postpones aliases' wild card resolution\r\nuntil we execute the commands on the cluster state thread,\r\nso the resolution is run on the cluster state\r\nwe are modifying.\r\n\r\nCloses #27689",
"number": 28231,
"review_comments": [],
"title": "Postpone aliases resolution until execution of alias update command"
} | {
"commits": [
{
"message": "Postpone aliases resolution until execution of alias update command\n\nCurrently aliases' names are resolved in advance against the cluster state\nat the point of time before executing an alias update command command.\nThis means that if an alias was not there when we started this command,\nbut created as a part of this command, the subsequent remove operation within\nthe same command on the same alias will not be able to find this alias,\nand will not be executed.\n\nThis is not correct as there is no correlation between the cluster state\nthat the alias resolution is run on and the cluster state which actually serves\nas base for the commands execution.\nOn top of this, the current situation doesn't account\nfor relationship between commands.\n\nThis commit postpones aliases' wild card resolution\nuntil we execute the commands on the cluster state thread,\nso the resolution is run on the cluster state\nwe are modifying.\n\nCloses #27689"
}
],
"files": [
{
"diff": "@@ -0,0 +1,41 @@\n+---\n+\"Remove operation should be able to consistently see an alias created in the same request\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: since 7.0 aliases are resolved against the cluster state we are modifying, and not in advance\n+\n+ - do:\n+ indices.create:\n+ index: test_index\n+\n+ - do:\n+ indices.update_aliases:\n+ body:\n+ actions:\n+ - add:\n+ index: test_index\n+ alias: test_alias\n+ - remove:\n+ index: test_index\n+ alias: test_alias\n+\n+ - do:\n+ indices.exists_alias:\n+ name: test_alias\n+ - is_false: ''\n+\n+ - do:\n+ indices.update_aliases:\n+ body:\n+ actions:\n+ - add:\n+ index: test_index\n+ alias: test_alias\n+ - remove:\n+ index: test_index\n+ alias: test_alias\n+\n+ - do:\n+ indices.exists_alias:\n+ name: test_alias\n+ - is_false: ''",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.update_aliases/40_postpone_aliases_resolution_to_execution.yml",
"status": "added"
},
{
"diff": "@@ -99,12 +99,12 @@ protected void masterOperation(final IndicesAliasesRequest request, final Cluste\n for (String index : concreteIndices) {\n switch (action.actionType()) {\n case ADD:\n- for (String alias : concreteAliases(action, state.metaData(), index)) {\n+ for (String alias : action.aliases()) {\n finalActions.add(new AliasAction.Add(index, alias, action.filter(), action.indexRouting(), action.searchRouting()));\n }\n break;\n case REMOVE:\n- for (String alias : concreteAliases(action, state.metaData(), index)) {\n+ for (String alias : action.aliases()) {\n finalActions.add(new AliasAction.Remove(index, alias));\n }\n break;\n@@ -116,9 +116,6 @@ protected void masterOperation(final IndicesAliasesRequest request, final Cluste\n }\n }\n }\n- if (finalActions.isEmpty() && false == actions.isEmpty()) {\n- throw new AliasesNotFoundException(aliases.toArray(new String[aliases.size()]));\n- }\n request.aliasActions().clear();\n IndicesAliasesClusterStateUpdateRequest updateRequest = new IndicesAliasesClusterStateUpdateRequest(unmodifiableList(finalActions))\n .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout());\n@@ -136,22 +133,4 @@ public void onFailure(Exception t) {\n }\n });\n }\n-\n- private static String[] concreteAliases(AliasActions action, MetaData metaData, String concreteIndex) {\n- if (action.expandAliasesWildcards()) {\n- //for DELETE we expand the aliases\n- String[] indexAsArray = {concreteIndex};\n- ImmutableOpenMap<String, List<AliasMetaData>> aliasMetaData = metaData.findAliases(action.aliases(), indexAsArray);\n- List<String> finalAliases = new ArrayList<>();\n- for (ObjectCursor<List<AliasMetaData>> curAliases : aliasMetaData.values()) {\n- for (AliasMetaData aliasMeta: curAliases.value) {\n- finalAliases.add(aliasMeta.alias());\n- }\n- }\n- return finalAliases.toArray(new String[finalAliases.size()]);\n- } else {\n- //for ADD and REMOVE_INDEX we just return the current aliases\n- return action.aliases();\n- }\n- }\n }",
"filename": "server/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,9 @@\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException;\n+\n+import java.util.List;\n \n /**\n * Individual operation to perform on the cluster state as part of an {@link IndicesAliasesRequest}.\n@@ -51,14 +54,16 @@ public String getIndex() {\n \n /**\n * Apply the action.\n- * \n+ *\n * @param aliasValidator call to validate a new alias before adding it to the builder\n * @param metadata metadata builder for the changes made by all actions as part of this request\n * @param index metadata for the index being changed\n * @return did this action make any changes?\n */\n abstract boolean apply(NewAliasValidator aliasValidator, MetaData.Builder metadata, IndexMetaData index);\n \n+ abstract String getAlias();\n+\n /**\n * Validate a new alias.\n */\n@@ -99,6 +104,7 @@ public Add(String index, String alias, @Nullable String filter, @Nullable String\n /**\n * Alias to add to the index.\n */\n+ @Override\n public String getAlias() {\n return alias;\n }\n@@ -144,6 +150,7 @@ public Remove(String index, String alias) {\n /**\n * Alias to remove from the index.\n */\n+ @Override\n public String getAlias() {\n return alias;\n }\n@@ -155,11 +162,19 @@ boolean removeIndex() {\n \n @Override\n boolean apply(NewAliasValidator aliasValidator, MetaData.Builder metadata, IndexMetaData index) {\n- if (false == index.getAliases().containsKey(alias)) {\n- return false;\n+ // As in the remove action an alias may contain wildcards, we first need to expand alias wildcards\n+ List<String> concreteAliases = metadata.findAliases(alias, getIndex());\n+ if (concreteAliases.isEmpty()) {\n+ throw new AliasesNotFoundException(alias);\n }\n- metadata.put(IndexMetaData.builder(index).removeAlias(alias));\n- return true;\n+ Boolean changed = false;\n+ for (String concreteAlias : concreteAliases){\n+ if (index.getAliases().containsKey(concreteAlias)) {\n+ metadata.put(IndexMetaData.builder(index).removeAlias(concreteAlias));\n+ changed = true;\n+ }\n+ }\n+ return changed;\n }\n }\n \n@@ -181,5 +196,10 @@ boolean removeIndex() {\n boolean apply(NewAliasValidator aliasValidator, MetaData.Builder metadata, IndexMetaData index) {\n throw new UnsupportedOperationException();\n }\n+\n+ @Override\n+ public String getAlias() {\n+ return null;\n+ }\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/AliasAction.java",
"status": "modified"
},
{
"diff": "@@ -1081,6 +1081,36 @@ public MetaData build() {\n customs.build(), allIndicesArray, allOpenIndicesArray, allClosedIndicesArray, aliasAndIndexLookup);\n }\n \n+ /**\n+ * Finds the specific index aliases that match with the specified alias directly or partially via wildcards and\n+ * that point to the specified concrete index or match partially with the index via wildcards.\n+ *\n+ * @param alias The names of the index alias to find, could be a pattern to resolve\n+ * @param concreteIndex The concrete index, the index aliases must point to in order to be returned\n+ * @return a list of concrete aliases corresponding to the given alias and concrete index\n+ */\n+ public List<String> findAliases(final String alias, String concreteIndex) {\n+ List<String> concreteAliases = new ArrayList<>();\n+ if (alias.length() == 0) {\n+ return concreteAliases;\n+ }\n+ if (!indices.keys().contains(concreteIndex)) {\n+ return concreteAliases;\n+ }\n+ boolean matchAllAliases = (alias.equals(ALL)) ? true : false;\n+ IndexMetaData indexMetaData = indices.get(concreteIndex);\n+ for (ObjectCursor<AliasMetaData> cursor : indexMetaData.getAliases().values()) {\n+ final String concreteAlias = cursor.value.alias();\n+ if (matchAllAliases || Regex.simpleMatch(alias, concreteAlias)) {\n+ concreteAliases.add(concreteAlias);\n+ }\n+ }\n+ if (concreteAliases.size() > 1) {\n+ Collections.sort(concreteAliases);\n+ }\n+ return concreteAliases;\n+ }\n+\n public static String toXContent(MetaData metaData) throws IOException {\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n builder.startObject();",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -98,6 +99,8 @@ ClusterState innerExecute(ClusterState currentState, Iterable<AliasAction> actio\n Map<String, IndexService> indices = new HashMap<>();\n try {\n boolean changed = false;\n+ boolean executed = false; // if at least a single action is executed\n+ List<String> notFoundAliases = new ArrayList<>();\n // Gather all the indexes that must be removed first so:\n // 1. We don't cause error when attempting to replace an index with a alias of the same name.\n // 2. We don't allow removal of aliases from indexes that we're just going to delete anyway. That'd be silly.\n@@ -110,6 +113,7 @@ ClusterState innerExecute(ClusterState currentState, Iterable<AliasAction> actio\n }\n indicesToDelete.add(index.getIndex());\n changed = true;\n+ executed = true;\n }\n }\n // Remove the indexes if there are any to remove\n@@ -154,7 +158,19 @@ ClusterState innerExecute(ClusterState currentState, Iterable<AliasAction> actio\n xContentRegistry);\n }\n };\n- changed |= action.apply(newAliasValidator, metadata, index);\n+ try {\n+ changed |= action.apply(newAliasValidator, metadata, index);\n+ executed = true;\n+ } catch (AliasesNotFoundException e) {\n+ notFoundAliases.add(action.getAlias());\n+ executed |= false;\n+ }\n+ }\n+\n+ // if no action has been executed,\n+ // it means that a user supplied a nonexisting alias\n+ if (executed == false) {\n+ throw new AliasesNotFoundException(notFoundAliases.toArray(new String[notFoundAliases.size()]));\n }\n \n if (changed) {",
"filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java",
"status": "modified"
}
]
} |
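The fix above applies each alias action against the cluster state as it is being modified, rather than against a snapshot taken before the request. Below is a standalone sketch of that idea in plain Java; it is not Elasticsearch code, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Standalone sketch (not ES source) of the behaviour the fix aims for: each alias
// action is applied against the state produced by the previous action, so an add
// followed by a remove of the same alias in one request consistently ends with no alias.
public class AliasActionsSketch {
    // index name -> aliases currently attached to it
    private final Map<String, Set<String>> aliases = new HashMap<>();

    void add(String index, String alias) {
        aliases.computeIfAbsent(index, k -> new HashSet<>()).add(alias);
    }

    void remove(String index, String alias) {
        // resolved against the in-progress state, so it sees aliases added earlier in the same batch
        aliases.getOrDefault(index, new HashSet<>()).remove(alias);
    }

    public static void main(String[] args) {
        AliasActionsSketch state = new AliasActionsSketch();
        state.add("test_index", "test_alias");
        state.remove("test_index", "test_alias");
        // the same batch can be replayed any number of times with the same outcome
        System.out.println(state.aliases);  // {test_index=[]}
    }
}
```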
{
"body": "**Elasticsearch version**: `Version: 6.1.1, Build: bd92e7f/2017-12-17T20:23:25.338Z, JVM: 1.8.0_144`\r\n\r\n\r\n**Plugins installed**: `[analysis-icu, analysis-phonetic]`\r\n\r\n\r\n**JVM version**: `java version \"1.8.0_144\"`\r\n\r\n**OS version**: `Darwin Kernel Version 17.3.0`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nDaitch-Mokotoff analyzer returns only one token when it should return multiple.\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\n...\r\n \"analyzer_daitch_mokotoff\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"lowercase\",\r\n \"filter\": [\r\n \"daitch_mokotoff\"\r\n ]\r\n }\r\n```\r\n\r\n```\r\ncurl -XGET 'http://localhost:9200/indexname/_analyze?pretty' -H 'Content-Type: application/json' -d'{\r\n \"analyzer\": \"analyzer_daitch_mokotoff\",\r\n \"text\": \"CHAUPTMAN\"\r\n}'\r\n```\r\n\r\nThis should return 573660 (ch sounding like tch) and 473660 (ch sounding like kh) but instead only returns 473660.\r\n\r\n```\r\n{\r\n \"tokens\" : [\r\n {\r\n \"token\" : \"473660\",\r\n \"start_offset\" : 0,\r\n \"end_offset\" : 9,\r\n \"type\" : \"word\",\r\n \"position\" : 0\r\n }\r\n ]\r\n}\r\n```\r\n\r\nSee Daitch-Mokotoff soundex spec here: http://www.avotaynu.com/soundex.htm\r\n\r\nUntil this is fixed, the D-M soundex feature in the phonetic plugin is not usable.",
"comments": [
{
"body": "Thanks @bkazez , we use the encoder without branching in the phonetic filter which is why you see only the first token. We should use the dedicated Lucene filter `DaitchMokotoffSoundexFilter` instead. I'll work on a fix.",
"created_at": "2018-01-15T15:45:32Z"
}
],
"number": 28211,
"title": "Daitch-Mokotoff soundex gives incorrect results when it should return multiple encodings"
} | {
"body": "This commit changes the phonetic filter factory to use a DaitchMokotoffSoundexFilter\r\ninstead of a PhoneticFilter with a daitch_mokotoff encoder when daitch_mokotoff is selected.\r\nThe latter does not hanlde branching when computing the soundex and fails to encode multiple\r\nvariations when possible.\r\n\r\nCloses #28211",
"number": 28225,
"review_comments": [],
"title": "Fix daitch_mokotoff phonetic filter to use the dedicated Lucene filter"
} | {
"commits": [
{
"message": "Fix daitch_mokotoff phonetic filter to use the dedicated Lucene filter\n\nThis commit changes the phonetic filter factory to use a DaitchMokotoffSoundexFilter\ninstead of a PhoneticFilter with a daitch_mokotoff encoder when daitch_mokotoff is selected.\nThe latter does not hanlde branching when computing the soundex and fails to encode multiple\nvariations when possible.\n\nCloses #28211"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@\n import org.apache.commons.codec.language.bm.RuleType;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.phonetic.BeiderMorseFilter;\n+import org.apache.lucene.analysis.phonetic.DaitchMokotoffSoundexFilter;\n import org.apache.lucene.analysis.phonetic.DoubleMetaphoneFilter;\n import org.apache.lucene.analysis.phonetic.PhoneticFilter;\n import org.elasticsearch.common.settings.Settings;\n@@ -53,13 +54,15 @@ public class PhoneticTokenFilterFactory extends AbstractTokenFilterFactory {\n private List<String> languageset;\n private NameType nametype;\n private RuleType ruletype;\n+ private boolean isDaitchMokotoff;\n \n public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {\n super(indexSettings, name, settings);\n this.languageset = null;\n this.nametype = null;\n this.ruletype = null;\n this.maxcodelength = 0;\n+ this.isDaitchMokotoff = false;\n this.replace = settings.getAsBoolean(\"replace\", true);\n // weird, encoder is null at last step in SimplePhoneticAnalysisTests, so we set it to metaphone as default\n String encodername = settings.get(\"encoder\", \"metaphone\");\n@@ -106,7 +109,8 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir\n } else if (\"nysiis\".equalsIgnoreCase(encodername)) {\n this.encoder = new Nysiis();\n } else if (\"daitch_mokotoff\".equalsIgnoreCase(encodername)) {\n- this.encoder = new DaitchMokotoffSoundex();\n+ this.encoder = null;\n+ this.isDaitchMokotoff = true;\n } else {\n throw new IllegalArgumentException(\"unknown encoder [\" + encodername + \"] for phonetic token filter\");\n }\n@@ -115,6 +119,9 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir\n @Override\n public TokenStream create(TokenStream tokenStream) {\n if (encoder == null) {\n+ if (isDaitchMokotoff) {\n+ return new DaitchMokotoffSoundexFilter(tokenStream, !replace);\n+ }\n if (ruletype != null && nametype != null) {\n LanguageSet langset = null;\n if (languageset != null && languageset.size() > 0) {",
"filename": "plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.analysis.BaseTokenStreamTestCase;\n import org.apache.lucene.analysis.Tokenizer;\n import org.apache.lucene.analysis.core.WhitespaceTokenizer;\n+import org.apache.lucene.analysis.phonetic.DaitchMokotoffSoundexFilter;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n@@ -72,4 +73,14 @@ public void testPhoneticTokenFilterBeiderMorseWithLanguage() throws IOException\n \"rmba\", \"rmbalt\", \"rmbo\", \"rmbolt\", \"rmbu\", \"rmbult\" };\n BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected);\n }\n+\n+ public void testPhoneticTokenFilterDaitchMotokoff() throws IOException {\n+ TokenFilterFactory filterFactory = analysis.tokenFilter.get(\"daitch_mokotoff\");\n+ Tokenizer tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(\"chauptman\"));\n+ String[] expected = new String[] { \"473660\", \"573660\" };\n+ assertThat(filterFactory.create(tokenizer), instanceOf(DaitchMokotoffSoundexFilter.class));\n+ BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected);\n+ }\n+\n }",
"filename": "plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java",
"status": "modified"
}
]
} |
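The root cause described above is that the filter used the Daitch-Mokotoff encoder, which does not branch, rather than the dedicated branching token filter. Assuming the Apache Commons Codec API (`org.apache.commons.codec.language.DaitchMokotoffSoundex`), the difference can be seen directly; the codes in the comments are only the expected output, not something this sketch guarantees.

```java
import org.apache.commons.codec.language.DaitchMokotoffSoundex;

// Sketch of the difference at the root of this bug, assuming the commons-codec API:
// encode() computes a single code without branching, while soundex() returns all
// branched encodings separated by '|'.
public class DaitchMokotoffBranching {
    public static void main(String[] args) {
        DaitchMokotoffSoundex dm = new DaitchMokotoffSoundex();
        System.out.println(dm.encode("CHAUPTMAN"));   // single code only, e.g. 473660
        System.out.println(dm.soundex("CHAUPTMAN"));  // branched codes, e.g. 573660|473660
    }
}
```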
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 5.6.5\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nFrom this discussion: https://discuss.elastic.co/t/cant-upgrade-elasticsearch-5-2-1-to-5-6-5/115403\r\n\r\nWhen a user set on an index a weird setting like: `\"index.routing.allocation.exclude.tag\": null` (was done in 5.2), when trying to restart the cluster, the master node is sending NPE:\r\n\r\n```\r\n[2018-01-14T17:36:24,392][WARN ][o.e.d.z.ZenDiscovery ] [xg-ops-elk-javaes-mgt-2] failed to validate incoming join request from node [{xg-ops-elk-javaes-mgt-3}{SQaSuQ1aS-izcNs4P9yItQ}{_Wc_mrnfS5Ghb3RjuJLiJg}{10.0.23.55}{10.0.23.55:9300}]\r\norg.elasticsearch.transport.RemoteTransportException: [xg-ops-elk-javaes-mgt-3][10.0.23.55:9300][internal:discovery/zen/join/validate]\r\nCaused by: java.lang.NullPointerException\r\n\tat org.elasticsearch.cluster.node.DiscoveryNodeFilters.buildFromKeyValue(DiscoveryNodeFilters.java:73) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData$Builder.build(IndexMetaData.java:1044) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData.readFrom(IndexMetaData.java:724) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.metadata.MetaData.readFrom(MetaData.java:676) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.cluster.ClusterState.readFrom(ClusterState.java:659) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.discovery.zen.MembershipAction$ValidateJoinRequest.readFrom(MembershipAction.java:171) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1510) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1396) ~[elasticsearch-5.2.1.jar:5.2.1]\r\n\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[?:?]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) ~[?:?]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) ~[?:?]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n\tat io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) ~[?:?]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_73]\r\n```\r\n\r\nAs this can happen, I think we should try to be a bit safer when `exclude` or `include` values are `null`.\r\n\r\nThe code which is failing: https://github.com/elastic/elasticsearch/blob/5.6/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java#L69-L81\r\n\r\n```java\r\nString[] values = Strings.tokenizeToStringArray(entry.getValue(), \",\");\r\nif (values.length > 0) {\r\n // ...\r\n```\r\n\r\n`values` is `null` in that case.",
"comments": [
{
"body": "This was indeed caused by the following change in #22591: https://github.com/elastic/elasticsearch/pull/22591/files#diff-ff07a4bd5fb72fe4530a40b955791d17R67\r\n\r\nI've opened #28224 as a fix.\r\n",
"created_at": "2018-01-15T15:08:38Z"
},
{
"body": "great! high efficiency!\r\n",
"created_at": "2018-01-16T15:19:58Z"
},
{
"body": "@ywelsch I think this is closed by #28224?",
"created_at": "2018-01-17T14:54:45Z"
},
{
"body": "yes, thanks @jasontedor ",
"created_at": "2018-01-17T15:26:32Z"
}
],
"number": 28213,
"title": "NPE when upgrading from 5.2 to 5.6 with index.routing.allocation.exclude.tag: null"
} | {
"body": "This method has a different contract than all the other methods in this class, returning null instead of an empty array when receiving a null input. While switching over some methods from `delimitedListToStringArray` to this method `tokenizeToStringArray`, this resulted in unexpected `null`s in some places of our code.\r\n\r\nRelates #28213",
"number": 28224,
"review_comments": [],
"title": "Never return null from Strings.tokenizeToStringArray"
} | {
"commits": [
{
"message": "Never return null from Strings.tokenizeToStringArray"
}
],
"files": [
{
"diff": "@@ -474,6 +474,9 @@ public static String[] split(String toSplit, String delimiter) {\n * @see #delimitedListToStringArray\n */\n public static String[] tokenizeToStringArray(final String s, final String delimiters) {\n+ if (s == null) {\n+ return EMPTY_ARRAY;\n+ }\n return toStringArray(tokenizeToCollection(s, delimiters, ArrayList::new));\n }\n \n@@ -536,7 +539,7 @@ public static String[] delimitedListToStringArray(String str, String delimiter)\n */\n public static String[] delimitedListToStringArray(String str, String delimiter, String charsToDelete) {\n if (str == null) {\n- return new String[0];\n+ return EMPTY_ARRAY;\n }\n if (delimiter == null) {\n return new String[]{str};",
"filename": "server/src/main/java/org/elasticsearch/common/Strings.java",
"status": "modified"
},
{
"diff": "@@ -194,6 +194,14 @@ public void testInvalidIPFilter() {\n assertEquals(\"invalid IP address [\" + invalidIP + \"] for [\" + filterSetting.getKey() + ipKey + \"]\", e.getMessage());\n }\n \n+ public void testNull() {\n+ Setting<String> filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING);\n+\n+ IndexMetaData.builder(\"test\")\n+ .settings(settings(Version.CURRENT).putNull(filterSetting.getKey() + \"name\")).numberOfShards(2).numberOfReplicas(0).build();\n+ }\n+\n public void testWildcardIPFilter() {\n String ipKey = randomFrom(\"_ip\", \"_host_ip\", \"_publish_ip\");\n Setting<String> filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,",
"filename": "server/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java",
"status": "modified"
}
]
} |
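The fix above changes `Strings.tokenizeToStringArray` to return an empty array for a null input, which is what prevents the NPE seen when `index.routing.allocation.exclude.tag` is set to `null`. Below is a standalone sketch of that contract in plain Java; it is not the Elasticsearch implementation, and the names are illustrative.

```java
import java.util.StringTokenizer;

// Standalone sketch (not ES source) of the contract the fix establishes:
// a null input yields an empty array rather than null, so callers that
// iterate the result never hit a NullPointerException.
public class TokenizeSketch {
    private static final String[] EMPTY_ARRAY = new String[0];

    static String[] tokenizeToStringArray(String s, String delimiters) {
        if (s == null) {
            return EMPTY_ARRAY;
        }
        StringTokenizer tokenizer = new StringTokenizer(s, delimiters);
        String[] tokens = new String[tokenizer.countTokens()];
        for (int i = 0; tokenizer.hasMoreTokens(); i++) {
            tokens[i] = tokenizer.nextToken().trim();
        }
        return tokens;
    }

    public static void main(String[] args) {
        // e.g. the null value from "index.routing.allocation.exclude.tag": null
        for (String token : tokenizeToStringArray(null, ",")) {
            System.out.println(token);  // loop body never runs, and no NPE is thrown
        }
    }
}
```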
{
"body": "**Elasticsearch version**\r\n6.1.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_131\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_131-b11)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version**:\r\nUbuntu 17.10 with 4.13.0-21-generic\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen index's mapping includes text and date properties, result of **simple_query_search**, which includes asterisk for prefix query,\r\nreturns no documents. \r\nWhen there's no date properties on index's mappping, query works fine.\r\nThe same query works fine on 5.6 version.\r\n\r\n\r\n**Steps to reproduce**:\r\n1. Create index \"tests\" with text and date properties.\r\n```\r\ncurl -XPUT http://localhost:9200/tests -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"date\": {\r\n \"type\": \"date\"\r\n },\r\n \"name\": {\r\n \"type\": \"text\",\r\n \"fields\": {\r\n \"raw\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n2. Put some data\r\n```\r\ncurl -XPUT http://localhost:9200/tests/test/1 -H 'Content-Type: application/json' -H 'Accept: application/json' -d '\r\n{\r\n \"date\": \"2018-01-12\",\r\n \"name\": \"test1\"\r\n}'\r\n```\r\n\r\n3. Try to search with simple_query_search and asterisk.\r\n```\r\ncurl -XPOST http://localhost:9200/tests/_search -H 'Content-Type: application/json' -H 'Accept: application/json' -d '\r\n{\r\n \"query\": {\r\n \"simple_query_string\": {\r\n \"query\": \"te*\"\r\n }\r\n }\r\n}'\r\n```\r\n\r\nReceived result: no documents found\r\nExpected result: document with id 1 found\r\n\r\n\r\n**Provide logs (if relevant)**:\r\nthere's no errors on logs",
"comments": [
{
"body": "I was able to reproduce this on ES 6.1.0 and 6.1.1, for some reason it's being rewritten to `\"explanation\" : \"MatchNoDocsQuery(\\\"empty string passed to query parser\\\")\"`\r\n\r\nFor the meantime, `query_string` behaves correctly and retrieves the document.",
"created_at": "2018-01-12T22:37:49Z"
},
{
"body": "@jimczi I've built elasticsearch from your branch 'bugs/lenient_simple_query_string' and I run steps to reproduce the bug and nothing changes from my perspective. The response is still 0 documents instead of 1 document found. I've noticed that the only change is in \"explanation.description\" in response of \r\n```\r\ncurl -XPOST http://localhost:9200/tests/test/1/_explain -H 'Content-Type: application/json' -H 'Accept: application/json' -d '\r\n{\r\n \"query\": {\r\n \"simple_query_string\": {\r\n \"query\": \"te*\"\r\n }\r\n }\r\n}'\r\n```",
"created_at": "2018-01-15T15:42:38Z"
},
{
"body": "You're right @lbrzekowski, my fix was only to get the expected exceptions. I pushed another commit to continue with the other fields when leniency is on and we hit an exception on a field.",
"created_at": "2018-01-15T16:58:14Z"
}
],
"number": 28204,
"title": "Wrong result of \"simple_query_search\" with prefix query."
} | {
"body": "This change converts any exception that occurs during the parsing of\r\na simple_query_string to a match_no_docs query (instead of a null query)\r\nwhen leniency is activated.\r\n\r\nCloses #28204",
"number": 28219,
"review_comments": [
{
"body": "nit: can you update the comment to reflect that we return MatchNoDocs instead of null now?",
"created_at": "2018-01-15T14:35:39Z"
}
],
"title": "Fix simple_query_string on invalid input"
} | {
"commits": [
{
"message": "Fix simple_query_string on invalid input\n\nThis change converts any exception that occurs during the parsing of\na simple_query_string to a match_no_docs query (instead of a null query)\nwhen leniency is activated.\n\nCloses #28204"
},
{
"message": "we should not fail on exceptions if lenient is true"
},
{
"message": "Merge branch 'master' into bugs/lenient_simple_query_string"
},
{
"message": "address review"
},
{
"message": "Merge branch 'master' into bugs/lenient_simple_query_string"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.SynonymQuery;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.AbstractQueryBuilder;\n@@ -86,11 +87,11 @@ private Analyzer getAnalyzer(MappedFieldType ft) {\n }\n \n /**\n- * Rethrow the runtime exception, unless the lenient flag has been set, returns null\n+ * Rethrow the runtime exception, unless the lenient flag has been set, returns {@link MatchNoDocsQuery}\n */\n private Query rethrowUnlessLenient(RuntimeException e) {\n if (settings.lenient()) {\n- return null;\n+ return Queries.newMatchNoDocsQuery(\"failed query, caused by \" + e.getMessage());\n }\n throw e;\n }\n@@ -115,7 +116,7 @@ public Query newDefaultQuery(String text) {\n try {\n return queryBuilder.parse(MultiMatchQueryBuilder.Type.MOST_FIELDS, weights, text, null);\n } catch (IOException e) {\n- return rethrowUnlessLenient(new IllegalArgumentException(e.getMessage()));\n+ return rethrowUnlessLenient(new IllegalStateException(e.getMessage()));\n }\n }\n \n@@ -135,7 +136,7 @@ public Query newFuzzyQuery(String text, int fuzziness) {\n settings.fuzzyMaxExpansions, settings.fuzzyTranspositions);\n disjuncts.add(wrapWithBoost(query, entry.getValue()));\n } catch (RuntimeException e) {\n- rethrowUnlessLenient(e);\n+ disjuncts.add(rethrowUnlessLenient(e));\n }\n }\n if (disjuncts.size() == 1) {\n@@ -156,7 +157,7 @@ public Query newPhraseQuery(String text, int slop) {\n }\n return queryBuilder.parse(MultiMatchQueryBuilder.Type.PHRASE, phraseWeights, text, null);\n } catch (IOException e) {\n- return rethrowUnlessLenient(new IllegalArgumentException(e.getMessage()));\n+ return rethrowUnlessLenient(new IllegalStateException(e.getMessage()));\n } finally {\n queryBuilder.setPhraseSlop(0);\n }\n@@ -184,7 +185,7 @@ public Query newPrefixQuery(String text) {\n disjuncts.add(wrapWithBoost(query, entry.getValue()));\n }\n } catch (RuntimeException e) {\n- return rethrowUnlessLenient(e);\n+ disjuncts.add(rethrowUnlessLenient(e));\n }\n }\n if (disjuncts.size() == 1) {",
"filename": "server/src/main/java/org/elasticsearch/index/search/SimpleQueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -46,15 +46,18 @@\n import org.elasticsearch.test.AbstractQueryTestCase;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n \n import static org.hamcrest.Matchers.anyOf;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.either;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -607,6 +610,21 @@ public void testToFuzzyQuery() throws Exception {\n assertEquals(expected, query);\n }\n \n+ public void testLenientToPrefixQuery() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+\n+ Query query = new SimpleQueryStringBuilder(\"t*\")\n+ .field(DATE_FIELD_NAME)\n+ .field(STRING_FIELD_NAME)\n+ .lenient(true)\n+ .toQuery(createShardContext());\n+ List<Query> expectedQueries = new ArrayList<>();\n+ expectedQueries.add(new MatchNoDocsQuery(\"\"));\n+ expectedQueries.add(new PrefixQuery(new Term(STRING_FIELD_NAME, \"t\")));\n+ DisjunctionMaxQuery expected = new DisjunctionMaxQuery(expectedQueries, 1.0f);\n+ assertEquals(expected, query);\n+ }\n+\n private static IndexMetaData newIndexMeta(String name, Settings oldIndexSettings, Settings indexSettings) {\n Settings build = Settings.builder().put(oldIndexSettings)\n .put(indexSettings)",
"filename": "server/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java",
"status": "modified"
}
]
} |
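A minimal, self-contained sketch of the lenient-handling pattern the diff and test above move to: when building a prefix query over several fields, a field that cannot accept the prefix is kept as a match-nothing clause inside the disjunction instead of being dropped or returned as `null`. The class `LenientPrefixSketch`, the `fieldWeights` parameter and the control flow are illustrative assumptions, not the actual `SimpleQueryStringQueryParser` source; it only assumes Lucene 7.x on the classpath.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.DisjunctionMaxQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;

class LenientPrefixSketch {
    /**
     * Build one prefix query per field. In lenient mode a field that fails
     * (e.g. a date or numeric field rejecting a text prefix in the real parser)
     * becomes a MatchNoDocsQuery clause instead of aborting the whole query.
     * In this standalone sketch nothing actually throws; the catch block only
     * illustrates where the real per-field failure would be handled.
     */
    static Query newPrefixQuery(String text, Map<String, Float> fieldWeights, boolean lenient) {
        List<Query> disjuncts = new ArrayList<>();
        for (Map.Entry<String, Float> entry : fieldWeights.entrySet()) {
            try {
                disjuncts.add(new PrefixQuery(new Term(entry.getKey(), text)));
            } catch (RuntimeException e) {
                if (lenient == false) {
                    throw e;
                }
                disjuncts.add(new MatchNoDocsQuery("failed query, caused by " + e.getMessage()));
            }
        }
        return disjuncts.size() == 1 ? disjuncts.get(0) : new DisjunctionMaxQuery(disjuncts, 1.0f);
    }
}
```

With one field failing, the resulting query has the same shape as the `DisjunctionMaxQuery` of a `MatchNoDocsQuery` plus a `PrefixQuery` asserted in `testLenientToPrefixQuery` above.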
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.11\r\n\r\n**Plugins installed**: analysis-icu\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\nopenjdk version \"1.8.0_151\"\r\nOpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-1~deb9u1-b12)\r\nOpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n```\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\nLinux plop 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nTL;DR Since upgrade to 6.0 (then last 6.1.1), Terms aggregation on integer field shows result on data that should not exists for the provided query.\r\n\r\n[Original post on forum](https://discuss.elastic.co/t/terms-aggregation-shows-up-irrevelant-data/111287/4)\r\n\r\nHere is a first request I do, in order to assert that I do not have any data > 60:\r\n\r\n```\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"media_id\": \"aaa\"\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"eng.visu\": {\r\n \"gte\": 60\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"size\": 9999\r\n} \r\n```\r\n\r\n`eng.visu` is an array of 1 to 5 integers, always < 60 for this `media_id`.\r\n\r\nResult is as expected:\r\n\r\n```\r\n{\r\n \"took\": 483,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": null,\r\n \"hits\": []\r\n }\r\n}\r\n```\r\nBut then, I do a terms aggregation on those data:\r\n```\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"term\": {\r\n \"media_id\": \"aaa\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"aggs\": {\r\n \"__all__\": {\r\n \"terms\": {\r\n \"field\": \"eng.visu\",\r\n \"size\": 9999\r\n }\r\n }\r\n },\r\n \"size\": 0\r\n}\r\n```\r\n\r\nAnd the result:\r\n```\r\n{\r\n\"took\": 24,\r\n\"timed_out\": false,\r\n\"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n},\r\n\"hits\": {\r\n \"total\": 18670,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n},\r\n\"aggregations\": {\r\n \"__all__\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": 1,\r\n \"doc_count\": 690\r\n },\r\n {\r\n \"key\": 0,\r\n \"doc_count\": 674\r\n },\r\n {\r\n \"key\": 2,\r\n \"doc_count\": 655\r\n },\r\n ...\r\n {\r\n \"key\": 80,\r\n \"doc_count\": 298\r\n },\r\n {\r\n \"key\": 82,\r\n \"doc_count\": 298\r\n },\r\n ...\r\n {\r\n \"key\": 5276,\r\n \"doc_count\": 1\r\n }\r\n ]\r\n }\r\n}\r\n}\r\n```\r\nAs you can see, I have keys that are really greater than 60.",
"comments": [
{
"body": "@fvilpoix thanks for opening this issue, I tried to come up with a reproduction on 6.1.1 with a minimal including a few simple example documents but failed so far. If you are able to reproduce this on a minimal setup (reduced mapping, a small number of examples that exhibit this behaviour on a fresh index) and could share this, that would be ideal. Otherwise, can you provide your example mapping and maybe a few documents from your data so we can reproduce this better?",
"created_at": "2018-01-02T19:08:24Z"
},
{
"body": "Thanks for your answer.\r\n\r\nI've tried to reduce the dataset to a minimal size in order to reproduce the issue, but as soon as I do a reindexation into a new index, the aggregation works normally.\r\n\r\nYou can find in attachement the mapping of the migrated index from 5.x, thus some significant data. But the Index as ~30.000.000 documents:\r\n\r\n[mapping.json.txt](https://github.com/elastic/elasticsearch/files/1600686/mapping.json.txt)\r\n[data.json.txt](https://github.com/elastic/elasticsearch/files/1600818/data.json.txt)\r\n\r\nThe data has been extracted with the following query :+1: \r\n```\r\nGET engagement-2017/_search\r\n{\r\n \"aggs\": {\r\n \"filtered\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"media_id\": \"f4ab5b1493891cb5\"\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"@timestamp\": {\r\n \"gte\": \"2017-11-11 00:00\",\r\n \"lte\": \"2017-11-11 01:00\",\r\n \"format\": \"yyyy-MM-dd HH:mm\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"aggs\": {\r\n \"__all__\": {\r\n \"terms\": {\r\n \"field\": \"eng.visu\",\r\n \"size\": 9999\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"size\": 999\r\n}\r\n```\r\nYou can see that media_id filter is not fullfilled, as many others are fetched. For this media_id, eng.visu should never contains data > 197.\r\n\r\n",
"created_at": "2018-01-03T15:40:19Z"
},
{
"body": "Thanks, I don't see anything obviously wrong in the mapping or the query. Just a few things to clarify:\r\n* I assume \"engagement-2017\" is an alias, since the mapping mentions \"engagement_from_prod\" as an index name. Can you clarify this and check what indices it points to?\r\n* I see a \"\\_default\\_\", \"doc\" and \"engagement\" type in the mapping, are they all still used after migrating to 6.0? If so, is it possible that they accidentally contain any documents with the same `media_id` with the unexpected \"eng.visu\" data that might have been removed from other types an now still show up (the aggregation runs across all types I think) \r\n\r\n> You can see that media_id filter is not fullfilled, as many others are fetched.\r\n\r\nI don't completely understand what you mean by \"not fullfilled\". If you mean that the \"hits\" in the attached \"data.json.txt\" contain documents with all kinds of `media_id`, I think that is because the query part in is a \"match_all\" and you retrieve 999 hits. Why is this necessary for the aggregation? Maybe I'm missing something.\r\n\r\nOther than that, could you also again send the full query (inklusing the request URL) you use to check that for this media id (\"f4ab5b1493891cb5\") there are no hits with values >197 and if possible the response (maybe redacted to not expose private data)?\r\n ",
"created_at": "2018-01-04T13:05:59Z"
},
{
"body": "Thank you for your help!\r\n\r\n* yes, `engagement-2017` is an alias, linked on `engagement_from_prod`. From cerebro:\r\n\r\n* There are documents into `doc` and `engagement`. \r\n\r\nQueries/aggregations are done without any type filter, so I believe it matchs all documents from any type, right?\r\n\r\nYou're right, I've given you the result of the wrong attempt, using an filter aggregation instead of a aggregation with filtered query. Sorry for the wase of time.\r\n\r\nI've done the right one, Data is perfect, but aggregation is weird, as there is data > 197 even that there are none in hits!!! (or I haven't understood how aggregations work :smile:)\r\n\r\n```json\r\nGET engagement-2017/_search?\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"media_id\": \"f4ab5b1493891cb5\"\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"@timestamp\": {\r\n \"gte\": \"2017-11-11 00:00\",\r\n \"lte\": \"2017-11-11 01:00\",\r\n \"format\": \"yyyy-MM-dd HH:mm\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"aggs\": {\r\n \"__all__\": {\r\n \"terms\": {\r\n \"field\": \"eng.visu\",\r\n \"size\": 9999,\r\n \"order\" : { \"_key\" : \"asc\" }\r\n }\r\n }\r\n },\r\n \"size\": 200\r\n}\r\n```\r\n[data.json.txt](https://github.com/elastic/elasticsearch/files/1603582/data.json.txt)\r\n\r\n\r\n\r\nHere is the full query asserting no data exists with values > 197:\r\n\r\n```json\r\nGET engagement-2017/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"media_id\": \"f4ab5b1493891cb5\"\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"eng.visu\": {\r\n \"gt\": 197\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"size\": 9999\r\n} \r\n```\r\n\r\nAnd result:\r\n\r\n```json\r\n{\r\n \"took\": 49,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": null,\r\n \"hits\": []\r\n }\r\n}\r\n```",
"created_at": "2018-01-04T14:29:22Z"
},
{
"body": "Many thanks, this is strange indeed. Looking at the aggregation result, there are indeed only 159 documents for `\"media_id\": \"f4ab5b1493891cb5\"` but if I sum up the \"doc_counts\" in the terms aggregation buckets I get a total of 829, so something with the aggregation scope seems wrong.\r\n@colings86 would you mind sharing your opinion on this when you have time? I might have missed something obvious.",
"created_at": "2018-01-04T15:28:28Z"
},
{
"body": "Thanks @fvilpoix , this is a bug in 6x that appears on indices created in 5x which is why you cannot reproduce when you reindex the data. I am able to reproduce with a fresh install of 6x with an index created in 5x. I opened an issue in Lucene with a patch (https://issues.apache.org/jira/browse/LUCENE-8117) but if we are not able to release a new version of Lucene for the next release (6.2) we'll add a workaround in es directly.",
"created_at": "2018-01-04T18:40:32Z"
},
{
"body": "Thanks everyone for the bugfix, I hope it will solve my problem!",
"created_at": "2018-01-15T17:18:54Z"
}
],
"number": 28044,
"title": "Terms aggregation shows up irrevelant data"
} | {
"body": "Closes #28044",
"number": 28218,
"review_comments": [],
"title": "upgrade to lucene 7.2.1"
} | {
"commits": [
{
"message": "upgrade to lucene 7.2.1"
},
{
"message": "remove unrelated changes"
},
{
"message": "do not update Lucene version for 6.2.0 yet"
}
],
"files": [
{
"diff": "@@ -1,5 +1,5 @@\n elasticsearch = 7.0.0-alpha1\n-lucene = 7.2.0\n+lucene = 7.2.1\n \n # optional dependencies\n spatial4j = 0.6",
"filename": "buildSrc/version.properties",
"status": "modified"
},
{
"diff": "@@ -1,7 +1,7 @@\n :version: 7.0.0-alpha1\n :major-version: 7.x\n-:lucene_version: 7.2.0\n-:lucene_version_path: 7_2_0\n+:lucene_version: 7.2.1\n+:lucene_version_path: 7_2_1\n :branch: master\n :jdk: 1.8.0_131\n :jdk_major: 8",
"filename": "docs/Versions.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1 @@\n+51fbb33cdb17bb36a0e86485685bba18eb1c2ccf\n\\ No newline at end of file",
"filename": "modules/lang-expression/licenses/lucene-expressions-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+cfdfcd54c052cdd08140c7cd4daa7929b9657da0\n\\ No newline at end of file",
"filename": "plugins/analysis-icu/licenses/lucene-analyzers-icu-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+21418892a16434ecb4f8efdbf4e62838f58a6a59\n\\ No newline at end of file",
"filename": "plugins/analysis-kuromoji/licenses/lucene-analyzers-kuromoji-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+970e860a6e252e7c1dc117c45176a847ce961ffc\n\\ No newline at end of file",
"filename": "plugins/analysis-phonetic/licenses/lucene-analyzers-phonetic-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+ec08375a8392720cc378995d8234cd6138a735f6\n\\ No newline at end of file",
"filename": "plugins/analysis-smartcn/licenses/lucene-analyzers-smartcn-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+58305876f7fb0fbfad288910378cf4770da43892\n\\ No newline at end of file",
"filename": "plugins/analysis-stempel/licenses/lucene-analyzers-stempel-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+51cf40e2606863840e52d7e8981314a5a0323e06\n\\ No newline at end of file",
"filename": "plugins/analysis-ukrainian/licenses/lucene-analyzers-morfologik-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+324c3a090a04136720f4ef612db03b5c14866efa\n\\ No newline at end of file",
"filename": "server/licenses/lucene-analyzers-common-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+bc8dc9cc1555543532953d1dff33b67f849e19f9\n\\ No newline at end of file",
"filename": "server/licenses/lucene-backward-codecs-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+91897dbbbbada95ccddbd90505f0a0ba6bf7c199\n\\ No newline at end of file",
"filename": "server/licenses/lucene-core-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+5dbae570b1a4e54cd978fe5c3ed2d6b2f87be968\n\\ No newline at end of file",
"filename": "server/licenses/lucene-grouping-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+2f4b8c93563409cfebb36d910c4dab4910678689\n\\ No newline at end of file",
"filename": "server/licenses/lucene-highlighter-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+3121a038d472f51087500dd6da9146a9b0031ae4\n\\ No newline at end of file",
"filename": "server/licenses/lucene-join-7.2.1.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+21233b2baeed2aaa5acf8359bf8c4a90cc6bf553\n\\ No newline at end of file",
"filename": "server/licenses/lucene-memory-7.2.1.jar.sha1",
"status": "added"
}
]
} |
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.1.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): From the official docker\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Official docker, On docker for Mac\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen using `meta` attr for aggregations, it should be returned by ES.\r\nSince ES6, adding `meta` for `filters` aggregation when using a `range` filter on a `date` field does not return the `meta` (other filters and other field types seem to work).\r\n\r\n**Steps to reproduce**:\r\n\r\n1. Add the document:\r\n```\r\n{\r\n \"event_id\": 1,\r\n \"@timestamp\": \"2018-01-10T00:00:00.0000000Z\"\r\n}\r\n```\r\n\r\n2. Run the query:\r\n```js\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"f90f4e\": {\r\n \"filters\": {\r\n \"filters\": {\r\n \"some_label\": {\r\n \"range\": {\r\n \"@timestamp\": {\r\n \"gte\": \"2016-12-01\",\r\n \"lte\": \"2017-02-01\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"meta\": {\r\n \"as_\": \"filters()\"\r\n },\r\n \"aggs\": {\r\n \"3018ae\": {\r\n \"cardinality\": {'field': \"event_id\" }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nAnd the result is:\r\n```js\r\n{\r\n ...,\r\n \"hits\": {...},\r\n \"aggregations\": {\r\n \"f90f4e\": {\r\n \"buckets\": {\r\n \"some_label\": {\r\n \"doc_count\": 1,\r\n \"3018ae\": {\r\n \"value\": 1\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nIn ES5.6.3, the result is\r\n```\r\n{\r\n ...,\r\n \"aggregations\": {\r\n \"f90f4e\": {\r\n \"meta\": {\r\n \"as_\": \"filters()\"\r\n },\r\n \"buckets\": {\r\n \"some_label\": {\r\n \"doc_count\": 0,\r\n \"3018ae\": {\r\n \"value\": 0\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nas expected.\r\n\r\nUsing anything but range, and range on a non-date field works.\r\nReplacing `@timestamp` to `event_id`, gives the expected result as well.\r\n\r\n",
"comments": [
{
"body": "Also, when using the `_all` endpoint, using `range` with **any** type of field doesn't return the `meta`.\r\n\r\n```\r\nPOST http://../_all/_search\r\n```\r\nWith this query \r\n```\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"f90f4e\": {\r\n \"filters\": {\r\n \"filters\": {\r\n \"some_label\": {\r\n \"range\": {\r\n \"event_id\": {\r\n \"gte\": 1,\r\n \"lte\": 20\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"meta\": {\r\n \"as_\": \"filters()\"\r\n },\r\n \"aggs\": {\r\n \"3018ae\": {\r\n \"cardinality\": {'field': \"event_id\" }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\ndoens't work, but using\r\n\r\n```\r\nPOST http://../some-index/_search\r\n```\r\nworks as expected.\r\n",
"created_at": "2018-01-11T13:13:05Z"
},
{
"body": "@ohadravid thanks for raising this issue. I have found the bug, it seems that when the filters in the filters aggregation are rewritten we don't add the metadata to the new rewritten version. I will have a PR up to fix this shortly.",
"created_at": "2018-01-11T15:10:50Z"
}
],
"number": 28170,
"title": "Filters aggregation does not return meta for date fields in range filter"
} | {
"body": "Previous to this change, if any filters in the filters aggregation were rewritten, the rewritten version of the FiltersAggregationBuilder would not contain the metadata form the original. This is because `AbstractAggregationBuilder.getMetadata()` returns an empty map when not metadata is set.\r\n\r\nCloses #28170",
"number": 28185,
"review_comments": [
{
"body": "Shouldn't the agg be responsible for copying the metadata when it's rewritten ? This is a bit confusing now since we don't know if the rewritten metadata is empty on purpose or not. We could just copy the metadata explicitly in `Filter(s)Agg...#doRewrite` to avoid ambiguity ? ",
"created_at": "2018-01-12T09:13:39Z"
},
{
"body": "The way I have been thinking about it is that this superclass manages the metadata in every other respect (toXContent, setters, getters) so it seem natural to me that it would handle the metadata while rewriting the agg? This way the way that metadata is handled should be consistent across all aggregations?",
"created_at": "2018-01-12T10:29:21Z"
},
{
"body": "Actually it is a bit confusing because `AbractAggregationBuilder` handles some of the metadata stuff too, but since most if not all aggregations extend AbstractAggregationBuilder I think it is still nice to have consistent logic for metadata across all the aggs. Maybe separately we should look at whether we still need AggregationBuilder and AbstractAggregationBuilder or if we could merge them to make things a bit simpler?",
"created_at": "2018-01-12T10:33:21Z"
},
{
"body": "> Maybe separately we should look at whether we still need AggregationBuilder and AbstractAggregationBuilder or if we could merge them to make things a bit simpler?\r\n\r\nagreed, it will also simplify the fix for https://github.com/elastic/elasticsearch/issues/27782 which requires a new clone function. ",
"created_at": "2018-01-12T10:48:45Z"
},
{
"body": "what prevents us from always blindly setting the metadata? Rewriting should never modify it?",
"created_at": "2018-01-12T13:52:51Z"
},
{
"body": "good point. I'll change it to do this since it simplifies things",
"created_at": "2018-01-12T14:55:50Z"
}
],
"title": "Adds metadata to rewritten aggregations"
} | {
"commits": [
{
"message": "Adds metadata to rewritten aggregations\n\nPrevious to this change, if any filters in the filters aggregation were rewritten, the rewritten version of the FiltersAggregationBuilder would not contain the metadata form the original. This is because `AbstractAggregationBuilder.getMetadata()` returns an empty map when not metadata is set.\n\nCloses #28170"
},
{
"message": "Always set metadata when rewritten"
}
],
"files": [
{
"diff": "@@ -101,9 +101,7 @@ public final AggregationBuilder rewrite(QueryRewriteContext context) throws IOEx\n if (rewritten == this) {\n return rewritten;\n }\n- if (getMetaData() != null && rewritten.getMetaData() == null) {\n- rewritten.setMetaData(getMetaData());\n- }\n+ rewritten.setMetaData(getMetaData());\n AggregatorFactories.Builder rewrittenSubAggs = factoriesBuilder.rewrite(context);\n rewritten.subAggregations(rewrittenSubAggs);\n return rewritten;",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/AggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,8 @@\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n import java.io.IOException;\n+import java.util.HashMap;\n+import java.util.Map;\n \n public class FiltersAggsRewriteIT extends ESSingleNodeTestCase {\n \n@@ -58,10 +60,14 @@ public void testWrapperQueryIsRewritten() throws IOException {\n }\n FiltersAggregationBuilder builder = new FiltersAggregationBuilder(\"titles\", new FiltersAggregator.KeyedFilter(\"titleterms\",\n new WrapperQueryBuilder(bytesReference)));\n+ Map<String, Object> metadata = new HashMap<>();\n+ metadata.put(randomAlphaOfLengthBetween(1, 20), randomAlphaOfLengthBetween(1, 20));\n+ builder.setMetaData(metadata);\n SearchResponse searchResponse = client().prepareSearch(\"test\").setSize(0).addAggregation(builder).get();\n assertEquals(3, searchResponse.getHits().getTotalHits());\n InternalFilters filters = searchResponse.getAggregations().get(\"titles\");\n assertEquals(1, filters.getBuckets().size());\n assertEquals(2, filters.getBuckets().get(0).getDocCount());\n+ assertEquals(metadata, filters.getMetaData());\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/FiltersAggsRewriteIT.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n \n import java.io.IOException;\n+import java.util.Collections;\n \n import static org.hamcrest.Matchers.instanceOf;\n \n@@ -123,6 +124,7 @@ public void testOtherBucket() throws IOException {\n public void testRewrite() throws IOException {\n // test non-keyed filter that doesn't rewrite\n AggregationBuilder original = new FiltersAggregationBuilder(\"my-agg\", new MatchAllQueryBuilder());\n+ original.setMetaData(Collections.singletonMap(randomAlphaOfLengthBetween(1, 20), randomAlphaOfLengthBetween(1, 20)));\n AggregationBuilder rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n assertSame(original, rewritten);\n ",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersTests.java",
"status": "modified"
}
]
} |
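A stripped-down sketch of the design the review discussion above converges on: the base builder copies its metadata onto the rewritten instance unconditionally, so individual aggregations never have to remember to do it in `doRewrite`. The `Builder` class and `doRewrite` hook below are hypothetical stand-ins, not the real `AggregationBuilder` API.

```java
import java.util.Collections;
import java.util.Map;

// Hypothetical stand-in for an aggregation builder: rewrite() may return a new
// instance, and the base class is the single place that carries metadata over.
abstract class Builder {
    private Map<String, Object> metaData = Collections.emptyMap();

    final Builder setMetaData(Map<String, Object> metaData) {
        this.metaData = metaData;
        return this;
    }

    final Map<String, Object> getMetaData() {
        return metaData;
    }

    final Builder rewrite() {
        Builder rewritten = doRewrite();
        if (rewritten == this) {
            return this;
        }
        // Always copy instead of only copying when the rewritten metadata is null:
        // rewriting never changes metadata, so there is no ambiguity to preserve.
        rewritten.setMetaData(getMetaData());
        return rewritten;
    }

    // Subclasses return either `this` (no change) or a rewritten copy.
    protected abstract Builder doRewrite();
}
```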
{
"body": "**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen combining an affix setting with a group setting, the affix setting does not correctly find the namespaces anymore due to a non-working regex in [Setting.AffixKey](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/settings/Setting.java#L1300-L1314), when any setting inside that group setting is supplied\r\n\r\n**Steps to reproduce**:\r\n\r\n```java\r\npublic void testAffixNamespacesWithGroupSetting() {\r\n final Setting.AffixSetting<Settings> affixSetting =\r\n Setting.affixKeySetting(\"prefix.\",\"suffix\",\r\n (key) -> Setting.groupSetting(key + \".\", Setting.Property.Dynamic, Setting.Property.NodeScope));\r\n\r\n // works\r\n assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix\", \"anything\").build()), hasSize(1));\r\n // breaks, has size 0\r\n assertThat(affixSetting.getNamespaces(Settings.builder().put(\"prefix.infix.suffix.anything\", \"anything\").build()), hasSize(1));\r\n}\r\n```",
"comments": [
{
"body": "Side note: I think the best workaround here would be to get rid of group settings all together. They just add leniency.",
"created_at": "2018-01-03T14:54:39Z"
},
{
"body": "Hello,\r\nIt seems that AffixSetting accept only list settings (among settings who has a complex pattern), so I modified the regex pattern to make it also support group settings.",
"created_at": "2018-01-07T15:34:13Z"
},
{
"body": "@PnPie Thanks for the interest @PnPie but please note that this issue does not have the adoptme label; there is work on this issue in-flight already.",
"created_at": "2018-01-07T15:40:13Z"
},
{
"body": "Okay, I just see that it's not assigned, there is no problem :)",
"created_at": "2018-01-07T15:58:11Z"
}
],
"number": 28047,
"title": "Settings: Combining affix setting with group settings results in namespace issues"
} | {
"body": "Currently AffixSetting support only list settings (among settings who has a complex pattern), not group settings.\r\nThis PR is to make AffixSetting also accept group settings.\r\nRelates to #28047",
"number": 28122,
"review_comments": [],
"title": "Add group settings support to AffixSetting"
} | {
"commits": [
{
"message": "Add group settings support to AffixSetting\n\nCurrently AffixSetting support only list settings (among settings who has a complex pattern), not group settings.\nThis PR is to make AffixSetting also accept group settings."
}
],
"files": [
{
"diff": "@@ -1308,8 +1308,10 @@ public static final class AffixKey implements Key {\n if (suffix == null) {\n pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"((?:[-\\\\w]+[.])*[-\\\\w]+$))\");\n } else {\n- // the last part of this regexp is for lists since they are represented as x.${namespace}.y.1, x.${namespace}.y.2\n- pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"([-\\\\w]+)\\\\.\" + Pattern.quote(suffix) + \")(?:\\\\.\\\\d+)?\");\n+ /**\n+ * the last part of this regexp is to support both {@link ListKey} and {@link GroupKey}\n+ */\n+ pattern = Pattern.compile(\"(\" + Pattern.quote(prefix) + \"([-\\\\w]+)\\\\.\" + Pattern.quote(suffix) + \")(?:\\\\..*)?\");\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -573,26 +573,59 @@ public void testAffixKeySetting() {\n assertTrue(listAffixSetting.hasComplexMatcher());\n assertTrue(listAffixSetting.match(\"foo.test.bar\"));\n assertTrue(listAffixSetting.match(\"foo.test_1.bar\"));\n+ assertTrue(listAffixSetting.match(\"foo.test_1.bar.1\"));\n assertFalse(listAffixSetting.match(\"foo.buzz.baz.bar\"));\n assertFalse(listAffixSetting.match(\"foo.bar\"));\n assertFalse(listAffixSetting.match(\"foo.baz\"));\n assertFalse(listAffixSetting.match(\"foo\"));\n+\n+ Setting<Settings> groupAffixSetting = Setting.affixKeySetting(\"foo.\", \"bar\",\n+ (key) -> Setting.groupSetting(key + \".\", Property.Dynamic, Property.NodeScope));\n+ assertTrue(groupAffixSetting.hasComplexMatcher());\n+ assertTrue(groupAffixSetting.match(\"foo.test.bar\"));\n+ assertTrue(groupAffixSetting.match(\"foo.test.bar.1.value\"));\n+ assertTrue(groupAffixSetting.match(\"foo.test.bar.2.value\"));\n+ assertTrue(groupAffixSetting.match(\"foo.buzz.bar.anything\"));\n+ assertFalse(groupAffixSetting.match(\"foo.bar\"));\n+ assertFalse(groupAffixSetting.match(\"foo\"));\n }\n \n public void testAffixSettingNamespaces() {\n- Setting.AffixSetting<Boolean> setting =\n+ Setting.AffixSetting<Boolean> booleanAffixSetting =\n Setting.affixKeySetting(\"foo.\", \"enable\", (key) -> Setting.boolSetting(key, false, Property.NodeScope));\n- Settings build = Settings.builder()\n+ Settings booleanSettins = Settings.builder()\n .put(\"foo.bar.enable\", \"true\")\n .put(\"foo.baz.enable\", \"true\")\n .put(\"foo.boom.enable\", \"true\")\n .put(\"something.else\", \"true\")\n .build();\n- Set<String> namespaces = setting.getNamespaces(build);\n+ Set<String> namespaces = booleanAffixSetting.getNamespaces(booleanSettins);\n assertEquals(3, namespaces.size());\n assertTrue(namespaces.contains(\"bar\"));\n assertTrue(namespaces.contains(\"baz\"));\n assertTrue(namespaces.contains(\"boom\"));\n+\n+ Setting.AffixSetting<List<String>> listAffixSetting = Setting.affixKeySetting(\"foo.\", \"enable\",\n+ (key) -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.NodeScope));\n+ List<String> input = Arrays.asList(\"test1\", \"test2\", \"test3\");\n+ Settings listSettings = Settings.builder().putList(\"foo.bay.enable\", input.toArray(new String[0])).build();\n+ namespaces = listAffixSetting.getNamespaces(listSettings);\n+ assertEquals(1, namespaces.size());\n+ assertTrue(namespaces.contains(\"bay\"));\n+\n+ Setting.AffixSetting<Settings> groupAffixSetting = Setting.affixKeySetting(\"foo.\", \"enable\",\n+ (key) -> Setting.groupSetting(key + \".\", Property.NodeScope));\n+ Settings groupSettings = Settings.builder()\n+ .put(\"foo.test.enable.1.value\", \"1\")\n+ .put(\"foo.test.enable.2.value\", \"2\")\n+ .put(\"foo.test.enable.3.value\", \"3\")\n+ .put(\"foo.bar.enable.boom\", \"something\")\n+ .put(\"foo.bar.enable.buzz\", \"anything\")\n+ .build();\n+ namespaces = groupAffixSetting.getNamespaces(groupSettings);\n+ assertEquals(2, namespaces.size());\n+ assertTrue(namespaces.contains(\"test\"));\n+ assertTrue(namespaces.contains(\"bar\"));\n }\n \n public void testAffixAsMap() {",
"filename": "core/src/test/java/org/elasticsearch/common/settings/SettingTests.java",
"status": "modified"
}
]
} |
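A small standalone check of the regex relaxation in the diff above: the previous suffix pattern only tolerates a trailing numeric list index (`foo.<ns>.bar.1`), while the new one also accepts arbitrary trailing group keys (`foo.<ns>.bar.anything`). The `foo.` prefix and `bar` suffix are just example values.

```java
import java.util.regex.Pattern;

class AffixPatternSketch {
    public static void main(String[] args) {
        String prefix = "foo.", suffix = "bar";
        // Old pattern: optional trailing ".<digits>" for list entries only.
        Pattern listOnly = Pattern.compile(
            "(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\.\\d+)?");
        // New pattern: optional trailing ".<anything>" so group keys match too.
        Pattern listAndGroup = Pattern.compile(
            "(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\..*)?");

        System.out.println(listOnly.matcher("foo.test.bar.1").matches());            // true  (list entry)
        System.out.println(listOnly.matcher("foo.test.bar.anything").matches());     // false (group key)
        System.out.println(listAndGroup.matcher("foo.test.bar.anything").matches()); // true
    }
}
```

Matching the full group key is what allows the namespace capture group (`test` here) to be extracted for group settings as well, which is what the added `getNamespaces` test cases exercise.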
{
"body": "Currently it parses the string as a double, meaning it might lose accuracy if the value is an long which is greater than 2^52. We should try to detect whether the string represents a long or a double first.",
"comments": [
{
"body": "I'll look into this issue.",
"created_at": "2017-12-28T22:10:55Z"
},
{
"body": "@Chaycej are you planning to resolve this issue? If not, I'm willing to do this. :)",
"created_at": "2018-01-05T23:32:20Z"
},
{
"body": "If someone is already working on this please ignore my pull request otherwise let me know your thoughts!",
"created_at": "2018-01-06T17:52:44Z"
}
],
"number": 28012,
"title": "StringTerms.Bucket.getKeyAsNumber should detect integers"
} | {
"body": "This close #28012 ",
"number": 28118,
"review_comments": [
{
"body": "I think a comment why this is being swallowed is in order here.",
"created_at": "2018-01-06T17:59:19Z"
},
{
"body": "I think it's best to name the exception as `ignored` here; it makes it clearer that the swallowing is intentional (and some IDEs complain about unused exceptions unless they are named `ignored`).",
"created_at": "2018-01-06T17:59:49Z"
},
{
"body": "This comment is a translation of the code to English, information that can already be gleaned by reading the code. The comment I am looking for here is *why* the code is written the way that it is written. *Why* are we doing this, why is it okay to swallow an exception?",
"created_at": "2018-01-06T18:49:17Z"
},
{
"body": "This comment is still not explaining why we are doing this. It's again an English translation of the code. May I suggest going back to the original issue and reading why we are making this change? That is what we want to say here, the why.",
"created_at": "2018-01-08T12:09:24Z"
},
{
"body": "@matarrese This is still not what we would want in a comment here. We want to explain why we doing this. Please look at the original issue that provoked this change and leave a comment here that explains the why behind first parsing as a long, and then leniently parsing as a double.",
"created_at": "2018-01-10T11:09:18Z"
}
],
"title": "StringTerms.Bucket.getKeyAsNumber detection type"
} | {
"commits": [
{
"message": "parse long first"
},
{
"message": "comment"
},
{
"message": "comment update"
},
{
"message": "testing returned type"
},
{
"message": "comment update"
},
{
"message": "updated comment, 3rd attempt"
},
{
"message": "Merge branch 'master' into getKeyAsNumber_check_type\n\n* master: (59 commits)\n Correct backport replica rollback to 6.2 (#28181)\n Backport replica rollback to 6.2 (#28181)\n Rename deleteLocalTranslog to createNewTranslog\n AwaitsFix #testRecoveryAfterPrimaryPromotion\n TEST: init unassigned gcp in testAcquireIndexCommit\n Replica start peer recovery with safe commit (#28181)\n Truncate tlog cli should assign global checkpoint (#28192)\n Fix lock accounting in releasable lock\n Add ability to associate an ID with tasks (#27764)\n [DOCS] Removed differencies between text and code (#27993)\n text fixes (#28136)\n Update getting-started.asciidoc (#28145)\n [Docs] Spelling fix in painless-getting-started.asciidoc (#28187)\n Fixed the cat.health REST test to accept 4ms, not just 4.0ms (#28186)\n Do not keep 5.x commits once having 6.x commits (#28188)\n Rename core module to server (#28180)\n upgraded jna from 4.4.0-1 to 4.5.1 (#28183)\n [TEST] Do not call RandomizedTest.scaledRandomIntBetween from multiple threads\n Primary send safe commit in file-based recovery (#28038)\n [Docs] Correct response json in rank-eval.asciidoc\n ..."
},
{
"message": "Merge branch 'master' into pr/28118\n\n* master: (94 commits)\n Completely remove Painless Type from AnalyzerCaster in favor of Java Class. (#28329)\n Fix spelling error\n Reindex: Wait for deletion in test\n Reindex: log more on rare test failure\n Ensure we protect Collections obtained from scripts from self-referencing (#28335)\n [Docs] Fix asciidoc style in composite agg docs\n Adds the ability to specify a format on composite date_histogram source (#28310)\n Provide a better error message for the case when all shards failed (#28333)\n [Test] Re-Add integer_range and date_range field types for query builder tests (#28171)\n Added Put Mapping API to high-level Rest client (#27869)\n Revert change that does not return all indices if a specific alias is requested via get alias api. (#28294)\n Painless: Replace Painless Type with Java Class during Casts (#27847)\n Notify affixMap settings when any under the registered prefix matches (#28317)\n Trim down usages of `ShardOperationFailedException` interface (#28312)\n Do not return all indices if a specific alias is requested via get aliases api.\n [Test] Lower bwc version for rank-eval rest tests\n CountedBitSet doesn't need to extend BitSet. (#28239)\n Calculate sum in Kahan summation algorithm in aggregations (#27807) (#27848)\n Remove the `update_all_types` option. (#28288)\n Add information when master node left to DiscoveryNodes' shortSummary() (#28197)\n ..."
},
{
"message": "Reword comment"
},
{
"message": "Merge branch 'master' into pr/28118\n\n* master: (23 commits)\n Update Netty to 4.1.16.Final (#28345)\n Fix peer recovery flushing loop (#28350)\n REST high-level client: add support for exists alias (#28332)\n REST high-level client: move to POST when calling API to retrieve which support request body (#28342)\n Add Indices Aliases API to the high level REST client (#27876)\n Java Api clean up: remove deprecated `isShardsAcked` (#28311)\n [Docs] Fix explanation for `from` and `size` example (#28320)\n Adapt bwc version after backport #28358\n Always return the after_key in composite aggregation response (#28358)\n Adds test name to MockPageCacheRecycler exception (#28359)\n Adds a note in the `terms` aggregation docs regarding pagination (#28360)\n [Test] Fix DiscoveryNodesTests.testDeltas() (#28361)\n Update packaging tests to work with meta plugins (#28336)\n Remove Painless Type from MethodWriter in favor of Java Class. (#28346)\n [Doc] Fixs typo in reverse-nested-aggregation.asciidoc (#28348)\n Reindex: Shore up rethrottle test\n Only assert single commit iff index created on 6.2\n isHeldByCurrentThread should return primitive bool\n [Docs] Clarify `html` encoder in highlighting.asciidoc (#27766)\n Fix GeoDistance query example (#28355)\n ..."
}
],
"files": [
{
"diff": "@@ -64,10 +64,18 @@ public Object getKey() {\n return getKeyAsString();\n }\n \n+ // this method is needed for scripted numeric aggs\n @Override\n public Number getKeyAsNumber() {\n- // this method is needed for scripted numeric aggs\n- return Double.parseDouble(termBytes.utf8ToString());\n+ /*\n+ * If the term is a long greater than 2^52 then parsing as a double would lose accuracy. Therefore, we first parse as a long and\n+ * if this fails then we attempt to parse the term as a double.\n+ */\n+ try {\n+ return Long.parseLong(termBytes.utf8ToString());\n+ } catch (final NumberFormatException ignored) {\n+ return Double.parseDouble(termBytes.utf8ToString());\n+ }\n }\n \n @Override",
"filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java",
"status": "modified"
},
{
"diff": "@@ -70,6 +70,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n @ESIntegTestCase.SuiteScopeTestCase\n@@ -313,6 +314,7 @@ private void runTestFieldWithPartitionedFiltering(String field) throws Exception\n assertThat(terms.getName(), equalTo(\"terms\"));\n for (Bucket bucket : terms.getBuckets()) {\n assertTrue(foundTerms.add(bucket.getKeyAsNumber()));\n+ assertThat(bucket.getKeyAsNumber(), instanceOf(Double.class));\n }\n }\n assertEquals(expectedCardinality, foundTerms.size());",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/DoubleTermsIT.java",
"status": "modified"
},
{
"diff": "@@ -67,6 +67,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n @ESIntegTestCase.SuiteScopeTestCase\n@@ -431,6 +432,7 @@ public void testScriptSingleValue() throws Exception {\n assertThat(bucket, notNullValue());\n assertThat(key(bucket), equalTo(\"\" + i));\n assertThat(bucket.getKeyAsNumber().intValue(), equalTo(i));\n+ assertThat(bucket.getKeyAsNumber(), instanceOf(Long.class));\n assertThat(bucket.getDocCount(), equalTo(1L));\n }\n }",
"filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/LongTermsIT.java",
"status": "modified"
}
]
} |
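A tiny, purely illustrative example of why the issue above asks for long-first parsing: once an integer exceeds the range a double can represent exactly, parsing the term as a double silently rounds it, while parsing it as a long preserves it. No Elasticsearch types are involved; the class name is made up.

```java
class LongFirstParsingSketch {
    // Parse as a long when possible, fall back to double otherwise
    // (the same order of attempts as the fix in the diff above).
    static Number parseKey(String term) {
        try {
            return Long.parseLong(term);
        } catch (NumberFormatException ignored) {
            return Double.parseDouble(term);
        }
    }

    public static void main(String[] args) {
        String term = "9007199254740993"; // 2^53 + 1, not exactly representable as a double
        System.out.println(Double.parseDouble(term)); // 9.007199254740992E15 -- precision lost
        System.out.println(parseKey(term));           // 9007199254740993    -- exact
    }
}
```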
{
"body": "... yet it is allowed. See this example\r\n\r\n**Elasticsearch version**: 5.2.1\r\n\r\n```\r\nDELETE test\r\nDELETE test2\r\n\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"foo\": {\r\n \"properties\": {\r\n \"x\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT test2\r\n{\r\n \"mappings\": {\r\n \"foo\": {\r\n \"properties\": {\r\n \"x\": {\r\n \"type\": \"date\",\r\n \"format\": [ \"yyyy-MM-dd\" ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET test/foo/_mapping\r\n\r\nGET test2/foo/_mapping\r\n\r\nPUT _bulk\r\n{ \"index\" : {\"_index\" : \"test\", \"_type\" : \"foo\" } }\r\n{ \"x\" : \"2017-03-20\" }\r\n{ \"index\" : {\"_index\" : \"test2\", \"_type\" : \"foo\" } }\r\n{ \"x\" : \"2017-03-20\" }\r\n```\r\n\r\nthe mapping calls return\r\n\r\n```\r\nGET test/foo/_mapping\r\n{\r\n \"test\": {\r\n \"mappings\": {\r\n \"foo\": {\r\n \"properties\": {\r\n \"x\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET test2/foo/_mapping\r\n{\r\n \"test2\": {\r\n \"mappings\": {\r\n \"foo\": {\r\n \"properties\": {\r\n \"x\": {\r\n \"type\": \"date\",\r\n \"format\": \"[yyyy-MM-dd]\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nTrying to index the same document in both indices results in\r\n\r\n```\r\nPUT _bulk\r\n{ \"index\" : {\"_index\" : \"test\", \"_type\" : \"foo\" } }\r\n{ \"x\" : \"2017-03-20\" }\r\n{ \"index\" : {\"_index\" : \"test2\", \"_type\" : \"foo\" } }\r\n{ \"x\" : \"2017-03-20\" }\r\n\r\n{\r\n \"took\": 5,\r\n \"errors\": true,\r\n \"items\": [\r\n {\r\n \"index\": {\r\n \"_index\": \"test\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"AVrq5TOq1NgG2ZBFgd-y\",\r\n \"_version\": 1,\r\n \"result\": \"created\",\r\n \"_shards\": {\r\n \"total\": 2,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"created\": true,\r\n \"status\": 201\r\n }\r\n },\r\n {\r\n \"index\": {\r\n \"_index\": \"test2\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"AVrq5TOq1NgG2ZBFgd-z\",\r\n \"status\": 400,\r\n \"error\": {\r\n \"type\": \"mapper_parsing_exception\",\r\n \"reason\": \"failed to parse [x]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"\"\"Invalid format: \"2017-03-20\"\"\"\"\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n}\r\n```",
"comments": [
{
"body": "I got the same issue where you able to solve the problem ?\r\n\r\n--> Defining Index\r\nPUT /sentiment\r\n{\r\n \"mappings\": {\r\n \"post\": { \r\n \"_all\": { \"enabled\": true }, \r\n \"properties\": { \r\n \"Created_at\": {\"type\": \"date\" , \"format\": \"ddd MMM DD HH:mm:ss ZZ YYYY\"},\r\n \"Hashtages\": { \"type\": \"text\" }, \r\n \"ID\": { \"type\": \"text\" },\r\n \"Lang\": { \"type\": \"text\" },\r\n \"Retweet_count\": { \"type\": \"integer\" },\r\n \"Source\": { \"type\": \"text\" },\r\n \"Text\": { \"type\": \"text\" },\r\n \"favourite_count\":{ \"type\": \"integer\" },\r\n \"sentiment\": { \"type\": \"text\" }\r\n }\r\n }\r\n }\r\n}\r\n\r\n--> Put Query\r\n\r\nPUT /sentiment/post/?pretty\r\n{\r\n \"Source\":\"Twitter\",\r\n \"ID\":\"895590234951430149\",\r\n \"Text\" : \"RT 4\",\r\n \"Lang\":\"en\",\r\n \"Retweet_count\":0,\r\n \"favorite_count\":0,\r\n \"sentiment\":\"Positive\",\r\n \"Created_at\":\"Thu Aug 10 10:18:45 +0000 2017\",\r\n \"Hashtages\":[\"BookLoversDay\"]\r\n}",
"created_at": "2017-08-10T11:15:32Z"
},
{
"body": "This issue reproduces on latest release.\r\n\r\nLet me handle this.",
"created_at": "2018-01-06T00:38:34Z"
}
],
"number": 23650,
"title": "Mapping: Specifying a date format as array creates invalid mapping"
} | {
"body": "Limit date `format` attribute to String values only (to avoid serialization issues)\r\nCloses #23650 ",
"number": 28117,
"review_comments": [
{
"body": "Instead of instance testing here I would suggest changing the method to only take a String argument (the format) and change all call sites to convert to String instead. I should be save to do so since we to it here anyway currently.\r\nThat way we also don't need to throw an exception for bad formats, Jodas \"forPattern\" will do so already.",
"created_at": "2018-07-24T20:13:26Z"
}
],
"title": "support only string format for date, root object & date range"
} | {
"commits": [
{
"message": "support only string format for date, root object & date range"
},
{
"message": "Merge branch 'master' into issue-23650\n\n Conflicts:\n\tcore/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java\n\tcore/src/test/java/org/elasticsearch/index/mapper/DateFieldMapperTests.java\n\tcore/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java\n\tcore/src/test/java/org/elasticsearch/index/mapper/RootObjectMapperTests.java"
},
{
"message": "change proto of parseDateTimeFormatter to support only Strings"
},
{
"message": "Revert \"change proto of parseDateTimeFormatter to support only Strings\"\n\nThis reverts commit 4d41cf83955f96ef499f9b13ea002ffb0b0b281b."
},
{
"message": "Merge branch 'master' into issue-23650"
}
],
"files": [
{
"diff": "@@ -265,7 +265,10 @@ private static IndexOptions nodeIndexOptionValue(final Object propNode) {\n }\n \n public static FormatDateTimeFormatter parseDateTimeFormatter(Object node) {\n- return Joda.forPattern(node.toString());\n+ if (node instanceof String) {\n+ return Joda.forPattern((String) node);\n+ }\n+ throw new IllegalArgumentException(\"Invalid format: [\" + node.toString() + \"]: expected string value\");\n }\n \n public static void parseTermVector(String fieldName, String termVector, FieldMapper.Builder builder) throws MapperParsingException {",
"filename": "server/src/main/java/org/elasticsearch/index/mapper/TypeParsers.java",
"status": "modified"
},
{
"diff": "@@ -414,4 +414,22 @@ public void testMergeText() throws Exception {\n () -> mapper.merge(update.mapping()));\n assertEquals(\"mapper [date] of different type, current_type [date], merged_type [text]\", e.getMessage());\n }\n+\n+ public void testIllegalFormatField() throws Exception {\n+ String mapping = Strings.toString(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"date\")\n+ .array(\"format\", \"test_format\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject());\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n+ assertEquals(\"Invalid format: [[test_format]]: expected string value\", e.getMessage());\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/mapper/DateFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -443,4 +443,22 @@ public void testSerializeDefaults() throws Exception {\n }\n }\n \n+ public void testIllegalFormatField() throws Exception {\n+ String mapping = Strings.toString(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", \"date_range\")\n+ .array(\"format\", \"test_format\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject());\n+\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> parser.parse(\"type\", new CompressedXContent(mapping)));\n+ assertEquals(\"Invalid format: [[test_format]]: expected string value\", e.getMessage());\n+ }\n+\n }",
"filename": "server/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -159,4 +159,30 @@ public void testDynamicTemplates() throws Exception {\n mapper = mapperService.merge(\"type\", new CompressedXContent(mapping3), MergeReason.MAPPING_UPDATE);\n assertEquals(mapping3, mapper.mappingSource().toString());\n }\n+\n+ public void testIllegalFormatField() throws Exception {\n+ String dynamicMapping = Strings.toString(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startArray(\"dynamic_date_formats\")\n+ .startArray().value(\"test_format\").endArray()\n+ .endArray()\n+ .endObject()\n+ .endObject());\n+ String mapping = Strings.toString(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startArray(\"date_formats\")\n+ .startArray().value(\"test_format\").endArray()\n+ .endArray()\n+ .endObject()\n+ .endObject());\n+\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ for (String m : Arrays.asList(mapping, dynamicMapping)) {\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> parser.parse(\"type\", new CompressedXContent(m)));\n+ assertEquals(\"Invalid format: [[test_format]]: expected string value\", e.getMessage());\n+ }\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/mapper/RootObjectMapperTests.java",
"status": "modified"
}
]
} |
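A hedged sketch of the string-only guard added in the diff above, written against plain Joda-Time rather than the Elasticsearch `Joda`/`FormatDateTimeFormatter` wrappers; the surrounding class name is illustrative.

```java
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

class DateFormatParsingSketch {
    // Accept only plain strings for the "format" attribute. An array such as
    // ["yyyy-MM-dd"] would otherwise be toString()-ed into "[yyyy-MM-dd]" and
    // end up as an unusable date format in the mapping.
    static DateTimeFormatter parseDateTimeFormatter(Object node) {
        if (node instanceof String) {
            return DateTimeFormat.forPattern((String) node);
        }
        throw new IllegalArgumentException("Invalid format: [" + node + "]: expected string value");
    }
}
```

Rejecting non-string nodes up front avoids the silent array-to-string conversion that produced the broken `"[yyyy-MM-dd]"` mapping in the issue above.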
{
"body": "Elasticsearch 5.4.1\r\nRollover API problem\r\n\r\nWe are using Rollover to create new index upon a document count condition is reached.\r\n\r\nBut while ingestion is happening, if we run the rollover API, getting below error:\r\n\"Alias [test-schema-active-logs] has more than one indices associated with it [[test-schema-000004, test-schema-000005]], can't execute a single index op\"\r\n\r\nFrom Rollover API understanding write alias should automatically switch to new index (test-schema-000005) created and move the alias from the old index (test-schema-000004). How can this error be handled?",
"comments": [
{
"body": "@ankitachow, Please ask questions on [https://discuss.elastic.co](https://discuss.elastic.co) where we can give you a better support. We use Github for bug reports and feature requests. Thank you.",
"created_at": "2017-10-11T20:12:42Z"
},
{
"body": "TBH it sounds like a bug to me. But would be great if @ankitachow shares a full script to reproduce all the steps done.\r\n\r\n@ankitachow could you do that?",
"created_at": "2017-10-11T20:17:47Z"
},
{
"body": "@dadoonet Sure see below steps.\r\n\r\nWe are ingesting data through ES-Hadoop connector continuously. Below are the steps conducted in production:\r\n1.\tData ingested with a template having below information\r\na.\twrite & search alias\r\nb.\tno. of shards = no. of nodes\r\nc.\tbest_compression\r\n2.\tRollover based on certain doc count\r\n3.\tShrink the index. The template of the compressed index has\r\na.\tNo. of shards = 1\r\nb.\tbest_compression\r\n4.\tRemove the search-logs alias from the old index and add it to the compressed index\r\n5.\tForcemerge\r\n6.\tDelete the old index\r\n\r\nDuring Rollover, sometimes(80%) we are getting above error in Spark job and its stopping ingestion. New rolled over index getting created properly. Once we start ingesting again, data gets written to new index created from rollover.\r\n\r\nBelow is our rollover API command.\r\n\r\nRESPONSE=$(curl -s -XPOST ''$ip':9200/'$active_writealias'/_rollover?pretty=true' -d'\r\n{\r\n \"conditions\": {\r\n \"max_docs\": \"'\"$rollovercond\"'\"\r\n }\r\n}')\r\n\r\nBut if we run the script after ingestion, there's no error.",
"created_at": "2017-10-11T20:26:30Z"
},
{
"body": "The error you are getting seems to indicate that 2 indices are defined behind the write alias.\r\nThis should not happen.\r\n\r\nDo you call `_rollover` API only from one single machine? Or is it executed from different nodes?\r\nCan you share the elasticsearch logs when the problem appears? I mean some lines before the problem and some lines after if any.",
"created_at": "2017-10-11T20:55:39Z"
},
{
"body": "The template for the test-schema is as follows:\r\n{\r\n \"template\": \"test-schema-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 13,\r\n \"number_of_replicas\": 0, \r\n\t\"refresh_interval\" : \"30s\",\r\n\t\"codec\":\"best_compression\"\r\n },\r\n \"aliases\": {\r\n \"test-schema-active-logs\": {},\r\n \"test-schema-search-logs\": {}\r\n },\r\n \"mappings\":{ \r\n\t\t\"test-log\":{ \r\n\t\t\t\"_all\":{\"enabled\": false},\r\n\t\t\t\"properties\":{ .....\r\n\r\nSo, rollover is creating the new index and also creating the write alias point to the new index which shouldn't happen.\r\nThe rollover API is called from only 1 machine. There's no problem with the Elasticsearch front. So, elasticsearch rollover runs fine. The ES hadoop spark job faies giving below error.\r\n\r\n \"Alias [test-schema-active-logs] has more than one indices associated with it [[test-schema-000004, test-schema-000005]], can't execute a single index op\"",
"created_at": "2017-10-11T21:12:18Z"
},
{
"body": "I also met this bug,especially in multi-thread writing is very easy to happen.\r\n_aliases API wrote 'Renaming an alias is a simple remove then add operation within the same API. This operation is atomic, no need to worry about a short period of time where the alias does not point to an index' in document,so this bug is because you did not use this api or _aliases API has this bug?",
"created_at": "2017-12-01T01:29:18Z"
},
{
"body": "I can reproduce this with the below test snippet.\r\n\r\n```java\r\npublic void testIndexingAndRolloverConcurrently() throws Exception {\r\n client().admin().indices().preparePutTemplate(\"logs\")\r\n .setPatterns(Collections.singletonList(\"logs-*\"))\r\n .addAlias(new Alias(\"logs-write\"))\r\n .get();\r\n assertAcked(client().admin().indices().prepareCreate(\"logs-000001\").get());\r\n ensureYellow(\"logs-write\");\r\n\r\n final AtomicBoolean done = new AtomicBoolean();\r\n final Thread rolloverThread = new Thread(() -> {\r\n while (done.get() == false) {\r\n client().admin().indices()\r\n .prepareRolloverIndex(\"logs-write\")\r\n .addMaxIndexSizeCondition(new ByteSizeValue(1))\r\n .get();\r\n }\r\n });\r\n rolloverThread.start();\r\n try {\r\n int numDocs = 10_000;\r\n for (int i = 0; i < numDocs; i++) {\r\n logger.info(\"--> add doc [{}]\", i);\r\n IndexResponse resp = index(\"logs-write\", \"doc\", Integer.toString(i), \"{}\");\r\n assertThat(resp.status(), equalTo(RestStatus.CREATED));\r\n }\r\n } finally {\r\n done.set(true);\r\n rolloverThread.join();\r\n }\r\n}\r\n```\r\n\r\nWe create an index with alias (via template) and update index alias in two separate cluster tasks. This can be a root cause of this issue.\r\n\r\nhttps://github.com/dnhatn/elasticsearch/blob/c7ce5a07f26f09ec4e5e92d07aa08f338fbb41b8/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java#L133-L135",
"created_at": "2018-01-01T00:02:26Z"
},
{
"body": "Hi guys, I am having a similar issue with a newer version\r\n\r\nSo, We were trying a rollover indice with our newly setup cluster with Elasticsearch 6.2\r\n\r\nWhen we are trying to rollover the indice, It gives the following error. \r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nPlease find details below of the template that we are having and steps that I used. This can be fairly used to reproduce the issue. \r\n\r\n\r\nTemplate name : fusion-logs\r\n```\r\nPUT _template/fusion-logs\r\n{\r\n \"template\": \"fusion-logs-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\"\r\n },\r\n \"aliases\": {\r\n \"active-fusion-logs\": {},\r\n \"search-fusion-logs\": {}\r\n },\r\n \"mappings\": {\r\n \"logs\": {\r\n \"properties\": {\r\n \"host\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_result\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nWe inserted a 1000 documents in the above active-fusion-logs index and then used the following to roll over the index\r\n\r\n```\r\nPOST active-fusion-logs/_rollover\r\n{\r\n \"conditions\": {\r\n \"max_docs\": 1000\r\n }\r\n}\r\n```\r\n\r\nThe above API gives us an error when we are trying to rollover\r\n\r\nSome other info about the cluster. \r\n1. There is no other index other than the above index.\r\n2. active-fusion-logs is aliased to just one write index\r\n3. search-fusion-logs is aliased to multiple indexes\r\n\r\nAlso, I had tried the same thing with Elasticsearch 5.3.2 and it worked as expected without the error.",
"created_at": "2018-03-14T07:17:32Z"
},
{
"body": "> \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n\r\nYou should remove alias `[active-fusion-logs]` from the index template ` [fusion-logs]`.\r\n\r\n````\r\nPUT _template/fusion-logs\r\n{\r\n \"template\": \"fusion-logs-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\"\r\n },\r\n\r\n \"aliases\": {\r\n \"active-fusion-logs\": {}, // Remove this line\r\n \"search-fusion-logs\": {}\r\n },\r\n \"mappings\": {\r\n \"logs\": {\r\n \"properties\": {\r\n \"host\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_result\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```",
"created_at": "2018-03-14T12:29:54Z"
},
{
"body": "Oh. Great. That worked. I am not sure how I missed that out. Thanks! @dnhatn ",
"created_at": "2018-03-14T12:41:58Z"
},
{
"body": "I having this problem with 6.4\r\n\r\n\r\nPUT _template/application-logs\r\n{\r\n \"template\": \"xx-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\",\r\n \"index\": {\r\n \"codec\": \"best_compression\",\r\n \"mapping\": {\r\n \"total_fields\": {\r\n \"limit\": \"10000\"\r\n }\r\n },\r\n \"refresh_interval\": \"5s\"\r\n }\r\n },\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"date\": {\"type\": \"date\",\"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"},\r\n \"logData\": {\"type\": \"text\"},\r\n \"message\": {\"type\": \"text\"},\r\n \"logger_name\": {\"type\": \"keyword\"},\r\n \"thread_name\": {\"type\": \"keyword\"},\r\n \"level\": {\"type\": \"keyword\"},\r\n \"levelvalue\": {\"type\": \"long\"},\r\n \"stack_trace\": {\"type\": \"text\"}\r\n }\r\n }\r\n }, \r\n \"aliases\": {\r\n \"search-application-logs\": {}\r\n }\r\n}\r\n\r\nPOST /search-application-logs/_rollover?dry_run\r\n{\r\n \"conditions\": {\r\n \"max_age\": \"1d\",\r\n \"max_docs\": 5,\r\n \"max_size\": \"5gb\"\r\n }\r\n}\r\n \"reason\": \"Rollover alias [search-application-logs] can point to multiple indices, found duplicated alias [[search-application-logs]] in index template [application-logs]\"\r\n\r\nI would like to setup rollover policy on alias so it would take effect on all the indexes that follow pattern setup in template. ",
"created_at": "2018-09-10T18:15:22Z"
},
{
"body": "My application will follow the date format similar to mentioned in this article https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html\r\n\r\nMultiple indexes logs-2018.09.09-1 and logs-2018.09.10-1 would be pointing to same alias \"logs_write\". how to best setup rollover in this type of situation?\r\n\r\nPUT logs-2018.09.09-1\r\n{\r\n \"aliases\": {\r\n \"logs_write\": {}\r\n }\r\n}\r\n\r\nPUT logs-2018.09.10-1\r\n{\r\n \"aliases\": {\r\n \"logs_write\": {}\r\n }\r\n}\r\n\r\nPUT logs-2018.09.10-1/_doc/1\r\n{\r\n \"message\": \"a dummy log\"\r\n}\r\n\r\nPOST logs_write/_refresh\r\n\r\nPOST /logs_write/_rollover \r\n{\r\n \"conditions\": {\r\n \"max_docs\": \"1\"\r\n }\r\n}\r\n\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"source alias maps to multiple indices\"\r\n",
"created_at": "2018-09-10T18:23:47Z"
},
{
"body": "I am getting the same problem as @kkr78. I just have ONE index though. This is occurring on 6.3.0.\r\n\r\n_\"reason\": \"Rollover alias [my-index] can point to multiple indices, found duplicated alias [[my-index]] in index template [mytemplate]\"_\r\n\r\n**Index : my-index-2018.09.01-1**\r\n**Alias : my-index**\r\n```\r\n{\r\n \"mytemplate\": {\r\n \"order\": 0,\r\n \"index_patterns\": [\r\n \"my-index-*\"\r\n ],\r\n \"settings\": {},\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"@timestamp\": {\r\n \"type\": \"date\"\r\n }\r\n }\r\n }\r\n },\r\n \"aliases\": {\r\n \"my-index\": {}\r\n }\r\n }\r\n}\r\n```",
"created_at": "2018-10-23T20:29:53Z"
}
],
"number": 26976,
"title": "Alias [test-schema-active-logs] has more than one indices associated with it [[......]], can't execute a single index op"
} | {
"body": "If a newly created index from a rollover request matches with an index template whose aliases contains the rollover request alias, the alias will point to multiple indices. This will cause indexing requests to be rejected. To avoid such situation, we make sure that there is no duplicated alias before creating a new index; otherwise abort and report an error to the caller.\r\n\r\nCloses #26976\r\n \r\n ",
"number": 28110,
"review_comments": [],
"title": "Fail rollover if duplicated alias found in templates"
} | {
"commits": [
{
"message": "Fail rollover if duplicated alias found in templates\n\nIf newly created index matches with an index template whose aliases\ncontains the rollover request alias, the rollover alias will point to\nmultiple indices. This causes indexing requests to be rejected. To avoid\nthis situation, we make sure that there is no duplicated alias before\ncreating a new index; otherwise report an error to the caller."
}
],
"files": [
{
"diff": "@@ -37,9 +37,11 @@\n import org.elasticsearch.cluster.metadata.AliasOrIndex;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexAliasesService;\n+import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -115,6 +117,7 @@ protected void masterOperation(final RolloverRequest rolloverRequest, final Clus\n : generateRolloverIndexName(sourceProvidedName, indexNameExpressionResolver);\n final String rolloverIndexName = indexNameExpressionResolver.resolveDateMathExpression(unresolvedName);\n MetaDataCreateIndexService.validateIndexName(rolloverIndexName, state); // will fail if the index already exists\n+ checkNoDuplicatedAliasInIndexTemplate(metaData, rolloverIndexName, rolloverRequest.getAlias());\n client.admin().indices().prepareStats(sourceIndexName).clear().setDocs(true).execute(\n new ActionListener<IndicesStatsResponse>() {\n @Override\n@@ -237,4 +240,19 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final Stri\n .mappings(createIndexRequest.mappings());\n }\n \n+ /**\n+ * If the newly created index matches with an index template whose aliases contains the rollover alias,\n+ * the rollover alias will point to multiple indices. This causes indexing requests to be rejected.\n+ * To avoid this, we make sure that there is no duplicated alias in index templates before creating a new index.\n+ */\n+ static void checkNoDuplicatedAliasInIndexTemplate(MetaData metaData, String rolloverIndexName, String rolloverRequestAlias) {\n+ final List<IndexTemplateMetaData> matchedTemplates = MetaDataIndexTemplateService.findTemplates(metaData, rolloverIndexName);\n+ for (IndexTemplateMetaData template : matchedTemplates) {\n+ if (template.aliases().containsKey(rolloverRequestAlias)) {\n+ throw new IllegalArgumentException(String.format(Locale.ROOT,\n+ \"Rollover alias [%s] can point to multiple indices, found duplicated alias [%s] in index template [%s]\",\n+ rolloverRequestAlias, template.aliases().keys(), template.name()));\n+ }\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java",
"status": "modified"
},
{
"diff": "@@ -277,7 +277,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n \n // we only find a template when its an API call (a new index)\n // find templates, highest order are better matching\n- List<IndexTemplateMetaData> templates = findTemplates(request, currentState);\n+ List<IndexTemplateMetaData> templates = MetaDataIndexTemplateService.findTemplates(currentState.metaData(), request.index());\n \n Map<String, Custom> customs = new HashMap<>();\n \n@@ -564,22 +564,6 @@ public void onFailure(String source, Exception e) {\n }\n super.onFailure(source, e);\n }\n-\n- private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {\n- List<IndexTemplateMetaData> templateMetadata = new ArrayList<>();\n- for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {\n- IndexTemplateMetaData metadata = cursor.value;\n- for (String template: metadata.patterns()) {\n- if (Regex.simpleMatch(template, request.index())) {\n- templateMetadata.add(metadata);\n- break;\n- }\n- }\n- }\n-\n- CollectionUtil.timSort(templateMetadata, Comparator.comparingInt(IndexTemplateMetaData::order).reversed());\n- return templateMetadata;\n- }\n }\n \n private void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n \n import com.carrotsearch.hppc.cursors.ObjectCursor;\n \n+import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.support.master.MasterNodeRequest;\n@@ -48,6 +49,7 @@\n \n import java.util.ArrayList;\n import java.util.Collections;\n+import java.util.Comparator;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n@@ -193,6 +195,23 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n });\n }\n \n+ /**\n+ * Finds index templates whose index pattern matched with the given index name.\n+ * The result is sorted by {@link IndexTemplateMetaData#order} descending.\n+ */\n+ public static List<IndexTemplateMetaData> findTemplates(MetaData metaData, String indexName) {\n+ final List<IndexTemplateMetaData> matchedTemplates = new ArrayList<>();\n+ for (ObjectCursor<IndexTemplateMetaData> cursor : metaData.templates().values()) {\n+ final IndexTemplateMetaData template = cursor.value;\n+ final boolean matched = template.patterns().stream().anyMatch(pattern -> Regex.simpleMatch(pattern, indexName));\n+ if (matched) {\n+ matchedTemplates.add(template);\n+ }\n+ }\n+ CollectionUtil.timSort(matchedTemplates, Comparator.comparingInt(IndexTemplateMetaData::order).reversed());\n+ return matchedTemplates;\n+ }\n+\n private static void validateAndAddTemplate(final PutRequest request, IndexTemplateMetaData.Builder templateBuilder,\n IndicesService indicesService, NamedXContentRegistry xContentRegistry) throws Exception {\n Index createdIndex = null;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java",
"status": "modified"
},
{
"diff": "@@ -277,4 +277,15 @@ public void testRolloverMaxSize() throws Exception {\n assertThat(\"No rollover with an empty index\", response.isRolledOver(), equalTo(false));\n }\n }\n+\n+ public void testRejectIfAliasFoundInTemplate() throws Exception {\n+ client().admin().indices().preparePutTemplate(\"logs\")\n+ .setPatterns(Collections.singletonList(\"logs-*\")).addAlias(new Alias(\"logs-write\")).get();\n+ assertAcked(client().admin().indices().prepareCreate(\"logs-000001\").get());\n+ ensureYellow(\"logs-write\");\n+ final IllegalArgumentException error = expectThrows(IllegalArgumentException.class,\n+ () -> client().admin().indices().prepareRolloverIndex(\"logs-write\").addMaxIndexSizeCondition(new ByteSizeValue(1)).get());\n+ assertThat(error.getMessage(), equalTo(\n+ \"Rollover alias [logs-write] can point to multiple indices, found duplicated alias [[logs-write]] in index template [logs]\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.settings.Settings;\n@@ -40,11 +41,13 @@\n import org.elasticsearch.test.ESTestCase;\n import org.mockito.ArgumentCaptor;\n \n+import java.util.Arrays;\n import java.util.List;\n import java.util.Locale;\n import java.util.Set;\n \n import static org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.evaluateConditions;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasSize;\n import static org.mockito.Matchers.any;\n@@ -241,6 +244,19 @@ public void testCreateIndexRequest() throws Exception {\n assertThat(createIndexRequest.cause(), equalTo(\"rollover_index\"));\n }\n \n+ public void testRejectDuplicateAlias() throws Exception {\n+ final IndexTemplateMetaData template = IndexTemplateMetaData.builder(\"test-template\")\n+ .patterns(Arrays.asList(\"foo-*\", \"bar-*\"))\n+ .putAlias(AliasMetaData.builder(\"foo-write\")).putAlias(AliasMetaData.builder(\"bar-write\"))\n+ .build();\n+ final MetaData metaData = MetaData.builder().put(createMetaData(), false).put(template).build();\n+ String indexName = randomFrom(\"foo-123\", \"bar-xyz\");\n+ String aliasName = randomFrom(\"foo-write\", \"bar-write\");\n+ final IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> TransportRolloverAction.checkNoDuplicatedAliasInIndexTemplate(metaData, indexName, aliasName));\n+ assertThat(ex.getMessage(), containsString(\"index template [test-template]\"));\n+ }\n+\n private IndicesStatsResponse createIndicesStatResponse(long totalDocs, long primaryDocs) {\n final CommonStats primaryStats = mock(CommonStats.class);\n when(primaryStats.getDocs()).thenReturn(new DocsStats(primaryDocs, 0, between(1, 10000)));",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverActionTests.java",
"status": "modified"
},
{
"diff": "@@ -20,8 +20,10 @@\n package org.elasticsearch.action.admin.indices.template.put;\n \n import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.AliasValidator;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService.PutRequest;\n@@ -38,16 +40,17 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n-import java.util.HashMap;\n import java.util.HashSet;\n import java.util.List;\n-import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.CountDownLatch;\n+import java.util.stream.Collectors;\n \n import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.instanceOf;\n+import static org.hamcrest.Matchers.contains;\n+import static org.hamcrest.Matchers.empty;\n \n public class MetaDataIndexTemplateServiceTests extends ESSingleNodeTestCase {\n public void testIndexTemplateInvalidNumberOfShards() {\n@@ -154,6 +157,18 @@ public void testAliasInvalidFilterInvalidJson() throws Exception {\n assertThat(errors.get(0).getMessage(), equalTo(\"failed to parse filter for alias [invalid_alias]\"));\n }\n \n+ public void testFindTemplates() throws Exception {\n+ client().admin().indices().prepareDeleteTemplate(\"*\").get(); // Delete all existing templates\n+ putTemplateDetail(new PutRequest(\"test\", \"foo-1\").patterns(Arrays.asList(\"foo-*\")).order(1));\n+ putTemplateDetail(new PutRequest(\"test\", \"foo-2\").patterns(Arrays.asList(\"foo-*\")).order(2));\n+ putTemplateDetail(new PutRequest(\"test\", \"bar\").patterns(Arrays.asList(\"bar-*\")).order(between(0, 100)));\n+ final ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ assertThat(MetaDataIndexTemplateService.findTemplates(state.metaData(), \"foo-1234\").stream()\n+ .map(IndexTemplateMetaData::name).collect(Collectors.toList()), contains(\"foo-2\", \"foo-1\"));\n+ assertThat(MetaDataIndexTemplateService.findTemplates(state.metaData(), \"bar-xyz\").stream()\n+ .map(IndexTemplateMetaData::name).collect(Collectors.toList()), contains(\"bar\"));\n+ assertThat(MetaDataIndexTemplateService.findTemplates(state.metaData(), \"baz\"), empty());\n+ }\n \n private static List<Throwable> putTemplate(NamedXContentRegistry xContentRegistry, PutRequest request) {\n MetaDataCreateIndexService createIndexService = new MetaDataCreateIndexService(",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java",
"status": "modified"
}
]
} |
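For readers who land on this record with the same error, here is a minimal sketch of the setup the advice and fix above expect, reusing the names from the report: keep only the search alias in the template and attach the rollover (write) alias to the bootstrap index, so the template never re-adds that alias to the index created by `_rollover`.

```
PUT _template/fusion-logs
{
  "template": "fusion-logs-*",
  "aliases": {
    "search-fusion-logs": {}
  }
}

# The write alias lives on exactly one index; _rollover swaps it to the new index.
PUT fusion-logs-000001
{
  "aliases": {
    "active-fusion-logs": {}
  }
}

POST active-fusion-logs/_rollover
{
  "conditions": {
    "max_docs": 1000
  }
}
```

With this layout the rolled-over index (e.g. fusion-logs-000002) still picks up `search-fusion-logs` from the template, while `active-fusion-logs` always points to a single write index, so the check added in this PR no longer rejects the rollover.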
{
"body": "In elasticsearch 6.1.0 and 6.1.1 I can not use variable substitutions for x-pack related configuration in elasticsearch. Also mentioned here: https://discuss.elastic.co/t/x-pack-6-1-0-does-not-handle-variable-substitutions/111953\r\n`\r\nxpack:\r\n security:\r\n authc:\r\n realms:\r\n ssl:\r\n certificate_authorities: [ \"${ELASTICSEARCH_CONFIG_PATH}/certs/myca.crt\" ]\r\n`\r\nI am receiving:\r\n`java.nio.file.NoSuchFileException: /app/volumes/config/${ELASTICSEARCH_CONFIG_PATH}/certs/myca.crt`\r\nLast known working version is 6.0.1. I am quite surprised this bug passed release testing cycle so I tried to find if this is a feature but it seems it is not.",
"comments": [
{
"body": "On its face this looks like an X-Pack issue which we do not handle here as this is for open source Elasticsearch only. However, this is a core feature implemented in open source Elasticsearch.\r\n\r\nHave you set `ELASTICSEARCH_CONFIG_PATH`? What does \r\n\r\n```\r\n$ echo $ELASTICSEARCH_CONFIG_PATH\r\n```\r\n\r\nshow? Are you sure that you exported the environment variable? If you start the process (suspend it before it fails) and get its pid, does `/proc/<PID>/environ` show that environment variable as being exposed to Elasticsearch?\r\n\r\nAre you sure that you did not mean `ES_CONF_PATH`?\r\n\r\n> I am quite surprised this bug passed release testing cycle so I tried to find if this is a feature but it seems it is not.\r\n\r\nI am not sure what this mean. Bugs happen. So does user error. Let's figure it out together.",
"created_at": "2017-12-20T14:24:05Z"
},
{
"body": "I run ES in docker. I use my entrypoint script which exports ELASTICSEARCH_CONFIG_PATH variable before starting elasticsearch. I know the variable is exported (even see it in environ). To fix the issue I just need to build the docker image with older ES - 6.0.1 (not 6.1.0 or 6.1.1). The issue can be easily replicated by very simple installation of elasticsearch + x-pack without the docker. I do not know if this is elasticsearch or x-pack problem or their integration issue, but in pure elasticsearch var substition works (for example for path.data).\r\n\r\n> I am not sure what this mean. Bugs happen. So does user error. Let's figure it out together.\r\n\r\nI wanted to say it looked like a feature for the first time - no one reported it, I expected a lot of people will be affected by this.\r\n",
"created_at": "2017-12-20T14:45:34Z"
},
{
"body": "Would you share a Dockerfile including entrypoint and reproduction of this issue please?",
"created_at": "2017-12-20T14:50:10Z"
},
{
"body": "Dockerfile:\r\n```\r\nFROM myrepository/elastic/common\r\n\r\nARG APP_VERSION\r\nARG FILES_REPOSITORY\r\n\r\nRUN yum install -y java-1.8.0-openjdk && yum clean all\r\n\r\n# install elasticsearch\r\nRUN cd /opt && \\\r\n wget -q \"${FILES_REPOSITORY}/elasticsearch/elasticsearch-${APP_VERSION}.tar.gz\" && \\\r\n tar -zxf elasticsearch-${APP_VERSION}.tar.gz && \\\r\n rm -f elasticsearch-${APP_VERSION}.tar.gz && \\\r\n mv /opt/elasticsearch* /opt/elasticsearch && \\\r\n chmod 755 -- /opt/elasticsearch/bin/*\r\n\r\n# set required env vars\r\nENV PATH=/opt/elasticsearch/bin:$PATH \\\r\n JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk\r\n\r\n# install plugins\r\nRUN for plugin in ingest-geoip ingest-user-agent x-pack; do elasticsearch-plugin install --batch \"${FILES_REPOSITORY}/elasticsearch/plugins/${plugin}-${APP_VERSION}.zip\"; done\r\n\r\nCOPY files/app /app\r\nRUN chmod 755 /app/init/entrypoint.sh\r\n\r\nCMD [ \"/app/init/entrypoint.sh\" ]\r\n```\r\nPart of the entrypoint.sh (without comments, var, ... checking):\r\n```\r\n#!/bin/bash\r\n\r\n. /app/init/libs/bootstrap-all.inc.sh\r\n\r\n. /app/init/vars.inc.sh\r\n\r\n\r\n# check users, set stdin/out perms, set TZ, ...\r\ninit_base_environment\r\n\r\nes_opts=\"\\\r\n-Enetwork.host=0.0.0.0 \\\r\n-Enetwork.bind_host=0.0.0.0 \\\r\n-Etransport.bind_host=0.0.0.0 \\\r\n-Ehttp.bind_host=0.0.0.0 \\\r\n-Epath.data=${_ELASTICSEARCH_DATA_PATH} \\\r\n-Epath.logs=${_ELASTICSEARCH_LOGS_PATH} \\\r\n-Ehttp.port=${ELASTICSEARCH_CLIENTS_HTTP_PORT} \\\r\n-Etransport.tcp.port=${ELASTICSEARCH_NODES_TRANSPORT_PORT} \\\r\n-Etransport.profiles.client.port=${ELASTICSEARCH_CLIENTS_TRANSPORT_PORT} \\\r\n-Etransport.profiles.client.xpack.security.type=client \\\r\n-Ebootstrap.memory_lock=true \\\r\n\"\r\n\r\n[[ -n \"$ELASTICSEARCH_PUBLIC_HOST\" ]] && es_opts=\"-Enetwork.publish_host=\\\"${ELASTICSEARCH_PUBLIC_HOST}\\\" -Etransport.profiles.client.publish_host=\\\"${ELASTICSEARCH_PUBLIC_HOST}\\\" $es_opts\"\r\n[[ -n \"$ELASTICSEARCH_INTERNODE_HOST\" ]] && es_opts=\"-Etransport.publish_host=\\\"${ELASTICSEARCH_INTERNODE_HOST}\\\" -Ehttp.publish_host=\\\"${ELASTICSEARCH_INTERNODE_HOST}\\\" $es_opts\"\r\n[[ -n \"$ELASTICSEARCH_CLUSTER_NAME\" ]] && es_opts=\"-Ecluster.name=${ELASTICSEARCH_CLUSTER_NAME} $es_opts\"\r\n[[ -n \"$ELASTICSEARCH_NODE_NAME\" ]] && es_opts=\"-Enode.name=${ELASTICSEARCH_NODE_NAME} $es_opts\"\r\n\r\n\r\n# security overrides\r\nif is_true \"$ELASTICSEARCH_DISABLE_SECURITY_OVERRIDES\"; then\r\n log_warn \"!!! 
WARNING: BASIC SECURITY OVERRIDES ARE DISABLED !!!\"\r\nelse\r\n log_info \"Setting basic security overrides\"\r\n\r\nes_opts=\"$es_opts \\\r\n-Expack.ssl.key=${_ELASTICSEARCH_CLIENTS_SSL_KEY_FILE} \\\r\n-Expack.ssl.certificate=${_ELASTICSEARCH_CLIENTS_SSL_CERT_FILE} \\\r\n-Expack.ssl.certificate_authorities=${_ELASTICSEARCH_CLIENTS_SSL_CA_FILE} \\\r\n-Expack.ssl.supported_protocols=TLSv1.2 \\\r\n-Expack.ssl.client_authentication=required \\\r\n-Expack.security.enabled=true \\\r\n-Expack.security.http.ssl.enabled=true \\\r\n-Expack.security.http.ssl.client_authentication=optional \\\r\n-Expack.security.http.ssl.supported_protocols=TLSv1.2 \\\r\n-Expack.security.transport.ssl.enabled=true \\\r\n-Expack.security.transport.ssl.key=${_ELASTICSEARCH_NODES_SSL_KEY_FILE} \\\r\n-Expack.security.transport.ssl.certificate=${_ELASTICSEARCH_NODES_SSL_CERT_FILE} \\\r\n-Expack.security.transport.ssl.certificate_authorities=${_ELASTICSEARCH_NODES_SSL_CA_FILE} \\\r\n-Expack.security.transport.ssl.supported_protocols=TLSv1.2 \\\r\n-Expack.security.transport.ssl.client_authentication=required \\\r\n-Etransport.profiles.client.xpack.security.ssl.key=${_ELASTICSEARCH_CLIENTS_SSL_KEY_FILE} \\\r\n-Etransport.profiles.client.xpack.security.ssl.certificate=${_ELASTICSEARCH_CLIENTS_SSL_CERT_FILE} \\\r\n-Etransport.profiles.client.xpack.security.ssl.certificate_authorities=${_ELASTICSEARCH_CLIENTS_SSL_CA_FILE} \\\r\n-Etransport.profiles.client.xpack.security.ssl.supported_protocols=TLSv1.2 \\\r\n-Etransport.profiles.client.xpack.security.ssl.client_authentication=optional \\\r\n-Expack.security.authc.token.enabled=true \\\r\n\"\r\nfi\r\n\r\n\r\n# defaults\r\nELASTICSEARCH_HEAP_SIZE=\"${ELASTICSEARCH_HEAP_SIZE:-1g}\"\r\n\r\n\r\nES_JAVA_OPTS=\"-Xms${ELASTICSEARCH_HEAP_SIZE} -Xmx${ELASTICSEARCH_HEAP_SIZE} $ES_JAVA_OPTS\"\r\nexport ES_JAVA_OPTS=\"-Des.cgroups.hierarchy.override=/ -Dlog4j2.disable.jmx=true $ES_JAVA_OPTS\"\r\n\r\nexport ES_HOME=\"/opt/elasticsearch\"\r\n\r\nexport ES_PATH_CONF=\"${_ELASTICSEARCH_CONFIG_PATH}\"\r\n\r\n# useful for path reference in config files, where we can use this variable ... useful - this is independent of ES own ES_PATH_CONF var which could be renamed in future versions again\r\nexport ELASTICSEARCH_CONFIG_PATH=\"$_ELASTICSEARCH_CONFIG_PATH\"\r\n\r\n\r\ncd /opt/elasticsearch\r\nswitch_app \"./bin/elasticsearch --verbose -p /var/run/elastic/elasticsearch.pid $es_opts\"\r\n\r\n\r\n\r\n\r\n\r\n... where switch_app is:\r\nfunction switch_app {\r\n log_info \"Switching to application process\"\r\n eval \"exec gosu elastic $*\" \r\n}\r\n```\r\nTo reproduce the issue just try to use ${ELASTICSEARCH_CONFIG_PATH} in the arbitrary place under xpack: configuration. For example to set certificate_authorities.",
"created_at": "2017-12-20T15:30:50Z"
},
{
"body": "I want to help you but this is too far from a useable reproduction. For example, you are trying to switch to the `elastic` user yet you never created this user. Yes, I could do this, but this is the Nth hurdle in trying to get a working reproduction from what you've provided where now N is too large (I do not know your base image but `gosu` is not installed, `wget` is not installed, you have referenced several scripts in the entrypoint that are not provided here, etc. (really, there's more)). I need to put this back on you: please provide a simple reproduction that I can use to debug this. It should not take me more than a few minutes from what you provide to have a working reproduction of the issue that I can iterate on.",
"created_at": "2017-12-21T03:34:10Z"
},
{
"body": "I can reproduce, it seems to be exclusively a problem with array settings, so my guess is that it's a by-product of #26878\r\n\r\nIn 6.0.1 running with this config file:\r\n```\r\ndiscovery.zen.ping.unicast.hosts: [ \"${THIS_DOES_NOT_EXIST}\" ]\r\n```\r\nwould fail to start with:\r\n```\r\nException in thread \"main\" java.lang.IllegalArgumentException: Could not resolve placeholder 'THIS_DOES_NOT_EXIST'\r\n```\r\n\r\nIn 6.1, it starts, but logs:\r\n```\r\n[WARN ][o.e.d.z.UnicastZenPing ] failed to resolve host [${THIS_DOES_NOT_EXIST}]\r\n```\r\n",
"created_at": "2017-12-21T04:31:00Z"
},
{
"body": "It seems to have been a somewhat consious decision in #26878\r\nhttps://github.com/elastic/elasticsearch/pull/26878/files#diff-ec9e18970c4ce90d89639c46bb07218eR1206\r\n\r\nAssigning to @s1monw to comment.\r\n",
"created_at": "2017-12-21T04:47:28Z"
},
{
"body": "Thanks for triaging this one @tvernum.\r\n\r\nThanks for the report @vbohata, no further action is needed from you, we will take it from here.",
"created_at": "2017-12-22T02:24:15Z"
},
{
"body": "@jasontedor thank you. I will wait until this is fixed to upgrade.",
"created_at": "2018-01-04T21:13:24Z"
}
],
"number": 27926,
"title": "Since Elasticsearch 6.1.0 environment variable substitutions in lists do not work"
} | {
"body": "Since Elasticsearch 6.1.0 environment variable substitutions in lists do not work\r\nThis commit fixes it.\r\n\r\nCloses #27926",
"number": 28106,
"review_comments": [
{
"body": "We have `replacePropertyPlaceholders(Function<String, String>)`; we use this rather than pulling from the same settings object as I think this will be a more realistic test?",
"created_at": "2018-01-08T19:14:53Z"
},
{
"body": "Nit: spacing between `while` and `(`.",
"created_at": "2018-01-08T19:15:03Z"
},
{
"body": "Nit: spacing between `!` and `value`.",
"created_at": "2018-01-08T19:15:17Z"
},
{
"body": "Can we use a clearer name than `value2`?",
"created_at": "2018-01-08T19:15:32Z"
},
{
"body": "`ls` is never used again after the next line; do we need this local variable?",
"created_at": "2018-01-08T19:15:56Z"
},
{
"body": "Can this be a local final variable?",
"created_at": "2018-01-08T19:16:05Z"
},
{
"body": "Can we use a clearer name than `value`?",
"created_at": "2018-01-08T19:16:20Z"
},
{
"body": "Can we test replacing different placeholders?",
"created_at": "2018-01-08T19:16:54Z"
},
{
"body": "I don't think this will be needed if you take the suggestion below, but in general `assertFalse` and `assertTrue` should be avoided because they do not give good error messages (a failing expectation would only say `AssertionError`). Here, we could use `assertThat(value, not(isEmptyOrNullString())` as then the expectation would say\r\n\r\n```\r\nExpected: not (null or an empty string)\r\n but: was \"\"\r\n```\r\n\r\nwhich is a lot more helpful!",
"created_at": "2018-01-08T19:21:48Z"
},
{
"body": "+1 for `final`. I am not very clear what you meant by **local** ? Do you recommend to declare `li` before the while loop on the line 1213? \r\n ",
"created_at": "2018-01-08T22:27:57Z"
},
{
"body": "thank you, this is very helpful to know",
"created_at": "2018-01-08T22:39:13Z"
},
{
"body": "Sorry for not being clear. What I mean is that these variables (which are local variables, they only have scope of the current) block I think would clearer if they were final because they I know by the declaration they are never mutated instead of having to keep checking the code \"is this the same object as when the variable was first declared?\".",
"created_at": "2018-01-08T22:50:39Z"
},
{
"body": "Can we use two different placeholders here to test that that works?",
"created_at": "2018-01-09T22:10:55Z"
},
{
"body": "Nit: space after `))` so `)) }`",
"created_at": "2018-01-09T22:11:17Z"
},
{
"body": "Nit: space after `))` so `)) }`",
"created_at": "2018-01-09T22:11:35Z"
},
{
"body": "I wonder if it's simpler to do away with this `if`? If they are equal, it's okay to do the set?",
"created_at": "2018-01-09T22:12:10Z"
}
],
"title": "Fix environment variable substitutions in list setting"
} | {
"commits": [
{
"message": "Fix environment variable substitutions in list setting\n\nSince Elasticsearch 6.1.0 environment variable substitutions in lists do not work\nThis commit fixes it.\n\nCloses #27926"
},
{
"message": "Correcting code"
},
{
"message": "Correcting code"
}
],
"files": [
{
"diff": "@@ -64,6 +64,7 @@\n import java.util.NoSuchElementException;\n import java.util.Set;\n import java.util.TreeMap;\n+import java.util.ListIterator;\n import java.util.concurrent.TimeUnit;\n import java.util.function.Function;\n import java.util.function.Predicate;\n@@ -414,7 +415,7 @@ public List<String> getAsList(String key, List<String> defaultValue, Boolean com\n final Object valueFromPrefix = settings.get(key);\n if (valueFromPrefix != null) {\n if (valueFromPrefix instanceof List) {\n- return ((List<String>) valueFromPrefix); // it's already unmodifiable since the builder puts it as a such\n+ return Collections.unmodifiableList((List<String>) valueFromPrefix);\n } else if (commaDelimited) {\n String[] strings = Strings.splitStringByCommaToArray(get(key));\n if (strings.length > 0) {\n@@ -1042,7 +1043,7 @@ public Builder putList(String setting, String... values) {\n */\n public Builder putList(String setting, List<String> values) {\n remove(setting);\n- map.put(setting, Collections.unmodifiableList(new ArrayList<>(values)));\n+ map.put(setting, new ArrayList<>(values));\n return this;\n }\n \n@@ -1210,10 +1211,20 @@ public boolean shouldRemoveMissingPlaceholder(String placeholderName) {\n Iterator<Map.Entry<String, Object>> entryItr = map.entrySet().iterator();\n while (entryItr.hasNext()) {\n Map.Entry<String, Object> entry = entryItr.next();\n- if (entry.getValue() == null || entry.getValue() instanceof List) {\n+ if (entry.getValue() == null) {\n // a null value obviously can't be replaced\n continue;\n }\n+ if (entry.getValue() instanceof List) {\n+ final ListIterator<String> li = ((List<String>) entry.getValue()).listIterator();\n+ while (li.hasNext()) {\n+ final String settingValueRaw = li.next();\n+ final String settingValueResolved = propertyPlaceholder.replacePlaceholders(settingValueRaw, placeholderResolver);\n+ li.set(settingValueResolved);\n+ }\n+ continue;\n+ }\n+\n String value = propertyPlaceholder.replacePlaceholders(Settings.toString(entry.getValue()), placeholderResolver);\n // if the values exists and has length, we should maintain it in the map\n // otherwise, the replace process resolved into removing it",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -68,6 +68,16 @@ public void testReplacePropertiesPlaceholderSystemProperty() {\n assertThat(settings.get(\"setting1\"), equalTo(value));\n }\n \n+ public void testReplacePropertiesPlaceholderSystemPropertyList() {\n+ final String hostname = randomAlphaOfLength(16);\n+ final String hostip = randomAlphaOfLength(16);\n+ final Settings settings = Settings.builder()\n+ .putList(\"setting1\", \"${HOSTNAME}\", \"${HOSTIP}\")\n+ .replacePropertyPlaceholders(name -> name.equals(\"HOSTNAME\") ? hostname : name.equals(\"HOSTIP\") ? hostip : null)\n+ .build();\n+ assertThat(settings.getAsList(\"setting1\"), contains(hostname, hostip));\n+ }\n+\n public void testReplacePropertiesPlaceholderSystemVariablesHaveNoEffect() {\n final String value = System.getProperty(\"java.home\");\n assertNotNull(value);",
"filename": "core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java",
"status": "modified"
}
]
} |
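As a quick illustration of the behavior this fix restores, here is a minimal Java sketch modeled on the unit test added in this PR. It relies on the same `replacePropertyPlaceholders(Function<String, String>)` overload the test uses (so treat it as test-scoped rather than guaranteed public API), and the resolver lambda is a stand-in for the real environment lookup:

```java
import java.util.List;

import org.elasticsearch.common.settings.Settings;

public class ListPlaceholderSketch {
    public static void main(String[] args) {
        // Equivalent of an elasticsearch.yml entry such as:
        //   discovery.zen.ping.unicast.hosts: [ "${HOSTNAME}", "${HOSTIP}" ]
        final Settings settings = Settings.builder()
                .putList("discovery.zen.ping.unicast.hosts", "${HOSTNAME}", "${HOSTIP}")
                // Stand-in resolver, mirroring the PR's test; a real node resolves from the process environment.
                .replacePropertyPlaceholders(name ->
                        "HOSTNAME".equals(name) ? "node-1" : "HOSTIP".equals(name) ? "10.0.0.1" : null)
                .build();

        final List<String> hosts = settings.getAsList("discovery.zen.ping.unicast.hosts");
        // Before the fix, list values kept their literal "${...}" form; with the fix they resolve.
        System.out.println(hosts); // [node-1, 10.0.0.1]
    }
}
```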
{
"body": "this test fails with security manager enabled since yesterday \n\n```\nmvn test -Pdev -Dtests.seed=2DCB147CE811B0B6 -Dtests.class=org.elasticsearch.index.mapper.date.SimpleDateMappingTests -Dtests.method=\"testLocale\" -Des.logger.level=DEBUG -Des.node.mode=network -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=1024m -Dtests.jvm.argline=\"-server\" -Dtests.locale=es_AR -Dtests.timezone=Atlantic/Faeroe\n```\n\nlikely related to changes pushed yesterady maybe this one https://github.com/elastic/elasticsearch/pull/10965 @rmuir @rjernst can you take a look...\n\nhere is a CI failure for this http://build-us-00.elastic.co/job/es_core_master_metal/9112/\n",
"comments": [
{
"body": "if it only happens on JDK9EA, its likely just a bug in that early access release.\n",
"created_at": "2015-05-05T13:45:00Z"
},
{
"body": "I pushed a assume for those tests\n",
"created_at": "2015-05-05T14:54:24Z"
},
{
"body": "There have been a lot of JDK 9 EA releases since this was pushed, we should check to see if the assumeFalse can be removed so I'm marking this as adoptme\n",
"created_at": "2016-09-27T14:30:34Z"
},
{
"body": "@dakrone I see you added a blocker label. Was that intended?",
"created_at": "2016-12-06T11:25:16Z"
},
{
"body": "@bleskes whoops I don't think so, I'll remove it.",
"created_at": "2016-12-06T15:30:49Z"
},
{
"body": "@dakrone I retested and open a joda time issue https://github.com/JodaOrg/joda-time/issues/462",
"created_at": "2017-12-20T16:32:48Z"
}
],
"number": 10984,
"title": "JDK9EA has buggy locale support when running with security manager"
} | {
"body": "Java 9 added some enhancements to the internationalization support that\r\nimpact our date parsing support. To ensure flawless BWC and consistent\r\nbehavior going forward Java 9 runtimes requrie the system property\r\n`java.locale.providers=COMPAT` to be set.\r\n\r\nCloses #10984",
"number": 28080,
"review_comments": [],
"title": "Pass `java.locale.providers=COMPAT` to Java 9 onwards"
} | {
"commits": [
{
"message": "Pass `java.locale.providers=COMPAT` to Java 9 onwards\n\nJava 9 added some enhancements to the internationalization support that\nimpact our date parsing support. To ensure flawless BWC and consistent\nbehavior going forward Java 9 runtimes requrie the system property\n`java.locale.providers=COMPAT` to be set.\n\nCloses #10984"
}
],
"files": [
{
"diff": "@@ -220,43 +220,6 @@ public void testSimpleDateRange() throws Exception {\n assertHitCount(searchResponse, 2L);\n }\n \n- public void testLocaleDependentDate() throws Exception {\n- assumeFalse(\"Locals are buggy on JDK9EA\", Constants.JRE_IS_MINIMUM_JAVA9 && systemPropertyAsBoolean(\"tests.security.manager\", false));\n- assertAcked(prepareCreate(\"test\")\n- .addMapping(\"type1\",\n- jsonBuilder().startObject()\n- .startObject(\"type1\")\n- .startObject(\"properties\")\n- .startObject(\"date_field\")\n- .field(\"type\", \"date\")\n- .field(\"format\", \"E, d MMM yyyy HH:mm:ss Z\")\n- .field(\"locale\", \"de\")\n- .endObject()\n- .endObject()\n- .endObject()\n- .endObject()));\n- ensureGreen();\n- for (int i = 0; i < 10; i++) {\n- client().prepareIndex(\"test\", \"type1\", \"\" + i).setSource(\"date_field\", \"Mi, 06 Dez 2000 02:55:00 -0800\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"\" + (10 + i)).setSource(\"date_field\", \"Do, 07 Dez 2000 02:55:00 -0800\").execute().actionGet();\n- }\n-\n- refresh();\n- for (int i = 0; i < 10; i++) {\n- SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date_field\").gte(\"Di, 05 Dez 2000 02:55:00 -0800\").lte(\"Do, 07 Dez 2000 00:00:00 -0800\"))\n- .execute().actionGet();\n- assertHitCount(searchResponse, 10L);\n-\n-\n- searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date_field\").gte(\"Di, 05 Dez 2000 02:55:00 -0800\").lte(\"Fr, 08 Dez 2000 00:00:00 -0800\"))\n- .execute().actionGet();\n- assertHitCount(searchResponse, 20L);\n-\n- }\n- }\n-\n public void testSimpleTerminateAfterCount() throws Exception {\n prepareCreate(\"test\").setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0)).get();\n ensureGreen();\n@@ -273,7 +236,6 @@ public void testSimpleTerminateAfterCount() throws Exception {\n refresh();\n \n SearchResponse searchResponse;\n-\n for (int i = 1; i <= max; i++) {\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max))",
"filename": "core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
},
{
"diff": "@@ -94,3 +94,6 @@ ${heap.dump.path}\n \n # JDK 9+ GC logging\n 9-:-Xlog:gc*,gc+age=trace,safepoint:file=${loggc}:utctime,pid,tags:filecount=32,filesize=64m\n+# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise\n+# time/date parsing will break in an incompatible way for some date patterns and locals\n+9-:-Djava.locale.providers=COMPAT",
"filename": "distribution/src/main/resources/config/jvm.options",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,38 @@\n+---\n+\"Test Index and Search locale dependent mappings / dates\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: JDK9 only supports this with a special sysproperty added in 7.0.0\n+ - do:\n+ indices.create:\n+ index: test_index\n+ body:\n+ settings:\n+ number_of_shards: 1\n+ mappings:\n+ doc:\n+ properties:\n+ date_field:\n+ type: date\n+ format: \"E, d MMM yyyy HH:mm:ss Z\"\n+ locale: \"de\"\n+ - do:\n+ bulk:\n+ refresh: true\n+ body:\n+ - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"doc\", \"_id\": \"1\"}}'\n+ - '{\"date_field\": \"Mi, 06 Dez 2000 02:55:00 -0800\"}'\n+ - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"doc\", \"_id\": \"2\"}}'\n+ - '{\"date_field\": \"Do, 07 Dez 2000 02:55:00 -0800\"}'\n+\n+ - do:\n+ search:\n+ index: test_index\n+ body: {\"query\" : {\"range\" : {\"date_field\" : {\"gte\": \"Di, 05 Dez 2000 02:55:00 -0800\", \"lte\": \"Do, 07 Dez 2000 00:00:00 -0800\"}}}}\n+ - match: { hits.total: 1 }\n+\n+ - do:\n+ search:\n+ index: test_index\n+ body: {\"query\" : {\"range\" : {\"date_field\" : {\"gte\": \"Di, 05 Dez 2000 02:55:00 -0800\", \"lte\": \"Fr, 08 Dez 2000 00:00:00 -0800\"}}}}\n+ - match: { hits.total: 2 }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/180_local_dependent_mapping.yml",
"status": "added"
}
]
} |
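For nodes that cannot pick up the updated bundled `jvm.options` right away (for example an image with a custom entrypoint like the one earlier in this document), the same flag can also be passed through `ES_JAVA_OPTS`; a hedged sketch:

```
# Sketch only: forces the pre-JDK-9 (COMPAT) locale data so formats such as
# "E, d MMM yyyy HH:mm:ss Z" with locale "de" keep parsing values like
# "Mi, 06 Dez 2000 02:55:00 -0800" on Java 9+ runtimes.
export ES_JAVA_OPTS="-Djava.locale.providers=COMPAT $ES_JAVA_OPTS"
./bin/elasticsearch
```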
{
"body": "When a new snapshot is created it is added to the cluster state as a snapshot-in-progress in `INIT` state, and the initialization is kicked off in a new runnable task by `SnapshotService.beginSnapshot()`. The\r\ninitialization writes multiple files before updating the cluster state to change the snapshot-in-progress to `STARTED` state. \r\n\r\nThis leaves a short window in which the snapshot could be deleted (let's say, because the snapshot is stuck in `INIT` or because it takes too much time to upload all the initialization files for all snapshotted indices). If the `INIT` snapshot is deleted, a race begins between the deletion which sets the snapshot-in-progress to ABORTED in cluster state and tries to finalize the snapshot and the initialization in `SnapshotService.beginSnapshot()` which changes the state back to `STARTED`.\r\n\r\nThis pull request changes `SnapshotService.beginSnapshot()` so that an `ABORTED` snapshot is not started if it has been deleted during initialization. It also adds a test that would have failed\r\nwith the previous behaviour, and changes few method names here and there.",
"comments": [
{
"body": "This has been backported to 6.0.3 along with #28078 in 9b6d37a77097550939d9cb5aabc0ec14487173b0 and in 5.6.7 in 84503a1f789efbe38f37dd3e7bc86b017aa90c9d",
"created_at": "2018-01-15T14:52:38Z"
},
{
"body": "Sorry, I mixed up labels. This was merged in 5.6.7 and 6.1.3.",
"created_at": "2018-01-15T20:27:18Z"
}
],
"number": 27931,
"title": "Do not start snapshots that are deleted during initialization"
} | {
"body": "With the current snapshot/restore logic, a newly created snapshot is added by\r\nthe `SnapshotService.createSnapshot()` method as a `SnapshotInProgress` object in\r\nthe cluster state. This snapshot has the INIT state. Once the cluster state\r\nupdate is processed, the `beginSnapshot()` method is executed using the `SNAPSHOT`\r\nthread pool.\r\n\r\nThe `beginSnapshot()` method starts the initialization of the snapshot using the\r\n`initializeSnapshot()` method. This method reads the repository data and then\r\nwrites the global metadata file and an index metadata file per index to be\r\nsnapshotted. These operations can take some time to be completed (it could \r\nbe many minutes).\r\n\r\nAt this stage and if the master node is disconnected the snapshot can be stucked\r\nin INIT state on versions 5.6.4/6.0.0 or lower (pull request #27214 fixed this on\r\n5.6.5/6.0.1 and higher).\r\n\r\nIf the snapshot is not stucked but the initialization takes some time and the\r\nuser decides to abort the snapshot, a delete snapshot request can sneak in. The\r\n deletion updates the cluster state to check the state of the `SnapshotInProgress`.\r\nWhen the snapshot is in INIT, it executes the` endSnapshot()` method (which returns\r\nimmediately) and then the snapshot's state is updated to `ABORTED` in the cluster\r\nstate. The deletion will then listen for the snapshot completion in order to\r\ncontinue with the deletion of the snapshot.\r\n\r\nBut before returning, the `endSnapshot()` method added a new `Runnable` to the \r\nSNAPSHOT thread pool that forces the finalization of the initializing snapshot. This\r\nfinalization writes the snapshot metadata file and updates the index-N file in\r\nthe repository.\r\n\r\nAt this stage two things can potentially be executed concurrently: the initialization\r\nof the snapshot and the finalization of the snapshot. When the `initializeSnapshot()`\r\nis terminated, the cluster state is updated to start the snapshot and to move it to\r\nthe `STARTED` state (this is before #27931 which prevents an `ABORTED` snapshot to be\r\nstarted at all). The snapshot is started and shards start to be snapshotted but they\r\nquickly fail because the snapshot was `ABORTED` by the deletion. All shards are\r\nreported as `FAILED` to the master node, which executes `endSnapshot()` too (using\r\n`SnapshotStateExecutor`).\r\n\r\nThen many things can happen, depending on the execution of tasks by the `SNAPSHOT`\r\nthread pool and the time taken by each read/write/delete operation by the repository\r\nimplementation. Especially on S3, where operations can take time (disconnections,\r\nretries, timeouts) and where the data consistency model allows to read old data or\r\nrequires some time for objects to be replicated.\r\n\r\nHere are some scenario seen in cluster logs:\r\n\r\na) the snapshot is finalized by the snapshot deletion. Snapshot metadata file exists\r\nin the repository so the future finalization by the snapshot creation will fail with\r\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\r\n\r\nb) the snapshot is finalized by the snapshot creation. Snapshot metadata file exists\r\nin the repository so the future finalization by the snapshot deletion will fail with\r\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\r\n\r\nc) both finalizations are executed concurrently, things can fail at different read or\r\nwrite operations. 
Shards failures can be lost as well as final snapshot state, depending\r\non which SnapshotInProgress.Entry is used to finalize the snapshot.\r\n\r\nd) the snapshot is finalized by the snapshot deletion, the snapshot in progress is\r\nremoved from the cluster state, triggering the execution of the completion listeners.\r\nThe deletion process continues and the `deleteSnapshotFromRepository()` is executed using\r\nthe `SNAPSHOT` thread pool. This method reads the repository data, the snapshot metadata\r\nand the index metadata for all indices included in the snapshot before updated the index-N\r\n file from the repository. It can also take some time and I think these operations could\r\npotentially be executed concurrently with the finalization of the snapshot by the snapshot\r\ncreation, leading to corrupted data.\r\n\r\nThis commit does not solve all the issues reported here, but it removes the finalization\r\nof the snapshot by the snapshot deletion. This way, the deletion marks the snapshot as\r\n`ABORTED` in cluster state and waits for the snapshot completion. It is the responsibility\r\nof the snapshot execution to detect the abortion and terminates itself correctly. This\r\navoids concurrent snapshot finalizations and also ordinates the operations: the deletion\r\naborts the snapshot and waits for the snapshot completion, the creation detects the abortion\r\nand stops by itself and finalizes the snapshot, then the deletion resumes and continues\r\nthe deletion process.\r\n\r\nCloses #27974",
"number": 28078,
"review_comments": [
{
"body": "`state == State.STARTED`?\r\nOtherwise no need to define the local variable `state` above\r\n ",
"created_at": "2018-01-05T09:40:49Z"
},
{
"body": "add `assert entry.state() == State.ABORTED` here. You can directly write the message as \"snapshot was aborted during initialization\" which makes it clearer which situation is handled here.",
"created_at": "2018-01-05T09:49:08Z"
},
{
"body": "It was not really updated. That's just set here so that endSnapshot is called below. Maybe instead of updatedSnapshot and accepted variables we should have an endSnapshot variable that captures the snapshot to end.\r\n ",
"created_at": "2018-01-05T09:58:37Z"
}
],
"title": "Avoid concurrent snapshot finalizations when deleting an INIT snapshot"
} | {
"commits": [
{
"message": "Avoid concurrent snapshot finalization when deleting an initializing snapshot\n\nWith the current snapshot/restore logic, a newly created snapshot is added by\nthe SnapshotService.createSnapshot() method as a SnapshotInProgress object in\nthe cluster state. This snapshot has the INIT state. Once the cluster state\nupdate is processed, the beginSnapshot() method is executed using the SNAPSHOT\nthread pool.\n\nThe beginSnapshot() method starts the initialization of the snapshot using the\ninitializeSnapshot() method. This method reads the repository data and then\nwrites the global metadata file and an index metadata file per index to be\nsnapshotted. These operations can take some time to be completed (many minutes).\n\nAt this stage and if the master node is disconnected the snapshot can be stucked\nin INIT state on versions 5.6.4/6.0.0 or lower (pull request #27214 fixed this on\n5.6.5/6.0.1 and higher).\n\nIf the snapshot is not stucked but the initialization takes some time and the\nuser decides to abort the snapshot, a delete snapshot request can sneak in. The\n deletion updates the cluster state to check the state of the SnapshotInProgress.\nWhen the snapshot is in INIT, it executes the endSnapshot() method (which returns\nimmediately) and then the snapshot's state is updated to ABORTED in the cluster\nstate. The deletion will then listen for the snapshot completion in order to\ncontinue with the deletion of the snapshot.\n\nBut before returning, the endSnapshot() method added a new Runnable to the SNAPSHOT\nthread pool that forces the finalization of the initializing snapshot. This\nfinalization writes the snapshot metadata file and updates the index-N file in\nthe repository.\n\nAt this stage two things can potentially be executed concurrently: the initialization\nof the snapshot and the finalization of the snapshot. When the initializeSnapshot()\nis terminated, the cluster state is updated to start the snapshot and to move it to\nthe STARTED state (this is before #27931 which prevents an ABORTED snapshot to be\nstarted at all). The snapshot is started and shards start to be snapshotted but they\nquickly fail because the snapshot was ABORTED by the deletion. All shards are\nreported as FAILED to the master node, which executes endSnapshot() too (using\nSnapshotStateExecutor).\n\nThen many things can happen, depending on the execution of tasks by the SNAPSHOT\nthread pool and the time taken by each read/write/delete operation by the repository\nimplementation. Especially on S3, where operations can take time (disconnections,\nretries, timeouts) and where the data consistency model allows to read old data or\nrequires some time for objects to be replicated.\n\nHere are some scenario seen in cluster logs:\n\na) the snapshot is finalized by the snapshot deletion. Snapshot metadata file exists\nin the repository so the future finalization by the snapshot creation will fail with\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\n\nb) the snapshot is finalized by the snapshot creation. Snapshot metadata file exists\nin the repository so the future finalization by the snapshot deletion will fail with\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\n\nc) both finalizations are executed concurrently, things can fail at different read or\nwrite operations. 
Shards failures can be lost as well as final snapshot state, depending\non which SnapshotInProgress.Entry is used to finalize the snapshot.\n\nd) the snapshot is finalized by the snapshot deletion, the snapshot in progress is\nremoved from the cluster state, triggering the execution of the completion listeners.\nThe deletion process continues and the deleteSnapshotFromRepository() is executed using\nthe SNAPSHOT thread pool. This method reads the repository data, the snapshot metadata\nand the index metadata for all indices included in the snapshot before updated the index-N\n file from the repository. It can also take some time and I think these operations could\npotentially be executed concurrently with the finalization of the snapshot by the snapshot\ncreation, leading to corrupted data.\n\nThis commit does not solve all the issues reported here, but it removes the finalization\nof the snapshot by the snapshot deletion. This way, the deletion marks the snapshot as\nABORTED in cluster state and waits for the snapshot completion. It is the responsability\nof the snapshot execution to detect the abortion and terminates itself correctly. This\navoids concurrent snapshot finalizations and also ordinates the operations: the deletion\naborts the snapshot and waits for the snapshot completion, the creation detects the abortion\nand stops by itself and finalizes the snapshot, then the deletion resumes and continues\nthe deletion process."
},
{
"message": "Apply feedback"
}
],
"files": [
{
"diff": "@@ -372,26 +372,32 @@ private void beginSnapshot(final ClusterState clusterState,\n return;\n }\n clusterService.submitStateUpdateTask(\"update_snapshot [\" + snapshot.snapshot() + \"]\", new ClusterStateUpdateTask() {\n- boolean accepted = false;\n- SnapshotsInProgress.Entry updatedSnapshot;\n+\n+ SnapshotsInProgress.Entry endSnapshot;\n String failure = null;\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n List<SnapshotsInProgress.Entry> entries = new ArrayList<>();\n for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n- if (entry.snapshot().equals(snapshot.snapshot()) && entry.state() != State.ABORTED) {\n- // Replace the snapshot that was just created\n+ if (entry.snapshot().equals(snapshot.snapshot()) == false) {\n+ entries.add(entry);\n+ continue;\n+ }\n+\n+ if (entry.state() != State.ABORTED) {\n+ // Replace the snapshot that was just intialized\n ImmutableOpenMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards = shards(currentState, entry.indices());\n if (!partial) {\n Tuple<Set<String>, Set<String>> indicesWithMissingShards = indicesWithMissingShards(shards, currentState.metaData());\n Set<String> missing = indicesWithMissingShards.v1();\n Set<String> closed = indicesWithMissingShards.v2();\n if (missing.isEmpty() == false || closed.isEmpty() == false) {\n- StringBuilder failureMessage = new StringBuilder();\n- updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.FAILED, shards);\n- entries.add(updatedSnapshot);\n+ endSnapshot = new SnapshotsInProgress.Entry(entry, State.FAILED, shards);\n+ entries.add(endSnapshot);\n+\n+ final StringBuilder failureMessage = new StringBuilder();\n if (missing.isEmpty() == false) {\n failureMessage.append(\"Indices don't have primary shards \");\n failureMessage.append(missing);\n@@ -407,13 +413,16 @@ public ClusterState execute(ClusterState currentState) {\n continue;\n }\n }\n- updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.STARTED, shards);\n+ SnapshotsInProgress.Entry updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.STARTED, shards);\n entries.add(updatedSnapshot);\n- if (!completed(shards.values())) {\n- accepted = true;\n+ if (completed(shards.values())) {\n+ endSnapshot = updatedSnapshot;\n }\n } else {\n- entries.add(entry);\n+ assert entry.state() == State.ABORTED : \"expecting snapshot to be aborted during initialization\";\n+ failure = \"snapshot was aborted during initialization\";\n+ endSnapshot = entry;\n+ entries.add(endSnapshot);\n }\n }\n return ClusterState.builder(currentState)\n@@ -448,8 +457,8 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n // We should end snapshot only if 1) we didn't accept it for processing (which happens when there\n // is nothing to do) and 2) there was a snapshot in metadata that we should end. 
Otherwise we should\n // go ahead and continue working on this snapshot rather then end here.\n- if (!accepted && updatedSnapshot != null) {\n- endSnapshot(updatedSnapshot, failure);\n+ if (endSnapshot != null) {\n+ endSnapshot(endSnapshot, failure);\n }\n }\n });\n@@ -750,6 +759,11 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n entries.add(updatedSnapshot);\n } else if (snapshot.state() == State.INIT && newMaster) {\n+ changed = true;\n+ // Mark the snapshot as aborted as it failed to start from the previous master\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, State.ABORTED, snapshot.shards());\n+ entries.add(updatedSnapshot);\n+\n // Clean up the snapshot that failed to start from the old master\n deleteSnapshot(snapshot.snapshot(), new DeleteSnapshotListener() {\n @Override\n@@ -935,7 +949,7 @@ private Tuple<Set<String>, Set<String>> indicesWithMissingShards(ImmutableOpenMa\n *\n * @param entry snapshot\n */\n- void endSnapshot(SnapshotsInProgress.Entry entry) {\n+ void endSnapshot(final SnapshotsInProgress.Entry entry) {\n endSnapshot(entry, null);\n }\n \n@@ -1144,24 +1158,26 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n } else {\n // This snapshot is currently running - stopping shards first\n waitForSnapshot = true;\n- ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards;\n- if (snapshotEntry.state() == State.STARTED && snapshotEntry.shards() != null) {\n- // snapshot is currently running - stop started shards\n- ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shardsBuilder = ImmutableOpenMap.builder();\n+\n+ final ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards;\n+\n+ final State state = snapshotEntry.state();\n+ if (state == State.INIT) {\n+ // snapshot is still initializing, mark it as aborted\n+ shards = snapshotEntry.shards();\n+\n+ } else if (state == State.STARTED) {\n+ // snapshot is started - mark every non completed shard as aborted\n+ final ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shardsBuilder = ImmutableOpenMap.builder();\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : snapshotEntry.shards()) {\n ShardSnapshotStatus status = shardEntry.value;\n- if (!status.state().completed()) {\n- shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED,\n- \"aborted by snapshot deletion\"));\n- } else {\n- shardsBuilder.put(shardEntry.key, status);\n+ if (status.state().completed() == false) {\n+ status = new ShardSnapshotStatus(status.nodeId(), State.ABORTED, \"aborted by snapshot deletion\");\n }\n+ shardsBuilder.put(shardEntry.key, status);\n }\n shards = shardsBuilder.build();\n- } else if (snapshotEntry.state() == State.INIT) {\n- // snapshot hasn't started yet - end it\n- shards = snapshotEntry.shards();\n- endSnapshot(snapshotEntry);\n+\n } else {\n boolean hasUncompletedShards = false;\n // Cleanup in case a node gone missing and snapshot wasn't updated for some reason\n@@ -1178,7 +1194,8 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n logger.debug(\"trying to delete completed snapshot - should wait for shards to finalize on all nodes\");\n return currentState;\n } else {\n- // no shards to wait for - finish the snapshot\n+ // no shards to wait for but a node is gone - this is the only case\n+ // where we force to finish the snapshot\n logger.debug(\"trying to delete completed snapshot with no finalizing shards - can delete immediately\");\n shards = snapshotEntry.shards();\n 
endSnapshot(snapshotEntry);",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -3151,7 +3151,7 @@ public void testSnapshottingWithMissingSequenceNumbers() {\n assertThat(shardStats.getSeqNoStats().getMaxSeqNo(), equalTo(15L));\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/27974\")\n+ @TestLogging(\"org.elasticsearch.snapshots:TRACE\")\n public void testAbortedSnapshotDuringInitDoesNotStart() throws Exception {\n final Client client = client();\n \n@@ -3163,11 +3163,11 @@ public void testAbortedSnapshotDuringInitDoesNotStart() throws Exception {\n ));\n \n createIndex(\"test-idx\");\n- final int nbDocs = scaledRandomIntBetween(1, 100);\n+ final int nbDocs = scaledRandomIntBetween(100, 500);\n for (int i = 0; i < nbDocs; i++) {\n index(\"test-idx\", \"_doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n }\n- refresh();\n+ flushAndRefresh(\"test-idx\");\n assertThat(client.prepareSearch(\"test-idx\").setSize(0).get().getHits().getTotalHits(), equalTo((long) nbDocs));\n \n // Create a snapshot",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
}
]
} |
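To make the race concrete, here is a small sketch of the request sequence this change protects against (hypothetical repository and snapshot names; the repository is assumed to be registered already). The snapshot is created without waiting for completion, and a delete arrives while it is still initializing; with this change the delete only marks the snapshot ABORTED and waits, instead of racing the creation side to finalize it.

```
PUT _snapshot/my_backup/snap-1?wait_for_completion=false
{
  "indices": "test-idx"
}

# Issued while the snapshot is still in INIT: the deletion now marks the snapshot
# ABORTED and waits for the snapshot task to detect the abort, finalize, and clean up.
DELETE _snapshot/my_backup/snap-1
```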
{
"body": "If the master disconnects from the cluster after initiating snapshot, but just before the snapshot switches from INIT to STARTED state, the snapshot can get indefinitely stuck in the INIT state. This error is specific to v5.x+ and was triggered by keeping the master node that stepped down in the node list, the cleanup logic in snapshot/restore assumed that if master steps down it is always removed from the the node list. This commit changes the logic to trigger cleanup even if no nodes left the cluster.\r\n \r\nCloses #27180",
"comments": [],
"number": 27214,
"title": "Fix snapshot getting stuck in INIT state"
} | {
"body": "With the current snapshot/restore logic, a newly created snapshot is added by\r\nthe `SnapshotService.createSnapshot()` method as a `SnapshotInProgress` object in\r\nthe cluster state. This snapshot has the INIT state. Once the cluster state\r\nupdate is processed, the `beginSnapshot()` method is executed using the `SNAPSHOT`\r\nthread pool.\r\n\r\nThe `beginSnapshot()` method starts the initialization of the snapshot using the\r\n`initializeSnapshot()` method. This method reads the repository data and then\r\nwrites the global metadata file and an index metadata file per index to be\r\nsnapshotted. These operations can take some time to be completed (it could \r\nbe many minutes).\r\n\r\nAt this stage and if the master node is disconnected the snapshot can be stucked\r\nin INIT state on versions 5.6.4/6.0.0 or lower (pull request #27214 fixed this on\r\n5.6.5/6.0.1 and higher).\r\n\r\nIf the snapshot is not stucked but the initialization takes some time and the\r\nuser decides to abort the snapshot, a delete snapshot request can sneak in. The\r\n deletion updates the cluster state to check the state of the `SnapshotInProgress`.\r\nWhen the snapshot is in INIT, it executes the` endSnapshot()` method (which returns\r\nimmediately) and then the snapshot's state is updated to `ABORTED` in the cluster\r\nstate. The deletion will then listen for the snapshot completion in order to\r\ncontinue with the deletion of the snapshot.\r\n\r\nBut before returning, the `endSnapshot()` method added a new `Runnable` to the \r\nSNAPSHOT thread pool that forces the finalization of the initializing snapshot. This\r\nfinalization writes the snapshot metadata file and updates the index-N file in\r\nthe repository.\r\n\r\nAt this stage two things can potentially be executed concurrently: the initialization\r\nof the snapshot and the finalization of the snapshot. When the `initializeSnapshot()`\r\nis terminated, the cluster state is updated to start the snapshot and to move it to\r\nthe `STARTED` state (this is before #27931 which prevents an `ABORTED` snapshot to be\r\nstarted at all). The snapshot is started and shards start to be snapshotted but they\r\nquickly fail because the snapshot was `ABORTED` by the deletion. All shards are\r\nreported as `FAILED` to the master node, which executes `endSnapshot()` too (using\r\n`SnapshotStateExecutor`).\r\n\r\nThen many things can happen, depending on the execution of tasks by the `SNAPSHOT`\r\nthread pool and the time taken by each read/write/delete operation by the repository\r\nimplementation. Especially on S3, where operations can take time (disconnections,\r\nretries, timeouts) and where the data consistency model allows to read old data or\r\nrequires some time for objects to be replicated.\r\n\r\nHere are some scenario seen in cluster logs:\r\n\r\na) the snapshot is finalized by the snapshot deletion. Snapshot metadata file exists\r\nin the repository so the future finalization by the snapshot creation will fail with\r\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\r\n\r\nb) the snapshot is finalized by the snapshot creation. Snapshot metadata file exists\r\nin the repository so the future finalization by the snapshot deletion will fail with\r\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\r\n\r\nc) both finalizations are executed concurrently, things can fail at different read or\r\nwrite operations. 
Shards failures can be lost as well as final snapshot state, depending\r\non which SnapshotInProgress.Entry is used to finalize the snapshot.\r\n\r\nd) the snapshot is finalized by the snapshot deletion, the snapshot in progress is\r\nremoved from the cluster state, triggering the execution of the completion listeners.\r\nThe deletion process continues and the `deleteSnapshotFromRepository()` is executed using\r\nthe `SNAPSHOT` thread pool. This method reads the repository data, the snapshot metadata\r\nand the index metadata for all indices included in the snapshot before updated the index-N\r\n file from the repository. It can also take some time and I think these operations could\r\npotentially be executed concurrently with the finalization of the snapshot by the snapshot\r\ncreation, leading to corrupted data.\r\n\r\nThis commit does not solve all the issues reported here, but it removes the finalization\r\nof the snapshot by the snapshot deletion. This way, the deletion marks the snapshot as\r\n`ABORTED` in cluster state and waits for the snapshot completion. It is the responsibility\r\nof the snapshot execution to detect the abortion and terminates itself correctly. This\r\navoids concurrent snapshot finalizations and also ordinates the operations: the deletion\r\naborts the snapshot and waits for the snapshot completion, the creation detects the abortion\r\nand stops by itself and finalizes the snapshot, then the deletion resumes and continues\r\nthe deletion process.\r\n\r\nCloses #27974",
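To make the resulting ordering concrete, here is a minimal, self-contained sketch (hypothetical types and names, not the actual `SnapshotsService` code; Java 16+ for records) of what the deletion path computes after this change: it only marks non-completed shards as `ABORTED` and never finalizes the snapshot itself, leaving finalization to the creation path.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the deletion path after this change; not the Elasticsearch API.
class AbortOnDeleteSketch {

    enum State {
        INIT, STARTED, ABORTED, SUCCESS, FAILED;
        boolean completed() { return this == SUCCESS || this == FAILED; }
    }

    record ShardStatus(String nodeId, State state, String reason) { }

    // Shard statuses to publish in the cluster state when a delete request targets a
    // snapshot that is still INIT or STARTED: abort what is running, finalize nothing.
    static Map<String, ShardStatus> abortShards(State snapshotState, Map<String, ShardStatus> shards) {
        if (snapshotState == State.INIT) {
            // Still initializing: keep the shard map as-is and rely on the creation
            // path to detect the ABORTED snapshot state and end the snapshot itself.
            return shards;
        }
        Map<String, ShardStatus> aborted = new HashMap<>();
        for (Map.Entry<String, ShardStatus> e : shards.entrySet()) {
            ShardStatus status = e.getValue();
            if (status.state().completed() == false) {
                status = new ShardStatus(status.nodeId(), State.ABORTED, "aborted by snapshot deletion");
            }
            aborted.put(e.getKey(), status);
        }
        return aborted;
    }

    public static void main(String[] args) {
        Map<String, ShardStatus> shards = Map.of(
            "[test-idx][0]", new ShardStatus("node-1", State.STARTED, null),
            "[test-idx][1]", new ShardStatus("node-2", State.SUCCESS, null));
        System.out.println(abortShards(State.STARTED, shards)); // shard 0 -> ABORTED, shard 1 unchanged
    }
}
```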
"number": 28078,
"review_comments": [
{
"body": "`state == State.STARTED`?\r\nOtherwise no need to define the local variable `state` above\r\n ",
"created_at": "2018-01-05T09:40:49Z"
},
{
"body": "add `assert entry.state() == State.ABORTED` here. You can directly write the message as \"snapshot was aborted during initialization\" which makes it clearer which situation is handled here.",
"created_at": "2018-01-05T09:49:08Z"
},
{
"body": "It was not really updated. That's just set here so that endSnapshot is called below. Maybe instead of updatedSnapshot and accepted variables we should have an endSnapshot variable that captures the snapshot to end.\r\n ",
"created_at": "2018-01-05T09:58:37Z"
}
],
"title": "Avoid concurrent snapshot finalizations when deleting an INIT snapshot"
} | {
"commits": [
{
"message": "Avoid concurrent snapshot finalization when deleting an initializing snapshot\n\nWith the current snapshot/restore logic, a newly created snapshot is added by\nthe SnapshotService.createSnapshot() method as a SnapshotInProgress object in\nthe cluster state. This snapshot has the INIT state. Once the cluster state\nupdate is processed, the beginSnapshot() method is executed using the SNAPSHOT\nthread pool.\n\nThe beginSnapshot() method starts the initialization of the snapshot using the\ninitializeSnapshot() method. This method reads the repository data and then\nwrites the global metadata file and an index metadata file per index to be\nsnapshotted. These operations can take some time to be completed (many minutes).\n\nAt this stage and if the master node is disconnected the snapshot can be stucked\nin INIT state on versions 5.6.4/6.0.0 or lower (pull request #27214 fixed this on\n5.6.5/6.0.1 and higher).\n\nIf the snapshot is not stucked but the initialization takes some time and the\nuser decides to abort the snapshot, a delete snapshot request can sneak in. The\n deletion updates the cluster state to check the state of the SnapshotInProgress.\nWhen the snapshot is in INIT, it executes the endSnapshot() method (which returns\nimmediately) and then the snapshot's state is updated to ABORTED in the cluster\nstate. The deletion will then listen for the snapshot completion in order to\ncontinue with the deletion of the snapshot.\n\nBut before returning, the endSnapshot() method added a new Runnable to the SNAPSHOT\nthread pool that forces the finalization of the initializing snapshot. This\nfinalization writes the snapshot metadata file and updates the index-N file in\nthe repository.\n\nAt this stage two things can potentially be executed concurrently: the initialization\nof the snapshot and the finalization of the snapshot. When the initializeSnapshot()\nis terminated, the cluster state is updated to start the snapshot and to move it to\nthe STARTED state (this is before #27931 which prevents an ABORTED snapshot to be\nstarted at all). The snapshot is started and shards start to be snapshotted but they\nquickly fail because the snapshot was ABORTED by the deletion. All shards are\nreported as FAILED to the master node, which executes endSnapshot() too (using\nSnapshotStateExecutor).\n\nThen many things can happen, depending on the execution of tasks by the SNAPSHOT\nthread pool and the time taken by each read/write/delete operation by the repository\nimplementation. Especially on S3, where operations can take time (disconnections,\nretries, timeouts) and where the data consistency model allows to read old data or\nrequires some time for objects to be replicated.\n\nHere are some scenario seen in cluster logs:\n\na) the snapshot is finalized by the snapshot deletion. Snapshot metadata file exists\nin the repository so the future finalization by the snapshot creation will fail with\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\n\nb) the snapshot is finalized by the snapshot creation. Snapshot metadata file exists\nin the repository so the future finalization by the snapshot deletion will fail with\na \"fail to finalize snapshot\" message in logs. Deletion process continues.\n\nc) both finalizations are executed concurrently, things can fail at different read or\nwrite operations. 
Shards failures can be lost as well as final snapshot state, depending\non which SnapshotInProgress.Entry is used to finalize the snapshot.\n\nd) the snapshot is finalized by the snapshot deletion, the snapshot in progress is\nremoved from the cluster state, triggering the execution of the completion listeners.\nThe deletion process continues and the deleteSnapshotFromRepository() is executed using\nthe SNAPSHOT thread pool. This method reads the repository data, the snapshot metadata\nand the index metadata for all indices included in the snapshot before updated the index-N\n file from the repository. It can also take some time and I think these operations could\npotentially be executed concurrently with the finalization of the snapshot by the snapshot\ncreation, leading to corrupted data.\n\nThis commit does not solve all the issues reported here, but it removes the finalization\nof the snapshot by the snapshot deletion. This way, the deletion marks the snapshot as\nABORTED in cluster state and waits for the snapshot completion. It is the responsability\nof the snapshot execution to detect the abortion and terminates itself correctly. This\navoids concurrent snapshot finalizations and also ordinates the operations: the deletion\naborts the snapshot and waits for the snapshot completion, the creation detects the abortion\nand stops by itself and finalizes the snapshot, then the deletion resumes and continues\nthe deletion process."
},
{
"message": "Apply feedback"
}
],
"files": [
{
"diff": "@@ -372,26 +372,32 @@ private void beginSnapshot(final ClusterState clusterState,\n return;\n }\n clusterService.submitStateUpdateTask(\"update_snapshot [\" + snapshot.snapshot() + \"]\", new ClusterStateUpdateTask() {\n- boolean accepted = false;\n- SnapshotsInProgress.Entry updatedSnapshot;\n+\n+ SnapshotsInProgress.Entry endSnapshot;\n String failure = null;\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n SnapshotsInProgress snapshots = currentState.custom(SnapshotsInProgress.TYPE);\n List<SnapshotsInProgress.Entry> entries = new ArrayList<>();\n for (SnapshotsInProgress.Entry entry : snapshots.entries()) {\n- if (entry.snapshot().equals(snapshot.snapshot()) && entry.state() != State.ABORTED) {\n- // Replace the snapshot that was just created\n+ if (entry.snapshot().equals(snapshot.snapshot()) == false) {\n+ entries.add(entry);\n+ continue;\n+ }\n+\n+ if (entry.state() != State.ABORTED) {\n+ // Replace the snapshot that was just intialized\n ImmutableOpenMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards = shards(currentState, entry.indices());\n if (!partial) {\n Tuple<Set<String>, Set<String>> indicesWithMissingShards = indicesWithMissingShards(shards, currentState.metaData());\n Set<String> missing = indicesWithMissingShards.v1();\n Set<String> closed = indicesWithMissingShards.v2();\n if (missing.isEmpty() == false || closed.isEmpty() == false) {\n- StringBuilder failureMessage = new StringBuilder();\n- updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.FAILED, shards);\n- entries.add(updatedSnapshot);\n+ endSnapshot = new SnapshotsInProgress.Entry(entry, State.FAILED, shards);\n+ entries.add(endSnapshot);\n+\n+ final StringBuilder failureMessage = new StringBuilder();\n if (missing.isEmpty() == false) {\n failureMessage.append(\"Indices don't have primary shards \");\n failureMessage.append(missing);\n@@ -407,13 +413,16 @@ public ClusterState execute(ClusterState currentState) {\n continue;\n }\n }\n- updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.STARTED, shards);\n+ SnapshotsInProgress.Entry updatedSnapshot = new SnapshotsInProgress.Entry(entry, State.STARTED, shards);\n entries.add(updatedSnapshot);\n- if (!completed(shards.values())) {\n- accepted = true;\n+ if (completed(shards.values())) {\n+ endSnapshot = updatedSnapshot;\n }\n } else {\n- entries.add(entry);\n+ assert entry.state() == State.ABORTED : \"expecting snapshot to be aborted during initialization\";\n+ failure = \"snapshot was aborted during initialization\";\n+ endSnapshot = entry;\n+ entries.add(endSnapshot);\n }\n }\n return ClusterState.builder(currentState)\n@@ -448,8 +457,8 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n // We should end snapshot only if 1) we didn't accept it for processing (which happens when there\n // is nothing to do) and 2) there was a snapshot in metadata that we should end. 
Otherwise we should\n // go ahead and continue working on this snapshot rather then end here.\n- if (!accepted && updatedSnapshot != null) {\n- endSnapshot(updatedSnapshot, failure);\n+ if (endSnapshot != null) {\n+ endSnapshot(endSnapshot, failure);\n }\n }\n });\n@@ -750,6 +759,11 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n entries.add(updatedSnapshot);\n } else if (snapshot.state() == State.INIT && newMaster) {\n+ changed = true;\n+ // Mark the snapshot as aborted as it failed to start from the previous master\n+ updatedSnapshot = new SnapshotsInProgress.Entry(snapshot, State.ABORTED, snapshot.shards());\n+ entries.add(updatedSnapshot);\n+\n // Clean up the snapshot that failed to start from the old master\n deleteSnapshot(snapshot.snapshot(), new DeleteSnapshotListener() {\n @Override\n@@ -935,7 +949,7 @@ private Tuple<Set<String>, Set<String>> indicesWithMissingShards(ImmutableOpenMa\n *\n * @param entry snapshot\n */\n- void endSnapshot(SnapshotsInProgress.Entry entry) {\n+ void endSnapshot(final SnapshotsInProgress.Entry entry) {\n endSnapshot(entry, null);\n }\n \n@@ -1144,24 +1158,26 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n } else {\n // This snapshot is currently running - stopping shards first\n waitForSnapshot = true;\n- ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards;\n- if (snapshotEntry.state() == State.STARTED && snapshotEntry.shards() != null) {\n- // snapshot is currently running - stop started shards\n- ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shardsBuilder = ImmutableOpenMap.builder();\n+\n+ final ImmutableOpenMap<ShardId, ShardSnapshotStatus> shards;\n+\n+ final State state = snapshotEntry.state();\n+ if (state == State.INIT) {\n+ // snapshot is still initializing, mark it as aborted\n+ shards = snapshotEntry.shards();\n+\n+ } else if (state == State.STARTED) {\n+ // snapshot is started - mark every non completed shard as aborted\n+ final ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shardsBuilder = ImmutableOpenMap.builder();\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : snapshotEntry.shards()) {\n ShardSnapshotStatus status = shardEntry.value;\n- if (!status.state().completed()) {\n- shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED,\n- \"aborted by snapshot deletion\"));\n- } else {\n- shardsBuilder.put(shardEntry.key, status);\n+ if (status.state().completed() == false) {\n+ status = new ShardSnapshotStatus(status.nodeId(), State.ABORTED, \"aborted by snapshot deletion\");\n }\n+ shardsBuilder.put(shardEntry.key, status);\n }\n shards = shardsBuilder.build();\n- } else if (snapshotEntry.state() == State.INIT) {\n- // snapshot hasn't started yet - end it\n- shards = snapshotEntry.shards();\n- endSnapshot(snapshotEntry);\n+\n } else {\n boolean hasUncompletedShards = false;\n // Cleanup in case a node gone missing and snapshot wasn't updated for some reason\n@@ -1178,7 +1194,8 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n logger.debug(\"trying to delete completed snapshot - should wait for shards to finalize on all nodes\");\n return currentState;\n } else {\n- // no shards to wait for - finish the snapshot\n+ // no shards to wait for but a node is gone - this is the only case\n+ // where we force to finish the snapshot\n logger.debug(\"trying to delete completed snapshot with no finalizing shards - can delete immediately\");\n shards = snapshotEntry.shards();\n 
endSnapshot(snapshotEntry);",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -3151,7 +3151,7 @@ public void testSnapshottingWithMissingSequenceNumbers() {\n assertThat(shardStats.getSeqNoStats().getMaxSeqNo(), equalTo(15L));\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/27974\")\n+ @TestLogging(\"org.elasticsearch.snapshots:TRACE\")\n public void testAbortedSnapshotDuringInitDoesNotStart() throws Exception {\n final Client client = client();\n \n@@ -3163,11 +3163,11 @@ public void testAbortedSnapshotDuringInitDoesNotStart() throws Exception {\n ));\n \n createIndex(\"test-idx\");\n- final int nbDocs = scaledRandomIntBetween(1, 100);\n+ final int nbDocs = scaledRandomIntBetween(100, 500);\n for (int i = 0; i < nbDocs; i++) {\n index(\"test-idx\", \"_doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n }\n- refresh();\n+ flushAndRefresh(\"test-idx\");\n assertThat(client.prepareSearch(\"test-idx\").setSize(0).get().getHits().getTotalHits(), equalTo((long) nbDocs));\n \n // Create a snapshot",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.0/ Any 6.x version\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): 1.8 JDK\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Any\r\n \r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n\r\n[1] Create in 5.6.x a test index\r\n\r\n```\r\ncurl -XPOST localhost:9200/test/test -d '{ \"test\":true }' \r\n```\r\n\r\n[2] Upgrade 5.6.x to 6.x\r\n\r\n[3] Start Elasticsearch\r\n\r\n[4] Run the shrinking\r\n\r\n```\r\nPUT test/_settings\r\n{\r\n \"settings\": {\r\n \"index.blocks.write\": true \r\n }\r\n}\r\n\r\nPOST test/_shrink/test_shrink\r\n{\r\n \"settings\": {\r\n \"index.number_of_replicas\": 1,\r\n \"index.number_of_shards\": 1, \r\n \"index.codec\": \"best_compression\" \r\n }\r\n}\r\n```\r\n\r\n[5] Check the cluster health\r\n\r\n```\r\n{\r\n \"cluster_name\": \"elasticsearch\",\r\n \"status\": \"red\", <----------------- IT's red one\r\n \"timed_out\": false,\r\n \"number_of_nodes\": 1,\r\n \"number_of_data_nodes\": 1,\r\n \"active_primary_shards\": 10,\r\n \"active_shards\": 10,\r\n \"relocating_shards\": 0,\r\n \"initializing_shards\": 0,\r\n \"unassigned_shards\": 11,\r\n \"delayed_unassigned_shards\": 0,\r\n \"number_of_pending_tasks\": 0,\r\n \"number_of_in_flight_fetch\": 0,\r\n \"task_max_waiting_in_queue_millis\": 0,\r\n \"active_shards_percent_as_number\": 47.61904761904761\r\n}\r\n```\r\n\r\n[7] Check the RED indices\r\n\r\n```\r\ntest_shrink 0 p UNASSIGNED \r\ntest_shrink 0 r UNASSIGNED \r\n```\r\n\r\n\r\n\r\n*Logs show the following*\r\n\r\n```\r\n2018-01-03T10:22:23,627][WARN ][o.e.c.a.s.ShardStateAction] [-VTajZI] [test_shrink][0] received shard failed for shard id [[test_shrink][0]], allocation id [hYCRkpmGRRC7CZCQ_As67g], primary term [0], message [failed recovery], failure [RecoveryFailedException[[test_shrink][0]: Recovery failed on {-VTajZI}{-VTajZIrRT66GVxLwD4X6w}{qW38ZFvvRFCLi0miV_YWMA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17179869184, ml.max_open_jobs=20, ml.enabled=true}]; nested: IndexShardRecoveryException[failed recovery]; nested: IllegalArgumentException[Cannot use addIndexes(Directory) with indexes that have been created by a different Lucene version. 
The current index was generated by Lucene 7 while one of the directories contains an index that was generated with Lucene 6]; ]\r\norg.elasticsearch.indices.recovery.RecoveryFailedException: [test_shrink][0]: Recovery failed on {-VTajZI}{-VTajZIrRT66GVxLwD4X6w}{qW38ZFvvRFCLi0miV_YWMA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17179869184, ml.max_open_jobs=20, ml.enabled=true}\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$9(IndexShard.java:2077) [elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.IndexShard$$Lambda$2882/401965331.run(Unknown Source) [elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) [elasticsearch-6.1.1.jar:6.1.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]\r\nCaused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:334) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:122) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1565) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$9(IndexShard.java:2072) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\t... 5 more\r\nCaused by: java.lang.IllegalArgumentException: Cannot use addIndexes(Directory) with indexes that have been created by a different Lucene version. The current index was generated by Lucene 7 while one of the directories contains an index that was generated with Lucene 6\r\n\tat org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2830) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.addIndices(StoreRecovery.java:161) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromLocalShards$3(StoreRecovery.java:130) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery$$Lambda$2886/1712721504.run(Unknown Source) ~[?:?]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:292) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:122) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1565) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$9(IndexShard.java:2072) ~[elasticsearch-6.1.1.jar:6.1.1]\r\n\t... 5 more\r\n[2018-01-03T10:22:23,634][INFO ][o.e.c.r.a.AllocationService] [-VTajZI] Cluster health status changed from [YELLOW] to [RED] (reason: [shards failed [[test_shrink][0]] ...]).\r\n```\r\n\r\n\r\nThe issue seems to be:\r\n\r\n> Caused by: java.lang.IllegalArgumentException: Cannot use addIndexes(Directory) with indexes that have been created by a different Lucene version. 
The current index was generated by Lucene 7 while one of the directories contains an index that was generated with Lucene 6\r\n\r\n\r\n~~CC @s1monw here, i think that this might be caused by https://github.com/elastic/elasticsearch/pull/22469 ?~~\r\n\r\n ",
"comments": [
{
"body": "I don't think this is due to #22469. Lucene complains that the creation versions differ because we changed the way norms are encoded in Lucene 7, so you can't add Lucene 6 files to a Lucene 7 index. We should fix shrinking to also carry over the Lucene creation version, in addition to the Elasticsearch creation version.",
"created_at": "2018-01-03T13:00:03Z"
}
],
"number": 28061,
"title": "Shrink upgraded index (created on 5.x) fails in 6.x"
} | {
"body": "Lucene does not allow adding Lucene 6 files to a Lucene 7 index. This PR ensures that we carry over the Lucene version to the newly created Lucene index.\r\n\r\nCloses #28061",
"number": 28076,
"review_comments": [
{
"body": "Can we copy the first index into the target directory instead of writing a dummy commit point? something like this:\r\n```Java\r\n Directory dir = sources[0];\r\n for (String file : dir.listAll()) {\r\n target.copyFrom(dir, file, file, IOContext.DEFAULT);\r\n }\r\n```\r\n\r\nThis way we also apply the same optimizations for hardlinking etc and we are totally safe?",
"created_at": "2018-01-04T11:07:52Z"
},
{
"body": "Yannick and I discussed this option first, but this needs extra care, for instance to not copy the write lock. It's also a bit more involved if we want to track statistics as well for the first source. In the end it's not clear to me which option is better.",
"created_at": "2018-01-04T11:26:20Z"
},
{
"body": "@s1monw I initially started with the approach that you've outlined, but found it to be more complex for the reasons that @jpountz stated. In the future, we can hopefully create an IndexWriter for an older Lucene version (@jpountz will raise this on the Lucene project).",
"created_at": "2018-01-04T11:33:02Z"
},
{
"body": "I was just opening the issue but I'll wait to see the conclusion here first in case we decide copying the first directory manually is still a better trade-off.",
"created_at": "2018-01-04T11:41:21Z"
},
{
"body": "sounds good to me!",
"created_at": "2018-01-04T12:37:23Z"
}
],
"title": "Allow shrinking of indices from a previous major"
} | {
"commits": [
{
"message": "Allow shrinking of indices from a previous major"
}
],
"files": [
{
"diff": "@@ -145,14 +145,22 @@ boolean recoverFromLocalShards(BiConsumer<String, MappingMetaData> mappingUpdate\n void addIndices(final RecoveryState.Index indexRecoveryStats, final Directory target, final Sort indexSort, final Directory[] sources,\n final long maxSeqNo, final long maxUnsafeAutoIdTimestamp, IndexMetaData indexMetaData, int shardId, boolean split,\n boolean hasNested) throws IOException {\n+\n+ // clean target directory (if previous recovery attempt failed) and create a fresh segment file with the proper lucene version\n+ Lucene.cleanLuceneIndex(target);\n+ assert sources.length > 0;\n+ final int luceneIndexCreatedVersionMajor = Lucene.readSegmentInfos(sources[0]).getIndexCreatedVersionMajor();\n+ new SegmentInfos(luceneIndexCreatedVersionMajor).commit(target);\n+\n final Directory hardLinkOrCopyTarget = new org.apache.lucene.store.HardlinkCopyDirectoryWrapper(target);\n+\n IndexWriterConfig iwc = new IndexWriterConfig(null)\n .setCommitOnClose(false)\n // we don't want merges to happen here - we call maybe merge on the engine\n // later once we stared it up otherwise we would need to wait for it here\n // we also don't specify a codec here and merges should use the engines for this index\n .setMergePolicy(NoMergePolicy.INSTANCE)\n- .setOpenMode(IndexWriterConfig.OpenMode.CREATE);\n+ .setOpenMode(IndexWriterConfig.OpenMode.APPEND);\n if (indexSort != null) {\n iwc.setIndexSort(indexSort);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java",
"status": "modified"
},
{
"diff": "@@ -423,6 +423,73 @@ public void testShrink() throws IOException {\n assertEquals(numDocs, totalHits);\n }\n \n+ public void testShrinkAfterUpgrade() throws IOException {\n+ String shrunkenIndex = index + \"_shrunk\";\n+ int numDocs;\n+ if (runningAgainstOldCluster) {\n+ XContentBuilder mappingsAndSettings = jsonBuilder();\n+ mappingsAndSettings.startObject();\n+ {\n+ mappingsAndSettings.startObject(\"mappings\");\n+ mappingsAndSettings.startObject(\"doc\");\n+ mappingsAndSettings.startObject(\"properties\");\n+ {\n+ mappingsAndSettings.startObject(\"field\");\n+ mappingsAndSettings.field(\"type\", \"text\");\n+ mappingsAndSettings.endObject();\n+ }\n+ mappingsAndSettings.endObject();\n+ mappingsAndSettings.endObject();\n+ mappingsAndSettings.endObject();\n+ }\n+ mappingsAndSettings.endObject();\n+ client().performRequest(\"PUT\", \"/\" + index, Collections.emptyMap(),\n+ new StringEntity(mappingsAndSettings.string(), ContentType.APPLICATION_JSON));\n+\n+ numDocs = randomIntBetween(512, 1024);\n+ indexRandomDocuments(numDocs, true, true, i -> {\n+ return JsonXContent.contentBuilder().startObject()\n+ .field(\"field\", \"value\")\n+ .endObject();\n+ });\n+ } else {\n+ String updateSettingsRequestBody = \"{\\\"settings\\\": {\\\"index.blocks.write\\\": true}}\";\n+ Response rsp = client().performRequest(\"PUT\", \"/\" + index + \"/_settings\", Collections.emptyMap(),\n+ new StringEntity(updateSettingsRequestBody, ContentType.APPLICATION_JSON));\n+ assertEquals(200, rsp.getStatusLine().getStatusCode());\n+\n+ String shrinkIndexRequestBody = \"{\\\"settings\\\": {\\\"index.number_of_shards\\\": 1}}\";\n+ rsp = client().performRequest(\"PUT\", \"/\" + index + \"/_shrink/\" + shrunkenIndex, Collections.emptyMap(),\n+ new StringEntity(shrinkIndexRequestBody, ContentType.APPLICATION_JSON));\n+ assertEquals(200, rsp.getStatusLine().getStatusCode());\n+\n+ numDocs = countOfIndexedRandomDocuments();\n+ }\n+\n+ Response rsp = client().performRequest(\"POST\", \"/_refresh\");\n+ assertEquals(200, rsp.getStatusLine().getStatusCode());\n+\n+ Map<?, ?> response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\"));\n+ assertNoFailures(response);\n+ int totalShards = (int) XContentMapValues.extractValue(\"_shards.total\", response);\n+ assertThat(totalShards, greaterThan(1));\n+ int successfulShards = (int) XContentMapValues.extractValue(\"_shards.successful\", response);\n+ assertEquals(totalShards, successfulShards);\n+ int totalHits = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ assertEquals(numDocs, totalHits);\n+\n+ if (runningAgainstOldCluster == false) {\n+ response = toMap(client().performRequest(\"GET\", \"/\" + shrunkenIndex + \"/_search\"));\n+ assertNoFailures(response);\n+ totalShards = (int) XContentMapValues.extractValue(\"_shards.total\", response);\n+ assertEquals(1, totalShards);\n+ successfulShards = (int) XContentMapValues.extractValue(\"_shards.successful\", response);\n+ assertEquals(1, successfulShards);\n+ totalHits = (int) XContentMapValues.extractValue(\"hits.total\", response);\n+ assertEquals(numDocs, totalHits);\n+ }\n+ }\n+\n void assertBasicSearchWorks(int count) throws IOException {\n logger.info(\"--> testing basic search\");\n Map<String, Object> response = toMap(client().performRequest(\"GET\", \"/\" + index + \"/_search\"));",
"filename": "qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java",
"status": "modified"
}
]
} |
{
"body": "Elasticsearch 5.4.1\r\nRollover API problem\r\n\r\nWe are using Rollover to create new index upon a document count condition is reached.\r\n\r\nBut while ingestion is happening, if we run the rollover API, getting below error:\r\n\"Alias [test-schema-active-logs] has more than one indices associated with it [[test-schema-000004, test-schema-000005]], can't execute a single index op\"\r\n\r\nFrom Rollover API understanding write alias should automatically switch to new index (test-schema-000005) created and move the alias from the old index (test-schema-000004). How can this error be handled?",
"comments": [
{
"body": "@ankitachow, Please ask questions on [https://discuss.elastic.co](https://discuss.elastic.co) where we can give you a better support. We use Github for bug reports and feature requests. Thank you.",
"created_at": "2017-10-11T20:12:42Z"
},
{
"body": "TBH it sounds like a bug to me. But would be great if @ankitachow shares a full script to reproduce all the steps done.\r\n\r\n@ankitachow could you do that?",
"created_at": "2017-10-11T20:17:47Z"
},
{
"body": "@dadoonet Sure see below steps.\r\n\r\nWe are ingesting data through ES-Hadoop connector continuously. Below are the steps conducted in production:\r\n1.\tData ingested with a template having below information\r\na.\twrite & search alias\r\nb.\tno. of shards = no. of nodes\r\nc.\tbest_compression\r\n2.\tRollover based on certain doc count\r\n3.\tShrink the index. The template of the compressed index has\r\na.\tNo. of shards = 1\r\nb.\tbest_compression\r\n4.\tRemove the search-logs alias from the old index and add it to the compressed index\r\n5.\tForcemerge\r\n6.\tDelete the old index\r\n\r\nDuring Rollover, sometimes(80%) we are getting above error in Spark job and its stopping ingestion. New rolled over index getting created properly. Once we start ingesting again, data gets written to new index created from rollover.\r\n\r\nBelow is our rollover API command.\r\n\r\nRESPONSE=$(curl -s -XPOST ''$ip':9200/'$active_writealias'/_rollover?pretty=true' -d'\r\n{\r\n \"conditions\": {\r\n \"max_docs\": \"'\"$rollovercond\"'\"\r\n }\r\n}')\r\n\r\nBut if we run the script after ingestion, there's no error.",
"created_at": "2017-10-11T20:26:30Z"
},
{
"body": "The error you are getting seems to indicate that 2 indices are defined behind the write alias.\r\nThis should not happen.\r\n\r\nDo you call `_rollover` API only from one single machine? Or is it executed from different nodes?\r\nCan you share the elasticsearch logs when the problem appears? I mean some lines before the problem and some lines after if any.",
"created_at": "2017-10-11T20:55:39Z"
},
{
"body": "The template for the test-schema is as follows:\r\n{\r\n \"template\": \"test-schema-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 13,\r\n \"number_of_replicas\": 0, \r\n\t\"refresh_interval\" : \"30s\",\r\n\t\"codec\":\"best_compression\"\r\n },\r\n \"aliases\": {\r\n \"test-schema-active-logs\": {},\r\n \"test-schema-search-logs\": {}\r\n },\r\n \"mappings\":{ \r\n\t\t\"test-log\":{ \r\n\t\t\t\"_all\":{\"enabled\": false},\r\n\t\t\t\"properties\":{ .....\r\n\r\nSo, rollover is creating the new index and also creating the write alias point to the new index which shouldn't happen.\r\nThe rollover API is called from only 1 machine. There's no problem with the Elasticsearch front. So, elasticsearch rollover runs fine. The ES hadoop spark job faies giving below error.\r\n\r\n \"Alias [test-schema-active-logs] has more than one indices associated with it [[test-schema-000004, test-schema-000005]], can't execute a single index op\"",
"created_at": "2017-10-11T21:12:18Z"
},
{
"body": "I also met this bug,especially in multi-thread writing is very easy to happen.\r\n_aliases API wrote 'Renaming an alias is a simple remove then add operation within the same API. This operation is atomic, no need to worry about a short period of time where the alias does not point to an index' in document,so this bug is because you did not use this api or _aliases API has this bug?",
"created_at": "2017-12-01T01:29:18Z"
},
{
"body": "I can reproduce this with the below test snippet.\r\n\r\n```java\r\npublic void testIndexingAndRolloverConcurrently() throws Exception {\r\n client().admin().indices().preparePutTemplate(\"logs\")\r\n .setPatterns(Collections.singletonList(\"logs-*\"))\r\n .addAlias(new Alias(\"logs-write\"))\r\n .get();\r\n assertAcked(client().admin().indices().prepareCreate(\"logs-000001\").get());\r\n ensureYellow(\"logs-write\");\r\n\r\n final AtomicBoolean done = new AtomicBoolean();\r\n final Thread rolloverThread = new Thread(() -> {\r\n while (done.get() == false) {\r\n client().admin().indices()\r\n .prepareRolloverIndex(\"logs-write\")\r\n .addMaxIndexSizeCondition(new ByteSizeValue(1))\r\n .get();\r\n }\r\n });\r\n rolloverThread.start();\r\n try {\r\n int numDocs = 10_000;\r\n for (int i = 0; i < numDocs; i++) {\r\n logger.info(\"--> add doc [{}]\", i);\r\n IndexResponse resp = index(\"logs-write\", \"doc\", Integer.toString(i), \"{}\");\r\n assertThat(resp.status(), equalTo(RestStatus.CREATED));\r\n }\r\n } finally {\r\n done.set(true);\r\n rolloverThread.join();\r\n }\r\n}\r\n```\r\n\r\nWe create an index with alias (via template) and update index alias in two separate cluster tasks. This can be a root cause of this issue.\r\n\r\nhttps://github.com/dnhatn/elasticsearch/blob/c7ce5a07f26f09ec4e5e92d07aa08f338fbb41b8/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java#L133-L135",
"created_at": "2018-01-01T00:02:26Z"
},
{
"body": "Hi guys, I am having a similar issue with a newer version\r\n\r\nSo, We were trying a rollover indice with our newly setup cluster with Elasticsearch 6.2\r\n\r\nWhen we are trying to rollover the indice, It gives the following error. \r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nPlease find details below of the template that we are having and steps that I used. This can be fairly used to reproduce the issue. \r\n\r\n\r\nTemplate name : fusion-logs\r\n```\r\nPUT _template/fusion-logs\r\n{\r\n \"template\": \"fusion-logs-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\"\r\n },\r\n \"aliases\": {\r\n \"active-fusion-logs\": {},\r\n \"search-fusion-logs\": {}\r\n },\r\n \"mappings\": {\r\n \"logs\": {\r\n \"properties\": {\r\n \"host\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_result\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nWe inserted a 1000 documents in the above active-fusion-logs index and then used the following to roll over the index\r\n\r\n```\r\nPOST active-fusion-logs/_rollover\r\n{\r\n \"conditions\": {\r\n \"max_docs\": 1000\r\n }\r\n}\r\n```\r\n\r\nThe above API gives us an error when we are trying to rollover\r\n\r\nSome other info about the cluster. \r\n1. There is no other index other than the above index.\r\n2. active-fusion-logs is aliased to just one write index\r\n3. search-fusion-logs is aliased to multiple indexes\r\n\r\nAlso, I had tried the same thing with Elasticsearch 5.3.2 and it worked as expected without the error.",
"created_at": "2018-03-14T07:17:32Z"
},
{
"body": "> \"reason\": \"Rollover alias [active-fusion-logs] can point to multiple indices, found duplicated alias [[search-fusion-logs, active-fusion-logs]] in index template [fusion-logs]\"\r\n\r\nYou should remove alias `[active-fusion-logs]` from the index template ` [fusion-logs]`.\r\n\r\n````\r\nPUT _template/fusion-logs\r\n{\r\n \"template\": \"fusion-logs-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\"\r\n },\r\n\r\n \"aliases\": {\r\n \"active-fusion-logs\": {}, // Remove this line\r\n \"search-fusion-logs\": {}\r\n },\r\n \"mappings\": {\r\n \"logs\": {\r\n \"properties\": {\r\n \"host\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_id\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"job_result\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```",
"created_at": "2018-03-14T12:29:54Z"
},
{
"body": "Oh. Great. That worked. I am not sure how I missed that out. Thanks! @dnhatn ",
"created_at": "2018-03-14T12:41:58Z"
},
{
"body": "I having this problem with 6.4\r\n\r\n\r\nPUT _template/application-logs\r\n{\r\n \"template\": \"xx-*\",\r\n \"settings\": {\r\n \"number_of_shards\": 2,\r\n \"number_of_replicas\": 1,\r\n \"routing.allocation.include.box_type\": \"hot\",\r\n \"index\": {\r\n \"codec\": \"best_compression\",\r\n \"mapping\": {\r\n \"total_fields\": {\r\n \"limit\": \"10000\"\r\n }\r\n },\r\n \"refresh_interval\": \"5s\"\r\n }\r\n },\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"date\": {\"type\": \"date\",\"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"},\r\n \"logData\": {\"type\": \"text\"},\r\n \"message\": {\"type\": \"text\"},\r\n \"logger_name\": {\"type\": \"keyword\"},\r\n \"thread_name\": {\"type\": \"keyword\"},\r\n \"level\": {\"type\": \"keyword\"},\r\n \"levelvalue\": {\"type\": \"long\"},\r\n \"stack_trace\": {\"type\": \"text\"}\r\n }\r\n }\r\n }, \r\n \"aliases\": {\r\n \"search-application-logs\": {}\r\n }\r\n}\r\n\r\nPOST /search-application-logs/_rollover?dry_run\r\n{\r\n \"conditions\": {\r\n \"max_age\": \"1d\",\r\n \"max_docs\": 5,\r\n \"max_size\": \"5gb\"\r\n }\r\n}\r\n \"reason\": \"Rollover alias [search-application-logs] can point to multiple indices, found duplicated alias [[search-application-logs]] in index template [application-logs]\"\r\n\r\nI would like to setup rollover policy on alias so it would take effect on all the indexes that follow pattern setup in template. ",
"created_at": "2018-09-10T18:15:22Z"
},
{
"body": "My application will follow the date format similar to mentioned in this article https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html\r\n\r\nMultiple indexes logs-2018.09.09-1 and logs-2018.09.10-1 would be pointing to same alias \"logs_write\". how to best setup rollover in this type of situation?\r\n\r\nPUT logs-2018.09.09-1\r\n{\r\n \"aliases\": {\r\n \"logs_write\": {}\r\n }\r\n}\r\n\r\nPUT logs-2018.09.10-1\r\n{\r\n \"aliases\": {\r\n \"logs_write\": {}\r\n }\r\n}\r\n\r\nPUT logs-2018.09.10-1/_doc/1\r\n{\r\n \"message\": \"a dummy log\"\r\n}\r\n\r\nPOST logs_write/_refresh\r\n\r\nPOST /logs_write/_rollover \r\n{\r\n \"conditions\": {\r\n \"max_docs\": \"1\"\r\n }\r\n}\r\n\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"source alias maps to multiple indices\"\r\n",
"created_at": "2018-09-10T18:23:47Z"
},
{
"body": "I am getting the same problem as @kkr78. I just have ONE index though. This is occurring on 6.3.0.\r\n\r\n_\"reason\": \"Rollover alias [my-index] can point to multiple indices, found duplicated alias [[my-index]] in index template [mytemplate]\"_\r\n\r\n**Index : my-index-2018.09.01-1**\r\n**Alias : my-index**\r\n```\r\n{\r\n \"mytemplate\": {\r\n \"order\": 0,\r\n \"index_patterns\": [\r\n \"my-index-*\"\r\n ],\r\n \"settings\": {},\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"@timestamp\": {\r\n \"type\": \"date\"\r\n }\r\n }\r\n }\r\n },\r\n \"aliases\": {\r\n \"my-index\": {}\r\n }\r\n }\r\n}\r\n```",
"created_at": "2018-10-23T20:29:53Z"
}
],
"number": 26976,
"title": "Alias [test-schema-active-logs] has more than one indices associated with it [[......]], can't execute a single index op"
} | {
"body": "Today when executing a rollover request, we create an index with alias\r\n(via template), then update index aliases in two separate cluster tasks.\r\nIn the interval between these two actions, the alias will associate to\r\ntwo indices. This causes indexing requests to that alias to be rejected.\r\n\r\nThis commit merges these two actions into a single cluster update task.\r\n\r\nCloses #26976",
"number": 28039,
"review_comments": [],
"title": "Make index rollover action atomic"
} | {
"commits": [
{
"message": "Make index rollover action atomic\n\nToday when executing a rollover request, we create an index with alias\n(via template), then update index aliases in two separate cluster tasks.\nIn the interval between these two actions, the alias will associate to\ntwo indices. This causes indexing requests to that alias to be rejected."
}
],
"files": [
{
"diff": "@@ -39,7 +39,7 @@\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexAliasesService;\n+import org.elasticsearch.cluster.metadata.MetaDataIndexRolloverService;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -63,20 +63,17 @@\n public class TransportRolloverAction extends TransportMasterNodeAction<RolloverRequest, RolloverResponse> {\n \n private static final Pattern INDEX_NAME_PATTERN = Pattern.compile(\"^.*-\\\\d+$\");\n- private final MetaDataCreateIndexService createIndexService;\n- private final MetaDataIndexAliasesService indexAliasesService;\n+ private final MetaDataIndexRolloverService indexRolloverService;\n private final ActiveShardsObserver activeShardsObserver;\n private final Client client;\n \n @Inject\n public TransportRolloverAction(Settings settings, TransportService transportService, ClusterService clusterService,\n- ThreadPool threadPool, MetaDataCreateIndexService createIndexService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,\n- MetaDataIndexAliasesService indexAliasesService, Client client) {\n+ ThreadPool threadPool, MetaDataIndexRolloverService indexRolloverService,\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Client client) {\n super(settings, RolloverAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,\n RolloverRequest::new);\n- this.createIndexService = createIndexService;\n- this.indexAliasesService = indexAliasesService;\n+ this.indexRolloverService = indexRolloverService;\n this.client = client;\n this.activeShardsObserver = new ActiveShardsObserver(settings, clusterService, threadPool);\n }\n@@ -128,26 +125,22 @@ public void onResponse(IndicesStatsResponse statsResponse) {\n return;\n }\n if (conditionResults.size() == 0 || conditionResults.stream().anyMatch(result -> result.matched)) {\n- CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(unresolvedName, rolloverIndexName,\n- rolloverRequest);\n- createIndexService.createIndex(updateRequest, ActionListener.wrap(createIndexClusterStateUpdateResponse -> {\n- // switch the alias to point to the newly created index\n- indexAliasesService.indicesAliases(\n- prepareRolloverAliasesUpdateRequest(sourceIndexName, rolloverIndexName,\n- rolloverRequest),\n- ActionListener.wrap(aliasClusterStateUpdateResponse -> {\n- if (aliasClusterStateUpdateResponse.isAcknowledged()) {\n- activeShardsObserver.waitForActiveShards(new String[]{rolloverIndexName},\n- rolloverRequest.getCreateIndexRequest().waitForActiveShards(),\n- rolloverRequest.masterNodeTimeout(),\n- isShardsAcked -> listener.onResponse(new RolloverResponse(sourceIndexName, rolloverIndexName,\n- conditionResults, false, true, true, isShardsAcked)),\n- listener::onFailure);\n- } else {\n- listener.onResponse(new RolloverResponse(sourceIndexName, rolloverIndexName, conditionResults,\n- false, true, false, false));\n- }\n- }, listener::onFailure));\n+ CreateIndexClusterStateUpdateRequest createIndexRequest = prepareCreateIndexRequest(unresolvedName,\n+ rolloverIndexName, rolloverRequest);\n+ IndicesAliasesClusterStateUpdateRequest updateAliasRequest = 
prepareRolloverAliasesUpdateRequest(sourceIndexName,\n+ rolloverIndexName, rolloverRequest);\n+ indexRolloverService.rollover(createIndexRequest, updateAliasRequest, ActionListener.wrap(clusterStateResponse -> {\n+ if (clusterStateResponse.isAcknowledged()) {\n+ activeShardsObserver.waitForActiveShards(new String[]{rolloverIndexName},\n+ rolloverRequest.getCreateIndexRequest().waitForActiveShards(),\n+ rolloverRequest.masterNodeTimeout(),\n+ isShardsAcked -> listener.onResponse(new RolloverResponse(sourceIndexName, rolloverIndexName,\n+ conditionResults, false, true, true, isShardsAcked)),\n+ listener::onFailure);\n+ } else {\n+ listener.onResponse(new RolloverResponse(sourceIndexName, rolloverIndexName, conditionResults,\n+ false, true, false, false));\n+ }\n }, listener::onFailure));\n } else {\n // conditions not met",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/rollover/TransportRolloverAction.java",
"status": "modified"
},
{
"diff": "@@ -219,20 +219,45 @@ public void createIndex(final CreateIndexClusterStateUpdateRequest request,\n \n private void onlyCreateIndex(final CreateIndexClusterStateUpdateRequest request,\n final ActionListener<ClusterStateUpdateResponse> listener) {\n- Settings.Builder updatedSettingsBuilder = Settings.builder();\n- Settings build = updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();\n- indexScopedSettings.validate(build, true); // we do validate here - index setting must be consistent\n- request.settings(build);\n+ final IndexCreationTask indexCreationTask = indexCreationTask(request);\n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\",\n- new IndexCreationTask(logger, allocationService, request, listener, indicesService, aliasValidator, xContentRegistry, settings,\n- this::validate));\n+ new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request, listener) {\n+ @Override\n+ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n+ return new ClusterStateUpdateResponse(acknowledged);\n+ }\n+\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ return indexCreationTask.execute(currentState);\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Exception e) {\n+ if (e instanceof ResourceAlreadyExistsException) {\n+ logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n+ } else {\n+ logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n+ }\n+ super.onFailure(source, e);\n+ }\n+ });\n }\n \n interface IndexValidator {\n void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state);\n }\n \n- static class IndexCreationTask extends AckedClusterStateUpdateTask<ClusterStateUpdateResponse> {\n+ IndexCreationTask indexCreationTask(final CreateIndexClusterStateUpdateRequest request) {\n+ Settings.Builder updatedSettingsBuilder = Settings.builder();\n+ Settings build = updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();\n+ indexScopedSettings.validate(build, true); // we do validate here - index setting must be consistent\n+ request.settings(build);\n+ return new IndexCreationTask(logger, allocationService, request, indicesService, aliasValidator, xContentRegistry,\n+ settings, this::validate);\n+ }\n+\n+ static class IndexCreationTask {\n \n private final IndicesService indicesService;\n private final AliasValidator aliasValidator;\n@@ -244,10 +269,8 @@ static class IndexCreationTask extends AckedClusterStateUpdateTask<ClusterStateU\n private final IndexValidator validator;\n \n IndexCreationTask(Logger logger, AllocationService allocationService, CreateIndexClusterStateUpdateRequest request,\n- ActionListener<ClusterStateUpdateResponse> listener, IndicesService indicesService,\n- AliasValidator aliasValidator, NamedXContentRegistry xContentRegistry,\n+ IndicesService indicesService, AliasValidator aliasValidator, NamedXContentRegistry xContentRegistry,\n Settings settings, IndexValidator validator) {\n- super(Priority.URGENT, request, listener);\n this.request = request;\n this.logger = logger;\n this.allocationService = allocationService;\n@@ -258,13 +281,7 @@ static class IndexCreationTask extends AckedClusterStateUpdateTask<ClusterStateU\n this.validator = validator;\n }\n \n- @Override\n- protected 
ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n- return new ClusterStateUpdateResponse(acknowledged);\n- }\n-\n- @Override\n- public ClusterState execute(ClusterState currentState) throws Exception {\n+ ClusterState execute(ClusterState currentState) throws Exception {\n Index createdIndex = null;\n String removalExtraInfo = null;\n IndexRemovalReason removalReason = IndexRemovalReason.FAILURE;\n@@ -555,16 +572,6 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n }\n \n- @Override\n- public void onFailure(String source, Exception e) {\n- if (e instanceof ResourceAlreadyExistsException) {\n- logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n- } else {\n- logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n- }\n- super.onFailure(source, e);\n- }\n-\n private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {\n List<IndexTemplateMetaData> templateMetadata = new ArrayList<>();\n for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -88,12 +88,12 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n \n @Override\n public ClusterState execute(ClusterState currentState) {\n- return innerExecute(currentState, request.actions());\n+ return executeAliasActions(currentState, request.actions());\n }\n });\n }\n \n- ClusterState innerExecute(ClusterState currentState, Iterable<AliasAction> actions) {\n+ ClusterState executeAliasActions(ClusterState currentState, Iterable<AliasAction> actions) {\n List<Index> indicesToClose = new ArrayList<>();\n Map<String, IndexService> indices = new HashMap<>();\n try {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,69 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesClusterStateUpdateRequest;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;\n+import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.inject.Inject;\n+\n+/**\n+ * A service is responsible for rollover index including creating a new index and updating index aliases.\n+ */\n+public class MetaDataIndexRolloverService {\n+ private final MetaDataCreateIndexService createIndexService;\n+ private final MetaDataIndexAliasesService aliasesService;\n+ private final ClusterService clusterService;\n+\n+ @Inject\n+ public MetaDataIndexRolloverService(MetaDataCreateIndexService createIndexService, MetaDataIndexAliasesService aliasesService,\n+ ClusterService clusterService) {\n+ this.createIndexService = createIndexService;\n+ this.aliasesService = aliasesService;\n+ this.clusterService = clusterService;\n+ }\n+\n+ /**\n+ * Executes a create index request and an update index alias in a single cluster task action.\n+ */\n+ public void rollover(final CreateIndexClusterStateUpdateRequest createIndexRequest,\n+ final IndicesAliasesClusterStateUpdateRequest updateAliasRequest,\n+ final ActionListener<ClusterStateUpdateResponse> listener) {\n+ final MetaDataCreateIndexService.IndexCreationTask indexCreationTask = createIndexService.indexCreationTask(createIndexRequest);\n+ clusterService.submitStateUpdateTask(\"rollover\",\n+ new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, updateAliasRequest, listener) {\n+ @Override\n+ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n+ return new ClusterStateUpdateResponse(acknowledged);\n+ }\n+\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ final ClusterState newClusterState = indexCreationTask.execute(currentState);\n+ return aliasesService.executeAliasActions(newClusterState, updateAliasRequest.actions());\n+ }\n+ });\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexRolloverService.java",
"status": "added"
},
{
"diff": "@@ -22,13 +22,15 @@\n import org.elasticsearch.ResourceAlreadyExistsException;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n+import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalSettingsPlugin;\n import org.joda.time.DateTime;\n@@ -39,6 +41,7 @@\n import java.util.Collections;\n import java.util.Map;\n import java.util.Set;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.stream.Collectors;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -277,4 +280,35 @@ public void testRolloverMaxSize() throws Exception {\n assertThat(\"No rollover with an empty index\", response.isRolledOver(), equalTo(false));\n }\n }\n+\n+ public void testIndexingAndRolloverConcurrently() throws Exception {\n+ client().admin().indices().preparePutTemplate(\"logs\")\n+ .setPatterns(Collections.singletonList(\"logs-*\"))\n+ .addAlias(new Alias(\"logs-write\"))\n+ .get();\n+ assertAcked(client().admin().indices().prepareCreate(\"logs-000001\").get());\n+ ensureYellow(\"logs-write\");\n+\n+ final AtomicBoolean done = new AtomicBoolean();\n+ final Thread rolloverThread = new Thread(() -> {\n+ while (done.get() == false) {\n+ client().admin().indices()\n+ .prepareRolloverIndex(\"logs-write\")\n+ .addMaxIndexSizeCondition(new ByteSizeValue(1))\n+ .get();\n+ }\n+ });\n+ rolloverThread.start();\n+ try {\n+ int numDocs = between(20, 500);\n+ for (int i = 0; i < numDocs; i++) {\n+ logger.info(\"--> add doc [{}]\", i);\n+ IndexResponse resp = index(\"logs-write\", \"doc\", Integer.toString(i), \"{}\");\n+ assertThat(resp.status(), equalTo(RestStatus.CREATED));\n+ }\n+ } finally {\n+ done.set(true);\n+ rolloverThread.join();\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/rollover/RolloverIT.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n import org.apache.logging.log4j.Logger;\n import org.apache.lucene.search.Sort;\n import org.elasticsearch.Version;\n-import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;\n import org.elasticsearch.action.admin.indices.shrink.ResizeType;\n@@ -87,7 +86,6 @@ public class IndexCreationTaskTests extends ESTestCase {\n private final Logger logger = mock(Logger.class);\n private final AllocationService allocationService = mock(AllocationService.class);\n private final MetaDataCreateIndexService.IndexValidator validator = mock(MetaDataCreateIndexService.IndexValidator.class);\n- private final ActionListener listener = mock(ActionListener.class);\n private final ClusterState state = mock(ClusterState.class);\n private final Settings.Builder clusterStateSettings = Settings.builder();\n private final MapperService mapper = mock(MapperService.class);\n@@ -387,7 +385,7 @@ private ClusterState executeTask() throws Exception {\n setupState();\n setupRequest();\n final MetaDataCreateIndexService.IndexCreationTask task = new MetaDataCreateIndexService.IndexCreationTask(\n- logger, allocationService, request, listener, indicesService, aliasValidator, xContentRegistry, clusterStateSettings.build(),\n+ logger, allocationService, request, indicesService, aliasValidator, xContentRegistry, clusterStateSettings.build(),\n validator\n );\n return task.execute(state);",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexCreationTaskTests.java",
"status": "modified"
},
{
"diff": "@@ -64,15 +64,15 @@ public void testAddAndRemove() {\n ClusterState before = createIndex(ClusterState.builder(ClusterName.DEFAULT).build(), index);\n \n // Add an alias to it\n- ClusterState after = service.innerExecute(before, singletonList(new AliasAction.Add(index, \"test\", null, null, null)));\n+ ClusterState after = service.executeAliasActions(before, singletonList(new AliasAction.Add(index, \"test\", null, null, null)));\n AliasOrIndex alias = after.metaData().getAliasAndIndexLookup().get(\"test\");\n assertNotNull(alias);\n assertTrue(alias.isAlias());\n assertThat(alias.getIndices(), contains(after.metaData().index(index)));\n \n // Remove the alias from it while adding another one\n before = after;\n- after = service.innerExecute(before, Arrays.asList(\n+ after = service.executeAliasActions(before, Arrays.asList(\n new AliasAction.Remove(index, \"test\"),\n new AliasAction.Add(index, \"test_2\", null, null, null)));\n assertNull(after.metaData().getAliasAndIndexLookup().get(\"test\"));\n@@ -83,7 +83,7 @@ public void testAddAndRemove() {\n \n // Now just remove on its own\n before = after;\n- after = service.innerExecute(before, singletonList(new AliasAction.Remove(index, \"test_2\")));\n+ after = service.executeAliasActions(before, singletonList(new AliasAction.Remove(index, \"test_2\")));\n assertNull(after.metaData().getAliasAndIndexLookup().get(\"test\"));\n assertNull(after.metaData().getAliasAndIndexLookup().get(\"test_2\"));\n }\n@@ -94,7 +94,7 @@ public void testSwapIndexWithAlias() {\n before = createIndex(before, \"test_2\");\n \n // Now remove \"test\" and add an alias to \"test\" to \"test_2\" in one go\n- ClusterState after = service.innerExecute(before, Arrays.asList(\n+ ClusterState after = service.executeAliasActions(before, Arrays.asList(\n new AliasAction.Add(\"test_2\", \"test\", null, null, null),\n new AliasAction.RemoveIndex(\"test\")));\n AliasOrIndex alias = after.metaData().getAliasAndIndexLookup().get(\"test\");\n@@ -108,7 +108,7 @@ public void testAddAliasToRemovedIndex() {\n ClusterState before = createIndex(ClusterState.builder(ClusterName.DEFAULT).build(), \"test\");\n \n // Attempt to add an alias to \"test\" at the same time as we remove it\n- IndexNotFoundException e = expectThrows(IndexNotFoundException.class, () -> service.innerExecute(before, Arrays.asList(\n+ IndexNotFoundException e = expectThrows(IndexNotFoundException.class, () -> service.executeAliasActions(before, Arrays.asList(\n new AliasAction.Add(\"test\", \"alias\", null, null, null),\n new AliasAction.RemoveIndex(\"test\"))));\n assertEquals(\"test\", e.getIndex().getName());\n@@ -119,7 +119,7 @@ public void testRemoveIndexTwice() {\n ClusterState before = createIndex(ClusterState.builder(ClusterName.DEFAULT).build(), \"test\");\n \n // Try to remove an index twice. This should just remove the index once....\n- ClusterState after = service.innerExecute(before, Arrays.asList(\n+ ClusterState after = service.executeAliasActions(before, Arrays.asList(\n new AliasAction.RemoveIndex(\"test\"),\n new AliasAction.RemoveIndex(\"test\")));\n assertNull(after.metaData().getAliasAndIndexLookup().get(\"test\"));",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesServiceTests.java",
"status": "modified"
}
]
} |
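The RolloverIT change in this record stress-tests rollover by indexing documents while a background thread keeps rolling the write alias over. Below is a plain-Java sketch of that harness shape; `rollover()` and `indexDoc()` are hypothetical stand-ins for the client calls made in the test (`prepareRolloverIndex("logs-write")` and `index("logs-write", ...)`), so this only illustrates the start/loop/finally-join pattern, not the Elasticsearch API itself.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ConcurrentRolloverHarness {

    // Hypothetical stand-ins for the client calls made in RolloverIT.
    static void rollover()      { /* client.admin().indices().prepareRolloverIndex("logs-write")... */ }
    static void indexDoc(int i) { /* index("logs-write", "doc", Integer.toString(i), "{}") */ }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean done = new AtomicBoolean();

        // Keep rolling the write alias over in the background.
        Thread rolloverThread = new Thread(() -> {
            while (done.get() == false) {
                rollover();
            }
        });
        rolloverThread.start();

        try {
            // Meanwhile, every indexed document should land in whichever
            // index currently backs the alias, with no writes lost.
            for (int i = 0; i < 100; i++) {
                indexDoc(i);
            }
        } finally {
            done.set(true);
            rolloverThread.join();
        }
    }
}
```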
{
"body": "Be able to update child type mapping without specifying it's `_parent` field, because it should be considered as unchanged if it is not explicitly specified.\r\nIn this case, the merged mapper's `parentType` field should be `null`, so we can just merge it to keep the original one instead of throwing an exception of \"trying to change it to 'null' \"\r\n\r\nClose #23381 ",
"comments": [
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-04-29T19:49:44Z"
},
{
"body": "@colings86 please could somebody review this",
"created_at": "2017-05-26T16:15:21Z"
},
{
"body": "Thanks for your comments @martijnvg, I added a merge case with current parent field while I found a new problem: I cannot find the `_parent` field with this assertion didn't pass\r\n`assertThat(mergedMapper2.mappers().getMapper(\"_parent\"), notNullValue());`\r\n\r\nAnd then I found that the parent field name of a document is in format: `_parent#[parent type]`, so in the test it should be `_parent#parent`, if a new mapper without parent field, it's name is just `_parent`, and when we do the merge we firstly call the `super.doMerge` https://github.com/PnPie/elasticsearch/blob/346889a93bb18cda93736a8076aef198b8f86fe1/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java#L298 method of `ParentFieldMapper` and it always replaces the `fieldType` with the new one's, so in consequence the parent field name is changed to the new one's.\r\n\r\nSo previously after this we can find the `_parent` field with\r\n`assertThat(mergedMapper.mappers().getMapper(\"_parent\"), notNullValue());`\r\nbut in fact the parent field shouldn't be changed and it should be `_parent#parent`, so I modified in `ParentFieldMapper`'s `doMerge` method and make it keep the original parent field name.\r\n\r\nAnd I also add a test for trying to add parent field to an exiting mapper without parent.",
"created_at": "2017-06-01T14:35:39Z"
},
{
"body": "Cool, looks good @PnPie. I'll merge it in soon and backport to 5.x branch.",
"created_at": "2017-06-02T13:38:58Z"
}
],
"number": 24407,
"title": "keep _parent field while updating child type mapping"
} | {
"body": "A bug introduced in #24407 currently prevents `eager_global_ordinals` from\r\nbeing updated. This new approach should fix the issue while still allowing\r\nmapping updates to not specify the `_parent` field if it doesn't need\r\nupdating, which was the goal of #24407.",
"number": 28014,
"review_comments": [],
"title": "Allow update of `eager_global_ordinals` on `_parent`."
} | {
"commits": [
{
"message": "Allow update of `eager_global_ordinals` on `_parent`.\n\nA bug introduced in #24407 currently prevents `eager_global_ordinals` from\nbeing updated. This new approach should fix the issue while still allowing\nmapping updates to not specify the `_parent` field if it doesn't need\nupdating, which was the goal of #24407."
}
],
"files": [
{
"diff": "@@ -303,15 +303,16 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n @Override\n protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith;\n- ParentFieldType currentFieldType = (ParentFieldType) fieldType.clone();\n- super.doMerge(mergeWith, updateAllTypes);\n if (fieldMergeWith.parentType != null && Objects.equals(parentType, fieldMergeWith.parentType) == false) {\n throw new IllegalArgumentException(\"The _parent field's type option can't be changed: [\" + parentType + \"]->[\" + fieldMergeWith.parentType + \"]\");\n }\n-\n- if (active()) {\n- fieldType = currentFieldType;\n+ // If fieldMergeWith is not active it means the user provided a mapping\n+ // update that does not explicitly configure the _parent field, so we\n+ // ignore it.\n+ if (fieldMergeWith.active()) {\n+ super.doMerge(mergeWith, updateAllTypes);\n }\n+\n }\n \n /**",
"filename": "server/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.test.IndexSettingsModule;\n import org.elasticsearch.test.InternalSettingsPlugin;\n \n+import java.io.IOException;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashSet;\n@@ -138,4 +139,23 @@ private static int getNumberOfFieldWithParentPrefix(ParseContext.Document doc) {\n return numFieldWithParentPrefix;\n }\n \n+ public void testUpdateEagerGlobalOrds() throws IOException {\n+ String parentMapping = XContentFactory.jsonBuilder().startObject().startObject(\"parent_type\")\n+ .endObject().endObject().string();\n+ String childMapping = XContentFactory.jsonBuilder().startObject().startObject(\"child_type\")\n+ .startObject(\"_parent\").field(\"type\", \"parent_type\").endObject()\n+ .endObject().endObject().string();\n+ IndexService indexService = createIndex(\"test\", Settings.builder().put(\"index.version.created\", Version.V_5_6_0).build());\n+ indexService.mapperService().merge(\"parent_type\", new CompressedXContent(parentMapping), MergeReason.MAPPING_UPDATE, false);\n+ indexService.mapperService().merge(\"child_type\", new CompressedXContent(childMapping), MergeReason.MAPPING_UPDATE, false);\n+\n+ assertTrue(indexService.mapperService().documentMapper(\"child_type\").parentFieldMapper().fieldType().eagerGlobalOrdinals());\n+\n+ String childMappingUpdate = XContentFactory.jsonBuilder().startObject().startObject(\"child_type\")\n+ .startObject(\"_parent\").field(\"type\", \"parent_type\").field(\"eager_global_ordinals\", false).endObject()\n+ .endObject().endObject().string();\n+ indexService.mapperService().merge(\"child_type\", new CompressedXContent(childMappingUpdate), MergeReason.MAPPING_UPDATE, false);\n+\n+ assertFalse(indexService.mapperService().documentMapper(\"child_type\").parentFieldMapper().fieldType().eagerGlobalOrdinals());\n+ }\n }",
"filename": "server/src/test/java/org/elasticsearch/index/mapper/ParentFieldMapperTests.java",
"status": "modified"
}
]
} |
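A self-contained sketch of the merge rule the patch in this record implements: an incoming `_parent` configuration is only applied when it is explicitly set (active), a type change is rejected, and options such as `eager_global_ordinals` can still be updated. This is a simplified model with hypothetical `ParentConfig`/`merge` names, not the real `ParentFieldMapper` API.

```java
public class ParentFieldMergeSketch {

    static final class ParentConfig {
        final String parentType;           // null when a mapping update does not mention _parent
        final boolean eagerGlobalOrdinals;

        ParentConfig(String parentType, boolean eagerGlobalOrdinals) {
            this.parentType = parentType;
            this.eagerGlobalOrdinals = eagerGlobalOrdinals;
        }

        boolean active() {
            return parentType != null;
        }
    }

    static ParentConfig merge(ParentConfig current, ParentConfig update) {
        if (update.parentType != null && update.parentType.equals(current.parentType) == false) {
            throw new IllegalArgumentException("The _parent field's type option can't be changed: ["
                + current.parentType + "]->[" + update.parentType + "]");
        }
        // An update that never configures _parent is ignored; an active update
        // is merged, which allows options like eager_global_ordinals to change.
        return update.active() ? update : current;
    }

    public static void main(String[] args) {
        ParentConfig current = new ParentConfig("parent_type", true);
        System.out.println(merge(current, new ParentConfig(null, false)).eagerGlobalOrdinals);          // true  (update ignored)
        System.out.println(merge(current, new ParentConfig("parent_type", false)).eagerGlobalOrdinals); // false (ordinals updated)
    }
}
```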
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Feature request -->\r\n\r\n**Describe the feature**:\r\n\r\n<!-- Bug report -->\r\n\r\n*Elasticsearch version** (`bin/elasticsearch --version`):\r\n\r\n6.1.1\r\n\r\n**Plugins installed**: [LTR]\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\n(venv) doug@wiz$~/ws/elasticsearch-learning-to-rank(es6) $ java -version\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\n```\r\nDarwin wiz.local 17.3.0 Darwin Kernel Version 17.3.0: Thu Nov 9 18:09:22 PST 2017; root:xnu-4570.31.3~1/RELEASE_X86_64 x86_64\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nRescore query weights are not carried through when the rescore query is rewritten. Debugging shows that weights appear to be dropped when [query rescore builder is rewritten](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java#L244). \r\n\r\nYou can see in the code: \r\n\r\n```\r\n @Override\r\n public QueryRescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException {\r\n QueryBuilder rewrite = queryBuilder.rewrite(ctx);\r\n if (rewrite == queryBuilder) {\r\n return this;\r\n }\r\n return new QueryRescorerBuilder(rewrite);\r\n }\r\n```\r\n\r\nThat various rescore settings are not carried through to the rewritten QueryRescoreBuilder. Only the rewritten query builder is carried through, not the weights that are also part of this. IE I would expect something like:\r\n\r\n```\r\n @Override\r\n public QueryRescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException {\r\n QueryBuilder rewrite = queryBuilder.rewrite(ctx);\r\n if (rewrite == queryBuilder) {\r\n return this;\r\n }\r\n return new QueryRescorerBuilder(rewrite)\r\n .setRescoreQueryWeight(rescoreQueryWeight) \r\n . (additional settings carried through )\r\n }\r\n```\r\n\r\n**Steps to reproduce**:\r\n\r\nSee this [integration test](https://github.com/o19s/elasticsearch-learning-to-rank/blob/es6_1/src/test/java/com/o19s/es/ltr/query/StoredLtrQueryIT.java#L58)\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n",
"comments": [
{
"body": "Do not know the protocol, but should this be closed with the merge of #27981 ?",
"created_at": "2017-12-26T20:06:27Z"
},
{
"body": "I forgot to add the magic words in the commit message. Thanks for checking.",
"created_at": "2017-12-26T21:06:29Z"
}
],
"number": 27979,
"title": "Rescoring drops query weight / rescore weight when rescore query retwritten"
} | {
"body": "When rewriting a query rescorer, the weights and score mode need to be carried to the new rescorer\r\n \r\nFixes #27979",
"number": 27981,
"review_comments": [],
"title": "Carry forward weights, etc on rescore rewrite"
} | {
"commits": [
{
"message": "Carry forward weights, etc on rescore rewrite"
}
],
"files": [
{
"diff": "@@ -246,6 +246,10 @@ public QueryRescorerBuilder rewrite(QueryRewriteContext ctx) throws IOException\n if (rewrite == queryBuilder) {\n return this;\n }\n- return new QueryRescorerBuilder(rewrite);\n+ QueryRescorerBuilder queryRescoreBuilder = new QueryRescorerBuilder(rewrite);\n+ queryRescoreBuilder.setQueryWeight(queryWeight);\n+ queryRescoreBuilder.setRescoreQueryWeight(rescoreQueryWeight);\n+ queryRescoreBuilder.setScoreMode(scoreMode);\n+ return queryRescoreBuilder;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/rescore/QueryRescorerBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,14 @@\n \n package org.elasticsearch.search.rescore;\n \n+import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -38,8 +40,11 @@\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.TextFieldMapper;\n+import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n+import org.elasticsearch.index.query.MatchQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n+import org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.search.SearchModule;\n@@ -165,6 +170,45 @@ public void testRescoreQueryNull() throws IOException {\n assertEquals(\"rescore_query cannot be null\", e.getMessage());\n }\n \n+ class AlwaysRewriteQueryBuilder extends MatchAllQueryBuilder {\n+\n+ protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) throws IOException {\n+ return new MatchAllQueryBuilder();\n+ }\n+ }\n+\n+ public void testRewritingKeepsSettings() throws IOException {\n+\n+ final long nowInMillis = randomNonNegativeLong();\n+ Settings indexSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n+ IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n+ // shard context will only need indicesQueriesRegistry for building Query objects nested in query rescorer\n+ QueryShardContext mockShardContext = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n+ namedWriteableRegistry, null, null, () -> nowInMillis, null) {\n+ @Override\n+ public MappedFieldType fieldMapper(String name) {\n+ TextFieldMapper.Builder builder = new TextFieldMapper.Builder(name);\n+ return builder.build(new Mapper.BuilderContext(idxSettings.getSettings(), new ContentPath(1))).fieldType();\n+ }\n+ };\n+\n+ QueryBuilder rewriteQb = new AlwaysRewriteQueryBuilder();\n+ org.elasticsearch.search.rescore.QueryRescorerBuilder rescoreBuilder = new\n+ org.elasticsearch.search.rescore.QueryRescorerBuilder(rewriteQb);\n+\n+ rescoreBuilder.setQueryWeight(randomFloat());\n+ rescoreBuilder.setRescoreQueryWeight(randomFloat());\n+ rescoreBuilder.setScoreMode(QueryRescoreMode.Max);\n+\n+ QueryRescoreContext rescoreContext = (QueryRescoreContext) rescoreBuilder.buildContext(mockShardContext);\n+ QueryRescorerBuilder rescoreRewritten = rescoreBuilder.rewrite(mockShardContext);\n+ assertEquals(rescoreRewritten.getQueryWeight(), rescoreBuilder.getQueryWeight(), 0.01f);\n+ assertEquals(rescoreRewritten.getRescoreQueryWeight(), rescoreBuilder.getRescoreQueryWeight(), 0.01f);\n+ assertEquals(rescoreRewritten.getScoreMode(), rescoreBuilder.getScoreMode());\n+\n+ }\n+\n /**\n * test parsing exceptions for incorrect rescorer syntax\n */",
"filename": "core/src/test/java/org/elasticsearch/search/rescore/QueryRescorerBuilderTests.java",
"status": "modified"
}
]
} |
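For reference, a minimal sketch of the builder settings that the rewrite now carries forward, assuming the Elasticsearch 6.x core jar is on the classpath; the field name and weight values are illustrative only.

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.rescore.QueryRescoreMode;
import org.elasticsearch.search.rescore.QueryRescorerBuilder;

public class RescoreSettingsExample {
    public static void main(String[] args) {
        // Configure a rescorer with non-default settings; with the fix above,
        // rewriting the rescorer preserves all three instead of resetting them.
        QueryRescorerBuilder rescorer = new QueryRescorerBuilder(QueryBuilders.matchQuery("title", "run"));
        rescorer.setQueryWeight(0.7f);
        rescorer.setRescoreQueryWeight(1.2f);
        rescorer.setScoreMode(QueryRescoreMode.Max);

        // The getters exercised by the new testRewritingKeepsSettings() test.
        System.out.println(rescorer.getQueryWeight());
        System.out.println(rescorer.getRescoreQueryWeight());
        System.out.println(rescorer.getScoreMode());
    }
}
```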
{
"body": "**Elasticsearch version**\r\nVersion: 6.1.0, Build: c0c1ba0/2017-12-12T12:32:54.550Z, JVM: 1.8.0_131\r\nVersion: 5.6.5, Build: 6a37571/2017-12-04T07:50:10.466Z, JVM: 1.8.0_144\r\n\r\n**JVM version**:\r\n1.8.0_131\r\n\r\n**OS version**\r\nUbuntu 14.04 - Linux 4.4.0-93-generic #116~14.04.1-Ubuntu\r\nArch Linux - Linux 4.14.6-1-ARCH\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAggregations with nested and filter aggregation changed from ES 5.6.5, returning wrong bucket count.\r\n\r\nExpected behavior:\r\nIt should return correct bucket count when using both nested and filter aggregation\r\n\r\nActual behavior:\r\nWrong bucket count, including some null keys\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create a new index with nested fields\r\n```\r\nPUT /catalog\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1\r\n },\r\n \"mappings\": {\r\n \"product\": {\r\n \"properties\": {\r\n \"name\": { \"type\": \"text\" },\r\n \"attributes\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"key\": { \"type\": \"keyword\" },\r\n \"value\": { \"type\": \"keyword\" }\r\n }\r\n },\r\n \"ranges\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"key\": { \"type\": \"keyword\" },\r\n \"value\": { \"type\": \"double\" }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n 2. Index some documents\r\n```\r\nPUT /catalog/product/1\r\n{\r\n \"name\": \"Product 1\",\r\n \"attributes\": [\r\n { \"key\": \"category\", \"value\": \"t-shirts\" },\r\n { \"key\": \"brand\", \"value\": \"fake1\" }\r\n ],\r\n \"ranges\": [ { \"key\": \"price\", \"value\": 10 } ]\r\n}\r\n\r\nPUT /catalog/product/2\r\n{\r\n \"name\": \"Product 2\",\r\n \"attributes\": [\r\n { \"key\": \"category\", \"value\": \"shoes\" },\r\n { \"key\": \"brand\", \"value\": \"fake2\" }\r\n ],\r\n \"ranges\": [ { \"key\": \"price\", \"value\": 100 } ]\r\n}\r\n\r\nPUT /catalog/product/3\r\n{\r\n \"name\": \"Product 3\",\r\n \"attributes\": [\r\n { \"key\": \"category\", \"value\": \"candy\" },\r\n { \"key\": \"brand\", \"value\": \"fake3\" }\r\n ],\r\n \"ranges\": [ { \"key\": \"price\", \"value\": 5 } ]\r\n}\r\n```\r\n\r\n 3. Do a nested aggregation to create facets (match_all is just for this test, there are some filters)\r\n```\r\nGET /catalog/product/_search\r\n{\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"1\": {\r\n \"filter\": {\r\n \"match_all\": {}\r\n },\r\n \"aggs\": {\r\n \"2\": {\r\n \"nested\": {\r\n \"path\": \"attributes\"\r\n },\r\n \"aggs\": {\r\n \"3\": {\r\n \"terms\": {\r\n \"field\": \"attributes.key\",\r\n \"size\": 100\r\n },\r\n \"aggs\": {\r\n \"4\": {\r\n \"terms\": {\r\n \"field\": \"attributes.value\",\r\n \"size\": 100\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n4. Do a nested aggregation using script (notice the \"key\": \"null1\")\r\n```\r\nGET /catalog/product/_search\r\n{\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"1\": {\r\n \"filter\": {\r\n \"match_all\": {}\r\n },\r\n \"aggs\": {\r\n \"2\": {\r\n \"nested\": {\r\n \"path\": \"attributes\"\r\n },\r\n \"aggs\": {\r\n \"3\": {\r\n \"terms\": {\r\n \"script\": \"doc['attributes.key'].value + '1'\",\r\n \"size\": 100\r\n },\r\n \"aggs\": {\r\n \"4\": {\r\n \"terms\": {\r\n \"field\": \"attributes.value\",\r\n \"size\": 100\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nReturned values with ES 6.1.x (only aggregation node)\r\nWhere is brand fake3 and category candy? 
Why the null1?\r\n\r\n```\r\n{\r\n \"1\": {\r\n \"2\": {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"fake1\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake2\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"brand\",\r\n \"doc_count\": 2\r\n },\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"shoes\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"t-shirts\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"category\",\r\n \"doc_count\": 2\r\n }\r\n ]\r\n },\r\n \"doc_count\": 6\r\n },\r\n \"doc_count\": 3\r\n }\r\n}\r\n```\r\n```\r\n{\r\n \"1\": {\r\n \"2\": {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"fake1\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake2\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"brand1\",\r\n \"doc_count\": 2\r\n },\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"shoes\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"t-shirts\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"category1\",\r\n \"doc_count\": 2\r\n },\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"fake2\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"shoes\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"null1\",\r\n \"doc_count\": 2\r\n }\r\n ]\r\n },\r\n \"doc_count\": 6\r\n },\r\n \"doc_count\": 3\r\n }\r\n}\r\n```\r\n\r\nReturned values with ES 5.6.5 (only aggregation node)\r\nThis version returns both fake3 and candy buckets and no null keys\r\n\r\n```\r\n{\r\n \"1\": {\r\n \"2\": {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"fake1\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake2\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake3\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"brand\",\r\n \"doc_count\": 3\r\n },\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"candy\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"shoes\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"t-shirts\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"category\",\r\n \"doc_count\": 3\r\n }\r\n ]\r\n },\r\n \"doc_count\": 6\r\n },\r\n \"doc_count\": 3\r\n }\r\n}\r\n```\r\n```\r\n{\r\n \"1\": {\r\n \"2\": {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"fake1\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake2\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"fake3\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"brand1\",\r\n \"doc_count\": 3\r\n },\r\n {\r\n \"4\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n 
\"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"candy\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"shoes\",\r\n \"doc_count\": 1\r\n },\r\n {\r\n \"key\": \"t-shirts\",\r\n \"doc_count\": 1\r\n }\r\n ]\r\n },\r\n \"key\": \"category1\",\r\n \"doc_count\": 3\r\n }\r\n ]\r\n },\r\n \"doc_count\": 6\r\n },\r\n \"doc_count\": 3\r\n }\r\n}\r\n```\r\n\r\nI'm not sure if this is a bug or a changed feature, but I wanna understand why this behavior is different with ES 6.1.x.",
"comments": [
{
"body": "@martijnvg could you take a look please?",
"created_at": "2017-12-20T09:15:32Z"
},
{
"body": "@felipe-fg Thanks for reporting! The bug has been fixed.",
"created_at": "2017-12-21T18:41:07Z"
},
{
"body": "@martijnvg Thank you for fixing it that fast.",
"created_at": "2017-12-21T20:50:22Z"
}
],
"number": 27912,
"title": "Aggregations using nested and filter is different on ES 6.1.x"
} | {
"body": "Add a method that is invoked before the `getLeafCollector(...)` of children aggregators is invoked.\r\n\r\nIn the case of nested aggregator this allows it to push down buffered child docs down to children aggregator.\r\nBefore this was done as part of the `NestedAggregator#getLeafCollector(...)`, but by then the children aggregators\r\nhave already moved on to the next segment and this causes incorrect results to be produced.\r\n\r\nCloses #27912\r\n",
"number": 27946,
"review_comments": [
{
"body": "I think we have called these subLeafCollectors elsewhere so maybe we should call this `preGetSubLeafCollectors()` to avoid confusion with parent-child aggs?",
"created_at": "2017-12-21T16:49:14Z"
},
{
"body": "nit: It seems weird to call `doPostCollection()` during collection. Maybe we should have a separate method that both call?",
"created_at": "2017-12-21T17:11:13Z"
}
],
"title": "Fix incorrect results for aggregations nested under a nested aggregation"
} | {
"commits": [
{
"message": "aggs: Add a method that is invoked before the `getLeafCollector(...)` of children aggregators is invoked.\n\nIn the case of nested aggregator this allows it to push down buffered child docs down to children aggregator.\nBefore this was done as part of the `NestedAggregator#getLeafCollector(...)`, but by then the children aggregators\nhave already moved on to the next segment and this causes incorrect results to be produced.\n\nCloses #27912"
}
],
"files": [
{
"diff": "@@ -105,7 +105,7 @@ public boolean needsScores() {\n };\n addRequestCircuitBreakerBytes(DEFAULT_WEIGHT);\n }\n- \n+\n /**\n * Increment or decrement the number of bytes that have been allocated to service\n * this request and potentially trigger a {@link CircuitBreakingException}. The\n@@ -114,7 +114,7 @@ public boolean needsScores() {\n * If memory has been returned, decrement it without tripping the breaker.\n * For performance reasons subclasses should not call this millions of times\n * each with small increments and instead batch up into larger allocations.\n- * \n+ *\n * @param bytes the number of bytes to register or negative to deregister the bytes\n * @return the cumulative size in bytes allocated by this aggregator to service this request\n */\n@@ -162,10 +162,18 @@ public List<PipelineAggregator> pipelineAggregators() {\n \n @Override\n public final LeafBucketCollector getLeafCollector(LeafReaderContext ctx) throws IOException {\n+ preGetSubLeafCollectors();\n final LeafBucketCollector sub = collectableSubAggregators.getLeafCollector(ctx);\n return getLeafCollector(ctx, sub);\n }\n \n+ /**\n+ * Can be overridden by aggregator implementations that like the perform an operation before the leaf collectors\n+ * of children aggregators are instantiated for the next segment.\n+ */\n+ protected void preGetSubLeafCollectors() throws IOException {\n+ }\n+\n /**\n * Can be overridden by aggregator implementation to be called back when the collection phase starts.\n */",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorBase.java",
"status": "modified"
},
{
"diff": "@@ -102,13 +102,21 @@ public void collect(int parentDoc, long bucket) throws IOException {\n }\n };\n } else {\n- doPostCollection();\n return bufferingNestedLeafBucketCollector = new BufferingNestedLeafBucketCollector(sub, parentDocs, childDocs);\n }\n }\n \n+ @Override\n+ protected void preGetSubLeafCollectors() throws IOException {\n+ processBufferedDocs();\n+ }\n+\n @Override\n protected void doPostCollection() throws IOException {\n+ processBufferedDocs();\n+ }\n+\n+ private void processBufferedDocs() throws IOException {\n if (bufferingNestedLeafBucketCollector != null) {\n bufferingNestedLeafBucketCollector.postCollect();\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.SortedDocValuesField;\n import org.apache.lucene.document.SortedNumericDocValuesField;\n import org.apache.lucene.document.SortedSetDocValuesField;\n import org.apache.lucene.index.DirectoryReader;\n@@ -45,9 +46,13 @@\n import org.elasticsearch.index.mapper.SeqNoFieldMapper;\n import org.elasticsearch.index.mapper.TypeFieldMapper;\n import org.elasticsearch.index.mapper.UidFieldMapper;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n+import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n import org.elasticsearch.search.aggregations.metrics.max.InternalMax;\n@@ -523,6 +528,118 @@ public void testNestedOrdering_random() throws IOException {\n }\n }\n \n+ public void testPreGetChildLeafCollectors() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {\n+ List<Document> documents = new ArrayList<>();\n+ Document document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#1\", UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key1\")));\n+ document.add(new SortedDocValuesField(\"value\", new BytesRef(\"a1\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#1\", UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key2\")));\n+ document.add(new SortedDocValuesField(\"value\", new BytesRef(\"b1\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#1\", UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"_doc\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(sequenceIDFields.primaryTerm);\n+ documents.add(document);\n+ iw.addDocuments(documents);\n+ iw.commit();\n+ documents.clear();\n+\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#2\", UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key1\")));\n+ document.add(new SortedDocValuesField(\"value\", new BytesRef(\"a2\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#2\", UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key2\")));\n+ 
document.add(new SortedDocValuesField(\"value\", new BytesRef(\"b2\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#2\", UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"_doc\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(sequenceIDFields.primaryTerm);\n+ documents.add(document);\n+ iw.addDocuments(documents);\n+ iw.commit();\n+ documents.clear();\n+\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#3\", UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key1\")));\n+ document.add(new SortedDocValuesField(\"value\", new BytesRef(\"a3\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#3\", UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_field\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedDocValuesField(\"key\", new BytesRef(\"key2\")));\n+ document.add(new SortedDocValuesField(\"value\", new BytesRef(\"b3\")));\n+ documents.add(document);\n+ document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"_doc#1\", UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"_doc\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(sequenceIDFields.primaryTerm);\n+ documents.add(document);\n+ iw.addDocuments(documents);\n+ iw.commit();\n+ }\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n+ TermsAggregationBuilder valueBuilder = new TermsAggregationBuilder(\"value\", ValueType.STRING).field(\"value\");\n+ TermsAggregationBuilder keyBuilder = new TermsAggregationBuilder(\"key\", ValueType.STRING).field(\"key\");\n+ keyBuilder.subAggregation(valueBuilder);\n+ NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(NESTED_AGG, \"nested_field\");\n+ nestedBuilder.subAggregation(keyBuilder);\n+ FilterAggregationBuilder filterAggregationBuilder = new FilterAggregationBuilder(\"filterAgg\", new MatchAllQueryBuilder());\n+ filterAggregationBuilder.subAggregation(nestedBuilder);\n+\n+ MappedFieldType fieldType1 = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType1.setName(\"key\");\n+ fieldType1.setHasDocValues(true);\n+ MappedFieldType fieldType2 = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType2.setName(\"value\");\n+ fieldType2.setHasDocValues(true);\n+\n+ Filter filter = search(newSearcher(indexReader, false, true),\n+ Queries.newNonNestedFilter(Version.CURRENT), filterAggregationBuilder, fieldType1, fieldType2);\n+\n+ assertEquals(\"filterAgg\", filter.getName());\n+ assertEquals(3L, filter.getDocCount());\n+\n+ Nested nested = filter.getAggregations().get(NESTED_AGG);\n+ assertEquals(6L, nested.getDocCount());\n+\n+ StringTerms keyAgg = nested.getAggregations().get(\"key\");\n+ assertEquals(2, keyAgg.getBuckets().size());\n+ Terms.Bucket key1 = keyAgg.getBuckets().get(0);\n+ assertEquals(\"key1\", key1.getKey());\n+ StringTerms valueAgg = key1.getAggregations().get(\"value\");\n+ assertEquals(3, valueAgg.getBuckets().size());\n+ assertEquals(\"a1\", valueAgg.getBuckets().get(0).getKey());\n+ assertEquals(\"a2\", valueAgg.getBuckets().get(1).getKey());\n+ assertEquals(\"a3\", valueAgg.getBuckets().get(2).getKey());\n+\n+ Terms.Bucket key2 = 
keyAgg.getBuckets().get(1);\n+ assertEquals(\"key2\", key2.getKey());\n+ valueAgg = key2.getAggregations().get(\"value\");\n+ assertEquals(3, valueAgg.getBuckets().size());\n+ assertEquals(\"b1\", valueAgg.getBuckets().get(0).getKey());\n+ assertEquals(\"b2\", valueAgg.getBuckets().get(1).getKey());\n+ assertEquals(\"b3\", valueAgg.getBuckets().get(2).getKey());\n+ }\n+ }\n+ }\n+\n private double generateMaxDocs(List<Document> documents, int numNestedDocs, int id, String path, String fieldName) {\n return DoubleStream.of(generateDocuments(documents, numNestedDocs, id, path, fieldName))\n .max().orElse(Double.NEGATIVE_INFINITY);",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java",
"status": "modified"
}
]
} |
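A toy, plain-Java model of the ordering that `preGetSubLeafCollectors()` introduces in this record: buffered child documents are flushed before the children are handed a leaf collector for the next segment. The class and method names here (`startSegment`, `buffer`) are hypothetical and only mirror the hook's call order, not the real `Aggregator` API.

```java
public class PreLeafHookSketch {

    abstract static class Collector {
        // Mirrors AggregatorBase#getLeafCollector(ctx): the pre-hook runs before
        // children are asked for a leaf collector on the new segment.
        final void startSegment(String segment) {
            preGetSubLeafCollectors();
            doStartSegment(segment);
        }
        protected void preGetSubLeafCollectors() {}
        protected abstract void doStartSegment(String segment);
        protected void postCollection() {}
    }

    static class BufferingNestedCollector extends Collector {
        private final StringBuilder buffered = new StringBuilder();

        void buffer(String childDoc) {
            buffered.append(childDoc).append(' ');
        }

        private void flushBuffered() {
            if (buffered.length() > 0) {
                System.out.println("replaying buffered child docs: " + buffered);
                buffered.setLength(0);
            }
        }

        @Override
        protected void preGetSubLeafCollectors() {
            flushBuffered(); // push buffered docs down before children move to the next segment
        }

        @Override
        protected void postCollection() {
            flushBuffered(); // also flush at the very end of collection
        }

        @Override
        protected void doStartSegment(String segment) {
            System.out.println("children now collecting " + segment);
        }
    }

    public static void main(String[] args) {
        BufferingNestedCollector nested = new BufferingNestedCollector();
        nested.startSegment("segment-1");
        nested.buffer("childOf(doc1)");
        nested.startSegment("segment-2"); // buffered docs from segment-1 are replayed first
        nested.buffer("childOf(doc2)");
        nested.postCollection();
    }
}
```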
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.1.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\nopenjdk version \"1.8.0_151\"\r\nOpenJDK Runtime Environment (build 1.8.0_151-b12)\r\nOpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\nLinux xxxx 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nI just make a query with some filters and retrieve the corresponding documents. Also, I try to add aggregations about these documents with a `NestedAggregation` and **without any filter** used in the `query` so I used at the very beginning the `GlobalAggregation`.\r\n\r\nWhen I make the request I get a `Null Pointer exception` from elastic log. However, I get results from aggregations but also, an error.\r\n\r\nRequest:\r\n\r\n```\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"nested\": {\r\n \"path\": \"attributes\",\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"nested\": {\r\n \"path\": \"attributes.value\",\r\n \"query\": {\r\n \"term\": {\r\n \"attributes.value.group_value\": \"4\"\r\n }\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"attributes.id\": \"2\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"path\": \"path\",\r\n \"query\": {\r\n \"term\": {\r\n \"path.id_category\": \"1\"\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"aggregations\": {\r\n \"all_wine_types\": {\r\n \"global\": {},\r\n \"aggregations\": {\r\n \"all_wine_types\": {\r\n \"nested\": {\r\n \"path\": \"attributes\"\r\n },\r\n \"aggregations\": {\r\n \"all_wine_types\": {\r\n \"filter\": {\r\n \"term\": {\r\n \"attributes.id\": \"2\"\r\n }\r\n },\r\n \"aggregations\": {\r\n \"all_wine_type_values\": {\r\n \"nested\": {\r\n \"path\": \"attributes.value\"\r\n },\r\n \"aggregations\": {\r\n \"wine_type_value\": {\r\n \"terms\": {\r\n \"field\": \"attributes.value.group_value\"\r\n },\r\n \"aggregations\": {\r\n \"data\": {\r\n \"reverse_nested\": {},\r\n \"aggregations\": {\r\n \"data\": {\r\n \"top_hits\": {\r\n \"size\": 1,\r\n \"_source\": {\r\n \"includes\": [\r\n \"attributes\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"root_data\": {\r\n \"reverse_nested\": {},\r\n \"aggregations\": {\r\n \"data\": {\r\n \"top_hits\": {\r\n \"size\": 1,\r\n \"_source\": {\r\n \"includes\": [\r\n \"path\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"size\": 0\r\n}\r\n\r\n```\r\n\r\nResponse:\r\n\r\n```\r\n{\r\n \"took\": 45,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 4,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 4,\r\n \"index\": \"xxxxx\",\r\n \"node\": \"4__8482PTbacWtnUnX4d8g\",\r\n \"reason\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\": {\r\n \"total\": 18544,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n```\r\n\r\n**Steps to reproduce**:\r\n\r\nNo steps!\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n```\r\n2017-12-20T15:54:54,361][DEBUG][o.e.a.s.TransportSearchAction] [4__8482] [xxxxx][4], node[4__8482PTbacWtnUnX4d8g], [P], s[STARTED], a[id=LOwrt7QKRK6bdrhhakAkRg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[xxxxx], 
indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[product], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"size\" : 0,\r\n \"query\" : {\r\n \"bool\" : {\r\n \"filter\" : [\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"term\" : {\r\n \"attributes.value.group_value\" : {\r\n \"value\" : \"4\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n \"path\" : \"attributes.value\",\r\n \"ignore_unmapped\" : false,\r\n \"score_mode\" : \"avg\",\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"term\" : {\r\n \"attributes.id\" : {\r\n \"value\" : \"2\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"path\" : \"attributes\",\r\n \"ignore_unmapped\" : false,\r\n \"score_mode\" : \"avg\",\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"term\" : {\r\n \"path.id_category\" : {\r\n \"value\" : \"1\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n \"path\" : \"path\",\r\n \"ignore_unmapped\" : false,\r\n \"score_mode\" : \"avg\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"aggregations\" : {\r\n \"all_wine_types\" : {\r\n \"global\" : { },\r\n \"aggregations\" : {\r\n \"all_wine_types\" : {\r\n \"nested\" : {\r\n \"path\" : \"attributes\"\r\n },\r\n \"aggregations\" : {\r\n \"all_wine_types\" : {\r\n \"filter\" : {\r\n \"term\" : {\r\n \"attributes.id\" : {\r\n \"value\" : \"2\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n \"aggregations\" : {\r\n \"all_wine_type_values\" : {\r\n \"nested\" : {\r\n \"path\" : \"attributes.value\"\r\n },\r\n \"aggregations\" : {\r\n \"wine_type_value\" : {\r\n \"terms\" : {\r\n \"field\" : \"attributes.value.group_value\",\r\n \"size\" : 10,\r\n \"shard_size\" : -1,\r\n \"min_doc_count\" : 1,\r\n \"shard_min_doc_count\" : 0,\r\n \"show_term_doc_count_error\" : false,\r\n \"order\" : [\r\n {\r\n \"_count\" : \"desc\"\r\n },\r\n {\r\n \"_term\" : \"asc\"\r\n }\r\n ]\r\n },\r\n \"aggregations\" : {\r\n \"data\" : {\r\n \"reverse_nested\" : { },\r\n \"aggregations\" : {\r\n \"data\" : {\r\n \"top_hits\" : {\r\n \"from\" : 0,\r\n \"size\" : 1,\r\n \"version\" : false,\r\n \"explain\" : false,\r\n \"_source\" : {\r\n \"includes\" : [\r\n \"attributes\"\r\n ],\r\n \"excludes\" : [ ]\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"root_data\" : {\r\n \"reverse_nested\" : { },\r\n \"aggregations\" : {\r\n \"data\" : {\r\n \"top_hits\" : {\r\n \"from\" : 0,\r\n \"size\" : 1,\r\n \"version\" : false,\r\n \"explain\" : false,\r\n \"_source\" : {\r\n \"includes\" : [\r\n \"path\"\r\n ],\r\n \"excludes\" : [ ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"ext\" : { }\r\n}}]\r\norg.elasticsearch.transport.RemoteTransportException: [4__8482][192.168.1.12:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.NullPointerException\r\n```\r\n\r\nThank you so much!",
"comments": [
{
"body": "Hey @xserrat, did you see a stacktrace with a NullPointerException in the logs? Also does this issue reproduce on a more recent ES version? (at least 5.6.x?) ",
"created_at": "2017-12-21T09:30:46Z"
},
{
"body": "Hi @martijnvg !\r\n\r\nI didn't see nothing else :( The whole log error is what I pasted before... \r\n\r\nAbout the version, I'm using that one because we've not migrated to the latest but I will try to index the documents with the version you've said and I'll tell you what happen then!",
"created_at": "2017-12-21T13:43:37Z"
},
{
"body": "Sorry, finally I haven't had enough time to try the query with a latest version. Also, I decide to change the logic of my app to prevent the use of Global Aggregation.",
"created_at": "2018-01-06T09:48:45Z"
}
],
"number": 27928,
"title": "Null pointer exception using Global Aggregation and then Nested Aggregation"
} | {
"body": "This change fixes the deferring collector when it is executed in a global context\r\nwith a sub collector thats requires to access scores (e.g. top_hits aggregation).\r\nThe deferring collector replays the best buckets for each document and re-executes the original query\r\nif scores are needed. When executed in a global context, the query to replay is a simple match_all\r\n query and not the original query.\r\n\r\nCloses #22321\r\nCloses #27928",
"number": 27942,
"review_comments": [],
"title": "Fix global aggregation that requires breadth first and scores"
} | {
"commits": [
{
"message": "Fix global aggregation that requires breadth first and scores\n\nThis change fixes the deferring collector when it is executed in a global context\nwith a sub collector thats requires to access scores (e.g. top_hits aggregation).\nThe deferring collector replays the best buckets for each document and re-executes the original query\nif scores are needed. When executed in a global context, the query to replay is a simple match_all\n query and not the original query.\n\nCloses #22321\nCloses #27928"
},
{
"message": "add tests"
}
],
"files": [
{
"diff": "@@ -21,6 +21,8 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.packed.PackedInts;\n@@ -59,16 +61,22 @@ private static class Entry {\n final List<Entry> entries = new ArrayList<>();\n BucketCollector collector;\n final SearchContext searchContext;\n+ final boolean isGlobal;\n LeafReaderContext context;\n PackedLongValues.Builder docDeltas;\n PackedLongValues.Builder buckets;\n long maxBucket = -1;\n boolean finished = false;\n LongHash selectedBuckets;\n \n- /** Sole constructor. */\n- public BestBucketsDeferringCollector(SearchContext context) {\n+ /**\n+ * Sole constructor.\n+ * @param context The search context\n+ * @param isGlobal Whether this collector visits all documents (global context)\n+ */\n+ public BestBucketsDeferringCollector(SearchContext context, boolean isGlobal) {\n this.searchContext = context;\n+ this.isGlobal = isGlobal;\n }\n \n @Override\n@@ -144,11 +152,11 @@ public void prepareSelectedBuckets(long... selectedBuckets) throws IOException {\n }\n this.selectedBuckets = hash;\n \n- boolean needsScores = collector.needsScores();\n+ boolean needsScores = needsScores();\n Weight weight = null;\n if (needsScores) {\n- weight = searchContext.searcher()\n- .createNormalizedWeight(searchContext.query(), true);\n+ Query query = isGlobal ? new MatchAllDocsQuery() : searchContext.query();\n+ weight = searchContext.searcher().createNormalizedWeight(query, true);\n }\n for (Entry entry : entries) {\n final LeafBucketCollector leafCollector = collector.getLeafCollector(entry.context);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.BucketCollector;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.internal.SearchContext;\n \n@@ -61,10 +62,20 @@ protected void doPreCollection() throws IOException {\n collectableSubAggregators = BucketCollector.wrap(collectors);\n }\n \n+ public static boolean descendsFromGlobalAggregator(Aggregator parent) {\n+ while (parent != null) {\n+ if (parent.getClass() == GlobalAggregator.class) {\n+ return true;\n+ }\n+ parent = parent.parent();\n+ }\n+ return false;\n+ }\n+\n public DeferringBucketCollector getDeferringCollector() {\n // Default impl is a collector that selects the best buckets\n // but an alternative defer policy may be based on best docs.\n- return new BestBucketsDeferringCollector(context());\n+ return new BestBucketsDeferringCollector(context(), descendsFromGlobalAggregator(parent()));\n }\n \n /**\n@@ -74,7 +85,7 @@ public DeferringBucketCollector getDeferringCollector() {\n * recording of all doc/bucketIds from the first pass and then the sub class\n * should call {@link #runDeferredCollections(long...)} for the selected set\n * of buckets that survive the pruning.\n- * \n+ *\n * @param aggregator\n * the child aggregator\n * @return true if the aggregator should be deferred until a first pass at",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferableBucketAggregator.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,8 @@\n import org.apache.lucene.index.RandomIndexWriter;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n@@ -41,6 +43,8 @@\n import java.util.HashSet;\n import java.util.Set;\n \n+import static org.mockito.Mockito.when;\n+\n public class BestBucketsDeferringCollectorTests extends AggregatorTestCase {\n \n public void testReplay() throws Exception {\n@@ -59,17 +63,38 @@ public void testReplay() throws Exception {\n IndexSearcher indexSearcher = new IndexSearcher(indexReader);\n \n TermQuery termQuery = new TermQuery(new Term(\"field\", String.valueOf(randomInt(maxNumValues))));\n+ Query rewrittenQuery = indexSearcher.rewrite(termQuery);\n TopDocs topDocs = indexSearcher.search(termQuery, numDocs);\n \n SearchContext searchContext = createSearchContext(indexSearcher, createIndexSettings());\n- BestBucketsDeferringCollector collector = new BestBucketsDeferringCollector(searchContext);\n+ when(searchContext.query()).thenReturn(rewrittenQuery);\n+ BestBucketsDeferringCollector collector = new BestBucketsDeferringCollector(searchContext, false) {\n+ @Override\n+ public boolean needsScores() {\n+ return true;\n+ }\n+ };\n Set<Integer> deferredCollectedDocIds = new HashSet<>();\n collector.setDeferredCollector(Collections.singleton(bla(deferredCollectedDocIds)));\n collector.preCollection();\n indexSearcher.search(termQuery, collector);\n collector.postCollection();\n collector.replay(0);\n \n+ assertEquals(topDocs.scoreDocs.length, deferredCollectedDocIds.size());\n+ for (ScoreDoc scoreDoc : topDocs.scoreDocs) {\n+ assertTrue(\"expected docid [\" + scoreDoc.doc + \"] is missing\", deferredCollectedDocIds.contains(scoreDoc.doc));\n+ }\n+\n+ topDocs = indexSearcher.search(new MatchAllDocsQuery(), numDocs);\n+ collector = new BestBucketsDeferringCollector(searchContext, true);\n+ deferredCollectedDocIds = new HashSet<>();\n+ collector.setDeferredCollector(Collections.singleton(bla(deferredCollectedDocIds)));\n+ collector.preCollection();\n+ indexSearcher.search(new MatchAllDocsQuery(), collector);\n+ collector.postCollection();\n+ collector.replay(0);\n+\n assertEquals(topDocs.scoreDocs.length, deferredCollectedDocIds.size());\n for (ScoreDoc scoreDoc : topDocs.scoreDocs) {\n assertTrue(\"expected docid [\" + scoreDoc.doc + \"] is missing\", deferredCollectedDocIds.contains(scoreDoc.doc));",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollectorTests.java",
"status": "modified"
},
{
"diff": "@@ -46,14 +46,21 @@\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilders;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.InternalGlobal;\n+import org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits;\n+import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n \n import java.io.IOException;\n@@ -67,6 +74,8 @@\n import java.util.function.BiFunction;\n import java.util.function.Function;\n \n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n \n public class TermsAggregatorTests extends AggregatorTestCase {\n@@ -933,6 +942,63 @@ public void testMixLongAndDouble() throws Exception {\n }\n }\n \n+ public void testGlobalAggregationWithScore() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ Document document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"a\")));\n+ indexWriter.addDocument(document);\n+ document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"c\")));\n+ indexWriter.addDocument(document);\n+ document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"e\")));\n+ indexWriter.addDocument(document);\n+ try (IndexReader indexReader = maybeWrapReaderEs(indexWriter.getReader())) {\n+ IndexSearcher indexSearcher = newIndexSearcher(indexReader);\n+ String executionHint = randomFrom(TermsAggregatorFactory.ExecutionMode.values()).toString();\n+ Aggregator.SubAggCollectionMode collectionMode = randomFrom(Aggregator.SubAggCollectionMode.values());\n+ GlobalAggregationBuilder globalBuilder = new GlobalAggregationBuilder(\"global\")\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"terms\", ValueType.STRING)\n+ .executionHint(executionHint)\n+ .collectMode(collectionMode)\n+ .field(\"keyword\")\n+ .order(BucketOrder.key(true))\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"sub_terms\", ValueType.STRING)\n+ .executionHint(executionHint)\n+ .collectMode(collectionMode)\n+ .field(\"keyword\").order(BucketOrder.key(true))\n+ .subAggregation(\n+ new TopHitsAggregationBuilder(\"top_hits\")\n+ .storedField(\"_none_\")\n+ )\n+ )\n+ );\n+\n+ MappedFieldType fieldType = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType.setName(\"keyword\");\n+ fieldType.setHasDocValues(true);\n+\n+ 
InternalGlobal result = searchAndReduce(indexSearcher, new MatchAllDocsQuery(), globalBuilder, fieldType);\n+ InternalMultiBucketAggregation<?, ?> terms = result.getAggregations().get(\"terms\");\n+ assertThat(terms.getBuckets().size(), equalTo(3));\n+ for (MultiBucketsAggregation.Bucket bucket : terms.getBuckets()) {\n+ InternalMultiBucketAggregation<?, ?> subTerms = bucket.getAggregations().get(\"sub_terms\");\n+ assertThat(subTerms.getBuckets().size(), equalTo(1));\n+ MultiBucketsAggregation.Bucket subBucket = subTerms.getBuckets().get(0);\n+ InternalTopHits topHits = subBucket.getAggregations().get(\"top_hits\");\n+ assertThat(topHits.getHits().getHits().length, equalTo(1));\n+ for (SearchHit hit : topHits.getHits()) {\n+ assertThat(hit.getScore(), greaterThan(0f));\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n private IndexReader createIndexWithLongs() throws IOException {\n Directory directory = newDirectory();\n RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory);",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version**: 5.1.1\r\n\r\n**Plugins installed**: -\r\n\r\n**JVM version**: jre1.8.0_111\r\n\r\n**OS version**: Windows 7, Ubuntu 16\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nindex_out_of_bounds_exception in nested aggregates (global -> terms -> top_hits)\r\n\r\n**Steps to reproduce**:\r\n 1. Test data:\r\n\r\n\r\n```\r\nPOST _bulk\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206357\"}}\r\n{\"id\":21206357,\"name\":\"AEG PW 5570 FA Inox, 5 in 1 напольные весы\",\"content\":\"AEG PW 5570 FA Inox, 5 in 1 напольные весы,Напольные весы,AEG\",\"brand_id\":24566662,\"brand_name\":\"AEG\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206358\"}}\r\n{\"id\":21206358,\"name\":\"AEG PW 5571 FA Glas, 6 in 1 напольные весы\",\"content\":\"AEG PW 5571 FA Glas, 6 in 1 напольные весы,Напольные весы,AEG\",\"brand_id\":24566662,\"brand_name\":\"AEG\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206499\"}}\r\n{\"id\":21206499,\"name\":\"Clatronic PW 3368, Glas напольные весы\",\"content\":\"Clatronic PW 3368, Glas напольные весы,Напольные весы,Clatronic\",\"brand_id\":26303064,\"brand_name\":\"Clatronic\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206500\"}}\r\n{\"id\":21206500,\"name\":\"Clatronic PW 3369, Black Glas напольные весы\",\"content\":\"Clatronic PW 3369, Black Glas напольные весы,Напольные весы,Clatronic\",\"brand_id\":26303064,\"brand_name\":\"Clatronic\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206501\"}}\r\n{\"id\":21206501,\"name\":\"Clatronic PW 3370 напольные весы\",\"content\":\"Clatronic PW 3370 напольные весы,Напольные весы,Clatronic\",\"brand_id\":26303064,\"brand_name\":\"Clatronic\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21891718\"}}\r\n{\"id\":21891718,\"name\":\"Vitek VT-1983(BK) весы напольные\",\"content\":\"Vitek VT-1983(BK) весы напольные,Напольные весы,Vitek,dbntr витэк витек мшеул\",\"brand_id\":26303458,\"brand_name\":\"Vitek\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"22452029\"}}\r\n{\"id\":22452029,\"name\":\"Vitek VT-1986, Green весы напольные\",\"content\":\"Vitek VT-1986, Green весы напольные,Напольные весы,Vitek,dbntr витэк витек мшеул\",\"brand_id\":26303458,\"brand_name\":\"Vitek\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"22453024\"}}\r\n{\"id\":22453024,\"name\":\"Tefal PP1121 Classic Fashion Love напольные весы\",\"content\":\"Tefal PP1121 Classic Fashion Love напольные весы,Напольные весы,Tefal,Тефаль, Тефал\",\"brand_id\":18819636,\"brand_name\":\"Tefal\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"33677014\"}}\r\n{\"id\":33677014,\"name\":\"Вытяжка классическая ELIKOR Бельведер 60П-650-П3Г бежевый/дуб неокр\",\"content\":\"Вытяжка классическая ELIKOR Бельведер 60П-650-П3Г бежевый/дуб неокр,Вытяжка,Каминная,Пристенная,Elikor,ELIKOR\",\"brand_id\":24595342,\"brand_name\":\"Elikor\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037591\"}}\r\n{\"id\":35037591,\"name\":\"Lacroix для iPhone 5/5S Paseo Hard Black\",\"content\":\"Lacroix для iPhone 5/5S Paseo Hard Black,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037593\"}}\r\n{\"id\":35037593,\"name\":\"Lacroix для iPhone 5/5S Paseo Hard Gold\",\"content\":\"Lacroix для iPhone 5/5S Paseo Hard 
Gold,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037595\"}}\r\n{\"id\":35037595,\"name\":\"Lacroix для iPhone 5/5S Suiting Folio Black\",\"content\":\"Lacroix для iPhone 5/5S Suiting Folio Black,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037599\"}}\r\n{\"id\":35037599,\"name\":\"Lacroix для iPhone 6/6S Butterfly Hard Pink\",\"content\":\"Lacroix для iPhone 6/6S Butterfly Hard Pink,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037601\"}}\r\n{\"id\":35037601,\"name\":\"Lacroix для iPhone 6/6S Paseo Hard Gold\",\"content\":\"Lacroix для iPhone 6/6S Paseo Hard Gold,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037619\"}}\r\n{\"id\":35037619,\"name\":\"Lacroix для iPhone 6+/6S+ Butterfly Hard Black\",\"content\":\"Lacroix для iPhone 6+/6S+ Butterfly Hard Black,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037621\"}}\r\n{\"id\":35037621,\"name\":\"Lacroix для iPhone 6+/6S+Butterfly Hard White\",\"content\":\"Lacroix для iPhone 6+/6S+Butterfly Hard White,Чехол для сотового телефона,Чехол,ChristianLacroix\",\"brand_id\":35037587,\"brand_name\":\"ChristianLacroix\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037629\"}}\r\n{\"id\":35037629,\"name\":\"Kenzo для iPhone 5/5S Big K Folio Kaki\",\"content\":\"Kenzo для iPhone 5/5S Big K Folio Kaki,Чехол для сотового телефона,Чехол,Kenzo,кензо кинзо cenzo лутящ rtypj\",\"brand_id\":18571406,\"brand_name\":\"Kenzo\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037635\"}}\r\n{\"id\":35037635,\"name\":\"Kenzo для iPhone 5/5S Chick Flip Blue\",\"content\":\"Kenzo для iPhone 5/5S Chick Flip Blue,Чехол для сотового телефона,Чехол,Kenzo,кензо кинзо cenzo лутящ rtypj\",\"brand_id\":18571406,\"brand_name\":\"Kenzo\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037649\"}}\r\n{\"id\":35037649,\"name\":\"Kenzo для iPhone 5/5S Leo Pack (folio+covers)\",\"content\":\"Kenzo для iPhone 5/5S Leo Pack (folio+covers),Чехол для сотового телефона,Чехол,Kenzo,кензо кинзо cenzo лутящ rtypj\",\"brand_id\":18571406,\"brand_name\":\"Kenzo\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"35037661\"}}\r\n{\"id\":35037661,\"name\":\"Kenzo для iPhone 5/5S/5SE Tiger Hard Violine\",\"content\":\"Kenzo для iPhone 5/5S/5SE Tiger Hard Violine,Чехол для сотового телефона,Чехол,Kenzo,кензо кинзо cenzo лутящ rtypj\",\"brand_id\":18571406,\"brand_name\":\"Kenzo\"}\r\n```\r\n\r\n 2. 
Search request:\r\n\r\n\r\n```\r\nPOST /test/test/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"multi_match\": {\r\n \"query\": \"apple iphone\",\r\n \"fields\": [\r\n \"name\",\r\n \"content\"\r\n ]\r\n }\r\n }\r\n ],\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"brand_id\": {\r\n \"value\": 26303000\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"global\": {\r\n \"global\": {},\r\n \"aggs\": {\r\n \"brand\": {\r\n \"terms\": {\r\n \"field\": \"brand_id\"\r\n },\r\n \"aggs\": {\r\n \"name\": {\r\n \"top_hits\": {\r\n \"size\": 1\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n 3. Error:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"index_out_of_bounds_exception\",\r\n \"reason\": null\r\n },\r\n {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test\",\r\n \"node\": \"LgkZnevWR1CcVVbV9ybVPA\",\r\n \"reason\": {\r\n \"type\": \"index_out_of_bounds_exception\",\r\n \"reason\": null\r\n }\r\n },\r\n {\r\n \"shard\": 3,\r\n \"index\": \"test\",\r\n \"node\": \"LgkZnevWR1CcVVbV9ybVPA\",\r\n \"reason\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ],\r\n \"caused_by\": {\r\n \"type\": \"index_out_of_bounds_exception\",\r\n \"reason\": null\r\n }\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nnull_pointer_exception in nested aggregates (global -> terms -> top_hits)\r\n\r\n\r\n**Steps to reproduce**:\r\n 1. Test data:\r\n\r\n\r\n```\r\nPOST _bulk\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206358\"}}\r\n{\"id\":21206358,\"name\":\"AEG PW 5571 FA Glas, 6 in 1 напольные весы\",\"content\":\"AEG PW 5571 FA Glas, 6 in 1 напольные весы,Напольные весы,AEG\",\"brand_id\":24566662,\"brand_name\":\"AEG\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21206500\"}}\r\n{\"id\":21206500,\"name\":\"Clatronic PW 3369, Black Glas напольные весы\",\"content\":\"Clatronic PW 3369, Black Glas напольные весы,Напольные весы,Clatronic\",\"brand_id\":26303064,\"brand_name\":\"Clatronic\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"21891718\"}}\r\n{\"id\":21891718,\"name\":\"Vitek VT-1983(BK) весы напольные\",\"content\":\"Vitek VT-1983(BK) весы напольные,Напольные весы,Vitek,dbntr витэк витек мшеул\",\"brand_id\":26303458,\"brand_name\":\"Vitek\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"22453024\"}}\r\n{\"id\":22453024,\"name\":\"Tefal PP1121 Classic Fashion Love напольные весы\",\"content\":\"Tefal PP1121 Classic Fashion Love напольные весы,Напольные весы,Tefal,Тефаль, Тефал\",\"brand_id\":18819636,\"brand_name\":\"Tefal\"}\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"33677014\"}}\r\n{\"id\":33677014,\"name\":\"Вытяжка классическая ELIKOR Бельведер 60П-650-П3Г бежевый/дуб неокр\",\"content\":\"Вытяжка классическая ELIKOR Бельведер 60П-650-П3Г бежевый/дуб неокр,Вытяжка,Каминная,Пристенная,Elikor,ELIKOR\",\"brand_id\":24595342,\"brand_name\":\"Elikor\"}\r\n```\r\n\r\n 2. Search request:\r\n\r\n(see above)\r\n\r\n 3. 
Error:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test\",\r\n \"node\": \"LgkZnevWR1CcVVbV9ybVPA\",\r\n \"reason\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ],\r\n \"caused_by\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n },\r\n \"status\": 500\r\n}\r\n```",
"comments": [
{
"body": "@martijnvg please could you take a look",
"created_at": "2016-12-23T13:44:32Z"
},
{
"body": "@clintongormley This looks to be caused by the fact breadth first collection mode is used. Using the same query with depth first collection mode succeeds:\r\n\r\n```\r\ncurl -XPOST \"http://localhost:9200/test/test/_search\" -d'\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"multi_match\": {\r\n \"query\": \"apple iphone\",\r\n \"fields\": [\r\n \"name\",\r\n \"content\"\r\n ]\r\n }\r\n }\r\n ],\r\n \"filter\": [\r\n {\r\n \"term\": {\r\n \"brand_id\": {\r\n \"value\": 26303000\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"global\": {\r\n \"global\": {},\r\n \"aggs\": {\r\n \"brand\": {\r\n \"terms\": {\r\n \"field\": \"brand_id\",\r\n \"collect_mode\" : \"depth_first\"\r\n },\r\n \"aggs\": {\r\n \"name\": {\r\n \"top_hits\": {\r\n \"size\": 1\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\nLooking at `BestBucketsDeferringCollector.java`, it doesn't take into account that it may be used in a global aggregation when scores are needed. For example that the main query has matches for all segments and that the saved docids are the same to the matching docid with the main query are not true. I'll try to think about a proper fix.",
"created_at": "2016-12-27T13:48:56Z"
},
{
"body": "+1 \r\n\r\nElasticsearch version: 5.1.1\r\n\r\nOS version: Ubuntu 14, x86_64\r\n\r\nWe are also facing the same issue with on a parent-child, terms+top_hit aggregation. On the below question, we get a null_pointer_exception, unless we add the above suggested workaround of `\"collect_mode\" : \"depth_first\"`\r\n\r\n```\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"filter\": {\r\n \"ids\": {\r\n \"type\": \"parenttype\",\r\n \"values\": [\r\n \"AVr69LpqdvPuInEFv3Bg33\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"aggs\": {\r\n \"test\": {\r\n \"aggregations\": {\r\n \"stats\": {\r\n \"aggregations\": {\r\n \"usersuser.id\": {\r\n \"aggregations\": {\r\n \"fetchSources\": {\r\n \"top_hits\": {\r\n \"_source\": {\r\n \"excludes\": [],\r\n \"includes\": [\r\n \"name\",\r\n ]\r\n },\r\n \"size\": 1\r\n }\r\n }\r\n },\r\n \"terms\": {\r\n \"field\": \"uniqueid\",\r\n \"size\": 500,\r\n \"collect_mode\": \"depth_first\" ==> Only addition of this fixes the problem\r\n }\r\n }\r\n },\r\n \"filter\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"range\": {\r\n \"timestamp\": {\r\n \"from\": \"now-1d\",\r\n \"include_lower\": true,\r\n \"include_upper\": true,\r\n \"to\": null\r\n }\r\n }\r\n },\r\n {\r\n \"term\": {\r\n \"pid\": \"pqdvPuI\"\r\n }\r\n },\r\n {\r\n \"parent_id\": {\r\n \"id\": \"AVr69LpqdvPuInEFv3Bg33\",\r\n \"type\": \"childtype\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n },\r\n \"children\": {\r\n \"type\": \"childtype\"\r\n }\r\n }\r\n }\r\n}\r\n```",
"created_at": "2017-03-23T16:56:12Z"
}
],
"number": 22321,
"title": "index_out_of_bounds_exception and null_pointer_exception in nested aggregates (global -> terms -> top_hits)"
} | {
"body": "This change fixes the deferring collector when it is executed in a global context\r\nwith a sub collector thats requires to access scores (e.g. top_hits aggregation).\r\nThe deferring collector replays the best buckets for each document and re-executes the original query\r\nif scores are needed. When executed in a global context, the query to replay is a simple match_all\r\n query and not the original query.\r\n\r\nCloses #22321\r\nCloses #27928",
"number": 27942,
"review_comments": [],
"title": "Fix global aggregation that requires breadth first and scores"
} | {
"commits": [
{
"message": "Fix global aggregation that requires breadth first and scores\n\nThis change fixes the deferring collector when it is executed in a global context\nwith a sub collector thats requires to access scores (e.g. top_hits aggregation).\nThe deferring collector replays the best buckets for each document and re-executes the original query\nif scores are needed. When executed in a global context, the query to replay is a simple match_all\n query and not the original query.\n\nCloses #22321\nCloses #27928"
},
{
"message": "add tests"
}
],
"files": [
{
"diff": "@@ -21,6 +21,8 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.packed.PackedInts;\n@@ -59,16 +61,22 @@ private static class Entry {\n final List<Entry> entries = new ArrayList<>();\n BucketCollector collector;\n final SearchContext searchContext;\n+ final boolean isGlobal;\n LeafReaderContext context;\n PackedLongValues.Builder docDeltas;\n PackedLongValues.Builder buckets;\n long maxBucket = -1;\n boolean finished = false;\n LongHash selectedBuckets;\n \n- /** Sole constructor. */\n- public BestBucketsDeferringCollector(SearchContext context) {\n+ /**\n+ * Sole constructor.\n+ * @param context The search context\n+ * @param isGlobal Whether this collector visits all documents (global context)\n+ */\n+ public BestBucketsDeferringCollector(SearchContext context, boolean isGlobal) {\n this.searchContext = context;\n+ this.isGlobal = isGlobal;\n }\n \n @Override\n@@ -144,11 +152,11 @@ public void prepareSelectedBuckets(long... selectedBuckets) throws IOException {\n }\n this.selectedBuckets = hash;\n \n- boolean needsScores = collector.needsScores();\n+ boolean needsScores = needsScores();\n Weight weight = null;\n if (needsScores) {\n- weight = searchContext.searcher()\n- .createNormalizedWeight(searchContext.query(), true);\n+ Query query = isGlobal ? new MatchAllDocsQuery() : searchContext.query();\n+ weight = searchContext.searcher().createNormalizedWeight(query, true);\n }\n for (Entry entry : entries) {\n final LeafBucketCollector leafCollector = collector.getLeafCollector(entry.context);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollector.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n import org.elasticsearch.search.aggregations.BucketCollector;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.internal.SearchContext;\n \n@@ -61,10 +62,20 @@ protected void doPreCollection() throws IOException {\n collectableSubAggregators = BucketCollector.wrap(collectors);\n }\n \n+ public static boolean descendsFromGlobalAggregator(Aggregator parent) {\n+ while (parent != null) {\n+ if (parent.getClass() == GlobalAggregator.class) {\n+ return true;\n+ }\n+ parent = parent.parent();\n+ }\n+ return false;\n+ }\n+\n public DeferringBucketCollector getDeferringCollector() {\n // Default impl is a collector that selects the best buckets\n // but an alternative defer policy may be based on best docs.\n- return new BestBucketsDeferringCollector(context());\n+ return new BestBucketsDeferringCollector(context(), descendsFromGlobalAggregator(parent()));\n }\n \n /**\n@@ -74,7 +85,7 @@ public DeferringBucketCollector getDeferringCollector() {\n * recording of all doc/bucketIds from the first pass and then the sub class\n * should call {@link #runDeferredCollections(long...)} for the selected set\n * of buckets that survive the pruning.\n- * \n+ *\n * @param aggregator\n * the child aggregator\n * @return true if the aggregator should be deferred until a first pass at",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/DeferableBucketAggregator.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,8 @@\n import org.apache.lucene.index.RandomIndexWriter;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n@@ -41,6 +43,8 @@\n import java.util.HashSet;\n import java.util.Set;\n \n+import static org.mockito.Mockito.when;\n+\n public class BestBucketsDeferringCollectorTests extends AggregatorTestCase {\n \n public void testReplay() throws Exception {\n@@ -59,17 +63,38 @@ public void testReplay() throws Exception {\n IndexSearcher indexSearcher = new IndexSearcher(indexReader);\n \n TermQuery termQuery = new TermQuery(new Term(\"field\", String.valueOf(randomInt(maxNumValues))));\n+ Query rewrittenQuery = indexSearcher.rewrite(termQuery);\n TopDocs topDocs = indexSearcher.search(termQuery, numDocs);\n \n SearchContext searchContext = createSearchContext(indexSearcher, createIndexSettings());\n- BestBucketsDeferringCollector collector = new BestBucketsDeferringCollector(searchContext);\n+ when(searchContext.query()).thenReturn(rewrittenQuery);\n+ BestBucketsDeferringCollector collector = new BestBucketsDeferringCollector(searchContext, false) {\n+ @Override\n+ public boolean needsScores() {\n+ return true;\n+ }\n+ };\n Set<Integer> deferredCollectedDocIds = new HashSet<>();\n collector.setDeferredCollector(Collections.singleton(bla(deferredCollectedDocIds)));\n collector.preCollection();\n indexSearcher.search(termQuery, collector);\n collector.postCollection();\n collector.replay(0);\n \n+ assertEquals(topDocs.scoreDocs.length, deferredCollectedDocIds.size());\n+ for (ScoreDoc scoreDoc : topDocs.scoreDocs) {\n+ assertTrue(\"expected docid [\" + scoreDoc.doc + \"] is missing\", deferredCollectedDocIds.contains(scoreDoc.doc));\n+ }\n+\n+ topDocs = indexSearcher.search(new MatchAllDocsQuery(), numDocs);\n+ collector = new BestBucketsDeferringCollector(searchContext, true);\n+ deferredCollectedDocIds = new HashSet<>();\n+ collector.setDeferredCollector(Collections.singleton(bla(deferredCollectedDocIds)));\n+ collector.preCollection();\n+ indexSearcher.search(new MatchAllDocsQuery(), collector);\n+ collector.postCollection();\n+ collector.replay(0);\n+\n assertEquals(topDocs.scoreDocs.length, deferredCollectedDocIds.size());\n for (ScoreDoc scoreDoc : topDocs.scoreDocs) {\n assertTrue(\"expected docid [\" + scoreDoc.doc + \"] is missing\", deferredCollectedDocIds.contains(scoreDoc.doc));",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/BestBucketsDeferringCollectorTests.java",
"status": "modified"
},
{
"diff": "@@ -46,14 +46,21 @@\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilders;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.InternalGlobal;\n+import org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits;\n+import org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n \n import java.io.IOException;\n@@ -67,6 +74,8 @@\n import java.util.function.BiFunction;\n import java.util.function.Function;\n \n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n \n public class TermsAggregatorTests extends AggregatorTestCase {\n@@ -933,6 +942,63 @@ public void testMixLongAndDouble() throws Exception {\n }\n }\n \n+ public void testGlobalAggregationWithScore() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ Document document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"a\")));\n+ indexWriter.addDocument(document);\n+ document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"c\")));\n+ indexWriter.addDocument(document);\n+ document = new Document();\n+ document.add(new SortedDocValuesField(\"keyword\", new BytesRef(\"e\")));\n+ indexWriter.addDocument(document);\n+ try (IndexReader indexReader = maybeWrapReaderEs(indexWriter.getReader())) {\n+ IndexSearcher indexSearcher = newIndexSearcher(indexReader);\n+ String executionHint = randomFrom(TermsAggregatorFactory.ExecutionMode.values()).toString();\n+ Aggregator.SubAggCollectionMode collectionMode = randomFrom(Aggregator.SubAggCollectionMode.values());\n+ GlobalAggregationBuilder globalBuilder = new GlobalAggregationBuilder(\"global\")\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"terms\", ValueType.STRING)\n+ .executionHint(executionHint)\n+ .collectMode(collectionMode)\n+ .field(\"keyword\")\n+ .order(BucketOrder.key(true))\n+ .subAggregation(\n+ new TermsAggregationBuilder(\"sub_terms\", ValueType.STRING)\n+ .executionHint(executionHint)\n+ .collectMode(collectionMode)\n+ .field(\"keyword\").order(BucketOrder.key(true))\n+ .subAggregation(\n+ new TopHitsAggregationBuilder(\"top_hits\")\n+ .storedField(\"_none_\")\n+ )\n+ )\n+ );\n+\n+ MappedFieldType fieldType = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType.setName(\"keyword\");\n+ fieldType.setHasDocValues(true);\n+\n+ 
InternalGlobal result = searchAndReduce(indexSearcher, new MatchAllDocsQuery(), globalBuilder, fieldType);\n+ InternalMultiBucketAggregation<?, ?> terms = result.getAggregations().get(\"terms\");\n+ assertThat(terms.getBuckets().size(), equalTo(3));\n+ for (MultiBucketsAggregation.Bucket bucket : terms.getBuckets()) {\n+ InternalMultiBucketAggregation<?, ?> subTerms = bucket.getAggregations().get(\"sub_terms\");\n+ assertThat(subTerms.getBuckets().size(), equalTo(1));\n+ MultiBucketsAggregation.Bucket subBucket = subTerms.getBuckets().get(0);\n+ InternalTopHits topHits = subBucket.getAggregations().get(\"top_hits\");\n+ assertThat(topHits.getHits().getHits().length, equalTo(1));\n+ for (SearchHit hit : topHits.getHits()) {\n+ assertThat(hit.getScore(), greaterThan(0f));\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n private IndexReader createIndexWithLongs() throws IOException {\n Directory directory = newDirectory();\n RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory);",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorTests.java",
"status": "modified"
}
]
} |
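The record above centres on how `BestBucketsDeferringCollector` replays deferred buckets: the first (breadth_first) pass records doc ids per bucket, and when a deferred sub-aggregation such as `top_hits` needs scores a `Weight` is rebuilt from a replay query. Under a `global` aggregation every document was collected, so replaying the original (possibly filtered) query yields doc ids that no longer line up with the recorded ones, which is what surfaced as the `index_out_of_bounds_exception`/`null_pointer_exception`. The sketch below is a minimal, self-contained illustration of the replay-query selection only; the class and method names loosely mirror the Elasticsearch ones and are not the real implementation.

```java
public class DeferredReplaySketch {

    /** A stand-in for a node in the aggregation tree. */
    static class Aggregator {
        final Aggregator parent;
        final boolean global; // true only for the "global" aggregation

        Aggregator(Aggregator parent, boolean global) {
            this.parent = parent;
            this.global = global;
        }
    }

    /** Walk up the tree: does this aggregator sit below a global aggregator? */
    static boolean descendsFromGlobal(Aggregator parent) {
        for (Aggregator a = parent; a != null; a = a.parent) {
            if (a.global) {
                return true;
            }
        }
        return false;
    }

    /**
     * Pick the query to re-execute when a deferred sub-aggregation needs scores.
     * Under a global aggregation every document was collected in the first pass,
     * so the replay must use match_all; replaying the original (filtered) query
     * would produce doc ids that do not match the recorded ones.
     */
    static String replayQuery(Aggregator parent, String originalQuery) {
        return descendsFromGlobal(parent) ? "match_all" : originalQuery;
    }

    public static void main(String[] args) {
        Aggregator global = new Aggregator(null, true);
        Aggregator termsUnderGlobal = new Aggregator(global, false);
        Aggregator plainTerms = new Aggregator(null, false);

        System.out.println(replayQuery(termsUnderGlobal, "brand_id:26303000")); // match_all
        System.out.println(replayQuery(plainTerms, "brand_id:26303000"));       // brand_id:26303000
    }
}
```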
{
"body": "`ESTestCase#randomValueOtherThan` ignores nulls. This is it now:\r\n\r\n```\r\n /**\r\n * helper to get a random value in a certain range that's different from the input\r\n */\r\n public static <T> T randomValueOtherThan(T input, Supplier<T> randomSupplier) {\r\n if (input != null) {\r\n return randomValueOtherThanMany(input::equals, randomSupplier);\r\n }\r\n\r\n return(randomSupplier.get());\r\n }\r\n```\r\n\r\nSo if you pass in `null` it'll just run the supplier one time. I think this is confusing - if you pass in null it shouldn't return null. I think something like:\r\n\r\n```\r\n /**\r\n * helper to get a random value in a certain range that's different from the input\r\n */\r\n public static <T> T randomValueOtherThan(T input, Supplier<T> randomSupplier) {\r\n return randomValueOtherThanMany(v -> Object.equals(input, v), randomSupplier);\r\n }\r\n```\r\n\r\nWould be better. Discuss?",
"comments": [
{
"body": "+1, would you be willing to contribute a PR @nik9000? 😇",
"created_at": "2017-12-14T15:49:46Z"
},
{
"body": "> +1, would you be willing to contribute a PR @nik9000? 😇\r\n\r\nYes! I'll totally do it, but I wanted to get some buy in before I spent the time running all the tests. I have a sneaking suspicion that we rely on this behavior and I'll be spending an afternoon hunting down weird test failures after I do it.",
"created_at": "2017-12-14T15:53:40Z"
},
{
"body": "I think it's a bug and we can move forward with the fix.",
"created_at": "2017-12-14T18:47:25Z"
},
{
"body": "We discussed this in Fix-it-Friday and agreed to move forward with this change.",
"created_at": "2017-12-15T14:06:02Z"
},
{
"body": "I'll have a look next week.",
"created_at": "2017-12-16T14:07:18Z"
}
],
"number": 27821,
"title": "Test: randomValueOtherThan confusing with nulls"
} | {
"body": "When the first parameter of `ESTestCase#randomValueOtherThan` is `null`\r\nthen run the supplier until it returns non-`null`. Previously,\r\n`randomValueOtherThan` just ran the supplier one time which was\r\nconfusing.\r\n\r\nUnexpectedly, it looks like not tests rely on the original `null`\r\nhandling.\r\n\r\nCloses #27821",
"number": 27901,
"review_comments": [
{
"body": "Why pass a null message here, I think `assertNotNull(randomValueOtherThan(null, usuallyNull));` is fine here, the expectation will fail saying we got `null`?",
"created_at": "2017-12-19T15:08:20Z"
},
{
"body": "My mistake. I originally had `assertThat` and wasn't super careful reworking it. Strange that a null message is allowed though....",
"created_at": "2017-12-19T15:10:14Z"
}
],
"title": "Test: Change randomValueOtherThan(null, supplier)"
} | {
"commits": [
{
"message": "Test: Change randomValueOtherThan(null, supplier)\n\nWhen the first parameter of `ESTestCase#randomValueOtherThan` is `null`\nthen run the supplier until it returns non-`null`. Previously,\n`randomValueOtherThan` just ran the supplier one time which was\nconfusing.\n\nUnexpectedly, it looks like not tests rely on the original `null`\nhandling."
},
{
"message": "Remove null message"
}
],
"files": [
{
"diff": "@@ -684,11 +684,7 @@ public static <T> void maybeSet(Consumer<T> consumer, T value) {\n * helper to get a random value in a certain range that's different from the input\n */\n public static <T> T randomValueOtherThan(T input, Supplier<T> randomSupplier) {\n- if (input != null) {\n- return randomValueOtherThanMany(input::equals, randomSupplier);\n- }\n-\n- return(randomSupplier.get());\n+ return randomValueOtherThanMany(v -> Objects.equals(input, v), randomSupplier);\n }\n \n /**",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.function.Supplier;\n \n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.hasSize;\n@@ -166,4 +167,18 @@ public void testRandomUniqueTotallyUnique() {\n public void testRandomUniqueNormalUsageAlwayMoreThanOne() {\n assertThat(randomUnique(() -> randomAlphaOfLengthBetween(1, 20), 10), hasSize(greaterThan(0)));\n }\n+\n+ public void testRandomValueOtherThan() {\n+ // \"normal\" way of calling where the value is not null\n+ int bad = randomInt();\n+ assertNotEquals(bad, (int) randomValueOtherThan(bad, ESTestCase::randomInt));\n+\n+ /*\n+ * \"funny\" way of calling where the value is null. This once\n+ * had a unique behavior but at this point `null` acts just\n+ * like any other value.\n+ */\n+ Supplier<Object> usuallyNull = () -> usually() ? null : randomInt();\n+ assertNotNull(randomValueOtherThan(null, usuallyNull));\n+ }\n }",
"filename": "test/framework/src/test/java/org/elasticsearch/test/test/ESTestCaseTests.java",
"status": "modified"
}
]
} |
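For reference, here is a self-contained sketch of the null-tolerant helper the issue and PR above converge on: keep drawing from the supplier until the result differs from the input, with `Objects.equals` making the comparison null-safe so a `null` input can never come back as `null`. The surrounding class, the `java.util.Random` usage and the `usuallyNull` supplier are illustrative stand-ins, and the explicit loop replaces the delegation to `randomValueOtherThanMany` used in the real change.

```java
import java.util.Objects;
import java.util.Random;
import java.util.function.Supplier;

public class RandomValueOtherThanSketch {
    private static final Random RANDOM = new Random();

    /** Keep drawing from the supplier until the value differs from {@code input}. */
    static <T> T randomValueOtherThan(T input, Supplier<T> randomSupplier) {
        T value;
        do {
            value = randomSupplier.get();
        } while (Objects.equals(input, value)); // null-safe: a null input forces a non-null result
        return value;
    }

    public static void main(String[] args) {
        // A supplier that usually returns null, like the one in the added test.
        Supplier<Integer> usuallyNull = () -> RANDOM.nextInt(10) < 9 ? null : RANDOM.nextInt();

        // With the old one-shot behaviour this could return null; now it cannot.
        System.out.println(randomValueOtherThan(null, usuallyNull));

        // The usual case still avoids the given value.
        System.out.println(randomValueOtherThan(42, () -> RANDOM.nextInt(100)));
    }
}
```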
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.1.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** : 1.8.0_152\r\n\r\n**OS version** : Mac OS X 10.13.2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAnonymous filters aggregation returns a named result using integers as keys when a bool query is specified in the filters. The result should contained an array of buckets when using anonymous filters.\r\n\r\n**Steps to reproduce**:\r\nThis query\r\n```json\r\n{\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"test\": {\r\n \"filters\": {\r\n \"filters\": [ { \"match_all\": {} } ]\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\ncorrectly returns an anonymous array of buckets:\r\n```json\r\n{\r\n \"took\": 10,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 207,\r\n \"successful\": 207,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1942637,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"aggregations\": {\r\n \"test\": {\r\n \"buckets\": [\r\n {\r\n \"doc_count\": 1942637\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nWhile this query:\r\n```\r\n{\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"test\": {\r\n \"filters\": {\r\n \"filters\": [ { \"bool\": { } } ]\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nreturns a named result using array indexes as keys:\r\n\r\n```json\r\n{\r\n \"took\": 7,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 207,\r\n \"successful\": 207,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1942637,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"aggregations\": {\r\n \"test\": {\r\n \"buckets\": {\r\n \"0\": {\r\n \"doc_count\": 1942637\r\n }\r\n }\r\n }\r\n }\r\n}\r\n``` \r\n\r\nIn general, if a bool query is specified in the filters, I get a map having \"0\", \"1\", ... , \"n\" as keys instead of an array. \r\nThe content of the bool query doesn't seem to matter, as longs as there's at least one bool query in the array of filters, the result is wrong. I used an empty bool as an example for the sake of shortness\r\n\r\n",
"comments": [
{
"body": "> The content of the bool query doesn't seem to matter, as longs as there's at least one bool query in the array of filters, the result is wrong. I used an empty bool as an example for the sake of shortness\r\n\r\n@TrustNoOne thanks for opening this issue, I can reproduce the reported behaviour for empty bool query, however if I add a clause (e.g. a \"must\" clause) the response is as expected (without numeric key). Just adding this as a datapoint, the behaviour for an empty bool query is still stange and needs investigation.",
"created_at": "2017-12-15T20:31:15Z"
},
{
"body": "@cbuescher You are actually right. I thought this happened all the time, but it's only in some cases.\r\nIn my case I had a range query in the filter context, this reproduces the issue as well:\r\n\r\nedit: it's not the filter context, with must it's the same. It's apparently the range query (at least in my case). A term query doesn't do it. \r\n\r\n```json\r\n{\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"test\": {\r\n \"filters\": {\r\n \"filters\": [\r\n {\r\n \"bool\": {\r\n \"filter\": {\r\n \"range\": {\r\n \"somefield\": {\r\n \"gte\": 170071352479318000\r\n }\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n```",
"created_at": "2017-12-15T20:37:35Z"
},
{
"body": "So this is what I think happens here: the empty bool query gets rewritten internally to a MatchAllQueryBuilder, and as a follow up FiltersAggregationBuilder#doRewrite creates a new FiltersAggregationBuilder which doesn't correctly copy the original \"keyed\" field. I think that's the root cause of this and I will start working on a fix.",
"created_at": "2017-12-19T13:48:37Z"
},
{
"body": "@TrustNoOne thanks for reporting this, good catch. And thanks for the detailed description, it was easy to reproduce that way. Fix will be in next 6.1 release and onwards.",
"created_at": "2017-12-19T19:44:46Z"
},
{
"body": "sure thing! thanks for fixing it that fast",
"created_at": "2017-12-19T19:47:00Z"
}
],
"number": 27841,
"title": "Anonymous filters aggregation returns a named result when filters contain a bool query"
} | {
"body": "Currently FiltersAggregationBuilder#doRewrite creates a new FiltersAggregationBuilder which doesn't correctly copy the original \"keyed\" field if a non-keyed filter gets rewritten.\r\nThis can cause rendering bugs of the output aggregations like the one reported in #27841.\r\n\r\nCloses #27841 ",
"number": 27900,
"review_comments": [],
"title": "Fix preserving FiltersAggregationBuilder#keyed field on rewrite"
} | {
"commits": [
{
"message": "Add rest test"
},
{
"message": "Fix preserving FiltersAggregationBuilder#keyed field on rewrite"
}
],
"files": [
{
"diff": "@@ -65,15 +65,19 @@ public class FiltersAggregationBuilder extends AbstractAggregationBuilder<Filter\n * the KeyedFilters to use with this aggregation.\n */\n public FiltersAggregationBuilder(String name, KeyedFilter... filters) {\n- this(name, Arrays.asList(filters));\n+ this(name, Arrays.asList(filters), true);\n }\n \n- private FiltersAggregationBuilder(String name, List<KeyedFilter> filters) {\n+ private FiltersAggregationBuilder(String name, List<KeyedFilter> filters, boolean keyed) {\n super(name);\n- // internally we want to have a fixed order of filters, regardless of the order of the filters in the request\n this.filters = new ArrayList<>(filters);\n- Collections.sort(this.filters, (KeyedFilter kf1, KeyedFilter kf2) -> kf1.key().compareTo(kf2.key()));\n- this.keyed = true;\n+ if (keyed) {\n+ // internally we want to have a fixed order of filters, regardless of the order of the filters in the request\n+ Collections.sort(this.filters, (KeyedFilter kf1, KeyedFilter kf2) -> kf1.key().compareTo(kf2.key()));\n+ this.keyed = true;\n+ } else {\n+ this.keyed = false;\n+ }\n }\n \n /**\n@@ -152,6 +156,13 @@ public List<KeyedFilter> filters() {\n return Collections.unmodifiableList(this.filters);\n }\n \n+ /**\n+ * @return true if this builders filters have a key\n+ */\n+ public boolean isKeyed() {\n+ return this.keyed;\n+ }\n+\n /**\n * Set the key to use for the bucket for documents not matching any\n * filter.\n@@ -184,7 +195,7 @@ protected AggregationBuilder doRewrite(QueryRewriteContext queryShardContext) th\n }\n }\n if (changed) {\n- return new FiltersAggregationBuilder(getName(), rewrittenFilters);\n+ return new FiltersAggregationBuilder(getName(), rewrittenFilters, this.keyed);\n } else {\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -23,15 +23,21 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.query.BoolQueryBuilder;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.MatchNoneQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.QueryRewriteContext;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.BaseAggregationTestCase;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n \n import java.io.IOException;\n \n+import static org.hamcrest.Matchers.instanceOf;\n+\n public class FiltersTests extends BaseAggregationTestCase<FiltersAggregationBuilder> {\n \n @Override\n@@ -113,4 +119,38 @@ public void testOtherBucket() throws IOException {\n // unless the other bucket is explicitly disabled\n assertFalse(filters.otherBucket());\n }\n+\n+ public void testRewrite() throws IOException {\n+ // test non-keyed filter that doesn't rewrite\n+ AggregationBuilder original = new FiltersAggregationBuilder(\"my-agg\", new MatchAllQueryBuilder());\n+ AggregationBuilder rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertSame(original, rewritten);\n+\n+ // test non-keyed filter that does rewrite\n+ original = new FiltersAggregationBuilder(\"my-agg\", new BoolQueryBuilder());\n+ rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertNotSame(original, rewritten);\n+ assertThat(rewritten, instanceOf(FiltersAggregationBuilder.class));\n+ assertEquals(\"my-agg\", ((FiltersAggregationBuilder) rewritten).getName());\n+ assertEquals(1, ((FiltersAggregationBuilder) rewritten).filters().size());\n+ assertEquals(\"0\", ((FiltersAggregationBuilder) rewritten).filters().get(0).key());\n+ assertThat(((FiltersAggregationBuilder) rewritten).filters().get(0).filter(), instanceOf(MatchAllQueryBuilder.class));\n+ assertFalse(((FiltersAggregationBuilder) rewritten).isKeyed());\n+\n+ // test keyed filter that doesn't rewrite\n+ original = new FiltersAggregationBuilder(\"my-agg\", new KeyedFilter(\"my-filter\", new MatchAllQueryBuilder()));\n+ rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertSame(original, rewritten);\n+\n+ // test non-keyed filter that does rewrite\n+ original = new FiltersAggregationBuilder(\"my-agg\", new KeyedFilter(\"my-filter\", new BoolQueryBuilder()));\n+ rewritten = original.rewrite(new QueryRewriteContext(xContentRegistry(), null, null, () -> 0L));\n+ assertNotSame(original, rewritten);\n+ assertThat(rewritten, instanceOf(FiltersAggregationBuilder.class));\n+ assertEquals(\"my-agg\", ((FiltersAggregationBuilder) rewritten).getName());\n+ assertEquals(1, ((FiltersAggregationBuilder) rewritten).filters().size());\n+ assertEquals(\"my-filter\", ((FiltersAggregationBuilder) rewritten).filters().get(0).key());\n+ assertThat(((FiltersAggregationBuilder) rewritten).filters().get(0).filter(), instanceOf(MatchAllQueryBuilder.class));\n+ assertTrue(((FiltersAggregationBuilder) rewritten).isKeyed());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersTests.java",
"status": "modified"
},
{
"diff": "@@ -246,6 +246,22 @@ setup:\n - match: { aggregations.the_filter.buckets.second_filter.doc_count: 1 }\n - match: { aggregations.the_filter.meta.foo: \"bar\" }\n \n+---\n+\"Single anonymous bool query\":\n+\n+ - do:\n+ search:\n+ body:\n+ aggs:\n+ the_filter:\n+ filters:\n+ filters:\n+ - bool: {}\n+\n+ - match: { hits.total: 4 }\n+ - length: { hits.hits: 4 }\n+ - match: { aggregations.the_filter.buckets.0.doc_count: 4 }\n+\n ---\n \"Bad params\":\n ",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/220_filters_bucket.yml",
"status": "modified"
}
]
} |
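The fix above is an instance of a common rewrite pitfall: when `doRewrite` builds a fresh builder it has to carry over every piece of configuration, not just the rewritten filters. The sketch below models only the `keyed` flag and a toy rewrite rule (an empty bool becomes match_all); `FiltersSketch` and the string-based rule are hypothetical simplifications, not the Elasticsearch classes.

```java
import java.util.ArrayList;
import java.util.List;

public class KeyedRewriteSketch {

    static class FiltersSketch {
        final List<String> filters;
        final boolean keyed;

        FiltersSketch(List<String> filters, boolean keyed) {
            this.filters = new ArrayList<>(filters);
            this.keyed = keyed;
        }

        /** Rewrite each filter; if anything changed, rebuild the builder. */
        FiltersSketch rewrite() {
            boolean changed = false;
            List<String> rewritten = new ArrayList<>();
            for (String f : filters) {
                // Illustrative rewrite rule: an empty bool query becomes match_all.
                String r = f.equals("bool: {}") ? "match_all" : f;
                changed |= !r.equals(f);
                rewritten.add(r);
            }
            // The bug was constructing the copy without the keyed flag, which
            // silently reset it to true; the fix carries it over explicitly.
            return changed ? new FiltersSketch(rewritten, this.keyed) : this;
        }
    }

    public static void main(String[] args) {
        FiltersSketch anonymous = new FiltersSketch(List.of("bool: {}"), false);
        FiltersSketch rewritten = anonymous.rewrite();
        // Anonymous (non-keyed) filters stay non-keyed after the rewrite,
        // so the response keeps rendering buckets as an array.
        System.out.println(rewritten.keyed); // false
    }
}
```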
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 6.0.0, Build: 8f0685b/2017-11-10T18:41:22.859Z, JVM: 1.8.0_77\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_77\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_77-b03)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nDarwin MacBook-1265.local 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nProviding an IP as the value of the `missing` parameter, when executing a `terms` aggregation on a field of type `ip` results in the following 503 error:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"\",\r\n \"phase\": \"fetch\",\r\n \"grouped\": true,\r\n \"failed_shards\": [],\r\n \"caused_by\": {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"encoded bytes are of incorrect length\",\r\n \"caused_by\": {\r\n \"type\": \"unknown_host_exception\",\r\n \"reason\": \"addr is of illegal length\"\r\n }\r\n }\r\n },\r\n \"status\": 503\r\n}\r\n```\r\nLooking at the code that raises the exception (stack trace pasted below) it seems that the code expects a byte array of length 4 or 16. As a workaround, the IP can be encoded as a character array instead (for example `\"\\u0000\\u0000\\u0000\\u0000\"`).\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create an index with a field mapped as type `ip`:\r\n```\r\nPUT iptest\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"ip\": {\r\n \"type\": \"ip\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n 2. Index a document that is missing the `ip` field:\r\n```\r\nPUT iptest/doc/1\r\n{\r\n \"foo\": \"bar\"\r\n}\r\n```\r\n 3. Execute a `terms` agg, providing an IP address as the value of the `missing` parameter. This results in a 503 error:\r\n```\r\nGET iptest/_search\r\n{\r\n \"size\": 0,\r\n \"aggregations\": {\r\n \"ip\": {\r\n \"terms\": {\r\n \"field\": \"ip\",\r\n \"missing\": \"0.0.0.0\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n4. Pass a 4-character string as the value of `missing`. 
This is executed successfully:\r\n```\r\nGET iptest/_search\r\n{\r\n \"size\": 0,\r\n \"aggregations\": {\r\n \"ip\": {\r\n \"terms\": {\r\n \"field\": \"ip\",\r\n \"missing\": \"\\u0000\\u0000\\u0000\\u0000\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-12-13T09:12:41,550][WARN ][r.suppressed ] path: /iptest/_search, params: {index=iptest}\r\norg.elasticsearch.action.search.SearchPhaseExecutionException: \r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:272) [elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:623) [elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.0.0.jar:6.0.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]\r\nCaused by: java.lang.IllegalArgumentException: encoded bytes are of incorrect length\r\n\tat org.apache.lucene.document.InetAddressPoint.decode(InetAddressPoint.java:187) ~[lucene-misc-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]\r\n\tat org.elasticsearch.search.DocValueFormat$4.format(DocValueFormat.java:298) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.getKeyAsString(StringTerms.java:75) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.getKey(StringTerms.java:64) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.doReduce(InternalTerms.java:274) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:120) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:77) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:523) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:500) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:417) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:736) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\t... 3 more\r\nCaused by: java.net.UnknownHostException: addr is of illegal length\r\n\tat java.net.InetAddress.getByAddress(InetAddress.java:1042) ~[?:1.8.0_77]\r\n\tat java.net.InetAddress.getByAddress(InetAddress.java:1439) ~[?:1.8.0_77]\r\n\tat org.apache.lucene.document.InetAddressPoint.decode(InetAddressPoint.java:184) ~[lucene-misc-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]\r\n\tat org.elasticsearch.search.DocValueFormat$4.format(DocValueFormat.java:298) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.getKeyAsString(StringTerms.java:75) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.getKey(StringTerms.java:64) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.doReduce(InternalTerms.java:274) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:120) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:77) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:523) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:500) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:417) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:736) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n\t... 3 more\r\n\r\n```",
"comments": [],
"number": 27788,
"title": "The missing parameter does not accept an IP when executing a terms aggregation on a field of type ip"
} | {
"body": "of fields that use binary representation\r\n\r\nCloses #27788",
"number": 27855,
"review_comments": [],
"title": "Using DocValueFormat::parseBytesRef for parsing missing value parameter"
} | {
"commits": [
{
"message": "Using DocValueFormat::parseBytesRef for parsing missing value parameter\nof fields that use binary representation"
}
],
"files": [
{
"diff": "@@ -56,7 +56,7 @@ public interface DocValueFormat extends NamedWriteable {\n * such as the {@code long}, {@code double} or {@code date} fields. */\n String format(double value);\n \n- /** Format a double value. This is used by terms aggregations to format\n+ /** Format a binary value. This is used by terms aggregations to format\n * keys for fields that use binary doc value representations such as the\n * {@code keyword} and {@code ip} fields. */\n String format(BytesRef value);",
"filename": "core/src/main/java/org/elasticsearch/search/DocValueFormat.java",
"status": "modified"
},
{
"diff": "@@ -252,7 +252,7 @@ public VS toValuesSource(QueryShardContext context) throws IOException {\n }\n \n if (vs instanceof ValuesSource.Bytes) {\n- final BytesRef missing = new BytesRef(missing().toString());\n+ final BytesRef missing = format.parseBytesRef(missing().toString());\n if (vs instanceof ValuesSource.Bytes.WithOrdinals) {\n return (VS) MissingValues.replaceMissing((ValuesSource.Bytes.WithOrdinals) vs, missing);\n } else {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfig.java",
"status": "modified"
},
{
"diff": "@@ -113,4 +113,29 @@ public void testScriptValues() throws Exception {\n assertEquals(\"2001:db8::2:1\", bucket2.getKey());\n assertEquals(\"2001:db8::2:1\", bucket2.getKeyAsString());\n }\n+\n+ public void testMissingValue() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"type\", \"ip\", \"type=ip\"));\n+ indexRandom(true,\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"ip\", \"192.168.1.7\"),\n+ client().prepareIndex(\"index\", \"type\", \"2\").setSource(\"ip\", \"192.168.1.7\"),\n+ client().prepareIndex(\"index\", \"type\", \"3\").setSource(\"ip\", \"127.0.0.1\"),\n+ client().prepareIndex(\"index\", \"type\", \"4\").setSource(\"not_ip\", \"something\"));\n+ SearchResponse response = client().prepareSearch(\"index\").addAggregation(AggregationBuilders\n+ .terms(\"my_terms\").field(\"ip\").missing(\"127.0.0.1\").executionHint(randomExecutionHint())).get();\n+\n+ assertSearchResponse(response);\n+ Terms terms = response.getAggregations().get(\"my_terms\");\n+ assertEquals(2, terms.getBuckets().size());\n+\n+ Terms.Bucket bucket1 = terms.getBuckets().get(0);\n+ assertEquals(2, bucket1.getDocCount());\n+ assertEquals(\"127.0.0.1\", bucket1.getKey());\n+ assertEquals(\"127.0.0.1\", bucket1.getKeyAsString());\n+\n+ Terms.Bucket bucket2 = terms.getBuckets().get(1);\n+ assertEquals(2, bucket2.getDocCount());\n+ assertEquals(\"192.168.1.7\", bucket2.getKey());\n+ assertEquals(\"192.168.1.7\", bucket2.getKeyAsString());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/IpTermsIT.java",
"status": "modified"
}
]
} |
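The underlying mismatch in the record above is one of encodings: an `ip` field stores its doc values as a fixed-length binary address, so a `missing` value supplied as the string "0.0.0.0" has to be parsed through the field's `DocValueFormat` rather than wrapped as raw UTF-8 bytes. Below is a small JDK-only illustration of the length mismatch; it assumes nothing about Lucene beyond the documented 4-byte/16-byte address encodings, and the note about InetAddressPoint normalising to 16 bytes comes from its documentation rather than being demonstrated here.

```java
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class MissingIpSketch {
    public static void main(String[] args) throws Exception {
        String missing = "0.0.0.0";

        // What the old code effectively did: treat the string as raw bytes.
        // "0.0.0.0" is 7 UTF-8 bytes, which is neither a 4-byte nor a 16-byte
        // address encoding -> "encoded bytes are of incorrect length".
        byte[] rawUtf8 = missing.getBytes(StandardCharsets.UTF_8);
        System.out.println(rawUtf8.length); // 7

        // What the fix does conceptually: parse the value so it has the same
        // binary shape as indexed addresses (Lucene's InetAddressPoint then
        // stores a normalised 16-byte form).
        byte[] parsed = InetAddress.getByName(missing).getAddress();
        System.out.println(parsed.length); // 4 for an IPv4 address
    }
}
```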
{
"body": "Looks like these artifacts are missing on maven central\r\n```\r\nCould not resolve all files for configuration ':compile'.\r\n> Could not find org.elasticsearch:elasticsearch-cli:6.1.0.\r\n Searched in the following locations:\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/elasticsearch-cli-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/elasticsearch-cli.jar\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n Required by:\r\n project : > org.elasticsearch.client:elasticsearch-rest-high-level-client:6.1.0 > org.elasticsearch:elasticsearch:6.1.0\r\n> Could not find org.elasticsearch.plugin:mapper-extras:6.1.0.\r\n Searched in the following locations:\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n https://repo1.maven.org/maven2/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n https://repo1.maven.org/maven2/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/mapper-extras-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/mapper-extras.jar\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n Required by:\r\n project : > org.elasticsearch.client:transport:6.1.0 > org.elasticsearch.plugin:percolator-client:6.1.0\r\n```\r\n",
"comments": [
{
"body": "confirmed the bug. elasticsearch-cli is not on mavencentral or jcenter",
"created_at": "2017-12-14T09:15:39Z"
},
{
"body": "I'm having the same problem as well:\r\n```\r\n[WARNING] The POM for org.elasticsearch:elasticsearch-cli:jar:6.1.0 is missing, no dependency information available\r\n[WARNING] The POM for org.elasticsearch.plugin:mapper-extras:jar:6.1.0 is missing, no dependency information available\r\n```",
"created_at": "2017-12-14T10:49:27Z"
},
{
"body": "I forgot to mention that as a temporary workaround, you can exclude the dependency as follow:\r\n\r\n<dependency>\r\n\t<groupId>org.elasticsearch.client</groupId>\r\n\t<artifactId>elasticsearch-rest-high-level-client</artifactId>\r\n\t<version>${elasticsearch.version}</version>\r\n\t<exclusions>\r\n\t\t<exclusion>\r\n\t\t\t<groupId>org.elasticsearch</groupId>\r\n\t\t\t<artifactId>elasticsearch-cli</artifactId>\r\n\t\t</exclusion>\r\n\t</exclusions>\r\n</dependency>\r\n\r\n\r\n",
"created_at": "2017-12-14T14:10:51Z"
},
{
"body": "Thanks, but that workaround is not enough for tests as our embedded elastic search dep, needs the following and fails\r\n```\r\njava.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/elasticsearch/cli/UserException\r\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\r\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:192)\r\n```",
"created_at": "2017-12-14T15:53:28Z"
},
{
"body": "**Same problem:**\r\n\r\nFailure to find org.elasticsearch:elasticsearch-cli:jar:6.1.0 in https://repo.maven.apache.org/maven2 was cached in the local repository\r\n\r\n**Here's my solution:**\r\n\r\nwget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.0.tar.gz\r\ntar xfz elasticsearch-6.1.0.tar.gz\r\ncp /elasticsearch-6.1.0/lib/elasticsearch-cli-6.1.0.jar /somepath/elasticsearch-cli-6.1.0.jar\r\n\r\nvi /path/to/pom.xml\r\n\r\n> .......\r\n> .......\r\n> <properties>\r\n> <elasticsearch.version>6.1.0</elasticsearch.version>\r\n> <lucene.version>7.1.0</lucene.version>\r\n> </properties>\r\n> \r\n> <dependencies>\r\n> <dependency>\r\n> <groupId>org.elasticsearch</groupId>\r\n> <artifactId>elasticsearch</artifactId>\r\n> <version>${elasticsearch.version}</version>\r\n> <scope>provided</scope>\r\n> </dependency>\r\n> \r\n> \t\t<!-- INSERT THIS ---------------------->\r\n> \r\n> \t\t<dependency>\r\n> \t\t <groupId>org.elasticsearch</groupId>\r\n> \t\t <artifactId>elasticsearch-cli</artifactId>\r\n> \t\t <version>6.1.0</version>\r\n> \t\t <scope>system</scope>\r\n> \t\t <systemPath>/somepath/elasticsearch-cli-6.1.0.jar</systemPath>\r\n> \t\t</dependency>\r\n> \r\n> \t\t<!-- END ------------------------------>\r\n> \r\n> <dependency>\r\n> <groupId>org.apache.logging.log4j</groupId>\r\n> <artifactId>log4j-api</artifactId>\r\n> <version>2.9.1</version>\r\n> <scope>provided</scope>\r\n> </dependency>\r\n> .....\r\n> .....",
"created_at": "2017-12-14T16:37:28Z"
},
{
"body": "We are working on this. We expect to have this fixed today. Sorry for the troubles.",
"created_at": "2017-12-14T17:42:59Z"
},
{
"body": "Also org.elasticsearch:cli:6.1.0 for org.elasticsearch.client:x-pack-transport:6.1.0 > org.elasticsearch.plugin:x-pack-api:6.1.0",
"created_at": "2017-12-14T17:52:55Z"
},
{
"body": "An update on this issue: the missing elasticsearch-cli and mapper-extras are now available in Maven Central. We are working on some additional issues that we have identified during this process including the issue with the x-pack-transport POM referring to `org.elasticsearch:cli` (it should be `org.elasticsearch:elasticsearch-cli`). We will update again when we have this issue resolved.",
"created_at": "2017-12-14T22:22:57Z"
},
{
"body": "We believe that all issues are resolved here. If you encounter additional issues with the Maven artifacts for 6.1.0, please open a new issue. Thanks for the initial report, and sorry again for the troubles.",
"created_at": "2017-12-15T03:51:47Z"
}
],
"number": 27806,
"title": "Elasticsearch 6.1.0 POM is referencing missing deps"
} | {
"body": "This commit moves the range field mapper back to core so that we can remove the compile-time dependency of percolator on mapper-extras which compilcates dependency management for the percolator client JAR, and modules should not be intertwined like this anyway.\r\n\r\nRelates #26549, relates #27806",
"number": 27854,
"review_comments": [
{
"body": "This was the main point of this PR.",
"created_at": "2017-12-17T17:56:26Z"
}
],
"title": "Move range field mapper back to core"
} | {
"commits": [
{
"message": "Move range field mapper back to core\n\nThis commit moves the range field mapper back to core so that we can\nremove the compile-time dependency of percolator on mapper-extras which\ncompilcates dependency management for the percolator client JAR, and\nmodules should not be intertwined like this anyway."
}
],
"files": [
{
"diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.mapper.ObjectMapper;\n import org.elasticsearch.index.mapper.ParentFieldMapper;\n+import org.elasticsearch.index.mapper.RangeFieldMapper;\n import org.elasticsearch.index.mapper.RoutingFieldMapper;\n import org.elasticsearch.index.mapper.SeqNoFieldMapper;\n import org.elasticsearch.index.mapper.SourceFieldMapper;\n@@ -98,6 +99,9 @@ private Map<String, Mapper.TypeParser> getMappers(List<MapperPlugin> mapperPlugi\n for (NumberFieldMapper.NumberType type : NumberFieldMapper.NumberType.values()) {\n mappers.put(type.typeName(), new NumberFieldMapper.TypeParser(type));\n }\n+ for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) {\n+ mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type));\n+ }\n mappers.put(BooleanFieldMapper.CONTENT_TYPE, new BooleanFieldMapper.TypeParser());\n mappers.put(BinaryFieldMapper.CONTENT_TYPE, new BinaryFieldMapper.TypeParser());\n mappers.put(DateFieldMapper.CONTENT_TYPE, new DateFieldMapper.TypeParser());",
"filename": "core/src/main/java/org/elasticsearch/indices/IndicesModule.java",
"status": "modified"
},
{
"diff": "@@ -33,9 +33,6 @@ public Map<String, Mapper.TypeParser> getMappers() {\n Map<String, Mapper.TypeParser> mappers = new LinkedHashMap<>();\n mappers.put(ScaledFloatFieldMapper.CONTENT_TYPE, new ScaledFloatFieldMapper.TypeParser());\n mappers.put(TokenCountFieldMapper.CONTENT_TYPE, new TokenCountFieldMapper.TypeParser());\n- for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) {\n- mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type));\n- }\n return Collections.unmodifiableMap(mappers);\n }\n ",
"filename": "modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/MapperExtrasPlugin.java",
"status": "modified"
},
{
"diff": "@@ -24,8 +24,6 @@ esplugin {\n }\n \n dependencies {\n- // for testing hasChild and hasParent rejections\n- compile project(path: ':modules:mapper-extras', configuration: 'runtime')\n testCompile project(path: ':modules:parent-join', configuration: 'runtime')\n }\n ",
"filename": "modules/percolator/build.gradle",
"status": "modified"
}
]
} |
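For context on the diff above: after this PR the range mappers are registered by core's `IndicesModule` with the same loop that `MapperExtrasPlugin` previously used. A small Java sketch of that registration pattern is shown below; the loop body comes straight from the diff, while the wrapper class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

import org.elasticsearch.index.mapper.Mapper;
import org.elasticsearch.index.mapper.RangeFieldMapper;

final class RangeMapperRegistrationSketch {
    /** Registers one TypeParser per range type, keyed by its type name, as IndicesModule now does. */
    static Map<String, Mapper.TypeParser> rangeMappers() {
        Map<String, Mapper.TypeParser> mappers = new LinkedHashMap<>();
        for (RangeFieldMapper.RangeType type : RangeFieldMapper.RangeType.values()) {
            mappers.put(type.typeName(), new RangeFieldMapper.TypeParser(type));
        }
        return mappers;
    }
}
```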
{
"body": "Looks like these artifacts are missing on maven central\r\n```\r\nCould not resolve all files for configuration ':compile'.\r\n> Could not find org.elasticsearch:elasticsearch-cli:6.1.0.\r\n Searched in the following locations:\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/elasticsearch-cli-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/elasticsearch-cli.jar\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.pom\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/elasticsearch-cli/6.1.0/elasticsearch-cli-6.1.0.jar\r\n Required by:\r\n project : > org.elasticsearch.client:elasticsearch-rest-high-level-client:6.1.0 > org.elasticsearch:elasticsearch:6.1.0\r\n> Could not find org.elasticsearch.plugin:mapper-extras:6.1.0.\r\n Searched in the following locations:\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n file:/Users/robinanil/.m2/repository/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n https://repo1.maven.org/maven2/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n https://repo1.maven.org/maven2/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/mapper-extras-6.1.0.jar\r\n file:/Users/robinanil/work/core/lib/mapper-extras.jar\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n http://dl.bintray.com/content/fullcontact/fullcontact-oss/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.pom\r\n http://repo.jenkins-ci.org/releases/org/elasticsearch/plugin/mapper-extras/6.1.0/mapper-extras-6.1.0.jar\r\n Required by:\r\n project : > org.elasticsearch.client:transport:6.1.0 > org.elasticsearch.plugin:percolator-client:6.1.0\r\n```\r\n",
"comments": [
{
"body": "confirmed the bug. elasticsearch-cli is not on mavencentral or jcenter",
"created_at": "2017-12-14T09:15:39Z"
},
{
"body": "I'm having the same problem as well:\r\n```\r\n[WARNING] The POM for org.elasticsearch:elasticsearch-cli:jar:6.1.0 is missing, no dependency information available\r\n[WARNING] The POM for org.elasticsearch.plugin:mapper-extras:jar:6.1.0 is missing, no dependency information available\r\n```",
"created_at": "2017-12-14T10:49:27Z"
},
{
"body": "I forgot to mention that as a temporary workaround, you can exclude the dependency as follow:\r\n\r\n<dependency>\r\n\t<groupId>org.elasticsearch.client</groupId>\r\n\t<artifactId>elasticsearch-rest-high-level-client</artifactId>\r\n\t<version>${elasticsearch.version}</version>\r\n\t<exclusions>\r\n\t\t<exclusion>\r\n\t\t\t<groupId>org.elasticsearch</groupId>\r\n\t\t\t<artifactId>elasticsearch-cli</artifactId>\r\n\t\t</exclusion>\r\n\t</exclusions>\r\n</dependency>\r\n\r\n\r\n",
"created_at": "2017-12-14T14:10:51Z"
},
{
"body": "Thanks, but that workaround is not enough for tests as our embedded elastic search dep, needs the following and fails\r\n```\r\njava.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/elasticsearch/cli/UserException\r\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\r\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:192)\r\n```",
"created_at": "2017-12-14T15:53:28Z"
},
{
"body": "**Same problem:**\r\n\r\nFailure to find org.elasticsearch:elasticsearch-cli:jar:6.1.0 in https://repo.maven.apache.org/maven2 was cached in the local repository\r\n\r\n**Here's my solution:**\r\n\r\nwget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.0.tar.gz\r\ntar xfz elasticsearch-6.1.0.tar.gz\r\ncp /elasticsearch-6.1.0/lib/elasticsearch-cli-6.1.0.jar /somepath/elasticsearch-cli-6.1.0.jar\r\n\r\nvi /path/to/pom.xml\r\n\r\n> .......\r\n> .......\r\n> <properties>\r\n> <elasticsearch.version>6.1.0</elasticsearch.version>\r\n> <lucene.version>7.1.0</lucene.version>\r\n> </properties>\r\n> \r\n> <dependencies>\r\n> <dependency>\r\n> <groupId>org.elasticsearch</groupId>\r\n> <artifactId>elasticsearch</artifactId>\r\n> <version>${elasticsearch.version}</version>\r\n> <scope>provided</scope>\r\n> </dependency>\r\n> \r\n> \t\t<!-- INSERT THIS ---------------------->\r\n> \r\n> \t\t<dependency>\r\n> \t\t <groupId>org.elasticsearch</groupId>\r\n> \t\t <artifactId>elasticsearch-cli</artifactId>\r\n> \t\t <version>6.1.0</version>\r\n> \t\t <scope>system</scope>\r\n> \t\t <systemPath>/somepath/elasticsearch-cli-6.1.0.jar</systemPath>\r\n> \t\t</dependency>\r\n> \r\n> \t\t<!-- END ------------------------------>\r\n> \r\n> <dependency>\r\n> <groupId>org.apache.logging.log4j</groupId>\r\n> <artifactId>log4j-api</artifactId>\r\n> <version>2.9.1</version>\r\n> <scope>provided</scope>\r\n> </dependency>\r\n> .....\r\n> .....",
"created_at": "2017-12-14T16:37:28Z"
},
{
"body": "We are working on this. We expect to have this fixed today. Sorry for the troubles.",
"created_at": "2017-12-14T17:42:59Z"
},
{
"body": "Also org.elasticsearch:cli:6.1.0 for org.elasticsearch.client:x-pack-transport:6.1.0 > org.elasticsearch.plugin:x-pack-api:6.1.0",
"created_at": "2017-12-14T17:52:55Z"
},
{
"body": "An update on this issue: the missing elasticsearch-cli and mapper-extras are now available in Maven Central. We are working on some additional issues that we have identified during this process including the issue with the x-pack-transport POM referring to `org.elasticsearch:cli` (it should be `org.elasticsearch:elasticsearch-cli`). We will update again when we have this issue resolved.",
"created_at": "2017-12-14T22:22:57Z"
},
{
"body": "We believe that all issues are resolved here. If you encounter additional issues with the Maven artifacts for 6.1.0, please open a new issue. Thanks for the initial report, and sorry again for the troubles.",
"created_at": "2017-12-15T03:51:47Z"
}
],
"number": 27806,
"title": "Elasticsearch 6.1.0 POM is referencing missing deps"
} | {
"body": "This commit addresses the publication of the elasticsearch-cli to Maven. For now for simplicity we publish this to Maven so that it is available as a transitive dependency for any artifacts that depend on the core elasticsearch artifact. It is possible that in the future we can simply exclude this dependency but for now this is the safest and simplest approach that can happen in a patch release.\r\n\r\nRelates #27114, relates #27806\r\n\r\n",
"number": 27853,
"review_comments": [],
"title": "Fix publication of elasticsearch-cli to Maven"
} | {
"commits": [
{
"message": "Fix publication of elasticsearch-cli to Maven\n\nThis commit addresses the publication of the elasticsearch-cli to\nMaven. For now for simplicity we publish this to Maven so that it is\navailable as a transitive dependency for any artifacts that depend on\nthe core elasticsearch artifact. It is possible that in the future we\ncan simply exclude this dependency but for now this is the safest and\nsimplest approach that can happen in a patch release."
}
],
"files": [
{
"diff": "@@ -20,6 +20,17 @@\n import org.elasticsearch.gradle.precommit.PrecommitTasks\n \n apply plugin: 'elasticsearch.build'\n+apply plugin: 'nebula.optional-base'\n+apply plugin: 'nebula.maven-base-publish'\n+apply plugin: 'nebula.maven-scm'\n+\n+publishing {\n+ publications {\n+ nebula {\n+ artifactId 'elasticsearch-cli'\n+ }\n+ }\n+}\n \n archivesBaseName = 'elasticsearch-cli'\n ",
"filename": "core/cli/build.gradle",
"status": "modified"
}
]
} |
{
"body": "\r\n**Elasticsearch version**: 6.1 and below\r\n\r\n**Steps to reproduce**:\r\n\r\nSearch with the example\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-collapse.html#_expand_collapse_results\r\nbut set \r\n`\r\n\"inner_hits\":{\r\n \"version\":true,\r\n ...\r\n}\r\n`\r\nExpected: \r\nIn the result \"hits\" below \"inner_hits\" is a field \"_version\".\r\nInstead the result is the same as for `\"version\":false,`.\r\n\r\n**Test-Case**:\r\n[InnerHitsCollapseIT.java.txt](https://github.com/elastic/elasticsearch/files/1559850/InnerHitsCollapseIT.java.txt)\r\n\r\n**Solution**:\r\n\r\nAdd one extra line 168 in /_core/java/org/elasticsearch/action/search/ExpandSearchPhase.java\r\n(Method org.elasticsearch.action.search.ExpandSearchPhase.buildExpandSearchSourceBuilder(InnerHitBuilder) ):\r\n `groupSource.version(options.isVersion());`\r\n\r\n\r\n\r\n\r\n",
"comments": [
{
"body": "`\"version\":true,` does not work for inner hits in case of field collapsing, but it works for other inner hit results like nested documents.",
"created_at": "2017-12-14T15:59:56Z"
}
],
"number": 27822,
"title": "Field Collapsing: Version is not visible for inner_hits"
} | {
"body": "Per @KarstenRauch 's code, I made some minor changes. \r\n\r\nCloses #27822 \r\n",
"number": 27833,
"review_comments": [
{
"body": "For a test like this we prefer the rest tests:\r\nhttps://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/test/search/110_field_collapsing.yml\r\nIt will start nodes that you can query like a client would do in the real world.",
"created_at": "2017-12-15T09:35:53Z"
},
{
"body": "Thanks for your quick response. I pushed c9b8ee488ddabf807932d2441ac8ff9536f58a6d ",
"created_at": "2017-12-15T10:38:50Z"
}
],
"title": "Add version support for inner hits in field collapsing (#27822)"
} | {
"commits": [
{
"message": "Add version support for inner hits in field collapsing (#27822)"
},
{
"message": "Add a new rest test for field collapsing"
}
],
"files": [
{
"diff": "@@ -165,6 +165,7 @@ private SearchSourceBuilder buildExpandSearchSourceBuilder(InnerHitBuilder optio\n }\n groupSource.explain(options.isExplain());\n groupSource.trackScores(options.isTrackScores());\n+ groupSource.version(options.isVersion());\n return groupSource;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.text.Text;\n@@ -248,6 +247,8 @@ public void run() throws IOException {\n \n public void testExpandRequestOptions() throws IOException {\n MockSearchPhaseContext mockSearchPhaseContext = new MockSearchPhaseContext(1);\n+ boolean version = randomBoolean();\n+\n mockSearchPhaseContext.searchTransport = new SearchTransportService(\n Settings.builder().put(\"search.remote.connect\", false).build(), null, null) {\n \n@@ -256,13 +257,14 @@ void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionL\n final QueryBuilder postFilter = QueryBuilders.existsQuery(\"foo\");\n assertTrue(request.requests().stream().allMatch((r) -> \"foo\".equals(r.preference())));\n assertTrue(request.requests().stream().allMatch((r) -> \"baz\".equals(r.routing())));\n+ assertTrue(request.requests().stream().allMatch((r) -> version == r.source().version()));\n assertTrue(request.requests().stream().allMatch((r) -> postFilter.equals(r.source().postFilter())));\n }\n };\n mockSearchPhaseContext.getRequest().source(new SearchSourceBuilder()\n .collapse(\n new CollapseBuilder(\"someField\")\n- .setInnerHits(new InnerHitBuilder().setName(\"foobarbaz\"))\n+ .setInnerHits(new InnerHitBuilder().setName(\"foobarbaz\").setVersion(version))\n )\n .postFilter(QueryBuilders.existsQuery(\"foo\")))\n .preference(\"foobar\")",
"filename": "core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -7,36 +7,48 @@ setup:\n index: test\n type: test\n id: 1\n+ version_type: external\n+ version: 11\n body: { numeric_group: 1, sort: 10 }\n - do:\n index:\n index: test\n type: test\n id: 2\n+ version_type: external\n+ version: 22\n body: { numeric_group: 1, sort: 6 }\n - do:\n index:\n index: test\n type: test\n id: 3\n+ version_type: external\n+ version: 33\n body: { numeric_group: 1, sort: 24 }\n - do:\n index:\n index: test\n type: test\n id: 4\n+ version_type: external\n+ version: 44\n body: { numeric_group: 25, sort: 10 }\n - do:\n index:\n index: test\n type: test\n id: 5\n+ version_type: external\n+ version: 55\n body: { numeric_group: 25, sort: 5 }\n - do:\n index:\n index: test\n type: test\n id: 6\n+ version_type: external\n+ version: 66\n body: { numeric_group: 3, sort: 36 }\n - do:\n indices.refresh:\n@@ -322,3 +334,56 @@ setup:\n - match: { hits.hits.2.inner_hits.sub_hits_desc.hits.total: 2 }\n - length: { hits.hits.2.inner_hits.sub_hits_desc.hits.hits: 1 }\n - match: { hits.hits.2.inner_hits.sub_hits_desc.hits.hits.0._id: \"4\" }\n+\n+---\n+\"field collapsing, inner_hits and version\":\n+\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: \"bug fixed in 7.0.0\"\n+\n+ - do:\n+ search:\n+ index: test\n+ type: test\n+ body:\n+ collapse: { field: numeric_group, inner_hits: { name: sub_hits, version: true, size: 2, sort: [{ sort: asc }] } }\n+ sort: [{ sort: desc }]\n+ version: true\n+\n+ - match: { hits.total: 6 }\n+ - length: { hits.hits: 3 }\n+ - match: { hits.hits.0._index: test }\n+ - match: { hits.hits.0._type: test }\n+ - match: { hits.hits.0.fields.numeric_group: [3] }\n+ - match: { hits.hits.0.sort: [36] }\n+ - match: { hits.hits.0._id: \"6\" }\n+ - match: { hits.hits.0._version: 66 }\n+ - match: { hits.hits.0.inner_hits.sub_hits.hits.total: 1 }\n+ - length: { hits.hits.0.inner_hits.sub_hits.hits.hits: 1 }\n+ - match: { hits.hits.0.inner_hits.sub_hits.hits.hits.0._id: \"6\" }\n+ - match: { hits.hits.0.inner_hits.sub_hits.hits.hits.0._version: 66 }\n+ - match: { hits.hits.1._index: test }\n+ - match: { hits.hits.1._type: test }\n+ - match: { hits.hits.1.fields.numeric_group: [1] }\n+ - match: { hits.hits.1.sort: [24] }\n+ - match: { hits.hits.1._id: \"3\" }\n+ - match: { hits.hits.1._version: 33 }\n+ - match: { hits.hits.1.inner_hits.sub_hits.hits.total: 3 }\n+ - length: { hits.hits.1.inner_hits.sub_hits.hits.hits: 2 }\n+ - match: { hits.hits.1.inner_hits.sub_hits.hits.hits.0._id: \"2\" }\n+ - match: { hits.hits.1.inner_hits.sub_hits.hits.hits.0._version: 22 }\n+ - match: { hits.hits.1.inner_hits.sub_hits.hits.hits.1._id: \"1\" }\n+ - match: { hits.hits.1.inner_hits.sub_hits.hits.hits.1._version: 11 }\n+ - match: { hits.hits.2._index: test }\n+ - match: { hits.hits.2._type: test }\n+ - match: { hits.hits.2.fields.numeric_group: [25] }\n+ - match: { hits.hits.2.sort: [10] }\n+ - match: { hits.hits.2._id: \"4\" }\n+ - match: { hits.hits.2._version: 44 }\n+ - match: { hits.hits.2.inner_hits.sub_hits.hits.total: 2 }\n+ - length: { hits.hits.2.inner_hits.sub_hits.hits.hits: 2 }\n+ - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.0._id: \"5\" }\n+ - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.0._version: 55 }\n+ - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.1._id: \"4\" }\n+ - match: { hits.hits.2.inner_hits.sub_hits.hits.hits.1._version: 44 }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/110_field_collapsing.yml",
"status": "modified"
}
]
} |
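A hedged Java sketch of the request shape that the fix above enables: field collapsing whose inner hits also report `_version`. The index and field names (`test`, `numeric_group`, `sub_hits`) come from the REST test in the diff; the `client` handle is an assumption. Before `groupSource.version(options.isVersion())` was added, the inner `sub_hits` silently dropped the version even when the top-level hits carried it.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.InnerHitBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.collapse.CollapseBuilder;

final class CollapseWithVersionSketch {
    /** Collapses on numeric_group and asks for _version on both the top-level hits and the inner hits. */
    static SearchResponse collapseWithVersions(Client client) {
        SearchSourceBuilder source = new SearchSourceBuilder()
            .version(true)
            .collapse(new CollapseBuilder("numeric_group")
                .setInnerHits(new InnerHitBuilder().setName("sub_hits").setVersion(true)));
        return client.prepareSearch("test").setSource(source).get();
    }
}
```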
{
"body": "In 5.6 we introduced a change to the rest client that told the apache client to use system properties for some settings (3e4bc027ebdba5dd1a0935a75b5218960f736f76). \r\n\r\n```\r\nHttpAsyncClientBuilder httpClientBuilder = HttpAsyncClientBuilder.create().setDefaultRequestConfig(requestConfigBuilder.build())\r\n //default settings for connection pooling may be too constraining\r\n .setMaxConnPerRoute(DEFAULT_MAX_CONN_PER_ROUTE).setMaxConnTotal(DEFAULT_MAX_CONN_TOTAL).useSystemProperties();\r\n```\r\n\r\nThis was introduced to force the client to use the system default `SSLContext`. Unfortunately this also leads the http client to use system properties for max connections, keep alive, and other configs. This means that our settings of `DEFAULT_MAX_CONN_PER_ROUTE` and `DEFAULT_MAX_CONN_TOTAL` are ignored. Additionally any settings that a user changes related to the client might be ignored (depending on the setting).\r\n\r\n`org.apache.http.impl.nio.client.HttpAsyncClientBuilder.class`\r\n```\r\n if (systemProperties) {\r\n String s = System.getProperty(\"http.keepAlive\", \"true\");\r\n if (\"true\".equalsIgnoreCase(s)) {\r\n s = System.getProperty(\"http.maxConnections\", \"5\");\r\n final int max = Integer.parseInt(s);\r\n poolingmgr.setDefaultMaxPerRoute(max);\r\n poolingmgr.setMaxTotal(2 * max);\r\n }\r\n } else {\r\n if (maxConnTotal > 0) {\r\n poolingmgr.setMaxTotal(maxConnTotal);\r\n }\r\n if (maxConnPerRoute > 0) {\r\n poolingmgr.setDefaultMaxPerRoute(maxConnPerRoute);\r\n }\r\n }\r\n```\r\n\r\nI think we need to configured the SSLContext specifically, opposed for forcing system properties for many settings.\r\n\r\n@jaymode ",
"comments": [
{
"body": "Thanks for the quick fix!",
"created_at": "2017-12-20T18:14:17Z"
}
],
"number": 27827,
"title": "Rest client settings related to connections and keep alives are broken"
} | {
"body": "This commit removes the usage of system properties for the HttpAsyncClient as this overrides some\r\ndefaults that we intentionally change. In order to set the default SSLContext to the system context\r\nwe set the SSLContext on the builder explicitly.\r\n\r\nCloses #27827",
"number": 27829,
"review_comments": [
{
"body": "I just want to point out that this is slightly different from what is happening now. When it uses the system properties, Apache is calling `SSLContexts.createSystemDefault()`. Which does:\r\n\r\n```\r\ntry {\r\n return SSLContext.getDefault();\r\n} catch (final NoSuchAlgorithmException ex) {\r\n return createDefault();\r\n}\r\n```\r\n\r\nAnd `createDefault()`:\r\n\r\n```\r\ntry {\r\n final SSLContext sslcontext = SSLContext.getInstance(SSLContextBuilder.TLS);\r\n sslcontext.init(null, null, null);\r\n return sslcontext;\r\n } catch (final NoSuchAlgorithmException ex) {\r\n throw new SSLInitializationException(ex.getMessage(), ex);\r\n } catch (final KeyManagementException ex) {\r\n throw new SSLInitializationException(ex.getMessage(), ex);\r\n }\r\n```\r\n\r\nYou know better than I if that change is okay. I just wanted to point it out.",
"created_at": "2017-12-15T01:10:13Z"
},
{
"body": "Thanks for looking into this. I was aware of it and the behavior change was intentional; I think the previous method was too lenient in that it silently ignored a bad parameter and swallowed the exception.",
"created_at": "2017-12-15T17:23:18Z"
}
],
"title": "Do not use system properties when building the HttpAsyncClient"
} | {
"commits": [
{
"message": "Do not use system properties when building the HttpAsyncClient\n\nThis commit removes the usage of system properties for the HttpAsyncClient as this overrides some\ndefaults that we intentionally change. In order to set the default SSLContext to the system context\nwe set the SSLContext on the builder explicitly.\n\nCloses #27827"
},
{
"message": "Merge branch 'master' into client_sys_prop_defaults"
}
],
"files": [
{
"diff": "@@ -28,7 +28,9 @@\n import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;\n import org.apache.http.nio.conn.SchemeIOSessionStrategy;\n \n+import javax.net.ssl.SSLContext;\n import java.security.AccessController;\n+import java.security.NoSuchAlgorithmException;\n import java.security.PrivilegedAction;\n import java.util.Objects;\n \n@@ -200,20 +202,25 @@ private CloseableHttpAsyncClient createHttpClient() {\n requestConfigBuilder = requestConfigCallback.customizeRequestConfig(requestConfigBuilder);\n }\n \n- HttpAsyncClientBuilder httpClientBuilder = HttpAsyncClientBuilder.create().setDefaultRequestConfig(requestConfigBuilder.build())\n+ try {\n+ HttpAsyncClientBuilder httpClientBuilder = HttpAsyncClientBuilder.create().setDefaultRequestConfig(requestConfigBuilder.build())\n //default settings for connection pooling may be too constraining\n- .setMaxConnPerRoute(DEFAULT_MAX_CONN_PER_ROUTE).setMaxConnTotal(DEFAULT_MAX_CONN_TOTAL).useSystemProperties();\n- if (httpClientConfigCallback != null) {\n- httpClientBuilder = httpClientConfigCallback.customizeHttpClient(httpClientBuilder);\n- }\n-\n- final HttpAsyncClientBuilder finalBuilder = httpClientBuilder;\n- return AccessController.doPrivileged(new PrivilegedAction<CloseableHttpAsyncClient>() {\n- @Override\n- public CloseableHttpAsyncClient run() {\n- return finalBuilder.build();\n+ .setMaxConnPerRoute(DEFAULT_MAX_CONN_PER_ROUTE).setMaxConnTotal(DEFAULT_MAX_CONN_TOTAL)\n+ .setSSLContext(SSLContext.getDefault());\n+ if (httpClientConfigCallback != null) {\n+ httpClientBuilder = httpClientConfigCallback.customizeHttpClient(httpClientBuilder);\n }\n- });\n+\n+ final HttpAsyncClientBuilder finalBuilder = httpClientBuilder;\n+ return AccessController.doPrivileged(new PrivilegedAction<CloseableHttpAsyncClient>() {\n+ @Override\n+ public CloseableHttpAsyncClient run() {\n+ return finalBuilder.build();\n+ }\n+ });\n+ } catch (NoSuchAlgorithmException e) {\n+ throw new IllegalStateException(\"could not create the default ssl context\", e);\n+ }\n }\n \n /**",
"filename": "client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java",
"status": "modified"
}
]
} |
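A short Java sketch of why the `useSystemProperties()` removal matters to users of the low-level REST client: connection-pool limits set through the builder callback were exactly the kind of setting being silently overridden. The host, port, and pool sizes below are illustrative assumptions, not values from the PR. With the fix these values take effect; previously, with `http.keepAlive` unset or true, the pool fell back to `http.maxConnections` (default 5), as shown in the Apache code quoted in the issue.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

final class RestClientPoolSettingsSketch {
    /** Builds a low-level client and overrides the connection pool limits via the http client callback. */
    static RestClient build() {
        return RestClient.builder(new HttpHost("localhost", 9200, "http"))
            .setHttpClientConfigCallback(httpClientBuilder ->
                httpClientBuilder.setMaxConnTotal(100).setMaxConnPerRoute(10))
            .build();
    }
}
```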
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\n```\r\nVersion: 5.5.3, Build: 9305a5e/2017-09-07T15:56:59.599Z, JVM: 1.8.0_151\r\n```\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): \r\n```\r\njava version \"1.8.0_151\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_151-b12)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): `Darwin Thiagos-MacBook-Pro.local 17.0.0 Darwin Kernel Version 17.0.0: Thu Aug 24 21:48:19 PDT 2017; root:xnu-4570.1.46~2/RELEASE_X86_64 x86_64`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nIf a non-data node, that contains dangling indices in it's data path, joins a cluster these dangling indices will be detected and auto-imported.\r\n\r\nIMO, a non-data node that contains index data in it's data path is probably accidental and unintended. In this case, those dangling indices should not be detected, better yet if the node does not even starts (maybe a bootstrap check that fails if a non-data node contains index data in it's data path).\r\n\r\n**Steps to reproduce**:\r\n\r\nThis can be done in a single machine:\r\n 1. Start `node-1` with `bin/elasticsearch -E path.data=/Users/thiago/data-1 -E node.name=node-1`\r\n 2. Start `node-2` with `bin/elasticsearch -E path.data=/Users/thiago/data-2 -E node.name=node-2`\r\n 3. Create an index `test` configured with `1S/0R` with `curl -XPUT localhost:9200/test -d '{ \"settings\": { \"index\": { \"number_of_shards\": 1, \"number_of_replicas\": 0 } } }' -H \"Content-Type: application/json\"`\r\n 4. Create a document `curl -XPOST localhost:9200/test -d '{ \"test\": 1 }' -H \"Content-Type: application/json\"`\r\n 5. Stop both nodes\r\n 6. Check which data directory, either `data-1` or `data-2`, that the shard for index `test` was created in and delete the _other_ empty data directory (so we effectively make a dangling index). \r\n 7. Consider that `data-2` was deleted. So start `node-2` again with `bin/elasticsearch -E path.data=/Users/thiago/data-2 -E node.name=node-2`\r\n 8. Start `node-1` (which contains dangling indices) as a non-data node with `bin/elasticsearch -E path.data=/Users/thiago/data-1 -E node.name=node-1 -E node.data=false`\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nAfter non-data node `node-1` starts, `node-2` will detect and auto-import dangling indices even though `node-1` is a non-data node: \r\n```\r\n[2017-10-21T18:02:14,158][INFO ][o.e.g.LocalAllocateDangledIndices] [node-2] auto importing dangled indices [[test/R2Nh9sERThmkJ-0IZ0ppwA]/OPEN] from [{node-1}{RqWMW2AeSXWOpkUm4cT1TA}{lEqpWLIhRqqU_n1DSFuv2Q}{127.0.0.1}{127.0.0.1:9301}]\r\n```\r\n\r\n",
"comments": [
{
"body": "We discussed this on Fixit Friday and agreed to add a check that will fail:\r\n- starting up a non-data node that has shard data (e.g. dedicated master node or coordinating-only node)\r\n- starting up a coordinating-only node that has index metadata.\r\n\r\nThis means that some user action (explicitly deleting shard data) is going to be required if a data node is switched to a master-only/ coordinating node.\r\n",
"created_at": "2017-10-27T13:55:17Z"
},
{
"body": "Is this taken or can I pick it?",
"created_at": "2017-11-22T11:49:31Z"
},
{
"body": "@swethapavan sure, go ahead.",
"created_at": "2017-11-22T12:06:30Z"
},
{
"body": "Thank you",
"created_at": "2017-11-22T16:39:08Z"
},
{
"body": "I think we can fail earlier than the bootstrap checks so I'm not sure if this should be a bootstrap check, isn't it enough to be a check in node environment (we've done this in the past with the default path data issue)?",
"created_at": "2017-11-23T02:35:58Z"
},
{
"body": "> I'm not sure if this should be a bootstrap check\r\n\r\nyes, I used bootstrap check in the larger sense here when I meant \"a boot/start time check\". It does not require the bootstrap checks code infrastructure.",
"created_at": "2017-11-23T08:19:30Z"
},
{
"body": "I have done the changes but I get errors when i run some tests because the node fails due to the existence of dangling indices",
"created_at": "2017-12-13T09:36:12Z"
},
{
"body": "Specifically, these are the tests that fail:\r\norg.elasticsearch.indices.flush.FlushIT.testSyncedFlushWithConcurrentIndexing\r\n - org.elasticsearch.indices.flush.FlushIT.testWaitIfOngoing\r\n - org.elasticsearch.indices.flush.FlushIT.testSyncedFlush\r\n - org.elasticsearch.search.geo.GeoShapeIntegrationIT.testOrientationPersistence\r\n - org.elasticsearch.search.geo.GeoShapeIntegrationIT.testIgnoreMalformed\r\n - org.elasticsearch.gateway.GatewayIndexStateIT.testJustMasterNode\r\n - org.elasticsearch.index.store.CorruptedFileIT.testReplicaCorruption",
"created_at": "2017-12-13T09:37:05Z"
},
{
"body": "> I think we can fail earlier than the bootstrap checks so I'm not sure if this should be a bootstrap check, isn't it enough to be a check in node environment (we've done this in the past with the default path data issue)?\r\n\r\nI wonder if adding it as a bootstrap check is actually a feature (ie. testing for it later). Like I can totally see starting up a node with `data=false` for testing in my dev env with local host disco etc. and I don't want them to fail in that case? Just putting out my way of thinking here.",
"created_at": "2017-12-13T09:47:21Z"
},
{
"body": "@swethapavan please open a PullRequest or share your code otherwise we won't be able to help you",
"created_at": "2017-12-13T09:48:06Z"
},
{
"body": "@s1monw I have created a pull request. Kindly have a look.",
"created_at": "2017-12-14T09:51:17Z"
},
{
"body": "> I wonder if adding it as a bootstrap check is actually a feature (ie. testing for it later). Like I can totally see starting up a node with data=false for testing in my dev env with local host disco etc. and I don't want them to fail in that case?\r\n\r\nMy preference would be not to have this as a bootstrap check. Bootstrap checks are requirements for going to production, and we should keep them at a strict minimum so that the difference between prod and dev stays low. For this particular check, I don't see a good reason why we would not want to enforce it for development mode as well. If you want to start-up a node with `data=false` for testing, and that you happen to do that on a data folder which previously had a node with data, you can as easily just define a different `path.data`.",
"created_at": "2018-01-09T13:54:48Z"
},
{
"body": "Is this issue still open, there seems to be no update on it since long. I would like to work on this.",
"created_at": "2018-04-19T12:11:19Z"
},
{
"body": "Is this fixed on 6.x? Ran into this issue yesterday on 5.6.10",
"created_at": "2018-09-01T11:36:36Z"
},
{
"body": "The proposal is to detect if a data=false node have any data and fail startup if that is the case. However, even indices without any data can be resurrected and I wonder if we need to also handle that? I have created a slightly modified reproduction case to explain this:\r\n\r\n1. Clear out any previous experiments:\r\n\r\n`rm -r data-1 data-2`\r\n\r\n2. Start two nodes:\r\n\r\n```\r\nbin/elasticsearch -E path.data=data-1 -E node.name=node-1\r\nbin/elasticsearch -E path.data=data-2 -E node.name=node-2\r\n```\r\n\r\n3. Create two indexes and data for them:\r\n\r\n```\r\ncurl -XPUT localhost:9200/test?pretty -d '{ \"settings\": { \"index\": { \"number_of_shards\": 1, \"number_of_replicas\": 0 } } }' -H \"Content-Type: application/json\"\r\ncurl -XPOST localhost:9200/test/_doc?pretty -d '{ \"test\": 1 }' -H \"Content-Type: application/json\"\r\n\r\ncurl -XPUT localhost:9200/test2?pretty -d '{ \"settings\": { \"index\": { \"number_of_shards\": 1, \"number_of_replicas\": 0 } } }' -H \"Content-Type: application/json\"\r\ncurl -XPOST localhost:9200/test2/_doc?pretty -d '{ \"test\": 1 }' -H \"Content-Type: application/json\"\r\n```\r\n\r\n4. Verify that data for the two indexes are on different nodes:\r\n\r\n`ls -d data-*/nodes/0/indices/*/0`\r\n\r\nshould give something like following (notice: different data folders):\r\n\r\n```\r\ndata-1/nodes/0/indices/bF19AZJvREOs33p8udeD-A/0 data-2/nodes/0/indices/xpuL1YkcR1SttdAYF6zGEg/0\r\n```\r\n\r\n5. Shutdown both nodes. Remove the data folder for `node-1`:\r\n\r\n`rm -r data-1`\r\n\r\n6. Start `node-1` and then `node-2` with `node.data=false`:\r\n\r\n`bin/elasticsearch -E path.data=data-1 -E node.name=node-1`\r\n`bin/elasticsearch -E path.data=data-2 -E node.name=node-2 -E node.data=false`\r\n\r\nExpected log for `node-2`:\r\n\r\n```\r\n[2019-01-10T11:54:46,133][INFO ][o.e.g.DanglingIndicesState] [node-2] [[test2/bF19AZJvREOs33p8udeD-A]] dangling index exists on local file system, but not in cluster metadata, auto import to cluster state\r\n[2019-01-10T11:54:46,133][INFO ][o.e.g.DanglingIndicesState] [node-2] [[test/xpuL1YkcR1SttdAYF6zGEg]] dangling index exists on local file system, but not in cluster metadata, auto import to cluster state\r\n```\r\n\r\nand for `node-1`:\r\n\r\n```\r\n[2019-01-10T11:54:46,308][INFO ][o.e.g.LocalAllocateDangledIndices] [node-1] auto importing dangled indices [[test2/bF19AZJvREOs33p8udeD-A]/OPEN][[test/xpuL1YkcR1SttdAYF6zGEg]/OPEN] from [{node-2}{wwM9q--3TmW0VCAHerzmNg}{OYshEsG6Rv6CvNmANivlnQ}{127.0.0.1}{127.0.0.1:9301}{ml.machine_memory=33465024512, ml.max_open_jobs=20, xpack.installed=true}]\r\n```\r\n\r\nLooking at the file system, both indices now exist on node-1 too without any data:\r\n\r\n```\r\nls -d data-1/nodes/0/indices/*/*\r\ndata-1/nodes/0/indices/bF19AZJvREOs33p8udeD-A/_state data-1/nodes/0/indices/xpuL1YkcR1SttdAYF6zGEg/_state\r\n```\r\n\r\nand both are red status:\r\n\r\n```\r\ncurl localhost:9200/_cat/indices?v\r\nhealth status index uuid pri rep docs.count docs.deleted store.size pri.store.size\r\nred open test xpuL1YkcR1SttdAYF6zGEg 1 0 \r\nred open test2 bF19AZJvREOs33p8udeD-A 1 0\r\n```\r\n\r\nThis makes me wonder whether the proposed change is enough since there is still a risk of resurrecting old indexes that did not have any shards allocated on the node?\r\n",
"created_at": "2019-01-10T11:16:25Z"
},
{
"body": "Had a conversation with @ywelsch on this on another channel. We came to the conclusion that the original proposal should be implemented to avoid resurrecting the indices in clearly bad cases and also to avoid having old data lying around that are invalid for the type of node.",
"created_at": "2019-01-10T12:59:08Z"
}
],
"number": 27073,
"title": "Dangling indices living in non-data nodes are detected and auto-imported"
} | {
"body": "#27073 : Some test cases are failing after the change. Need to investigate. \r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\n",
"number": 27811,
"review_comments": [],
"title": "#27073: Dangling indices living in non-data nodes are detected and auto-imported."
} | {
"commits": [
{
"message": "#27073: Dangling indices living in non-data nodes are detected and auto-imported. Some test cases are failing. Need to check further."
}
],
"files": [
{
"diff": "@@ -249,7 +249,9 @@ public NodeEnvironment(Settings settings, Environment environment) throws IOExce\n this.nodeLockId = nodeLockId;\n this.locks = locks;\n this.nodePaths = nodePaths;\n-\n+ if(!DiscoveryNode.isDataNode(settings) && !availableIndexFolders().isEmpty()) {\n+ throw new IllegalStateException(\"Non Data node cannot have dangling indices\");\n+ }\n if (logger.isDebugEnabled()) {\n logger.debug(\"using node location [{}], local_lock_id [{}]\", nodePaths, nodeLockId);\n }",
"filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.node.Node;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n \n@@ -450,6 +451,32 @@ public void testExistingTempFiles() throws IOException {\n }\n }\n \n+ public void testIfNodeEnvironmentInitiationFails() throws IOException {\n+ // simulate some previous left over temp files\n+ Settings settings = buildEnvSettings(Settings.builder().put(Node.NODE_DATA_SETTING.getKey(), false).build());\n+\n+ List<String> dataPaths = Environment.PATH_DATA_SETTING.get(settings);\n+\n+\n+ final Path nodePath = NodeEnvironment.resolveNodePath(PathUtils.get(dataPaths.get(0)), 0);\n+ final Path indicesPath = nodePath.resolve(NodeEnvironment.INDICES_FOLDER);\n+\n+ Files.createDirectories(indicesPath.resolve(\"index-uuid\"));\n+ try {\n+\n+ NodeEnvironment env = new NodeEnvironment(settings, TestEnvironment.newEnvironment(settings));\n+ Path nodepatt = env.nodePaths()[0].indicesPath;\n+ env.close();\n+ fail(\"Node environment instantiation should have failed for non data node\" + nodepatt + \" \" + indicesPath +\n+ env.availableIndexFolders());\n+ } catch (IllegalStateException e) {\n+ // that's OK :)\n+ }\n+\n+ for (String path: dataPaths) {\n+ Files.deleteIfExists(indicesPath.resolve(\"index-uuid\"));\n+ }\n+ }\n /** Converts an array of Strings to an array of Paths, adding an additional child if specified */\n private Path[] stringsToPaths(String[] strings, String additional) {\n Path[] locations = new Path[strings.length];",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
}
]
} |
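A minimal Java sketch of the startup check proposed in #27073 and attempted in the `NodeEnvironment` diff above; the helper class and method names are hypothetical, but `DiscoveryNode.isDataNode(settings)` and the set of available index folders come straight from that change.

```java
import java.util.Set;

import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.settings.Settings;

final class NonDataNodeStartupCheck {
    /** Refuses to start a node.data=false node whose data path still contains index folders. */
    static void ensureNoIndexData(Settings settings, Set<String> availableIndexFolders) {
        if (DiscoveryNode.isDataNode(settings) == false && availableIndexFolders.isEmpty() == false) {
            throw new IllegalStateException(
                "node with [node.data: false] has index data in its data path: " + availableIndexFolders);
        }
    }
}
```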
{
"body": "Today when we get a metadata snapshot from the index shard we ensure\r\nthat if there is no engine started on the shard that we lock the index\r\nwriter before we go and fetch the store metadata. Yet, if we concurrently\r\nrecover that shard, recovery finalization might fail since it can't acquire\r\nthe IW lock on the directory. This is mainly due to the wrong order of acquiring\r\nthe IW lock and the metadata lock. Fetching store metadata without a started engine\r\nshould block on the metadata lock in Store.java but since IndexShard locks the writer\r\nfirst we get into a failed recovery dance especially in test. In production\r\nthis is less of an issue since we rarely get into this situation if at all.\r\n\r\nCloses #24481\r\n",
"comments": [],
"number": 24787,
"title": "Obey lock order if working with store to get metadata snapshots"
} | {
"body": "Today when we get a metadata snapshot directly from a store directory, we acquire a metadata lock, then acquire an IndexWriter lock. However, we create a CheckIndex in IndexShard without acquiring the metadata lock first. This causes a recovery failed because the IndexWriter lock can be still held by method `snapshotStoreMetadata`. This commit makes sure to create a `CheckIndex` under the metadata lock.\r\n\r\nCloses #24481\r\nCloses #27731\r\nRelates #24787\r\n",
"number": 27768,
"review_comments": [
{
"body": "instead of adding this `runUnderMetadataLock` method would it make sense to just move the checkindex method into store like this `public CheckIndex.Status checkIndex()` then we can interpret the result here and run under the appropriate locks.",
"created_at": "2017-12-14T07:01:58Z"
},
{
"body": "I wonder if we should still acquire the lock. I don't think we should execute this while the IW is open?",
"created_at": "2017-12-18T08:47:43Z"
},
{
"body": "I pushed https://github.com/elastic/elasticsearch/pull/27768/commits/f231042ffd970b99fa0037ad3e5a6ce1c181674b",
"created_at": "2017-12-18T14:04:48Z"
}
],
"title": "Check and repair index under the store metadata lock"
} | {
"commits": [
{
"message": "Check index under the store metadata lock\n\nToday when we get a metadata snapshot directly from a store directory,\nwe acquire a metadata lock, then acquire an IW lock. However, we create\na CheckIndex in IndexShard without acquiring the metadata lock first.\nThis causes a recovery failed because the IW lock can be still held by\n`snapshotStoreMetadata`. This commit makes sure to create a CheckIndex\nunder the metadata lock.\n\nCloses #24481\nRelates #24787"
},
{
"message": "loan the metadata lock"
},
{
"message": "move checkIndex and exorciseIndex to Store"
},
{
"message": "do not sysout"
},
{
"message": "Merge branch 'master' into lock-checkindex"
},
{
"message": "Merge branch 'master' into lock-checkindex"
},
{
"message": "lock directory when checkIndex"
},
{
"message": "Merge branch 'master' into lock-checkindex"
},
{
"message": "Do not use CheckIndex directly"
}
],
"files": [
{
"diff": "@@ -1899,7 +1899,7 @@ public void noopUpdate(String type) {\n internalIndexingStats.noopUpdate(type);\n }\n \n- private void checkIndex() throws IOException {\n+ void checkIndex() throws IOException {\n if (store.tryIncRef()) {\n try {\n doCheckIndex();\n@@ -1938,29 +1938,25 @@ private void doCheckIndex() throws IOException {\n }\n } else {\n // full checkindex\n- try (CheckIndex checkIndex = new CheckIndex(store.directory())) {\n- checkIndex.setInfoStream(out);\n- CheckIndex.Status status = checkIndex.checkIndex();\n- out.flush();\n-\n- if (!status.clean) {\n- if (state == IndexShardState.CLOSED) {\n- // ignore if closed....\n- return;\n+ final CheckIndex.Status status = store.checkIndex(out);\n+ out.flush();\n+ if (!status.clean) {\n+ if (state == IndexShardState.CLOSED) {\n+ // ignore if closed....\n+ return;\n+ }\n+ logger.warn(\"check index [failure]\\n{}\", os.bytes().utf8ToString());\n+ if (\"fix\".equals(checkIndexOnStartup)) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"fixing index, writing new segments file ...\");\n }\n- logger.warn(\"check index [failure]\\n{}\", os.bytes().utf8ToString());\n- if (\"fix\".equals(checkIndexOnStartup)) {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"fixing index, writing new segments file ...\");\n- }\n- checkIndex.exorciseIndex(status);\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"index fixed, wrote new segments file \\\"{}\\\"\", status.segmentsFileName);\n- }\n- } else {\n- // only throw a failure if we are not going to fix the index\n- throw new IllegalStateException(\"index check failure but can't fix it\");\n+ store.exorciseIndex(status);\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"index fixed, wrote new segments file \\\"{}\\\"\", status.segmentsFileName);\n }\n+ } else {\n+ // only throw a failure if we are not going to fix the index\n+ throw new IllegalStateException(\"index check failure but can't fix it\");\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n import org.apache.lucene.codecs.CodecUtil;\n+import org.apache.lucene.index.CheckIndex;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.index.IndexCommit;\n import org.apache.lucene.index.IndexFileNames;\n@@ -86,6 +87,7 @@\n import java.io.FileNotFoundException;\n import java.io.IOException;\n import java.io.InputStream;\n+import java.io.PrintStream;\n import java.nio.file.AccessDeniedException;\n import java.nio.file.NoSuchFileException;\n import java.nio.file.Path;\n@@ -341,6 +343,33 @@ public int compare(Map.Entry<String, String> o1, Map.Entry<String, String> o2) {\n \n }\n \n+ /**\n+ * Checks and returns the status of the existing index in this store.\n+ *\n+ * @param out where infoStream messages should go. See {@link CheckIndex#setInfoStream(PrintStream)}\n+ */\n+ public CheckIndex.Status checkIndex(PrintStream out) throws IOException {\n+ metadataLock.writeLock().lock();\n+ try (CheckIndex checkIndex = new CheckIndex(directory)) {\n+ checkIndex.setInfoStream(out);\n+ return checkIndex.checkIndex();\n+ } finally {\n+ metadataLock.writeLock().unlock();\n+ }\n+ }\n+\n+ /**\n+ * Repairs the index using the previous returned status from {@link #checkIndex(PrintStream)}.\n+ */\n+ public void exorciseIndex(CheckIndex.Status status) throws IOException {\n+ metadataLock.writeLock().lock();\n+ try (CheckIndex checkIndex = new CheckIndex(directory)) {\n+ checkIndex.exorciseIndex(status);\n+ } finally {\n+ metadataLock.writeLock().unlock();\n+ }\n+ }\n+\n public StoreStats stats() throws IOException {\n ensureOpen();\n return statsCache.getOrRefresh();",
"filename": "core/src/main/java/org/elasticsearch/index/store/Store.java",
"status": "modified"
},
{
"diff": "@@ -2449,6 +2449,71 @@ public void testReadSnapshotConcurrently() throws IOException, InterruptedExcept\n closeShards(newShard);\n }\n \n+ /**\n+ * Simulates a scenario that happens when we are async fetching snapshot metadata from GatewayService\n+ * and checking index concurrently. This should always be possible without any exception.\n+ */\n+ public void testReadSnapshotAndCheckIndexConcurrently() throws Exception {\n+ final boolean isPrimary = randomBoolean();\n+ IndexShard indexShard = newStartedShard(isPrimary);\n+ final long numDocs = between(10, 100);\n+ for (long i = 0; i < numDocs; i++) {\n+ indexDoc(indexShard, \"doc\", Long.toString(i), \"{\\\"foo\\\" : \\\"bar\\\"}\");\n+ if (randomBoolean()) {\n+ indexShard.refresh(\"test\");\n+ }\n+ }\n+ indexShard.flush(new FlushRequest());\n+ closeShards(indexShard);\n+\n+ final ShardRouting shardRouting = ShardRoutingHelper.initWithSameId(indexShard.routingEntry(),\n+ isPrimary ? RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE : RecoverySource.PeerRecoverySource.INSTANCE\n+ );\n+ final IndexMetaData indexMetaData = IndexMetaData.builder(indexShard.indexSettings().getIndexMetaData())\n+ .settings(Settings.builder()\n+ .put(indexShard.indexSettings.getSettings())\n+ .put(IndexSettings.INDEX_CHECK_ON_STARTUP.getKey(), randomFrom(\"false\", \"true\", \"checksum\", \"fix\")))\n+ .build();\n+ final IndexShard newShard = newShard(shardRouting, indexShard.shardPath(), indexMetaData,\n+ null, indexShard.engineFactory, indexShard.getGlobalCheckpointSyncer());\n+\n+ Store.MetadataSnapshot storeFileMetaDatas = newShard.snapshotStoreMetadata();\n+ assertTrue(\"at least 2 files, commit and data: \" + storeFileMetaDatas.toString(), storeFileMetaDatas.size() > 1);\n+ AtomicBoolean stop = new AtomicBoolean(false);\n+ CountDownLatch latch = new CountDownLatch(1);\n+ Thread snapshotter = new Thread(() -> {\n+ latch.countDown();\n+ while (stop.get() == false) {\n+ try {\n+ Store.MetadataSnapshot readMeta = newShard.snapshotStoreMetadata();\n+ assertThat(readMeta.getNumDocs(), equalTo(numDocs));\n+ assertThat(storeFileMetaDatas.recoveryDiff(readMeta).different.size(), equalTo(0));\n+ assertThat(storeFileMetaDatas.recoveryDiff(readMeta).missing.size(), equalTo(0));\n+ assertThat(storeFileMetaDatas.recoveryDiff(readMeta).identical.size(), equalTo(storeFileMetaDatas.size()));\n+ } catch (IOException e) {\n+ throw new AssertionError(e);\n+ }\n+ }\n+ });\n+ snapshotter.start();\n+\n+ if (isPrimary) {\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.routingEntry(),\n+ getFakeDiscoNode(newShard.routingEntry().currentNodeId()), null));\n+ } else {\n+ newShard.markAsRecovering(\"peer\", new RecoveryState(newShard.routingEntry(),\n+ getFakeDiscoNode(newShard.routingEntry().currentNodeId()), getFakeDiscoNode(newShard.routingEntry().currentNodeId())));\n+ }\n+ int iters = iterations(10, 100);\n+ latch.await();\n+ for (int i = 0; i < iters; i++) {\n+ newShard.checkIndex();\n+ }\n+ assertTrue(stop.compareAndSet(false, true));\n+ snapshotter.join();\n+ closeShards(newShard);\n+ }\n+\n class Result {\n private final int localCheckpoint;\n private final int maxSeqNo;",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -204,16 +204,13 @@ public void afterIndexShardClosed(ShardId sid, @Nullable IndexShard indexShard,\n if (!Lucene.indexExists(store.directory()) && indexShard.state() == IndexShardState.STARTED) {\n return;\n }\n- try (CheckIndex checkIndex = new CheckIndex(store.directory())) {\n- BytesStreamOutput os = new BytesStreamOutput();\n- PrintStream out = new PrintStream(os, false, StandardCharsets.UTF_8.name());\n- checkIndex.setInfoStream(out);\n- out.flush();\n- CheckIndex.Status status = checkIndex.checkIndex();\n- if (!status.clean) {\n- logger.warn(\"check index [failure]\\n{}\", os.bytes().utf8ToString());\n- throw new IOException(\"index check failure\");\n- }\n+ BytesStreamOutput os = new BytesStreamOutput();\n+ PrintStream out = new PrintStream(os, false, StandardCharsets.UTF_8.name());\n+ CheckIndex.Status status = store.checkIndex(out);\n+ out.flush();\n+ if (!status.clean) {\n+ logger.warn(\"check index [failure]\\n{}\", os.bytes().utf8ToString());\n+ throw new IOException(\"index check failure\");\n }\n } catch (Exception e) {\n exception.add(e);",
"filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java",
"status": "modified"
},
{
"diff": "@@ -1070,4 +1070,5 @@ public Directory newDirectory() throws IOException {\n }\n store.close();\n }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/index/store/StoreTests.java",
"status": "modified"
},
{
"diff": "@@ -119,17 +119,14 @@ public static void checkIndex(Logger logger, Store store, ShardId shardId) {\n if (!Lucene.indexExists(dir)) {\n return;\n }\n- try (CheckIndex checkIndex = new CheckIndex(dir)) {\n+ try {\n BytesStreamOutput os = new BytesStreamOutput();\n PrintStream out = new PrintStream(os, false, StandardCharsets.UTF_8.name());\n- checkIndex.setInfoStream(out);\n+ CheckIndex.Status status = store.checkIndex(out);\n out.flush();\n- CheckIndex.Status status = checkIndex.checkIndex();\n if (!status.clean) {\n ESTestCase.checkIndexFailed = true;\n- logger.warn(\"check index [failure] index files={}\\n{}\",\n- Arrays.toString(dir.listAll()),\n- os.bytes().utf8ToString());\n+ logger.warn(\"check index [failure] index files={}\\n{}\", Arrays.toString(dir.listAll()), os.bytes().utf8ToString());\n throw new IOException(\"index check failure\");\n } else {\n if (logger.isDebugEnabled()) {",
"filename": "test/framework/src/main/java/org/elasticsearch/test/store/MockFSDirectoryService.java",
"status": "modified"
}
]
} |
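A condensed Java sketch of the lock ordering the PR above introduces in `Store`: take the metadata lock before opening a `CheckIndex` (which grabs the IndexWriter lock), so it cannot collide with a concurrent `snapshotStoreMetadata`, which takes the metadata lock first and then the writer lock. The standalone class and the injected lock parameter are assumptions for illustration; in the real change the lock is `Store`'s own `metadataLock` field.

```java
import java.io.IOException;
import java.io.PrintStream;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;

final class CheckIndexUnderMetadataLock {
    /** Runs CheckIndex only while holding the metadata write lock, mirroring Store#checkIndex in the diff. */
    static CheckIndex.Status checkIndex(Directory directory, ReentrantReadWriteLock metadataLock, PrintStream out)
            throws IOException {
        metadataLock.writeLock().lock();
        try (CheckIndex checkIndex = new CheckIndex(directory)) {
            checkIndex.setInfoStream(out);
            return checkIndex.checkIndex();
        } finally {
            metadataLock.writeLock().unlock();
        }
    }
}
```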
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 5.6.3, Build: 1a2f265/2017-10-06T20:33:39.012Z, JVM: 1.8.0_45\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_45\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_45-b14)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nDocuments can't be retrieved if their routing key ends with a whitespace. Whitespace at other positions in the key, however, work correctly.\r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n ```\r\n➜ ~ curl -XPOST \"localhost:9200/test/test/1?routing=key\" -d '{ \"name\" : \"routing is key\" }'\r\n{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":1,\"result\":\"created\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true} \r\n\r\n➜ ~ curl \"localhost:9200/test/test/_search?routing=key\"\r\n{\"took\":69,\"timed_out\":false,\"_shards\":{\"total\":1,\"successful\":1,\"skipped\":0,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_score\":1.0,\"_routing\":\"key\",\"_source\":{ \"name\" : \"routing is key\" }}]}}\r\n \r\n➜ ~ curl -XPOST \"localhost:9200/test/test/1?routing=ke%20\" -d '{ \"name\" : \"routing is ke \" }'\r\n{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":2,\"result\":\"updated\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":false}\r\n\r\n➜ ~ curl \"localhost:9200/test/test/_search?routing=ke%20\"\r\n{\"took\":6,\"timed_out\":false,\"_shards\":{\"total\":1,\"successful\":1,\"skipped\":0,\"failed\":0},\"hits\":{\"total\":0,\"max_score\":null,\"hits\":[]}}\r\n \r\n➜ ~ curl -XPOST \"localhost:9200/test/test/1?routing=k%20e\" -d '{ \"name\" : \"routing is k e\" }'\r\n{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":1,\"result\":\"created\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true} \r\n \r\n➜ ~ curl \"localhost:9200/test/test/_search?routing=k%20e\"\r\n{\"took\":2,\"timed_out\":false,\"_shards\":{\"total\":1,\"successful\":1,\"skipped\":0,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_score\":1.0,\"_routing\":\"k e\",\"_source\":{ \"name\" : \"routing is k e\" }}]}}\r\n```\r\n\r\n**Provide logs (if 
relevant)**:\r\n\r\n",
"comments": [
{
"body": "@taylorKonigsmark There is an open PR: #27712",
"created_at": "2017-12-08T15:22:33Z"
},
{
"body": "Closed by #27712",
"created_at": "2017-12-08T16:33:33Z"
}
],
"number": 27708,
"title": "Cannot search using routing keys that end in whitespace"
} | {
"body": "The problem here is that splitting was using a method that intentionally trims whitespace (the method is really meant to be used for splitting parameters where whitespace should be trimmed like list settings). However, for routing values whitespace should not be trimmed because we allow routing with leading and trailing spaces. This commit switches the parsing of these routing values to a method that does not trim whitespace.\r\n\r\nCloses #27708\r\n",
"number": 27712,
"review_comments": [],
"title": "Fix routing with leading or trailing whitespace"
} | {
"commits": [
{
"message": "Fix routing with leading or trailing whitespace\n\nThe problem here is that splitting was using a method that intentionally\ntrims whitespace (the method is really meant to be used for splitting\nparameters where whitespace should be trimmed like list\nsettings). However, for routing values whitespace should not be trimmed\nbecause we allow routing with leading and trailing spaces. This commit\nswitches the parsing of these routing values to a method that does not\ntrim whitespace."
},
{
"message": "Merge branch 'master' into fix-routing-with-leading-or-trailing-whitespace\n\n* master:\n Add test for writer operation buffer accounting (#27707)\n [TEST] Wait for merging to complete before testing breaker"
},
{
"message": "Fix ambiguity"
},
{
"message": "One more test case"
}
],
"files": [
{
"diff": "@@ -658,7 +658,6 @@\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]recovery[/\\\\]RelocationIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]recovery[/\\\\]TruncatedRecoveryIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]rest[/\\\\]BytesRestResponseTests.java\" checks=\"LineLength\" />\n- <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]routing[/\\\\]AliasResolveRoutingIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]routing[/\\\\]AliasRoutingIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]routing[/\\\\]SimpleRoutingIT.java\" checks=\"LineLength\" />\n <suppress files=\"core[/\\\\]src[/\\\\]test[/\\\\]java[/\\\\]org[/\\\\]elasticsearch[/\\\\]script[/\\\\]FileScriptTests.java\" checks=\"LineLength\" />",
"filename": "buildSrc/src/main/resources/checkstyle_suppressions.xml",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -58,7 +59,7 @@ private AliasMetaData(String alias, CompressedXContent filter, String indexRouti\n this.indexRouting = indexRouting;\n this.searchRouting = searchRouting;\n if (searchRouting != null) {\n- searchRoutingValues = Collections.unmodifiableSet(Strings.splitStringByCommaToSet(searchRouting));\n+ searchRoutingValues = Collections.unmodifiableSet(Sets.newHashSet(Strings.splitStringByCommaToArray(searchRouting)));\n } else {\n searchRoutingValues = emptySet();\n }\n@@ -186,7 +187,7 @@ public AliasMetaData(StreamInput in) throws IOException {\n }\n if (in.readBoolean()) {\n searchRouting = in.readString();\n- searchRoutingValues = Collections.unmodifiableSet(Strings.splitStringByCommaToSet(searchRouting));\n+ searchRoutingValues = Collections.unmodifiableSet(Sets.newHashSet(Strings.splitStringByCommaToArray(searchRouting)));\n } else {\n searchRouting = null;\n searchRoutingValues = emptySet();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/AliasMetaData.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.indices.IndexClosedException;\n@@ -358,6 +359,7 @@ public Map<String, Set<String>> resolveSearchRouting(ClusterState state, @Nullab\n resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions);\n }\n \n+ // TODO: it appears that this can never be true?\n if (isAllIndices(resolvedExpressions)) {\n return resolveSearchRoutingAllIndices(state.metaData(), routing);\n }\n@@ -367,7 +369,7 @@ public Map<String, Set<String>> resolveSearchRouting(ClusterState state, @Nullab\n // List of indices that don't require any routing\n Set<String> norouting = new HashSet<>();\n if (routing != null) {\n- paramRouting = Strings.splitStringByCommaToSet(routing);\n+ paramRouting = Sets.newHashSet(Strings.splitStringByCommaToArray(routing));\n }\n \n for (String expression : resolvedExpressions) {\n@@ -442,9 +444,9 @@ public Map<String, Set<String>> resolveSearchRouting(ClusterState state, @Nullab\n /**\n * Sets the same routing for all indices\n */\n- private Map<String, Set<String>> resolveSearchRoutingAllIndices(MetaData metaData, String routing) {\n+ public Map<String, Set<String>> resolveSearchRoutingAllIndices(MetaData metaData, String routing) {\n if (routing != null) {\n- Set<String> r = Strings.splitStringByCommaToSet(routing);\n+ Set<String> r = Sets.newHashSet(Strings.splitStringByCommaToArray(routing));\n Map<String, Set<String>> routings = new HashMap<>();\n String[] concreteIndices = metaData.getConcreteAllIndices();\n for (String index : concreteIndices) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,55 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.util.set.Sets;\n+import org.elasticsearch.common.xcontent.XContent;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class AliasMetaDataTests extends ESTestCase {\n+\n+ public void testSerialization() throws IOException {\n+ final AliasMetaData before =\n+ AliasMetaData\n+ .builder(\"alias\")\n+ .filter(\"{ \\\"term\\\": \\\"foo\\\"}\")\n+ .indexRouting(\"indexRouting\")\n+ .routing(\"routing\")\n+ .searchRouting(\"trim,tw , ltw , lw\")\n+ .build();\n+\n+ assertThat(before.searchRoutingValues(), equalTo(Sets.newHashSet(\"trim\", \"tw \", \" ltw \", \" lw\")));\n+\n+ final BytesStreamOutput out = new BytesStreamOutput();\n+ before.writeTo(out);\n+\n+ final StreamInput in = out.bytes().streamInput();\n+ final AliasMetaData after = new AliasMetaData(in);\n+\n+ assertThat(after, equalTo(before));\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/AliasMetaDataTests.java",
"status": "added"
},
{
"diff": "@@ -26,21 +26,20 @@\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.test.ESIntegTestCase;\n \n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.ExecutionException;\n \n import static org.elasticsearch.common.util.set.Sets.newHashSet;\n-import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n \n public class AliasResolveRoutingIT extends ESIntegTestCase {\n \n-\n // see https://github.com/elastic/elasticsearch/issues/13278\n public void testSearchClosedWildcardIndex() throws ExecutionException, InterruptedException {\n createIndex(\"test-0\");\n@@ -52,10 +51,17 @@ public void testSearchClosedWildcardIndex() throws ExecutionException, Interrupt\n client().prepareIndex(\"test-0\", \"type1\", \"2\").setSource(\"field1\", \"quick brown\"),\n client().prepareIndex(\"test-0\", \"type1\", \"3\").setSource(\"field1\", \"quick\"));\n refresh(\"test-*\");\n- assertHitCount(client().prepareSearch().setIndices(\"alias-*\").setIndicesOptions(IndicesOptions.lenientExpandOpen()).setQuery(queryStringQuery(\"quick\")).get(), 3L);\n+ assertHitCount(\n+ client()\n+ .prepareSearch()\n+ .setIndices(\"alias-*\")\n+ .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n+ .setQuery(queryStringQuery(\"quick\"))\n+ .get(),\n+ 3L);\n }\n \n- public void testResolveIndexRouting() throws Exception {\n+ public void testResolveIndexRouting() {\n createIndex(\"test1\");\n createIndex(\"test2\");\n client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForGreenStatus().execute().actionGet();\n@@ -97,9 +103,10 @@ public void testResolveIndexRouting() throws Exception {\n }\n }\n \n- public void testResolveSearchRouting() throws Exception {\n+ public void testResolveSearchRouting() {\n createIndex(\"test1\");\n createIndex(\"test2\");\n+ createIndex(\"test3\");\n client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForGreenStatus().execute().actionGet();\n \n client().admin().indices().prepareAliases()\n@@ -108,7 +115,10 @@ public void testResolveSearchRouting() throws Exception {\n .addAliasAction(AliasActions.add().index(\"test2\").alias(\"alias20\").routing(\"0\"))\n .addAliasAction(AliasActions.add().index(\"test2\").alias(\"alias21\").routing(\"1\"))\n .addAliasAction(AliasActions.add().index(\"test1\").alias(\"alias0\").routing(\"0\"))\n- .addAliasAction(AliasActions.add().index(\"test2\").alias(\"alias0\").routing(\"0\")).get();\n+ .addAliasAction(AliasActions.add().index(\"test2\").alias(\"alias0\").routing(\"0\"))\n+ .addAliasAction(AliasActions.add().index(\"test3\").alias(\"alias3tw\").routing(\"tw \"))\n+ .addAliasAction(AliasActions.add().index(\"test3\").alias(\"alias3ltw\").routing(\" ltw \"))\n+ .addAliasAction(AliasActions.add().index(\"test3\").alias(\"alias3lw\").routing(\" lw\")).get();\n \n ClusterState state = clusterService().state();\n IndexNameExpressionResolver indexNameExpressionResolver = internalCluster().getInstance(IndexNameExpressionResolver.class);\n@@ -118,7 +128,9 @@ public void testResolveSearchRouting() throws Exception {\n assertThat(indexNameExpressionResolver.resolveSearchRouting(state, null, \"alias10\"), equalTo(newMap(\"test1\", 
newSet(\"0\"))));\n assertThat(indexNameExpressionResolver.resolveSearchRouting(state, \"0\", \"alias10\"), equalTo(newMap(\"test1\", newSet(\"0\"))));\n assertThat(indexNameExpressionResolver.resolveSearchRouting(state, \"1\", \"alias10\"), nullValue());\n- assertThat(indexNameExpressionResolver.resolveSearchRouting(state, null, \"alias0\"), equalTo(newMap(\"test1\", newSet(\"0\"), \"test2\", newSet(\"0\"))));\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, null, \"alias0\"),\n+ equalTo(newMap(\"test1\", newSet(\"0\"), \"test2\", newSet(\"0\"))));\n \n assertThat(indexNameExpressionResolver.resolveSearchRouting(state, null, new String[]{\"alias10\", \"alias20\"}),\n equalTo(newMap(\"test1\", newSet(\"0\"), \"test2\", newSet(\"0\"))));\n@@ -143,13 +155,42 @@ public void testResolveSearchRouting() throws Exception {\n equalTo(newMap(\"test1\", newSet(\"0\"), \"test2\", newSet(\"1\"))));\n assertThat(indexNameExpressionResolver.resolveSearchRouting(state, \"0,1,2\", new String[]{\"test1\", \"alias10\", \"alias21\"}),\n equalTo(newMap(\"test1\", newSet(\"0\", \"1\", \"2\"), \"test2\", newSet(\"1\"))));\n+\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"tw , ltw , lw\", \"test1\"),\n+ equalTo(newMap(\"test1\", newSet(\"tw \", \" ltw \", \" lw\"))));\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"tw , ltw , lw\", \"alias3tw\"),\n+ equalTo(newMap(\"test3\", newSet(\"tw \"))));\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"tw , ltw , lw\", \"alias3ltw\"),\n+ equalTo(newMap(\"test3\", newSet(\" ltw \"))));\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"tw , ltw , lw\", \"alias3lw\"),\n+ equalTo(newMap(\"test3\", newSet(\" lw\"))));\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"0,tw , ltw , lw\", \"test1\", \"alias3ltw\"),\n+ equalTo(newMap(\"test1\", newSet(\"0\", \"tw \", \" ltw \", \" lw\"), \"test3\", newSet(\" ltw \"))));\n+\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRouting(state, \"0,1,2,tw , ltw , lw\", (String[])null),\n+ equalTo(newMap(\n+ \"test1\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"),\n+ \"test2\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"),\n+ \"test3\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"))));\n+\n+ assertThat(\n+ indexNameExpressionResolver.resolveSearchRoutingAllIndices(state.metaData(), \"0,1,2,tw , ltw , lw\"),\n+ equalTo(newMap(\n+ \"test1\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"),\n+ \"test2\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"),\n+ \"test3\", newSet(\"0\", \"1\", \"2\", \"tw \", \" ltw \", \" lw\"))));\n }\n \n private <T> Set<T> newSet(T... elements) {\n return newHashSet(elements);\n }\n \n-\n private <K, V> Map<K, V> newMap(K key, V value) {\n Map<K, V> r = new HashMap<>();\n r.put(key, value);\n@@ -163,4 +204,12 @@ private <K, V> Map<K, V> newMap(K key1, V value1, K key2, V value2) {\n return r;\n }\n \n+ private <K, V> Map<K, V> newMap(K key1, V value1, K key2, V value2, K key3, V value3) {\n+ Map<K, V> r = new HashMap<>();\n+ r.put(key1, value1);\n+ r.put(key2, value2);\n+ r.put(key3, value3);\n+ return r;\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/routing/AliasResolveRoutingIT.java",
"status": "modified"
}
]
} |
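The routing fix above comes down to the difference between a comma split that trims each token and one that preserves it verbatim. A minimal standalone sketch of that difference (the method names here are hypothetical stand-ins for illustration, not the actual Elasticsearch helpers):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Contrast a trimming split (appropriate for list-style settings) with a plain
// split that preserves whitespace (what routing values need). Illustration only.
public class RoutingSplitDemo {

    // Trims each token, so a routing value like "ke " silently collapses to "ke".
    static Set<String> splitAndTrim(String value) {
        Set<String> result = new LinkedHashSet<>();
        for (String token : value.split(",")) {
            result.add(token.trim());
        }
        return result;
    }

    // Keeps tokens exactly as given, preserving leading and trailing spaces.
    static Set<String> splitPreservingWhitespace(String value) {
        return new LinkedHashSet<>(Arrays.asList(value.split(",")));
    }

    public static void main(String[] args) {
        String routing = "tw , ltw , lw";
        System.out.println(splitAndTrim(routing));              // [tw, ltw, lw]
        System.out.println(splitPreservingWhitespace(routing)); // tokens keep their spaces
    }
}
```

With the trimming variant, a document indexed with routing "ke " and a search routed on the same value end up hashed from different strings, which is why the search in the report above returned no hits.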
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.4.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): java version \"1.8.0_131\"\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): MacOS 10.11.6\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nHave been trying to index a lot of geometry, and getting consistently the same problems with those geometry/polygons that seem to contain a lot of vertices. Have tried importing the shapefile, and geojson through ogr2ogr. also tried generating a json file to use with bulk update. The error I am getting is an ArrayOutofBoundsException. \r\n\r\nAlso to note, if I upload the JSON file without specifying a mapping i.e. a geo_shape, then the data gets sucked in, but I would expect it's then not indexed.\r\n\r\nAs I have consistently been able to reproduce this with different upload routes, I am assuming it has to be a bug - is there a way around it? \r\n\r\nError I am getting is below:\r\nhttps://gist.github.com/aj7/12fa99044901d7db8b42a5d5d5753bf8\r\n\r\nI am happy to attach the json file if someone want's to try and see for themselves?\r\n\r\n",
"comments": [
{
"body": "@aj7 It would be very helpful if you can create a minimal reproduction scenario (data + API calls / curl commands).",
"created_at": "2017-07-27T12:54:10Z"
},
{
"body": "@danielmitterdorfer Sure, I have put the info in the [gist](https://gist.github.com/aj7/307b4577dbe52601c5e7b940d81450cd)\r\n\r\nHope this is fine?\r\n",
"created_at": "2017-07-27T13:08:47Z"
},
{
"body": "Based on the data from the gist I could create a minimal reproduction scenario for Elasticsearch 5.4.1 consisting of only one document (attached compressed as [geometry.json.zip](https://github.com/elastic/elasticsearch/files/1180167/geometry.json.zip)).\r\n\r\n```\r\ncurl -XDELETE \"http://localhost:9200/speedlimit\"\r\n\r\ncurl -XPUT \"http://localhost:9200/speedlimit\" -d '\r\n{\r\n \"mappings\": {\r\n \"speedlimit\": {\r\n \"properties\": {\r\n \"geometry\": {\r\n \"type\": \"geo_shape\",\r\n \"tree\": \"quadtree\",\r\n \"precision\": \"100m\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n# ensure that you've unzipped geometry.json.zip from the attachment above\r\ncurl -XPOST 'http://localhost:9200/_bulk?pretty' --data-binary @geometry.json\r\n```\r\n\r\nWill lead to:\r\n\r\n```\r\n{\r\n \"took\" : 33813,\r\n \"errors\" : true,\r\n \"items\" : [\r\n {\r\n \"index\" : {\r\n \"_index\" : \"speedlimit\",\r\n \"_type\" : \"speedlimit\",\r\n \"_id\" : \"AV2EWf3T4RDVowrFsHtQ\",\r\n \"status\" : 400,\r\n \"error\" : {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse [geometry]\",\r\n \"caused_by\" : {\r\n \"type\" : \"array_index_out_of_bounds_exception\",\r\n \"reason\" : \"-1\"\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nand in the logs:\r\n\r\n```\r\norg.elasticsearch.index.mapper.MapperParsingException: failed to parse [geometry]\r\n\tat org.elasticsearch.index.mapper.GeoShapeFieldMapper.parse(GeoShapeFieldMapper.java:473) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:450) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:467) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n[...]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException: -1\r\n\tat org.elasticsearch.common.geo.builders.PolygonBuilder.assign(PolygonBuilder.java:483) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.geo.builders.PolygonBuilder.compose(PolygonBuilder.java:455) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.geo.builders.PolygonBuilder.coordinates(PolygonBuilder.java:221) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.geo.builders.MultiPolygonBuilder.build(MultiPolygonBuilder.java:129) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.index.mapper.GeoShapeFieldMapper.parse(GeoShapeFieldMapper.java:455) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 36 more\r\n",
"created_at": "2017-07-27T14:03:14Z"
},
{
"body": "I have nothing to add to this ticket other than to +1 @aj7 on this issue. We too have seen this error with either high vertex count polygons or polygons with very high precision. Sometime the workaround is to simplify the geometry with something like PostGIS but often that can compromise the data. \r\nAnd, for what it's worth, it's not too hard to generate polygons like this (imagine a highly precise polygon representing country borders).\r\nWe would love for this to be fixed, as we've run into this on more then one occasion.",
"created_at": "2017-09-08T23:10:13Z"
},
{
"body": "We also faced with this issue\r\nSimple example cause\r\n\"java.lang.ArrayIndexOutOfBoundsException: -1\\r\\n\\tat org.elasticsearch.common.geo.builders.PolygonBuilder.assign(PolygonBuilder.java:483)\\r\\n\\tat org.elasticsearch.common.geo.builders.PolygonBuilder.compose(PolygonBuilder.java:455)\\r\\n\\tat org.elasticsearch.common.geo.builders.PolygonBuilder.coordinates(PolygonBuilder.java:221)\\r\\n\\tat org.elasticsearch.common.geo.builders.PolygonBuilder.buildGeometry(PolygonBuilder.java:251)\\r\\n\\tat org.elasticsearch.common.geo.builders.PolygonBuilder.build(PolygonBuilder.java:226)\\r\\n\\tat \r\n\r\n\r\n{\"type\":\"Polygon\",\"coordinates\":[[[-10,10.0000000000002],[-5,-5],[10,5],[-10,10.0000000000002]],[[-10,10],[0,5],[0,0],[-10,10]]]}",
"created_at": "2017-12-05T12:16:17Z"
},
{
"body": "This seems to be because of an error in the geometry you're trying to index, although it would certainly be helpful if there was a more descriptive error message given. The attached zip file, for instance, contains self-intersections so I think it's right to reject it.\r\n\r\nHere is a shorter reproduction:\r\n\r\n```\r\n$ curl -XDELETE \"http://localhost:9200/speedlimit\"\r\n{\"acknowledged\":true}\r\n$ curl -XPUT \"http://localhost:9200/speedlimit\" -H'Content-type: application/json' -d '{\"mappings\":{\"speedlimit\":{\"properties\":{\"geometry\":{\"type\":\"geo_shape\",\"tree\":\"quadtree\",\"precision\":\"100m\"}}}}}'\r\n{\"acknowledged\":true,\"shards_acknowledged\":true,\"index\":\"speedlimit\"}\r\n$ curl -XPOST \"http://localhost:9200/speedlimit/speedlimit\" -H'Content-type: application/json' -d '{\"geometry\":{\"coordinates\":[[[[4,3],[3,2],[3,3],[4,3]],[[4,2],[3,1],[4,1],[4,2]]]],\"type\":\"MultiPolygon\"}}'\r\n{\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse [geometry]\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse [geometry]\",\"caused_by\":{\"type\":\"array_index_out_of_bounds_exception\",\"reason\":\"-1\"}},\"status\":400}\r\n```\r\n\r\nIn this case the given geometry contains two triangles, the first of which is clockwise and the second is anticlockwise, indicating that one is an enclave (hole) within the other. However the triangles do not touch.",
"created_at": "2017-12-05T15:18:06Z"
},
{
"body": "@rkuchvarskyy the example you give also is not a valid geometry because the \"hole\" is not completely within the outer polygon. It's too small to see the error, but if I exaggerate the problem a bit your shape looks like this:\r\n\r\n<img width=\"278\" alt=\"screen shot 2017-12-05 at 15 41 56\" src=\"https://user-images.githubusercontent.com/5058284/33615906-39893440-d9d3-11e7-86e4-11992042b03c.png\">\r\n",
"created_at": "2017-12-05T15:45:30Z"
},
{
"body": "@DaveCTurner I understood this but I expect some validation exception but not java.lang.ArrayIndexOutOfBoundsException",
"created_at": "2017-12-05T15:54:29Z"
},
{
"body": "As of version 6.2 these shapes will yield a more descriptive `org.locationtech.spatial4j.exception.InvalidShapeException` rather than `java.lang.ArrayIndexOutOfBoundsException`, via #27685.\r\n\r\nNote that this issue was nothing to do with the vertex count, nor the precision, and that the fix does not change the set of shapes accepted by Elasticsearch: it merely improves the message it gives when rejecting some invalid ones.",
"created_at": "2017-12-18T08:59:56Z"
}
],
"number": 25933,
"title": "Geo_shape indexing issue - java.lang.ArrayIndexOutOfBoundsException"
} | {
"body": "Normally the hole is assigned to the component of the first edge to the south\r\nof one of its vertices, but if the chosen hole vertex is south of everything\r\nthen the binary search returns -1 yielding an `ArrayIndexOutOfBoundsException`.\r\nInstead, assign the vertex to the component of the first edge to its north.\r\nSubsequent validation catches the fact that the hole is outside its component.\r\n\r\nFixes #25933\r\n",
"number": 27685,
"review_comments": [
{
"body": "nit: there is the neat expectThrows() utility in LuceneTestCase that makes test for exceptions a little more succinct. MIght help here (and in the other test) as well",
"created_at": "2017-12-06T08:50:20Z"
},
{
"body": "D'oh, I forgot about that. I pushed dc88dace9b",
"created_at": "2017-12-06T08:54:26Z"
},
{
"body": "@DaveCTurner thanks for walking me trough this f2f, it took some time for me to understand but now I think I understand this whole method a lot better. This looks right to me now after some longer thought, I think we should add some comments to this part after that explains the three(?) cases that might be true at this point and what that means for the index of the edge we assign this component to.",
"created_at": "2017-12-14T15:33:34Z"
}
],
"title": "Handle case where the hole vertex is south of the containing polygon(s)"
} | {
"commits": [
{
"message": "Handle case where the hole vertex is south of the containing polygon(s)\n\nNormally the hole is assigned to the component of the first edge to the south\nof one of its vertices, but if the chosen hole vertex is south of everything\nthen the binary search returns -1 yielding an ArrayIndexOutOfBoundsException.\nInstead, assign the vertex to the component of the first edge to its north.\nSubsequent validation catches the fact that the hole is outside its component.\n\nFixes #25933"
},
{
"message": "Use expectThrows()"
},
{
"message": "Add comments and split up some of the conditionals for clarity"
},
{
"message": "Merge branch 'master' into 2017-12-05-issue-25933"
}
],
"files": [
{
"diff": "@@ -469,20 +469,56 @@ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Ed\n LOGGER.debug(\"Holes: {}\", Arrays.toString(holes));\n }\n for (int i = 0; i < numHoles; i++) {\n+ // To do the assignment we assume (and later, elsewhere, check) that each hole is within\n+ // a single component, and the components do not overlap. Based on this assumption, it's\n+ // enough to find a component that contains some vertex of the hole, and\n+ // holes[i].coordinate is such a vertex, so we use that one.\n+\n+ // First, we sort all the edges according to their order of intersection with the line\n+ // of longitude through holes[i].coordinate, in order from south to north. Edges that do\n+ // not intersect this line are sorted to the end of the array and of no further interest\n+ // here.\n final Edge current = new Edge(holes[i].coordinate, holes[i].next);\n- // the edge intersects with itself at its own coordinate. We need intersect to be set this way so the binary search\n- // will get the correct position in the edge list and therefore the correct component to add the hole\n current.intersect = current.coordinate;\n final int intersections = intersections(current.coordinate.x, edges);\n- // if no intersection is found then the hole is not within the polygon, so\n- // don't waste time calling a binary search\n+\n+ if (intersections == 0) {\n+ // There were no edges that intersect the line of longitude through\n+ // holes[i].coordinate, so there's no way this hole is within the polygon.\n+ throw new InvalidShapeException(\"Invalid shape: Hole is not within polygon\");\n+ }\n+\n+ // Next we do a binary search to find the position of holes[i].coordinate in the array.\n+ // The binary search returns the index of an exact match, or (-insertionPoint - 1) if\n+ // the vertex lies between the intersections of edges[insertionPoint] and\n+ // edges[insertionPoint+1]. The latter case is vastly more common.\n+\n final int pos;\n boolean sharedVertex = false;\n- if (intersections == 0 || ((pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER)) >= 0)\n- && !(sharedVertex = (edges[pos].intersect.compareTo(current.coordinate) == 0)) ) {\n+ if (((pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER)) >= 0)\n+ && !(sharedVertex = (edges[pos].intersect.compareTo(current.coordinate) == 0))) {\n+ // The binary search returned an exact match, but we checked again using compareTo()\n+ // and it didn't match after all.\n+\n+ // TODO Can this actually happen? Needs a test to exercise it, or else needs to be removed.\n throw new InvalidShapeException(\"Invalid shape: Hole is not within polygon\");\n }\n- final int index = -((sharedVertex) ? 0 : pos+2);\n+\n+ final int index;\n+ if (sharedVertex) {\n+ // holes[i].coordinate lies exactly on an edge.\n+ index = 0; // TODO Should this be pos instead of 0? This assigns exact matches to the southernmost component.\n+ } else if (pos == -1) {\n+ // holes[i].coordinate is strictly south of all intersections. Assign it to the\n+ // southernmost component, and allow later validation to spot that it is not\n+ // entirely within the chosen component.\n+ index = 0;\n+ } else {\n+ // holes[i].coordinate is strictly north of at least one intersection. Assign it to\n+ // the component immediately to its south.\n+ index = -(pos + 2);\n+ }\n+\n final int component = -edges[index].component - numHoles - 1;\n \n if(debugEnabled()) {",
"filename": "core/src/main/java/org/elasticsearch/common/geo/builders/PolygonBuilder.java",
"status": "modified"
},
{
"diff": "@@ -20,10 +20,10 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.vividsolutions.jts.geom.Coordinate;\n-\n import org.elasticsearch.common.geo.builders.ShapeBuilder.Orientation;\n import org.elasticsearch.test.geo.RandomShapeGenerator;\n import org.elasticsearch.test.geo.RandomShapeGenerator.ShapeType;\n+import org.locationtech.spatial4j.exception.InvalidShapeException;\n \n import java.io.IOException;\n \n@@ -124,4 +124,23 @@ public void testCoerceHole() {\n assertThat(\"hole should have been closed via coerce\", pb.holes().get(0).coordinates(false).length, equalTo(4));\n }\n \n+ public void testHoleThatIsSouthOfPolygon() {\n+ InvalidShapeException e = expectThrows(InvalidShapeException.class, () -> {\n+ PolygonBuilder pb = new PolygonBuilder(new CoordinatesBuilder().coordinate(4, 3).coordinate(3, 2).coordinate(3, 3).close());\n+ pb.hole(new LineStringBuilder(new CoordinatesBuilder().coordinate(4, 2).coordinate(3, 1).coordinate(4, 1).close()));\n+ pb.build();\n+ });\n+\n+ assertEquals(\"Hole lies outside shell at or near point (4.0, 1.0, NaN)\", e.getMessage());\n+ }\n+\n+ public void testHoleThatIsNorthOfPolygon() {\n+ InvalidShapeException e = expectThrows(InvalidShapeException.class, () -> {\n+ PolygonBuilder pb = new PolygonBuilder(new CoordinatesBuilder().coordinate(3, 2).coordinate(4, 1).coordinate(3, 1).close());\n+ pb.hole(new LineStringBuilder(new CoordinatesBuilder().coordinate(3, 3).coordinate(4, 2).coordinate(4, 3).close()));\n+ pb.build();\n+ });\n+\n+ assertEquals(\"Hole lies outside shell at or near point (4.0, 3.0, NaN)\", e.getMessage());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/geo/builders/PolygonBuilderTests.java",
"status": "modified"
}
]
} |
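For context on the arithmetic in the fix above, here is a small standalone sketch (plain JDK code, not the Elasticsearch implementation) of how `Arrays.binarySearch` reports a key that is smaller than every element, which is the case where the old hole-assignment code derived the offending index of -1:

```java
import java.util.Arrays;

// Arrays.binarySearch returns (-insertionPoint - 1) when the key is absent.
// If the hole vertex lies south of every edge intersection, the insertion point
// is 0, so the search returns -1 and the old formula -(pos + 2) also yields -1.
public class HoleSouthOfPolygonDemo {
    public static void main(String[] args) {
        double[] intersections = {2.0, 3.0, 4.0}; // sorted from south to north
        double holeVertex = 1.0;                  // south of everything

        int pos = Arrays.binarySearch(intersections, holeVertex);
        System.out.println(pos);                  // -1

        // Old behaviour: index = -(pos + 2) = -1, then indexing the edge array
        // throws ArrayIndexOutOfBoundsException. The fix assigns such a vertex to
        // the southernmost component (index 0) and lets later validation reject
        // the hole as lying outside its shell.
        int index = (pos == -1) ? 0 : -(pos + 2);
        System.out.println(index);                // 0
    }
}
```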
{
"body": "I'm not sure if this is actually working as intended and a documentation bug, or an actual bug with the documentation being right (either way, existing docs are wrong). The documentation says that `doc_values` are enabled by default for `binary` fields, but that is not the case:\r\n\r\n```http\r\nPUT /test\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1\r\n },\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"message\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"blob\": {\r\n \"type\": \"binary\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT /test/doc/1\r\n{\r\n \"message\": \"in a bottle\",\r\n \"blob\": \"U29tZSBiaW5hcnkgYmxvYg==\"\r\n}\r\n\r\nGET /test/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": {\r\n \"script\": {\r\n \"script\": {\r\n \"source\": \"doc['blob'] != null\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nUnless you manually specify `\"doc_values\": true` for the `binary` field, the script triggers an exception:\r\n\r\n```\r\norg.elasticsearch.index.mapper.MappedFieldType.failIfNoDocValues(MappedFieldType.java:418)\r\norg.elasticsearch.index.mapper.BinaryFieldMapper$BinaryFieldType.fielddataBuilder(BinaryFieldMapper.java:125)\r\n...\r\n```",
"comments": [
{
"body": "I think this is a documentation bug. IMO the primary use case for binary fields is to store a blob that you don't want to search or aggregate on but you want to be returned in the `_source` of hits. An example of this would be embedding a small profile picture in documents.",
"created_at": "2017-11-03T10:36:33Z"
}
],
"number": 27240,
"title": "[binary] fields do not have doc_values enabled by default"
} | {
"body": "closes #27240",
"number": 27680,
"review_comments": [],
"title": "Correct docs for binary fields and their default for doc values"
} | {
"commits": [
{
"message": "Correct docs for binary fields and their default for doc values\n\ncloses #27240"
}
],
"files": [
{
"diff": "@@ -43,7 +43,7 @@ The following parameters are accepted by `binary` fields:\n \n Should the field be stored on disk in a column-stride fashion, so that it\n can later be used for sorting, aggregations, or scripting? Accepts `true`\n- (default) or `false`.\n+ or `false` (default).\n \n <<mapping-store,`store`>>::\n ",
"filename": "docs/reference/mapping/types/binary.asciidoc",
"status": "modified"
}
]
} |
{
"body": "Unknown index settings from 2.x are moved to the `archived` namespace on upgrade to 5.x.\r\n\r\nThese settings are impossible to delete as the `archived.*` setting pattern is changed to `index.archived.*`, which does not exist.\r\n\r\nIn 2.x, run:\r\n\r\n```\r\nPUT foo \r\n{\r\n \"settings\": {\r\n \"index.foo\": \"bar\"\r\n }\r\n}\r\n```\r\n\r\nIn 5.x, run:\r\n\r\n```\r\nGET foo\r\n```\r\n\r\nreturns:\r\n\r\n```\r\n{\r\n \"foo\": {\r\n \"aliases\": {},\r\n \"mappings\": {},\r\n \"settings\": {\r\n \"archived\": {\r\n \"index\": {\r\n \"foo\": \"bar\"\r\n }\r\n },\r\n \"index\": {\r\n \"creation_date\": \"1511788651546\",\r\n ...\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nRunning:\r\n```\r\nPUT foo/_settings\r\n{\r\n \"archived.*\": null\r\n}\r\n````\r\n\r\nreturns: `unknown setting [index.archived.*]`\r\n\r\n",
"comments": [],
"number": 27537,
"title": "Unable to delete archived index settings"
} | {
"body": "Index settings didn't support reset by wildcard which also causes\r\nissues like #27537 where archived settings can't be reset. This change\r\nadds support for wildcards like `archived.*` to be used to reset setting to their\r\ndefaults or remove them from an index.\r\n\r\nClose #27537",
"number": 27671,
"review_comments": [
{
"body": "nit: extra space between `=` and `setting`",
"created_at": "2017-12-05T17:09:41Z"
},
{
"body": "How is it ever possible to have `isWildcard` true if `setting` must not be null here?",
"created_at": "2017-12-05T17:11:12Z"
},
{
"body": "there is a second line to this assert || (isWildcard && normalizedSettings.hasValue(key) == false) does this make it clear?",
"created_at": "2017-12-05T22:38:38Z"
},
{
"body": "Oops sorry, the inline comments confused me!",
"created_at": "2017-12-05T22:51:58Z"
}
],
"title": "Allow index settings to be reset by wildcards"
} | {
"commits": [
{
"message": "Allow index settings to be reset by wildcards\n\nIndex settings didn't support reset by wildcard which also causes\nissues like #27537 where archived settings can't be reset. This change\nadds support for wildcards like `archived.*` to be used to reset setting to their\ndefaults or remove them from an index.\n\nClose #27537"
},
{
"message": "fix nit"
}
],
"files": [
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n@@ -54,7 +55,6 @@\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n-import java.util.function.Predicate;\n \n import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext;\n \n@@ -164,13 +164,16 @@ public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request\n Settings.Builder settingsForOpenIndices = Settings.builder();\n final Set<String> skippedSettings = new HashSet<>();\n \n- indexScopedSettings.validate(normalizedSettings, false); // don't validate dependencies here we check it below\n- // never allow to change the number of shards\n+ indexScopedSettings.validate(normalizedSettings.filter(s -> Regex.isSimpleMatchPattern(s) == false /* don't validate wildcards */),\n+ false); //don't validate dependencies here we check it below never allow to change the number of shards\n for (String key : normalizedSettings.keySet()) {\n Setting setting = indexScopedSettings.get(key);\n- assert setting != null; // we already validated the normalized settings\n+ boolean isWildcard = setting == null && Regex.isSimpleMatchPattern(key);\n+ assert setting != null // we already validated the normalized settings\n+ || (isWildcard && normalizedSettings.hasValue(key) == false)\n+ : \"unknown setting: \" + key + \" isWildcard: \" + isWildcard + \" hasValue: \" + normalizedSettings.hasValue(key);\n settingsForClosedIndices.copy(key, normalizedSettings);\n- if (setting.isDynamic()) {\n+ if (isWildcard || setting.isDynamic()) {\n settingsForOpenIndices.copy(key, normalizedSettings);\n } else {\n skippedSettings.add(key);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataUpdateSettingsService.java",
"status": "modified"
},
{
"diff": "@@ -500,6 +500,16 @@ public boolean updateSettings(Settings toApply, Settings.Builder target, Setting\n return updateSettings(toApply, target, updates, type, false);\n }\n \n+ /**\n+ * Returns <code>true</code> if the given key is a valid delete key\n+ */\n+ private boolean isValidDelete(String key, boolean onlyDynamic) {\n+ return isFinalSetting(key) == false && // it's not a final setting\n+ (onlyDynamic && isDynamicSetting(key) // it's a dynamicSetting and we only do dynamic settings\n+ || get(key) == null && key.startsWith(ARCHIVED_SETTINGS_PREFIX) // the setting is not registered AND it's been archived\n+ || (onlyDynamic == false && get(key) != null)); // if it's not dynamic AND we have a key\n+ }\n+\n /**\n * Updates a target settings builder with new, updated or deleted settings from a given settings builder.\n *\n@@ -519,21 +529,16 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin\n final Predicate<String> canUpdate = (key) -> (\n isFinalSetting(key) == false && // it's not a final setting\n ((onlyDynamic == false && get(key) != null) || isDynamicSetting(key)));\n- final Predicate<String> canRemove = (key) ->(// we can delete if\n- isFinalSetting(key) == false && // it's not a final setting\n- (onlyDynamic && isDynamicSetting(key) // it's a dynamicSetting and we only do dynamic settings\n- || get(key) == null && key.startsWith(ARCHIVED_SETTINGS_PREFIX) // the setting is not registered AND it's been archived\n- || (onlyDynamic == false && get(key) != null))); // if it's not dynamic AND we have a key\n for (String key : toApply.keySet()) {\n- boolean isNull = toApply.get(key) == null;\n- if (isNull && (canRemove.test(key) || key.endsWith(\"*\"))) {\n+ boolean isDelete = toApply.hasValue(key) == false;\n+ if (isDelete && (isValidDelete(key, onlyDynamic) || key.endsWith(\"*\"))) {\n // this either accepts null values that suffice the canUpdate test OR wildcard expressions (key ends with *)\n // we don't validate if there is any dynamic setting with that prefix yet we could do in the future\n toRemove.add(key);\n // we don't set changed here it's set after we apply deletes below if something actually changed\n } else if (get(key) == null) {\n throw new IllegalArgumentException(type + \" setting [\" + key + \"], not recognized\");\n- } else if (isNull == false && canUpdate.test(key)) {\n+ } else if (isDelete == false && canUpdate.test(key)) {\n validate(key, toApply, false); // we might not have a full picture here do to a dependency validation\n settingsBuilder.copy(key, toApply);\n updates.copy(key, toApply);\n@@ -546,7 +551,7 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin\n }\n }\n }\n- changed |= applyDeletes(toRemove, target, canRemove);\n+ changed |= applyDeletes(toRemove, target, k -> isValidDelete(k, onlyDynamic));\n target.put(settingsBuilder.build());\n return changed;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
{
"diff": "@@ -306,6 +306,13 @@ public Long getAsLong(String setting, Long defaultValue) {\n }\n }\n \n+ /**\n+ * Returns <code>true</code> iff the given key has a value in this settings object\n+ */\n+ public boolean hasValue(String key) {\n+ return settings.get(key) != null;\n+ }\n+\n /**\n * We have to lazy initialize the deprecation logger as otherwise a static logger here would be constructed before logging is configured\n * leading to a runtime failure (see {@link LogConfigurator#checkErrorListener()} ). The premature construction would come from any\n@@ -1229,8 +1236,9 @@ public Builder normalizePrefix(String prefix) {\n Iterator<Map.Entry<String, Object>> iterator = map.entrySet().iterator();\n while(iterator.hasNext()) {\n Map.Entry<String, Object> entry = iterator.next();\n- if (entry.getKey().startsWith(prefix) == false) {\n- replacements.put(prefix + entry.getKey(), entry.getValue());\n+ String key = entry.getKey();\n+ if (key.startsWith(prefix) == false && key.endsWith(\"*\") == false) {\n+ replacements.put(prefix + key, entry.getValue());\n iterator.remove();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -241,10 +241,45 @@ public void testUpdateDependentIndexSettings() {\n .actionGet();\n }\n }\n+ public void testResetDefaultWithWildcard() {\n+ createIndex(\"test\");\n+\n+ client()\n+ .admin()\n+ .indices()\n+ .prepareUpdateSettings(\"test\")\n+ .setSettings(\n+ Settings.builder()\n+ .put(\"index.refresh_interval\", -1))\n+ .execute()\n+ .actionGet();\n+ IndexMetaData indexMetaData = client().admin().cluster().prepareState().execute().actionGet().getState().metaData().index(\"test\");\n+ assertEquals(indexMetaData.getSettings().get(\"index.refresh_interval\"), \"-1\");\n+ for (IndicesService service : internalCluster().getInstances(IndicesService.class)) {\n+ IndexService indexService = service.indexService(resolveIndex(\"test\"));\n+ if (indexService != null) {\n+ assertEquals(indexService.getIndexSettings().getRefreshInterval().millis(), -1);\n+ }\n+ }\n+ client()\n+ .admin()\n+ .indices()\n+ .prepareUpdateSettings(\"test\")\n+ .setSettings(Settings.builder().putNull(\"index.ref*\"))\n+ .execute()\n+ .actionGet();\n+ indexMetaData = client().admin().cluster().prepareState().execute().actionGet().getState().metaData().index(\"test\");\n+ assertNull(indexMetaData.getSettings().get(\"index.refresh_interval\"));\n+ for (IndicesService service : internalCluster().getInstances(IndicesService.class)) {\n+ IndexService indexService = service.indexService(resolveIndex(\"test\"));\n+ if (indexService != null) {\n+ assertEquals(indexService.getIndexSettings().getRefreshInterval().millis(), 1000);\n+ }\n+ }\n+ }\n \n public void testResetDefault() {\n createIndex(\"test\");\n-\n client()\n .admin()\n .indices()",
"filename": "core/src/test/java/org/elasticsearch/indices/settings/UpdateSettingsIT.java",
"status": "modified"
}
]
} |
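To make the new wildcard reset concrete, here is a minimal standalone sketch (plain Java maps rather than the Elasticsearch settings classes) of the rule the PR above introduces: a key with a null value that ends in `*` is treated as a wildcard delete:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustration only: a null value on a key ending in '*' removes every setting
// under that prefix, which is what lets "archived.*" strip archived settings.
public class WildcardSettingsResetDemo {

    static void applyUpdate(Map<String, String> settings, String key, String value) {
        boolean isDelete = value == null;
        if (isDelete && key.endsWith("*")) {
            String prefix = key.substring(0, key.length() - 1);
            Iterator<Map.Entry<String, String>> it = settings.entrySet().iterator();
            while (it.hasNext()) {
                if (it.next().getKey().startsWith(prefix)) {
                    it.remove();
                }
            }
        } else if (isDelete) {
            settings.remove(key);
        } else {
            settings.put(key, value);
        }
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        settings.put("archived.index.foo", "bar");
        settings.put("archived.index.baz", "qux");
        settings.put("index.refresh_interval", "-1");

        applyUpdate(settings, "archived.*", null);
        System.out.println(settings); // only index.refresh_interval remains
    }
}
```

In the real API this corresponds to sending `{"archived.*": null}` to the update index settings endpoint, which previously failed with `unknown setting [index.archived.*]`.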
{
"body": "flush_index hangs when no indices exist, then eventually just closes the connection \n",
"comments": [
{
"body": "Yep, its a bug. A fix is traveling on the intertubes as I type...\n",
"created_at": "2010-02-16T19:56:15Z"
},
{
"body": "flush_index hangs when no indices exist, closed by 1299f203645d1b4b72abfedc1d65991b05042361.\n",
"created_at": "2010-02-16T19:56:29Z"
}
],
"number": 19,
"title": "flush_index hangs when no indices exist"
} | {
"body": "This is a minimal change which illustrates how explicit *null*-checks in *elasticsearch* can be replaced by declarative approach:\r\n* target not-*null* method arguments are marked by a corresponding annotation (this *PR* uses *JSR-305*'s *Nonnull* but any other annotation can be configured)\r\n* the build is configured to use [Traute](http://traute.oss.harmonysoft.tech/) *javac* plugin which inserts *null*-checks into generated byte code\r\n\r\nExample: this *PR* changes [AbstractBindingBuilder](https://github.com/denis-zhdanov/elasticsearch/commit/160caec4e7dd8bc97cc9c625ba4d33848bba3211#diff-8c244abec38a563f75df9fe19c12a131) in a way to replace *Objects.requireNonNull()* by *Nonnull* annotation. Resulting byte code: \r\n\r\n```\r\njavap -c ./core/build/classes/java/main/org/elasticsearch/common/inject/internal/AbstractBindingBuilder.class\r\n\r\n...\r\nprotected org.elasticsearch.common.inject.internal.BindingImpl<T> annotatedWithInternal(java.lang.Class<? extends java.lang.annotation.Annotation>);\r\n Code:\r\n 0: aload_1\r\n 1: ifnonnull 14\r\n 4: new #12 // class java/lang/NullPointerException\r\n 7: dup\r\n 8: ldc #13 // String annotationType\r\n 10: invokespecial #14 // Method java/lang/NullPointerException.\"<init>\":(Ljava/lang/String;)V\r\n 13: athrow\r\n 14: aload_0\r\n 15: invokevirtual #15 // Method checkNotAnnotated:()V\r\n 18: aload_0\r\n 19: aload_0\r\n 20: getfield #9 // Field binding:Lorg/elasticsearch/common/inject/internal/BindingImpl;\r\n 23: aload_0\r\n 24: getfield #9 // Field binding:Lorg/elasticsearch/common/inject/internal/BindingImpl;\r\n 27: invokevirtual #16 // Method org/elasticsearch/common/inject/internal/BindingImpl.getKey:()Lorg/elasticsearch/common/inject/Key;\r\n 30: invokevirtual #17 // Method org/elasticsearch/common/inject/Key.getTypeLiteral:()Lorg/elasticsearch/common/inject/TypeLiteral;\r\n 33: aload_1\r\n 34: invokestatic #18 // Method org/elasticsearch/common/inject/Key.get:(Lorg/elasticsearch/common/inject/TypeLiteral;Ljava/lang/Class;)Lorg/elasticsearch/common/inject/Key;\r\n 37: invokevirtual #19 // Method org/elasticsearch/common/inject/internal/BindingImpl.withKey:(Lorg/elasticsearch/common/inject/Key;)Lorg/elasticsearch/common/inject/internal/BindingImpl;\r\n 40: invokevirtual #20 // Method setBinding:(Lorg/elasticsearch/common/inject/internal/BindingImpl;)Lorg/elasticsearch/common/inject/internal/BindingImpl;\r\n 43: areturn\r\n```\r\n\r\nTESTED: gradle build\r\n\r\n**Current PR is just a POC, if elasticsearch team is interested in it, I create a PR which applies this approach for the whole project's codebase**\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\n- If submitting code, is your pull request against master? 
Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\n",
"number": 27647,
"review_comments": [],
"title": "A POC for replacing explicit null checks by declarative approach"
} | {
"commits": [
{
"message": "A POC which illustrates how to replace explicit\nnull checks by declarative annotations and a javac\nplugin\n\nTESTED: gradle build"
}
],
"files": [
{
"diff": "@@ -21,6 +21,10 @@\n import com.carrotsearch.gradle.junit4.RandomizedTestingTask\n import org.elasticsearch.gradle.BuildPlugin\n \n+plugins {\n+ id \"tech.harmonysoft.oss.traute\" version \"1.0.5\"\n+}\n+\n apply plugin: 'elasticsearch.build'\n apply plugin: 'nebula.optional-base'\n apply plugin: 'nebula.maven-base-publish'\n@@ -60,6 +64,7 @@ dependencies {\n // utilities\n compile \"org.elasticsearch:elasticsearch-cli:${version}\"\n compile 'com.carrotsearch:hppc:0.7.1'\n+ compile 'com.google.code.findbugs:jsr305:1.3.9'\n \n // time handling, remove with java 8 time\n compile 'joda-time:joda-time:2.9.5'\n@@ -115,6 +120,11 @@ if (isEclipse) {\n compileJava.options.compilerArgs << \"-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked\"\n compileTestJava.options.compilerArgs << \"-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked\"\n \n+traute {\n+ javacPluginVersion = '1.0.10'\n+ exceptionTexts = [ 'parameter' : '${PARAMETER_NAME}' ]\n+}\n+\n forbiddenPatterns {\n exclude '**/*.json'\n exclude '**/*.jmx'",
"filename": "core/build.gradle",
"status": "modified"
},
{
"diff": "@@ -0,0 +1 @@\n+40719ea6961c0cb6afaeb6a921eaa1f6afd4cfdf\n\\ No newline at end of file",
"filename": "core/licenses/jsr305-1.3.9.jar.sha1",
"status": "added"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.inject.spi.Element;\n import org.elasticsearch.common.inject.spi.InstanceBinding;\n \n+import javax.annotation.Nonnull;\n import java.lang.annotation.Annotation;\n import java.util.List;\n import java.util.Objects;\n@@ -71,8 +72,7 @@ protected BindingImpl<T> setBinding(BindingImpl<T> binding) {\n /**\n * Sets the binding to a copy with the specified annotation on the bound key\n */\n- protected BindingImpl<T> annotatedWithInternal(Class<? extends Annotation> annotationType) {\n- Objects.requireNonNull(annotationType, \"annotationType\");\n+ protected BindingImpl<T> annotatedWithInternal(@Nonnull Class<? extends Annotation> annotationType) {\n checkNotAnnotated();\n return setBinding(binding.withKey(\n Key.get(this.binding.getKey().getTypeLiteral(), annotationType)));\n@@ -81,21 +81,18 @@ protected BindingImpl<T> annotatedWithInternal(Class<? extends Annotation> annot\n /**\n * Sets the binding to a copy with the specified annotation on the bound key\n */\n- protected BindingImpl<T> annotatedWithInternal(Annotation annotation) {\n- Objects.requireNonNull(annotation, \"annotation\");\n+ protected BindingImpl<T> annotatedWithInternal(@Nonnull Annotation annotation) {\n checkNotAnnotated();\n return setBinding(binding.withKey(\n Key.get(this.binding.getKey().getTypeLiteral(), annotation)));\n }\n \n- public void in(final Class<? extends Annotation> scopeAnnotation) {\n- Objects.requireNonNull(scopeAnnotation, \"scopeAnnotation\");\n+ public void in(@Nonnull final Class<? extends Annotation> scopeAnnotation) {\n checkNotScoped();\n setBinding(getBinding().withScoping(Scoping.forAnnotation(scopeAnnotation)));\n }\n \n- public void in(final Scope scope) {\n- Objects.requireNonNull(scope, \"scope\");\n+ public void in(@Nonnull final Scope scope) {\n checkNotScoped();\n setBinding(getBinding().withScoping(Scoping.forInstance(scope)));\n }\n@@ -132,4 +129,4 @@ protected void checkNotScoped() {\n binder.addError(SCOPE_ALREADY_SET);\n }\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/inject/internal/AbstractBindingBuilder.java",
"status": "modified"
}
]
} |
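For readers skimming the POC above: the javac plugin effectively generates the same guard one would otherwise write by hand at the top of the method. A hand-written equivalent for comparison (illustration only; the exact exception type and message depend on the plugin configuration):

```java
import java.util.Objects;

// Roughly what the generated bytecode check amounts to: fail fast with the
// parameter name as the message when a @Nonnull argument is null.
public class NullCheckDemo {

    static void in(Object scopeAnnotation) {
        Objects.requireNonNull(scopeAnnotation, "scopeAnnotation"); // generated-style guard
        // ... proceed knowing scopeAnnotation is non-null
    }

    public static void main(String[] args) {
        in(new Object()); // fine
        try {
            in(null);     // throws NullPointerException with message "scopeAnnotation"
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
    }
}
```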
{
"body": "\r\n**Elasticsearch version**: 5.6.2\r\n\r\n**Plugins installed**: x-pack\r\n\r\n**JVM version**: 1.8.0_144\r\n\r\n**OS version**: Linux centos7 3.10.0-327.28.3.el7.x86_64\r\n\r\n**Description**: \r\n\r\nI used custom normalizer with some char_filter and lowercase filter. And when I perfom search with sorting I see that it is actually works! But I don't see any changes in terms via _termvectors query by my normalized field.\r\n\r\nMy current index setting:\r\n```\r\nPUT tender-search\r\n{\r\n \"settings\": {\r\n \"number_of_replicas\": 0,\r\n \"index.mapping.single_type\": true,\r\n \"analysis\": {\r\n \"char_filter\": {\r\n \"garbage_filter\": {\r\n \"type\": \"pattern_replace\",\r\n \"pattern\": \"^([^\\\\p{L}\\\\d]+)(.*)\",\r\n \"replacement\": \"$2\"\r\n },\r\n \"ua_sort_filter\": {\r\n \"type\": \"mapping\",\r\n \"mappings\": [\r\n \"і => ия\",\r\n \"І => ИЯ\",\r\n \"є => ея\",\r\n \"Є => ЕЯ\",\r\n \"ґ => гя\",\r\n \"Ґ => ГЯ\"\r\n ]\r\n }\r\n },\r\n \"analyzer\": {\r\n \"default\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"default_search\": {\r\n \"type\": \"standard\"\r\n }\r\n },\r\n \"normalizer\": {\r\n \"title_clean\": {\r\n \"type\": \"custom\",\r\n \"char_filter\": [\"garbage_filter\", \"ua_sort_filter\"],\r\n \"filter\": [\"lowercase\"]\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"tenderSearch\": {\r\n \"properties\": {\r\n \"title\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\",\r\n \"fields\": {\r\n \"raw\": {\r\n \"type\": \"keyword\",\r\n \"normalizer\": \"title_clean\"\r\n }\r\n }\r\n },\r\n \"description\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\"\r\n },\r\n \"procuringEntityName\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\",\r\n \"fields\": {\r\n \"raw\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n },\r\n \"amount\": {\r\n \"type\": \"double\"\r\n },\r\n \"lots\": {\r\n \"properties\": {\r\n \"amount\": {\r\n \"type\": \"double\"\r\n },\r\n \"title\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI make _termvectors query for newely created document:\r\n\r\n```\r\nGET tender-search/tenderSearch/d9669b3b22dd4a11ae581b31a5802a13/_termvectors\r\n{\r\n \"fields\" : [\"title.raw\"],\r\n \"offsets\" : true,\r\n \"payloads\" : true,\r\n \"positions\" : true,\r\n \"term_statistics\" : true,\r\n \"field_statistics\" : true\r\n}\r\n```\r\n\r\nAnd I'm getting the response:\r\n\r\n```\r\n{\r\n \"_index\": \"tender-search\",\r\n \"_type\": \"tenderSearch\",\r\n \"_id\": \"d9669b3b22dd4a11ae581b31a5802a13\",\r\n \"_version\": 4,\r\n \"found\": true,\r\n \"took\": 0,\r\n \"term_vectors\": {\r\n \"title.raw\": {\r\n \"field_statistics\": {\r\n \"sum_doc_freq\": 30201,\r\n \"doc_count\": 30201,\r\n \"sum_ttf\": -1\r\n },\r\n \"terms\": {\r\n \"[ТЕСТУВАННЯ] Займати біда з неціновими показниками викот робак.\": {\r\n \"term_freq\": 1,\r\n \"tokens\": [\r\n {\r\n \"position\": 0,\r\n \"start_offset\": 0,\r\n \"end_offset\": 63\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nBut when I test title_clean normalizer with:\r\n\r\n```\r\nGET tender-search/_analyze \r\n{\r\n \"field\": \"title.raw\", \r\n \"text\": \"[ТЕСТУВАННЯ] Займати біда з неціновими показниками викот робак.\"\r\n}\r\n```\r\n\r\nI'm getting the response which is that I expected to see in terms above:\r\n\r\n```\r\n{\r\n \"tokens\": [\r\n {\r\n \"token\": \"тестування] займати бияда з нецияновими показниками викот робак.\",\r\n \"start_offset\": 0,\r\n \"end_offset\": 63,\r\n \"type\": \"word\",\r\n 
\"position\": 0\r\n }\r\n ]\r\n}\r\n```\r\n\r\nIt's a bug report of this issue: https://discuss.elastic.co/t/using-normalizer-for-sorting/106704/4?u=arturklb",
"comments": [],
"number": 27320,
"title": "Bug with _termvectors query on a field contains normalizer"
} | {
"body": "This change applies the normalizer defined on the field when building term vectors dynamically on a keyword field.\r\n\r\nFixes #27320 \r\n",
"number": 27608,
"review_comments": [],
"title": "Fix term vectors generator with keyword and normalizer"
} | {
"commits": [
{
"message": "Fix term vectors generator with keyword and normalizer\n\nThis change applies the normalizer defined on the field when building term vectors dynamically on a keyword field.\n\nFixes #27320"
}
],
"files": [
{
"diff": "@@ -217,7 +217,12 @@ private static Analyzer getAnalyzerAtField(IndexShard indexShard, String field,\n if (perFieldAnalyzer != null && perFieldAnalyzer.containsKey(field)) {\n analyzer = mapperService.getIndexAnalyzers().get(perFieldAnalyzer.get(field).toString());\n } else {\n- analyzer = mapperService.fullName(field).indexAnalyzer();\n+ MappedFieldType fieldType = mapperService.fullName(field);\n+ if (fieldType instanceof KeywordFieldMapper.KeywordFieldType) {\n+ analyzer = ((KeywordFieldMapper.KeywordFieldType) fieldType).normalizer();\n+ } else {\n+ analyzer = fieldType.indexAnalyzer();\n+ }\n }\n if (analyzer == null) {\n analyzer = mapperService.getIndexAnalyzers().getDefaultIndexAnalyzer();",
"filename": "core/src/main/java/org/elasticsearch/index/termvectors/TermVectorsService.java",
"status": "modified"
},
{
"diff": "@@ -1025,6 +1025,51 @@ public void testArtificialDocWithPreference() throws ExecutionException, Interru\n assertEquals(\"expected to find term statistics in exactly one shard!\", 2, sumDocFreq);\n }\n \n+ public void testWithKeywordAndNormalizer() throws IOException, ExecutionException, InterruptedException {\n+ // setup indices\n+ String[] indexNames = new String[] {\"with_tv\", \"without_tv\"};\n+ Settings.Builder builder = Settings.builder()\n+ .put(indexSettings())\n+ .put(\"index.analysis.analyzer.my_analyzer.tokenizer\", \"keyword\")\n+ .putList(\"index.analysis.analyzer.my_analyzer.filter\", \"lowercase\")\n+ .putList(\"index.analysis.normalizer.my_normalizer.filter\", \"lowercase\");\n+ assertAcked(prepareCreate(indexNames[0]).setSettings(builder.build())\n+ .addMapping(\"type1\", \"field1\", \"type=text,term_vector=with_positions_offsets,analyzer=my_analyzer\"));\n+ assertAcked(prepareCreate(indexNames[1]).setSettings(builder.build())\n+ .addMapping(\"type1\", \"field1\", \"type=keyword,normalizer=my_normalizer\"));\n+ ensureGreen();\n+\n+ // index documents with and without term vectors\n+ String[] content = new String[] { \"Hello World\", \"hello world\", \"HELLO WORLD\" };\n+\n+ List<IndexRequestBuilder> indexBuilders = new ArrayList<>();\n+ for (String indexName : indexNames) {\n+ for (int id = 0; id < content.length; id++) {\n+ indexBuilders.add(client().prepareIndex()\n+ .setIndex(indexName)\n+ .setType(\"type1\")\n+ .setId(String.valueOf(id))\n+ .setSource(\"field1\", content[id]));\n+ }\n+ }\n+ indexRandom(true, indexBuilders);\n+\n+ // request tvs and compare from each index\n+ for (int id = 0; id < content.length; id++) {\n+ Fields[] fields = new Fields[2];\n+ for (int j = 0; j < indexNames.length; j++) {\n+ TermVectorsResponse resp = client().prepareTermVector(indexNames[j], \"type1\", String.valueOf(id))\n+ .setOffsets(true)\n+ .setPositions(true)\n+ .setSelectedFields(\"field1\")\n+ .get();\n+ assertThat(\"doc with index: \" + indexNames[j] + \", type1 and id: \" + id, resp.isExists(), equalTo(true));\n+ fields[j] = resp.getFields();\n+ }\n+ compareTermVectors(\"field1\", fields[0], fields[1]);\n+ }\n+ }\n+\n private void checkBestTerms(Terms terms, List<String> expectedTerms) throws IOException {\n final TermsEnum termsEnum = terms.iterator();\n List<String> bestTerms = new ArrayList<>();",
"filename": "core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsIT.java",
"status": "modified"
}
]
} |
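For readers following the fix above, here is a minimal, self-contained Java sketch of the analyzer-selection rule it introduces, using hypothetical simplified stand-in types (`Analyzer`, `MappedFieldType`, `KeywordFieldType`) rather than the actual Elasticsearch classes: when term vectors are rebuilt on the fly for a keyword field, the field's normalizer is the analysis chain to replay, since keyword fields are not analyzed with an index analyzer.

```java
// Hypothetical, simplified stand-ins -- not the actual Elasticsearch classes.
interface Analyzer {
    String name();
}

class MappedFieldType {
    private final Analyzer indexAnalyzer;
    MappedFieldType(Analyzer indexAnalyzer) { this.indexAnalyzer = indexAnalyzer; }
    Analyzer indexAnalyzer() { return indexAnalyzer; }
}

class KeywordFieldType extends MappedFieldType {
    private final Analyzer normalizer;
    KeywordFieldType(Analyzer normalizer) {
        super(null); // keyword fields have no index analyzer to replay
        this.normalizer = normalizer;
    }
    Analyzer normalizer() { return normalizer; }
}

class TermVectorsAnalyzerSelection {
    // Pick the analyzer used to rebuild term vectors on the fly: the normalizer for
    // keyword fields, the index analyzer otherwise, falling back to the default analyzer.
    static Analyzer analyzerForTermVectors(MappedFieldType fieldType, Analyzer defaultAnalyzer) {
        final Analyzer analyzer;
        if (fieldType instanceof KeywordFieldType) {
            analyzer = ((KeywordFieldType) fieldType).normalizer();
        } else {
            analyzer = fieldType.indexAnalyzer();
        }
        return analyzer != null ? analyzer : defaultAnalyzer;
    }
}
```

With a rule like this in place, the dynamically generated terms match what was actually indexed (char filters plus lowercase in the reporter's `title_clean` example) instead of the raw source value.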
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.0.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen the cat API is used on APIs that support URL parameters like index names, then calling those endpoints with `&h` to get help results in an error\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\n# works\r\nGET _cat/shards?help\r\nGET _cat/shards/test\r\n# returns an exception\r\nGET _cat/shards/test?help\r\n```",
"comments": [
{
"body": "I would like to work on this.",
"created_at": "2017-11-23T05:51:48Z"
},
{
"body": "hey @spinscale \r\nI would like to have a go at this. I figured I will ask before doing the work, as the issue does not have an `adoptme` tag.\r\nThanks!",
"created_at": "2017-11-23T06:29:07Z"
},
{
"body": "@jyoti0208 oh crap! I just now saw your earlier comment ( browser did not refresh properly ). Sorry for jumping up the queue :)",
"created_at": "2017-11-23T06:58:56Z"
},
{
"body": "feel free to go ahead! Thanks a lot for working on this!",
"created_at": "2017-11-23T09:33:24Z"
},
{
"body": "I'm getting some broken cat urls when i use \r\n\r\n`http://thecatapi.com/api/images/get?format=xml&results_per_page=20`",
"created_at": "2018-01-23T22:22:41Z"
},
{
"body": "This was explored in #27598 and I do not think that there is much that we [should](https://github.com/elastic/elasticsearch/pull/27598#pullrequestreview-80061309) do here. The behavior would be odd if the index does not exist, or odd if there are unrecognized parameters in the request (do we ignore them? do we throw 404s? do we fail on the unrecognized parameters?). I think we should leave this as-is.",
"created_at": "2018-01-30T11:25:53Z"
},
{
"body": "@jasontedor what about adding a short note in the docs that `help` should not be used with any other url params?",
"created_at": "2018-01-30T19:10:49Z"
},
{
"body": "@olcbean That would be good.",
"created_at": "2018-01-30T19:14:02Z"
}
],
"number": 27424,
"title": "cat API help is broken when url parameters are specified"
} | {
"body": "The `_cat` API was throwing IllegalArgumentException if url parameters were provided\r\n```\r\n# worked\r\nGET _cat/shards?help\r\nGET _cat/shards/test_index\r\n# returned an exception\r\nGET _cat/shards/test_index?help\r\n```\r\nFixes #27424",
"number": 27598,
"review_comments": [
{
"body": "Shouldn't this only be consuming the `help` key? Any random other params should still cause an error.",
"created_at": "2017-11-29T21:24:54Z"
},
{
"body": "As @rjernst says, this is incredibly broken behavior. It invalidates all the work we did to reject unrecognized URL parameters, now it accepts any parameters including garbage parameters.",
"created_at": "2017-11-30T02:20:25Z"
}
],
"title": "Fix cat API help with url parameters"
} | {
"commits": [
{
"message": "_cat if help is requested, mark all params as consumed"
}
],
"files": [
{
"diff": "@@ -54,6 +54,8 @@ public AbstractCatAction(Settings settings) {\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n boolean helpWanted = request.paramAsBoolean(\"help\", false);\n if (helpWanted) {\n+ // consume all request parameters for 'help' to prevent an IllegalArgumentException\n+ request.params().keySet().forEach(request::param);\n return channel -> {\n Table table = getTableWithHeader(request);\n int[] width = buildHelpWidths(table, request);",
"filename": "core/src/main/java/org/elasticsearch/rest/action/cat/AbstractCatAction.java",
"status": "modified"
},
{
"diff": "@@ -257,3 +257,21 @@\n /^ foo \\s+ 0\\n\n bar \\s+ 1\\n\n $/\n+\n+---\n+\"Test cat shards help with params\":\n+ - skip:\n+ version: \" - 6.2.99\"\n+ reason: until 6.3.0 '/_cat/shards/index?help' was throwing IllegalArgumentException\n+ - do:\n+ indices.create:\n+ index: index\n+ body:\n+ settings:\n+ number_of_shards: 2\n+ number_of_replicas: 0\n+\n+ - do:\n+ cat.shards:\n+ index: index\n+ help: true",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.shards/10_basic.yml",
"status": "modified"
}
]
} |
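As a rough illustration of the approach this PR took (and which the reviewers ultimately pushed back on), the sketch below uses a hypothetical `FakeRestRequest` that tracks consumed parameters the way a REST request tracker might: when `help` is requested, every remaining parameter is read once so that an unrecognized-parameter check no longer fires for path parameters such as the index name.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for a REST request that tracks which parameters were read ("consumed").
class FakeRestRequest {
    private final Map<String, String> params;
    private final Set<String> consumed = new HashSet<>();

    FakeRestRequest(Map<String, String> params) { this.params = new HashMap<>(params); }

    String param(String key) { consumed.add(key); return params.get(key); }

    boolean paramAsBoolean(String key, boolean defaultValue) {
        String value = param(key);
        return value == null ? defaultValue : Boolean.parseBoolean(value);
    }

    Map<String, String> params() { return params; }

    Set<String> unconsumedParams() {
        Set<String> remaining = new HashSet<>(params.keySet());
        remaining.removeAll(consumed);
        return remaining; // a non-empty result is what would trigger the "unrecognized parameter" error
    }
}

public class CatHelpSketch {
    public static void main(String[] args) {
        FakeRestRequest request = new FakeRestRequest(Map.of("help", "true", "index", "test"));
        if (request.paramAsBoolean("help", false)) {
            // Mark every parameter as consumed so path parameters such as the index name
            // do not count as "unrecognized" when only the help table is rendered.
            request.params().keySet().forEach(request::param);
        }
        System.out.println("unconsumed: " + request.unconsumedParams()); // prints: unconsumed: []
    }
}
```

The review comments above explain the trade-off: blanket consumption also silences genuinely invalid parameters, which is why the behavior was ultimately left as-is and only documented.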
{
"body": "I have a client node installed via apt-get install and configured as such:\r\n\r\n```\r\nnode.master: false\r\nnode.data: false\r\nnode.ingest: false\r\n```\r\n\r\nWhen I try to run it (`sudo service elasticsearch start`) I'm getting the following exception in the logs:\r\n\r\n```\r\n[2017-11-28T19:36:06,220][ERROR][o.e.b.Bootstrap ] Exception\r\njava.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data)\r\n at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:70) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:287) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:242) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.configure(Security.java:120) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:207) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:322) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:130) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:121) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:69) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.0.0.jar:6.0.0]\r\nCaused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data\r\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_151]\r\n at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:401) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n ... 
12 more\r\n[2017-11-28T19:36:06,247][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [my-cluster-client000000] uncaught exception in thread [main]\r\norg.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data)\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:134) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:121) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:69) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.0.0.jar:6.0.0]\r\nCaused by: java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data)\r\n at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:70) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:287) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:242) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.configure(Security.java:120) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:207) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:322) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:130) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n ... 
6 more\r\nCaused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data\r\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_151]\r\n at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_151]\r\n at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_151]\r\n at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:401) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:287) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:242) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Security.configure(Security.java:120) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:207) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:322) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:130) ~[elasticsearch-6.0.0.jar:6.0.0]\r\n ... 6 more\r\n```\r\n\r\nWhy is `path.data` being used? and why hasn't the installer handle permissions for this folder (`/usr/share/elasticsearch` is set as root:root)?\r\n\r\nEasily reproducible using https://github.com/synhershko/elasticsearch-cloud-deploy/tree/es6 (I hit this with Azure but shouldn't matter). Note the installation scripts in ./packer and the userdata script in ./templates",
"comments": [
{
"body": "Hi @synhershko! Your question is better suited to the support forums over at https://discuss.elastic.co. We prefer to use Github issues only for bug reports and feature requests, and we think it's more likely this is a question (about a nonstandard installer) than a bug report. I'm closing this as an issue here, but encourage you to ask this question in the forum instead. Many thanks.",
"created_at": "2017-11-28T21:26:34Z"
},
{
"body": "If you actually read the issue you'd see this is actually happening with the default installer provided and released by you.",
"created_at": "2017-11-28T21:30:46Z"
},
{
"body": "And also, a client-only node shouldn't be checking on the data path anyway. If that's not a bug I don't know what is.",
"created_at": "2017-11-28T21:31:29Z"
},
{
"body": "This is a \"bug\", of sorts. When configuring the security policy, we do not look at any settings to determine if the path may or may not be used. I'm going to reopen this, but know that this is very low priority.",
"created_at": "2017-11-28T21:32:13Z"
},
{
"body": "@rjernst if you can point me at the right file in the code base I'll be happy to adopt.",
"created_at": "2017-11-28T21:33:20Z"
},
{
"body": "My apologies @synhershko, I misunderstood the origin of the installer.",
"created_at": "2017-11-28T21:34:36Z"
},
{
"body": "@synhershko `core/src/main/java/org/elasticsearch/bootstrap/Security.java`. The `addFilePermissions` method.",
"created_at": "2017-11-28T21:35:00Z"
},
{
"body": "> If you actually read the issue you'd see this is actually happening with the default installer provided and released by you.\r\n\r\n> If that's not a bug I don't know what is.\r\n\r\nThere is a bug here and the issue was closed prematurely which we will work on our end, but this kind of tone is still not acceptable. We are collaborators here, not adversaries.",
"created_at": "2017-11-28T21:41:52Z"
},
{
"body": "For now you can work around this by simply creating that directory with permissions for the Elasticsearch user.",
"created_at": "2017-11-28T21:43:24Z"
},
{
"body": "> There is a bug here and the issue was closed prematurely which we will work on our end, but this kind of tone is still not acceptable. We are collaborators here, not adversaries.\r\n\r\nYou are right @jasontedor , and my apologies for that @DaveCTurner.\r\n\r\nI'm going to look at that file and see if I can send a PR to fix it. I'd rather avoid creating that folder because I'm using automation scripts and those tend to get too specific too fast otherwise.\r\n",
"created_at": "2017-11-28T22:01:03Z"
},
{
"body": "@synhershko I am not sure if I want that file to become node-type aware so I don’t think there’s an obvious fix at the moment. That might be the fix we have to go with but I want to see if we have other options first. \r\n\r\nAnother workaround is to set `path.data` to `/var/lib/elasticsearch` or another directory like that.",
"created_at": "2017-11-28T22:14:40Z"
},
{
"body": "> Another workaround is to set `path.data` to `/var/lib/elasticsearch` or another directory like that.\r\n\r\nTo be clear here: this is what the default packaging does and it ensures that directory exists.",
"created_at": "2017-11-28T22:21:25Z"
},
{
"body": "Okay, the fix is rather trivial other than the concern you raised:\r\n\r\n```java\r\n // only need to access default/configured data paths if we are running on a data node\r\n if (Node.NODE_DATA_SETTING.get(environment.settings())) {\r\n if (environment.sharedDataFile() != null) {\r\n addDirectoryPath(policy, Environment.PATH_SHARED_DATA_SETTING.getKey(), environment.sharedDataFile(),\r\n \"read,readlink,write,delete\");\r\n }\r\n final Set<Path> dataFilesPaths = new HashSet<>();\r\n for (Path path : environment.dataFiles()) {\r\n addDirectoryPath(policy, Environment.PATH_DATA_SETTING.getKey(), path, \"read,readlink,write,delete\");\r\n /*\r\n * We have to do this after adding the path because a side effect of that is that the directory is created; the Path#toRealPath\r\n * invocation will fail if the directory does not already exist. We use Path#toRealPath to follow symlinks and handle issues\r\n * like unicode normalization or case-insensitivity on some filesystems (e.g., the case-insensitive variant of HFS+ on macOS).\r\n */\r\n try {\r\n final Path realPath = path.toRealPath();\r\n if (!dataFilesPaths.add(realPath)) {\r\n throw new IllegalStateException(\"path [\" + realPath + \"] is duplicated by [\" + path + \"]\");\r\n }\r\n } catch (final IOException e) {\r\n throw new IllegalStateException(\"unable to access [\" + path + \"]\", e);\r\n }\r\n }\r\n for (Path path : environment.repoFiles()) {\r\n addDirectoryPath(policy, Environment.PATH_REPO_SETTING.getKey(), path, \"read,readlink,write,delete\");\r\n }\r\n }\r\n```\r\n\r\nIMO this makes perfect sense - you only want to perform certain operations and throw exceptions on failure of them if the setup is right for it. I'm not sure why this wasn't happening in 5.x though - perhaps something in the installer changed? `/usr/share/elasticsearch` currently is owned by root:root.\r\n\r\nPlease let me know if I should submit a PR for it.",
"created_at": "2017-11-28T22:22:10Z"
},
{
"body": "> To be clear here: this is what the default packaging does and it ensures that directory exists.\r\n\r\n@jasontedor How come 6.0 (using the deb installer) looks at `/usr/share/elasticsearch` ?",
"created_at": "2017-11-29T06:33:04Z"
},
{
"body": "> How come 6.0 (using the deb installer) looks at `/usr/share/elasticsearch` ?\r\n\r\nIt looks at `path.data` which is packaged to be `/var/lib/elasticsearch`. You have overridden this to not be set so it falls back to the default which is `ES_HOME/data`. Here, `ES_HOME` is `/usr/share/elasticsearch`. ",
"created_at": "2017-11-29T09:51:05Z"
},
{
"body": "Ok, so that is the change between 5.x and 6.x, thanks. Was that made on\npurpose?\n\n--\n\nItamar Syn-Hershko\nFreelance Developer & Consultant\nElasticsearch Partner\nMicrosoft MVP | Lucene.NET PMC\nhttp://code972.com | @synhershko <https://twitter.com/synhershko>\nhttp://BigDataBoutique.co.il/\n\nOn Wed, Nov 29, 2017 at 11:51 AM, Jason Tedor <notifications@github.com>\nwrote:\n\n> How come 6.0 (using the deb installer) looks at /usr/share/elasticsearch ?\n>\n> It looks at path.data which is packaged to be /var/lib/elasticsearch. You\n> have overridden this to not be set so it falls back to the default which is\n> ES_HOME/data. Here, ES_HOME is /usr/share/elasticsearch.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/27572#issuecomment-347808037>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AAM9HIxvlYZqI9eMzWpJN5M82stLxDaTks5s7Sk8gaJpZM4Qt1sl>\n> .\n>\n",
"created_at": "2017-11-29T10:09:08Z"
},
{
"body": "Indirectly, yes. I think the relevant change is the removal of default settings (e.g., `default.path.data`, see #25408). The point is that in the past if you did not set `path.data` then we would fall back to `default.path.data`. The service scripts set `default.path.data` to `/var/lib/elasticsearch` which again the packages ensure exists so there's no problem when `path.data` is not set. Now when `path.data` is not set we fall back to `ES_HOME/data` instead which is different than the previous fallback.\r\n\r\n> Was that made on purpose?\r\n\r\nSince it's not clear to me we are talking about the same change until I clarified which change it is, I can't answer this question. What are you asking?",
"created_at": "2017-11-29T12:41:14Z"
},
{
"body": "@synhershko The fix you have proposed is not sufficient, it does not cover the case of master nodes. It also does not cover the case of coordinating-only nodes that do persist cluster state (`node.local_storage` defaults to true). And anyway, it makes this class node-aware which I do not want to do. I think I have a solution that I am happy with. I will open a PR later today.",
"created_at": "2017-11-29T15:06:35Z"
},
{
"body": "By the way, there is actually no bug in *your* case because you have not set `node.local_storage` to false (defaults to true). There is a bug in the case that `node.local_storage` is false. In this case we should not be trying to access `path.data` but I would go further and say that we should reject the settings if `node.local_storage` is false and `path.data` is set.",
"created_at": "2017-11-29T15:23:53Z"
},
{
"body": "I opened #27587.",
"created_at": "2017-11-29T15:30:14Z"
},
{
"body": "> By the way, there is actually no bug in your case because you have not set node.local_storage to false (defaults to true).\r\n\r\nSorry this should have been a comment here - I do want `node.local_storage` to be set to true on my client-only nodes, to persist cluster state (because, why not).",
"created_at": "2017-11-29T17:52:22Z"
},
{
"body": "> Sorry this should have been a comment here - I do want `node.local_storage` to be set to true on my client-only nodes, to persist cluster state (because, why not).\r\n\r\nRight, and like I say: there's no bug here for you then because we are going to ensure that `path.data` exists and we have permissions for them in this case then. You have a broken configuration, you need to set `path.data` to `/var/lib/elasticsearch` (as the packaging defaults to and ensure exists) or somewhere else where you are comfortable ensuring Elasticsearch can store data.\r\n\r\nThe only bug is when `node.local_storage` is false and `node.master` and `node.data` are false. That's addressed by #27587 now.",
"created_at": "2017-11-29T18:57:04Z"
}
],
"number": 27572,
"title": "Security policy always uses 'path.data' even when node.data:false"
} | {
"body": "Today when configuring the data paths for the environment, we set data paths to either the specified path.data or default to data relative to the Elasticsearch home. Yet if node.local_storage is false, data paths do not even make sense. In this case, we should reject if path.data is set, and instead of defaulting data paths to data relative to home, we should set this to empty paths. This commit does this.\r\n\r\nCloses #27572\r\n",
"number": 27587,
"review_comments": [
{
"body": "this seems too aggressive no? if no local storage is required maybe it's just better to ignore paths altogether. In scripted deployments this could be kind of a frustrating gotcha in my experience. Maybe it's better to just keep or if it affects methods downstream maybe just set to empty and log a warning instead?",
"created_at": "2017-11-29T17:03:46Z"
},
{
"body": "We simply abhor allowing this kind of confusion. If a user is setting `path.data` and `node.local_storage` to false then there is a disconnect because one setting is saying store data here and the other is saying do not store data. Rather, we it to be clear: no local storage is required, so where to store data should not be set.",
"created_at": "2017-11-29T17:09:58Z"
},
{
"body": "> In scripted deployments this could be kind of a frustrating gotcha in my experience. \r\n\r\nI don't understand this comment? In a scripted environment a user would already be scripting to set `node.local_storage` to `false` (it's not the default) so why wouldn't they also script to ensure `path.data` is not set if that's what we are going to require. Fix the script once and then it's good?",
"created_at": "2017-11-29T17:11:59Z"
},
{
"body": "Isn't `node.local_storage` required / beneficial on client-only nodes for maintaining cluster state?",
"created_at": "2017-11-29T17:41:19Z"
},
{
"body": "Yes, but that does not negate either of my points.",
"created_at": "2017-11-29T17:48:04Z"
}
],
"title": "Do not set data paths on no local storage required"
} | {
"commits": [
{
"message": "Do not set data paths on no local storage required\n\nToday when configuring the data paths for the environment, we set data\npaths to either the specified path.data or default to data relative to\nthe Elasticsearch home. Yet if node.local_storage is false, data paths\ndo not even make sense. In this case, we should reject if path.data is\nset, and instead of defaulting data paths to data relative to home, we\nshould set this to empty paths. This commit does this."
},
{
"message": "Fix test"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.env;\n \n import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Setting;\n@@ -45,6 +46,9 @@\n // TODO: move PathUtils to be package-private here instead of\n // public+forbidden api!\n public class Environment {\n+\n+ private static final Path[] EMPTY_PATH_ARRAY = new Path[0];\n+\n public static final Setting<String> PATH_HOME_SETTING = Setting.simpleString(\"path.home\", Property.NodeScope);\n public static final Setting<List<String>> PATH_DATA_SETTING =\n Setting.listSetting(\"path.data\", Collections.emptyList(), Function.identity(), Property.NodeScope);\n@@ -103,30 +107,39 @@ public Environment(final Settings settings, final Path configPath) {\n \n List<String> dataPaths = PATH_DATA_SETTING.get(settings);\n final ClusterName clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings);\n- if (dataPaths.isEmpty() == false) {\n- dataFiles = new Path[dataPaths.size()];\n- dataWithClusterFiles = new Path[dataPaths.size()];\n- for (int i = 0; i < dataPaths.size(); i++) {\n- dataFiles[i] = PathUtils.get(dataPaths.get(i));\n- dataWithClusterFiles[i] = dataFiles[i].resolve(clusterName.value());\n+ if (DiscoveryNode.nodeRequiresLocalStorage(settings)) {\n+ if (dataPaths.isEmpty() == false) {\n+ dataFiles = new Path[dataPaths.size()];\n+ dataWithClusterFiles = new Path[dataPaths.size()];\n+ for (int i = 0; i < dataPaths.size(); i++) {\n+ dataFiles[i] = PathUtils.get(dataPaths.get(i));\n+ dataWithClusterFiles[i] = dataFiles[i].resolve(clusterName.value());\n+ }\n+ } else {\n+ dataFiles = new Path[]{homeFile.resolve(\"data\")};\n+ dataWithClusterFiles = new Path[]{homeFile.resolve(\"data\").resolve(clusterName.value())};\n }\n } else {\n- dataFiles = new Path[]{homeFile.resolve(\"data\")};\n- dataWithClusterFiles = new Path[]{homeFile.resolve(\"data\").resolve(clusterName.value())};\n+ if (dataPaths.isEmpty()) {\n+ dataFiles = dataWithClusterFiles = EMPTY_PATH_ARRAY;\n+ } else {\n+ final String paths = String.join(\",\", dataPaths);\n+ throw new IllegalStateException(\"node does not require local storage yet path.data is set to [\" + paths + \"]\");\n+ }\n }\n if (PATH_SHARED_DATA_SETTING.exists(settings)) {\n sharedDataFile = PathUtils.get(PATH_SHARED_DATA_SETTING.get(settings)).normalize();\n } else {\n sharedDataFile = null;\n }\n List<String> repoPaths = PATH_REPO_SETTING.get(settings);\n- if (repoPaths.isEmpty() == false) {\n+ if (repoPaths.isEmpty()) {\n+ repoFiles = EMPTY_PATH_ARRAY;\n+ } else {\n repoFiles = new Path[repoPaths.size()];\n for (int i = 0; i < repoPaths.size(); i++) {\n repoFiles[i] = PathUtils.get(repoPaths.get(i));\n }\n- } else {\n- repoFiles = new Path[0];\n }\n \n // this is trappy, Setting#get(Settings) will get a fallback setting yet return false for Settings#exists(Settings)",
"filename": "core/src/main/java/org/elasticsearch/env/Environment.java",
"status": "modified"
},
{
"diff": "@@ -28,7 +28,10 @@\n import static org.hamcrest.CoreMatchers.endsWith;\n import static org.hamcrest.CoreMatchers.notNullValue;\n import static org.hamcrest.CoreMatchers.nullValue;\n+import static org.hamcrest.Matchers.arrayWithSize;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasToString;\n \n /**\n * Simple unit-tests for Environment.java\n@@ -115,4 +118,32 @@ public void testConfigPathWhenNotSet() {\n assertThat(environment.configFile(), equalTo(pathHome.resolve(\"config\")));\n }\n \n+ public void testNodeDoesNotRequireLocalStorage() {\n+ final Path pathHome = createTempDir().toAbsolutePath();\n+ final Settings settings =\n+ Settings.builder()\n+ .put(\"path.home\", pathHome)\n+ .put(\"node.local_storage\", false)\n+ .put(\"node.master\", false)\n+ .put(\"node.data\", false)\n+ .build();\n+ final Environment environment = new Environment(settings, null);\n+ assertThat(environment.dataFiles(), arrayWithSize(0));\n+ }\n+\n+ public void testNodeDoesNotRequireLocalStorageButHasPathData() {\n+ final Path pathHome = createTempDir().toAbsolutePath();\n+ final Path pathData = pathHome.resolve(\"data\");\n+ final Settings settings =\n+ Settings.builder()\n+ .put(\"path.home\", pathHome)\n+ .put(\"path.data\", pathData)\n+ .put(\"node.local_storage\", false)\n+ .put(\"node.master\", false)\n+ .put(\"node.data\", false)\n+ .build();\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, () -> new Environment(settings, null));\n+ assertThat(e, hasToString(containsString(\"node does not require local storage yet path.data is set to [\" + pathData + \"]\")));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/env/EnvironmentTests.java",
"status": "modified"
},
{
"diff": "@@ -397,14 +397,14 @@ public void testCustomDataPaths() throws Exception {\n }\n \n public void testPersistentNodeId() throws IOException {\n- String[] paths = tmpPaths();\n- NodeEnvironment env = newNodeEnvironment(paths, Settings.builder()\n+ NodeEnvironment env = newNodeEnvironment(new String[0], Settings.builder()\n .put(\"node.local_storage\", false)\n .put(\"node.master\", false)\n .put(\"node.data\", false)\n .build());\n String nodeID = env.nodeId();\n env.close();\n+ final String[] paths = tmpPaths();\n env = newNodeEnvironment(paths, Settings.EMPTY);\n assertThat(\"previous node didn't have local storage enabled, id should change\", env.nodeId(), not(equalTo(nodeID)));\n nodeID = env.nodeId();",
"filename": "core/src/test/java/org/elasticsearch/env/NodeEnvironmentTests.java",
"status": "modified"
}
]
} |
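A minimal sketch of the data-path rule this PR converges on, using a hypothetical `DataPathResolver` helper rather than the real `Environment` constructor: nodes that require local storage resolve `path.data` (or fall back to a `data` directory under the home path), while nodes that do not require local storage get no data paths at all and are rejected if `path.data` is explicitly set.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical helper mirroring the rule from the fix: resolve data paths only
// when the node requires local storage; otherwise reject an explicit path.data.
final class DataPathResolver {

    static Path[] resolveDataPaths(boolean nodeRequiresLocalStorage, List<String> configuredDataPaths, Path homeDir) {
        if (nodeRequiresLocalStorage) {
            if (configuredDataPaths.isEmpty()) {
                // fallback that surprised users of the packages: ES_HOME/data when path.data is unset
                return new Path[] { homeDir.resolve("data") };
            }
            return configuredDataPaths.stream().map(Paths::get).toArray(Path[]::new);
        }
        if (configuredDataPaths.isEmpty() == false) {
            throw new IllegalStateException(
                "node does not require local storage yet path.data is set to " + configuredDataPaths);
        }
        return new Path[0]; // no local storage required: no data paths, nothing for the security policy to touch
    }

    public static void main(String[] args) {
        // node with node.local_storage=false, node.master=false, node.data=false and no path.data: empty data paths
        Path[] none = resolveDataPaths(false, List.of(), Paths.get("/usr/share/elasticsearch"));
        System.out.println(none.length); // 0
    }
}
```

Note that the fallback to `ES_HOME/data` remains for nodes that do require local storage, which is why the thread above recommends setting `path.data` to `/var/lib/elasticsearch` (the packaging default) rather than leaving it unset.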
{
"body": "During a rolling upgrade shards on 6.0 nodes should be able to work with operations with no sequence numbers when the primary is still on a 5.6 node. After the primary is on a 6.0 node, it will start generating sequence numbers for all shard copies. To simplify the number of edge cases we have to deal with we have designed the code to only make the transition in one direction - i.e., once a shard moves to the sequence numbers universe, it will never go back to operated in a non sequence numbers universe. We also added protections in place to verify it doesn't happen.\r\n\r\nSadly, a rolling upgrade can lead to a situation that triggers those protections and fail the recoveries. Here is a typical exception line:\r\n\r\n> Recovery failed from {bos1-es3}{JR9qqMjEQF6c22eGZqIAcw}{izoNnZpWTsyt5_xxymVDbg}{192.168.8.152}{192.168.8.152:9300}{ml.max_open_jobs=10, ml.enabled=true} into {bos1-es1}{tBZdi1W_TRa16getb8eNlA}{picaUSZyS7K34rTVpJvu1Q}{192.168.8.150}{192.168.8.150:9300}{ml.max_open_jobs=10, ml.enabled=true}]; nested: RemoteTransportException[[bos1-es3][192.168.8.152:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[2] phase2 failed]; nested: RemoteTransportException[[bos1-es1][192.168.8.150:9300][internal:index/shard/recovery/translog_ops]]; nested: TranslogException[Failed to write operation [Index{id='AV_HS9TBPU9Vhf7shs_U', type='px-web-server', seqNo=-2, primaryTerm=0}]]; nested: IllegalArgumentException[sequence number must be assigned];\r\n\r\nThis can happen in the following scenario:\r\n1) A replica is on a 6.0 node while the primary is on 5.6 while indexing is going on.\r\n2) The replica inserts operations with no seq# to it's translog\r\n3) The primary goes down (planned or not) and the replica becomes a new primary.\r\n4) The primary comes back (or another shard) and starts recovery. While recovery is ongoing, indexing operations come in.\r\n5) The recovering replica opens up its engine and processes the new indexing operations, switching to seq# mode.\r\n6) The old operations from the translogs are replayed and since they have no seq #s in them the assumptions are violated and the recovery failed. \r\n\r\nThis problem is made worse by the fact that we now ship all the translogs to create a replica that adheres to the translog retention policy.\r\n\r\nTo work around it, people can reduce their translog retention policy to 0 (`index.translog.retention.size`) and then flush the primary shard. This should clean up the old ops from the translog. After that (and setting `index.translog.retention.size` back to null) you can run a reroute command with retry_failed flag : `POST /_cluster/reroute?retry_failed=true`. \r\n\r\nI'm still evaluating possible solutions. Based on the complexity we can decide which version the solution should go to. It's also an open question why our tests didn't catch this. ",
"comments": [
{
"body": "@bleskes If `index.translog.retention.size` is set to null, what's the significance, how is it different from `0`? Defaults to 512mb?",
"created_at": "2017-11-28T20:35:48Z"
},
{
"body": "@archanid setting to null unsets it and restores the default, which is indeed in this case 512mb. Some parts of the system sometimes treat unset as different than a fixed value but not in this case.",
"created_at": "2017-11-28T21:02:55Z"
},
{
"body": "This has been future ported and is all fixed,",
"created_at": "2017-12-05T14:43:56Z"
},
{
"body": "For anyone landing here from a search who's seeking concrete workaround steps for indexes affected by this issue, check out the thread at https://discuss.elastic.co/t/replica-shard-is-in-unallocated-state-after-upgrade-to-6-0-from-5-6-0",
"created_at": "2017-12-05T15:52:10Z"
}
],
"number": 27536,
"title": "Translog Operations with no sequence numbers fail recoveries after a rolling upgrading to 6.0"
} | {
"body": "During a recovery the target shard may process both new indexing operations and old ones concurrently. When the primary is on a 6.0 node, the new indexing operations are guaranteed to have sequence numbers but we don't have that guarantee for the old operations as they may come from a period when the primary was on a pre 6.0 node. Have this mixture of old and new is something we do not support and it triggers exceptions.\r\n\r\nThis PR adds a flush on primary promotion and primary relocations to make sure that any recoveries from a primary on a 6.0 will be guaranteed to only need operations with sequence numbers. A recovery from store already flushes when we start the engine if there were any ops in the translog.\r\n\r\nWith this extra flushes in place we can now actively filter out operations that have no sequence numbers during recovery. Since filtering out operations is risky, I have opted to harden the logic in the recovery source handler to verify that all operations in the required sequence number range (from the local checkpoint in the commit onwards) are not missed. This comes at an extra complexity for this PR but I think it's worth it.\r\n\r\nFinally I added two tests that reproduce the problems.\r\n\r\nCloses #27536 \r\n\r\nPS. I still need to add unit tests but since there's some time pressure on this one I think we can start reviewing.\r\n",
"number": 27580,
"review_comments": [
{
"body": "I would instead add the following two assertions after the `if (isSequenceNumberBasedRecoveryPossible) { ... } else { ... }` as that's what we actually want to hold for both branches:\r\n```\r\nassert startingSeqNo >= 0;\r\nassert requiredSeqNoRangeStart >= start;\r\n```",
"created_at": "2017-11-29T12:30:08Z"
},
{
"body": "I don't like the name of this method. I would prefer to not have a separate method, and just have:\r\n\r\n```\r\nfinal long endingSeqNo = shard.seqNoStats().getMaxSeqNo();\r\ncancellableThreads.execute(() -> shard.waitForOpsToComplete(endingSeqNo));\r\n```\r\n\r\nand then use `endingSeqNo` as an inclusive bound instead of an exclusive one in the remaining calculations.",
"created_at": "2017-11-29T12:52:28Z"
},
{
"body": "here it's defined as inclusive, below in the Javadocs of finalizeRecovery, it's defined as exclusive...\r\nLet's use the inclusive version.",
"created_at": "2017-11-29T12:56:28Z"
},
{
"body": "If `endingSeqNo` is exclusive, then this should probably be `endingSeqNo - 1`. I know that it does not really matter as we only use the `markSeqNoAsCompleted` method, we might as well initialize this to \r\n`new LocalCheckpointTracker(requiredSeqNoRangeStart - 1, requiredSeqNoRangeStart - 1)`, but yeah, let's use inclusive bounds for `endingSeqNo` ;-)",
"created_at": "2017-11-29T13:09:54Z"
},
{
"body": "again having to use -1 here, easier to have endingSeqNo be inclusive.",
"created_at": "2017-11-29T13:11:27Z"
},
{
"body": "why do we need to wait here? Why not directly in the corresponding tests? This makes for example the RecoveryIT test dependent on a specific yml file, which is ugly IMO.",
"created_at": "2017-11-29T13:18:14Z"
},
{
"body": "I misread, ignore this comment.",
"created_at": "2017-11-29T14:26:33Z"
},
{
"body": "sure",
"created_at": "2017-11-29T14:43:29Z"
},
{
"body": "nit: introduce -> introduce*d*",
"created_at": "2017-11-29T15:38:01Z"
},
{
"body": "nit: we may need to reformat this javadocs",
"created_at": "2017-11-29T15:40:06Z"
},
{
"body": "nit: we may need to reformat this javadocs",
"created_at": "2017-11-29T15:40:28Z"
},
{
"body": "covered -> cover",
"created_at": "2017-11-29T15:52:17Z"
},
{
"body": "It can start from `requiredStartingSeqNo - 1`?",
"created_at": "2017-11-29T15:55:02Z"
},
{
"body": "is this proper enough now?",
"created_at": "2017-11-29T15:55:17Z"
},
{
"body": "I think this does not verify the right thing. You check that it's never called with the arguments null and 0, whereas you want to check that it's never called with any arguments.",
"created_at": "2017-11-29T15:57:48Z"
},
{
"body": "formatting nit: can you move the `while` behind the closing bracket `}`, not in a new line?",
"created_at": "2017-11-29T15:59:31Z"
},
{
"body": "yes.",
"created_at": "2017-11-29T16:19:15Z"
},
{
"body": ":)",
"created_at": "2017-11-29T16:19:22Z"
},
{
"body": "good catch. Thanks. ",
"created_at": "2017-11-29T16:19:36Z"
},
{
"body": "hard -> shard",
"created_at": "2017-11-30T09:06:34Z"
}
],
"title": "Flush old indices on primary promotion and relocation"
} | {
"commits": [
{
"message": "add testRecoveryWithConcurrentIndexing"
},
{
"message": "fix for promotion"
},
{
"message": "relax. You can't guarantee what you want"
},
{
"message": "assert we ship what we want to ship"
},
{
"message": "verify we ship the right ops"
},
{
"message": "logging"
},
{
"message": "doh"
},
{
"message": "lint"
},
{
"message": "more intuitive range indication"
},
{
"message": "fix testSendSnapshotSendsOps"
},
{
"message": "add primary relocation test"
},
{
"message": "index specific ensure green"
},
{
"message": "fix counts"
},
{
"message": "tighten testRelocationWithConcurrentIndexing"
},
{
"message": "flush on relocation"
},
{
"message": "simplify relation ship between flush and roll"
},
{
"message": "add explicit index names to health check"
},
{
"message": "beef up testSendSnapshotSendsOps"
},
{
"message": "fix testWaitForPendingSeqNo"
},
{
"message": "feedback"
},
{
"message": "more feedback"
},
{
"message": "last feedback round"
},
{
"message": "reduce the ensure green"
},
{
"message": "fix testSendSnapshotSendsOps as we always send at least one (potentially empty) batch"
},
{
"message": "extra space?"
},
{
"message": "add empty shard test"
},
{
"message": "make sure seq no info is in commit if recovering an old index"
},
{
"message": "add assertions that commit point in store always has sequence numbers info once recovery is done."
},
{
"message": "hard -> shard"
}
],
"files": [
{
"diff": "@@ -351,9 +351,11 @@ private void recoverFromTranslogInternal() throws IOException {\n } else if (translog.isCurrent(translogGeneration) == false) {\n commitIndexWriter(indexWriter, translog, lastCommittedSegmentInfos.getUserData().get(Engine.SYNC_COMMIT_ID));\n refreshLastCommittedSegmentInfos();\n- } else if (lastCommittedSegmentInfos.getUserData().containsKey(HISTORY_UUID_KEY) == false) {\n- assert historyUUID != null;\n- // put the history uuid into the index\n+ } else if (lastCommittedSegmentInfos.getUserData().containsKey(SequenceNumbers.LOCAL_CHECKPOINT_KEY) == false) {\n+ assert engineConfig.getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) :\n+ \"index was created on version \" + engineConfig.getIndexSettings().getIndexVersionCreated() + \"but has \"\n+ + \"no sequence numbers info in commit\";\n+\n commitIndexWriter(indexWriter, translog, lastCommittedSegmentInfos.getUserData().get(Engine.SYNC_COMMIT_ID));\n refreshLastCommittedSegmentInfos();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -422,7 +422,13 @@ public void updateShardState(final ShardRouting newRouting,\n final DiscoveryNode recoverySourceNode = recoveryState.getSourceNode();\n if (currentRouting.isRelocationTarget() == false || recoverySourceNode.getVersion().before(Version.V_6_0_0_alpha1)) {\n // there was no primary context hand-off in < 6.0.0, need to manually activate the shard\n- getEngine().seqNoService().activatePrimaryMode(getEngine().seqNoService().getLocalCheckpoint());\n+ final Engine engine = getEngine();\n+ engine.seqNoService().activatePrimaryMode(getEngine().seqNoService().getLocalCheckpoint());\n+ // Flush the translog as it may contain operations with no sequence numbers. We want to make sure those\n+ // operations will never be replayed as part of peer recovery to avoid an arbitrary mixture of operations with seq#\n+ // (due to active indexing) and operations without a seq# coming from the translog. We therefore flush\n+ // to create a lucene commit point to an empty translog file.\n+ engine.flush(false, true);\n }\n }\n \n@@ -487,15 +493,26 @@ public void updateShardState(final ShardRouting newRouting,\n * subsequently fails before the primary/replica re-sync completes successfully and we are now being\n * promoted, the local checkpoint tracker here could be left in a state where it would re-issue sequence\n * numbers. To ensure that this is not the case, we restore the state of the local checkpoint tracker by\n- * replaying the translog and marking any operations there are completed. Rolling the translog generation is\n- * not strictly needed here (as we will never have collisions between sequence numbers in a translog\n- * generation in a new primary as it takes the last known sequence number as a starting point), but it\n- * simplifies reasoning about the relationship between primary terms and translog generations.\n+ * replaying the translog and marking any operations there are completed.\n */\n- getEngine().rollTranslogGeneration();\n- getEngine().restoreLocalCheckpointFromTranslog();\n- getEngine().fillSeqNoGaps(newPrimaryTerm);\n- getEngine().seqNoService().updateLocalCheckpointForShard(currentRouting.allocationId().getId(),\n+ final Engine engine = getEngine();\n+ engine.restoreLocalCheckpointFromTranslog();\n+ if (indexSettings.getIndexVersionCreated().onOrBefore(Version.V_6_0_0_alpha1)) {\n+ // an index that was created before sequence numbers were introduced may contain operations in its\n+ // translog that do not have a sequence numbers. We want to make sure those operations will never\n+ // be replayed as part of peer recovery to avoid an arbitrary mixture of operations with seq# (due\n+ // to active indexing) and operations without a seq# coming from the translog. 
We therefore flush\n+ // to create a lucene commit point to an empty translog file.\n+ engine.flush(false, true);\n+ }\n+ /* Rolling the translog generation is not strictly needed here (as we will never have collisions between\n+ * sequence numbers in a translog generation in a new primary as it takes the last known sequence number\n+ * as a starting point), but it simplifies reasoning about the relationship between primary terms and\n+ * translog generations.\n+ */\n+ engine.rollTranslogGeneration();\n+ engine.fillSeqNoGaps(newPrimaryTerm);\n+ engine.seqNoService().updateLocalCheckpointForShard(currentRouting.allocationId().getId(),\n getEngine().seqNoService().getLocalCheckpoint());\n primaryReplicaSyncer.accept(this, new ActionListener<ResyncTask>() {\n @Override\n@@ -1316,6 +1333,17 @@ private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boole\n active.set(true);\n newEngine.recoverFromTranslog();\n }\n+ assertSequenceNumbersInCommit();\n+ }\n+\n+ private boolean assertSequenceNumbersInCommit() throws IOException {\n+ final Map<String, String> userData = SegmentInfos.readLatestCommit(store.directory()).getUserData();\n+ assert userData.containsKey(SequenceNumbers.LOCAL_CHECKPOINT_KEY) : \"commit point doesn't contains a local checkpoint\";\n+ assert userData.containsKey(SequenceNumbers.MAX_SEQ_NO) : \"commit point doesn't contains a maximum sequence number\";\n+ assert userData.containsKey(Engine.HISTORY_UUID_KEY) : \"commit point doesn't contains a history uuid\";\n+ assert userData.get(Engine.HISTORY_UUID_KEY).equals(getHistoryUUID()) : \"commit point history uuid [\"\n+ + userData.get(Engine.HISTORY_UUID_KEY) + \"] is different than engine [\" + getHistoryUUID() + \"]\";\n+ return true;\n }\n \n private boolean assertMaxUnsafeAutoIdInCommit() throws IOException {",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -148,23 +148,26 @@ public RecoveryResponse recoverToTarget() throws IOException {\n final Translog translog = shard.getTranslog();\n \n final long startingSeqNo;\n+ final long requiredSeqNoRangeStart;\n final boolean isSequenceNumberBasedRecoveryPossible = request.startingSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO &&\n isTargetSameHistory() && isTranslogReadyForSequenceNumberBasedRecovery();\n-\n if (isSequenceNumberBasedRecoveryPossible) {\n logger.trace(\"performing sequence numbers based recovery. starting at [{}]\", request.startingSeqNo());\n startingSeqNo = request.startingSeqNo();\n+ requiredSeqNoRangeStart = startingSeqNo;\n } else {\n final Engine.IndexCommitRef phase1Snapshot;\n try {\n phase1Snapshot = shard.acquireIndexCommit(false);\n } catch (final Exception e) {\n throw new RecoveryEngineException(shard.shardId(), 1, \"snapshot failed\", e);\n }\n- // we set this to unassigned to create a translog roughly according to the retention policy\n- // on the target\n- startingSeqNo = SequenceNumbers.UNASSIGNED_SEQ_NO;\n-\n+ // we set this to 0 to create a translog roughly according to the retention policy\n+ // on the target. Note that it will still filter out legacy operations with no sequence numbers\n+ startingSeqNo = 0;\n+ // but we must have everything above the local checkpoint in the commit\n+ requiredSeqNoRangeStart =\n+ Long.parseLong(phase1Snapshot.getIndexCommit().getUserData().get(SequenceNumbers.LOCAL_CHECKPOINT_KEY)) + 1;\n try {\n phase1(phase1Snapshot.getIndexCommit(), translog::totalOperations);\n } catch (final Exception e) {\n@@ -177,6 +180,9 @@ public RecoveryResponse recoverToTarget() throws IOException {\n }\n }\n }\n+ assert startingSeqNo >= 0 : \"startingSeqNo must be non negative. got: \" + startingSeqNo;\n+ assert requiredSeqNoRangeStart >= startingSeqNo : \"requiredSeqNoRangeStart [\" + requiredSeqNoRangeStart + \"] is lower than [\"\n+ + startingSeqNo + \"]\";\n \n runUnderPrimaryPermit(() -> shard.initiateTracking(request.targetAllocationId()));\n \n@@ -186,10 +192,19 @@ public RecoveryResponse recoverToTarget() throws IOException {\n throw new RecoveryEngineException(shard.shardId(), 1, \"prepare target for translog failed\", e);\n }\n \n+ final long endingSeqNo = shard.seqNoStats().getMaxSeqNo();\n+ /*\n+ * We need to wait for all operations up to the current max to complete, otherwise we can not guarantee that all\n+ * operations in the required range will be available for replaying from the translog of the source.\n+ */\n+ cancellableThreads.execute(() -> shard.waitForOpsToComplete(endingSeqNo));\n+\n+ logger.trace(\"all operations up to [{}] completed, which will be used as an ending sequence number\", endingSeqNo);\n+\n logger.trace(\"snapshot translog for recovery; current size is [{}]\", translog.estimateTotalOperationsFromMinSeq(startingSeqNo));\n final long targetLocalCheckpoint;\n try(Translog.Snapshot snapshot = translog.newSnapshotFromMinSeqNo(startingSeqNo)) {\n- targetLocalCheckpoint = phase2(startingSeqNo, snapshot);\n+ targetLocalCheckpoint = phase2(startingSeqNo, requiredSeqNoRangeStart, endingSeqNo, snapshot);\n } catch (Exception e) {\n throw new RecoveryEngineException(shard.shardId(), 2, \"phase2 failed\", e);\n }\n@@ -223,26 +238,19 @@ private void runUnderPrimaryPermit(CancellableThreads.Interruptable runnable) {\n \n /**\n * Determines if the source translog is ready for a sequence-number-based peer recovery. 
The main condition here is that the source\n- * translog contains all operations between the local checkpoint on the target and the current maximum sequence number on the source.\n+ * translog contains all operations above the local checkpoint on the target. We already know the that translog contains or will contain\n+ * all ops above the source local checkpoint, so we can stop check there.\n *\n * @return {@code true} if the source is ready for a sequence-number-based recovery\n * @throws IOException if an I/O exception occurred reading the translog snapshot\n */\n boolean isTranslogReadyForSequenceNumberBasedRecovery() throws IOException {\n final long startingSeqNo = request.startingSeqNo();\n assert startingSeqNo >= 0;\n- final long endingSeqNo = shard.seqNoStats().getMaxSeqNo();\n- logger.trace(\"testing sequence numbers in range: [{}, {}]\", startingSeqNo, endingSeqNo);\n+ final long localCheckpoint = shard.getLocalCheckpoint();\n+ logger.trace(\"testing sequence numbers in range: [{}, {}]\", startingSeqNo, localCheckpoint);\n // the start recovery request is initialized with the starting sequence number set to the target shard's local checkpoint plus one\n- if (startingSeqNo - 1 <= endingSeqNo) {\n- /*\n- * We need to wait for all operations up to the current max to complete, otherwise we can not guarantee that all\n- * operations in the required range will be available for replaying from the translog of the source.\n- */\n- cancellableThreads.execute(() -> shard.waitForOpsToComplete(endingSeqNo));\n-\n- logger.trace(\"all operations up to [{}] completed, checking translog content\", endingSeqNo);\n-\n+ if (startingSeqNo - 1 <= localCheckpoint) {\n final LocalCheckpointTracker tracker = new LocalCheckpointTracker(startingSeqNo, startingSeqNo - 1);\n try (Translog.Snapshot snapshot = shard.getTranslog().newSnapshotFromMinSeqNo(startingSeqNo)) {\n Translog.Operation operation;\n@@ -252,7 +260,7 @@ boolean isTranslogReadyForSequenceNumberBasedRecovery() throws IOException {\n }\n }\n }\n- return tracker.getCheckpoint() >= endingSeqNo;\n+ return tracker.getCheckpoint() >= localCheckpoint;\n } else {\n return false;\n }\n@@ -432,24 +440,27 @@ void prepareTargetForTranslog(final int totalTranslogOps) throws IOException {\n * point-in-time view of the translog). 
It then sends each translog operation to the target node so it can be replayed into the new\n * shard.\n *\n- * @param startingSeqNo the sequence number to start recovery from, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} if all\n- * ops should be sent\n- * @param snapshot a snapshot of the translog\n- *\n+ * @param startingSeqNo the sequence number to start recovery from, or {@link SequenceNumbers#UNASSIGNED_SEQ_NO} if all\n+ * ops should be sent\n+ * @param requiredSeqNoRangeStart the lower sequence number of the required range (ending with endingSeqNo)\n+ * @param endingSeqNo the highest sequence number that should be sent\n+ * @param snapshot a snapshot of the translog\n * @return the local checkpoint on the target\n */\n- long phase2(final long startingSeqNo, final Translog.Snapshot snapshot) throws IOException {\n+ long phase2(final long startingSeqNo, long requiredSeqNoRangeStart, long endingSeqNo, final Translog.Snapshot snapshot)\n+ throws IOException {\n if (shard.state() == IndexShardState.CLOSED) {\n throw new IndexShardClosedException(request.shardId());\n }\n cancellableThreads.checkForCancel();\n \n final StopWatch stopWatch = new StopWatch().start();\n \n- logger.trace(\"recovery [phase2]: sending transaction log operations\");\n+ logger.trace(\"recovery [phase2]: sending transaction log operations (seq# from [\" + startingSeqNo + \"], \" +\n+ \"required [\" + requiredSeqNoRangeStart + \":\" + endingSeqNo + \"]\");\n \n // send all the snapshot's translog operations to the target\n- final SendSnapshotResult result = sendSnapshot(startingSeqNo, snapshot);\n+ final SendSnapshotResult result = sendSnapshot(startingSeqNo, requiredSeqNoRangeStart, endingSeqNo, snapshot);\n \n stopWatch.stop();\n logger.trace(\"recovery [phase2]: took [{}]\", stopWatch.totalTime());\n@@ -510,18 +521,26 @@ static class SendSnapshotResult {\n * <p>\n * Operations are bulked into a single request depending on an operation count limit or size-in-bytes limit.\n *\n- * @param startingSeqNo the sequence number for which only operations with a sequence number greater than this will be sent\n- * @param snapshot the translog snapshot to replay operations from\n- * @return the local checkpoint on the target and the total number of operations sent\n+ * @param startingSeqNo the sequence number for which only operations with a sequence number greater than this will be sent\n+ * @param requiredSeqNoRangeStart the lower sequence number of the required range\n+ * @param endingSeqNo the upper bound of the sequence number range to be sent (inclusive)\n+ * @param snapshot the translog snapshot to replay operations from @return the local checkpoint on the target and the\n+ * total number of operations sent\n * @throws IOException if an I/O exception occurred reading the translog snapshot\n */\n- protected SendSnapshotResult sendSnapshot(final long startingSeqNo, final Translog.Snapshot snapshot) throws IOException {\n+ protected SendSnapshotResult sendSnapshot(final long startingSeqNo, long requiredSeqNoRangeStart, long endingSeqNo,\n+ final Translog.Snapshot snapshot) throws IOException {\n+ assert requiredSeqNoRangeStart <= endingSeqNo + 1:\n+ \"requiredSeqNoRangeStart \" + requiredSeqNoRangeStart + \" is larger than endingSeqNo \" + endingSeqNo;\n+ assert startingSeqNo <= requiredSeqNoRangeStart :\n+ \"startingSeqNo \" + startingSeqNo + \" is larger than requiredSeqNoRangeStart \" + requiredSeqNoRangeStart;\n int ops = 0;\n long size = 0;\n int skippedOps = 0;\n int totalSentOps = 0;\n final AtomicLong 
targetLocalCheckpoint = new AtomicLong(SequenceNumbers.UNASSIGNED_SEQ_NO);\n final List<Translog.Operation> operations = new ArrayList<>();\n+ final LocalCheckpointTracker requiredOpsTracker = new LocalCheckpointTracker(endingSeqNo, requiredSeqNoRangeStart - 1);\n \n final int expectedTotalOps = snapshot.totalOperations();\n if (expectedTotalOps == 0) {\n@@ -538,19 +557,17 @@ protected SendSnapshotResult sendSnapshot(final long startingSeqNo, final Transl\n throw new IndexShardClosedException(request.shardId());\n }\n cancellableThreads.checkForCancel();\n- /*\n- * If we are doing a sequence-number-based recovery, we have to skip older ops for which no sequence number was assigned, and\n- * any ops before the starting sequence number.\n- */\n+\n final long seqNo = operation.seqNo();\n- if (startingSeqNo >= 0 && (seqNo == SequenceNumbers.UNASSIGNED_SEQ_NO || seqNo < startingSeqNo)) {\n+ if (seqNo < startingSeqNo || seqNo > endingSeqNo) {\n skippedOps++;\n continue;\n }\n operations.add(operation);\n ops++;\n size += operation.estimateSize();\n totalSentOps++;\n+ requiredOpsTracker.markSeqNoAsCompleted(seqNo);\n \n // check if this request is past bytes threshold, and if so, send it off\n if (size >= chunkSizeInBytes) {\n@@ -567,6 +584,12 @@ protected SendSnapshotResult sendSnapshot(final long startingSeqNo, final Transl\n cancellableThreads.executeIO(sendBatch);\n }\n \n+ if (requiredOpsTracker.getCheckpoint() < endingSeqNo) {\n+ throw new IllegalStateException(\"translog replay failed to cover required sequence numbers\" +\n+ \" (required range [\" + requiredSeqNoRangeStart + \":\" + endingSeqNo + \"). first missing op is [\"\n+ + (requiredOpsTracker.getCheckpoint() + 1) + \"]\");\n+ }\n+\n assert expectedTotalOps == skippedOps + totalSentOps\n : \"expected total [\" + expectedTotalOps + \"], skipped [\" + skippedOps + \"], total sent [\" + totalSentOps + \"]\";\n ",
"filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java",
"status": "modified"
},
{
"diff": "@@ -374,15 +374,15 @@ protected EngineFactory getEngineFactory(ShardRouting routing) {\n IndexShard newReplica = shards.addReplicaWithExistingPath(replica.shardPath(), replica.routingEntry().currentNodeId());\n \n CountDownLatch recoveryStart = new CountDownLatch(1);\n- AtomicBoolean preparedForTranslog = new AtomicBoolean(false);\n+ AtomicBoolean opsSent = new AtomicBoolean(false);\n final Future<Void> recoveryFuture = shards.asyncRecoverReplica(newReplica, (indexShard, node) -> {\n recoveryStart.countDown();\n return new RecoveryTarget(indexShard, node, recoveryListener, l -> {\n }) {\n @Override\n- public void prepareForTranslogOperations(int totalTranslogOps) throws IOException {\n- preparedForTranslog.set(true);\n- super.prepareForTranslogOperations(totalTranslogOps);\n+ public long indexTranslogOperations(List<Translog.Operation> operations, int totalTranslogOps) throws IOException {\n+ opsSent.set(true);\n+ return super.indexTranslogOperations(operations, totalTranslogOps);\n }\n };\n });\n@@ -392,7 +392,7 @@ public void prepareForTranslogOperations(int totalTranslogOps) throws IOExceptio\n // index some more\n docs += shards.indexDocs(randomInt(5));\n \n- assertFalse(\"recovery should wait on pending docs\", preparedForTranslog.get());\n+ assertFalse(\"recovery should wait on pending docs\", opsSent.get());\n \n primaryEngineFactory.releaseLatchedIndexers();\n pendingDocsDone.await();",
"filename": "core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java",
"status": "modified"
},
{
"diff": "@@ -70,15 +70,18 @@\n import org.elasticsearch.test.DummyShardLock;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.IndexSettingsModule;\n+import org.mockito.ArgumentCaptor;\n \n import java.io.IOException;\n import java.nio.file.Path;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n+import java.util.Comparator;\n import java.util.List;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.function.Supplier;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.emptySet;\n@@ -88,6 +91,7 @@\n import static org.mockito.Matchers.anyString;\n import static org.mockito.Mockito.doAnswer;\n import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.verify;\n import static org.mockito.Mockito.when;\n \n public class RecoverySourceHandlerTests extends ESTestCase {\n@@ -181,29 +185,68 @@ public void testSendSnapshotSendsOps() throws IOException {\n operations.add(new Translog.Index(index, new Engine.IndexResult(1, i - initialNumberOfDocs, true)));\n }\n operations.add(null);\n- final long startingSeqNo = randomBoolean() ? SequenceNumbers.UNASSIGNED_SEQ_NO : randomIntBetween(0, 16);\n- RecoverySourceHandler.SendSnapshotResult result = handler.sendSnapshot(startingSeqNo, new Translog.Snapshot() {\n- @Override\n- public void close() {\n+ final long startingSeqNo = randomIntBetween(0, numberOfDocsWithValidSequenceNumbers - 1);\n+ final long requiredStartingSeqNo = randomIntBetween((int) startingSeqNo, numberOfDocsWithValidSequenceNumbers - 1);\n+ final long endingSeqNo = randomIntBetween((int) requiredStartingSeqNo - 1, numberOfDocsWithValidSequenceNumbers - 1);\n+ RecoverySourceHandler.SendSnapshotResult result = handler.sendSnapshot(startingSeqNo, requiredStartingSeqNo,\n+ endingSeqNo, new Translog.Snapshot() {\n+ @Override\n+ public void close() {\n \n- }\n+ }\n \n- private int counter = 0;\n+ private int counter = 0;\n \n- @Override\n- public int totalOperations() {\n- return operations.size() - 1;\n- }\n+ @Override\n+ public int totalOperations() {\n+ return operations.size() - 1;\n+ }\n \n- @Override\n- public Translog.Operation next() throws IOException {\n- return operations.get(counter++);\n- }\n- });\n- if (startingSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO) {\n- assertThat(result.totalOperations, equalTo(initialNumberOfDocs + numberOfDocsWithValidSequenceNumbers));\n- } else {\n- assertThat(result.totalOperations, equalTo(Math.toIntExact(numberOfDocsWithValidSequenceNumbers - startingSeqNo)));\n+ @Override\n+ public Translog.Operation next() throws IOException {\n+ return operations.get(counter++);\n+ }\n+ });\n+ final int expectedOps = (int) (endingSeqNo - startingSeqNo + 1);\n+ assertThat(result.totalOperations, equalTo(expectedOps));\n+ final ArgumentCaptor<List> shippedOpsCaptor = ArgumentCaptor.forClass(List.class);\n+ verify(recoveryTarget).indexTranslogOperations(shippedOpsCaptor.capture(), ArgumentCaptor.forClass(Integer.class).capture());\n+ List<Translog.Operation> shippedOps = shippedOpsCaptor.getAllValues().stream()\n+ .flatMap(List::stream).map(o -> (Translog.Operation) o).collect(Collectors.toList());\n+ shippedOps.sort(Comparator.comparing(Translog.Operation::seqNo));\n+ assertThat(shippedOps.size(), equalTo(expectedOps));\n+ for (int i = 0; i < shippedOps.size(); i++) {\n+ assertThat(shippedOps.get(i), equalTo(operations.get(i + (int) startingSeqNo + initialNumberOfDocs)));\n+ }\n+ if 
(endingSeqNo >= requiredStartingSeqNo + 1) {\n+ // check that missing ops blows up\n+ List<Translog.Operation> requiredOps = operations.subList(0, operations.size() - 1).stream() // remove last null marker\n+ .filter(o -> o.seqNo() >= requiredStartingSeqNo && o.seqNo() <= endingSeqNo).collect(Collectors.toList());\n+ List<Translog.Operation> opsToSkip = randomSubsetOf(randomIntBetween(1, requiredOps.size()), requiredOps);\n+ expectThrows(IllegalStateException.class, () ->\n+ handler.sendSnapshot(startingSeqNo, requiredStartingSeqNo,\n+ endingSeqNo, new Translog.Snapshot() {\n+ @Override\n+ public void close() {\n+\n+ }\n+\n+ private int counter = 0;\n+\n+ @Override\n+ public int totalOperations() {\n+ return operations.size() - 1 - opsToSkip.size();\n+ }\n+\n+ @Override\n+ public Translog.Operation next() throws IOException {\n+ Translog.Operation op;\n+ do {\n+ op = operations.get(counter++);\n+ } while (op != null && opsToSkip.contains(op));\n+ return op;\n+ }\n+ }));\n }\n }\n \n@@ -383,7 +426,7 @@ void prepareTargetForTranslog(final int totalTranslogOps) throws IOException {\n }\n \n @Override\n- long phase2(long startingSeqNo, Translog.Snapshot snapshot) throws IOException {\n+ long phase2(long startingSeqNo, long requiredSeqNoRangeStart, long endingSeqNo, Translog.Snapshot snapshot) throws IOException {\n phase2Called.set(true);\n return SequenceNumbers.UNASSIGNED_SEQ_NO;\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java",
"status": "modified"
},
{
"diff": "@@ -25,8 +25,10 @@\n import org.apache.http.util.EntityUtils;\n import org.elasticsearch.Version;\n import org.elasticsearch.client.Response;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n@@ -49,6 +51,8 @@\n import static java.util.Collections.emptyMap;\n import static java.util.Collections.singletonList;\n import static java.util.Collections.singletonMap;\n+import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;\n+import static org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider.SETTING_ALLOCATION_MAX_RETRY;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n@@ -583,6 +587,28 @@ public void testSingleDoc() throws IOException {\n assertThat(toStr(client().performRequest(\"GET\", docLocation)), containsString(doc));\n }\n \n+ /**\n+ * Tests that a single empty shard index is correctly recovered. Empty shards are often an edge case.\n+ */\n+ public void testEmptyShard() throws IOException {\n+ final String index = \"test_empty_shard\";\n+\n+ if (runningAgainstOldCluster) {\n+ Settings.Builder settings = Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1)\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)\n+ // if the node with the replica is the first to be restarted, while a replica is still recovering\n+ // then delayed allocation will kick in. When the node comes back, the master will search for a copy\n+ // but the recovering copy will be seen as invalid and the cluster health won't return to GREEN\n+ // before timing out\n+ .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\")\n+ .put(SETTING_ALLOCATION_MAX_RETRY.getKey(), \"0\"); // fail faster\n+ createIndex(index, settings.build());\n+ }\n+ ensureGreen(index);\n+ }\n+\n+\n /**\n * Tests recovery of an index with or without a translog and the\n * statistics we gather about that.",
"filename": "qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.client.Response;\n import org.elasticsearch.client.RestClient;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.seqno.SeqNoStats;\n import org.elasticsearch.test.rest.ESRestTestCase;\n@@ -44,19 +43,10 @@\n import static java.util.Collections.singletonMap;\n import static org.elasticsearch.index.seqno.SequenceNumbers.NO_OPS_PERFORMED;\n import static org.elasticsearch.index.seqno.SequenceNumbers.UNASSIGNED_SEQ_NO;\n-import static org.hamcrest.Matchers.anyOf;\n import static org.hamcrest.Matchers.equalTo;\n \n public class IndexingIT extends ESRestTestCase {\n \n- private void updateIndexSetting(String name, Settings.Builder settings) throws IOException {\n- updateIndexSetting(name, settings.build());\n- }\n- private void updateIndexSetting(String name, Settings settings) throws IOException {\n- assertOK(client().performRequest(\"PUT\", name + \"/_settings\", Collections.emptyMap(),\n- new StringEntity(Strings.toString(settings), ContentType.APPLICATION_JSON)));\n- }\n-\n private int indexDocs(String index, final int idStart, final int numDocs) throws IOException {\n for (int i = 0; i < numDocs; i++) {\n final int id = idStart + i;\n@@ -113,7 +103,7 @@ public void testIndexVersionPropagation() throws Exception {\n final int finalVersionForDoc1 = indexDocWithConcurrentUpdates(index, 1, nUpdates);\n logger.info(\"allowing shards on all nodes\");\n updateIndexSetting(index, Settings.builder().putNull(\"index.routing.allocation.include._name\"));\n- ensureGreen();\n+ ensureGreen(index);\n assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n List<Shard> shards = buildShards(index, nodes, newNodeClient);\n Shard primary = buildShards(index, nodes, newNodeClient).stream().filter(Shard::isPrimary).findFirst().get();\n@@ -138,7 +128,7 @@ public void testIndexVersionPropagation() throws Exception {\n primary = shards.stream().filter(Shard::isPrimary).findFirst().get();\n logger.info(\"moving primary to new node by excluding {}\", primary.getNode().getNodeName());\n updateIndexSetting(index, Settings.builder().put(\"index.routing.allocation.exclude._name\", primary.getNode().getNodeName()));\n- ensureGreen();\n+ ensureGreen(index);\n nUpdates = randomIntBetween(minUpdates, maxUpdates);\n logger.info(\"indexing docs with [{}] concurrent updates after moving primary\", nUpdates);\n final int finalVersionForDoc3 = indexDocWithConcurrentUpdates(index, 3, nUpdates);\n@@ -151,7 +141,7 @@ public void testIndexVersionPropagation() throws Exception {\n \n logger.info(\"setting number of replicas to 0\");\n updateIndexSetting(index, Settings.builder().put(\"index.number_of_replicas\", 0));\n- ensureGreen();\n+ ensureGreen(index);\n nUpdates = randomIntBetween(minUpdates, maxUpdates);\n logger.info(\"indexing doc with [{}] concurrent updates after setting number of replicas to 0\", nUpdates);\n final int finalVersionForDoc4 = indexDocWithConcurrentUpdates(index, 4, nUpdates);\n@@ -164,7 +154,7 @@ public void testIndexVersionPropagation() throws Exception {\n \n logger.info(\"setting number of replicas to 1\");\n updateIndexSetting(index, Settings.builder().put(\"index.number_of_replicas\", 1));\n- ensureGreen();\n+ ensureGreen(index);\n nUpdates = randomIntBetween(minUpdates, maxUpdates);\n logger.info(\"indexing doc with [{}] concurrent updates after setting number of replicas to 1\", nUpdates);\n final 
int finalVersionForDoc5 = indexDocWithConcurrentUpdates(index, 5, nUpdates);\n@@ -202,7 +192,7 @@ public void testSeqNoCheckpoints() throws Exception {\n assertSeqNoOnShards(index, nodes, nodes.getBWCVersion().major >= 6 ? numDocs : 0, newNodeClient);\n logger.info(\"allowing shards on all nodes\");\n updateIndexSetting(index, Settings.builder().putNull(\"index.routing.allocation.include._name\"));\n- ensureGreen();\n+ ensureGreen(index);\n assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n for (final String bwcName : bwcNamesList) {\n assertCount(index, \"_only_nodes:\" + bwcName, numDocs);\n@@ -214,7 +204,7 @@ public void testSeqNoCheckpoints() throws Exception {\n Shard primary = buildShards(index, nodes, newNodeClient).stream().filter(Shard::isPrimary).findFirst().get();\n logger.info(\"moving primary to new node by excluding {}\", primary.getNode().getNodeName());\n updateIndexSetting(index, Settings.builder().put(\"index.routing.allocation.exclude._name\", primary.getNode().getNodeName()));\n- ensureGreen();\n+ ensureGreen(index);\n int numDocsOnNewPrimary = 0;\n final int numberOfDocsAfterMovingPrimary = 1 + randomInt(5);\n logger.info(\"indexing [{}] docs after moving primary\", numberOfDocsAfterMovingPrimary);\n@@ -233,7 +223,7 @@ public void testSeqNoCheckpoints() throws Exception {\n numDocs += numberOfDocsAfterDroppingReplicas;\n logger.info(\"setting number of replicas to 1\");\n updateIndexSetting(index, Settings.builder().put(\"index.number_of_replicas\", 1));\n- ensureGreen();\n+ ensureGreen(index);\n assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n // the number of documents on the primary and on the recovered replica should match the number of indexed documents\n assertCount(index, \"_primary\", numDocs);",
"filename": "qa/mixed-cluster/src/test/java/org/elasticsearch/backwards/IndexingIT.java",
"status": "modified"
},
{
"diff": "@@ -18,16 +18,29 @@\n */\n package org.elasticsearch.upgrades;\n \n+import org.apache.http.entity.ContentType;\n+import org.apache.http.entity.StringEntity;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.support.PlainActionFuture;\n import org.elasticsearch.client.Response;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.test.rest.ESRestTestCase;\n import org.elasticsearch.test.rest.yaml.ObjectPath;\n \n+import java.io.IOException;\n import java.util.Collections;\n import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.Future;\n+import java.util.function.Predicate;\n \n+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomAsciiOfLength;\n+import static java.util.Collections.emptyMap;\n import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;\n+import static org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING;\n+import static org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider.SETTING_ALLOCATION_MAX_RETRY;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -89,7 +102,7 @@ public void testHistoryUUIDIsGenerated() throws Exception {\n .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\");\n createIndex(index, settings.build());\n } else if (clusterType == CLUSTER_TYPE.UPGRADED) {\n- ensureGreen();\n+ ensureGreen(index);\n Response response = client().performRequest(\"GET\", index + \"/_stats\", Collections.singletonMap(\"level\", \"shards\"));\n assertOK(response);\n ObjectPath objectPath = ObjectPath.createFromResponse(response);\n@@ -109,4 +122,146 @@ public void testHistoryUUIDIsGenerated() throws Exception {\n }\n }\n \n+ private int indexDocs(String index, final int idStart, final int numDocs) throws IOException {\n+ for (int i = 0; i < numDocs; i++) {\n+ final int id = idStart + i;\n+ assertOK(client().performRequest(\"PUT\", index + \"/test/\" + id, emptyMap(),\n+ new StringEntity(\"{\\\"test\\\": \\\"test_\" + randomAsciiOfLength(2) + \"\\\"}\", ContentType.APPLICATION_JSON)));\n+ }\n+ return numDocs;\n+ }\n+\n+ private Future<Void> asyncIndexDocs(String index, final int idStart, final int numDocs) throws IOException {\n+ PlainActionFuture<Void> future = new PlainActionFuture<>();\n+ Thread background = new Thread(new AbstractRunnable() {\n+ @Override\n+ public void onFailure(Exception e) {\n+ future.onFailure(e);\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n+ indexDocs(index, idStart, numDocs);\n+ future.onResponse(null);\n+ }\n+ });\n+ background.start();\n+ return future;\n+ }\n+\n+ public void testRecoveryWithConcurrentIndexing() throws Exception {\n+ final String index = \"recovery_with_concurrent_indexing\";\n+ switch (clusterType) {\n+ case OLD:\n+ Settings.Builder settings = Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1)\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)\n+ // if the node with the replica is the first to be restarted, while a replica is still recovering\n+ // then delayed allocation will kick in. 
When the node comes back, the master will search for a copy\n+ // but the recovering copy will be seen as invalid and the cluster health won't return to GREEN\n+ // before timing out\n+ .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\")\n+ .put(SETTING_ALLOCATION_MAX_RETRY.getKey(), \"0\"); // fail faster\n+ createIndex(index, settings.build());\n+ indexDocs(index, 0, 10);\n+ ensureGreen(index);\n+ // make sure that we can index while the replicas are recovering\n+ updateIndexSetting(index, Settings.builder().put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), \"primaries\"));\n+ break;\n+ case MIXED:\n+ updateIndexSetting(index, Settings.builder().put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), (String)null));\n+ asyncIndexDocs(index, 10, 50).get();\n+ ensureGreen(index);\n+ assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n+ assertCount(index, \"_primary\", 60);\n+ assertCount(index, \"_replica\", 60);\n+ // make sure that we can index while the replicas are recovering\n+ updateIndexSetting(index, Settings.builder().put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), \"primaries\"));\n+ break;\n+ case UPGRADED:\n+ updateIndexSetting(index, Settings.builder().put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), (String)null));\n+ asyncIndexDocs(index, 60, 50).get();\n+ ensureGreen(index);\n+ assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n+ assertCount(index, \"_primary\", 110);\n+ assertCount(index, \"_replica\", 110);\n+ break;\n+ default:\n+ throw new IllegalStateException(\"unknown type \" + clusterType);\n+ }\n+ }\n+\n+ private void assertCount(final String index, final String preference, final int expectedCount) throws IOException {\n+ final Response response = client().performRequest(\"GET\", index + \"/_count\", Collections.singletonMap(\"preference\", preference));\n+ assertOK(response);\n+ final int actualCount = Integer.parseInt(ObjectPath.createFromResponse(response).evaluate(\"count\").toString());\n+ assertThat(actualCount, equalTo(expectedCount));\n+ }\n+\n+\n+ private String getNodeId(Predicate<Version> versionPredicate) throws IOException {\n+ Response response = client().performRequest(\"GET\", \"_nodes\");\n+ ObjectPath objectPath = ObjectPath.createFromResponse(response);\n+ Map<String, Object> nodesAsMap = objectPath.evaluate(\"nodes\");\n+ for (String id : nodesAsMap.keySet()) {\n+ Version version = Version.fromString(objectPath.evaluate(\"nodes.\" + id + \".version\"));\n+ if (versionPredicate.test(version)) {\n+ return id;\n+ }\n+ }\n+ return null;\n+ }\n+\n+\n+ public void testRelocationWithConcurrentIndexing() throws Exception {\n+ final String index = \"relocation_with_concurrent_indexing\";\n+ switch (clusterType) {\n+ case OLD:\n+ Settings.Builder settings = Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1)\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)\n+ // if the node with the replica is the first to be restarted, while a replica is still recovering\n+ // then delayed allocation will kick in. 
When the node comes back, the master will search for a copy\n+ // but the recovering copy will be seen as invalid and the cluster health won't return to GREEN\n+ // before timing out\n+ .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\")\n+ .put(SETTING_ALLOCATION_MAX_RETRY.getKey(), \"0\"); // fail faster\n+ createIndex(index, settings.build());\n+ indexDocs(index, 0, 10);\n+ ensureGreen(index);\n+ // make sure that no shards are allocated, so we can make sure the primary stays on the old node (when one\n+ // node stops, we lose the master too, so a replica will not be promoted)\n+ updateIndexSetting(index, Settings.builder().put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), \"none\"));\n+ break;\n+ case MIXED:\n+ final String newNode = getNodeId(v -> v.equals(Version.CURRENT));\n+ final String oldNode = getNodeId(v -> v.before(Version.CURRENT));\n+ // remove the replica now that we know that the primary is an old node\n+ updateIndexSetting(index, Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 0)\n+ .put(INDEX_ROUTING_ALLOCATION_ENABLE_SETTING.getKey(), (String)null)\n+ .put(\"index.routing.allocation.include._id\", oldNode)\n+ );\n+ updateIndexSetting(index, Settings.builder().put(\"index.routing.allocation.include._id\", newNode));\n+ asyncIndexDocs(index, 10, 50).get();\n+ ensureGreen(index);\n+ assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n+ assertCount(index, \"_primary\", 60);\n+ break;\n+ case UPGRADED:\n+ updateIndexSetting(index, Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)\n+ .put(\"index.routing.allocation.include._id\", (String)null)\n+ );\n+ asyncIndexDocs(index, 60, 50).get();\n+ ensureGreen(index);\n+ assertOK(client().performRequest(\"POST\", index + \"/_refresh\"));\n+ assertCount(index, \"_primary\", 110);\n+ assertCount(index, \"_replica\", 110);\n+ break;\n+ default:\n+ throw new IllegalStateException(\"unknown type \" + clusterType);\n+ }\n+ }\n+\n }",
"filename": "qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/RecoveryIT.java",
"status": "modified"
},
{
"diff": "@@ -16,6 +16,16 @@\n # allocation will kick in, and the cluster health won't return to GREEN\n # before timing out\n index.unassigned.node_left.delayed_timeout: \"100ms\"\n+\n+ - do:\n+ indices.create:\n+ index: empty_index # index to ensure we can recover empty indices\n+ body:\n+ # if the node with the replica is the first to be restarted, then delayed\n+ # allocation will kick in, and the cluster health won't return to GREEN\n+ # before timing out\n+ index.unassigned.node_left.delayed_timeout: \"100ms\"\n+\n - do:\n bulk:\n refresh: true",
"filename": "qa/rolling-upgrade/src/test/resources/rest-api-spec/test/old_cluster/10_basic.yml",
"status": "modified"
},
{
"diff": "@@ -7,6 +7,7 @@\n # wait for long enough that we give delayed unassigned shards to stop being delayed\n timeout: 70s\n level: shards\n+ index: test_index,index_with_replicas,empty_index\n \n - do:\n search:",
"filename": "qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml",
"status": "modified"
},
{
"diff": "@@ -392,13 +392,18 @@ protected void assertOK(Response response) {\n assertThat(response.getStatusLine().getStatusCode(), anyOf(equalTo(200), equalTo(201)));\n }\n \n- protected void ensureGreen() throws IOException {\n+ /**\n+ * checks that the specific index is green. we force a selection of an index as the tests share a cluster and often leave indices\n+ * in an non green state\n+ * @param index index to test for\n+ **/\n+ protected void ensureGreen(String index) throws IOException {\n Map<String, String> params = new HashMap<>();\n params.put(\"wait_for_status\", \"green\");\n params.put(\"wait_for_no_relocating_shards\", \"true\");\n params.put(\"timeout\", \"70s\");\n params.put(\"level\", \"shards\");\n- assertOK(client().performRequest(\"GET\", \"_cluster/health\", params));\n+ assertOK(client().performRequest(\"GET\", \"_cluster/health/\" + index, params));\n }\n \n protected void createIndex(String name, Settings settings) throws IOException {\n@@ -411,4 +416,12 @@ protected void createIndex(String name, Settings settings, String mapping) throw\n + \", \\\"mappings\\\" : {\" + mapping + \"} }\", ContentType.APPLICATION_JSON)));\n }\n \n+ protected void updateIndexSetting(String index, Settings.Builder settings) throws IOException {\n+ updateIndexSetting(index, settings.build());\n+ }\n+\n+ private void updateIndexSetting(String index, Settings settings) throws IOException {\n+ assertOK(client().performRequest(\"PUT\", index + \"/_settings\", Collections.emptyMap(),\n+ new StringEntity(Strings.toString(settings), ContentType.APPLICATION_JSON)));\n+ }\n }",
"filename": "test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java",
"status": "modified"
}
]
} |
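The record above changes `phase2`/`sendSnapshot` so the recovery source not only skips operations outside `[startingSeqNo, endingSeqNo]` but also verifies, via a `LocalCheckpointTracker`, that every operation in the required range `[requiredSeqNoRangeStart, endingSeqNo]` was actually shipped. The sketch below illustrates that gap check in isolation; it is a simplified, standalone stand-in (not Elasticsearch code), with a plain `long[]` of sequence numbers and a boolean array replacing `Translog.Snapshot` and `LocalCheckpointTracker`.

```java
import java.util.Arrays;

// Standalone illustration of the gap check the diff above adds to sendSnapshot:
// operations outside the shipping window are skipped silently, but any hole inside
// the required range must fail the recovery.
public final class RequiredRangeCheckSketch {

    static void replayAndVerify(long[] snapshotSeqNos, long startingSeqNo,
                                long requiredRangeStart, long endingSeqNo) {
        if (startingSeqNo > requiredRangeStart || requiredRangeStart > endingSeqNo + 1) {
            throw new IllegalArgumentException("inconsistent sequence number ranges");
        }
        // One flag per required seq# stands in for the checkpoint tracker.
        final boolean[] seen = new boolean[Math.toIntExact(endingSeqNo - requiredRangeStart + 1)];

        for (long seqNo : snapshotSeqNos) {
            if (seqNo < startingSeqNo || seqNo > endingSeqNo) {
                continue; // outside the window to ship: skip, do not fail
            }
            // ... the real handler would send the operation to the recovery target here ...
            if (seqNo >= requiredRangeStart) {
                seen[Math.toIntExact(seqNo - requiredRangeStart)] = true;
            }
        }

        // Equivalent of requiredOpsTracker.getCheckpoint() < endingSeqNo: report the first gap.
        for (int i = 0; i < seen.length; i++) {
            if (seen[i] == false) {
                throw new IllegalStateException("translog replay failed to cover required range ["
                    + requiredRangeStart + ":" + endingSeqNo + "], first missing op is ["
                    + (requiredRangeStart + i) + "]");
            }
        }
    }

    public static void main(String[] args) {
        replayAndVerify(new long[] {3, 4, 5}, 3, 4, 5);   // complete range: passes
        System.out.println("required range covered: " + Arrays.toString(new long[] {3, 4, 5}));
        try {
            replayAndVerify(new long[] {3, 5}, 3, 4, 5);  // seq# 4 missing: throws
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Tracking only the required sub-range keeps legitimate skips (operations below `startingSeqNo` or above `endingSeqNo`) from tripping the check while still failing fast on a genuine gap, which is exactly what the `IllegalStateException` in the diff enforces.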
{
"body": "Running with the all permission java.security.AllPermission granted is equivalent to disabling the security manager. This commit adds a bootstrap check that forbids running with this permission granted.\r\n\r\n",
"comments": [
{
"body": "test this please",
"created_at": "2017-11-27T19:40:59Z"
},
{
"body": "W00T change of the year!",
"created_at": "2017-11-27T22:31:37Z"
}
],
"number": 27548,
"title": "Forbid granting the all permission in production"
} | {
"body": "The all permission can no longer be granted in production as it effectively disables the security manager. This commit adds a note to the breaking changes regarding this.\r\n\r\nRelates #27548\r\n",
"number": 27549,
"review_comments": [],
"title": "Add breaking changes note on all permission check"
} | {
"commits": [
{
"message": "Add breaking changes note on all permission check\n\nThe all permission can no longer be granted in production as it\neffectively disables the security manager. This commit adds a note to\nthe breaking changes regarding this."
}
],
"files": [
{
"diff": "@@ -20,3 +20,5 @@ As a general rule:\n See <<setup-upgrade>> for more info.\n --\n include::migrate_6_0.asciidoc[]\n+\n+include::migrate_6_2.asciidoc[]",
"filename": "docs/reference/migration/index.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,14 @@\n+[[breaking-changes-6.2]]\n+== Breaking changes in 6.2\n+\n+[[breaking_62_packaging]]\n+[float]\n+=== All permission bootstrap check\n+\n+Elasticsearch installs a security manager during bootstrap to mitigate the scope\n+of exploits in the JDK, in third-party dependencies, and in Elasticsearch itself\n+as well as to sandbox untrusted plugins. A custom security policy can be applied\n+and one permission that can be added to this policy is\n+`java.security.AllPermission`. However, this effectively disables the security\n+manager. As such, granting this permission in production mode is now forbidden\n+via the <<all-permission-check, all permission bootstrap check>>.",
"filename": "docs/reference/migration/migrate_6_2.asciidoc",
"status": "added"
},
{
"diff": "@@ -228,6 +228,7 @@ enabled. The versions impacted are those earlier than the version of\n HotSpot that shipped with JDK 8u40. The G1GC check detects these early\n versions of the HotSpot JVM.\n \n+[[all-permission-check]]\n === All permission check\n \n The all permission check ensures that the security policy used during bootstrap",
"filename": "docs/reference/setup/bootstrap-checks.asciidoc",
"status": "modified"
}
]
} |
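The breaking-changes note above documents a bootstrap check that refuses to start in production when the security policy grants `java.security.AllPermission`, since that is equivalent to running without a security manager. The following standalone sketch shows one way such a condition could be detected with the standard `Policy` API; it is a hedged illustration under that assumption, not the actual Elasticsearch bootstrap check.

```java
import java.security.AllPermission;
import java.security.Policy;
import java.security.ProtectionDomain;

// Standalone sketch: ask the installed Policy whether it would grant AllPermission
// to this code's protection domain, and refuse to continue if it does.
public final class AllPermissionCheckSketch {

    static boolean grantsAllPermission(ProtectionDomain domain) {
        final Policy policy = Policy.getPolicy();
        return policy != null && policy.implies(domain, new AllPermission());
    }

    public static void main(String[] args) {
        ProtectionDomain domain = AllPermissionCheckSketch.class.getProtectionDomain();
        if (grantsAllPermission(domain)) {
            // In Elasticsearch, failing this condition in production mode aborts startup.
            throw new IllegalStateException(
                "granting the all permission effectively disables the security manager");
        }
        System.out.println("the all permission is not granted");
    }
}
```

As the migration note states, the real check only forbids startup in production mode, alongside the other bootstrap checks.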
{
"body": "The GlobalOrdinalsStringTermsAggregator.LowCardinality aggregator casts global\r\nvalues to `GlobalOrdinalMapping`, even though the implementation of global\r\nvalues is different when a `missing` value is configured.\r\n\r\nThis commit adds a new API that gives access to the ordinal remapping in order\r\nto fix this problem.\r\n",
"comments": [],
"number": 27543,
"title": "Fix illegal cast of the \"low cardinality\" optimization of the `terms` aggregation."
} | {
"body": "This is a safer version of #27543 for backports.",
"number": 27545,
"review_comments": [],
"title": "Disable the \"low cardinality\" optimization of terms aggregations."
} | {
"commits": [
{
"message": "Disable the \"low cardinality\" optimization of terms aggregations.\n\nThis is a safer version of #27543 for backports."
}
],
"files": [
{
"diff": "@@ -258,7 +258,8 @@ Aggregator create(String name,\n final long maxOrd = getMaxOrd(valuesSource, context.searcher());\n assert maxOrd != -1;\n final double ratio = maxOrd / ((double) context.searcher().getIndexReader().numDocs());\n- if (factories == AggregatorFactories.EMPTY &&\n+ if (valuesSource instanceof ValuesSource.Bytes.WithOrdinals.FieldData && // see #27543\n+ factories == AggregatorFactories.EMPTY &&\n includeExclude == null &&\n Aggregator.descendsFromBucketAggregator(parent) == false &&\n ratio <= 0.5 && maxOrd <= 2048) {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregatorFactory.java",
"status": "modified"
}
]
} |
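The one-line guard above works because `GlobalOrdinalsStringTermsAggregator.LowCardinality` downcasts the global values, which is only safe when the values source really is the plain field-data implementation and not the wrapper used for a configured `missing` value. The sketch below restates that selection logic with placeholder types; `OrdinalsSource`, `FieldDataSource`, and `MissingWrappedSource` are illustrative names, not the real Elasticsearch classes.

```java
// Standalone restatement of the guard added in the diff above: an optimization that
// downcasts to a concrete implementation must only be selected when the values source
// is actually that implementation.
public final class LowCardinalityGuardSketch {

    interface OrdinalsSource {}
    // Plain field data: the global ordinal remapping the optimization relies on is available.
    static final class FieldDataSource implements OrdinalsSource {}
    // Wrapper used when a missing value is configured: the downcast would fail here.
    static final class MissingWrappedSource implements OrdinalsSource {}

    static String chooseAggregator(OrdinalsSource source, long maxOrd, double ordToDocRatio) {
        // Pre-fix, only maxOrd and the ratio were checked; the fix additionally requires the
        // concrete field-data source so the low-cardinality path can safely downcast.
        if (source instanceof FieldDataSource && ordToDocRatio <= 0.5 && maxOrd <= 2048) {
            return "low-cardinality aggregator (uses the global ordinal remapping)";
        }
        return "default global-ordinals aggregator";
    }

    public static void main(String[] args) {
        System.out.println(chooseAggregator(new FieldDataSource(), 100, 0.1));      // fast path
        System.out.println(chooseAggregator(new MissingWrappedSource(), 100, 0.1)); // stays on the safe path
    }
}
```

Gating the optimization on the concrete type is the conservative backport; #27543 instead exposes the ordinal remapping through a new API so all implementations can use it.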
{
"body": "This is a follow-up to #23941. Currently there are a number of complexities related to compression. The raw DeflaterOutputStream must be closed prior to sending bytes to ensure that EOS bytes are written. But the underlying ReleasableBytesStreamOutput cannot be closed until the bytes are sent to ensure that the bytes are not reused.\r\n\r\nRight now we have three different stream references hanging around in TCPTransport to handle this complexity. This commit introduces CompressibleBytesOutputStream to be one stream implemenation that will behave properly with or without compression enabled.\r\n\r\nThis is a backport of #24927 to 5.6.\r\n\r\nCloses #27525",
"comments": [
{
"body": "To be clear about #27525, the issue there has nothing to do with compressible streams but rather handling closing of the releasable bytes output stream when an exception is thrown before we attach the send listener that would otherwise close the stream. The reason that #24927 addresses this though is due to how it simplifies closing of the releasable bytes output stream whether or not the send listener is attached.",
"created_at": "2017-11-27T15:34:28Z"
},
{
"body": "I pushed this change and backported the other changes that we discussed:\r\n - this change in 5.6: de6ed75503dc80c6aa0424f7718a0f669214c337\r\n - #27542 in 5.6: e903468dd1e8b3224b494b53b322fe685a3d1f23\r\n - #27564 in 5.6: f29219bab2b5b6a7cbb7bd99de8ee814a2fcaf8e",
"created_at": "2017-11-29T14:22:56Z"
}
],
"number": 27540,
"title": "Add CompressibleBytesOutputStream for compression"
} | {
"body": "Compressible bytes output stream swallows exceptions that occur when closing. This commit changes this behavior so that such exceptions bubble up.\r\n\r\nRelates #27540\r\n",
"number": 27542,
"review_comments": [],
"title": "Bubble exceptions when closing compressible streams"
} | {
"commits": [
{
"message": "Bubble exceptions when closing compressible streams\n\nCompressible bytes output stream swallows exceptions that occur when\nclosing. This commit changes this behavior so that such exceptions\nbubble up."
}
],
"files": [
{
"diff": "@@ -43,7 +43,7 @@\n * {@link CompressibleBytesOutputStream#close()} should be called when the bytes are no longer needed and\n * can be safely released.\n */\n-final class CompressibleBytesOutputStream extends StreamOutput implements Releasable {\n+final class CompressibleBytesOutputStream extends StreamOutput {\n \n private final StreamOutput stream;\n private final BytesStream bytesStreamOutput;\n@@ -92,13 +92,13 @@ public void flush() throws IOException {\n }\n \n @Override\n- public void close() {\n+ public void close() throws IOException {\n if (stream == bytesStreamOutput) {\n assert shouldCompress == false : \"If the streams are the same we should not be compressing\";\n- IOUtils.closeWhileHandlingException(stream);\n+ IOUtils.close(stream);\n } else {\n assert shouldCompress : \"If the streams are different we should be compressing\";\n- IOUtils.closeWhileHandlingException(stream, bytesStreamOutput);\n+ IOUtils.close(stream, bytesStreamOutput);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/transport/CompressibleBytesOutputStream.java",
"status": "modified"
},
{
"diff": "@@ -49,7 +49,6 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lease.Releasable;\n-import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.metrics.CounterMetric;\n import org.elasticsearch.common.metrics.MeanMetric;\n import org.elasticsearch.common.network.NetworkAddress;\n@@ -73,8 +72,10 @@\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.io.Closeable;\n import java.io.IOException;\n import java.io.StreamCorruptedException;\n+import java.io.UncheckedIOException;\n import java.net.BindException;\n import java.net.InetAddress;\n import java.net.InetSocketAddress;\n@@ -1704,29 +1705,36 @@ protected final void innerOnResponse(Void object) {\n \n private final class SendListener extends SendMetricListener {\n private final TcpChannel channel;\n- private final Releasable optionalReleasable;\n+ private final Closeable optionalCloseable;\n private final Runnable transportAdaptorCallback;\n \n- private SendListener(TcpChannel channel, Releasable optionalReleasable, Runnable transportAdaptorCallback, long messageLength) {\n+ private SendListener(TcpChannel channel, Closeable optionalCloseable, Runnable transportAdaptorCallback, long messageLength) {\n super(messageLength);\n this.channel = channel;\n- this.optionalReleasable = optionalReleasable;\n+ this.optionalCloseable = optionalCloseable;\n this.transportAdaptorCallback = transportAdaptorCallback;\n }\n \n @Override\n protected void innerInnerOnResponse(Void v) {\n- release();\n+ closeAndCallback(null);\n }\n \n @Override\n protected void innerOnFailure(Exception e) {\n logger.warn(() -> new ParameterizedMessage(\"send message failed [channel: {}]\", channel), e);\n- release();\n+ closeAndCallback(e);\n }\n \n- private void release() {\n- Releasables.close(optionalReleasable, transportAdaptorCallback::run);\n+ private void closeAndCallback(final Exception e) {\n+ try {\n+ IOUtils.close(optionalCloseable, transportAdaptorCallback::run);\n+ } catch (final IOException inner) {\n+ if (e != null) {\n+ inner.addSuppressed(e);\n+ }\n+ throw new UncheckedIOException(inner);\n+ }\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/transport/TcpTransport.java",
"status": "modified"
}
]
} |
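The `TcpTransport` change above replaces `closeWhileHandlingException` with `IOUtils.close` so close failures propagate, attaching any earlier send failure as a suppressed exception and wrapping the `IOException` in an `UncheckedIOException`. The standalone sketch below shows that pattern with plain JDK types; it is a simplified illustration and, unlike `IOUtils.close`, it does not attempt the callback when `close()` fails.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.UncheckedIOException;

// Standalone sketch of the closeAndCallback pattern from the TcpTransport diff above:
// close failures are no longer swallowed; an earlier failure (e.g. a failed send) is
// attached as a suppressed exception so neither error is lost.
public final class CloseAndCallbackSketch {

    static void closeAndCallback(Closeable resource, Runnable callback, Exception earlierFailure) {
        try {
            resource.close();   // simplified: the real code closes and runs the callback together
            callback.run();
        } catch (final IOException inner) {
            if (earlierFailure != null) {
                inner.addSuppressed(earlierFailure);
            }
            throw new UncheckedIOException(inner);  // bubble up instead of logging and dropping
        }
    }

    public static void main(String[] args) {
        Closeable failingClose = () -> { throw new IOException("close failed"); };
        try {
            closeAndCallback(failingClose, () -> {}, new RuntimeException("send failed"));
        } catch (UncheckedIOException e) {
            System.out.println(e.getCause().getMessage()
                + " (suppressed: " + e.getCause().getSuppressed()[0].getMessage() + ")");
        }
    }
}
```

Using suppressed exceptions keeps both the original send failure and the close failure visible in one stack trace instead of silently discarding either.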
{
"body": "https://discuss.elastic.co/t/circuit-breaker-always-trips/109067\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 5.6.4, Build: 8bbedf5/2017-10-31T18:55:38.105Z, JVM: 1.8.0_144\r\n\r\n**Plugins installed**: [analysis-icu]\r\n\r\n**JVM version** (`java -version`):\r\nopenjdk version \"1.8.0_144\"\r\nOpenJDK Runtime Environment (build 1.8.0_144-b01)\r\nOpenJDK 64-Bit Server VM (build 25.144-b01, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nFreeBSD fe 11.1-STABLE FreeBSD 11.1-STABLE #0 r324684: Tue Oct 17 15:07:45 CEST 2017 root@builder:/usr/obj/usr/src/sys/GENERIC amd64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nCircuit breakers' size constantly grow after a short period of uptime. This happens (for now) only on two machines, which may be because of replication.\r\nAfter the limit is reached, even a\r\ncurl http://localhost:9200/ fails with:\r\n```json\r\n{\r\n \"error\":{\r\n \"root_cause\":[\r\n {\r\n \"type\":\"circuit_breaking_exception\",\r\n \"reason\":\"[parent] Data too large, data for [<http_request>] would be [13610582016/12.6gb], which is larger than the limit of [11885484441/11gb]\",\r\n \"bytes_wanted\":13610582016,\r\n \"bytes_limit\":11885484441\r\n }\r\n ],\r\n \"type\":\"circuit_breaking_exception\",\r\n \"reason\":\"[parent] Data too large, data for [<http_request>] would be [13610582016/12.6gb], which is larger than the limit of [11885484441/11gb]\",\r\n \"bytes_wanted\":13610582016,\r\n \"bytes_limit\":11885484441\r\n },\r\n \"status\":503\r\n}\r\n```\r\nWith the default configuration, the cluster remains operational for some time. When it reaches the request breaker limit, all shards residing on the two failing machines become essentially unavailable.\r\nAfter some time the failing nodes get dropped out and reconnect, but it can't automatically heal.\r\nWhen I raise the breakers' limit to 2^63-1, the cluster remains operational, but the breaker size grows indefintely (growing around 160 GiB in 8 hours).\r\n\r\n**Steps to reproduce**:\r\nIt is 100% reproduceable on our cluster. More hints below.\r\nI need help (maybe a debug build) to figure out what causes it.\r\n\r\n**Provide logs (if relevant)**:\r\nI guess the root cause is that we have a too big multiget, which fails. 
It may be that this exception is not handled well and the 2 GiBs of size remains in the circuit breaker counter.\r\nIt would be pretty nice to log at least the mget doc _ids along with the following exception, so it would make easier to find out what docs have the problem.\r\n\r\n```\r\n[2017-11-25T08:06:18,532][DEBUG][o.e.a.g.TransportShardMultiGetAction] [fe00] null: failed to execute [org.elasticsearch.action.get.MultiGetShardRequest@165b2817]\r\norg.elasticsearch.transport.RemoteTransportException: [fe32][10.6.145.237:9300][indices:data/read/mget[shard][s]]\r\nCaused by: java.lang.IllegalArgumentException: ReleasableBytesStreamOutput cannot hold more than 2GB of data\r\n at org.elasticsearch.common.io.stream.BytesStreamOutput.ensureCapacity(BytesStreamOutput.java:155) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput.ensureCapacity(ReleasableBytesStreamOutput.java:69) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:89) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.io.Streams$FlushOnCloseOutputStream.writeBytes(Streams.java:266) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:406) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.bytes.BytesReference.writeTo(BytesReference.java:68) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.io.stream.StreamOutput.writeBytesReference(StreamOutput.java:150) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.index.get.GetResult.writeTo(GetResult.java:365) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.action.get.GetResponse.writeTo(GetResponse.java:201) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.action.get.MultiGetShardResponse.writeTo(MultiGetShardResponse.java:89) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TcpTransport.buildMessage(TcpTransport.java:1243) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:1199) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:1178) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:67) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:61) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:60) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:111) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$ShardTransportHandler.messageReceived(TransportSingleShardAction.java:295) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$ShardTransportHandler.messageReceived(TransportSingleShardAction.java:287) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at 
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1553) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.4.jar:5.6.4]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\n```\r\n\r\n",
"comments": [
{
"body": "Thanks for the report. It is easy to make this reproduce and this bug only exists in 5.x.\r\n\r\n@s1monw The problem here is we are not closing the releasable bytes stream output when we throw an exception before attaching a send listener to the channel so we leak. I would like to fix this by backporting #24927 to 5.6. This ensures that we always close the releasable whether or not we attached the send listener. What do you think?",
"created_at": "2017-11-25T14:09:54Z"
},
{
"body": "Closed by #27540",
"created_at": "2017-12-02T02:57:32Z"
}
],
"number": 27525,
"title": "Circuit breaker grows indefinitely when >2GiB of mget is issued (and possibly at other places as well)"
} | {
"body": "This is a follow-up to #23941. Currently there are a number of complexities related to compression. The raw DeflaterOutputStream must be closed prior to sending bytes to ensure that EOS bytes are written. But the underlying ReleasableBytesStreamOutput cannot be closed until the bytes are sent to ensure that the bytes are not reused.\r\n\r\nRight now we have three different stream references hanging around in TCPTransport to handle this complexity. This commit introduces CompressibleBytesOutputStream to be one stream implemenation that will behave properly with or without compression enabled.\r\n\r\nThis is a backport of #24927 to 5.6.\r\n\r\nCloses #27525",
"number": 27540,
"review_comments": [
{
"body": "can't we just throw UOE?",
"created_at": "2017-11-27T16:22:24Z"
},
{
"body": "why do we not bubble up the exception here?",
"created_at": "2017-11-27T16:23:28Z"
},
{
"body": "I think this is an artifact of the compressible bytes output stream implementing releasable which does not declare any checked exceptions. I think we should remove this and let these bubble up as you say. I opened: #27542",
"created_at": "2017-11-27T17:09:28Z"
},
{
"body": "In practice it probably does not matter, I think we will never call reset on these streams.\r\n\r\nIf the stream is not compressed, then resetting is fine.\r\n\r\nIf the stream is compressed, then we would already throw an unsupported operation exception (from `OutputStreamStreamOutput#reset`).",
"created_at": "2017-11-27T17:13:43Z"
},
{
"body": "I integrated #27542, are you good with this PR now @s1monw?",
"created_at": "2017-11-27T21:26:14Z"
},
{
"body": "I think it would be good to be consistent and always throw",
"created_at": "2017-11-28T14:28:57Z"
},
{
"body": "this PR still doesn't bubble up exceptions?!",
"created_at": "2017-11-28T14:30:01Z"
},
{
"body": "@s1monw I opened #27564.",
"created_at": "2017-11-28T16:27:54Z"
},
{
"body": "@s1monw I will pull #27542 in when I pull this PR in (I did it this way because this code already exists in 6.0/6.1/6.x/master and this PR is targeting 5.6 only). So: separate PRs to make the changes you are requesting to compressible bytes output stream in all branches.",
"created_at": "2017-11-28T16:29:23Z"
}
],
"title": "Add CompressibleBytesOutputStream for compression"
} | {
"commits": [
{
"message": "Add CompressibleBytesOutputStream for compression (#24927)\n\nThis is a follow-up to #23941. Currently there are a number of\ncomplexities related to compression. The raw DeflaterOutputStream must\nbe closed prior to sending bytes to ensure that EOS bytes are written.\nBut the underlying ReleasableBytesStreamOutput cannot be closed until\nthe bytes are sent to ensure that the bytes are not reused.\n\nRight now we have three different stream references hanging around in\nTCPTransport to handle this complexity. This commit introduces\nCompressibleBytesOutputStream to be one stream implemenation that will\nbehave properly with or without compression enabled."
}
],
"files": [
{
"diff": "@@ -0,0 +1,109 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.transport;\n+\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.compress.CompressorFactory;\n+import org.elasticsearch.common.io.Streams;\n+import org.elasticsearch.common.io.stream.BytesStream;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.lease.Releasable;\n+\n+import java.io.IOException;\n+import java.util.zip.DeflaterOutputStream;\n+\n+/**\n+ * This class exists to provide a stream with optional compression. This is useful as using compression\n+ * requires that the underlying {@link DeflaterOutputStream} be closed to write EOS bytes. However, the\n+ * {@link BytesStream} should not be closed yet, as we have not used the bytes. This class handles these\n+ * intricacies.\n+ *\n+ * {@link CompressibleBytesOutputStream#materializeBytes()} should be called when all the bytes have been\n+ * written to this stream. 
If compression is enabled, the proper EOS bytes will be written at that point.\n+ * The underlying {@link BytesReference} will be returned.\n+ *\n+ * {@link CompressibleBytesOutputStream#close()} should be called when the bytes are no longer needed and\n+ * can be safely released.\n+ */\n+final class CompressibleBytesOutputStream extends StreamOutput implements Releasable {\n+\n+ private final StreamOutput stream;\n+ private final BytesStream bytesStreamOutput;\n+ private final boolean shouldCompress;\n+\n+ CompressibleBytesOutputStream(BytesStream bytesStreamOutput, boolean shouldCompress) throws IOException {\n+ this.bytesStreamOutput = bytesStreamOutput;\n+ this.shouldCompress = shouldCompress;\n+ if (shouldCompress) {\n+ this.stream = CompressorFactory.COMPRESSOR.streamOutput(Streams.flushOnCloseStream(bytesStreamOutput));\n+ } else {\n+ this.stream = bytesStreamOutput;\n+ }\n+ }\n+\n+ /**\n+ * This method ensures that compression is complete and returns the underlying bytes.\n+ *\n+ * @return bytes underlying the stream\n+ * @throws IOException if an exception occurs when writing or flushing\n+ */\n+ BytesReference materializeBytes() throws IOException {\n+ // If we are using compression the stream needs to be closed to ensure that EOS marker bytes are written.\n+ // The actual ReleasableBytesStreamOutput will not be closed yet as it is wrapped in flushOnCloseStream when\n+ // passed to the deflater stream.\n+ if (shouldCompress) {\n+ stream.close();\n+ }\n+\n+ return bytesStreamOutput.bytes();\n+ }\n+\n+ @Override\n+ public void writeByte(byte b) throws IOException {\n+ stream.write(b);\n+ }\n+\n+ @Override\n+ public void writeBytes(byte[] b, int offset, int length) throws IOException {\n+ stream.writeBytes(b, offset, length);\n+ }\n+\n+ @Override\n+ public void flush() throws IOException {\n+ stream.flush();\n+ }\n+\n+ @Override\n+ public void close() {\n+ if (stream == bytesStreamOutput) {\n+ assert shouldCompress == false : \"If the streams are the same we should not be compressing\";\n+ IOUtils.closeWhileHandlingException(stream);\n+ } else {\n+ assert shouldCompress : \"If the streams are different we should be compressing\";\n+ IOUtils.closeWhileHandlingException(stream, bytesStreamOutput);\n+ }\n+ }\n+\n+ @Override\n+ public void reset() throws IOException {\n+ stream.reset();\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/transport/CompressibleBytesOutputStream.java",
"status": "added"
},
{
"diff": "@@ -40,7 +40,6 @@\n import org.elasticsearch.common.compress.Compressor;\n import org.elasticsearch.common.compress.CompressorFactory;\n import org.elasticsearch.common.compress.NotCompressedException;\n-import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n@@ -1094,18 +1093,18 @@ private void sendRequestToChannel(final DiscoveryNode node, final Channel target\n if (compress) {\n options = TransportRequestOptions.builder(options).withCompress(true).build();\n }\n+\n+ // only compress if asked and the request is not bytes. Otherwise only\n+ // the header part is compressed, and the \"body\" can't be extracted as compressed\n+ final boolean compressMessage = options.compress() && canCompress(request);\n+\n status = TransportStatus.setRequest(status);\n ReleasableBytesStreamOutput bStream = new ReleasableBytesStreamOutput(bigArrays);\n- // we wrap this in a release once since if the onRequestSent callback throws an exception\n- // we might release things twice and this should be prevented\n- final Releasable toRelease = Releasables.releaseOnce(() -> Releasables.close(bStream.bytes()));\n- StreamOutput stream = Streams.flushOnCloseStream(bStream);\n+ final CompressibleBytesOutputStream stream = new CompressibleBytesOutputStream(bStream, compressMessage);\n+ boolean addedReleaseListener = false;\n try {\n- // only compress if asked, and, the request is not bytes, since then only\n- // the header part is compressed, and the \"body\" can't be extracted as compressed\n- if (options.compress() && canCompress(request)) {\n+ if (compressMessage) {\n status = TransportStatus.setCompress(status);\n- stream = CompressorFactory.COMPRESSOR.streamOutput(stream);\n }\n \n // we pick the smallest of the 2, to support both backward and forward compatibility\n@@ -1116,14 +1115,17 @@ private void sendRequestToChannel(final DiscoveryNode node, final Channel target\n stream.setVersion(version);\n threadPool.getThreadContext().writeTo(stream);\n stream.writeString(action);\n- BytesReference message = buildMessage(requestId, status, node.getVersion(), request, stream, bStream);\n+ BytesReference message = buildMessage(requestId, status, node.getVersion(), request, stream);\n final TransportRequestOptions finalOptions = options;\n // this might be called in a different thread\n- SendListener onRequestSent = new SendListener(toRelease,\n- () -> transportServiceAdapter.onRequestSent(node, requestId, action, request, finalOptions));\n+ SendListener onRequestSent = new SendListener(stream,\n+ () -> transportServiceAdapter.onRequestSent(node, requestId, action, request, finalOptions));\n internalSendMessage(targetChannel, message, onRequestSent);\n+ addedReleaseListener = true;\n } finally {\n- IOUtils.close(stream);\n+ if (!addedReleaseListener) {\n+ IOUtils.close(stream);\n+ }\n }\n }\n \n@@ -1185,26 +1187,26 @@ private void sendResponse(Version nodeVersion, Channel channel, final TransportR\n }\n status = TransportStatus.setResponse(status); // TODO share some code with sendRequest\n ReleasableBytesStreamOutput bStream = new ReleasableBytesStreamOutput(bigArrays);\n- // we wrap this in a release once since if the onRequestSent callback throws an exception\n- // we might release things twice and this should be prevented\n- final Releasable toRelease = Releasables.releaseOnce(() -> 
Releasables.close(bStream.bytes()));\n- StreamOutput stream = Streams.flushOnCloseStream(bStream);\n+ CompressibleBytesOutputStream stream = new CompressibleBytesOutputStream(bStream, options.compress());\n+ boolean addedReleaseListener = false;\n try {\n if (options.compress()) {\n status = TransportStatus.setCompress(status);\n- stream = CompressorFactory.COMPRESSOR.streamOutput(stream);\n }\n threadPool.getThreadContext().writeTo(stream);\n stream.setVersion(nodeVersion);\n- BytesReference reference = buildMessage(requestId, status, nodeVersion, response, stream, bStream);\n+ BytesReference reference = buildMessage(requestId, status, nodeVersion, response, stream);\n \n final TransportResponseOptions finalOptions = options;\n // this might be called in a different thread\n- SendListener listener = new SendListener(toRelease,\n- () -> transportServiceAdapter.onResponseSent(requestId, action, response, finalOptions));\n+ SendListener listener = new SendListener(stream,\n+ () -> transportServiceAdapter.onResponseSent(requestId, action, response, finalOptions));\n internalSendMessage(channel, reference, listener);\n+ addedReleaseListener = true;\n } finally {\n- IOUtils.close(stream);\n+ if (!addedReleaseListener) {\n+ IOUtils.close(stream);\n+ }\n }\n }\n \n@@ -1231,8 +1233,8 @@ final BytesReference buildHeader(long requestId, byte status, Version protocolVe\n /**\n * Serializes the given message into a bytes representation\n */\n- private BytesReference buildMessage(long requestId, byte status, Version nodeVersion, TransportMessage message, StreamOutput stream,\n- ReleasableBytesStreamOutput writtenBytes) throws IOException {\n+ private BytesReference buildMessage(long requestId, byte status, Version nodeVersion, TransportMessage message,\n+ CompressibleBytesOutputStream stream) throws IOException {\n final BytesReference zeroCopyBuffer;\n if (message instanceof BytesTransportRequest) { // what a shitty optimization - we should use a direct send method instead\n BytesTransportRequest bRequest = (BytesTransportRequest) message;\n@@ -1243,12 +1245,12 @@ private BytesReference buildMessage(long requestId, byte status, Version nodeVer\n message.writeTo(stream);\n zeroCopyBuffer = BytesArray.EMPTY;\n }\n- // we have to close the stream here - flush is not enough since we might be compressing the content\n- // and if we do that the close method will write some marker bytes (EOS marker) and otherwise\n- // we barf on the decompressing end when we read past EOF on purpose in the #validateRequest method.\n- // this might be a problem in deflate after all but it's important to close it for now.\n- stream.close();\n- final BytesReference messageBody = writtenBytes.bytes();\n+ // we have to call materializeBytes() here before accessing the bytes. A CompressibleBytesOutputStream\n+ // might be implementing compression. And materializeBytes() ensures that some marker bytes (EOS marker)\n+ // are written. Otherwise we barf on the decompressing end when we read past EOF on purpose in the\n+ // #validateRequest method. this might be a problem in deflate after all but it's important to write\n+ // the marker bytes.\n+ final BytesReference messageBody = stream.materializeBytes();\n final BytesReference header = buildHeader(requestId, status, stream.getVersion(), messageBody.length() + zeroCopyBuffer.length());\n return new CompositeBytesReference(header, messageBody, zeroCopyBuffer);\n }",
"filename": "core/src/main/java/org/elasticsearch/transport/TcpTransport.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,116 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.transport;\n+\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.compress.CompressorFactory;\n+import org.elasticsearch.common.io.stream.BytesStream;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.EOFException;\n+import java.io.IOException;\n+\n+public class CompressibleBytesOutputStreamTests extends ESTestCase {\n+\n+ public void testStreamWithoutCompression() throws IOException {\n+ BytesStream bStream = new ZeroOutOnCloseStream();\n+ CompressibleBytesOutputStream stream = new CompressibleBytesOutputStream(bStream, false);\n+\n+ byte[] expectedBytes = randomBytes(randomInt(30));\n+ stream.write(expectedBytes);\n+\n+ BytesReference bytesRef = stream.materializeBytes();\n+\n+ assertFalse(CompressorFactory.COMPRESSOR.isCompressed(bytesRef));\n+\n+ StreamInput streamInput = bytesRef.streamInput();\n+ byte[] actualBytes = new byte[expectedBytes.length];\n+ streamInput.readBytes(actualBytes, 0, expectedBytes.length);\n+\n+ assertEquals(-1, streamInput.read());\n+ assertArrayEquals(expectedBytes, actualBytes);\n+ stream.close();\n+\n+ // The bytes should be zeroed out on close\n+ for (byte b : bytesRef.toBytesRef().bytes) {\n+ assertEquals((byte) 0, b);\n+ }\n+ }\n+\n+ public void testStreamWithCompression() throws IOException {\n+ BytesStream bStream = new ZeroOutOnCloseStream();\n+ CompressibleBytesOutputStream stream = new CompressibleBytesOutputStream(bStream, true);\n+\n+ byte[] expectedBytes = randomBytes(randomInt(30));\n+ stream.write(expectedBytes);\n+\n+ BytesReference bytesRef = stream.materializeBytes();\n+\n+ assertTrue(CompressorFactory.COMPRESSOR.isCompressed(bytesRef));\n+\n+ StreamInput streamInput = CompressorFactory.COMPRESSOR.streamInput(bytesRef.streamInput());\n+ byte[] actualBytes = new byte[expectedBytes.length];\n+ streamInput.readBytes(actualBytes, 0, expectedBytes.length);\n+\n+ assertEquals(-1, streamInput.read());\n+ assertArrayEquals(expectedBytes, actualBytes);\n+ stream.close();\n+\n+ // The bytes should be zeroed out on close\n+ for (byte b : bytesRef.toBytesRef().bytes) {\n+ assertEquals((byte) 0, b);\n+ }\n+ }\n+\n+ public void testCompressionWithCallingMaterializeFails() throws IOException {\n+ BytesStream bStream = new ZeroOutOnCloseStream();\n+ CompressibleBytesOutputStream stream = new CompressibleBytesOutputStream(bStream, true);\n+\n+ byte[] expectedBytes = randomBytes(randomInt(30));\n+ stream.write(expectedBytes);\n+\n+\n+ StreamInput streamInput = 
CompressorFactory.COMPRESSOR.streamInput(bStream.bytes().streamInput());\n+ byte[] actualBytes = new byte[expectedBytes.length];\n+ EOFException e = expectThrows(EOFException.class, () -> streamInput.readBytes(actualBytes, 0, expectedBytes.length));\n+ assertEquals(\"Unexpected end of ZLIB input stream\", e.getMessage());\n+\n+ stream.close();\n+ }\n+\n+ private static byte[] randomBytes(int length) {\n+ byte[] bytes = new byte[length];\n+ for (int i = 0; i < bytes.length; ++i) {\n+ bytes[i] = randomByte();\n+ }\n+ return bytes;\n+ }\n+\n+ private static class ZeroOutOnCloseStream extends BytesStreamOutput {\n+\n+ @Override\n+ public void close() {\n+ int size = (int) bytes.size();\n+ bytes.set(0, new byte[size], 0, size);\n+ }\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/transport/CompressibleBytesOutputStreamTests.java",
"status": "added"
}
]
} |
{
"body": "Today we create a new concurrent hash map everytime we refresh\r\nthe internal reader. Under defaults this isn't much of a deal but\r\nonce the refresh interval is set to `-1` these maps grow quite large\r\nand it can have a significant impact on indexing throughput. Under low\r\nmemory situations this can cause up to 2x slowdown. This change carries\r\nover the map size as the initial capacity wich will be auto-adjusted once\r\nindexing stops.\r\n\r\nCloses #20498\r\n",
"comments": [
{
"body": "here is a benchmark that I ran with and without the change and `index.refresh_interval: -1`\r\n\r\n```\r\n| Metric | Task | Baseline | Contender | Diff | Unit |\r\n|-------------------------------:|-------------:|-----------:|------------:|---------:|-------:|\r\n| Indexing time | | 23.1021 | 46.3922 | 23.2901 | min |\r\n| Merge time | | 8.26212 | 15.8539 | 7.59175 | min |\r\n| Refresh time | | 5.45988 | 0.919767 | -4.54012 | min |\r\n| Flush time | | 0.140767 | 0.390983 | 0.25022 | min |\r\n| Merge throttle time | | 1.29745 | 1.30738 | 0.00993 | min |\r\n| Total Young Gen GC | | 25.794 | 19.311 | -6.483 | s |\r\n| Total Old Gen GC | | 5.888 | 344.69 | 338.802 | s |\r\n| Totally written | | 15.1218 | 20.9158 | 5.79395 | GB |\r\n| Heap used for segments | | 19.2159 | 5.79695 | -13.4189 | MB |\r\n| Heap used for doc values | | 0.0357857 | 0.0361099 | 0.00032 | MB |\r\n| Heap used for terms | | 18.0396 | 5.40639 | -12.6332 | MB |\r\n| Heap used for norms | | 0.0803833 | 0.0611572 | -0.01923 | MB |\r\n| Heap used for points | | 0.270901 | 0.0695763 | -0.20132 | MB |\r\n| Heap used for stored fields | | 0.789207 | 0.223724 | -0.56548 | MB |\r\n| Segment count | | 105 | 82 | -23 | |\r\n| Min Throughput | index-append | 28694.3 | 15381.4 | -13312.9 | docs/s |\r\n| Median Throughput | index-append | 29199.8 | 20549.2 | -8650.53 | docs/s |\r\n| Max Throughput | index-append | 30244.4 | 28250.8 | -1993.56 | docs/s |\r\n| 50th percentile latency | index-append | 1189.23 | 1943.61 | 754.38 | ms |\r\n| 90th percentile latency | index-append | 1623.65 | 5461.84 | 3838.19 | ms |\r\n| 99th percentile latency | index-append | 2810.07 | 11537.4 | 8727.37 | ms |\r\n| 99.9th percentile latency | index-append | 3575.99 | 37390.4 | 33814.4 | ms |\r\n| 100th percentile latency | index-append | 3876.73 | 60040 | 56163.3 | ms |\r\n| 50th percentile service time | index-append | 1189.23 | 1943.61 | 754.38 | ms |\r\n| 90th percentile service time | index-append | 1623.65 | 5461.84 | 3838.19 | ms |\r\n| 99th percentile service time | index-append | 2810.07 | 11537.4 | 8727.37 | ms |\r\n| 99.9th percentile service time | index-append | 3575.99 | 37390.4 | 33814.4 | ms |\r\n| 100th percentile service time | index-append | 3876.73 | 60040 | 56163.3 | ms |\r\n| error rate | index-append | 0 | 0.0621504 | 0.06215 | % |\r\n\r\n\r\n```\r\n\r\nnote: baseline is with the change",
"created_at": "2017-11-24T11:00:41Z"
},
{
"body": "@jpountz I think this closes https://github.com/elastic/elasticsearch/issues/20498 WDYT?",
"created_at": "2017-11-24T13:44:15Z"
},
{
"body": "Agreed it does.",
"created_at": "2017-11-24T13:54:07Z"
},
{
"body": "This is great. I wonder if we should reset the map during synced flush to make sure we free resources when we go idle (there are other options, but I think this is the simplest).",
"created_at": "2017-11-26T09:16:16Z"
},
{
"body": "@bleskes I opened https://github.com/elastic/elasticsearch/pull/27534",
"created_at": "2017-11-27T10:56:49Z"
}
],
"number": 27516,
"title": "Carry over version map size to prevent excessive resizing"
} | {
"body": "Today we carry on the size of the live version map to ensure that\r\nwe minimze rehashing. Yet, once we are idle or we can issue a sync-commit\r\nwe can resize it to defaults to free up memory.\r\n\r\nRelates to #27516",
"number": 27534,
"review_comments": [
{
"body": "can we add a check that all deletes are tracked in the tombstone?",
"created_at": "2017-11-30T10:01:22Z"
},
{
"body": "DONE!",
"created_at": "2017-11-30T19:43:51Z"
},
{
"body": "THANKS! ;)",
"created_at": "2017-11-30T20:37:46Z"
}
],
"title": "Reset LiveVersionMap on sync commit"
} | {
"commits": [
{
"message": "Reset LiveVersionMap on sync commit\n\nToday we carry on the size of the live version map to ensure that\nwe minimze rehashing. Yet, once we are idle or we can issue a sync-commit\nwe can resize it to defaults to free up memory.\n\nRelates to #27516"
},
{
"message": "move the flush into the right place"
},
{
"message": "Merge branch 'master' into empty_map_on_sync_flush"
},
{
"message": "check tombstones"
}
],
"files": [
{
"diff": "@@ -44,6 +44,24 @@ public long ramBytesUsed() {\n return BASE_RAM_BYTES_USED;\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (o == null || getClass() != o.getClass()) return false;\n+ if (!super.equals(o)) return false;\n+\n+ DeleteVersionValue that = (DeleteVersionValue) o;\n+\n+ return time == that.time;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int result = super.hashCode();\n+ result = 31 * result + (int) (time ^ (time >>> 32));\n+ return result;\n+ }\n+\n @Override\n public String toString() {\n return \"DeleteVersionValue{\" +",
"filename": "core/src/main/java/org/elasticsearch/index/engine/DeleteVersionValue.java",
"status": "modified"
},
{
"diff": "@@ -560,7 +560,7 @@ public GetResult get(Get get, BiFunction<String, SearcherScope, Searcher> search\n ensureOpen();\n SearcherScope scope;\n if (get.realtime()) {\n- VersionValue versionValue = versionMap.getUnderLock(get.uid());\n+ VersionValue versionValue = versionMap.getUnderLock(get.uid().bytes());\n if (versionValue != null) {\n if (versionValue.isDelete()) {\n return GetResult.NOT_EXISTS;\n@@ -598,7 +598,7 @@ enum OpVsLuceneDocStatus {\n private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnSeqNo(final Operation op) throws IOException {\n assert op.seqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO : \"resolving ops based on seq# but no seqNo is found\";\n final OpVsLuceneDocStatus status;\n- final VersionValue versionValue = versionMap.getUnderLock(op.uid());\n+ final VersionValue versionValue = versionMap.getUnderLock(op.uid().bytes());\n assert incrementVersionLookup();\n if (versionValue != null) {\n if (op.seqNo() > versionValue.seqNo ||\n@@ -635,7 +635,7 @@ private OpVsLuceneDocStatus compareOpToLuceneDocBasedOnSeqNo(final Operation op)\n /** resolves the current version of the document, returning null if not found */\n private VersionValue resolveDocVersion(final Operation op) throws IOException {\n assert incrementVersionLookup(); // used for asserting in tests\n- VersionValue versionValue = versionMap.getUnderLock(op.uid());\n+ VersionValue versionValue = versionMap.getUnderLock(op.uid().bytes());\n if (versionValue == null) {\n assert incrementIndexVersionLookup(); // used for asserting in tests\n final long currentVersion = loadCurrentVersionFromIndex(op.uid());\n@@ -1048,7 +1048,7 @@ static IndexingStrategy processButSkipLucene(boolean currentNotFoundOrDeleted,\n * Asserts that the doc in the index operation really doesn't exist\n */\n private boolean assertDocDoesNotExist(final Index index, final boolean allowDeleted) throws IOException {\n- final VersionValue versionValue = versionMap.getUnderLock(index.uid());\n+ final VersionValue versionValue = versionMap.getUnderLock(index.uid().bytes());\n if (versionValue != null) {\n if (versionValue.isDelete() == false || allowDeleted == false) {\n throw new AssertionError(\"doc [\" + index.type() + \"][\" + index.id() + \"] exists in version map (version \" + versionValue + \")\");\n@@ -1376,6 +1376,8 @@ public SyncedFlushResult syncFlush(String syncId, CommitId expectedCommitId) thr\n commitIndexWriter(indexWriter, translog, syncId);\n logger.debug(\"successfully sync committed. sync id [{}].\", syncId);\n lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo();\n+ // we are guaranteed to have no operations in the version map here!\n+ versionMap.adjustMapSizeUnderLock();\n return SyncedFlushResult.SUCCESS;\n } catch (IOException ex) {\n maybeFailEngine(\"sync commit\", ex);",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.engine;\n \n-import org.apache.lucene.index.Term;\n import org.apache.lucene.search.ReferenceManager;\n import org.apache.lucene.util.Accountable;\n import org.apache.lucene.util.BytesRef;\n@@ -35,6 +34,18 @@\n /** Maps _uid value to its version information. */\n class LiveVersionMap implements ReferenceManager.RefreshListener, Accountable {\n \n+ /**\n+ * Resets the internal map and adjusts it's capacity as if there were no indexing operations.\n+ * This must be called under write lock in the engine\n+ */\n+ void adjustMapSizeUnderLock() {\n+ if (maps.current.isEmpty() == false || maps.old.isEmpty() == false) {\n+ assert false : \"map must be empty\"; // fail hard if not empty and fail with assertion in tests to ensure we never swallow it\n+ throw new IllegalStateException(\"map must be empty\");\n+ }\n+ maps = new Maps();\n+ }\n+\n private static class Maps {\n \n // All writes (adds and deletes) go into here:\n@@ -50,7 +61,7 @@ private static class Maps {\n \n Maps() {\n this(ConcurrentCollections.<BytesRef,VersionValue>newConcurrentMapWithAggressiveConcurrency(),\n- ConcurrentCollections.<BytesRef,VersionValue>newConcurrentMapWithAggressiveConcurrency());\n+ Collections.emptyMap());\n }\n }\n \n@@ -121,21 +132,21 @@ public void afterRefresh(boolean didRefresh) throws IOException {\n }\n \n /** Returns the live version (add or delete) for this uid. */\n- VersionValue getUnderLock(final Term uid) {\n+ VersionValue getUnderLock(final BytesRef uid) {\n Maps currentMaps = maps;\n \n // First try to get the \"live\" value:\n- VersionValue value = currentMaps.current.get(uid.bytes());\n+ VersionValue value = currentMaps.current.get(uid);\n if (value != null) {\n return value;\n }\n \n- value = currentMaps.old.get(uid.bytes());\n+ value = currentMaps.old.get(uid);\n if (value != null) {\n return value;\n }\n \n- return tombstones.get(uid.bytes());\n+ return tombstones.get(uid);\n }\n \n /** Adds this uid/version to the pending adds map. */\n@@ -250,4 +261,8 @@ public Collection<Accountable> getChildResources() {\n // TODO: useful to break down RAM usage here?\n return Collections.emptyList();\n }\n-}\n+\n+ /** Returns the current internal versions as a point in time snapshot*/\n+ Map<BytesRef, VersionValue> getAllCurrent() {\n+ return maps.current;\n+ }}",
"filename": "core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java",
"status": "modified"
},
{
"diff": "@@ -57,10 +57,31 @@ public Collection<Accountable> getChildResources() {\n return Collections.emptyList();\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (o == null || getClass() != o.getClass()) return false;\n+\n+ VersionValue that = (VersionValue) o;\n+\n+ if (version != that.version) return false;\n+ if (seqNo != that.seqNo) return false;\n+ return term == that.term;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int result = (int) (version ^ (version >>> 32));\n+ result = 31 * result + (int) (seqNo ^ (seqNo >>> 32));\n+ result = 31 * result + (int) (term ^ (term >>> 32));\n+ return result;\n+ }\n+\n @Override\n public String toString() {\n return \"VersionValue{\" +\n \"version=\" + version +\n+\n \", seqNo=\" + seqNo +\n \", term=\" + term +\n '}';",
"filename": "core/src/main/java/org/elasticsearch/index/engine/VersionValue.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,25 @@\n \n package org.elasticsearch.index.engine;\n \n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n import org.apache.lucene.util.RamUsageTester;\n import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Assertions;\n import org.elasticsearch.bootstrap.JavaVersion;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.util.concurrent.KeyedLock;\n import org.elasticsearch.test.ESTestCase;\n \n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.ConcurrentHashMap;\n+import java.util.concurrent.CountDownLatch;\n+\n public class LiveVersionMapTests extends ESTestCase {\n \n public void testRamBytesUsed() throws Exception {\n@@ -57,4 +70,151 @@ public void testRamBytesUsed() throws Exception {\n assertEquals(actualRamBytesUsed, estimatedRamBytesUsed, actualRamBytesUsed / 4);\n }\n \n+ private BytesRef uid(String string) {\n+ BytesRefBuilder builder = new BytesRefBuilder();\n+ builder.copyChars(string);\n+ // length of the array must be the same as the len of the ref... there is an assertion in LiveVersionMap#putUnderLock\n+ return BytesRef.deepCopyOf(builder.get());\n+ }\n+\n+ public void testBasics() throws IOException {\n+ LiveVersionMap map = new LiveVersionMap();\n+ map.putUnderLock(uid(\"test\"), new VersionValue(1,1,1));\n+ assertEquals(new VersionValue(1,1,1), map.getUnderLock(uid(\"test\")));\n+ map.beforeRefresh();\n+ assertEquals(new VersionValue(1,1,1), map.getUnderLock(uid(\"test\")));\n+ map.afterRefresh(randomBoolean());\n+ assertNull(map.getUnderLock(uid(\"test\")));\n+\n+\n+ map.putUnderLock(uid(\"test\"), new DeleteVersionValue(1,1,1, Long.MAX_VALUE));\n+ assertEquals(new DeleteVersionValue(1,1,1, Long.MAX_VALUE), map.getUnderLock(uid(\"test\")));\n+ map.beforeRefresh();\n+ assertEquals(new DeleteVersionValue(1,1,1, Long.MAX_VALUE), map.getUnderLock(uid(\"test\")));\n+ map.afterRefresh(randomBoolean());\n+ assertEquals(new DeleteVersionValue(1,1,1, Long.MAX_VALUE), map.getUnderLock(uid(\"test\")));\n+ map.removeTombstoneUnderLock(uid(\"test\"));\n+ assertNull(map.getUnderLock(uid(\"test\")));\n+ }\n+\n+\n+ public void testAdjustMapSizeUnderLock() throws IOException {\n+ LiveVersionMap map = new LiveVersionMap();\n+ map.putUnderLock(uid(\"test\"), new VersionValue(1,1,1));\n+ boolean withinRefresh = randomBoolean();\n+ if (withinRefresh) {\n+ map.beforeRefresh();\n+ }\n+ assertEquals(new VersionValue(1,1,1), map.getUnderLock(uid(\"test\")));\n+ final String msg;\n+ if (Assertions.ENABLED) {\n+ msg = expectThrows(AssertionError.class, map::adjustMapSizeUnderLock).getMessage();\n+ } else {\n+ msg = expectThrows(IllegalStateException.class, map::adjustMapSizeUnderLock).getMessage();\n+ }\n+ assertEquals(\"map must be empty\", msg);\n+ assertEquals(new VersionValue(1,1,1), map.getUnderLock(uid(\"test\")));\n+ if (withinRefresh == false) {\n+ map.beforeRefresh();\n+ }\n+ map.afterRefresh(randomBoolean());\n+ Map<BytesRef, VersionValue> allCurrent = map.getAllCurrent();\n+ map.adjustMapSizeUnderLock();\n+ assertNotSame(allCurrent, map.getAllCurrent());\n+ }\n+\n+ public void testConcurrently() throws IOException, InterruptedException {\n+ HashSet<BytesRef> keySet = new HashSet<>();\n+ int numKeys = randomIntBetween(50, 200);\n+ for (int i = 0; i < numKeys; i++) {\n+ keySet.add(uid(TestUtil.randomSimpleString(random(), 10, 20)));\n+ 
}\n+ List<BytesRef> keyList = new ArrayList<>(keySet);\n+ ConcurrentHashMap<BytesRef, VersionValue> values = new ConcurrentHashMap<>();\n+ KeyedLock<BytesRef> keyedLock = new KeyedLock<>();\n+ LiveVersionMap map = new LiveVersionMap();\n+ int numThreads = randomIntBetween(2, 5);\n+\n+ Thread[] threads = new Thread[numThreads];\n+ CountDownLatch startGun = new CountDownLatch(numThreads);\n+ CountDownLatch done = new CountDownLatch(numThreads);\n+ int randomValuesPerThread = randomIntBetween(5000, 20000);\n+ for (int j = 0; j < threads.length; j++) {\n+ threads[j] = new Thread(() -> {\n+ startGun.countDown();\n+ try {\n+ startGun.await();\n+ } catch (InterruptedException e) {\n+ done.countDown();\n+ throw new AssertionError(e);\n+ }\n+ try {\n+ for (int i = 0; i < randomValuesPerThread; ++i) {\n+ BytesRef bytesRef = randomFrom(random(), keyList);\n+ try (Releasable r = keyedLock.acquire(bytesRef)) {\n+ VersionValue versionValue = values.computeIfAbsent(bytesRef,\n+ v -> new VersionValue(randomLong(), randomLong(), randomLong()));\n+ boolean isDelete = versionValue instanceof DeleteVersionValue;\n+ if (isDelete) {\n+ map.removeTombstoneUnderLock(bytesRef);\n+ }\n+ if (isDelete == false && rarely()) {\n+ versionValue = new DeleteVersionValue(versionValue.version + 1, versionValue.seqNo + 1,\n+ versionValue.term, Long.MAX_VALUE);\n+ } else {\n+ versionValue = new VersionValue(versionValue.version + 1, versionValue.seqNo + 1, versionValue.term);\n+ }\n+ values.put(bytesRef, versionValue);\n+ map.putUnderLock(bytesRef, versionValue);\n+ }\n+ }\n+ } finally {\n+ done.countDown();\n+ }\n+ });\n+ threads[j].start();\n+\n+\n+ }\n+ do {\n+ Map<BytesRef, VersionValue> valueMap = new HashMap<>(map.getAllCurrent());\n+ map.beforeRefresh();\n+ valueMap.forEach((k, v) -> {\n+ VersionValue actualValue = map.getUnderLock(k);\n+ assertNotNull(actualValue);\n+ assertTrue(v.version <= actualValue.version);\n+ });\n+ map.afterRefresh(randomBoolean());\n+ valueMap.forEach((k, v) -> {\n+ VersionValue actualValue = map.getUnderLock(k);\n+ if (actualValue != null) {\n+ if (actualValue instanceof DeleteVersionValue) {\n+ assertTrue(v.version <= actualValue.version); // deletes can be the same version\n+ } else {\n+ assertTrue(v.version < actualValue.version);\n+ }\n+\n+ }\n+ });\n+ if (randomBoolean()) {\n+ Thread.yield();\n+ }\n+ } while (done.getCount() != 0);\n+\n+ for (int j = 0; j < threads.length; j++) {\n+ threads[j].join();\n+ }\n+ map.getAllCurrent().forEach((k, v) -> {\n+ VersionValue versionValue = values.get(k);\n+ assertNotNull(versionValue);\n+ assertEquals(v, versionValue);\n+ });\n+\n+ map.getAllTombstones().forEach(e -> {\n+ VersionValue versionValue = values.get(e.getKey());\n+ assertNotNull(versionValue);\n+ assertEquals(e.getValue(), versionValue);\n+ assertTrue(versionValue instanceof DeleteVersionValue);\n+ });\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/engine/LiveVersionMapTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): \r\n\r\n```\r\n# rpm -qa |grep elasticsearch\r\nelasticsearch-5.6.2-1.noarch\r\n```\r\n**Plugins installed**:\r\n\r\n```\r\ndiscovery-ec2\r\nrepository-s3\r\nx-pack\r\n```\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\n# java -version\r\njava version \"1.8.0_141\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_141-b15)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)\r\n```\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\n```\r\nFedora 26\r\nLinux 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWe have had about twenty indexes that are stuck in a red state after trying to restore a snapshot taken from elasticsearch `5.4.1` to a brand new cluster running `5.6.2`. For this issue, I will focus on one index `logstash-2017.09.20`. \r\n\r\nYou can see here that the index is in a red state:\r\n\r\n```\r\n# curl -XGET 'localhost:9200/_cluster/health/logstash-2017.09.20?level=shards&pretty'\r\n{\r\n \"cluster_name\" : \"redacted\",\r\n \"status\" : \"red\",\r\n \"timed_out\" : false,\r\n \"number_of_nodes\" : 11,\r\n \"number_of_data_nodes\" : 5,\r\n \"active_primary_shards\" : 4,\r\n \"active_shards\" : 4,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1,\r\n \"delayed_unassigned_shards\" : 0,\r\n \"number_of_pending_tasks\" : 0,\r\n \"number_of_in_flight_fetch\" : 0,\r\n \"task_max_waiting_in_queue_millis\" : 0,\r\n \"active_shards_percent_as_number\" : 98.60064585575888,\r\n \"indices\" : {\r\n \"logstash-2017.09.20\" : {\r\n \"status\" : \"red\",\r\n \"number_of_shards\" : 5,\r\n \"number_of_replicas\" : 0,\r\n \"active_primary_shards\" : 4,\r\n \"active_shards\" : 4,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1,\r\n \"shards\" : {\r\n \"0\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"1\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"2\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"3\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"4\" : {\r\n \"status\" : \"red\",\r\n \"primary_active\" : false,\r\n \"active_shards\" : 0,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nYou can see the restore says it finished with a SUCCESS:\r\n\r\n```\r\n# curl -XGET 'localhost:9200/_snapshot/my_cool_backup/snapshot_0?pretty'\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snapshot_0\",\r\n \"uuid\" : \"e_wavyGfTD-SwXC-imkF0g\",\r\n \"version_id\" : 5040199,\r\n \"version\" : \"5.4.1\",\r\n \"indices\" : [\r\n ** SNIP **\r\n ],\r\n \"state\" : \"SUCCESS\",\r\n \"start_time\" : \"2017-09-27T07:00:01.807Z\",\r\n \"start_time_in_millis\" : 1506495601807,\r\n \"end_time\" : 
\"2017-09-27T08:44:35.377Z\",\r\n \"end_time_in_millis\" : 1506501875377,\r\n \"duration_in_millis\" : 6273570,\r\n \"failures\" : [ ],\r\n \"shards\" : {\r\n \"total\" : 929,\r\n \"failed\" : 0,\r\n \"successful\" : 929\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\n\r\nLooking at the restore process in detail for the example index, you can see that it says this index has been put into the DONE state for each shard. \r\n\r\n```\r\n$ curl -XGET 'localhost:9200/_snapshot/my_cool_backup/snapshot_0/_status?pretty'\r\n\"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snapshot_0\",\r\n \"repository\" : \"my_cool_backup\",\r\n \"uuid\" : \"e_wavyGfTD-SwXC-imkF0g\",\r\n \"state\" : \"SUCCESS\",\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 929,\r\n \"failed\" : 0,\r\n \"total\" : 929\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 2364,\r\n \"processed_files\" : 2364,\r\n \"total_size_in_bytes\" : 15393945691,\r\n \"processed_size_in_bytes\" : 15393945691,\r\n \"start_time_in_millis\" : 1506495618226,\r\n \"time_in_millis\" : 6252967\r\n },\r\n \"indices\" : {\r\n \"logstash-2017.09.20\" : {\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 5,\r\n \"failed\" : 0,\r\n \"total\" : 5\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 31,\r\n \"processed_files\" : 31,\r\n \"total_size_in_bytes\" : 168664,\r\n \"processed_size_in_bytes\" : 168664,\r\n \"start_time_in_millis\" : 1506495678150,\r\n \"time_in_millis\" : 2401656\r\n },\r\n \"shards\" : {\r\n \"0\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 7,\r\n \"processed_files\" : 7,\r\n \"total_size_in_bytes\" : 118135,\r\n \"processed_size_in_bytes\" : 118135,\r\n \"start_time_in_millis\" : 1506495720316,\r\n \"time_in_millis\" : 1949\r\n }\r\n },\r\n \"1\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 16,\r\n \"processed_files\" : 16,\r\n \"total_size_in_bytes\" : 33918,\r\n \"processed_size_in_bytes\" : 33918,\r\n \"start_time_in_millis\" : 1506495722992,\r\n \"time_in_millis\" : 2804\r\n }\r\n },\r\n \"2\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 1506498067865,\r\n \"time_in_millis\" : 11941\r\n }\r\n },\r\n \"3\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 4,\r\n \"processed_files\" : 4,\r\n \"total_size_in_bytes\" : 8434,\r\n \"processed_size_in_bytes\" : 8434,\r\n \"start_time_in_millis\" : 1506495678150,\r\n \"time_in_millis\" : 1206\r\n }\r\n },\r\n \"4\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 4,\r\n \"processed_files\" : 4,\r\n \"total_size_in_bytes\" : 8177,\r\n \"processed_size_in_bytes\" : 8177,\r\n \"start_time_in_millis\" : 1506495684287,\r\n \"time_in_millis\" : 1164\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nLooking at `/_cat/recovery` it says it's done too\r\n```\r\n# curl -XGET localhost:9200/_cat/recovery|grep logstash-2017.09.20\r\n\r\nlogstash-2017.09.20 0 7.9s snapshot done n/a n/a redacted data-03 my_cool_backup snapshot_0 1 1 100.0% 109 1699 1699 100.0% 2911728303 0 0 100.0%\r\nlogstash-2017.09.20 1 14.5m snapshot done n/a n/a redacted data-04 my_cool_backup snapshot_0 136 136 100.0% 136 2842065772 2842065772 100.0% 2842065772 0 0 100.0%\r\nlogstash-2017.09.20 2 1.7s snapshot done n/a n/a redacted data-00 my_cool_backup snapshot_0 1 1 
100.0% 109 1699 1699 100.0% 2889504028 0 0 100.0%\r\nlogstash-2017.09.20 3 13.9m snapshot done n/a n/a redacted data-02 my_cool_backup snapshot_0 127 127 100.0% 127 2929823683 2929823683 100.0% 2929823683 0 0 100.0%\r\n```\r\n\r\nBut if you try to close the index it says that it is still being restored:\r\n\r\n```\r\n$ curl -XPOST 'localhost:9200/logstash-2017.09.20/_close?pretty'\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"remote_transport_exception\",\r\n \"reason\" : \"[master-01][redacted:9300][indices:admin/close]\"\r\n }\r\n ],\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"Cannot close indices that are being restored: [[logstash-2017.09.20/crXjrjtwTEqkK6_ITG1HVQ]]\"\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nLooking in the logs it says that it failed to recover the index because the file already exists:\r\n\r\n```\r\n[2017-10-02T19:50:28,790][WARN ][o.e.c.a.s.ShardStateAction] [master-01] [logstash-2017.09.20][4] received shard failed for shard id [[logstash-2017.09.20][4]], allocation id [lW_4BSVGSc6phnI1vLEPWg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[logstash-2017.09.20][4]: Recovery failed on {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFLjPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=true}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: FileAlreadyExistsException[/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si]; ]\r\n\r\n[2017-10-02T19:50:28,790][WARN ][o.e.c.a.s.ShardStateAction] [master-01] [logstash-2017.09.20][4] received shard failed for shard id [[logstash-2017.09.20][4]\r\n], allocation id [lW_4BSVGSc6phnI1vLEPWg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[logstash-2017.09.20][4]: Recovery failed \r\non {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFLjPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=\r\ntrue}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[fa\r\niled to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: FileAlre\r\nadyExistsException[/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si]; ]\r\norg.elasticsearch.indices.recovery.RecoveryFailedException: [logstash-2017.09.20][4]: Recovery failed on {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFL\r\njPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=true}\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1511) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_141]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_141]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]\r\nCaused by: org.elasticsearch.index.shard.IndexShardRecoveryException: 
failed recovery\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:299) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:405) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:993) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 
4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1679) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:991) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: java.nio.file.FileAlreadyExistsException: /var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si\r\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) ~[?:?]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]\r\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]\r\n at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434) ~[?:1.8.0_141]\r\n at java.nio.file.Files.newOutputStream(Files.java:216) ~[?:1.8.0_141]\r\n at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.RateLimitedFSDirectory.createOutput(RateLimitedFSDirectory.java:40) ~[elasticsearch-5.6.2.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FilterDirectory.createOutput(FilterDirectory.java:73) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.elasticsearch.index.store.Store.createVerifyingOutput(Store.java:463) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1734) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1676) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:991) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at 
org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\n```\r\n\r\nAnd if you look on for that file it says is already exists, it is not present on the data node:\r\n\r\n```\r\n# ll /var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si\r\nls: cannot access '/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si': No such file or directory\r\n```\r\n\r\nThe only way I have been able to get the cluster out of this hung state is to do a full cluster shutdown and start it back up again. From there I am able to close these red indexes and retry the restore again. When I first encountered this issue, I had ~20 indexes that failed to restore. After retrying to restore these failures with the process above, I was able to get all but seven of them restored. The remaining failures are in the same state.\r\n",
"comments": [
{
"body": "That sounds like two problems to me:\r\n\r\n* State handling during recovery seems to be inconsistent / not to agree\r\n* File system issues\r\n\r\nCan you please tell which file system you've used? Also, as you are on EC2: Did you configure EBS volumes or instance storage on the nodes?",
"created_at": "2017-10-04T16:01:10Z"
},
{
"body": "Also @imotov may have further ideas.",
"created_at": "2017-10-04T16:01:22Z"
},
{
"body": "These are on AWS I3 servers with NVMe SSD instance storage. We are using XFS with LUKS on these disks. ",
"created_at": "2017-10-04T16:13:48Z"
},
{
"body": "Thanks for the feedback. I also talked to @imotov. As this is about S3 snapshot could you please have a look @tlrx?",
"created_at": "2017-10-05T07:48:13Z"
},
{
"body": "There is a bit of confusion between the snapshot and the restore APIs on this issue:\r\n\r\n@jdoss When you say\r\n\r\n> You can see the restore says it finished with a SUCCESS:\r\n\r\nyou're actually showing the result of the (successfully completed) snapshotting process, not the restore process (same mistake for showing the details).\r\n\r\nThe `/_cat/recovery` output also is consistent with the cluster health. It shows that shards 0 to 3 have successfully recovered. Shard 4 (the one causing the cluster health to be red) is not reported as done.\r\n\r\nFrom the output shown it is not clear that the restore process is stuck. Note that we don't allow an index that is being restored to be closed. However, you can delete this index, which will also abort the restore process (same as when you delete a snapshot that's in progress, it will abort the snapshot).\r\n\r\nThe bug you're hitting here is the `FileAlreadyExistsException`, which we've seen already on other reports:\r\nhttps://discuss.elastic.co/t/snapshot-restore-failed-recovery-of-index-getting-filealreadyexistsexception/100300\r\n\r\nCould you perhaps share the snapshot (privately) with us?\r\n\r\n@danielmitterdorfer I have my doubts that this is S3 related.",
"created_at": "2017-10-05T08:24:03Z"
},
{
"body": "@jdoss access to snapshots would really help, but if this is not possible would you be able to try reproducing this issue with additional logging enabled and send us the logs files?",
"created_at": "2017-10-06T15:26:43Z"
},
{
"body": "> you're actually showing the result of the (successfully completed) snapshotting process, not the restore process (same mistake for showing the details).\r\n\r\n@ywelsch I was following the documentation https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html#_monitoring_snapshot_restore_progress which states to use the \r\n\r\n`curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1?pretty'`\r\n\r\nand \r\n\r\n`curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1/status?pretty'`\r\n\r\nWhich is pretty confusing mashing the snapshot and recovery status documentation together. Re-reading the whole section I see I misunderstood things and I should have been using the indices recovery and cat recovery APIs.\r\n\r\nI do wish it was easier to see what is going on with a restore and having the snapshot status documentation crammed together with the restore documentation is confusing. I wish there was a better method to see what is going on with a specific restore and a better method on stopping a restore. I have nuked snapshots from S3 misunderstanding that the DELETE method used for stopping a snapshot does not work on restores. It is good to know that you can just delete the index on the cluster to stop the restore. \r\n\r\nIt would be nice to be able to ping a restore API to see all this information and to stop a restore vs using the recovery APIs. I was looking for something hat showed a clear status of the recovery and confused the snapshot status endpoint as something that worked with the recovery of a snapshot. My bad.\r\n\r\n@imotov email me at jdoss *at* kennasecurity.com and I will talk to my higher ups about getting you this snapshot.",
"created_at": "2017-10-06T19:46:53Z"
},
{
"body": "@jdoss I think I might actually get by with just 2 files from your snapshot repository that contain no actual data (just a list of files that index consisted of at the time of the snapshot, their sizes and checksums). The files I am interested in are `indices/logstash-2017.09.20/4/index-*` (it might be also located in `indices/crXjrjtwTEqkK6_ITG1HVQ/4/index-*`) and `snap-snapshot_0.dat` or `snap-e_wavyGfTD-SwXC-imkF0g.dat` from the same directory as `index-*`. Could you send these two files to igor at elastic.co?",
"created_at": "2017-10-09T14:09:08Z"
},
{
"body": "@imotov I have sent you the requested files. ",
"created_at": "2017-10-09T15:54:17Z"
},
{
"body": "I was finally able to see a reproduction of this issue with enough trace logging to figure out what's going on. It looks like in the case that I was able to observe, the `FileAlreadyExists` exception was the secondary issue on that was triggered by a previous failure (missing blob in the repository in the case that I was able to observe). If you still have the log files from this failure around, can you see if there are any exceptions for the same shard prior to the `FileAlreadyExists`.",
"created_at": "2017-10-25T11:12:34Z"
},
{
"body": "@tlrx this is the issue we talked about earlier today. ",
"created_at": "2017-11-03T23:41:32Z"
},
{
"body": "Hi, I'd like to ask which version contains this fix. Thanks.",
"created_at": "2018-09-16T05:24:52Z"
},
{
"body": "Please see the version labels in the corresponding pull request https://github.com/elastic/elasticsearch/pull/27493: 5.6.6 is the earliest version in the 5.x series that contains this fix.",
"created_at": "2018-09-17T05:00:51Z"
},
{
"body": "Thanks, @danielmitterdorfer. Appreciate it. Can I also ask if this affects the S3 destination only or the Shared FS as well?",
"created_at": "2018-09-17T22:11:16Z"
}
],
"number": 26865,
"title": "Restoring a snapshot from S3 to 5.6.2 results in a hung and incomplete restore. "
} | {
"body": "When the allocation of a shard has been retried too many times, the\r\n`MaxRetryDecider` is engaged to prevent any future allocation of the\r\nfailed shard. If it happens while restoring a snapshot, the restore\r\nhangs and never completes because it stays around waiting for the\r\nshards to be assigned. It also blocks future attempts to restore the\r\nsnapshot again.\r\n\r\nThis commit changes the current behaviour in order to fail the restore if\r\na shard reached the maximum allocations attempts without being successfully\r\nassigned.\r\n\r\nThis is the second part of the #26865 issue.",
"number": 27493,
"review_comments": [
{
"body": "I think the solution needs to be more generic than depending on the settings of specific allocation deciders. I think we can use `unassignedInfo.getLastAllocationStatus` for that and check if it is `DECIDERS_NO`.",
"created_at": "2017-11-22T16:34:07Z"
},
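The review comment above suggests keying the decision off `UnassignedInfo#getLastAllocationStatus()` rather than the settings of a specific decider. A rough sketch of that check, assuming the 5.x/6.x routing APIs named in the comment and an invented helper class (the loop and failure handling are not the actual implementation):

```java
import java.util.List;
import java.util.stream.Collectors;

import org.elasticsearch.cluster.routing.RecoverySource;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.UnassignedInfo;

final class StuckRestoreCheckSketch {

    // Collects the snapshot-recovering shards that the allocation deciders have
    // definitively refused to place anywhere; a restore waiting on any of these
    // entries could then be failed instead of hanging.
    static List<ShardRouting> shardsBlockingRestore(RoutingTable routingTable) {
        return routingTable.shardsWithState(ShardRoutingState.UNASSIGNED).stream()
            .filter(shard -> shard.recoverySource() != null
                && shard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT)
            .filter(shard -> shard.unassignedInfo() != null
                && shard.unassignedInfo().getLastAllocationStatus() == UnassignedInfo.AllocationStatus.DECIDERS_NO)
            .collect(Collectors.toList());
    }
}
```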
{
"body": "That's a good suggestion as I suppose that a restore can also be stuck because the deciders cannot assign the shard (no enough space on disk, awareness rules forbid allocation etc). I also like it to be more generic.\r\n\r\nI think I can give it a try by reverted portion of code and override `unassignedInfoUpdated()`... I'll push something if it works.",
"created_at": "2017-11-22T17:57:14Z"
},
{
"body": "I think instead of adding more types here (unassignedShards), better we do the reverse and fold failedShards, startedShards and unassignedShards into just \"updates\".\r\nIt's not worth separating them just to have this one assertion I've put there.",
"created_at": "2017-11-23T14:15:47Z"
},
{
"body": "I would just put \"shard could not be allocated on any of the nodes\"",
"created_at": "2017-11-23T14:21:20Z"
},
{
"body": "just choose a fixed index name, no need for randomization here :)",
"created_at": "2017-11-23T14:27:57Z"
},
{
"body": "isn't this the default?",
"created_at": "2017-11-23T14:29:06Z"
},
{
"body": "no need for allocation explain API. you can check all this directly on the cluster state that you get below. I also think that assertBusy won't be needed then.",
"created_at": "2017-11-23T14:32:06Z"
},
{
"body": "idem",
"created_at": "2017-11-23T14:32:42Z"
},
{
"body": "just ensureGreen()",
"created_at": "2017-11-23T14:33:56Z"
},
{
"body": "I think it's possible to share most of the code with the previous test by calling a generic method with two parameters:\r\n- restoreIndexSettings (which would set maxRetries in first case and filters in second case)\r\n- fixupAction (the action to run to fix the issue)",
"created_at": "2017-11-23T14:39:17Z"
},
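A hypothetical shape for the shared test method proposed above — one parameter for the restore-time index settings that provoke the unassignable shard and one for the fix-up action. This is only an outline of the suggestion, not the actual test code.

```java
import org.elasticsearch.common.settings.Settings;

// Hypothetical outline of the shared test helper; names and steps are invented.
abstract class UnrestorableUseCaseSketch {

    // restoreIndexSettings provokes the unassignable shard (exhausted max_retries in
    // one case, an impossible allocation filter in the other); fixUpAction is the step
    // that unblocks allocation before the restore is retried.
    void runUnrestorableUseCase(Settings restoreIndexSettings, Runnable fixUpAction) {
        // 1. snapshot an index, delete it, then restore it with restoreIndexSettings
        // 2. assert the restore is reported as failed instead of hanging
        fixUpAction.run(); // 3. fix the blockage (remove the filter, retry allocation, ...)
        // 4. restore again and assert the index becomes green
    }
}
```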
{
"body": "It is, thanks",
"created_at": "2017-11-24T08:27:49Z"
},
{
"body": "Pff I should have seen that... I agree, that would be better, thanks :)",
"created_at": "2017-11-24T11:27:22Z"
},
{
"body": "Right, we can use the cluster state in this case and it's even easier. But it seems that the assertBusy() is still needed as the cluster state change can take few miliseconds to propagate.",
"created_at": "2017-11-24T11:29:13Z"
},
{
"body": "I tried to do something like that, please let me know what you think.",
"created_at": "2017-11-24T11:29:39Z"
},
{
"body": "allocated **to**",
"created_at": "2017-11-24T12:50:38Z"
},
{
"body": "I would use the following message:\r\n\"ignored as shard is not being recovered from a snapshot\"\r\nand not have an explicit check for `shardRouting.primary() == false`. That case is automatically handled by this case too as replica shards are never recovered from snapshot (their recovery source is always PEER).",
"created_at": "2017-12-05T13:40:33Z"
},
{
"body": "\"close or delete the index\"\r\n\r\nI would also lowercase the \"reroute API\"",
"created_at": "2017-12-05T13:49:10Z"
},
{
"body": "this assertion is not correct I think.\r\nIf a restore for a shard fails 5 times, it's marked as completed only in one of the next cluster state updates (see cleanupRestoreState)",
"created_at": "2017-12-05T13:54:38Z"
},
{
"body": "just wondering if it's possible for `shardRestoreStatus` to be null.\r\nI think it can be if you restore from a snapshot, then the restore fails, and you retry another restore with a different subset of indices from that same snapshot. ",
"created_at": "2017-12-05T13:57:08Z"
},
{
"body": "I would write this check as\r\n```\r\nif (shardRestoreStatus.state().completed() == false) {\r\n```\r\n\r\nand then add an assertion that `shardRestoreStatus.state() != SUCCESS` (as the shard should have been moved to started and the recovery source cleaned up at that point).",
"created_at": "2017-12-05T14:00:39Z"
},
{
"body": "can you also add\r\n\r\n```\r\n@Override\r\n public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\r\n assert shardRouting.primary() : \"must not call canForceAllocatePrimary on a non-primary shard \" + shardRouting;\r\n return canAllocate(shardRouting, node, allocation);\r\n }\r\n```\r\n\r\nas this is a hard constraint with no exceptions",
"created_at": "2017-12-05T14:04:54Z"
},
{
"body": "only primaries can have a snapshot recovery source, so no need for this extra check here.",
"created_at": "2017-12-05T14:06:21Z"
},
{
"body": "Good catch",
"created_at": "2017-12-05T14:27:27Z"
},
{
"body": "Right, thanks",
"created_at": "2017-12-05T14:28:35Z"
},
{
"body": "The assertion asserts that the restore in progress for the current allocation is **not** completed, so I think it's good? It will be marked later as you noticed.",
"created_at": "2017-12-05T16:42:46Z"
},
{
"body": "> just wondering if it's possible for shardRestoreStatus to be null.\r\n> I think it can be if you restore from a snapshot, then the restore fails, and you retry another restore with a different subset of indices from that same snapshot.\r\n\r\nGood catch, thanks!",
"created_at": "2017-12-05T20:45:22Z"
},
{
"body": "Note: we talked about this and Yannick is right, this assertion can be problematic on more busy clusters if a reroute kicks in between the moment the restore completes and the moment the restore is removed from the cluster state by the CleanRestoreStateTaskExecutor",
"created_at": "2017-12-07T17:10:52Z"
},
{
"body": "can you also add the shardRestoreStatus state and the shard routing to the failure message here?",
"created_at": "2017-12-07T19:57:30Z"
},
{
"body": "Sure, will do once the current CI build is finished",
"created_at": "2017-12-07T20:08:11Z"
}
],
"title": "Fail restore when the shard allocations max retries count is reached"
} | {
"commits": [
{
"message": "Fail restore when the shard allocations max retries count is reached\n\nWhen the allocation of a shard has been retried too many times, the\nMaxRetryDecider is engaged to prevent any future allocation of the\nfailed shard. If it happens while restoring a snapshot, the restore\nhangs and never completes because it stays around waiting for the\nshards to be assigned. It also blocks future attempts to restore the\nsnapshot again.\n\nThis commit changes the current behavior in order to fail the restore if\na shard reached the maximum allocations attempts without being successfully\nassigned.\n\nThis is the second part of the #26865 issue.\n\ncloses #26865"
},
{
"message": "Apply feedback"
},
{
"message": "Add test"
},
{
"message": "Apply feedback"
},
{
"message": "add RestoreInProgressAllocationDecider"
},
{
"message": "Fix license"
},
{
"message": "Fix ClusterModuleTests"
},
{
"message": "Adapt SharedClusterSnapshotRestoreIT.testDataFileFailureDuringRestore"
},
{
"message": "Apply feedback"
},
{
"message": "Remove assertion"
},
{
"message": "Update ThrottlingAllocationTests"
},
{
"message": "Update assertion message"
}
],
"files": [
{
"diff": "@@ -54,6 +54,7 @@\n import org.elasticsearch.cluster.routing.allocation.decider.SameShardAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.RestoreInProgressAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.ParseField;\n@@ -191,6 +192,7 @@ public static Collection<AllocationDecider> createAllocationDeciders(Settings se\n addAllocationDecider(deciders, new EnableAllocationDecider(settings, clusterSettings));\n addAllocationDecider(deciders, new NodeVersionAllocationDecider(settings));\n addAllocationDecider(deciders, new SnapshotInProgressAllocationDecider(settings));\n+ addAllocationDecider(deciders, new RestoreInProgressAllocationDecider(settings));\n addAllocationDecider(deciders, new FilterAllocationDecider(settings, clusterSettings));\n addAllocationDecider(deciders, new SameShardAllocationDecider(settings, clusterSettings));\n addAllocationDecider(deciders, new DiskThresholdDecider(settings, clusterSettings));",
"filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java",
"status": "modified"
},
{
"diff": "@@ -48,6 +48,8 @@\n import java.util.function.Function;\n import java.util.stream.Collectors;\n \n+import static java.util.Collections.emptyList;\n+import static java.util.Collections.singletonList;\n import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;\n \n \n@@ -135,13 +137,14 @@ protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, Rout\n return newState;\n }\n \n+ // Used for testing\n public ClusterState applyFailedShard(ClusterState clusterState, ShardRouting failedShard) {\n- return applyFailedShards(clusterState, Collections.singletonList(new FailedShard(failedShard, null, null)),\n- Collections.emptyList());\n+ return applyFailedShards(clusterState, singletonList(new FailedShard(failedShard, null, null)), emptyList());\n }\n \n+ // Used for testing\n public ClusterState applyFailedShards(ClusterState clusterState, List<FailedShard> failedShards) {\n- return applyFailedShards(clusterState, failedShards, Collections.emptyList());\n+ return applyFailedShards(clusterState, failedShards, emptyList());\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,86 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing.allocation.decider;\n+\n+import org.elasticsearch.cluster.RestoreInProgress;\n+import org.elasticsearch.cluster.routing.RecoverySource;\n+import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.snapshots.Snapshot;\n+\n+/**\n+ * This {@link AllocationDecider} prevents shards that have failed to be\n+ * restored from a snapshot to be allocated.\n+ */\n+public class RestoreInProgressAllocationDecider extends AllocationDecider {\n+\n+ public static final String NAME = \"restore_in_progress\";\n+\n+ /**\n+ * Creates a new {@link RestoreInProgressAllocationDecider} instance from\n+ * given settings\n+ *\n+ * @param settings {@link Settings} to use\n+ */\n+ public RestoreInProgressAllocationDecider(Settings settings) {\n+ super(settings);\n+ }\n+\n+ @Override\n+ public Decision canAllocate(final ShardRouting shardRouting, final RoutingNode node, final RoutingAllocation allocation) {\n+ return canAllocate(shardRouting, allocation);\n+ }\n+\n+ @Override\n+ public Decision canAllocate(final ShardRouting shardRouting, final RoutingAllocation allocation) {\n+ final RecoverySource recoverySource = shardRouting.recoverySource();\n+ if (recoverySource == null || recoverySource.getType() != RecoverySource.Type.SNAPSHOT) {\n+ return allocation.decision(Decision.YES, NAME, \"ignored as shard is not being recovered from a snapshot\");\n+ }\n+\n+ final Snapshot snapshot = ((RecoverySource.SnapshotRecoverySource) recoverySource).snapshot();\n+ final RestoreInProgress restoresInProgress = allocation.custom(RestoreInProgress.TYPE);\n+\n+ if (restoresInProgress != null) {\n+ for (RestoreInProgress.Entry restoreInProgress : restoresInProgress.entries()) {\n+ if (restoreInProgress.snapshot().equals(snapshot)) {\n+ RestoreInProgress.ShardRestoreStatus shardRestoreStatus = restoreInProgress.shards().get(shardRouting.shardId());\n+ if (shardRestoreStatus != null && shardRestoreStatus.state().completed() == false) {\n+ assert shardRestoreStatus.state() != RestoreInProgress.State.SUCCESS : \"expected shard [\" + shardRouting\n+ + \"] to be in initializing state but got [\" + shardRestoreStatus.state() + \"]\";\n+ return allocation.decision(Decision.YES, NAME, \"shard is currently being restored\");\n+ }\n+ break;\n+ }\n+ }\n+ }\n+ return allocation.decision(Decision.NO, NAME, \"shard has failed to be restored from the snapshot [%s] because of [%s] - \" +\n+ \"manually close or delete the index [%s] in order to retry to restore the 
snapshot again or use the reroute API to force the \" +\n+ \"allocation of an empty primary shard\", snapshot, shardRouting.unassignedInfo().getDetails(), shardRouting.getIndexName());\n+ }\n+\n+ @Override\n+ public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n+ assert shardRouting.primary() : \"must not call canForceAllocatePrimary on a non-primary shard \" + shardRouting;\n+ return canAllocate(shardRouting, node, allocation);\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/RestoreInProgressAllocationDecider.java",
"status": "added"
},
{
"diff": "@@ -64,7 +64,6 @@\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n@@ -534,7 +533,7 @@ public void shardStarted(ShardRouting initializingShard, ShardRouting startedSha\n RecoverySource recoverySource = initializingShard.recoverySource();\n if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n Snapshot snapshot = ((SnapshotRecoverySource) recoverySource).snapshot();\n- changes(snapshot).startedShards.put(initializingShard.shardId(),\n+ changes(snapshot).shards.put(initializingShard.shardId(),\n new ShardRestoreStatus(initializingShard.currentNodeId(), RestoreInProgress.State.SUCCESS));\n }\n }\n@@ -550,7 +549,7 @@ public void shardFailed(ShardRouting failedShard, UnassignedInfo unassignedInfo)\n // to restore this shard on another node if the snapshot files are corrupt. In case where a node just left or crashed,\n // however, we only want to acknowledge the restore operation once it has been successfully restored on another node.\n if (unassignedInfo.getFailure() != null && Lucene.isCorruptionException(unassignedInfo.getFailure().getCause())) {\n- changes(snapshot).failedShards.put(failedShard.shardId(), new ShardRestoreStatus(failedShard.currentNodeId(),\n+ changes(snapshot).shards.put(failedShard.shardId(), new ShardRestoreStatus(failedShard.currentNodeId(),\n RestoreInProgress.State.FAILURE, unassignedInfo.getFailure().getCause().getMessage()));\n }\n }\n@@ -563,11 +562,24 @@ public void shardInitialized(ShardRouting unassignedShard, ShardRouting initiali\n if (unassignedShard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT &&\n initializedShard.recoverySource().getType() != RecoverySource.Type.SNAPSHOT) {\n Snapshot snapshot = ((SnapshotRecoverySource) unassignedShard.recoverySource()).snapshot();\n- changes(snapshot).failedShards.put(unassignedShard.shardId(), new ShardRestoreStatus(null,\n+ changes(snapshot).shards.put(unassignedShard.shardId(), new ShardRestoreStatus(null,\n RestoreInProgress.State.FAILURE, \"recovery source type changed from snapshot to \" + initializedShard.recoverySource()));\n }\n }\n \n+ @Override\n+ public void unassignedInfoUpdated(ShardRouting unassignedShard, UnassignedInfo newUnassignedInfo) {\n+ RecoverySource recoverySource = unassignedShard.recoverySource();\n+ if (recoverySource.getType() == RecoverySource.Type.SNAPSHOT) {\n+ if (newUnassignedInfo.getLastAllocationStatus() == UnassignedInfo.AllocationStatus.DECIDERS_NO) {\n+ Snapshot snapshot = ((SnapshotRecoverySource) recoverySource).snapshot();\n+ String reason = \"shard could not be allocated to any of the nodes\";\n+ changes(snapshot).shards.put(unassignedShard.shardId(),\n+ new ShardRestoreStatus(unassignedShard.currentNodeId(), RestoreInProgress.State.FAILURE, reason));\n+ }\n+ }\n+ }\n+\n /**\n * Helper method that creates update entry for the given shard id if such an entry does not exist yet.\n */\n@@ -576,25 +588,21 @@ private Updates changes(Snapshot snapshot) {\n }\n \n private static class Updates {\n- private Map<ShardId, ShardRestoreStatus> failedShards = new HashMap<>();\n- private Map<ShardId, ShardRestoreStatus> startedShards = new HashMap<>();\n+ private Map<ShardId, ShardRestoreStatus> shards = new HashMap<>();\n }\n \n- public RestoreInProgress 
applyChanges(RestoreInProgress oldRestore) {\n+ public RestoreInProgress applyChanges(final RestoreInProgress oldRestore) {\n if (shardChanges.isEmpty() == false) {\n final List<RestoreInProgress.Entry> entries = new ArrayList<>();\n for (RestoreInProgress.Entry entry : oldRestore.entries()) {\n Snapshot snapshot = entry.snapshot();\n Updates updates = shardChanges.get(snapshot);\n- assert Sets.haveEmptyIntersection(updates.startedShards.keySet(), updates.failedShards.keySet());\n- if (updates.startedShards.isEmpty() == false || updates.failedShards.isEmpty() == false) {\n+ if (updates.shards.isEmpty() == false) {\n ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());\n- for (Map.Entry<ShardId, ShardRestoreStatus> startedShardEntry : updates.startedShards.entrySet()) {\n- shardsBuilder.put(startedShardEntry.getKey(), startedShardEntry.getValue());\n- }\n- for (Map.Entry<ShardId, ShardRestoreStatus> failedShardEntry : updates.failedShards.entrySet()) {\n- shardsBuilder.put(failedShardEntry.getKey(), failedShardEntry.getValue());\n+ for (Map.Entry<ShardId, ShardRestoreStatus> shard : updates.shards.entrySet()) {\n+ shardsBuilder.put(shard.getKey(), shard.getValue());\n }\n+\n ImmutableOpenMap<ShardId, ShardRestoreStatus> shards = shardsBuilder.build();\n RestoreInProgress.State newState = overallState(RestoreInProgress.State.STARTED, shards);\n entries.add(new RestoreInProgress.Entry(entry.snapshot(), newState, entry.indices(), shards));\n@@ -607,7 +615,6 @@ public RestoreInProgress applyChanges(RestoreInProgress oldRestore) {\n return oldRestore;\n }\n }\n-\n }\n \n public static RestoreInProgress.Entry restoreInProgress(ClusterState state, Snapshot snapshot) {",
"filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.cluster.routing.allocation.decider.RebalanceOnlyWhenActiveAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ReplicaAfterPrimaryActiveAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ResizeAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.RestoreInProgressAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.SameShardAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.SnapshotInProgressAllocationDecider;\n@@ -183,6 +184,7 @@ public void testAllocationDeciderOrder() {\n EnableAllocationDecider.class,\n NodeVersionAllocationDecider.class,\n SnapshotInProgressAllocationDecider.class,\n+ RestoreInProgressAllocationDecider.class,\n FilterAllocationDecider.class,\n SameShardAllocationDecider.class,\n DiskThresholdDecider.class,",
"filename": "core/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ESAllocationTestCase;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -38,6 +39,7 @@\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n@@ -46,7 +48,10 @@\n import org.elasticsearch.snapshots.SnapshotId;\n import org.elasticsearch.test.gateway.TestGatewayAllocator;\n \n+import java.util.ArrayList;\n import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.Set;\n \n import static org.elasticsearch.cluster.ClusterName.CLUSTER_NAME_SETTING;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n@@ -309,6 +314,8 @@ private ClusterState createRecoveryStateAndInitalizeAllocations(MetaData metaDat\n DiscoveryNode node1 = newNode(\"node1\");\n MetaData.Builder metaDataBuilder = new MetaData.Builder(metaData);\n RoutingTable.Builder routingTableBuilder = RoutingTable.builder();\n+ Snapshot snapshot = new Snapshot(\"repo\", new SnapshotId(\"snap\", \"randomId\"));\n+ Set<String> snapshotIndices = new HashSet<>();\n for (ObjectCursor<IndexMetaData> cursor: metaData.indices().values()) {\n Index index = cursor.value.getIndex();\n IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(cursor.value);\n@@ -329,14 +336,14 @@ private ClusterState createRecoveryStateAndInitalizeAllocations(MetaData metaDat\n routingTableBuilder.addAsFromDangling(indexMetaData);\n break;\n case 3:\n+ snapshotIndices.add(index.getName());\n routingTableBuilder.addAsNewRestore(indexMetaData,\n- new SnapshotRecoverySource(new Snapshot(\"repo\", new SnapshotId(\"snap\", \"randomId\")), Version.CURRENT,\n- indexMetaData.getIndex().getName()), new IntHashSet());\n+ new SnapshotRecoverySource(snapshot, Version.CURRENT, indexMetaData.getIndex().getName()), new IntHashSet());\n break;\n case 4:\n+ snapshotIndices.add(index.getName());\n routingTableBuilder.addAsRestore(indexMetaData,\n- new SnapshotRecoverySource(new Snapshot(\"repo\", new SnapshotId(\"snap\", \"randomId\")), Version.CURRENT,\n- indexMetaData.getIndex().getName()));\n+ new SnapshotRecoverySource(snapshot, Version.CURRENT, indexMetaData.getIndex().getName()));\n break;\n case 5:\n routingTableBuilder.addAsNew(indexMetaData);\n@@ -345,10 +352,31 @@ private ClusterState createRecoveryStateAndInitalizeAllocations(MetaData metaDat\n throw new IndexOutOfBoundsException();\n }\n }\n+\n+ final RoutingTable routingTable = routingTableBuilder.build();\n+\n+ final ImmutableOpenMap.Builder<String, ClusterState.Custom> restores = ImmutableOpenMap.builder();\n+ if (snapshotIndices.isEmpty() == false) {\n+ // Some indices are restored from snapshot, the RestoreInProgress must be set accordingly\n+ ImmutableOpenMap.Builder<ShardId, RestoreInProgress.ShardRestoreStatus> restoreShards = ImmutableOpenMap.builder();\n+ for (ShardRouting shard : routingTable.allShards()) {\n+ if (shard.primary() && 
shard.recoverySource().getType() == RecoverySource.Type.SNAPSHOT) {\n+ ShardId shardId = shard.shardId();\n+ restoreShards.put(shardId, new RestoreInProgress.ShardRestoreStatus(node1.getId(), RestoreInProgress.State.INIT));\n+ }\n+ }\n+\n+ RestoreInProgress.Entry restore = new RestoreInProgress.Entry(snapshot, RestoreInProgress.State.INIT,\n+ new ArrayList<>(snapshotIndices), restoreShards.build());\n+ restores.put(RestoreInProgress.TYPE, new RestoreInProgress(restore));\n+ }\n+\n return ClusterState.builder(CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))\n .nodes(DiscoveryNodes.builder().add(node1))\n .metaData(metaDataBuilder.build())\n- .routingTable(routingTableBuilder.build()).build();\n+ .routingTable(routingTable)\n+ .customs(restores.build())\n+ .build();\n }\n \n private void addInSyncAllocationIds(Index index, IndexMetaData.Builder indexMetaData,",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/ThrottlingAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,208 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.routing.allocation.decider;\n+\n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.ESAllocationTestCase;\n+import org.elasticsearch.cluster.RestoreInProgress;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RecoverySource;\n+import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.snapshots.Snapshot;\n+import org.elasticsearch.snapshots.SnapshotId;\n+\n+import java.io.IOException;\n+import java.util.Collections;\n+\n+import static java.util.Collections.singletonList;\n+\n+/**\n+ * Test {@link RestoreInProgressAllocationDecider}\n+ */\n+public class RestoreInProgressAllocationDeciderTests extends ESAllocationTestCase {\n+\n+ public void testCanAllocatePrimary() {\n+ ClusterState clusterState = createInitialClusterState();\n+ ShardRouting shard;\n+ if (randomBoolean()) {\n+ shard = clusterState.getRoutingTable().shardRoutingTable(\"test\", 0).primaryShard();\n+ assertEquals(RecoverySource.Type.EMPTY_STORE, shard.recoverySource().getType());\n+ } else {\n+ shard = clusterState.getRoutingTable().shardRoutingTable(\"test\", 0).replicaShards().get(0);\n+ assertEquals(RecoverySource.Type.PEER, shard.recoverySource().getType());\n+ }\n+\n+ final Decision decision = executeAllocation(clusterState, shard);\n+ assertEquals(Decision.Type.YES, decision.type());\n+ assertEquals(\"ignored as shard is not being recovered from a snapshot\", decision.getExplanation());\n+ }\n+\n+ public void testCannotAllocatePrimaryMissingInRestoreInProgress() {\n+ ClusterState clusterState = createInitialClusterState();\n+ RoutingTable routingTable = RoutingTable.builder(clusterState.getRoutingTable())\n+ 
.addAsRestore(clusterState.getMetaData().index(\"test\"), createSnapshotRecoverySource(\"_missing\"))\n+ .build();\n+\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingTable(routingTable)\n+ .build();\n+\n+ ShardRouting primary = clusterState.getRoutingTable().shardRoutingTable(\"test\", 0).primaryShard();\n+ assertEquals(ShardRoutingState.UNASSIGNED, primary.state());\n+ assertEquals(RecoverySource.Type.SNAPSHOT, primary.recoverySource().getType());\n+\n+ final Decision decision = executeAllocation(clusterState, primary);\n+ assertEquals(Decision.Type.NO, decision.type());\n+ assertEquals(\"shard has failed to be restored from the snapshot [_repository:_missing/_uuid] because of \" +\n+ \"[restore_source[_repository/_missing]] - manually close or delete the index [test] in order to retry to restore \" +\n+ \"the snapshot again or use the reroute API to force the allocation of an empty primary shard\", decision.getExplanation());\n+ }\n+\n+ public void testCanAllocatePrimaryExistingInRestoreInProgress() {\n+ RecoverySource.SnapshotRecoverySource recoverySource = createSnapshotRecoverySource(\"_existing\");\n+\n+ ClusterState clusterState = createInitialClusterState();\n+ RoutingTable routingTable = RoutingTable.builder(clusterState.getRoutingTable())\n+ .addAsRestore(clusterState.getMetaData().index(\"test\"), recoverySource)\n+ .build();\n+\n+ clusterState = ClusterState.builder(clusterState)\n+ .routingTable(routingTable)\n+ .build();\n+\n+ ShardRouting primary = clusterState.getRoutingTable().shardRoutingTable(\"test\", 0).primaryShard();\n+ assertEquals(ShardRoutingState.UNASSIGNED, primary.state());\n+ assertEquals(RecoverySource.Type.SNAPSHOT, primary.recoverySource().getType());\n+\n+ routingTable = clusterState.routingTable();\n+\n+ final RestoreInProgress.State shardState;\n+ if (randomBoolean()) {\n+ shardState = randomFrom(RestoreInProgress.State.STARTED, RestoreInProgress.State.INIT);\n+ } else {\n+ shardState = RestoreInProgress.State.FAILURE;\n+\n+ UnassignedInfo currentInfo = primary.unassignedInfo();\n+ UnassignedInfo newInfo = new UnassignedInfo(currentInfo.getReason(), currentInfo.getMessage(), new IOException(\"i/o failure\"),\n+ currentInfo.getNumFailedAllocations(), currentInfo.getUnassignedTimeInNanos(),\n+ currentInfo.getUnassignedTimeInMillis(), currentInfo.isDelayed(), currentInfo.getLastAllocationStatus());\n+ primary = primary.updateUnassigned(newInfo, primary.recoverySource());\n+\n+ IndexRoutingTable indexRoutingTable = routingTable.index(\"test\");\n+ IndexRoutingTable.Builder newIndexRoutingTable = IndexRoutingTable.builder(indexRoutingTable.getIndex());\n+ for (final ObjectCursor<IndexShardRoutingTable> shardEntry : indexRoutingTable.getShards().values()) {\n+ final IndexShardRoutingTable shardRoutingTable = shardEntry.value;\n+ for (ShardRouting shardRouting : shardRoutingTable.getShards()) {\n+ if (shardRouting.primary()) {\n+ newIndexRoutingTable.addShard(primary);\n+ } else {\n+ newIndexRoutingTable.addShard(shardRouting);\n+ }\n+ }\n+ }\n+ routingTable = RoutingTable.builder(routingTable).add(newIndexRoutingTable).build();\n+ }\n+\n+ ImmutableOpenMap.Builder<ShardId, RestoreInProgress.ShardRestoreStatus> shards = ImmutableOpenMap.builder();\n+ shards.put(primary.shardId(), new RestoreInProgress.ShardRestoreStatus(clusterState.getNodes().getLocalNodeId(), shardState));\n+\n+ Snapshot snapshot = recoverySource.snapshot();\n+ RestoreInProgress.State restoreState = RestoreInProgress.State.STARTED;\n+ RestoreInProgress.Entry restore = new 
RestoreInProgress.Entry(snapshot, restoreState, singletonList(\"test\"), shards.build());\n+\n+ clusterState = ClusterState.builder(clusterState)\n+ .putCustom(RestoreInProgress.TYPE, new RestoreInProgress(restore))\n+ .routingTable(routingTable)\n+ .build();\n+\n+ Decision decision = executeAllocation(clusterState, primary);\n+ if (shardState == RestoreInProgress.State.FAILURE) {\n+ assertEquals(Decision.Type.NO, decision.type());\n+ assertEquals(\"shard has failed to be restored from the snapshot [_repository:_existing/_uuid] because of \" +\n+ \"[restore_source[_repository/_existing], failure IOException[i/o failure]] - manually close or delete the index \" +\n+ \"[test] in order to retry to restore the snapshot again or use the reroute API to force the allocation of \" +\n+ \"an empty primary shard\", decision.getExplanation());\n+ } else {\n+ assertEquals(Decision.Type.YES, decision.type());\n+ assertEquals(\"shard is currently being restored\", decision.getExplanation());\n+ }\n+ }\n+\n+ private ClusterState createInitialClusterState() {\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder()\n+ .add(newNode(\"master\", Collections.singleton(DiscoveryNode.Role.MASTER)))\n+ .localNodeId(\"master\")\n+ .masterNodeId(\"master\")\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(routingTable)\n+ .nodes(discoveryNodes)\n+ .build();\n+\n+ assertEquals(2, clusterState.getRoutingTable().shardsWithState(ShardRoutingState.UNASSIGNED).size());\n+ return clusterState;\n+ }\n+\n+ private Decision executeAllocation(final ClusterState clusterState, final ShardRouting shardRouting) {\n+ final AllocationDecider decider = new RestoreInProgressAllocationDecider(Settings.EMPTY);\n+ final RoutingAllocation allocation = new RoutingAllocation(new AllocationDeciders(Settings.EMPTY, Collections.singleton(decider)),\n+ clusterState.getRoutingNodes(), clusterState, null, 0L);\n+ allocation.debugDecision(true);\n+\n+ final Decision decision;\n+ if (randomBoolean()) {\n+ decision = decider.canAllocate(shardRouting, allocation);\n+ } else {\n+ DiscoveryNode node = clusterState.getNodes().getMasterNode();\n+ decision = decider.canAllocate(shardRouting, new RoutingNode(node.getId(), node), allocation);\n+ }\n+ return decision;\n+ }\n+\n+ private RecoverySource.SnapshotRecoverySource createSnapshotRecoverySource(final String snapshotName) {\n+ Snapshot snapshot = new Snapshot(\"_repository\", new SnapshotId(snapshotName, \"_uuid\"));\n+ return new RecoverySource.SnapshotRecoverySource(snapshot, Version.CURRENT, \"test\");\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/RestoreInProgressAllocationDeciderTests.java",
"status": "added"
},
{
"diff": "@@ -46,6 +46,7 @@\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ClusterStateUpdateTask;\n+import org.elasticsearch.cluster.RestoreInProgress;\n import org.elasticsearch.cluster.SnapshotsInProgress;\n import org.elasticsearch.cluster.SnapshotsInProgress.Entry;\n import org.elasticsearch.cluster.SnapshotsInProgress.ShardSnapshotStatus;\n@@ -55,6 +56,10 @@\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.RecoverySource;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Strings;\n@@ -97,14 +102,15 @@\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n+import java.util.function.Consumer;\n import java.util.stream.Collectors;\n import java.util.stream.Stream;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider.SETTING_ALLOCATION_MAX_RETRY;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.IndexSettings.INDEX_REFRESH_INTERVAL_SETTING;\n-import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAliasesExist;\n@@ -117,9 +123,11 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows;\n import static org.hamcrest.Matchers.allOf;\n+import static org.hamcrest.Matchers.anyOf;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.lessThan;\n import static org.hamcrest.Matchers.not;\n@@ -824,6 +832,8 @@ public void testDataFileFailureDuringRestore() throws Exception {\n prepareCreate(\"test-idx\").setSettings(Settings.builder().put(\"index.allocation.max_retries\", Integer.MAX_VALUE)).get();\n ensureGreen();\n \n+ final NumShards numShards = getNumShards(\"test-idx\");\n+\n logger.info(\"--> indexing some data\");\n for (int i = 0; i < 100; i++) {\n index(\"test-idx\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n@@ -848,14 +858,31 @@ public void testDataFileFailureDuringRestore() throws Exception {\n logger.info(\"--> delete index\");\n cluster().wipeIndices(\"test-idx\");\n logger.info(\"--> restore index after deletion\");\n- RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", 
\"test-snap\").setWaitForCompletion(true).execute().actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n- SearchResponse countResponse = client.prepareSearch(\"test-idx\").setSize(0).get();\n- assertThat(countResponse.getHits().getTotalHits(), equalTo(100L));\n+ final RestoreSnapshotResponse restoreResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setWaitForCompletion(true)\n+ .get();\n+\n logger.info(\"--> total number of simulated failures during restore: [{}]\", getFailureCount(\"test-repo\"));\n+ final RestoreInfo restoreInfo = restoreResponse.getRestoreInfo();\n+ assertThat(restoreInfo.totalShards(), equalTo(numShards.numPrimaries));\n+\n+ if (restoreInfo.successfulShards() == restoreInfo.totalShards()) {\n+ // All shards were restored, we must find the exact number of hits\n+ assertHitCount(client.prepareSearch(\"test-idx\").setSize(0).get(), 100L);\n+ } else {\n+ // One or more shards failed to be restored. This can happen when there is\n+ // only 1 data node: a shard failed because of the random IO exceptions\n+ // during restore and then we don't allow the shard to be assigned on the\n+ // same node again during the same reroute operation. Then another reroute\n+ // operation is scheduled, but the RestoreInProgressAllocationDecider will\n+ // block the shard to be assigned again because it failed during restore.\n+ final ClusterStateResponse clusterStateResponse = client.admin().cluster().prepareState().get();\n+ assertEquals(1, clusterStateResponse.getState().getNodes().getDataNodes().size());\n+ assertEquals(restoreInfo.failedShards(),\n+ clusterStateResponse.getState().getRoutingTable().shardsWithState(ShardRoutingState.UNASSIGNED).size());\n+ }\n }\n \n- @TestLogging(\"org.elasticsearch.cluster.routing:TRACE,org.elasticsearch.snapshots:TRACE\")\n public void testDataFileCorruptionDuringRestore() throws Exception {\n Path repositoryLocation = randomRepoPath();\n Client client = client();\n@@ -907,6 +934,155 @@ public void testDataFileCorruptionDuringRestore() throws Exception {\n cluster().wipeIndices(\"test-idx\");\n }\n \n+ /**\n+ * Test that restoring a snapshot whose files can't be downloaded at all is not stuck or\n+ * does not hang indefinitely.\n+ */\n+ public void testUnrestorableFilesDuringRestore() throws Exception {\n+ final String indexName = \"unrestorable-files\";\n+ final int maxRetries = randomIntBetween(1, 10);\n+\n+ Settings createIndexSettings = Settings.builder().put(SETTING_ALLOCATION_MAX_RETRY.getKey(), maxRetries).build();\n+\n+ Settings repositorySettings = Settings.builder()\n+ .put(\"random\", randomAlphaOfLength(10))\n+ .put(\"max_failure_number\", 10000000L)\n+ // No lucene corruptions, we want to test retries\n+ .put(\"use_lucene_corruption\", false)\n+ // Restoring a file will never complete\n+ .put(\"random_data_file_io_exception_rate\", 1.0)\n+ .build();\n+\n+ Consumer<UnassignedInfo> checkUnassignedInfo = unassignedInfo -> {\n+ assertThat(unassignedInfo.getReason(), equalTo(UnassignedInfo.Reason.ALLOCATION_FAILED));\n+ assertThat(unassignedInfo.getNumFailedAllocations(), anyOf(equalTo(maxRetries), equalTo(1)));\n+ };\n+\n+ unrestorableUseCase(indexName, createIndexSettings, repositorySettings, Settings.EMPTY, checkUnassignedInfo, () -> {});\n+ }\n+\n+ /**\n+ * Test that restoring an index with shard allocation filtering settings that prevents\n+ * its allocation does not hang indefinitely.\n+ */\n+ public void testUnrestorableIndexDuringRestore() throws 
Exception {\n+ final String indexName = \"unrestorable-index\";\n+ Settings restoreIndexSettings = Settings.builder().put(\"index.routing.allocation.include._name\", randomAlphaOfLength(5)).build();\n+\n+ Consumer<UnassignedInfo> checkUnassignedInfo = unassignedInfo -> {\n+ assertThat(unassignedInfo.getReason(), equalTo(UnassignedInfo.Reason.NEW_INDEX_RESTORED));\n+ };\n+\n+ Runnable fixupAction =() -> {\n+ // remove the shard allocation filtering settings and use the Reroute API to retry the failed shards\n+ assertAcked(client().admin().indices().prepareUpdateSettings(indexName)\n+ .setSettings(Settings.builder()\n+ .putNull(\"index.routing.allocation.include._name\")\n+ .build()));\n+ assertAcked(client().admin().cluster().prepareReroute().setRetryFailed(true));\n+ };\n+\n+ unrestorableUseCase(indexName, Settings.EMPTY, Settings.EMPTY, restoreIndexSettings, checkUnassignedInfo, fixupAction);\n+ }\n+\n+ /** Execute the unrestorable test use case **/\n+ private void unrestorableUseCase(final String indexName,\n+ final Settings createIndexSettings,\n+ final Settings repositorySettings,\n+ final Settings restoreIndexSettings,\n+ final Consumer<UnassignedInfo> checkUnassignedInfo,\n+ final Runnable fixUpAction) throws Exception {\n+ // create a test repository\n+ final Path repositoryLocation = randomRepoPath();\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\")\n+ .setSettings(Settings.builder().put(\"location\", repositoryLocation)));\n+ // create a test index\n+ assertAcked(prepareCreate(indexName, Settings.builder().put(createIndexSettings)));\n+\n+ // index some documents\n+ final int nbDocs = scaledRandomIntBetween(10, 100);\n+ for (int i = 0; i < nbDocs; i++) {\n+ index(indexName, \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ flushAndRefresh(indexName);\n+ assertThat(client().prepareSearch(indexName).setSize(0).get().getHits().getTotalHits(), equalTo((long) nbDocs));\n+\n+ // create a snapshot\n+ final NumShards numShards = getNumShards(indexName);\n+ CreateSnapshotResponse snapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n+ .setWaitForCompletion(true)\n+ .setIndices(indexName)\n+ .get();\n+\n+ assertThat(snapshotResponse.getSnapshotInfo().state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(snapshotResponse.getSnapshotInfo().successfulShards(), equalTo(numShards.numPrimaries));\n+ assertThat(snapshotResponse.getSnapshotInfo().failedShards(), equalTo(0));\n+\n+ // delete the test index\n+ assertAcked(client().admin().indices().prepareDelete(indexName));\n+\n+ // update the test repository\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\")\n+ .setSettings(Settings.builder()\n+ .put(\"location\", repositoryLocation)\n+ .put(repositorySettings)\n+ .build()));\n+\n+ // attempt to restore the snapshot with the given settings\n+ RestoreSnapshotResponse restoreResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setIndices(indexName)\n+ .setIndexSettings(restoreIndexSettings)\n+ .setWaitForCompletion(true)\n+ .get();\n+\n+ // check that all shards failed during restore\n+ assertThat(restoreResponse.getRestoreInfo().totalShards(), equalTo(numShards.numPrimaries));\n+ assertThat(restoreResponse.getRestoreInfo().successfulShards(), equalTo(0));\n+\n+ ClusterStateResponse clusterStateResponse = client().admin().cluster().prepareState().setCustoms(true).setRoutingTable(true).get();\n+\n+ // check 
that there is no restore in progress\n+ RestoreInProgress restoreInProgress = clusterStateResponse.getState().custom(RestoreInProgress.TYPE);\n+ assertNotNull(\"RestoreInProgress must be not null\", restoreInProgress);\n+ assertThat(\"RestoreInProgress must be empty\", restoreInProgress.entries(), hasSize(0));\n+\n+ // check that the shards have been created but are not assigned\n+ assertThat(clusterStateResponse.getState().getRoutingTable().allShards(indexName), hasSize(numShards.totalNumShards));\n+\n+ // check that every primary shard is unassigned\n+ for (ShardRouting shard : clusterStateResponse.getState().getRoutingTable().allShards(indexName)) {\n+ if (shard.primary()) {\n+ assertThat(shard.state(), equalTo(ShardRoutingState.UNASSIGNED));\n+ assertThat(shard.recoverySource().getType(), equalTo(RecoverySource.Type.SNAPSHOT));\n+ assertThat(shard.unassignedInfo().getLastAllocationStatus(), equalTo(UnassignedInfo.AllocationStatus.DECIDERS_NO));\n+ checkUnassignedInfo.accept(shard.unassignedInfo());\n+ }\n+ }\n+\n+ // update the test repository in order to make it work\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\")\n+ .setSettings(Settings.builder().put(\"location\", repositoryLocation)));\n+\n+ // execute action to eventually fix the situation\n+ fixUpAction.run();\n+\n+ // delete the index and restore again\n+ assertAcked(client().admin().indices().prepareDelete(indexName));\n+\n+ restoreResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).get();\n+ assertThat(restoreResponse.getRestoreInfo().totalShards(), equalTo(numShards.numPrimaries));\n+ assertThat(restoreResponse.getRestoreInfo().successfulShards(), equalTo(numShards.numPrimaries));\n+\n+ // Wait for the shards to be assigned\n+ ensureGreen(indexName);\n+ refresh(indexName);\n+\n+ assertThat(client().prepareSearch(indexName).setSize(0).get().getHits().getTotalHits(), equalTo((long) nbDocs));\n+ }\n+\n public void testDeletionOfFailingToRecoverIndexShouldStopRestore() throws Exception {\n Path repositoryLocation = randomRepoPath();\n Client client = client();",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
}
]
} |
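The decider added in the PR above returns a NO decision once a shard's snapshot restore has failed, and its explanation message names two manual recovery paths: close or delete the index and retry the restore, or use the reroute API to force an empty primary. A minimal curl sketch of those two paths, using hypothetical names (`test-idx`, `test-repo`, `test-snap`, node `node-1`, shard `0`) that are not taken from the PR itself:

```
# Path 1: delete the failed index, then retry the restore from the snapshot.
curl -X DELETE "localhost:9200/test-idx"
curl -X POST "localhost:9200/_snapshot/test-repo/test-snap/_restore?wait_for_completion=true" -H "Content-Type: application/json" -d '{
  "indices": "test-idx"
}'

# Path 2: force allocation of an empty primary shard via the reroute API.
# This accepts data loss for that shard, hence the explicit flag.
curl -X POST "localhost:9200/_cluster/reroute" -H "Content-Type: application/json" -d '{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "test-idx",
        "shard": 0,
        "node": "node-1",
        "accept_data_loss": true
      }
    }
  ]
}'
```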
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): \r\n\r\n```\r\n# rpm -qa |grep elasticsearch\r\nelasticsearch-5.6.2-1.noarch\r\n```\r\n**Plugins installed**:\r\n\r\n```\r\ndiscovery-ec2\r\nrepository-s3\r\nx-pack\r\n```\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\n# java -version\r\njava version \"1.8.0_141\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_141-b15)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)\r\n```\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\n```\r\nFedora 26\r\nLinux 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWe have had about twenty indexes that are stuck in a red state after trying to restore a snapshot taken from elasticsearch `5.4.1` to a brand new cluster running `5.6.2`. For this issue, I will focus on one index `logstash-2017.09.20`. \r\n\r\nYou can see here that the index is in a red state:\r\n\r\n```\r\n# curl -XGET 'localhost:9200/_cluster/health/logstash-2017.09.20?level=shards&pretty'\r\n{\r\n \"cluster_name\" : \"redacted\",\r\n \"status\" : \"red\",\r\n \"timed_out\" : false,\r\n \"number_of_nodes\" : 11,\r\n \"number_of_data_nodes\" : 5,\r\n \"active_primary_shards\" : 4,\r\n \"active_shards\" : 4,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1,\r\n \"delayed_unassigned_shards\" : 0,\r\n \"number_of_pending_tasks\" : 0,\r\n \"number_of_in_flight_fetch\" : 0,\r\n \"task_max_waiting_in_queue_millis\" : 0,\r\n \"active_shards_percent_as_number\" : 98.60064585575888,\r\n \"indices\" : {\r\n \"logstash-2017.09.20\" : {\r\n \"status\" : \"red\",\r\n \"number_of_shards\" : 5,\r\n \"number_of_replicas\" : 0,\r\n \"active_primary_shards\" : 4,\r\n \"active_shards\" : 4,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1,\r\n \"shards\" : {\r\n \"0\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"1\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"2\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"3\" : {\r\n \"status\" : \"green\",\r\n \"primary_active\" : true,\r\n \"active_shards\" : 1,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0\r\n },\r\n \"4\" : {\r\n \"status\" : \"red\",\r\n \"primary_active\" : false,\r\n \"active_shards\" : 0,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 1\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nYou can see the restore says it finished with a SUCCESS:\r\n\r\n```\r\n# curl -XGET 'localhost:9200/_snapshot/my_cool_backup/snapshot_0?pretty'\r\n{\r\n \"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snapshot_0\",\r\n \"uuid\" : \"e_wavyGfTD-SwXC-imkF0g\",\r\n \"version_id\" : 5040199,\r\n \"version\" : \"5.4.1\",\r\n \"indices\" : [\r\n ** SNIP **\r\n ],\r\n \"state\" : \"SUCCESS\",\r\n \"start_time\" : \"2017-09-27T07:00:01.807Z\",\r\n \"start_time_in_millis\" : 1506495601807,\r\n \"end_time\" : 
\"2017-09-27T08:44:35.377Z\",\r\n \"end_time_in_millis\" : 1506501875377,\r\n \"duration_in_millis\" : 6273570,\r\n \"failures\" : [ ],\r\n \"shards\" : {\r\n \"total\" : 929,\r\n \"failed\" : 0,\r\n \"successful\" : 929\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\n\r\nLooking at the restore process in detail for the example index, you can see that it says this index has been put into the DONE state for each shard. \r\n\r\n```\r\n$ curl -XGET 'localhost:9200/_snapshot/my_cool_backup/snapshot_0/_status?pretty'\r\n\"snapshots\" : [\r\n {\r\n \"snapshot\" : \"snapshot_0\",\r\n \"repository\" : \"my_cool_backup\",\r\n \"uuid\" : \"e_wavyGfTD-SwXC-imkF0g\",\r\n \"state\" : \"SUCCESS\",\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 929,\r\n \"failed\" : 0,\r\n \"total\" : 929\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 2364,\r\n \"processed_files\" : 2364,\r\n \"total_size_in_bytes\" : 15393945691,\r\n \"processed_size_in_bytes\" : 15393945691,\r\n \"start_time_in_millis\" : 1506495618226,\r\n \"time_in_millis\" : 6252967\r\n },\r\n \"indices\" : {\r\n \"logstash-2017.09.20\" : {\r\n \"shards_stats\" : {\r\n \"initializing\" : 0,\r\n \"started\" : 0,\r\n \"finalizing\" : 0,\r\n \"done\" : 5,\r\n \"failed\" : 0,\r\n \"total\" : 5\r\n },\r\n \"stats\" : {\r\n \"number_of_files\" : 31,\r\n \"processed_files\" : 31,\r\n \"total_size_in_bytes\" : 168664,\r\n \"processed_size_in_bytes\" : 168664,\r\n \"start_time_in_millis\" : 1506495678150,\r\n \"time_in_millis\" : 2401656\r\n },\r\n \"shards\" : {\r\n \"0\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 7,\r\n \"processed_files\" : 7,\r\n \"total_size_in_bytes\" : 118135,\r\n \"processed_size_in_bytes\" : 118135,\r\n \"start_time_in_millis\" : 1506495720316,\r\n \"time_in_millis\" : 1949\r\n }\r\n },\r\n \"1\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 16,\r\n \"processed_files\" : 16,\r\n \"total_size_in_bytes\" : 33918,\r\n \"processed_size_in_bytes\" : 33918,\r\n \"start_time_in_millis\" : 1506495722992,\r\n \"time_in_millis\" : 2804\r\n }\r\n },\r\n \"2\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 0,\r\n \"processed_files\" : 0,\r\n \"total_size_in_bytes\" : 0,\r\n \"processed_size_in_bytes\" : 0,\r\n \"start_time_in_millis\" : 1506498067865,\r\n \"time_in_millis\" : 11941\r\n }\r\n },\r\n \"3\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 4,\r\n \"processed_files\" : 4,\r\n \"total_size_in_bytes\" : 8434,\r\n \"processed_size_in_bytes\" : 8434,\r\n \"start_time_in_millis\" : 1506495678150,\r\n \"time_in_millis\" : 1206\r\n }\r\n },\r\n \"4\" : {\r\n \"stage\" : \"DONE\",\r\n \"stats\" : {\r\n \"number_of_files\" : 4,\r\n \"processed_files\" : 4,\r\n \"total_size_in_bytes\" : 8177,\r\n \"processed_size_in_bytes\" : 8177,\r\n \"start_time_in_millis\" : 1506495684287,\r\n \"time_in_millis\" : 1164\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nLooking at `/_cat/recovery` it says it's done too\r\n```\r\n# curl -XGET localhost:9200/_cat/recovery|grep logstash-2017.09.20\r\n\r\nlogstash-2017.09.20 0 7.9s snapshot done n/a n/a redacted data-03 my_cool_backup snapshot_0 1 1 100.0% 109 1699 1699 100.0% 2911728303 0 0 100.0%\r\nlogstash-2017.09.20 1 14.5m snapshot done n/a n/a redacted data-04 my_cool_backup snapshot_0 136 136 100.0% 136 2842065772 2842065772 100.0% 2842065772 0 0 100.0%\r\nlogstash-2017.09.20 2 1.7s snapshot done n/a n/a redacted data-00 my_cool_backup snapshot_0 1 1 
100.0% 109 1699 1699 100.0% 2889504028 0 0 100.0%\r\nlogstash-2017.09.20 3 13.9m snapshot done n/a n/a redacted data-02 my_cool_backup snapshot_0 127 127 100.0% 127 2929823683 2929823683 100.0% 2929823683 0 0 100.0%\r\n```\r\n\r\nBut if you try to close the index it says that it is still being restored:\r\n\r\n```\r\n$ curl -XPOST 'localhost:9200/logstash-2017.09.20/_close?pretty'\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"remote_transport_exception\",\r\n \"reason\" : \"[master-01][redacted:9300][indices:admin/close]\"\r\n }\r\n ],\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"Cannot close indices that are being restored: [[logstash-2017.09.20/crXjrjtwTEqkK6_ITG1HVQ]]\"\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nLooking in the logs it says that it failed to recover the index because the file already exists:\r\n\r\n```\r\n[2017-10-02T19:50:28,790][WARN ][o.e.c.a.s.ShardStateAction] [master-01] [logstash-2017.09.20][4] received shard failed for shard id [[logstash-2017.09.20][4]], allocation id [lW_4BSVGSc6phnI1vLEPWg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[logstash-2017.09.20][4]: Recovery failed on {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFLjPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=true}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: FileAlreadyExistsException[/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si]; ]\r\n\r\n[2017-10-02T19:50:28,790][WARN ][o.e.c.a.s.ShardStateAction] [master-01] [logstash-2017.09.20][4] received shard failed for shard id [[logstash-2017.09.20][4]\r\n], allocation id [lW_4BSVGSc6phnI1vLEPWg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[logstash-2017.09.20][4]: Recovery failed \r\non {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFLjPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=\r\ntrue}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[fa\r\niled to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: FileAlre\r\nadyExistsException[/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si]; ]\r\norg.elasticsearch.indices.recovery.RecoveryFailedException: [logstash-2017.09.20][4]: Recovery failed on {data-02}{Af43AKvBRf6r-PTr2s9KRg}{O1R6sKwAQK2FyYYmdFL\r\njPA}{redacted}{redacted:9300}{aws_availability_zone=us-west-2c, ml.max_open_jobs=10, ml.enabled=true}\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1511) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_141]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_141]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]\r\nCaused by: org.elasticsearch.index.shard.IndexShardRecoveryException: 
failed recovery\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:299) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:405) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [snapshot_0/e_wavyGfTD-SwXC-imkF0g]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:993) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 
4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1679) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:991) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\nCaused by: java.nio.file.FileAlreadyExistsException: /var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si\r\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) ~[?:?]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]\r\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]\r\n at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434) ~[?:1.8.0_141]\r\n at java.nio.file.Files.newOutputStream(Files.java:216) ~[?:1.8.0_141]\r\n at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.RateLimitedFSDirectory.createOutput(RateLimitedFSDirectory.java:40) ~[elasticsearch-5.6.2.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.apache.lucene.store.FilterDirectory.createOutput(FilterDirectory.java:73) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]\r\n at org.elasticsearch.index.store.Store.createVerifyingOutput(Store.java:463) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1734) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1676) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:991) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:400) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at 
org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:234) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:232) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1243) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1507) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n ... 4 more\r\n```\r\n\r\nAnd if you look on for that file it says is already exists, it is not present on the data node:\r\n\r\n```\r\n# ll /var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si\r\nls: cannot access '/var/lib/elasticsearch/nodes/0/indices/crXjrjtwTEqkK6_ITG1HVQ/4/index/_22g.si': No such file or directory\r\n```\r\n\r\nThe only way I have been able to get the cluster out of this hung state is to do a full cluster shutdown and start it back up again. From there I am able to close these red indexes and retry the restore again. When I first encountered this issue, I had ~20 indexes that failed to restore. After retrying to restore these failures with the process above, I was able to get all but seven of them restored. The remaining failures are in the same state.\r\n",
"comments": [
{
"body": "That sounds like two problems to me:\r\n\r\n* State handling during recovery seems to be inconsistent / not to agree\r\n* File system issues\r\n\r\nCan you please tell which file system you've used? Also, as you are on EC2: Did you configure EBS volumes or instance storage on the nodes?",
"created_at": "2017-10-04T16:01:10Z"
},
{
"body": "Also @imotov may have further ideas.",
"created_at": "2017-10-04T16:01:22Z"
},
{
"body": "These are on AWS I3 servers with NVMe SSD instance storage. We are using XFS with LUKS on these disks. ",
"created_at": "2017-10-04T16:13:48Z"
},
{
"body": "Thanks for the feedback. I also talked to @imotov. As this is about S3 snapshot could you please have a look @tlrx?",
"created_at": "2017-10-05T07:48:13Z"
},
{
"body": "There is a bit of confusion between the snapshot and the restore APIs on this issue:\r\n\r\n@jdoss When you say\r\n\r\n> You can see the restore says it finished with a SUCCESS:\r\n\r\nyou're actually showing the result of the (successfully completed) snapshotting process, not the restore process (same mistake for showing the details).\r\n\r\nThe `/_cat/recovery` output also is consistent with the cluster health. It shows that shards 0 to 3 have successfully recovered. Shard 4 (the one causing the cluster health to be red) is not reported as done.\r\n\r\nFrom the output shown it is not clear that the restore process is stuck. Note that we don't allow an index that is being restored to be closed. However, you can delete this index, which will also abort the restore process (same as when you delete a snapshot that's in progress, it will abort the snapshot).\r\n\r\nThe bug you're hitting here is the `FileAlreadyExistsException`, which we've seen already on other reports:\r\nhttps://discuss.elastic.co/t/snapshot-restore-failed-recovery-of-index-getting-filealreadyexistsexception/100300\r\n\r\nCould you perhaps share the snapshot (privately) with us?\r\n\r\n@danielmitterdorfer I have my doubts that this is S3 related.",
"created_at": "2017-10-05T08:24:03Z"
},
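To make the monitoring and abort workflow described in the comment above concrete, here is a minimal standalone Java sketch (not part of the original thread) that polls the recovery APIs mentioned there and, if needed, aborts a stuck restore by deleting the index. It assumes a node reachable on `localhost:9200`; the index name is a placeholder, and only the standard library is used.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestoreMonitor {

    // Issue a plain HTTP request against a local node and return the response body.
    private static String http(String method, String path) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:9200" + path).openConnection();
        conn.setRequestMethod(method);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        String index = "logstash-2017.09.20"; // placeholder index name

        // Restore progress is reported by the recovery APIs, not the snapshot status API.
        System.out.println(http("GET", "/_cat/recovery?v"));
        System.out.println(http("GET", "/" + index + "/_recovery?pretty"));

        // An index being restored cannot be closed, but deleting it aborts the restore.
        // Uncomment to abort: System.out.println(http("DELETE", "/" + index));
    }
}
```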
{
"body": "@jdoss access to snapshots would really help, but if this is not possible would you be able to try reproducing this issue with additional logging enabled and send us the logs files?",
"created_at": "2017-10-06T15:26:43Z"
},
{
"body": "> you're actually showing the result of the (successfully completed) snapshotting process, not the restore process (same mistake for showing the details).\r\n\r\n@ywelsch I was following the documentation https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html#_monitoring_snapshot_restore_progress which states to use the \r\n\r\n`curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1?pretty'`\r\n\r\nand \r\n\r\n`curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1/status?pretty'`\r\n\r\nWhich is pretty confusing mashing the snapshot and recovery status documentation together. Re-reading the whole section I see I misunderstood things and I should have been using the indices recovery and cat recovery APIs.\r\n\r\nI do wish it was easier to see what is going on with a restore and having the snapshot status documentation crammed together with the restore documentation is confusing. I wish there was a better method to see what is going on with a specific restore and a better method on stopping a restore. I have nuked snapshots from S3 misunderstanding that the DELETE method used for stopping a snapshot does not work on restores. It is good to know that you can just delete the index on the cluster to stop the restore. \r\n\r\nIt would be nice to be able to ping a restore API to see all this information and to stop a restore vs using the recovery APIs. I was looking for something hat showed a clear status of the recovery and confused the snapshot status endpoint as something that worked with the recovery of a snapshot. My bad.\r\n\r\n@imotov email me at jdoss *at* kennasecurity.com and I will talk to my higher ups about getting you this snapshot.",
"created_at": "2017-10-06T19:46:53Z"
},
{
"body": "@jdoss I think I might actually get by with just 2 files from your snapshot repository that contain no actual data (just a list of files that index consisted of at the time of the snapshot, their sizes and checksums). The files I am interested in are `indices/logstash-2017.09.20/4/index-*` (it might be also located in `indices/crXjrjtwTEqkK6_ITG1HVQ/4/index-*`) and `snap-snapshot_0.dat` or `snap-e_wavyGfTD-SwXC-imkF0g.dat` from the same directory as `index-*`. Could you send these two files to igor at elastic.co?",
"created_at": "2017-10-09T14:09:08Z"
},
{
"body": "@imotov I have sent you the requested files. ",
"created_at": "2017-10-09T15:54:17Z"
},
{
"body": "I was finally able to see a reproduction of this issue with enough trace logging to figure out what's going on. It looks like in the case that I was able to observe, the `FileAlreadyExists` exception was the secondary issue on that was triggered by a previous failure (missing blob in the repository in the case that I was able to observe). If you still have the log files from this failure around, can you see if there are any exceptions for the same shard prior to the `FileAlreadyExists`.",
"created_at": "2017-10-25T11:12:34Z"
},
{
"body": "@tlrx this is the issue we talked about earlier today. ",
"created_at": "2017-11-03T23:41:32Z"
},
{
"body": "Hi, I'd like to ask which version contains this fix. Thanks.",
"created_at": "2018-09-16T05:24:52Z"
},
{
"body": "Please see the version labels in the corresponding pull request https://github.com/elastic/elasticsearch/pull/27493: 5.6.6 is the earliest version in the 5.x series that contains this fix.",
"created_at": "2018-09-17T05:00:51Z"
},
{
"body": "Thanks, @danielmitterdorfer. Appreciate it. Can I also ask if this affects the S3 destination only or the Shared FS as well?",
"created_at": "2018-09-17T22:11:16Z"
}
],
"number": 26865,
"title": "Restoring a snapshot from S3 to 5.6.2 results in a hung and incomplete restore. "
} | {
"body": "Pull request #20220 added a change where the store files\r\nthat have the same name but are different from the ones in the\r\nsnapshot are deleted first before the snapshot is restored.\r\nThis logic was based on the `Store.RecoveryDiff.different`\r\nset of files which works by computing a diff between an\r\nexisting store and a snapshot.\r\n\r\nThis works well when the files on the filesystem form valid\r\nshard store, ie there's a `segments` file and store files\r\nare not corrupted. Otherwise, the existing store's snapshot\r\nmetadata cannot be read (using Store#snapshotStoreMetadata())\r\nand an exception is thrown\r\n(CorruptIndexException, IndexFormatTooOldException etc) which\r\nis later caught as the begining of the restore process\r\n(see RestoreContext#restore()) and is translated into\r\nan empty store metadata (Store.MetadataSnapshot.EMPTY).\r\n\r\nThis will make the deletion of different files introduced\r\nin #20220 useless as the set of files will always be empty\r\neven when store files exist on the filesystem. And if some\r\nfiles are present within the store directory, then restoring\r\na snapshot with files with same names will fail with a\r\nFileAlreadyExistException.\r\n\r\nThis is part of the #26865 issue.\r\n\r\nThere are various cases were some files could exist in the\r\n store directory before a snapshot is restored. One that\r\nIgor identified is a restore attempt that failed on a node\r\nand only first files were restored, then the shard is allocated\r\nagain to the same node and the restore starts again (but fails\r\n because of existing files). Another one is when some files\r\nof a closed index are corrupted / deleted and the index is\r\nrestored.\r\n\r\nThis commit adds a test that uses the infrastructure provided\r\nby `IndexShardTestCase` in order to test that restoring a shard\r\nsucceed even when files with same names exist on filesystem.\r\n\r\nRelated to #26865",
"number": 27476,
"review_comments": [
{
"body": "Is there a need to substract the identical ones? The filesToRecover, which we iterate over, won't contain those anyhow.",
"created_at": "2017-11-22T11:22:26Z"
},
{
"body": "I see that you fixed the check here (before it read `recoveryTargetMetadata == null` which did not make any sense as that one was always non-null). I wonder what impact this bug fix has. What change in behavior do we expect by this (it's a bit unclear to me what this check does)?",
"created_at": "2017-11-22T11:24:48Z"
},
{
"body": "I think we can just use an FsRepository for this. All our other shard-level tests do the same, so no need to optimize this. If we want to change that in the future, I think it's easier to switch to jimfs and continue using FsRepository.",
"created_at": "2017-11-22T11:39:28Z"
},
{
"body": "you can move this method (and the one below it) up to IndexShardTestCase (in test:framework). It could be useful for other people.",
"created_at": "2017-11-22T11:42:06Z"
},
{
"body": "Sadly there's no test for this and as you noticed the previous check did not make any sense. I saw it as a bug and I think that we can't restore a snapshot without segments file so I don't expect any impact for this. Maybe @imotov has more knowledge?",
"created_at": "2017-11-22T14:22:28Z"
},
{
"body": "That was a safety check I added but I agree it does not make sense, I'll remove it.",
"created_at": "2017-11-22T14:23:49Z"
},
{
"body": "I cannot really think of a scenario where snapshot would have no segments. So, removing check for recoveryTargetMetadata and replacing it with check for no snapshots shouldn't have any change in behavior expect in some pathological cases that would fail anyway. Now they will at least fail with a reasonable error message.",
"created_at": "2017-11-22T14:40:31Z"
},
{
"body": "I agree with identical point. Could you also clean the trace logging 10 lines above while you are at it? I keep forgetting to do it and it doesn't make any sense now.\r\n\r\nAlso, can anyone think of a scenario where case-insensitive FS can screw us here somehow?",
"created_at": "2017-11-22T14:44:50Z"
},
{
"body": "It's that, or we can replace replace FsRepository with this one, but we need to beef it up.",
"created_at": "2017-11-22T14:58:56Z"
},
{
"body": "Let's just use a FsRepository.",
"created_at": "2017-11-22T15:50:33Z"
},
{
"body": "> Also, can anyone think of a scenario where case-insensitive FS can screw us here somehow?\r\n\r\nI don't see any scenario like this...",
"created_at": "2017-11-22T16:14:16Z"
}
],
"title": "Delete shard store files before restoring a snapshot"
} | {
"commits": [
{
"message": "Delete shard store files before restoring a snapshot\n\nPull request #20220 added a change where the store files\nthat have the same name but are different from the ones in the\nsnapshot are deleted first before the snapshot is restored.\nThis logic was based on the `Store.RecoveryDiff.different`\nset of files which works by computing a diff between an\nexisting store and a snapshot.\n\nThis works well when the files on the filesystem form valid\nshard store, ie there's a `segments` file and store files\nare not corrupted. Otherwise, the existing store's snapshot\nmetadata cannot be read (using Store#snapshotStoreMetadata())\nand an exception is thrown\n(CorruptIndexException, IndexFormatTooOldException etc) which\nis later caught as the begining of the restore process\n(see RestoreContext#restore()) and is translated into\nan empty store metadata (Store.MetadataSnapshot.EMPTY).\n\nThis will make the deletion of different files introduced\nin #20220 useless as the set of files will always be empty\neven when store files exist on the filesystem. And if some\nfiles are present within the store directory, then restoring\na snapshot with files with same names will fail with a\nFileAlreadyExistException.\n\nThis is part of the #26865 issue.\n\nThere are various cases were some files could exist in the\n store directory before a snapshot is restored. One that\nIgor identified is a restore attempt that failed on a node\nand only first files were restored, then the shard is allocated\nagain to the same node and the restore starts again (but fails\n because of existing files). Another one is when some files\nof a closed index are corrupted / deleted and the index is\nrestored.\n\nThis commit adds a test that uses the infrastructure provided\nby IndexShardTestCase in order to test that restoring a shard\nsucceed even when files with same names exist on filesystem.\n\nRelated to #26865"
},
{
"message": "Apply feedback"
}
],
"files": [
{
"diff": "@@ -731,7 +731,7 @@ public String toString() {\n \n /**\n * Represents a snapshot of the current directory build from the latest Lucene commit.\n- * Only files that are part of the last commit are considered in this datastrucutre.\n+ * Only files that are part of the last commit are considered in this datastructure.\n * For backwards compatibility the snapshot might include legacy checksums that\n * are derived from a dedicated checksum file written by older elasticsearch version pre 1.3\n * <p>",
"filename": "core/src/main/java/org/elasticsearch/index/store/Store.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,6 @@\n import org.apache.lucene.store.RateLimiter;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n-import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.ResourceNotFoundException;\n@@ -110,6 +109,7 @@\n import java.nio.file.FileAlreadyExistsException;\n import java.nio.file.NoSuchFileException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n@@ -1451,6 +1451,9 @@ public void restore() throws IOException {\n SnapshotFiles snapshotFiles = new SnapshotFiles(snapshot.snapshot(), snapshot.indexFiles());\n Store.MetadataSnapshot recoveryTargetMetadata;\n try {\n+ // this will throw an IOException if the store has no segments infos file. The\n+ // store can still have existing files but they will be deleted just before being\n+ // restored.\n recoveryTargetMetadata = targetShard.snapshotStoreMetadata();\n } catch (IndexNotFoundException e) {\n // happens when restore to an empty shard, not a big deal\n@@ -1478,7 +1481,14 @@ public void restore() throws IOException {\n snapshotMetaData.put(fileInfo.metadata().name(), fileInfo.metadata());\n fileInfos.put(fileInfo.metadata().name(), fileInfo);\n }\n+\n final Store.MetadataSnapshot sourceMetaData = new Store.MetadataSnapshot(unmodifiableMap(snapshotMetaData), emptyMap(), 0);\n+\n+ final StoreFileMetaData restoredSegmentsFile = sourceMetaData.getSegmentsFile();\n+ if (restoredSegmentsFile == null) {\n+ throw new IndexShardRestoreFailedException(shardId, \"Snapshot has no segments file\");\n+ }\n+\n final Store.RecoveryDiff diff = sourceMetaData.recoveryDiff(recoveryTargetMetadata);\n for (StoreFileMetaData md : diff.identical) {\n BlobStoreIndexShardSnapshot.FileInfo fileInfo = fileInfos.get(md.name());\n@@ -1505,29 +1515,31 @@ public void restore() throws IOException {\n logger.trace(\"no files to recover, all exists within the local store\");\n }\n \n- if (logger.isTraceEnabled()) {\n- logger.trace(\"[{}] [{}] recovering_files [{}] with total_size [{}], reusing_files [{}] with reused_size [{}]\", shardId, snapshotId,\n- index.totalRecoverFiles(), new ByteSizeValue(index.totalRecoverBytes()), index.reusedFileCount(), new ByteSizeValue(index.reusedFileCount()));\n- }\n try {\n- // first, delete pre-existing files in the store that have the same name but are\n- // different (i.e. different length/checksum) from those being restored in the snapshot\n- for (final StoreFileMetaData storeFileMetaData : diff.different) {\n- IOUtils.deleteFiles(store.directory(), storeFileMetaData.name());\n- }\n+ // list of all existing store files\n+ final List<String> deleteIfExistFiles = Arrays.asList(store.directory().listAll());\n+\n // restore the files from the snapshot to the Lucene store\n for (final BlobStoreIndexShardSnapshot.FileInfo fileToRecover : filesToRecover) {\n+ // if a file with a same physical name already exist in the store we need to delete it\n+ // before restoring it from the snapshot. We could be lenient and try to reuse the existing\n+ // store files (and compare their names/length/checksum again with the snapshot files) but to\n+ // avoid extra complexity we simply delete them and restore them again like StoreRecovery\n+ // does with dangling indices. 
Any existing store file that is not restored from the snapshot\n+ // will be clean up by RecoveryTarget.cleanFiles().\n+ final String physicalName = fileToRecover.physicalName();\n+ if (deleteIfExistFiles.contains(physicalName)) {\n+ logger.trace(\"[{}] [{}] deleting pre-existing file [{}]\", shardId, snapshotId, physicalName);\n+ store.directory().deleteFile(physicalName);\n+ }\n+\n logger.trace(\"[{}] [{}] restoring file [{}]\", shardId, snapshotId, fileToRecover.name());\n restoreFile(fileToRecover, store);\n }\n } catch (IOException ex) {\n throw new IndexShardRestoreFailedException(shardId, \"Failed to recover index\", ex);\n }\n- final StoreFileMetaData restoredSegmentsFile = sourceMetaData.getSegmentsFile();\n- if (recoveryTargetMetadata == null) {\n- throw new IndexShardRestoreFailedException(shardId, \"Snapshot has no segments file\");\n- }\n- assert restoredSegmentsFile != null;\n+\n // read the snapshot data persisted\n final SegmentInfos segmentCommitInfos;\n try {\n@@ -1602,5 +1614,4 @@ private void restoreFile(final BlobStoreIndexShardSnapshot.FileInfo fileInfo, fi\n }\n }\n }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java",
"status": "modified"
},
{
"diff": "@@ -76,7 +76,6 @@\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n-import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n import java.util.Objects;\n@@ -189,7 +188,7 @@ public void restoreSnapshot(final RestoreRequest request, final ActionListener<R\n final SnapshotInfo snapshotInfo = repository.getSnapshotInfo(snapshotId);\n final Snapshot snapshot = new Snapshot(request.repositoryName, snapshotId);\n List<String> filteredIndices = SnapshotUtils.filterIndices(snapshotInfo.indices(), request.indices(), request.indicesOptions());\n- MetaData metaData = repository.getSnapshotMetaData(snapshotInfo, repositoryData.resolveIndices(filteredIndices));\n+ final MetaData metaData = repository.getSnapshotMetaData(snapshotInfo, repositoryData.resolveIndices(filteredIndices));\n \n // Make sure that we can restore from this snapshot\n validateSnapshotRestorable(request.repositoryName, snapshotInfo);",
"filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,143 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.repositories.blobstore;\n+\n+import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.IOUtils;\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.cluster.metadata.RepositoryMetaData;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingHelper;\n+import org.elasticsearch.common.UUIDs;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.TestEnvironment;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexShardState;\n+import org.elasticsearch.index.shard.IndexShardTestCase;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n+import org.elasticsearch.repositories.IndexId;\n+import org.elasticsearch.repositories.Repository;\n+import org.elasticsearch.repositories.fs.FsRepository;\n+import org.elasticsearch.snapshots.Snapshot;\n+import org.elasticsearch.snapshots.SnapshotId;\n+\n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.Arrays;\n+import java.util.List;\n+\n+import static org.elasticsearch.cluster.routing.RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE;\n+\n+/**\n+ * This class tests the behavior of {@link BlobStoreRepository} when it\n+ * restores a shard from a snapshot but some files with same names already\n+ * exist on disc.\n+ */\n+public class BlobStoreRepositoryRestoreTests extends IndexShardTestCase {\n+\n+ /**\n+ * Restoring a snapshot that contains multiple files must succeed even when\n+ * some files already exist in the shard's store.\n+ */\n+ public void testRestoreSnapshotWithExistingFiles() throws IOException {\n+ final IndexId indexId = new IndexId(randomAlphaOfLength(10), UUIDs.randomBase64UUID());\n+ final ShardId shardId = new ShardId(indexId.getName(), indexId.getId(), 0);\n+\n+ IndexShard shard = newShard(shardId, true);\n+ try {\n+ // index documents in the shards\n+ final int numDocs = scaledRandomIntBetween(1, 500);\n+ recoverShardFromStore(shard);\n+ for (int i = 0; i < numDocs; i++) {\n+ indexDoc(shard, \"doc\", Integer.toString(i));\n+ if (rarely()) {\n+ flushShard(shard, false);\n+ }\n+ }\n+ assertDocCount(shard, numDocs);\n+\n+ // snapshot the shard\n+ final Repository repository = createRepository();\n+ final Snapshot snapshot = new Snapshot(repository.getMetadata().name(), new SnapshotId(randomAlphaOfLength(10), \"_uuid\"));\n+ snapshotShard(shard, snapshot, repository);\n+\n+ // capture current store files\n+ 
final Store.MetadataSnapshot storeFiles = shard.snapshotStoreMetadata();\n+ assertFalse(storeFiles.asMap().isEmpty());\n+\n+ // close the shard\n+ closeShards(shard);\n+\n+ // delete some random files in the store\n+ List<String> deletedFiles = randomSubsetOf(randomIntBetween(1, storeFiles.size() - 1), storeFiles.asMap().keySet());\n+ for (String deletedFile : deletedFiles) {\n+ Files.delete(shard.shardPath().resolveIndex().resolve(deletedFile));\n+ }\n+\n+ // build a new shard using the same store directory as the closed shard\n+ ShardRouting shardRouting = ShardRoutingHelper.initWithSameId(shard.routingEntry(), EXISTING_STORE_INSTANCE);\n+ shard = newShard(shardRouting, shard.shardPath(), shard.indexSettings().getIndexMetaData(), null, null, () -> {});\n+\n+ // restore the shard\n+ recoverShardFromSnapshot(shard, snapshot, repository);\n+\n+ // check that the shard is not corrupted\n+ TestUtil.checkIndex(shard.store().directory());\n+\n+ // check that all files have been restored\n+ final Directory directory = shard.store().directory();\n+ final List<String> directoryFiles = Arrays.asList(directory.listAll());\n+\n+ for (StoreFileMetaData storeFile : storeFiles) {\n+ String fileName = storeFile.name();\n+ assertTrue(\"File [\" + fileName + \"] does not exist in store directory\", directoryFiles.contains(fileName));\n+ assertEquals(storeFile.length(), shard.store().directory().fileLength(fileName));\n+ }\n+ } finally {\n+ if (shard != null && shard.state() != IndexShardState.CLOSED) {\n+ try {\n+ shard.close(\"test\", false);\n+ } finally {\n+ IOUtils.close(shard.store());\n+ }\n+ }\n+ }\n+ }\n+\n+ /** Create a {@link Repository} with a random name **/\n+ private Repository createRepository() throws IOException {\n+ Settings settings = Settings.builder().put(\"location\", randomAlphaOfLength(10)).build();\n+ RepositoryMetaData repositoryMetaData = new RepositoryMetaData(randomAlphaOfLength(10), FsRepository.TYPE, settings);\n+ return new FsRepository(repositoryMetaData, createEnvironment(), xContentRegistry());\n+ }\n+\n+ /** Create a {@link Environment} with random path.home and path.repo **/\n+ private Environment createEnvironment() {\n+ Path home = createTempDir();\n+ return TestEnvironment.newEnvironment(Settings.builder()\n+ .put(Environment.PATH_HOME_SETTING.getKey(), home.toAbsolutePath())\n+ .put(Environment.PATH_REPO_SETTING.getKey(), home.resolve(\"repo\").toAbsolutePath())\n+ .build());\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/repositories/blobstore/BlobStoreRepositoryRestoreTests.java",
"status": "added"
},
{
"diff": "@@ -46,6 +46,7 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.MapperTestUtils;\n import org.elasticsearch.index.VersionType;\n@@ -60,6 +61,7 @@\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.seqno.SequenceNumbers;\n import org.elasticsearch.index.similarity.SimilarityService;\n+import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;\n import org.elasticsearch.index.store.DirectoryService;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.recovery.PeerRecoveryTargetService;\n@@ -69,6 +71,9 @@\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.indices.recovery.StartRecoveryRequest;\n import org.elasticsearch.node.Node;\n+import org.elasticsearch.repositories.IndexId;\n+import org.elasticsearch.repositories.Repository;\n+import org.elasticsearch.snapshots.Snapshot;\n import org.elasticsearch.test.DummyShardLock;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.threadpool.TestThreadPool;\n@@ -85,6 +90,7 @@\n import java.util.function.BiFunction;\n import java.util.function.Consumer;\n \n+import static org.elasticsearch.cluster.routing.TestShardRouting.newShardRouting;\n import static org.hamcrest.Matchers.contains;\n import static org.hamcrest.Matchers.hasSize;\n \n@@ -583,6 +589,38 @@ protected void flushShard(IndexShard shard, boolean force) {\n shard.flush(new FlushRequest(shard.shardId().getIndexName()).force(force));\n }\n \n+ /** Recover a shard from a snapshot using a given repository **/\n+ protected void recoverShardFromSnapshot(final IndexShard shard,\n+ final Snapshot snapshot,\n+ final Repository repository) throws IOException {\n+ final Version version = Version.CURRENT;\n+ final ShardId shardId = shard.shardId();\n+ final String index = shardId.getIndexName();\n+ final IndexId indexId = new IndexId(shardId.getIndex().getName(), shardId.getIndex().getUUID());\n+ final DiscoveryNode node = getFakeDiscoNode(shard.routingEntry().currentNodeId());\n+ final RecoverySource.SnapshotRecoverySource recoverySource = new RecoverySource.SnapshotRecoverySource(snapshot, version, index);\n+ final ShardRouting shardRouting = newShardRouting(shardId, node.getId(), true, recoverySource, ShardRoutingState.INITIALIZING);\n+\n+ shard.markAsRecovering(\"from snapshot\", new RecoveryState(shardRouting, node, null));\n+ repository.restoreShard(shard, snapshot.getSnapshotId(), version, indexId, shard.shardId(), shard.recoveryState());\n+ }\n+\n+ /** Snapshot a shard using a given repository **/\n+ protected void snapshotShard(final IndexShard shard,\n+ final Snapshot snapshot,\n+ final Repository repository) throws IOException {\n+ final IndexShardSnapshotStatus snapshotStatus = new IndexShardSnapshotStatus();\n+ try (Engine.IndexCommitRef indexCommitRef = shard.acquireIndexCommit(true)) {\n+ Index index = shard.shardId().getIndex();\n+ IndexId indexId = new IndexId(index.getName(), index.getUUID());\n+\n+ repository.snapshotShard(shard, snapshot.getSnapshotId(), indexId, indexCommitRef.getIndexCommit(), snapshotStatus);\n+ }\n+ assertEquals(IndexShardSnapshotStatus.Stage.DONE, snapshotStatus.stage());\n+ assertEquals(shard.snapshotStoreMetadata().size(), snapshotStatus.numberOfFiles());\n+ assertNull(snapshotStatus.failure());\n+ }\n+\n 
/**\n * Helper method to access (package-protected) engine from tests\n */",
"filename": "test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java",
"status": "modified"
}
]
} |
{
"body": "This commit addresses a subtle bug in the serialization routine for resync requests. The problem here is that Translog.Operation#readType is not compatible with the implementations of Translog.Operation#writeTo. Unfortunately, this issue prevents primary-replica from succeeding, issues which we will address in follow-ups.\r\n\r\nRelates #24841\r\n\r\n",
"comments": [
{
"body": "@dakrone Thanks for the review; I pushed another commit pursuing a good idea you had to add assertions taking it further and completely removing the `Writable` interface from `Translog.Operation` and making private the dangerous constructor and write methods. Would you mind taking another look?",
"created_at": "2017-11-17T01:16:49Z"
}
],
"number": 27418,
"title": "Fix resync request serialization"
} | {
"body": "Today we do not fail a replica shard if the primary-replica resync to that replica fails. Yet, we should at least log the failure messages. This commit causes this to be the case.\r\n\r\nRelates #24841, relates #27418",
"number": 27421,
"review_comments": [],
"title": "Log primary-replica resync failures"
} | {
"commits": [
{
"message": "Log primary-replica resync failures\n\nToday we do not fail a replica shard if the primary-replica resync to\nthat replica fails. Yet, we should at least log the failure\nmessages. This commit causes this to be the case."
},
{
"message": "warn -> info"
},
{
"message": "annotation"
}
],
"files": [
{
"diff": "@@ -18,11 +18,13 @@\n */\n package org.elasticsearch.action.resync;\n \n+import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.TransportActions;\n import org.elasticsearch.action.support.replication.ReplicationOperation;\n+import org.elasticsearch.action.support.replication.ReplicationResponse;\n import org.elasticsearch.action.support.replication.TransportReplicationAction;\n import org.elasticsearch.action.support.replication.TransportWriteAction;\n import org.elasticsearch.cluster.action.shard.ShardStateAction;\n@@ -158,6 +160,15 @@ public String executor() {\n \n @Override\n public void handleResponse(ResyncReplicationResponse response) {\n+ final ReplicationResponse.ShardInfo.Failure[] failures = response.getShardInfo().getFailures();\n+ // noinspection ForLoopReplaceableByForEach\n+ for (int i = 0; i < failures.length; i++) {\n+ final ReplicationResponse.ShardInfo.Failure f = failures[i];\n+ logger.info(\n+ new ParameterizedMessage(\n+ \"{} primary-replica resync to replica on node [{}] failed\", f.fullShardId(), f.nodeId()),\n+ f.getCause());\n+ }\n listener.onResponse(response);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/resync/TransportResyncReplicationAction.java",
"status": "modified"
}
]
} |
{
"body": "The string_distance option for suggesters has 5 options. The default is ```demarau_levenshtein```. That's okay. One of the other options is the more classic ```levenstein```. That's without the `h`. I think the Levenshtein of Demarau-Levenshtein is the same dude who came up with the LevensHtein distance.\r\n\r\nI thought maybe it was just a typo in the documentation, but I just tested it on 6.0.0-rc1.\r\n\r\n<img width=\"852\" alt=\"screen shot 2017-11-08 at 8 03 23 pm\" src=\"https://user-images.githubusercontent.com/29132388/32587956-6ec27d9c-c4c0-11e7-936a-cab44907fbb4.png\">\r\n\r\n\r\nI'm guessing not many people use this option so nobody noticed.",
"comments": [],
"number": 27325,
"title": "Suggester string_distance option typo in code"
} | {
"body": "Fixes #27325",
"number": 27409,
"review_comments": [
{
"body": "It seems a shame to leave this TODO here. Would you care to fix that too?",
"created_at": "2017-11-18T10:44:11Z"
},
{
"body": "No scope creep please. :smile:",
"created_at": "2017-11-18T12:05:44Z"
},
{
"body": "I might be missing something but I do not see a test that `resolveDistance(\"levenshtein\")` does the right thing and does not produce a deprecation warning?",
"created_at": "2017-11-18T12:23:26Z"
},
{
"body": "Good catch! ",
"created_at": "2017-11-18T17:25:34Z"
}
],
"title": "Deprecating `levenstein` in favor of `levensHtein`"
} | {
"commits": [
{
"message": "replacing levenstein with levensHtein"
},
{
"message": "deprecate `levestein`"
},
{
"message": "added tests"
},
{
"message": "Merge branch 'master' into pr/27409\n\n* master: (41 commits)\n [Test] Fix AggregationsTests#testFromXContentWithRandomFields\n [DOC] Fix mathematical representation on interval (range) (#27450)\n Update version check for CCS optional remote clusters\n Bump BWC version to 6.1.0 for #27469\n Adapt rest test BWC version after backport\n Fix dynamic mapping update generation. (#27467)\n Use the primary_term field to identify parent documents (#27469)\n Move composite aggregation to core (#27474)\n Fix test BWC version after backport\n Protect shard splitting from illegal target shards (#27468)\n Cross Cluster Search: make remote clusters optional (#27182)\n [Docs] Fix broken bulleted lists (#27470)\n Move resync request serialization assertion\n Fix resync request serialization\n Fix issue where pages aren't released (#27459)\n Add YAML REST tests for filters bucket agg (#27128)\n Remove tcp profile from low level nio channel (#27441)\n [TEST] Fix `GeoShapeQueryTests#testPointsOnly` failure\n Transition transport apis to use void listeners (#27440)\n AwaitsFix GeoShapeQueryTests#testPointsOnly #27454\n ..."
}
],
"files": [
{
"diff": "@@ -31,6 +31,8 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.xcontent.ConstructingObjectParser;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -45,6 +47,9 @@\n \n public final class DirectCandidateGeneratorBuilder implements CandidateGenerator {\n \n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(\n+ Loggers.getLogger(DirectCandidateGeneratorBuilder.class));\n+\n private static final String TYPE = \"direct_generator\";\n \n public static final ParseField DIRECT_GENERATOR_FIELD = new ParseField(TYPE);\n@@ -211,8 +216,8 @@ String sort() {\n * string distance for terms inside the index.\n * <li><code>damerau_levenshtein</code> - String distance algorithm\n * based on Damerau-Levenshtein algorithm.\n- * <li><code>levenstein</code> - String distance algorithm based on\n- * Levenstein edit distance algorithm.\n+ * <li><code>levenshtein</code> - String distance algorithm based on\n+ * Levenshtein edit distance algorithm.\n * <li><code>jarowinkler</code> - String distance algorithm based on\n * Jaro-Winkler algorithm.\n * <li><code>ngram</code> - String distance algorithm based on character\n@@ -458,13 +463,16 @@ private static SuggestMode resolveSuggestMode(String suggestMode) {\n }\n }\n \n- private static StringDistance resolveDistance(String distanceVal) {\n+ static StringDistance resolveDistance(String distanceVal) {\n distanceVal = distanceVal.toLowerCase(Locale.US);\n if (\"internal\".equals(distanceVal)) {\n return DirectSpellChecker.INTERNAL_LEVENSHTEIN;\n } else if (\"damerau_levenshtein\".equals(distanceVal) || \"damerauLevenshtein\".equals(distanceVal)) {\n return new LuceneLevenshteinDistance();\n } else if (\"levenstein\".equals(distanceVal)) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated distance [levenstein] used, replaced by [levenshtein]\");\n+ return new LevensteinDistance();\n+ } else if (\"levenshtein\".equals(distanceVal)) {\n return new LevensteinDistance();\n // TODO Jaro and Winkler are 2 people - so apply same naming logic\n // as damerau_levenshtein",
"filename": "core/src/main/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,8 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryShardContext;\n@@ -66,6 +68,9 @@\n * global options, but are only applicable for this suggestion.\n */\n public class TermSuggestionBuilder extends SuggestionBuilder<TermSuggestionBuilder> {\n+\n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(Loggers.getLogger(TermSuggestionBuilder.class));\n+\n private static final String SUGGESTION_NAME = \"term\";\n \n private SuggestMode suggestMode = SuggestMode.MISSING;\n@@ -214,8 +219,8 @@ public SortBy sort() {\n * string distance for terms inside the index.\n * <li><code>damerau_levenshtein</code> - String distance algorithm based on\n * Damerau-Levenshtein algorithm.\n- * <li><code>levenstein</code> - String distance algorithm based on\n- * Levenstein edit distance algorithm.\n+ * <li><code>levenshtein</code> - String distance algorithm based on\n+ * Levenshtein edit distance algorithm.\n * <li><code>jarowinkler</code> - String distance algorithm based on\n * Jaro-Winkler algorithm.\n * <li><code>ngram</code> - String distance algorithm based on character\n@@ -543,8 +548,8 @@ public StringDistance toLucene() {\n return new LuceneLevenshteinDistance();\n }\n },\n- /** String distance algorithm based on Levenstein edit distance algorithm. */\n- LEVENSTEIN {\n+ /** String distance algorithm based on Levenshtein edit distance algorithm. */\n+ LEVENSHTEIN {\n @Override\n public StringDistance toLucene() {\n return new LevensteinDistance();\n@@ -584,7 +589,10 @@ public static StringDistanceImpl resolve(final String str) {\n case \"damerauLevenshtein\":\n return DAMERAU_LEVENSHTEIN;\n case \"levenstein\":\n- return LEVENSTEIN;\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated distance [levenstein] used, replaced by [levenshtein]\");\n+ return LEVENSHTEIN;\n+ case \"levenshtein\":\n+ return LEVENSHTEIN;\n case \"ngram\":\n return NGRAM;\n case \"jarowinkler\":",
"filename": "core/src/main/java/org/elasticsearch/search/suggest/term/TermSuggestionBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,11 @@\n \n package org.elasticsearch.search.suggest.phrase;\n \n+import org.apache.lucene.search.spell.DirectSpellChecker;\n+import org.apache.lucene.search.spell.JaroWinklerDistance;\n+import org.apache.lucene.search.spell.LevensteinDistance;\n+import org.apache.lucene.search.spell.LuceneLevenshteinDistance;\n+import org.apache.lucene.search.spell.NGramDistance;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -38,6 +43,8 @@\n import java.util.function.Supplier;\n \n import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.core.IsInstanceOf.instanceOf;\n \n public class DirectCandidateGeneratorTests extends ESTestCase {\n private static final int NUMBER_OF_RUNS = 20;\n@@ -65,6 +72,22 @@ public void testEqualsAndHashcode() throws IOException {\n }\n }\n \n+ public void testFromString() {\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"internal\"), equalTo(DirectSpellChecker.INTERNAL_LEVENSHTEIN));\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"damerau_levenshtein\"), instanceOf(LuceneLevenshteinDistance.class));\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"levenshtein\"), instanceOf(LevensteinDistance.class));\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"jaroWinkler\"), instanceOf(JaroWinklerDistance.class));\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"ngram\"), instanceOf(NGramDistance.class));\n+\n+ expectThrows(IllegalArgumentException.class, () -> DirectCandidateGeneratorBuilder.resolveDistance(\"doesnt_exist\"));\n+ expectThrows(NullPointerException.class, () -> DirectCandidateGeneratorBuilder.resolveDistance(null));\n+ }\n+\n+ public void testLevensteinDeprecation() {\n+ assertThat(DirectCandidateGeneratorBuilder.resolveDistance(\"levenstein\"), instanceOf(LevensteinDistance.class));\n+ assertWarnings(\"Deprecated distance [levenstein] used, replaced by [levenshtein]\");\n+ }\n+\n private static DirectCandidateGeneratorBuilder mutate(DirectCandidateGeneratorBuilder original) throws IOException {\n DirectCandidateGeneratorBuilder mutation = copy(original);\n List<Supplier<DirectCandidateGeneratorBuilder>> mutators = new ArrayList<>();\n@@ -89,7 +112,7 @@ private static DirectCandidateGeneratorBuilder mutate(DirectCandidateGeneratorBu\n mutators.add(() -> mutation.preFilter(original.preFilter() == null ? \"preFilter\" : original.preFilter() + \"_other\"));\n mutators.add(() -> mutation.sort(original.sort() == null ? \"score\" : original.sort() + \"_other\"));\n mutators.add(\n- () -> mutation.stringDistance(original.stringDistance() == null ? \"levenstein\" : original.stringDistance() + \"_other\"));\n+ () -> mutation.stringDistance(original.stringDistance() == null ? \"levenshtein\" : original.stringDistance() + \"_other\"));\n mutators.add(() -> mutation.suggestMode(original.suggestMode() == null ? 
\"missing\" : original.suggestMode() + \"_other\"));\n return randomFrom(mutators).get();\n }\n@@ -189,7 +212,7 @@ public static DirectCandidateGeneratorBuilder randomCandidateGenerator() {\n maybeSet(generator::postFilter, randomAlphaOfLengthBetween(1, 20));\n maybeSet(generator::size, randomIntBetween(1, 20));\n maybeSet(generator::sort, randomFrom(\"score\", \"frequency\"));\n- maybeSet(generator::stringDistance, randomFrom(\"internal\", \"damerau_levenshtein\", \"levenstein\", \"jarowinkler\", \"ngram\"));\n+ maybeSet(generator::stringDistance, randomFrom(\"internal\", \"damerau_levenshtein\", \"levenshtein\", \"jarowinkler\", \"ngram\"));\n maybeSet(generator::suggestMode, randomFrom(\"missing\", \"popular\", \"always\"));\n return generator;\n }",
"filename": "core/src/test/java/org/elasticsearch/search/suggest/phrase/DirectCandidateGeneratorTests.java",
"status": "modified"
},
{
"diff": "@@ -20,10 +20,10 @@\n package org.elasticsearch.search.suggest.term;\n \n import org.elasticsearch.common.io.stream.AbstractWriteableEnumTestCase;\n+import org.elasticsearch.search.suggest.term.TermSuggestionBuilder.StringDistanceImpl;\n \n import java.io.IOException;\n \n-import static org.elasticsearch.search.suggest.term.TermSuggestionBuilder.StringDistanceImpl;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -38,7 +38,7 @@ public StringDistanceImplTests() {\n public void testValidOrdinals() {\n assertThat(StringDistanceImpl.INTERNAL.ordinal(), equalTo(0));\n assertThat(StringDistanceImpl.DAMERAU_LEVENSHTEIN.ordinal(), equalTo(1));\n- assertThat(StringDistanceImpl.LEVENSTEIN.ordinal(), equalTo(2));\n+ assertThat(StringDistanceImpl.LEVENSHTEIN.ordinal(), equalTo(2));\n assertThat(StringDistanceImpl.JAROWINKLER.ordinal(), equalTo(3));\n assertThat(StringDistanceImpl.NGRAM.ordinal(), equalTo(4));\n }\n@@ -47,28 +47,27 @@ public void testValidOrdinals() {\n public void testFromString() {\n assertThat(StringDistanceImpl.resolve(\"internal\"), equalTo(StringDistanceImpl.INTERNAL));\n assertThat(StringDistanceImpl.resolve(\"damerau_levenshtein\"), equalTo(StringDistanceImpl.DAMERAU_LEVENSHTEIN));\n- assertThat(StringDistanceImpl.resolve(\"levenstein\"), equalTo(StringDistanceImpl.LEVENSTEIN));\n+ assertThat(StringDistanceImpl.resolve(\"levenshtein\"), equalTo(StringDistanceImpl.LEVENSHTEIN));\n assertThat(StringDistanceImpl.resolve(\"jarowinkler\"), equalTo(StringDistanceImpl.JAROWINKLER));\n assertThat(StringDistanceImpl.resolve(\"ngram\"), equalTo(StringDistanceImpl.NGRAM));\n+\n final String doesntExist = \"doesnt_exist\";\n- try {\n- StringDistanceImpl.resolve(doesntExist);\n- fail(\"StringDistanceImpl should not have an element \" + doesntExist);\n- } catch (IllegalArgumentException e) {\n- }\n- try {\n- StringDistanceImpl.resolve(null);\n- fail(\"StringDistanceImpl.resolve on a null value should throw an exception.\");\n- } catch (NullPointerException e) {\n- assertThat(e.getMessage(), equalTo(\"Input string is null\"));\n- }\n+ expectThrows(IllegalArgumentException.class, () -> StringDistanceImpl.resolve(doesntExist)); \n+ \n+ NullPointerException e = expectThrows(NullPointerException.class, () -> StringDistanceImpl.resolve(null));\n+ assertThat(e.getMessage(), equalTo(\"Input string is null\"));\n+ }\n+\n+ public void testLevensteinDeprecation() {\n+ assertThat(StringDistanceImpl.resolve(\"levenstein\"), equalTo(StringDistanceImpl.LEVENSHTEIN));\n+ assertWarnings(\"Deprecated distance [levenstein] used, replaced by [levenshtein]\");\n }\n \n @Override\n public void testWriteTo() throws IOException {\n assertWriteToStream(StringDistanceImpl.INTERNAL, 0);\n assertWriteToStream(StringDistanceImpl.DAMERAU_LEVENSHTEIN, 1);\n- assertWriteToStream(StringDistanceImpl.LEVENSTEIN, 2);\n+ assertWriteToStream(StringDistanceImpl.LEVENSHTEIN, 2);\n assertWriteToStream(StringDistanceImpl.JAROWINKLER, 3);\n assertWriteToStream(StringDistanceImpl.NGRAM, 4);\n }\n@@ -77,7 +76,7 @@ public void testWriteTo() throws IOException {\n public void testReadFrom() throws IOException {\n assertReadFromStream(0, StringDistanceImpl.INTERNAL);\n assertReadFromStream(1, StringDistanceImpl.DAMERAU_LEVENSHTEIN);\n- assertReadFromStream(2, StringDistanceImpl.LEVENSTEIN);\n+ assertReadFromStream(2, StringDistanceImpl.LEVENSHTEIN);\n assertReadFromStream(3, StringDistanceImpl.JAROWINKLER);\n assertReadFromStream(4, StringDistanceImpl.NGRAM);\n }",
"filename": "core/src/test/java/org/elasticsearch/search/suggest/term/StringDistanceImplTests.java",
"status": "modified"
},
{
"diff": "@@ -99,7 +99,7 @@ private static StringDistanceImpl randomStringDistance() {\n switch (randomVal) {\n case 0: return StringDistanceImpl.INTERNAL;\n case 1: return StringDistanceImpl.DAMERAU_LEVENSHTEIN;\n- case 2: return StringDistanceImpl.LEVENSTEIN;\n+ case 2: return StringDistanceImpl.LEVENSHTEIN;\n case 3: return StringDistanceImpl.JAROWINKLER;\n case 4: return StringDistanceImpl.NGRAM;\n default: throw new IllegalArgumentException(\"No string distance algorithm with an ordinal of \" + randomVal);",
"filename": "core/src/test/java/org/elasticsearch/search/suggest/term/TermSuggestionBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -116,7 +116,7 @@ doesn't take the query into account that is part of request.\n for comparing string distance for terms inside the index.\n `damerau_levenshtein` - String distance algorithm based on\n Damerau-Levenshtein algorithm.\n- `levenstein` - String distance algorithm based on Levenstein edit distance\n+ `levenshtein` - String distance algorithm based on Levenshtein edit distance\n algorithm.\n `jarowinkler` - String distance algorithm based on Jaro-Winkler algorithm.\n `ngram` - String distance algorithm based on character n-grams.",
"filename": "docs/reference/search/suggesters/term-suggest.asciidoc",
"status": "modified"
}
]
} |
{
"body": "Making this a PR rather than just a straight push because I've had to change the version calculation logic and would like reviews first.",
"comments": [
{
"body": "The build fails with this:\r\n\r\n```\r\n16:44:27 Resource missing. [HTTP GET: https://repo1.maven.org/maven2/org/elasticsearch/distribution/zip/elasticsearch/5.6.5-SNAPSHOT/maven-metadata.xml]\r\n16:44:27 Resource missing. [HTTP GET: https://repo1.maven.org/maven2/org/elasticsearch/distribution/zip/elasticsearch/5.6.5-SNAPSHOT/elasticsearch-5.6.5-SNAPSHOT.pom]\r\n16:44:27 Resource missing. [HTTP HEAD: https://repo1.maven.org/maven2/org/elasticsearch/distribution/zip/elasticsearch/5.6.5-SNAPSHOT/elasticsearch-5.6.5-SNAPSHOT.zip]\r\n```\r\n\r\nIt looks like it's (erroneously) trying to get 5.6.5-SNAPSHOT from maven rather than checking it out and building it locally. I do not (yet) know where the code to do this is. Hints appreciated.",
"created_at": "2017-11-14T17:31:13Z"
}
],
"number": 27386,
"title": "Bump version to 6.0.1 on the 6.0 branch"
} | {
"body": "Mainly opening this PR to get a full CI run before merging, as it's mostly a port of #27386 to the 6.x branch. The changes to `VersionUtils#resolveReleasedVersions()` and associated tests deserve attention, and ultimately backporting to 6.0 too.\r\n\r\nI'd wait for CI to pass before looking at this too hard.",
"number": 27402,
"review_comments": [
{
"body": "why do we remove this assertion, I think it's valid?",
"created_at": "2017-11-16T10:30:51Z"
},
{
"body": "Sorry, yes, you're right it's valid. There's a bug somewhere. Hunting it now...",
"created_at": "2017-11-16T10:46:27Z"
}
],
"title": "Bump version to 6.0.1 on the 6.x branch"
} | {
"commits": [
{
"message": "Prepare for bump to 6.0.1 on the 6.0 branch (#27386)\n\nAn attempt to bump to 6.0.1 on the 6.0 branch exposed up a handful of issues that this commit fixes. One of those fixes is a terrible hack that will be fixed more thoroughly in #27397, and another is a back port of d5e56c55553291682932d7e9e8dc7068f59e618b which is related to #27251."
},
{
"message": "Bump version to 6.0.1"
},
{
"message": "Remove big horrible hack\n\nVersion.CURRENT.minimumCompatibilityVersion() is now Version.V_5_6_0 anyway."
},
{
"message": "Fix assertion message\n\nThe preceding line modifies `versions` so `versions.get(versions.size() - 1)`\nno longer contains the highest version."
},
{
"message": "Rewrite the guts of VersionUtils.resolveReleasedVersions()\n\nThe policy for which version constants are unreleased changed at 5.6 - we now\nplan to keep an unreleased 'marker' version at the end of each minor branch,\nwhereas prior to 5.6 the unreleased marker version was removed during the\nversion bumping.\n\nThis commit simplifies the logic a bit and also makes it clear which bits can\nbe removed once 5.x versions are no longer relevant."
},
{
"message": "orly"
},
{
"message": "Improve comments"
},
{
"message": "Even better comments"
},
{
"message": "rly!\n\nActually this shouldn't fail on the 6.x branch - it's asserting that 7.0.0 is\ncompatible with 6.1.0 but not with anything on the 6.0.x series."
},
{
"message": "Extract method"
},
{
"message": "Move reflection method to Version class"
},
{
"message": "Version#minimumCompatibilityVersion() now searches for the last minor of the previous major"
},
{
"message": "This assertion is the wrong one"
},
{
"message": "No need for special case for 5.6<->6.x any more"
},
{
"message": "Add javadoc"
},
{
"message": "Fix comment & extra whitespace"
},
{
"message": "Make things final"
},
{
"message": "Test classes need to be public for reflection to work"
},
{
"message": "Revert \"Bump version to 6.0.1\" - will be committed separately\n\nThis reverts commit a46d44ec56a23dfad16a221ce6b945134ba27bac."
}
],
"files": [
{
"diff": "@@ -81,6 +81,7 @@ List<Version> versions = []\n // keep track of the previous major version's last minor, so we know where wire compat begins\n int prevMinorIndex = -1 // index in the versions list of the last minor from the prev major\n int lastPrevMinor = -1 // the minor version number from the prev major we most recently seen\n+int prevBugfixIndex = -1 // index in the versions list of the last bugfix release from the prev major\n for (String line : versionLines) {\n /* Note that this skips alphas and betas which is fine because they aren't\n * compatible with anything. */\n@@ -97,12 +98,19 @@ for (String line : versionLines) {\n prevMinorIndex = versions.size() - 1\n lastPrevMinor = minor\n }\n+ if (major == prevMajor) {\n+ prevBugfixIndex = versions.size() - 1\n+ }\n }\n }\n if (versions.toSorted { it.id } != versions) {\n println \"Versions: ${versions}\"\n throw new GradleException(\"Versions.java contains out of order version constants\")\n }\n+if (prevBugfixIndex != -1) {\n+ versions[prevBugfixIndex] = new Version(\n+ versions[prevBugfixIndex].major, versions[prevBugfixIndex].minor, versions[prevBugfixIndex].bugfix, true)\n+}\n if (currentVersion.bugfix == 0) {\n // If on a release branch, after the initial release of that branch, the bugfix version will\n // be bumped, and will be != 0. On master and N.x branches, we want to test against the\n@@ -248,6 +256,11 @@ subprojects {\n ext.projectSubstitutions[\"org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-1]}\"] = ':distribution:bwc-release-snapshot'\n ext.projectSubstitutions[\"org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-1]}\"] = ':distribution:bwc-release-snapshot'\n }\n+ } else if (indexCompatVersions[-2].snapshot) {\n+ /* This is a terrible hack for the bump to 6.0.1 which will be fixed by #27397 */\n+ ext.projectSubstitutions[\"org.elasticsearch.distribution.deb:elasticsearch:${indexCompatVersions[-2]}\"] = ':distribution:bwc-release-snapshot'\n+ ext.projectSubstitutions[\"org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-2]}\"] = ':distribution:bwc-release-snapshot'\n+ ext.projectSubstitutions[\"org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-2]}\"] = ':distribution:bwc-release-snapshot'\n }\n project.afterEvaluate {\n configurations.all {",
"filename": "build.gradle",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,11 @@\n import org.elasticsearch.monitor.jvm.JvmInfo;\n \n import java.io.IOException;\n+import java.lang.reflect.Field;\n+import java.lang.reflect.Modifier;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n public class Version implements Comparable<Version> {\n /*\n@@ -351,19 +356,23 @@ public int compareTo(Version other) {\n * is a beta or RC release then the version itself is returned.\n */\n public Version minimumCompatibilityVersion() {\n- final int bwcMajor;\n- final int bwcMinor;\n- if (major == 6) { // we only specialize for current major here\n- bwcMajor = Version.V_5_6_0.major;\n- bwcMinor = Version.V_5_6_0.minor;\n- } else if (major > 6) { // all the future versions are compatible with first minor...\n- bwcMajor = major -1;\n- bwcMinor = 0;\n- } else {\n- bwcMajor = major;\n- bwcMinor = 0;\n+ if (major >= 6) {\n+ // all major versions from 6 onwards are compatible with last minor series of the previous major\n+ final List<Version> declaredVersions = getDeclaredVersions(getClass());\n+ Version bwcVersion = null;\n+ for (int i = declaredVersions.size() - 1; i >= 0; i--) {\n+ final Version candidateVersion = declaredVersions.get(i);\n+ if (candidateVersion.major == major - 1 && candidateVersion.isRelease() && after(candidateVersion)) {\n+ if (bwcVersion != null && candidateVersion.minor < bwcVersion.minor) {\n+ break;\n+ }\n+ bwcVersion = candidateVersion;\n+ }\n+ }\n+ return bwcVersion == null ? this : bwcVersion;\n }\n- return Version.min(this, fromId(bwcMajor * 1000000 + bwcMinor * 10000 + 99));\n+\n+ return Version.min(this, fromId((int) major * 1000000 + 0 * 10000 + 99));\n }\n \n /**\n@@ -471,4 +480,34 @@ public boolean isRC() {\n public boolean isRelease() {\n return build == 99;\n }\n+\n+ /**\n+ * Extracts a sorted list of declared version constants from a class.\n+ * The argument would normally be Version.class but is exposed for\n+ * testing with other classes-containing-version-constants.\n+ */\n+ public static List<Version> getDeclaredVersions(final Class<?> versionClass) {\n+ final Field[] fields = versionClass.getFields();\n+ final List<Version> versions = new ArrayList<>(fields.length);\n+ for (final Field field : fields) {\n+ final int mod = field.getModifiers();\n+ if (false == Modifier.isStatic(mod) && Modifier.isFinal(mod) && Modifier.isPublic(mod)) {\n+ continue;\n+ }\n+ if (field.getType() != Version.class) {\n+ continue;\n+ }\n+ if (\"CURRENT\".equals(field.getName())) {\n+ continue;\n+ }\n+ assert field.getName().matches(\"V(_\\\\d+)+(_(alpha|beta|rc)\\\\d+)?\") : field.getName();\n+ try {\n+ versions.add(((Version) field.get(null)));\n+ } catch (final IllegalAccessException e) {\n+ throw new RuntimeException(e);\n+ }\n+ }\n+ Collections.sort(versions);\n+ return versions;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/Version.java",
"status": "modified"
},
{
"diff": "@@ -337,7 +337,6 @@ public void testIsCompatible() {\n assertTrue(isCompatible(Version.V_5_6_0, Version.V_6_0_0_alpha2));\n assertFalse(isCompatible(Version.fromId(2000099), Version.V_6_0_0_alpha2));\n assertFalse(isCompatible(Version.fromId(2000099), Version.V_5_0_0));\n- assertTrue(isCompatible(Version.fromString(\"6.0.0\"), Version.fromString(\"7.0.0\")));\n if (Version.CURRENT.isRelease()) {\n assertTrue(isCompatible(Version.CURRENT, Version.fromString(\"7.0.0\")));\n } else {",
"filename": "core/src/test/java/org/elasticsearch/VersionTests.java",
"status": "modified"
},
{
"diff": "@@ -23,10 +23,7 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.collect.Tuple;\n \n-import java.lang.reflect.Field;\n-import java.lang.reflect.Modifier;\n import java.util.ArrayList;\n-import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n import java.util.Optional;\n@@ -49,72 +46,64 @@ public class VersionUtils {\n * guarantees in v1 and versions without the guranteees in v2\n */\n static Tuple<List<Version>, List<Version>> resolveReleasedVersions(Version current, Class<?> versionClass) {\n- Field[] fields = versionClass.getFields();\n- List<Version> versions = new ArrayList<>(fields.length);\n- for (final Field field : fields) {\n- final int mod = field.getModifiers();\n- if (false == Modifier.isStatic(mod) && Modifier.isFinal(mod) && Modifier.isPublic(mod)) {\n- continue;\n- }\n- if (field.getType() != Version.class) {\n- continue;\n- }\n- if (\"CURRENT\".equals(field.getName())) {\n- continue;\n- }\n- assert field.getName().matches(\"V(_\\\\d+)+(_(alpha|beta|rc)\\\\d+)?\") : field.getName();\n- try {\n- versions.add(((Version) field.get(null)));\n- } catch (final IllegalAccessException e) {\n- throw new RuntimeException(e);\n- }\n- }\n- Collections.sort(versions);\n+ List<Version> versions = Version.getDeclaredVersions(versionClass);\n Version last = versions.remove(versions.size() - 1);\n assert last.equals(current) : \"The highest version must be the current one \"\n- + \"but was [\" + versions.get(versions.size() - 1) + \"] and current was [\" + current + \"]\";\n-\n- if (current.revision != 0) {\n- /* If we are in a stable branch there should be no unreleased version constants\n- * because we don't expect to release any new versions in older branches. If there\n- * are extra constants then gradle will yell about it. */\n+ + \"but was [\" + last + \"] and current was [\" + current + \"]\";\n+\n+ /* In the 5.x series prior to 5.6, unreleased version constants had an\n+ * `_UNRELEASED` suffix, and when making the first release on a minor release\n+ * branch the last, unreleased, version constant from the previous minor branch\n+ * was dropped. After 5.6, there is no `_UNRELEASED` suffix on version constants'\n+ * names and, additionally, they are not dropped when a new minor release branch\n+ * starts.\n+ *\n+ * This means that in 6.x and later series the last release _in each\n+ * minor branch_ is unreleased, whereas in 5.x it's more complicated: There were\n+ * (sometimes, and sometimes multiple) minor branches containing no releases, each\n+ * of which contains a single version constant of the form 5.n.0, and these\n+ * branches always followed a branch that _did_ contain a version of the\n+ * form 5.m.p (p>0). All versions strictly before the last 5.m version are released,\n+ * and all other 5.* versions are unreleased.\n+ */\n+\n+ if (current.major == 5 && current.revision != 0) {\n+ /* The current (i.e. latest) version is 5.a.b, b nonzero, which\n+ * means that all other versions are released. */\n return new Tuple<>(unmodifiableList(versions), singletonList(current));\n }\n \n- /* If we are on a patch release then we know that at least the version before the\n- * current one is unreleased. If it is released then gradle would be complaining. */\n- int unreleasedIndex = versions.size() - 1;\n- while (true) {\n- if (unreleasedIndex < 0) {\n- throw new IllegalArgumentException(\"Couldn't find first non-alpha release\");\n- }\n- /* We don't support backwards compatibility for alphas, betas, and rcs. 
But\n- * they were released so we add them to the released list. Usually this doesn't\n- * matter to consumers, but consumers that do care should filter non-release\n- * versions. */\n- if (versions.get(unreleasedIndex).isRelease()) {\n- break;\n+ final List<Version> unreleased = new ArrayList<>();\n+ unreleased.add(current);\n+ Version prevConsideredVersion = current;\n+\n+ for (int i = versions.size() - 1; i >= 0; i--) {\n+ Version currConsideredVersion = versions.get(i);\n+ if (currConsideredVersion.major == 5) {\n+ unreleased.add(currConsideredVersion);\n+ versions.remove(i);\n+ if (currConsideredVersion.revision != 0) {\n+ /* Currently considering the latest version in the 5.x series,\n+ * which is (a) unreleased and (b) the only such. So we're done. */\n+ break;\n+ }\n+ /* ... else we're on a version of the form 5.n.0, and have not yet\n+ * considered a version of the form 5.n.m (m>0), so this entire branch\n+ * is unreleased, so carry on looking for a branch containing releases.\n+ */\n+ } else if (currConsideredVersion.major != prevConsideredVersion.major\n+ || currConsideredVersion.minor != prevConsideredVersion.minor) {\n+ /* Have moved to the end of a new minor branch, so this is\n+ * an unreleased version. */\n+ unreleased.add(currConsideredVersion);\n+ versions.remove(i);\n }\n- unreleasedIndex--;\n- }\n+ prevConsideredVersion = currConsideredVersion;\n \n- Version unreleased = versions.remove(unreleasedIndex);\n- if (unreleased.revision == 0) {\n- /*\n- * If the last unreleased version is itself a patch release then Gradle enforces that there is yet another unreleased version\n- * before that. However, we have to skip alpha/betas/RCs too (e.g., consider when the version constants are ..., 5.6.3, 5.6.4,\n- * 6.0.0-alpha1, ..., 6.0.0-rc1, 6.0.0-rc2, 6.0.0, 6.1.0 on the 6.x branch. In this case, we will have pruned 6.0.0 and 6.1.0 as\n- * unreleased versions, but we also need to prune 5.6.4. At this point though, unreleasedIndex will be pointing to 6.0.0-rc2, so\n- * we have to skip backwards until we find a non-alpha/beta/RC again. Then we can prune that version as an unreleased version\n- * too.\n- */\n- do {\n- unreleasedIndex--;\n- } while (versions.get(unreleasedIndex).isRelease() == false);\n- Version earlierUnreleased = versions.remove(unreleasedIndex);\n- return new Tuple<>(unmodifiableList(versions), unmodifiableList(Arrays.asList(earlierUnreleased, unreleased, current)));\n }\n- return new Tuple<>(unmodifiableList(versions), unmodifiableList(Arrays.asList(unreleased, current)));\n+\n+ Collections.reverse(unreleased);\n+ return new Tuple<>(unmodifiableList(versions), unmodifiableList(unreleased));\n }\n \n private static final List<Version> RELEASED_VERSIONS;",
"filename": "test/framework/src/main/java/org/elasticsearch/test/VersionUtils.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public void testRandomVersionBetween() {\n assertEquals(unreleased, VersionUtils.randomVersionBetween(random(), unreleased, unreleased));\n }\n \n- static class TestReleaseBranch {\n+ public static class TestReleaseBranch {\n public static final Version V_5_3_0 = Version.fromString(\"5.3.0\");\n public static final Version V_5_3_1 = Version.fromString(\"5.3.1\");\n public static final Version V_5_3_2 = Version.fromString(\"5.3.2\");\n@@ -112,7 +112,7 @@ public void testResolveReleasedVersionsForReleaseBranch() {\n assertEquals(singletonList(TestReleaseBranch.V_5_4_1), unreleased);\n }\n \n- static class TestStableBranch {\n+ public static class TestStableBranch {\n public static final Version V_5_3_0 = Version.fromString(\"5.3.0\");\n public static final Version V_5_3_1 = Version.fromString(\"5.3.1\");\n public static final Version V_5_3_2 = Version.fromString(\"5.3.2\");\n@@ -128,15 +128,15 @@ public void testResolveReleasedVersionsForUnreleasedStableBranch() {\n assertEquals(Arrays.asList(TestStableBranch.V_5_3_2, TestStableBranch.V_5_4_0), unreleased);\n }\n \n- static class TestStableBranchBehindStableBranch {\n+ public static class TestStableBranchBehindStableBranch {\n public static final Version V_5_3_0 = Version.fromString(\"5.3.0\");\n public static final Version V_5_3_1 = Version.fromString(\"5.3.1\");\n public static final Version V_5_3_2 = Version.fromString(\"5.3.2\");\n public static final Version V_5_4_0 = Version.fromString(\"5.4.0\");\n public static final Version V_5_5_0 = Version.fromString(\"5.5.0\");\n public static final Version CURRENT = V_5_5_0;\n }\n- public void testResolveReleasedVersionsForStableBtranchBehindStableBranch() {\n+ public void testResolveReleasedVersionsForStableBranchBehindStableBranch() {\n Tuple<List<Version>, List<Version>> t = VersionUtils.resolveReleasedVersions(TestStableBranchBehindStableBranch.CURRENT,\n TestStableBranchBehindStableBranch.class);\n List<Version> released = t.v1();\n@@ -146,7 +146,7 @@ public void testResolveReleasedVersionsForStableBtranchBehindStableBranch() {\n TestStableBranchBehindStableBranch.V_5_5_0), unreleased);\n }\n \n- static class TestUnstableBranch {\n+ public static class TestUnstableBranch {\n public static final Version V_5_3_0 = Version.fromString(\"5.3.0\");\n public static final Version V_5_3_1 = Version.fromString(\"5.3.1\");\n public static final Version V_5_3_2 = Version.fromString(\"5.3.2\");\n@@ -167,6 +167,87 @@ public void testResolveReleasedVersionsForUnstableBranch() {\n assertEquals(Arrays.asList(TestUnstableBranch.V_5_3_2, TestUnstableBranch.V_5_4_0, TestUnstableBranch.V_6_0_0_beta1), unreleased);\n }\n \n+ public static class TestNewMajorRelease {\n+ public static final Version V_5_6_0 = Version.fromString(\"5.6.0\");\n+ public static final Version V_5_6_1 = Version.fromString(\"5.6.1\");\n+ public static final Version V_5_6_2 = Version.fromString(\"5.6.2\");\n+ public static final Version V_6_0_0_alpha1 = Version.fromString(\"6.0.0-alpha1\");\n+ public static final Version V_6_0_0_alpha2 = Version.fromString(\"6.0.0-alpha2\");\n+ public static final Version V_6_0_0_beta1 = Version.fromString(\"6.0.0-beta1\");\n+ public static final Version V_6_0_0_beta2 = Version.fromString(\"6.0.0-beta2\");\n+ public static final Version V_6_0_0 = Version.fromString(\"6.0.0\");\n+ public static final Version V_6_0_1 = Version.fromString(\"6.0.1\");\n+ public static final Version CURRENT = V_6_0_1;\n+ }\n+\n+ public void testResolveReleasedVersionsAtNewMajorRelease() {\n+ 
Tuple<List<Version>, List<Version>> t = VersionUtils.resolveReleasedVersions(TestNewMajorRelease.CURRENT,\n+ TestNewMajorRelease.class);\n+ List<Version> released = t.v1();\n+ List<Version> unreleased = t.v2();\n+ assertEquals(Arrays.asList(TestNewMajorRelease.V_5_6_0, TestNewMajorRelease.V_5_6_1,\n+ TestNewMajorRelease.V_6_0_0_alpha1, TestNewMajorRelease.V_6_0_0_alpha2,\n+ TestNewMajorRelease.V_6_0_0_beta1, TestNewMajorRelease.V_6_0_0_beta2,\n+ TestNewMajorRelease.V_6_0_0), released);\n+ assertEquals(Arrays.asList(TestNewMajorRelease.V_5_6_2, TestNewMajorRelease.V_6_0_1), unreleased);\n+ }\n+\n+ public static class TestVersionBumpIn6x {\n+ public static final Version V_5_6_0 = Version.fromString(\"5.6.0\");\n+ public static final Version V_5_6_1 = Version.fromString(\"5.6.1\");\n+ public static final Version V_5_6_2 = Version.fromString(\"5.6.2\");\n+ public static final Version V_6_0_0_alpha1 = Version.fromString(\"6.0.0-alpha1\");\n+ public static final Version V_6_0_0_alpha2 = Version.fromString(\"6.0.0-alpha2\");\n+ public static final Version V_6_0_0_beta1 = Version.fromString(\"6.0.0-beta1\");\n+ public static final Version V_6_0_0_beta2 = Version.fromString(\"6.0.0-beta2\");\n+ public static final Version V_6_0_0 = Version.fromString(\"6.0.0\");\n+ public static final Version V_6_0_1 = Version.fromString(\"6.0.1\");\n+ public static final Version V_6_1_0 = Version.fromString(\"6.1.0\");\n+ public static final Version CURRENT = V_6_1_0;\n+ }\n+\n+ public void testResolveReleasedVersionsAtVersionBumpIn6x() {\n+ Tuple<List<Version>, List<Version>> t = VersionUtils.resolveReleasedVersions(TestVersionBumpIn6x.CURRENT,\n+ TestVersionBumpIn6x.class);\n+ List<Version> released = t.v1();\n+ List<Version> unreleased = t.v2();\n+ assertEquals(Arrays.asList(TestVersionBumpIn6x.V_5_6_0, TestVersionBumpIn6x.V_5_6_1,\n+ TestVersionBumpIn6x.V_6_0_0_alpha1, TestVersionBumpIn6x.V_6_0_0_alpha2,\n+ TestVersionBumpIn6x.V_6_0_0_beta1, TestVersionBumpIn6x.V_6_0_0_beta2,\n+ TestVersionBumpIn6x.V_6_0_0), released);\n+ assertEquals(Arrays.asList(TestVersionBumpIn6x.V_5_6_2, TestVersionBumpIn6x.V_6_0_1, TestVersionBumpIn6x.V_6_1_0), unreleased);\n+ }\n+\n+ public static class TestNewMinorBranchIn6x {\n+ public static final Version V_5_6_0 = Version.fromString(\"5.6.0\");\n+ public static final Version V_5_6_1 = Version.fromString(\"5.6.1\");\n+ public static final Version V_5_6_2 = Version.fromString(\"5.6.2\");\n+ public static final Version V_6_0_0_alpha1 = Version.fromString(\"6.0.0-alpha1\");\n+ public static final Version V_6_0_0_alpha2 = Version.fromString(\"6.0.0-alpha2\");\n+ public static final Version V_6_0_0_beta1 = Version.fromString(\"6.0.0-beta1\");\n+ public static final Version V_6_0_0_beta2 = Version.fromString(\"6.0.0-beta2\");\n+ public static final Version V_6_0_0 = Version.fromString(\"6.0.0\");\n+ public static final Version V_6_0_1 = Version.fromString(\"6.0.1\");\n+ public static final Version V_6_1_0 = Version.fromString(\"6.1.0\");\n+ public static final Version V_6_1_1 = Version.fromString(\"6.1.1\");\n+ public static final Version V_6_1_2 = Version.fromString(\"6.1.2\");\n+ public static final Version V_6_2_0 = Version.fromString(\"6.2.0\");\n+ public static final Version CURRENT = V_6_2_0;\n+ }\n+\n+ public void testResolveReleasedVersionsAtNewMinorBranchIn6x() {\n+ Tuple<List<Version>, List<Version>> t = VersionUtils.resolveReleasedVersions(TestNewMinorBranchIn6x.CURRENT,\n+ TestNewMinorBranchIn6x.class);\n+ List<Version> released = t.v1();\n+ List<Version> unreleased = 
t.v2();\n+ assertEquals(Arrays.asList(TestNewMinorBranchIn6x.V_5_6_0, TestNewMinorBranchIn6x.V_5_6_1,\n+ TestNewMinorBranchIn6x.V_6_0_0_alpha1, TestNewMinorBranchIn6x.V_6_0_0_alpha2,\n+ TestNewMinorBranchIn6x.V_6_0_0_beta1, TestNewMinorBranchIn6x.V_6_0_0_beta2,\n+ TestNewMinorBranchIn6x.V_6_0_0, TestNewMinorBranchIn6x.V_6_1_0, TestNewMinorBranchIn6x.V_6_1_1), released);\n+ assertEquals(Arrays.asList(TestNewMinorBranchIn6x.V_5_6_2, TestNewMinorBranchIn6x.V_6_0_1,\n+ TestNewMinorBranchIn6x.V_6_1_2, TestNewMinorBranchIn6x.V_6_2_0), unreleased);\n+ }\n+\n /**\n * Tests that {@link Version#minimumCompatibilityVersion()} and {@link VersionUtils#allReleasedVersions()}\n * agree with the list of wire and index compatible versions we build in gradle.\n@@ -196,15 +277,7 @@ public void testGradleVersionsMatchVersionUtils() {\n \n // Now the wire compatible versions\n VersionsFromProperty wireCompatible = new VersionsFromProperty(\"tests.gradle_wire_compat_versions\");\n-\n- // Big horrible hack:\n- // This *should* be:\n- // Version minimumCompatibleVersion = Version.CURRENT.minimumCompatibilityVersion();\n- // But instead it is:\n- Version minimumCompatibleVersion = Version.V_5_6_0;\n- // Because things blow up all over the place if the minimum compatible version isn't released.\n- // We'll fix this very, very soon. But for now, this hack.\n- // end big horrible hack\n+ Version minimumCompatibleVersion = Version.CURRENT.minimumCompatibilityVersion();\n List<String> releasedWireCompatible = released.stream()\n .filter(v -> v.onOrAfter(minimumCompatibleVersion))\n .map(Object::toString)",
"filename": "test/framework/src/test/java/org/elasticsearch/test/VersionUtilsTests.java",
"status": "modified"
}
]
} |
{
"body": "Nested docs share the same `_id` which is enough unless `_routing` is involved and / or the index is partitioned. We need to find a good way to respect nested docs when we select the docs that should be deleted. We can either do this at runtime (when we split) or we also index / store the `_routing` value from the parent. \r\n\r\n@jpountz WDYT",
"comments": [
{
"body": "I think if the mapping has nested fields then we should just wrap the `ShardSplittingQuery` in a `ToChildBlockJoinQuery`? This should include the document that do not belong in the current shard with their nested documents.",
"created_at": "2017-11-14T13:05:05Z"
},
{
"body": "@mvg this sounds like a good plan, I wonder if we then need to skip all child docs when we select documents for deletion? the query we pass to the `ToChildBlockJoinQuery ` should only match parents, right? ",
"created_at": "2017-11-14T14:06:27Z"
},
{
"body": "> I wonder if we then need to skip all child docs when we select documents for deletion\r\n\r\nRight, because then there is one way of selecting nested documents.\r\n\r\n> the query we pass to the ToChildBlockJoinQuery should only match parents, right?\r\n\r\nYes. Then the `ShardSplittingQuery` query should then in the default case (no routing AND no partitioning) not select nested documents by itself.\r\n\r\nMaybe we can create a bool query with two should clause:\r\n* `ShardSplittingQuery` which selects all regular Lucene documents that do not belong on the current shard.\r\n* `ToChildBlockJoinQuery` with `ShardSplittingQuery` as `parentQuery` and a bitset based on `Queries.newNonNestedFilter()` query as `parentsFilter`. This will select all nested documents of regular documents that do not belong on the current shard.",
"created_at": "2017-11-14T15:09:13Z"
},
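A minimal, hypothetical sketch of the bool-query composition proposed in the comment above, assuming Lucene's join module (`ToChildBlockJoinQuery`, `QueryBitSetProducer`); `shardSplittingQuery` and `nonNestedFilter` are stand-ins for the real `ShardSplittingQuery` and `Queries.newNonNestedFilter()`:

```
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.join.BitSetProducer;
import org.apache.lucene.search.join.QueryBitSetProducer;
import org.apache.lucene.search.join.ToChildBlockJoinQuery;

final class NestedAwareSplitQuerySketch {
    /**
     * @param shardSplittingQuery matches root documents that do NOT belong on the current shard
     * @param nonNestedFilter     matches all root (non-nested) documents, e.g. Queries.newNonNestedFilter()
     */
    static Query selectDocsToDelete(Query shardSplittingQuery, Query nonNestedFilter) {
        // per-segment bitset identifying the root documents of each block
        BitSetProducer parentsFilter = new QueryBitSetProducer(nonNestedFilter);
        return new BooleanQuery.Builder()
            // root documents that have to be deleted from this shard
            .add(shardSplittingQuery, BooleanClause.Occur.SHOULD)
            // plus the nested documents belonging to those roots
            .add(new ToChildBlockJoinQuery(shardSplittingQuery, parentsFilter), BooleanClause.Occur.SHOULD)
            .build();
    }
}
```

The merged PR took a different route, marking the child docs directly inside `ShardSplittingQuery` (see `markChildDocs` in the diff below) rather than wrapping the query like this, but the selection semantics are the same.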
{
"body": "The `ShardSplittingQuery` can be an expensive query, so alternatively we can introduce a new query. This query would wrap the `ShardSplittingQuery` query and emit its docids and use `BitSet` (based `Queries.newNonNestedFilter()`) to also emit all nested docids belong to each normal document. This way only a single `ShardSplittingQuery` instance needs to be used.",
"created_at": "2017-11-14T15:27:06Z"
}
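The approach in the comment above is essentially what the merged change does in `markChildDocs` (see the `ShardSplittingQuery` diff further down). A minimal standalone sketch with illustrative names, assuming Lucene's `BitSet`/`FixedBitSet` utilities:

```
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitSet;
import org.apache.lucene.util.FixedBitSet;

final class MarkNestedChildrenSketch {
    /**
     * @param rootDocs     bitset of all root (non-nested) documents in the segment
     * @param matchingDocs bitset of root documents selected for deletion; their child docs are added in place
     */
    static void markChildDocs(BitSet rootDocs, FixedBitSet matchingDocs) {
        int doc = 0;
        while (doc < matchingDocs.length()
                && (doc = matchingDocs.nextSetBit(doc)) != DocIdSetIterator.NO_MORE_DOCS) {
            // nested docs are indexed in the same block, immediately before their root doc,
            // so everything between the previous root and this matching root is one of its children
            int previousRoot = rootDocs.prevSetBit(Math.max(0, doc - 1));
            for (int child = previousRoot + 1; child < doc; child++) {
                matchingDocs.set(child);
            }
            doc++;
        }
    }
}
```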
],
"number": 27378,
"title": "shard splitting doesn't always respect nested docs "
} | {
"body": "Today if nested docs are used in an index that is split the operation\r\nwill only work correctly if the index is not routing partitioned or\r\nunless routing is used. This change fixes the query that selectes the docs\r\nto delete to also select all parents nested docs as well.\r\n\r\nCloses #27378",
"number": 27398,
"review_comments": [
{
"body": "Why deletedDocs ? It's the opposite, no ? The bitset of the matching parent+children, allDocs ?",
"created_at": "2017-11-16T08:02:17Z"
},
{
"body": "We warm this query per segment in `BitsetFilterCache#BitSetProducerWarmer` so why not using the per-segment cache directly ?",
"created_at": "2017-11-16T08:06:43Z"
},
{
"body": "s/`true || randomBoolean();`/`randomBoolean();`",
"created_at": "2017-11-16T08:08:25Z"
},
{
"body": "same here",
"created_at": "2017-11-16T08:08:46Z"
},
{
"body": "So just to double check, the `includeDoc` is to ensure that only root docs get selected and later we select the nested docs of the selected root docs in `markChildDocs(...)`, right?",
"created_at": "2017-11-16T08:13:46Z"
},
{
"body": ":+1: ",
"created_at": "2017-11-16T08:17:58Z"
},
{
"body": "I was trying to understand why this works, because of the forward iteration here (with nested, we usually seek backwards (`BitSet.prevSetBit(...)`)). So this works because all live doc ids (root docs and nested docs) are evaluated in order.",
"created_at": "2017-11-16T08:28:42Z"
},
{
"body": "we use this only as a delete by query which is executed on a `recovery-private` index writer. There is no point in cacheing it and it won't have a cache hit either.",
"created_at": "2017-11-16T08:42:13Z"
},
{
"body": "naming issue, I will fix. In theory it will hold all docs that need to be deleted by the IndexWriter.",
"created_at": "2017-11-16T08:42:55Z"
},
{
"body": "oh yeah 🗡 ",
"created_at": "2017-11-16T08:44:08Z"
},
{
"body": "correct",
"created_at": "2017-11-16T08:45:01Z"
},
{
"body": "correct, I will leave a comment",
"created_at": "2017-11-16T08:46:57Z"
},
{
"body": "I left a comment",
"created_at": "2017-11-16T08:48:08Z"
},
{
"body": "Ok I missed the recovery-private thing. Thanks",
"created_at": "2017-11-16T09:02:14Z"
}
],
"title": "Fix `ShardSplittingQuery` to respect nested documents."
} | {
"commits": [
{
"message": "Fix `ShardSplittingQuery` to respect nested documents.\n\nToday if nested docs are used in an index that is split the operation\nwill only work correctly if the index is not routing partitioned or\nunless routing is used. This change fixes the query that selectes the docs\nto delete to also select all parents nested docs as well.\n\nCloses #27378"
},
{
"message": "Merge branch 'master' into fix_split_on_nested"
},
{
"message": "apply review comments and simplify code"
},
{
"message": "add more comments"
}
],
"files": [
{
"diff": "@@ -19,9 +19,11 @@\n package org.elasticsearch.index.shard;\n \n import org.apache.lucene.index.FieldInfo;\n+import org.apache.lucene.index.IndexReaderContext;\n import org.apache.lucene.index.LeafReader;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.PostingsEnum;\n+import org.apache.lucene.index.ReaderUtil;\n import org.apache.lucene.index.StoredFieldVisitor;\n import org.apache.lucene.index.Terms;\n import org.apache.lucene.index.TermsEnum;\n@@ -33,19 +35,23 @@\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.TwoPhaseIterator;\n import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.join.BitSetProducer;\n+import org.apache.lucene.util.BitSet;\n import org.apache.lucene.util.BitSetIterator;\n-import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.OperationRouting;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.mapper.IdFieldMapper;\n import org.elasticsearch.index.mapper.RoutingFieldMapper;\n import org.elasticsearch.index.mapper.Uid;\n \n import java.io.IOException;\n+import java.util.function.Function;\n import java.util.function.IntConsumer;\n+import java.util.function.IntPredicate;\n import java.util.function.Predicate;\n \n /**\n@@ -56,16 +62,17 @@\n final class ShardSplittingQuery extends Query {\n private final IndexMetaData indexMetaData;\n private final int shardId;\n+ private final BitSetProducer nestedParentBitSetProducer;\n \n- ShardSplittingQuery(IndexMetaData indexMetaData, int shardId) {\n+ ShardSplittingQuery(IndexMetaData indexMetaData, int shardId, boolean hasNested) {\n if (indexMetaData.getCreationVersion().before(Version.V_6_0_0_rc2)) {\n throw new IllegalArgumentException(\"Splitting query can only be executed on an index created with version \"\n + Version.V_6_0_0_rc2 + \" or higher\");\n }\n this.indexMetaData = indexMetaData;\n this.shardId = shardId;\n+ this.nestedParentBitSetProducer = hasNested ? newParentDocBitSetProducer() : null;\n }\n-\n @Override\n public Weight createWeight(IndexSearcher searcher, boolean needsScores, float boost) {\n return new ConstantScoreWeight(this, boost) {\n@@ -84,44 +91,87 @@ public Scorer scorer(LeafReaderContext context) throws IOException {\n Uid.decodeId(ref.bytes, ref.offset, ref.length), null);\n return shardId == targetShardId;\n };\n- if (terms == null) { // this is the common case - no partitioning and no _routing values\n+ if (terms == null) {\n+ // this is the common case - no partitioning and no _routing values\n+ // in this case we also don't do anything special with regards to nested docs since we basically delete\n+ // by ID and parent and nested all have the same id.\n assert indexMetaData.isRoutingPartitionedIndex() == false;\n findSplitDocs(IdFieldMapper.NAME, includeInShard, leafReader, bitSet::set);\n } else {\n+ final BitSet parentBitSet;\n+ if (nestedParentBitSetProducer == null) {\n+ parentBitSet = null;\n+ } else {\n+ parentBitSet = nestedParentBitSetProducer.getBitSet(context);\n+ if (parentBitSet == null) {\n+ return null; // no matches\n+ }\n+ }\n if (indexMetaData.isRoutingPartitionedIndex()) {\n // this is the heaviest invariant. 
Here we have to visit all docs stored fields do extract _id and _routing\n // this this index is routing partitioned.\n- Visitor visitor = new Visitor();\n- return new ConstantScoreScorer(this, score(),\n- new RoutingPartitionedDocIdSetIterator(leafReader, visitor));\n+ Visitor visitor = new Visitor(leafReader);\n+ TwoPhaseIterator twoPhaseIterator =\n+ parentBitSet == null ? new RoutingPartitionedDocIdSetIterator(visitor) :\n+ new NestedRoutingPartitionedDocIdSetIterator(visitor, parentBitSet);\n+ return new ConstantScoreScorer(this, score(), twoPhaseIterator);\n } else {\n+ // here we potentially guard the docID consumers with our parent bitset if we have one.\n+ // this ensures that we are only marking root documents in the nested case and if necessary\n+ // we do a second pass to mark the corresponding children in markChildDocs\n+ Function<IntConsumer, IntConsumer> maybeWrapConsumer = consumer -> {\n+ if (parentBitSet != null) {\n+ return docId -> {\n+ if (parentBitSet.get(docId)) {\n+ consumer.accept(docId);\n+ }\n+ };\n+ }\n+ return consumer;\n+ };\n // in the _routing case we first go and find all docs that have a routing value and mark the ones we have to delete\n findSplitDocs(RoutingFieldMapper.NAME, ref -> {\n int targetShardId = OperationRouting.generateShardId(indexMetaData, null, ref.utf8ToString());\n return shardId == targetShardId;\n- }, leafReader, bitSet::set);\n+ }, leafReader, maybeWrapConsumer.apply(bitSet::set));\n+\n // now if we have a mixed index where some docs have a _routing value and some don't we have to exclude the ones\n // with a routing value from the next iteration an delete / select based on the ID.\n if (terms.getDocCount() != leafReader.maxDoc()) {\n // this is a special case where some of the docs have no routing values this sucks but it's possible today\n FixedBitSet hasRoutingValue = new FixedBitSet(leafReader.maxDoc());\n- findSplitDocs(RoutingFieldMapper.NAME, ref -> false, leafReader,\n- hasRoutingValue::set);\n+ findSplitDocs(RoutingFieldMapper.NAME, ref -> false, leafReader, maybeWrapConsumer.apply(hasRoutingValue::set));\n+ IntConsumer bitSetConsumer = maybeWrapConsumer.apply(bitSet::set);\n findSplitDocs(IdFieldMapper.NAME, includeInShard, leafReader, docId -> {\n if (hasRoutingValue.get(docId) == false) {\n- bitSet.set(docId);\n+ bitSetConsumer.accept(docId);\n }\n });\n }\n }\n+ if (parentBitSet != null) {\n+ // if nested docs are involved we also need to mark all child docs that belong to a matching parent doc.\n+ markChildDocs(parentBitSet, bitSet);\n+ }\n }\n+\n return new ConstantScoreScorer(this, score(), new BitSetIterator(bitSet, bitSet.length()));\n }\n-\n-\n };\n }\n \n+ private void markChildDocs(BitSet parentDocs, BitSet matchingDocs) {\n+ int currentDeleted = 0;\n+ while (currentDeleted < matchingDocs.length() &&\n+ (currentDeleted = matchingDocs.nextSetBit(currentDeleted)) != DocIdSetIterator.NO_MORE_DOCS) {\n+ int previousParent = parentDocs.prevSetBit(Math.max(0, currentDeleted-1));\n+ for (int i = previousParent + 1; i < currentDeleted; i++) {\n+ matchingDocs.set(i);\n+ }\n+ currentDeleted++;\n+ }\n+ }\n+\n @Override\n public String toString(String field) {\n return \"shard_splitting_query\";\n@@ -145,8 +195,8 @@ public int hashCode() {\n return classHash() ^ result;\n }\n \n- private static void findSplitDocs(String idField, Predicate<BytesRef> includeInShard,\n- LeafReader leafReader, IntConsumer consumer) throws IOException {\n+ private static void findSplitDocs(String idField, Predicate<BytesRef> includeInShard, 
LeafReader leafReader,\n+ IntConsumer consumer) throws IOException {\n Terms terms = leafReader.terms(idField);\n TermsEnum iterator = terms.iterator();\n BytesRef idTerm;\n@@ -162,15 +212,17 @@ private static void findSplitDocs(String idField, Predicate<BytesRef> includeInS\n }\n }\n \n- private static final class Visitor extends StoredFieldVisitor {\n- int leftToVisit = 2;\n- final BytesRef spare = new BytesRef();\n- String routing;\n- String id;\n+ /* this class is a stored fields visitor that reads _id and/or _routing from the stored fields which is necessary in the case\n+ of a routing partitioned index sine otherwise we would need to un-invert the _id and _routing field which is memory heavy */\n+ private final class Visitor extends StoredFieldVisitor {\n+ final LeafReader leafReader;\n+ private int leftToVisit = 2;\n+ private final BytesRef spare = new BytesRef();\n+ private String routing;\n+ private String id;\n \n- void reset() {\n- routing = id = null;\n- leftToVisit = 2;\n+ Visitor(LeafReader leafReader) {\n+ this.leafReader = leafReader;\n }\n \n @Override\n@@ -210,36 +262,91 @@ public Status needsField(FieldInfo fieldInfo) throws IOException {\n return leftToVisit == 0 ? Status.STOP : Status.NO;\n }\n }\n+\n+ boolean matches(int doc) throws IOException {\n+ routing = id = null;\n+ leftToVisit = 2;\n+ leafReader.document(doc, this);\n+ assert id != null : \"docID must not be null - we might have hit a nested document\";\n+ int targetShardId = OperationRouting.generateShardId(indexMetaData, id, routing);\n+ return targetShardId != shardId;\n+ }\n }\n \n /**\n * This two phase iterator visits every live doc and selects all docs that don't belong into this\n * shard based on their id and routing value. This is only used in a routing partitioned index.\n */\n- private final class RoutingPartitionedDocIdSetIterator extends TwoPhaseIterator {\n- private final LeafReader leafReader;\n+ private static final class RoutingPartitionedDocIdSetIterator extends TwoPhaseIterator {\n private final Visitor visitor;\n \n- RoutingPartitionedDocIdSetIterator(LeafReader leafReader, Visitor visitor) {\n- super(DocIdSetIterator.all(leafReader.maxDoc())); // we iterate all live-docs\n- this.leafReader = leafReader;\n+ RoutingPartitionedDocIdSetIterator(Visitor visitor) {\n+ super(DocIdSetIterator.all(visitor.leafReader.maxDoc())); // we iterate all live-docs\n this.visitor = visitor;\n }\n \n @Override\n public boolean matches() throws IOException {\n+ return visitor.matches(approximation.docID());\n+ }\n+\n+ @Override\n+ public float matchCost() {\n+ return 42; // that's obvious, right?\n+ }\n+ }\n+\n+ /**\n+ * This TwoPhaseIterator marks all nested docs of matching parents as matches as well.\n+ */\n+ private static final class NestedRoutingPartitionedDocIdSetIterator extends TwoPhaseIterator {\n+ private final Visitor visitor;\n+ private final BitSet parentDocs;\n+ private int nextParent = -1;\n+ private boolean nextParentMatches;\n+\n+ NestedRoutingPartitionedDocIdSetIterator(Visitor visitor, BitSet parentDocs) {\n+ super(DocIdSetIterator.all(visitor.leafReader.maxDoc())); // we iterate all live-docs\n+ this.parentDocs = parentDocs;\n+ this.visitor = visitor;\n+ }\n+\n+ @Override\n+ public boolean matches() throws IOException {\n+ // the educated reader might ask why this works, it does because all live doc ids (root docs and nested docs) are evaluated in\n+ // order and that way we don't need to seek backwards as we do in other nested docs cases.\n int doc = approximation.docID();\n- 
visitor.reset();\n- leafReader.document(doc, visitor);\n- int targetShardId = OperationRouting.generateShardId(indexMetaData, visitor.id, visitor.routing);\n- return targetShardId != shardId;\n+ if (doc > nextParent) {\n+ // we only check once per nested/parent set\n+ nextParent = parentDocs.nextSetBit(doc);\n+ // never check a child document against the visitor, they neihter have _id nor _routing as stored fields\n+ nextParentMatches = visitor.matches(nextParent);\n+ }\n+ return nextParentMatches;\n }\n \n @Override\n public float matchCost() {\n return 42; // that's obvious, right?\n }\n }\n+\n+ /*\n+ * this is used internally to obtain a bitset for parent documents. We don't cache this since we never access the same reader more\n+ * than once. There is no point in using BitsetFilterCache#BitSetProducerWarmer since we use this only as a delete by query which is\n+ * executed on a recovery-private index writer. There is no point in caching it and it won't have a cache hit either.\n+ */\n+ private static BitSetProducer newParentDocBitSetProducer() {\n+ return context -> {\n+ Query query = Queries.newNonNestedFilter();\n+ final IndexReaderContext topLevelContext = ReaderUtil.getTopLevelContext(context);\n+ final IndexSearcher searcher = new IndexSearcher(topLevelContext);\n+ searcher.setQueryCache(null);\n+ final Weight weight = searcher.createNormalizedWeight(query, false);\n+ Scorer s = weight.scorer(context);\n+ return s == null ? null : BitSet.of(s.iterator(), context.reader().maxDoc());\n+ };\n+ }\n }\n \n ",
"filename": "core/src/main/java/org/elasticsearch/index/shard/ShardSplittingQuery.java",
"status": "modified"
},
{
"diff": "@@ -115,6 +115,7 @@ boolean recoverFromLocalShards(BiConsumer<String, MappingMetaData> mappingUpdate\n indexShard.mapperService().merge(sourceMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true);\n // now that the mapping is merged we can validate the index sort configuration.\n Sort indexSort = indexShard.getIndexSort();\n+ final boolean hasNested = indexShard.mapperService().hasNested();\n final boolean isSplit = sourceMetaData.getNumberOfShards() < indexShard.indexSettings().getNumberOfShards();\n assert isSplit == false || sourceMetaData.getCreationVersion().onOrAfter(Version.V_6_0_0_alpha1) : \"for split we require a \" +\n \"single type but the index is created before 6.0.0\";\n@@ -127,7 +128,7 @@ boolean recoverFromLocalShards(BiConsumer<String, MappingMetaData> mappingUpdate\n final long maxUnsafeAutoIdTimestamp =\n shards.stream().mapToLong(LocalShardSnapshot::maxUnsafeAutoIdTimestamp).max().getAsLong();\n addIndices(indexShard.recoveryState().getIndex(), directory, indexSort, sources, maxSeqNo, maxUnsafeAutoIdTimestamp,\n- indexShard.indexSettings().getIndexMetaData(), indexShard.shardId().id(), isSplit);\n+ indexShard.indexSettings().getIndexMetaData(), indexShard.shardId().id(), isSplit, hasNested);\n internalRecoverFromStore(indexShard);\n // just trigger a merge to do housekeeping on the\n // copied segments - we will also see them in stats etc.\n@@ -142,8 +143,8 @@ boolean recoverFromLocalShards(BiConsumer<String, MappingMetaData> mappingUpdate\n }\n \n void addIndices(final RecoveryState.Index indexRecoveryStats, final Directory target, final Sort indexSort, final Directory[] sources,\n- final long maxSeqNo, final long maxUnsafeAutoIdTimestamp, IndexMetaData indexMetaData, int shardId, boolean split)\n- throws IOException {\n+ final long maxSeqNo, final long maxUnsafeAutoIdTimestamp, IndexMetaData indexMetaData, int shardId, boolean split,\n+ boolean hasNested) throws IOException {\n final Directory hardLinkOrCopyTarget = new org.apache.lucene.store.HardlinkCopyDirectoryWrapper(target);\n IndexWriterConfig iwc = new IndexWriterConfig(null)\n .setCommitOnClose(false)\n@@ -158,9 +159,8 @@ void addIndices(final RecoveryState.Index indexRecoveryStats, final Directory ta\n \n try (IndexWriter writer = new IndexWriter(new StatsDirectoryWrapper(hardLinkOrCopyTarget, indexRecoveryStats), iwc)) {\n writer.addIndexes(sources);\n-\n if (split) {\n- writer.deleteDocuments(new ShardSplittingQuery(indexMetaData, shardId));\n+ writer.deleteDocuments(new ShardSplittingQuery(indexMetaData, shardId, hasNested));\n }\n /*\n * We set the maximum sequence number and the local checkpoint on the target to the maximum of the maximum sequence numbers on",
"filename": "core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.SortField;\n import org.apache.lucene.search.SortedSetSelector;\n import org.apache.lucene.search.SortedSetSortField;\n+import org.apache.lucene.search.join.ScoreMode;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n@@ -57,15 +58,23 @@\n import org.elasticsearch.test.InternalSettingsPlugin;\n import org.elasticsearch.test.VersionUtils;\n \n+import java.io.IOException;\n+import java.io.UncheckedIOException;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.HashSet;\n import java.util.List;\n import java.util.Set;\n+import java.util.function.BiFunction;\n+import java.util.function.IntFunction;\n import java.util.stream.IntStream;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -78,13 +87,14 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n return Arrays.asList(InternalSettingsPlugin.class);\n }\n \n- public void testCreateSplitIndexToN() {\n+ public void testCreateSplitIndexToN() throws IOException {\n int[][] possibleShardSplits = new int[][] {{2,4,8}, {3, 6, 12}, {1, 2, 4}};\n int[] shardSplits = randomFrom(possibleShardSplits);\n assertEquals(shardSplits[0], (shardSplits[0] * shardSplits[1]) / shardSplits[1]);\n assertEquals(shardSplits[1], (shardSplits[1] * shardSplits[2]) / shardSplits[2]);\n internalCluster().ensureAtLeastNumDataNodes(2);\n final boolean useRouting = randomBoolean();\n+ final boolean useNested = randomBoolean();\n final boolean useMixedRouting = useRouting ? 
randomBoolean() : false;\n CreateIndexRequestBuilder createInitialIndex = prepareCreate(\"source\");\n final int routingShards = shardSplits[2] * randomIntBetween(1, 10);\n@@ -93,16 +103,43 @@ public void testCreateSplitIndexToN() {\n .put(\"index.number_of_routing_shards\", routingShards);\n if (useRouting && useMixedRouting == false && randomBoolean()) {\n settings.put(\"index.routing_partition_size\", randomIntBetween(1, routingShards - 1));\n- createInitialIndex.addMapping(\"t1\", \"_routing\", \"required=true\");\n+ if (useNested) {\n+ createInitialIndex.addMapping(\"t1\", \"_routing\", \"required=true\", \"nested1\", \"type=nested\");\n+ } else {\n+ createInitialIndex.addMapping(\"t1\", \"_routing\", \"required=true\");\n+ }\n+ } else if (useNested) {\n+ createInitialIndex.addMapping(\"t1\", \"nested1\", \"type=nested\");\n }\n- logger.info(\"use routing {} use mixed routing {}\", useRouting, useMixedRouting);\n+ logger.info(\"use routing {} use mixed routing {} use nested {}\", useRouting, useMixedRouting, useNested);\n createInitialIndex.setSettings(settings).get();\n \n int numDocs = randomIntBetween(10, 50);\n String[] routingValue = new String[numDocs];\n+\n+ BiFunction<String, Integer, IndexRequestBuilder> indexFunc = (index, id) -> {\n+ try {\n+ return client().prepareIndex(index, \"t1\", Integer.toString(id))\n+ .setSource(jsonBuilder().startObject()\n+ .field(\"foo\", \"bar\")\n+ .field(\"i\", id)\n+ .startArray(\"nested1\")\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_1\")\n+ .field(\"n_field2\", \"n_value2_1\")\n+ .endObject()\n+ .startObject()\n+ .field(\"n_field1\", \"n_value1_2\")\n+ .field(\"n_field2\", \"n_value2_2\")\n+ .endObject()\n+ .endArray()\n+ .endObject());\n+ } catch (IOException e) {\n+ throw new UncheckedIOException(e);\n+ }\n+ };\n for (int i = 0; i < numDocs; i++) {\n- IndexRequestBuilder builder = client().prepareIndex(\"source\", \"t1\", Integer.toString(i))\n- .setSource(\"{\\\"foo\\\" : \\\"bar\\\", \\\"i\\\" : \" + i + \"}\", XContentType.JSON);\n+ IndexRequestBuilder builder = indexFunc.apply(\"source\", i);\n if (useRouting) {\n String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 10);\n if (useMixedRouting && randomBoolean()) {\n@@ -118,8 +155,7 @@ public void testCreateSplitIndexToN() {\n if (randomBoolean()) {\n for (int i = 0; i < numDocs; i++) { // let's introduce some updates / deletes on the index\n if (randomBoolean()) {\n- IndexRequestBuilder builder = client().prepareIndex(\"source\", \"t1\", Integer.toString(i))\n- .setSource(\"{\\\"foo\\\" : \\\"bar\\\", \\\"i\\\" : \" + i + \"}\", XContentType.JSON);\n+ IndexRequestBuilder builder = indexFunc.apply(\"source\", i);\n if (useRouting) {\n builder.setRouting(routingValue[i]);\n }\n@@ -145,8 +181,7 @@ public void testCreateSplitIndexToN() {\n assertHitCount(client().prepareSearch(\"first_split\").setSize(100).setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n \n for (int i = 0; i < numDocs; i++) { // now update\n- IndexRequestBuilder builder = client().prepareIndex(\"first_split\", \"t1\", Integer.toString(i))\n- .setSource(\"{\\\"foo\\\" : \\\"bar\\\", \\\"i\\\" : \" + i + \"}\", XContentType.JSON);\n+ IndexRequestBuilder builder = indexFunc.apply(\"first_split\", i);\n if (useRouting) {\n builder.setRouting(routingValue[i]);\n }\n@@ -180,8 +215,7 @@ public void testCreateSplitIndexToN() {\n assertHitCount(client().prepareSearch(\"second_split\").setSize(100).setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n \n for (int i = 0; i < 
numDocs; i++) { // now update\n- IndexRequestBuilder builder = client().prepareIndex(\"second_split\", \"t1\", Integer.toString(i))\n- .setSource(\"{\\\"foo\\\" : \\\"bar\\\", \\\"i\\\" : \" + i + \"}\", XContentType.JSON);\n+ IndexRequestBuilder builder = indexFunc.apply(\"second_split\", i);\n if (useRouting) {\n builder.setRouting(routingValue[i]);\n }\n@@ -195,14 +229,25 @@ public void testCreateSplitIndexToN() {\n assertHitCount(client().prepareSearch(\"second_split\").setSize(100).setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n assertHitCount(client().prepareSearch(\"first_split\").setSize(100).setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n assertHitCount(client().prepareSearch(\"source\").setSize(100).setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n-\n+ if (useNested) {\n+ assertNested(\"source\", numDocs);\n+ assertNested(\"first_split\", numDocs);\n+ assertNested(\"second_split\", numDocs);\n+ }\n assertAllUniqueDocs(client().prepareSearch(\"second_split\").setSize(100)\n .setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n assertAllUniqueDocs(client().prepareSearch(\"first_split\").setSize(100)\n .setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n assertAllUniqueDocs(client().prepareSearch(\"source\").setSize(100)\n .setQuery(new TermsQueryBuilder(\"foo\", \"bar\")).get(), numDocs);\n+ }\n \n+ public void assertNested(String index, int numDocs) {\n+ // now, do a nested query\n+ SearchResponse searchResponse = client().prepareSearch(index).setQuery(nestedQuery(\"nested1\", termQuery(\"nested1.n_field1\",\n+ \"n_value1_1\"), ScoreMode.Avg)).get();\n+ assertNoFailures(searchResponse);\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo((long)numDocs));\n }\n \n public void assertAllUniqueDocs(SearchResponse response, int numDocs) {",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/SplitIndexIT.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.RandomIndexWriter;\n import org.apache.lucene.index.SortedNumericDocValues;\n@@ -38,10 +39,12 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.IdFieldMapper;\n import org.elasticsearch.index.mapper.RoutingFieldMapper;\n+import org.elasticsearch.index.mapper.TypeFieldMapper;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n \n@@ -58,18 +61,36 @@ public void testSplitOnID() throws IOException {\n .setRoutingNumShards(numShards * 1000000)\n .numberOfReplicas(0).build();\n int targetShardId = randomIntBetween(0, numShards-1);\n+ boolean hasNested = randomBoolean();\n for (int j = 0; j < numDocs; j++) {\n int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), null);\n- writer.addDocument(Arrays.asList(\n- new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n- new SortedNumericDocValuesField(\"shard_id\", shardId)\n- ));\n+ if (hasNested) {\n+ List<Iterable<IndexableField>> docs = new ArrayList<>();\n+ int numNested = randomIntBetween(0, 10);\n+ for (int i = 0; i < numNested; i++) {\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(TypeFieldMapper.NAME, \"__nested\", Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ writer.addDocuments(docs);\n+ } else {\n+ writer.addDocument(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n }\n writer.commit();\n writer.close();\n \n \n- assertSplit(dir, metaData, targetShardId);\n+ assertSplit(dir, metaData, targetShardId, hasNested);\n dir.close();\n }\n \n@@ -83,53 +104,91 @@ public void testSplitOnRouting() throws IOException {\n .numberOfShards(numShards)\n .setRoutingNumShards(numShards * 1000000)\n .numberOfReplicas(0).build();\n+ boolean hasNested = randomBoolean();\n int targetShardId = randomIntBetween(0, numShards-1);\n for (int j = 0; j < numDocs; j++) {\n String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5);\n final int shardId = OperationRouting.generateShardId(metaData, null, routing);\n- writer.addDocument(Arrays.asList(\n- new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n- new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n- new SortedNumericDocValuesField(\"shard_id\", shardId)\n- ));\n+ if (hasNested) {\n+ List<Iterable<IndexableField>> docs = new ArrayList<>();\n+ int numNested = randomIntBetween(0, 10);\n+ for (int i = 0; i < numNested; i++) {\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(TypeFieldMapper.NAME, \"__nested\", Field.Store.YES),\n+ new 
SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ writer.addDocuments(docs);\n+ } else {\n+ writer.addDocument(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n }\n writer.commit();\n writer.close();\n- assertSplit(dir, metaData, targetShardId);\n+ assertSplit(dir, metaData, targetShardId, hasNested);\n dir.close();\n }\n \n public void testSplitOnIdOrRouting() throws IOException {\n Directory dir = newFSDirectory(createTempDir());\n final int numDocs = randomIntBetween(50, 100);\n RandomIndexWriter writer = new RandomIndexWriter(random(), dir);\n- int numShards = randomIntBetween(2, 10);\n+ int numShards = randomIntBetween(2, 10);\n IndexMetaData metaData = IndexMetaData.builder(\"test\")\n .settings(Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))\n .numberOfShards(numShards)\n .setRoutingNumShards(numShards * 1000000)\n .numberOfReplicas(0).build();\n+ boolean hasNested = randomBoolean();\n int targetShardId = randomIntBetween(0, numShards-1);\n for (int j = 0; j < numDocs; j++) {\n+ Iterable<IndexableField> rootDoc;\n+ final int shardId;\n if (randomBoolean()) {\n String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5);\n- final int shardId = OperationRouting.generateShardId(metaData, null, routing);\n- writer.addDocument(Arrays.asList(\n+ shardId = OperationRouting.generateShardId(metaData, null, routing);\n+ rootDoc = Arrays.asList(\n new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n new SortedNumericDocValuesField(\"shard_id\", shardId)\n- ));\n+ );\n } else {\n- int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), null);\n- writer.addDocument(Arrays.asList(\n+ shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), null);\n+ rootDoc = Arrays.asList(\n new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n new SortedNumericDocValuesField(\"shard_id\", shardId)\n- ));\n+ );\n+ }\n+\n+ if (hasNested) {\n+ List<Iterable<IndexableField>> docs = new ArrayList<>();\n+ int numNested = randomIntBetween(0, 10);\n+ for (int i = 0; i < numNested; i++) {\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(TypeFieldMapper.NAME, \"__nested\", Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n+ docs.add(rootDoc);\n+ writer.addDocuments(docs);\n+ } else {\n+ writer.addDocument(rootDoc);\n }\n }\n writer.commit();\n writer.close();\n- assertSplit(dir, metaData, targetShardId);\n+ assertSplit(dir, metaData, targetShardId, hasNested);\n dir.close();\n }\n \n@@ -145,47 +204,94 @@ public void testSplitOnRoutingPartitioned() throws IOException {\n .setRoutingNumShards(numShards * 1000000)\n .routingPartitionSize(randomIntBetween(1, 10))\n .numberOfReplicas(0).build();\n+ boolean hasNested = randomBoolean();\n int targetShardId = randomIntBetween(0, numShards-1);\n for (int j = 0; j < 
numDocs; j++) {\n String routing = randomRealisticUnicodeOfCodepointLengthBetween(1, 5);\n final int shardId = OperationRouting.generateShardId(metaData, Integer.toString(j), routing);\n- writer.addDocument(Arrays.asList(\n- new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n- new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n- new SortedNumericDocValuesField(\"shard_id\", shardId)\n- ));\n+\n+ if (hasNested) {\n+ List<Iterable<IndexableField>> docs = new ArrayList<>();\n+ int numNested = randomIntBetween(0, 10);\n+ for (int i = 0; i < numNested; i++) {\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(TypeFieldMapper.NAME, \"__nested\", Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n+ docs.add(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ writer.addDocuments(docs);\n+ } else {\n+ writer.addDocument(Arrays.asList(\n+ new StringField(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(j)), Field.Store.YES),\n+ new StringField(RoutingFieldMapper.NAME, routing, Field.Store.YES),\n+ new SortedNumericDocValuesField(\"shard_id\", shardId)\n+ ));\n+ }\n }\n writer.commit();\n writer.close();\n- assertSplit(dir, metaData, targetShardId);\n+ assertSplit(dir, metaData, targetShardId, hasNested);\n dir.close();\n }\n \n \n \n \n- void assertSplit(Directory dir, IndexMetaData metaData, int targetShardId) throws IOException {\n+ void assertSplit(Directory dir, IndexMetaData metaData, int targetShardId, boolean hasNested) throws IOException {\n try (IndexReader reader = DirectoryReader.open(dir)) {\n IndexSearcher searcher = new IndexSearcher(reader);\n searcher.setQueryCache(null);\n final boolean needsScores = false;\n- final Weight splitWeight = searcher.createNormalizedWeight(new ShardSplittingQuery(metaData, targetShardId), needsScores);\n+ final Weight splitWeight = searcher.createNormalizedWeight(new ShardSplittingQuery(metaData, targetShardId, hasNested),\n+ needsScores);\n final List<LeafReaderContext> leaves = reader.leaves();\n for (final LeafReaderContext ctx : leaves) {\n Scorer scorer = splitWeight.scorer(ctx);\n DocIdSetIterator iterator = scorer.iterator();\n SortedNumericDocValues shard_id = ctx.reader().getSortedNumericDocValues(\"shard_id\");\n- int doc;\n- while ((doc = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {\n- while (shard_id.nextDoc() < doc) {\n+ int numExpected = 0;\n+ while (shard_id.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ if (targetShardId == shard_id.nextValue()) {\n+ numExpected++;\n+ }\n+ }\n+ if (numExpected == ctx.reader().maxDoc()) {\n+ // all docs belong in this shard\n+ assertEquals(DocIdSetIterator.NO_MORE_DOCS, iterator.nextDoc());\n+ } else {\n+ shard_id = ctx.reader().getSortedNumericDocValues(\"shard_id\");\n+ int doc;\n+ int numActual = 0;\n+ int lastDoc = 0;\n+ while ((doc = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {\n+ lastDoc = doc;\n+ while (shard_id.nextDoc() < doc) {\n+ long shardID = shard_id.nextValue();\n+ assertEquals(shardID, targetShardId);\n+ numActual++;\n+ }\n+ assertEquals(shard_id.docID(), doc);\n long shardID = shard_id.nextValue();\n- assertEquals(shardID, targetShardId);\n+ BytesRef id = 
reader.document(doc).getBinaryValue(\"_id\");\n+ String actualId = Uid.decodeId(id.bytes, id.offset, id.length);\n+ assertNotEquals(ctx.reader() + \" docID: \" + doc + \" actualID: \" + actualId, shardID, targetShardId);\n+ }\n+ if (lastDoc < ctx.reader().maxDoc()) {\n+ // check the last docs in the segment and make sure they all have the right shard id\n+ while (shard_id.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ long shardID = shard_id.nextValue();\n+ assertEquals(shardID, targetShardId);\n+ numActual++;\n+ }\n }\n- assertEquals(shard_id.docID(), doc);\n- long shardID = shard_id.nextValue();\n- BytesRef id = reader.document(doc).getBinaryValue(\"_id\");\n- String actualId = Uid.decodeId(id.bytes, id.offset, id.length);\n- assertNotEquals(ctx.reader() + \" docID: \" + doc + \" actualID: \" + actualId, shardID, targetShardId);\n+\n+ assertEquals(numExpected, numActual);\n }\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/shard/ShardSplittingQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -100,7 +100,7 @@ public void testAddIndices() throws IOException {\n Directory target = newFSDirectory(createTempDir());\n final long maxSeqNo = randomNonNegativeLong();\n final long maxUnsafeAutoIdTimestamp = randomNonNegativeLong();\n- storeRecovery.addIndices(indexStats, target, indexSort, dirs, maxSeqNo, maxUnsafeAutoIdTimestamp, null, 0, false);\n+ storeRecovery.addIndices(indexStats, target, indexSort, dirs, maxSeqNo, maxUnsafeAutoIdTimestamp, null, 0, false, false);\n int numFiles = 0;\n Predicate<String> filesFilter = (f) -> f.startsWith(\"segments\") == false && f.equals(\"write.lock\") == false\n && f.startsWith(\"extra\") == false;\n@@ -174,7 +174,7 @@ public void testSplitShard() throws IOException {\n .setRoutingNumShards(numShards * 1000000)\n .numberOfReplicas(0).build();\n storeRecovery.addIndices(indexStats, target, indexSort, new Directory[] {dir}, maxSeqNo, maxUnsafeAutoIdTimestamp, metaData,\n- targetShardId, true);\n+ targetShardId, true, false);\n \n \n SegmentInfos segmentCommitInfos = SegmentInfos.readLatestCommit(target);",
"filename": "core/src/test/java/org/elasticsearch/index/shard/StoreRecoveryTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 5.3.0\r\n\r\n**Plugins installed**: [repository-s3]\r\n\r\n**JVM version** (`java -version`): 1.8.0_92\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux 2.6\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\nAfter we make a PUT or POST mapping to add additional fields, it removes our _meta Field from the mapping.\r\nIs this normal? Can we prevent this?\r\n\r\nBefore:\r\n\r\n{\r\n \"foo\": {\r\n \"mappings\": {\r\n \"bar\": {\r\n \"_meta\": {\r\n \"foo\": \"bar\"\r\n },\r\n \"properties\": {\r\n...\r\n\r\nAfter:\r\n\r\n{\r\n \"foo\": {\r\n \"mappings\": {\r\n \"bar\": {\r\n \"properties\": {\r\n...\r\n\r\nThis was question was asked in the forum with no answer provided.\r\nhttps://discuss.elastic.co/t/put-mapping-removes-metadata-from-index/91794\r\n\r\n",
"comments": [
{
"body": "@pacer11 thanks or raising this, it looks to me like this is a bug. I managed to reproduce it on the master branch with the following script:\r\n```\r\nPUT test\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"number_of_replicas\": 0\r\n },\r\n \"mappings\": {\r\n \"doc\": {\r\n \"_meta\": {\r\n \"foo\": \"bar\"\r\n },\r\n \"properties\": {\r\n \"string_field\": {\r\n \"type\": \"text\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET test/_mapping\r\n\r\n#Dynamically mapped field maintain _meta object\r\nPUT test/doc/1\r\n{\r\n \"string_field\": \"foo\"\r\n}\r\n\r\nGET test/_mapping\r\n\r\n# Adding a field with the PUT mapping API destroys the _meta object\r\nPUT test/doc/_mapping\r\n{\r\n \"doc\": {\r\n \"properties\": {\r\n \"int_field\": {\r\n \"type\": \"integer\"\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET test/_mapping\r\n```\r\n\r\nUnfortunately I think the only workaround at the moment would be to make sure you include the `_meta` object on every call to the PUT mappings API.\r\n\r\nOut of curiousity, what are you using the `_meta` field in the mappings for? i.e. what information do you store in it and how do you use that information?",
"created_at": "2017-11-09T10:21:31Z"
},
{
"body": "@colings86 Thanks for your confirmation. We are kinda already using the workaround you suggested, however it's one of those annoying things to remember especially when it's not necessarily intuitive.\r\n\r\nWe use the _meta field for a couple of reasons. \r\n1. The date fields that we define in our mapping are all consistently stored as one format but when we retrieve from ES, we represent in various formats so we track the different formats in the _meta field so that as we are transforming the JSON we get from ES we can apply the applicable transformation before sending it further upstream in our application. I am sure there are other ways of achieving this but this seems to work well for us.\r\n2. We have a few nested field mappings defined and we keep track of the nested paths within the _meta field. We have a query generator routine that dynamically generates the ES Queries based upon the fields passed in and we pull nested paths information from the _meta field so that we can determine if a nested path should be incorporated into the query or not.\r\n\r\nHope this helps!",
"created_at": "2017-11-09T11:22:26Z"
}
],
"number": 27323,
"title": "PUT Mapping removes metadata from index"
} | {
"body": "Closes #27323\r\n",
"number": 27352,
"review_comments": [
{
"body": "I think this initialization should stay, for the case meta is not defined in the json?",
"created_at": "2017-11-15T19:32:25Z"
},
{
"body": "I would flip this logic to remove negations. ie `mergeWith.meta == null ? meta : mergeWith.meta`. Positive logic is easier to reason about. It might also be better to just move it out to a local instead of trying to inline it.",
"created_at": "2017-11-15T19:34:43Z"
},
{
"body": "@rjernst I was thinking of using `null` for this case. Or is empty map better?\r\nAnd which value must be used for unset meta? ",
"created_at": "2017-11-16T18:43:55Z"
},
{
"body": "I'm fine with null. I had thought changing this from emtpy map to null would require updating Mapping.toXContent. However, it looks like it currently handles both cases. I would update that to remove the empty case (which means explicitly putting an empty _meta would be serialized, not disappear).",
"created_at": "2017-11-16T19:44:49Z"
},
{
"body": "Done",
"created_at": "2017-11-16T21:09:48Z"
}
],
"title": "Fix merging of _meta field"
} | {
"commits": [
{
"message": "Fix merging of _meta field"
},
{
"message": "Using local variable for merged meta"
},
{
"message": "Remove meta empty case from Mapping::toXContent"
}
],
"files": [
{
"diff": "@@ -56,7 +56,7 @@ public static class Builder {\n \n private final RootObjectMapper rootObjectMapper;\n \n- private Map<String, Object> meta = emptyMap();\n+ private Map<String, Object> meta;\n \n private final Mapper.BuilderContext builderContext;\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -98,7 +98,8 @@ public Mapping merge(Mapping mergeWith, boolean updateAllTypes) {\n }\n mergedMetaDataMappers.put(merged.getClass(), merged);\n }\n- return new Mapping(indexCreated, mergedRoot, mergedMetaDataMappers.values().toArray(new MetadataFieldMapper[0]), mergeWith.meta);\n+ Map<String, Object> mergedMeta = mergeWith.meta == null ? meta : mergeWith.meta;\n+ return new Mapping(indexCreated, mergedRoot, mergedMetaDataMappers.values().toArray(new MetadataFieldMapper[0]), mergedMeta);\n }\n \n /**\n@@ -128,7 +129,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n root.toXContent(builder, params, new ToXContent() {\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- if (meta != null && !meta.isEmpty()) {\n+ if (meta != null) {\n builder.field(\"_meta\", meta);\n }\n for (Mapper mapper : metadataMappers) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/Mapping.java",
"status": "modified"
},
{
"diff": "@@ -289,4 +289,47 @@ public void testMergeAddingParent() throws IOException {\n Exception e = expectThrows(IllegalArgumentException.class, () -> initMapper.merge(updatedMapper.mapping(), false));\n assertThat(e.getMessage(), containsString(\"The _parent field's type option can't be changed: [null]->[parent]\"));\n }\n+\n+ public void testMergeMeta() throws IOException {\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ String initMapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"test\")\n+ .startObject(\"_meta\")\n+ .field(\"foo\").value(\"bar\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .string();\n+ DocumentMapper initMapper = parser.parse(\"test\", new CompressedXContent(initMapping));\n+\n+ assertThat(initMapper.meta().get(\"foo\"), equalTo(\"bar\"));\n+\n+ String updateMapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"test\")\n+ .startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"text\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .string();\n+ DocumentMapper updatedMapper = parser.parse(\"test\", new CompressedXContent(updateMapping));\n+\n+ assertThat(initMapper.merge(updatedMapper.mapping(), true).meta().get(\"foo\"), equalTo(\"bar\"));\n+\n+ updateMapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"test\")\n+ .startObject(\"_meta\")\n+ .field(\"foo\").value(\"new_bar\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .string();\n+ updatedMapper = parser.parse(\"test\", new CompressedXContent(updateMapping));\n+\n+ assertThat(initMapper.merge(updatedMapper.mapping(), true).meta().get(\"foo\"), equalTo(\"new_bar\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentMapperMergeTests.java",
"status": "modified"
}
]
} |
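The `Mapping.merge` change in the row above reduces to one rule, in the positive-logic form suggested in the review (`mergeWith.meta == null ? meta : mergeWith.meta`): a mapping update that omits `_meta` keeps the existing map, while an update that supplies `_meta` replaces it. Below is a minimal sketch of that rule using a hypothetical `mergeMeta` helper, not the actual `Mapping.merge` signature.

```java
import java.util.Collections;
import java.util.Map;

public class MetaMerge {

    // Keep the existing _meta when the incoming mapping update does not define one;
    // otherwise the incoming _meta wins.
    static Map<String, Object> mergeMeta(Map<String, Object> existing, Map<String, Object> incoming) {
        return incoming == null ? existing : incoming;
    }

    public static void main(String[] args) {
        Map<String, Object> existing = Collections.singletonMap("foo", "bar");

        // Update without _meta (e.g. one that only adds a field): metadata is preserved.
        System.out.println(mergeMeta(existing, null)); // {foo=bar}

        // Update that explicitly sets _meta: metadata is replaced.
        System.out.println(mergeMeta(existing, Collections.singletonMap("foo", "new_bar"))); // {foo=new_bar}
    }
}
```

The `testMergeMeta` test in the diff exercises exactly these two paths: an update with only `properties` keeps `foo=bar`, and an update with a new `_meta` yields `foo=new_bar`.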
{
"body": "Kibana Version: 5.4.3\r\nElasticsearch Version: 5.4.3\r\nOS: Ubuntu 16.10, Windows 7\r\n\r\nWhile queries on fields such as _type works just fine :- \r\n\r\n> GET /_search\r\n> {\r\n> \"query\": {\r\n> \"wildcard\" : { \"_type\" : \"inde*\" }\r\n> }\r\n> }\r\n\r\nBut when i run code even as simple as :- \r\n\r\n> GET /_search\r\n> {\r\n> \"query\": {\r\n> \"wildcard\" : { \"_index\" : \".kib*\" }\r\n> }\r\n> }\r\n\r\nI get greeted by an error message like this :- \r\n\r\n> {\r\n> \"error\": {\r\n> \"root_cause\": [\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"XF4sz8QzSH-4IULAikLaQg\",\r\n> \"index\": \".kibana\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"T53UWmqYR6aOB_wYbZnyAA\",\r\n> \"index\": \"auto\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"n5J6B9wLQSu-P6Ea8Gzfcw\",\r\n> \"index\": \"elk\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"bXirSfCpTBuH0J_kX3SjZw\",\r\n> \"index\": \"index\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"K1fQCwsEQGyllfAWh5gtsQ\",\r\n> \"index\": \"library\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"3VoVe4UhRLOntuWH0ury6Q\",\r\n> \"index\": \"person\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"XZWOWIL9TzWyQTOpUXySeA\",\r\n> \"index\": \"try_index\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"w3AD2R6BSbCLkLpY_biTBw\",\r\n> \"index\": \"tryindex\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"puvR3VIGR1aIrA38KsL8ww\",\r\n> \"index\": \"vartoken\"\r\n> },\r\n> {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"qTktBjOST7KAKk4zuEYo_A\",\r\n> \"index\": \"varun\"\r\n> },\r\n> {\r\n> 
\"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"M0OnUiw5RS-lOCJ-kHt6_A\",\r\n> \"index\": \"varun_nocrid\"\r\n> }\r\n> ],\r\n> \"type\": \"search_phase_execution_exception\",\r\n> \"reason\": \"all shards failed\",\r\n> \"phase\": \"query\",\r\n> \"grouped\": true,\r\n> \"failed_shards\": [\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \".kibana\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"XF4sz8QzSH-4IULAikLaQg\",\r\n> \"index\": \".kibana\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: .kibana vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"auto\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"T53UWmqYR6aOB_wYbZnyAA\",\r\n> \"index\": \"auto\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: auto vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"elk\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"n5J6B9wLQSu-P6Ea8Gzfcw\",\r\n> \"index\": \"elk\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: elk vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"index\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"bXirSfCpTBuH0J_kX3SjZw\",\r\n> \"index\": \"index\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: index vs. 
.kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"library\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"K1fQCwsEQGyllfAWh5gtsQ\",\r\n> \"index\": \"library\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: library vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"person\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"3VoVe4UhRLOntuWH0ury6Q\",\r\n> \"index\": \"person\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: person vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"try_index\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"XZWOWIL9TzWyQTOpUXySeA\",\r\n> \"index\": \"try_index\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: try_index vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"tryindex\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"w3AD2R6BSbCLkLpY_biTBw\",\r\n> \"index\": \"tryindex\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: tryindex vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"vartoken\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"puvR3VIGR1aIrA38KsL8ww\",\r\n> \"index\": \"vartoken\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: vartoken vs. 
.kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"varun\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"qTktBjOST7KAKk4zuEYo_A\",\r\n> \"index\": \"varun\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: varun vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> },\r\n> {\r\n> \"shard\": 0,\r\n> \"index\": \"varun_nocrid\",\r\n> \"node\": \"vT0v2wjETTeP8qXqlK7eyw\",\r\n> \"reason\": {\r\n> \"type\": \"query_shard_exception\",\r\n> \"reason\": \"failed to create query: {\\n \\\"wildcard\\\" : {\\n \\\"_index\\\" : {\\n \\\"wildcard\\\" : \\\".kib*\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n}\",\r\n> \"index_uuid\": \"M0OnUiw5RS-lOCJ-kHt6_A\",\r\n> \"index\": \"varun_nocrid\",\r\n> \"caused_by\": {\r\n> \"type\": \"illegal_argument_exception\",\r\n> \"reason\": \"Cannot extract a term from a query of type class org.elasticsearch.common.lucene.search.MatchNoDocsQuery: MatchNoDocsQuery[\\\"Index didn't match. Index queried: varun_nocrid vs. .kib*\\\"]\"\r\n> }\r\n> }\r\n> }\r\n> ]\r\n> },\r\n> \"status\": 400\r\n> }\r\n",
"comments": [
{
"body": "@varundbest I have reproduced what you see on master, however I have marked this as discuss to get some other's opinions because I'm not sure if the `_index` field is actually intended to work with the wildcard query.\r\n\r\nThe reason this fails is because [IndexFieldMapper](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java#L125-L131) returns either a MatchAllDocsQuery or a MatchNoDocsQuery depending on whether the index name matches the value provided. The WildcardQueryBuilder then calls [`MappedFieldType.extractTerm()`](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java#L446-L470) with this query and this method is not able to deal with queries that are not term based.\r\n\r\nJust out of curiosity, why are you wanting to do a wildcard query on the `_index` field instead of just using a wildcard expression for the indices in the URL for the search? e.g.\r\n```\r\nGET .kib*/_search\r\n```",
"created_at": "2017-07-14T11:10:43Z"
},
{
"body": "@colings86 Actually i'm implementing a user role-based access to dashboards and data. So the plan is to have a hidden filter which is based on the group the user belongs to. Ex:-\r\nGroup A:-\r\nHave access to indices :- producta*, sales,etc. or they can access anything but productb*.\r\nSo to automatically limit the super-set of the dashboard data, based on the payload (usergroup) on jwt token.\r\nSo for white-listing, i can use the wildcard expression for indices but for blacklisting, i'd actually require something like a wildcard query.",
"created_at": "2017-07-14T11:46:05Z"
},
{
"body": "Discussed in FixitFriday. We agreed on supporting that feature, eagerly evaluating the wildcard against the index name and returning a match_all/match_none depending on whether there is a match.",
"created_at": "2017-08-18T13:20:39Z"
}
],
"number": 25722,
"title": "Wildcard query fails on _index"
} | {
"body": "The wildcard is evaluated against the index name.\r\n\r\nFixes #25722",
"number": 27334,
"review_comments": [],
"title": "Add support for wildcard on `_index`"
} | {
"commits": [
{
"message": "wildcard query on _index"
}
],
"files": [
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n@@ -150,12 +151,8 @@ public Query termsQuery(List values, QueryShardContext context) {\n }\n \n private boolean isSameIndex(Object value, String indexName) {\n- if (value instanceof BytesRef) {\n- BytesRef indexNameRef = new BytesRef(indexName);\n- return (indexNameRef.bytesEquals((BytesRef) value));\n- } else {\n- return indexName.equals(value.toString());\n- }\n+ String pattern = value instanceof BytesRef ? pattern = ((BytesRef) value).utf8ToString() : value.toString();\n+ return Regex.simpleMatch(pattern, indexName);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.MultiTermQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.WildcardQuery;\n@@ -31,6 +33,7 @@\n import org.elasticsearch.common.lucene.BytesRefs;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.mapper.IndexFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.support.QueryParsers;\n \n@@ -187,6 +190,9 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n term = new Term(fieldName, BytesRefs.toBytesRef(value));\n } else {\n Query termQuery = fieldType.termQuery(value, context);\n+ if (termQuery instanceof MatchNoDocsQuery || termQuery instanceof MatchAllDocsQuery) {\n+ return termQuery;\n+ }\n term = MappedFieldType.extractTerm(termQuery);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/WildcardQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.WildcardQuery;\n import org.elasticsearch.common.ParsingException;\n@@ -136,4 +138,20 @@ public void testWithMetaDataField() throws IOException {\n assertEquals(expected, query);\n }\n }\n+ \n+ public void testIndexWildcard() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+\n+ QueryShardContext context = createShardContext();\n+ String index = context.getFullyQualifiedIndexName();\n+ \n+ Query query = new WildcardQueryBuilder(\"_index\", index).doToQuery(context);\n+ assertThat(query instanceof MatchAllDocsQuery, equalTo(true));\n+ \n+ query = new WildcardQueryBuilder(\"_index\", index + \"*\").doToQuery(context);\n+ assertThat(query instanceof MatchAllDocsQuery, equalTo(true));\n+ \n+ query = new WildcardQueryBuilder(\"_index\", \"index_\" + index + \"*\").doToQuery(context);\n+ assertThat(query instanceof MatchNoDocsQuery, equalTo(true));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/WildcardQueryBuilderTests.java",
"status": "modified"
}
]
} |
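The fix in the row above eagerly evaluates the wildcard pattern against the index name and rewrites the query to match-all or match-none before Lucene ever sees a term. The sketch below illustrates that rewrite decision with a tiny `*`-only glob matcher; the matcher is only a stand-in for Elasticsearch's `Regex.simpleMatch`, not its actual implementation, and the class name is made up for the example.

```java
public class IndexWildcardRewrite {

    // Minimal glob matcher supporting only the '*' wildcard (stand-in for Regex.simpleMatch).
    static boolean simpleMatch(String pattern, String value) {
        if (pattern.indexOf('*') == -1) {
            return pattern.equals(value);
        }
        String[] parts = pattern.split("\\*", -1);
        int pos = 0;
        for (int i = 0; i < parts.length; i++) {
            String part = parts[i];
            if (i == 0) {
                if (value.startsWith(part) == false) {
                    return false;
                }
                pos = part.length();
            } else if (i == parts.length - 1) {
                // The final literal must end the value without overlapping earlier matches.
                return value.endsWith(part) && value.length() - part.length() >= pos;
            } else {
                int idx = value.indexOf(part, pos);
                if (idx == -1) {
                    return false;
                }
                pos = idx + part.length();
            }
        }
        return true; // not reached: the last loop iteration always returns
    }

    public static void main(String[] args) {
        String indexName = ".kibana"; // the index the shard belongs to
        for (String pattern : new String[] {".kib*", "auto*", ".kibana"}) {
            String rewritten = simpleMatch(pattern, indexName) ? "match_all" : "match_none";
            System.out.println(pattern + " on _index of [" + indexName + "] -> " + rewritten);
        }
    }
}
```

This mirrors the behavior asserted in `testIndexWildcard`: a pattern matching the current index name produces a `MatchAllDocsQuery`, and a non-matching pattern produces a `MatchNoDocsQuery`, which `WildcardQueryBuilder` now returns directly instead of trying to extract a term.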
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.6.3\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_144\r\n\r\n**OS version**: macOS Sierra 10.12.6 (Darwin Kernel Version 16.7.0)\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWe get a deprecation warning if using `Content-Type: application/x-ndjson; charset=UTF-8` header for the Bulk API. That is, the server only checks if the `Content-Type === \"application/x-ndjson\"` and otherwise complains about missing `Content-Type`.\r\n\r\n**Steps to reproduce**:\r\n```bash\r\ncurl -XPOST 'localhost:9200/_bulk?pretty' -H 'Content-Type: application/x-ndjson; charset=UTF-8' -d'\r\n{ \"index\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"1\" } }\r\n{ \"field1\" : \"value1\" }\r\n'\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[WARN ][o.e.d.r.RestController ] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.\r\n```\r\n",
"comments": [
{
"body": "Opened this issue as a result of https://github.com/elastic/elasticsearch/issues/22769#issuecomment-338230868.",
"created_at": "2017-10-20T17:36:37Z"
},
{
"body": " We should definitely not issue a deprecation warning when Content-Type includes the charset. Another story is what we do with the charset, see #22769 .",
"created_at": "2017-10-27T13:53:44Z"
},
{
"body": "Closing as 5.x isn't maintained anymore and the behaviour on 6.x is tracked under https://github.com/elastic/elasticsearch/issues/28123",
"created_at": "2019-09-12T15:12:00Z"
}
],
"number": 27065,
"title": "Deprecation warning when having charset in the request `Content-Type`"
} | {
"body": "Closes #27065 \r\n\r\ncc @javanna ",
"number": 27301,
"review_comments": [
{
"body": "I feel we should still have some form of warning if you claim to be using a charset that isn't UTF-{8,16,32}, as per #22769. This might not be the time or the place to ask for that, however.",
"created_at": "2017-11-07T12:52:25Z"
},
{
"body": "make it `final`",
"created_at": "2018-02-08T19:47:12Z"
},
{
"body": "Thanks for picking this up. I pushed 0fabbd002c47ef86fb0c0eda01f184e6fe456171 . The merge target is 5.6 now. Thank you so much!",
"created_at": "2018-02-10T04:33:25Z"
}
],
"title": "Add ability to parse Content-Type from content type contains charset"
} | {
"commits": [
{
"message": "Add ability to parse Content-Type from content type contains charset (#27065)"
},
{
"message": "Validate charset if exist when parse media type from Content-Type"
},
{
"message": "Make variables to be final"
}
],
"files": [
{
"diff": "@@ -286,7 +286,8 @@ private boolean hasContentTypeOrCanAutoDetect(final RestRequest restRequest, fin\n }\n }\n } else if (restHandler != null && restHandler.supportsContentStream() && restRequest.header(\"Content-Type\") != null) {\n- final String lowercaseMediaType = restRequest.header(\"Content-Type\").toLowerCase(Locale.ROOT);\n+ final String lowercaseMediaType = parseMediaType(restRequest.header(\"Content-Type\"));\n+\n // we also support newline delimited JSON: http://specs.okfnlabs.org/ndjson/\n if (lowercaseMediaType.equals(\"application/x-ndjson\")) {\n restRequest.setXContentType(XContentType.JSON);\n@@ -308,6 +309,23 @@ private boolean hasContentTypeOrCanAutoDetect(final RestRequest restRequest, fin\n return true;\n }\n \n+ private String parseMediaType(String rawContentType) {\n+ final String contentType = rawContentType.toLowerCase(Locale.ROOT);\n+ final int firstSemiColonIndex = contentType.indexOf(';');\n+ if (firstSemiColonIndex == -1) {\n+ return contentType;\n+ }\n+ final String mediaType = contentType.substring(0, firstSemiColonIndex).trim();\n+ final String charsetCandidate = contentType.substring(firstSemiColonIndex + 1);\n+\n+ final String[] keyValue = charsetCandidate.split(\"=\", 2);\n+ if (keyValue.length != 2 || keyValue[0].trim().equalsIgnoreCase(\"charset\") == false\n+ || keyValue[1].trim().equalsIgnoreCase(\"utf-8\") == false) {\n+ deprecationLogger.deprecated(\"Content-Type [\" + rawContentType + \"] contains unrecognized [\" + charsetCandidate + \"]\");\n+ }\n+ return mediaType;\n+ }\n+\n private boolean autoDetectXContentType(RestRequest restRequest) {\n deprecationLogger.deprecated(\"Content type detection for rest requests is deprecated. Specify the content type using \" +\n \"the [Content-Type] header.\");",
"filename": "core/src/main/java/org/elasticsearch/rest/RestController.java",
"status": "modified"
},
{
"diff": "@@ -418,6 +418,50 @@ public boolean supportsContentStream() {\n assertTrue(channel.getSendResponseCalled());\n }\n \n+ public void testContentTypeNotOnlyMediaType() {\n+ restController.registerHandler(RestRequest.Method.GET, \"/foo\", new RestHandler() {\n+ @Override\n+ public void handleRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception {\n+ channel.sendResponse(new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY));\n+ }\n+\n+ @Override\n+ public boolean supportsContentStream() {\n+ return true;\n+ }\n+ });\n+\n+ String goodMimeType = randomFrom(\"application/x-ndjson\",\n+ \"application/x-ndjson; charset=UTF-8\",\n+ \"application/x-ndjson; charset=utf-8\",\n+ \"application/x-ndjson;charset=UTF-8\");\n+ String content = randomAlphaOfLengthBetween(1, BREAKER_LIMIT.bytesAsInt());\n+ FakeRestRequest fakeRestRequest = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY)\n+ .withContent(new BytesArray(content), null).withPath(\"/foo\")\n+ .withHeaders(Collections.singletonMap(\"Content-Type\", Collections.singletonList(goodMimeType))).build();\n+ AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.OK);\n+\n+ assertFalse(channel.getSendResponseCalled());\n+ restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));\n+ assertTrue(channel.getSendResponseCalled());\n+\n+ String badMimeType = randomFrom(\"application/x-ndjson; charset=utf-8; unknown\",\n+ \"application/x-ndjson; charset=unknown\",\n+ \"application/x-ndjson;charset=UTF-16\",\n+ \"application/x-ndjson; unknown\");\n+ content = randomAlphaOfLengthBetween(1, BREAKER_LIMIT.bytesAsInt());\n+ fakeRestRequest = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY)\n+ .withContent(new BytesArray(content), null).withPath(\"/foo\")\n+ .withHeaders(Collections.singletonMap(\"Content-Type\", Collections.singletonList(badMimeType))).build();\n+ channel = new AssertingChannel(fakeRestRequest, true, RestStatus.OK);\n+\n+ assertFalse(channel.getSendResponseCalled());\n+ restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY));\n+ assertTrue(channel.getSendResponseCalled());\n+ assertWarnings(\"Content-Type [\" + badMimeType + \"] contains unrecognized [\"\n+ + badMimeType.substring(badMimeType.indexOf(\";\") + 1).toLowerCase(Locale.ROOT) + \"]\");\n+ }\n+\n public void testDispatchWithContentStreamAutoDetect() {\n FakeRestRequest fakeRestRequest = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY)\n .withContent(new BytesArray(\"{}\"), null).withPath(\"/foo\").build();",
"filename": "core/src/test/java/org/elasticsearch/rest/RestControllerTests.java",
"status": "modified"
}
]
} |
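The `parseMediaType` addition in the row above strips any parameters from the `Content-Type` header before the media type is compared, and treats only a UTF-8 `charset` parameter as recognized, warning on anything else. Here is a self-contained sketch of the same parsing idea; the class name and the `println` warning are illustrative stand-ins for the actual code, which routes the message through Elasticsearch's deprecation logger.

```java
import java.util.Locale;

public class MediaTypeParsing {

    // Returns the lowercased media type, ignoring a trailing ";charset=..." parameter.
    // Any parameter other than a UTF-8 charset triggers a warning.
    static String parseMediaType(String rawContentType) {
        final String contentType = rawContentType.toLowerCase(Locale.ROOT);
        final int firstSemiColonIndex = contentType.indexOf(';');
        if (firstSemiColonIndex == -1) {
            return contentType;
        }
        final String mediaType = contentType.substring(0, firstSemiColonIndex).trim();
        final String charsetCandidate = contentType.substring(firstSemiColonIndex + 1).trim();

        final String[] keyValue = charsetCandidate.split("=", 2);
        if (keyValue.length != 2
                || keyValue[0].trim().equals("charset") == false
                || keyValue[1].trim().equals("utf-8") == false) {
            System.out.println("warning: Content-Type [" + rawContentType
                    + "] contains unrecognized [" + charsetCandidate + "]");
        }
        return mediaType;
    }

    public static void main(String[] args) {
        System.out.println(parseMediaType("application/x-ndjson"));                 // application/x-ndjson
        System.out.println(parseMediaType("application/x-ndjson; charset=UTF-8"));  // application/x-ndjson, no warning
        System.out.println(parseMediaType("application/x-ndjson; charset=UTF-16")); // warns, then application/x-ndjson
    }
}
```

The good/bad MIME-type lists in `testContentTypeNotOnlyMediaType` follow the same split: `charset=UTF-8` in any casing passes silently, while unknown parameters or charsets still dispatch the request but emit a warning header.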
{
"body": "ES 5.3.0 Crashes because of an unhandled exception from Netty when building the Warning response header with a deprecation message containing a field with a carriage return.\r\n\r\nWhen indexing a message containing a field with a carriage return, such as\r\n`{\r\n\t\"\\r_xyz\": \"\",\r\n ...\r\n}`\r\n\r\nAnd with a deprecated dynamic mapping template such as\r\n`'dynamic_templates' => [{\r\n 'string_fields' => {\r\n 'mapping' => {\r\n 'fielddata' => {\r\n 'format' => 'disabled'\r\n }\r\n ,\r\n 'index' => 'analyzed',\r\n 'omit_norms' => true,\r\n 'type' => 'string',\r\n 'fields' => {\r\n 'raw' => {\r\n 'ignore_above' => 256,\r\n 'index' => 'not_analyzed',\r\n 'type' => 'string',\r\n }\r\n }\r\n }\r\n ,\r\n 'match_mapping_type' => 'string',\r\n 'match' => '*'\r\n }\r\n }\r\n ],\r\n }`\r\n\r\nES 5.3.0 crashes with the following exception:\r\n\r\n> java.lang.IllegalArgumentException: only '\\n' is allowed after '\\r': 299 Elasticsearch-5.3.0-3adb13b \"The [string] field is deprecated, please use [text] or [keyword] instead on [^M_xyz]\" \"Tue, 28 Oct 2017 10:21:14 GMT\"\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.validateValueChar(DefaultHttpHeaders.java:454) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.convertObject(DefaultHttpHeaders.java:411) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.convertObject(DefaultHttpHeaders.java:402) ~[?:?]\r\n at io.netty.handler.codec.DefaultHeaders.addObject(DefaultHeaders.java:318) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders.add(DefaultHttpHeaders.java:117) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.setHeaderField(Netty4HttpChannel.java:160) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.setHeaderField(Netty4HttpChannel.java:155) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.addCustomHeaders(Netty4HttpChannel.java:181) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:114) ~[?:?]\r\n at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:445) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n\r\n\r\n\r\n\r\nFull exception:\r\n\r\n> [ERROR][o.e.r.a.RestResponseListener] failed to send failure response\r\njava.lang.IllegalStateException: Channel is already closed\r\n at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.close(RestController.java:451) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:444) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.rest.action.RestActionListener.onResponse(RestActionListener.java:49) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:368) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onResponse(TransportBulkAction.java:349) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at 
org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onResponse(TransportBulkAction.java:338) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishOnSuccess(TransportReplicationAction.java:855) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleResponse(TransportReplicationAction.java:765) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleResponse(TransportReplicationAction.java:751) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1025) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1098) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1088) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1078) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:58) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:111) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$2.onResponse(TransportReplicationAction.java:348) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$2.onResponse(TransportReplicationAction.java:342) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryResult.respond(TransportReplicationAction.java:414) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction$WritePrimaryResult.respondIfPossible(TransportWriteAction.java:127) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction$WritePrimaryResult.respond(TransportWriteAction.java:118) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$onResponse$0(TransportReplicationAction.java:320) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.finish(ReplicationOperation.java:305) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.decPendingAndFinishIfNeeded(ReplicationOperation.java:286) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.access$100(ReplicationOperation.java:55) [elasticsearch-5.3.0.jar:5.3.0]\r\n at 
org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:190) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:186) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:46) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1025) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport$1.doRun(TcpTransport.java:1386) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1378) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1347) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-5.3.0.jar:5.3.0]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n Suppressed: java.lang.IllegalArgumentException: only '\\n' is allowed after '\\r': 299 Elasticsearch-5.3.0-3adb13b \"The [string] field is deprecated, please use [text] or [keyword] instead on [^M_xyz]\" \"Tue, 28 Oct 2017 10:21:14 GMT\"\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.validateValueChar(DefaultHttpHeaders.java:454) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.convertObject(DefaultHttpHeaders.java:411) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders$HeaderValueConverterAndValidator.convertObject(DefaultHttpHeaders.java:402) ~[?:?]\r\n at io.netty.handler.codec.DefaultHeaders.addObject(DefaultHeaders.java:318) ~[?:?]\r\n at io.netty.handler.codec.http.DefaultHttpHeaders.add(DefaultHttpHeaders.java:117) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.setHeaderField(Netty4HttpChannel.java:160) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.setHeaderField(Netty4HttpChannel.java:155) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.addCustomHeaders(Netty4HttpChannel.java:181) ~[?:?]\r\n at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:114) ~[?:?]\r\n at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:445) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.rest.action.RestResponseListener.processResponse(RestResponseListener.java:37) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.rest.action.RestActionListener.onResponse(RestActionListener.java:47) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:368) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onResponse(TransportBulkAction.java:349) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onResponse(TransportBulkAction.java:338) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishOnSuccess(TransportReplicationAction.java:855) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleResponse(TransportReplicationAction.java:765) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleResponse(TransportReplicationAction.java:751) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1025) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1098) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1088) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1078) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:58) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:111) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$2.onResponse(TransportReplicationAction.java:348) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction$2.onResponse(TransportReplicationAction.java:342) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryResult.respond(TransportReplicationAction.java:414) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction$WritePrimaryResult.respondIfPossible(TransportWriteAction.java:127) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportWriteAction$WritePrimaryResult.respond(TransportWriteAction.java:118) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$onResponse$0(TransportReplicationAction.java:320) ~[elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.finish(ReplicationOperation.java:305) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.decPendingAndFinishIfNeeded(ReplicationOperation.java:286) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation.access$100(ReplicationOperation.java:55) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:190) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.support.replication.ReplicationOperation$1.onResponse(ReplicationOperation.java:186) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:46) [elasticsearch-5.3.0.jar:5.3.0]\r\n at 
org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1025) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport$1.doRun(TcpTransport.java:1386) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1378) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1347) [elasticsearch-5.3.0.jar:5.3.0]\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-5.3.0.jar:5.3.0]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) 
[netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\r\n",
"comments": [
{
"body": "cmds to reproduce:\r\n```\r\ncurl -v -XPUT 'localhost:9200/my_index?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"mappings\": {\r\n \"my_type\": {\r\n \"dynamic_templates\": [\r\n {\r\n \"string_fields\": {\r\n \"mapping\": {\r\n \"type\": \"string\",\r\n \"norms\": \"false\",\r\n \"include_in_all\": \"false\"\r\n },\r\n \"match_mapping_type\": \"string\",\r\n \"match\": \"*\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}'\r\n\r\ncurl -v -XPOST 'http://localhost:9200/my_index/my_type?pretty' -H 'Content-Type: application/json' -d '{ \"field\\r\" : \"value\" }'\r\n```",
"created_at": "2017-11-03T13:54:22Z"
},
{
"body": "I opened #27269.",
"created_at": "2017-11-05T14:12:36Z"
},
{
"body": "With this change, the reproduction that @albertzaharovits gives now produces:\r\n\r\n```\r\n08:48:44 [jason:~/src/elastic/elasticsearch-5.x] 5.6+ ± curl -v -XPUT 'localhost:9200/my_index?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"mappings\": {\r\n \"my_type\": {\r\n \"dynamic_templates\": [\r\n {\r\n \"string_fields\": {\r\n \"mapping\": {\r\n \"type\": \"string\",\r\n \"norms\": \"false\",\r\n \"include_in_all\": \"false\"\r\n },\r\n \"match_mapping_type\": \"string\",\r\n \"match\": \"*\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}'\r\n* Trying ::1...\r\n* TCP_NODELAY set\r\n* Connected to localhost (::1) port 9200 (#0)\r\n> PUT /my_index?pretty HTTP/1.1\r\n> Host: localhost:9200\r\n> User-Agent: curl/7.54.0\r\n> Accept: */*\r\n> Content-Type: application/json\r\n> Content-Length: 358\r\n> \r\n* upload completely sent off: 358 out of 358 bytes\r\n< HTTP/1.1 200 OK\r\n< content-type: application/json; charset=UTF-8\r\n< content-length: 84\r\n< \r\n{\r\n \"acknowledged\" : true,\r\n \"shards_acknowledged\" : true,\r\n \"index\" : \"my_index\"\r\n}\r\n* Connection #0 to host localhost left intact\r\n08:54:03 [jason:~/src/elastic/elasticsearch-5.x] 5.6+ ± curl -v -XPOST 'http://localhost:9200/my_index/my_type?pretty' -H 'Content-Type: application/json' -d '{ \"field\\r\" : \"value\" }'\r\nNote: Unnecessary use of -X or --request, POST is already inferred.\r\n* Trying ::1...\r\n* TCP_NODELAY set\r\n* Connected to localhost (::1) port 9200 (#0)\r\n> POST /my_index/my_type?pretty HTTP/1.1\r\n> Host: localhost:9200\r\n> User-Agent: curl/7.54.0\r\n> Accept: */*\r\n> Content-Type: application/json\r\n> Content-Length: 23\r\n> \r\n* upload completely sent off: 23 out of 23 bytes\r\n< HTTP/1.1 201 Created\r\n< Location: /my_index/my_type/AV-MdbJvxmftjXDqQmAN\r\n< Warning: 299 Elasticsearch-5.6.4-SNAPSHOT-Unknown \"The [string] field is deprecated, please use [text] or [keyword] instead on [field%0D]\" \"Sun, 05 Nov 2017 13:54:04 GMT\"\r\n< Warning: 299 Elasticsearch-5.6.4-SNAPSHOT-Unknown \"field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.\" \"Sun, 05 Nov 2017 13:54:04 GMT\"\r\n< content-type: application/json; charset=UTF-8\r\n< content-length: 224\r\n< \r\n{\r\n \"_index\" : \"my_index\",\r\n \"_type\" : \"my_type\",\r\n \"_id\" : \"AV-MdbJvxmftjXDqQmAN\",\r\n \"_version\" : 1,\r\n \"result\" : \"created\",\r\n \"_shards\" : {\r\n \"total\" : 2,\r\n \"successful\" : 1,\r\n \"failed\" : 0\r\n },\r\n \"created\" : true\r\n}\r\n* Connection #0 to host localhost left intact\r\n```",
"created_at": "2017-11-05T14:15:58Z"
}
],
"number": 27244,
"title": "ES 5.3.0 Crash with carriage return in indexed field"
} | {
"body": "The warnings headers have a fairly limited set of valid characters (cf. quoted-text in RFC 7230). While we have assertions that we adhere to this set of valid characters ensuring that our warning messages do not violate the specificaion, we were neglecting the possibility that arbitrary user input would trickle into these warning headers. Thus, missing here was tests for these situations and encoding of characters that appear outside the set of valid characters. This commit addresses this by encoding any characters in a deprecation message that are not from the set of valid characters.\r\n\r\nCloses #27244\r\n",
"number": 27269,
"review_comments": [
{
"body": "Lol - I spent some cycles trying to figure out how the hell we know this won't throw an index out of bounds exception, only to end up learning something about the BitSet api - it's funky ;)",
"created_at": "2017-11-06T10:48:56Z"
},
{
"body": "If we assume that all surrogate pairs need encoding, I think we can make this simpler?\r\n\r\n```\r\n int startIndex = i;\r\n for (i++;i< s.length() && doesNotNeedEncoding.get(s.charAt(i)) == false; i++) {\r\n assert Character.isSurrogate(s.charAt(i)) == false || doesNotNeedEncoding.get(s.charAt(i)) == false;\r\n }\r\n final byte[] bytes = s.substring(startIndex, i).getBytes(UTF_8);\r\n\r\n```",
"created_at": "2017-11-06T10:50:45Z"
},
{
"body": "any chance we can specify this using the \\u notation and not have crazy chars in the code?",
"created_at": "2017-11-06T11:17:20Z"
},
{
"body": "we decided to live on the edge and have fun. The concern was around non ascii codes breaking tooling but CI seems happy. Let's see how far we get.",
"created_at": "2017-11-06T14:07:33Z"
},
{
"body": "Yeah, it grows on writes if needed, and reads are always okay.",
"created_at": "2017-11-06T17:30:46Z"
}
],
"title": "Correctly encode warning headers"
} | {
"commits": [
{
"message": "Correctly encode warning headers\n\nThe warnings headers have a fairly limited set of valid characters\n(cf. quoted-text in RFC 7230). While we have assertions that we adhere\nto this set of valid characters ensuring that our warning messages do\nnot violate the specificaion, we were neglecting the possibility that\narbitrary user input would trickle into these warning headers. Thus,\nmissing here was tests for these situations and encoding of characters\nthat appear outside the set of valid characters. This commit addresses\nthis by encoding any characters in a deprecation message that are not\nfrom the set of valid characters."
},
{
"message": "Fix line length in EvilLoggerTests\n\nThis commit fixes a line-length violation in EvilLoggerTests.java that\noccurred after a method was renamed to a longer name."
},
{
"message": "Merge branch 'master' into deprecation-header-encoding\n\n* master:\n Backport the size-based index rollver to v6.1.0\n Add size-based condition to the index rollover API (#27160)\n Remove the single argument Environment constructor (#27235)"
},
{
"message": "Percent encode percent"
},
{
"message": "Fix off by one"
},
{
"message": "Simplify"
}
],
"files": [
{
"diff": "@@ -26,11 +26,14 @@\n import org.elasticsearch.common.SuppressLoggerChecks;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n \n+import java.io.CharArrayWriter;\n+import java.nio.charset.Charset;\n import java.time.ZoneId;\n import java.time.ZonedDateTime;\n import java.time.format.DateTimeFormatter;\n import java.time.format.DateTimeFormatterBuilder;\n import java.time.format.SignStyle;\n+import java.util.BitSet;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.Iterator;\n@@ -228,7 +231,7 @@ public void deprecatedAndMaybeLog(final String key, final String msg, final Obje\n public static Pattern WARNING_HEADER_PATTERN = Pattern.compile(\n \"299 \" + // warn code\n \"Elasticsearch-\\\\d+\\\\.\\\\d+\\\\.\\\\d+(?:-(?:alpha|beta|rc)\\\\d+)?(?:-SNAPSHOT)?-(?:[a-f0-9]{7}|Unknown) \" + // warn agent\n- \"\\\"((?:\\t| |!|[\\\\x23-\\\\x5b]|[\\\\x5d-\\\\x7e]|[\\\\x80-\\\\xff]|\\\\\\\\|\\\\\\\\\\\")*)\\\" \" + // quoted warning value, captured\n+ \"\\\"((?:\\t| |!|[\\\\x23-\\\\x5B]|[\\\\x5D-\\\\x7E]|[\\\\x80-\\\\xFF]|\\\\\\\\|\\\\\\\\\\\")*)\\\" \" + // quoted warning value, captured\n // quoted RFC 1123 date format\n \"\\\"\" + // opening quote\n \"(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun), \" + // weekday\n@@ -304,7 +307,7 @@ void deprecated(final Set<ThreadContext> threadContexts, final String message, f\n final String formattedMessage = LoggerMessageFormat.format(message, params);\n final String warningHeaderValue = formatWarning(formattedMessage);\n assert WARNING_HEADER_PATTERN.matcher(warningHeaderValue).matches();\n- assert extractWarningValueFromWarningHeader(warningHeaderValue).equals(escape(formattedMessage));\n+ assert extractWarningValueFromWarningHeader(warningHeaderValue).equals(escapeAndEncode(formattedMessage));\n while (iterator.hasNext()) {\n try {\n final ThreadContext next = iterator.next();\n@@ -328,7 +331,17 @@ void deprecated(final Set<ThreadContext> threadContexts, final String message, f\n * @return a warning value formatted according to RFC 7234\n */\n public static String formatWarning(final String s) {\n- return String.format(Locale.ROOT, WARNING_FORMAT, escape(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT)));\n+ return String.format(Locale.ROOT, WARNING_FORMAT, escapeAndEncode(s), RFC_7231_DATE_TIME.format(ZonedDateTime.now(GMT)));\n+ }\n+\n+ /**\n+ * Escape and encode a string as a valid RFC 7230 quoted-string.\n+ *\n+ * @param s the string to escape and encode\n+ * @return the escaped and encoded string\n+ */\n+ public static String escapeAndEncode(final String s) {\n+ return encode(escapeBackslashesAndQuotes(s));\n }\n \n /**\n@@ -337,8 +350,81 @@ public static String formatWarning(final String s) {\n * @param s the string to escape\n * @return the escaped string\n */\n- public static String escape(String s) {\n+ static String escapeBackslashesAndQuotes(final String s) {\n return s.replaceAll(\"([\\\"\\\\\\\\])\", \"\\\\\\\\$1\");\n }\n \n+ private static BitSet doesNotNeedEncoding;\n+\n+ static {\n+ doesNotNeedEncoding = new BitSet(1 + 0xFF);\n+ doesNotNeedEncoding.set('\\t');\n+ doesNotNeedEncoding.set(' ');\n+ doesNotNeedEncoding.set('!');\n+ doesNotNeedEncoding.set('\\\\');\n+ doesNotNeedEncoding.set('\"');\n+ // we have to skip '%' which is 0x25 so that it is percent-encoded too\n+ for (int i = 0x23; i <= 0x24; i++) {\n+ doesNotNeedEncoding.set(i);\n+ }\n+ for (int i = 0x26; i <= 0x5B; i++) {\n+ doesNotNeedEncoding.set(i);\n+ }\n+ for (int i = 0x5D; i <= 0x7E; i++) {\n+ doesNotNeedEncoding.set(i);\n+ }\n+ for 
(int i = 0x80; i <= 0xFF; i++) {\n+ doesNotNeedEncoding.set(i);\n+ }\n+ assert !doesNotNeedEncoding.get('%');\n+ }\n+\n+ private static final Charset UTF_8 = Charset.forName(\"UTF-8\");\n+\n+ /**\n+ * Encode a string containing characters outside of the legal characters for an RFC 7230 quoted-string.\n+ *\n+ * @param s the string to encode\n+ * @return the encoded string\n+ */\n+ static String encode(final String s) {\n+ final StringBuilder sb = new StringBuilder(s.length());\n+ boolean encodingNeeded = false;\n+ for (int i = 0; i < s.length();) {\n+ int current = (int) s.charAt(i);\n+ /*\n+ * Either the character does not need encoding or it does; when the character does not need encoding we append the character to\n+ * a buffer and move to the next character and when the character does need encoding, we peel off as many characters as possible\n+ * which we encode using UTF-8 until we encounter another character that does not need encoding.\n+ */\n+ if (doesNotNeedEncoding.get(current)) {\n+ // append directly and move to the next character\n+ sb.append((char) current);\n+ i++;\n+ } else {\n+ int startIndex = i;\n+ do {\n+ i++;\n+ } while (i < s.length() && !doesNotNeedEncoding.get(s.charAt(i)));\n+\n+ final byte[] bytes = s.substring(startIndex, i).getBytes(UTF_8);\n+ // noinspection ForLoopReplaceableByForEach\n+ for (int j = 0; j < bytes.length; j++) {\n+ sb.append('%').append(hex(bytes[j] >> 4)).append(hex(bytes[j]));\n+ }\n+ encodingNeeded = true;\n+ }\n+ }\n+ return encodingNeeded ? sb.toString() : s;\n+ }\n+\n+ private static char hex(int b) {\n+ final char ch = Character.forDigit(b & 0xF, 16);\n+ if (Character.isLetter(ch)) {\n+ return Character.toUpperCase(ch);\n+ } else {\n+ return ch;\n+ }\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/common/logging/DeprecationLogger.java",
"status": "modified"
},
{
"diff": "@@ -23,11 +23,13 @@\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.test.hamcrest.RegexMatcher;\n+import org.hamcrest.core.IsSame;\n \n import java.io.IOException;\n import java.util.Collections;\n import java.util.HashSet;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n import java.util.stream.IntStream;\n@@ -71,6 +73,54 @@ public void testAddsHeaderWithThreadContext() throws IOException {\n }\n }\n \n+ public void testContainingNewline() throws IOException {\n+ try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) {\n+ final Set<ThreadContext> threadContexts = Collections.singleton(threadContext);\n+\n+ logger.deprecated(threadContexts, \"this message contains a newline\\n\");\n+\n+ final Map<String, List<String>> responseHeaders = threadContext.getResponseHeaders();\n+\n+ assertThat(responseHeaders.size(), equalTo(1));\n+ final List<String> responses = responseHeaders.get(\"Warning\");\n+ assertThat(responses, hasSize(1));\n+ assertThat(responses.get(0), warningValueMatcher);\n+ assertThat(responses.get(0), containsString(\"\\\"this message contains a newline%0A\\\"\"));\n+ }\n+ }\n+\n+ public void testSurrogatePair() throws IOException {\n+ try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) {\n+ final Set<ThreadContext> threadContexts = Collections.singleton(threadContext);\n+\n+ logger.deprecated(threadContexts, \"this message contains a surrogate pair 😱\");\n+\n+ final Map<String, List<String>> responseHeaders = threadContext.getResponseHeaders();\n+\n+ assertThat(responseHeaders.size(), equalTo(1));\n+ final List<String> responses = responseHeaders.get(\"Warning\");\n+ assertThat(responses, hasSize(1));\n+ assertThat(responses.get(0), warningValueMatcher);\n+\n+ // convert UTF-16 to UTF-8 by hand to show the hard-coded constant below is correct\n+ assertThat(\"😱\", equalTo(\"\\uD83D\\uDE31\"));\n+ final int code = 0x10000 + ((0xD83D & 0x3FF) << 10) + (0xDE31 & 0x3FF);\n+ @SuppressWarnings(\"PointlessBitwiseExpression\")\n+ final int[] points = new int[] {\n+ (code >> 18) & 0x07 | 0xF0,\n+ (code >> 12) & 0x3F | 0x80,\n+ (code >> 6) & 0x3F | 0x80,\n+ (code >> 0) & 0x3F | 0x80};\n+ final StringBuilder sb = new StringBuilder();\n+ // noinspection ForLoopReplaceableByForEach\n+ for (int i = 0; i < points.length; i++) {\n+ sb.append(\"%\").append(Integer.toString(points[i], 16).toUpperCase(Locale.ROOT));\n+ }\n+ assertThat(sb.toString(), equalTo(\"%F0%9F%98%B1\"));\n+ assertThat(responses.get(0), containsString(\"\\\"this message contains a surrogate pair %F0%9F%98%B1\\\"\"));\n+ }\n+ }\n+\n public void testAddsCombinedHeaderWithThreadContext() throws IOException {\n try (ThreadContext threadContext = new ThreadContext(Settings.EMPTY)) {\n final Set<ThreadContext> threadContexts = Collections.singleton(threadContext);\n@@ -172,15 +222,28 @@ public void testWarningValueFromWarningHeader() throws InterruptedException {\n assertThat(DeprecationLogger.extractWarningValueFromWarningHeader(first), equalTo(s));\n }\n \n- public void testEscape() {\n- assertThat(DeprecationLogger.escape(\"\\\\\"), equalTo(\"\\\\\\\\\"));\n- assertThat(DeprecationLogger.escape(\"\\\"\"), equalTo(\"\\\\\\\"\"));\n- assertThat(DeprecationLogger.escape(\"\\\\\\\"\"), equalTo(\"\\\\\\\\\\\\\\\"\"));\n- assertThat(DeprecationLogger.escape(\"\\\"foo\\\\bar\\\"\"),equalTo(\"\\\\\\\"foo\\\\\\\\bar\\\\\\\"\"));\n+ public void 
testEscapeBackslashesAndQuotes() {\n+ assertThat(DeprecationLogger.escapeBackslashesAndQuotes(\"\\\\\"), equalTo(\"\\\\\\\\\"));\n+ assertThat(DeprecationLogger.escapeBackslashesAndQuotes(\"\\\"\"), equalTo(\"\\\\\\\"\"));\n+ assertThat(DeprecationLogger.escapeBackslashesAndQuotes(\"\\\\\\\"\"), equalTo(\"\\\\\\\\\\\\\\\"\"));\n+ assertThat(DeprecationLogger.escapeBackslashesAndQuotes(\"\\\"foo\\\\bar\\\"\"),equalTo(\"\\\\\\\"foo\\\\\\\\bar\\\\\\\"\"));\n // test that characters other than '\\' and '\"' are left unchanged\n- String chars = \"\\t !\" + range(0x23, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff);\n+ String chars = \"\\t !\" + range(0x23, 0x24) + range(0x26, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff);\n+ final String s = new CodepointSetGenerator(chars.toCharArray()).ofCodePointsLength(random(), 16, 16);\n+ assertThat(DeprecationLogger.escapeBackslashesAndQuotes(s), equalTo(s));\n+ }\n+\n+ public void testEncode() {\n+ assertThat(DeprecationLogger.encode(\"\\n\"), equalTo(\"%0A\"));\n+ assertThat(DeprecationLogger.encode(\"😱\"), equalTo(\"%F0%9F%98%B1\"));\n+ assertThat(DeprecationLogger.encode(\"福島深雪\"), equalTo(\"%E7%A6%8F%E5%B3%B6%E6%B7%B1%E9%9B%AA\"));\n+ assertThat(DeprecationLogger.encode(\"100%\\n\"), equalTo(\"100%25%0A\"));\n+ // test that valid characters are left unchanged\n+ String chars = \"\\t !\" + range(0x23, 0x24) + range(0x26, 0x5b) + range(0x5d, 0x73) + range(0x80, 0xff) + '\\\\' + '\"';\n final String s = new CodepointSetGenerator(chars.toCharArray()).ofCodePointsLength(random(), 16, 16);\n- assertThat(DeprecationLogger.escape(s), equalTo(s));\n+ assertThat(DeprecationLogger.encode(s), equalTo(s));\n+ // when no encoding is needed, the original string is returned (optimization)\n+ assertThat(DeprecationLogger.encode(s), IsSame.sameInstance(s));\n }\n \n private String range(int lowerInclusive, int upperInclusive) {",
"filename": "core/src/test/java/org/elasticsearch/common/logging/DeprecationLoggerTests.java",
"status": "modified"
},
{
"diff": "@@ -28,7 +28,6 @@\n import org.apache.logging.log4j.core.appender.CountingNoOpAppender;\n import org.apache.logging.log4j.core.config.Configurator;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n-import org.apache.lucene.util.Constants;\n import org.elasticsearch.cli.UserException;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.common.Randomness;\n@@ -165,7 +164,9 @@ public void testConcurrentDeprecationLogger() throws IOException, UserException,\n final Set<String> actualWarningValues =\n warnings.stream().map(DeprecationLogger::extractWarningValueFromWarningHeader).collect(Collectors.toSet());\n for (int j = 0; j < 128; j++) {\n- assertThat(actualWarningValues, hasItem(DeprecationLogger.escape(\"This is a maybe logged deprecation message\" + j)));\n+ assertThat(\n+ actualWarningValues,\n+ hasItem(DeprecationLogger.escapeAndEncode(\"This is a maybe logged deprecation message\" + j)));\n }\n \n try {",
"filename": "qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java",
"status": "modified"
},
{
"diff": "@@ -341,7 +341,7 @@ protected final void assertWarnings(String... expectedWarnings) {\n final Set<String> actualWarningValues =\n actualWarnings.stream().map(DeprecationLogger::extractWarningValueFromWarningHeader).collect(Collectors.toSet());\n for (String msg : expectedWarnings) {\n- assertThat(actualWarningValues, hasItem(DeprecationLogger.escape(msg)));\n+ assertThat(actualWarningValues, hasItem(DeprecationLogger.escapeAndEncode(msg)));\n }\n assertEquals(\"Expected \" + expectedWarnings.length + \" warnings but found \" + actualWarnings.size() + \"\\nExpected: \"\n + Arrays.asList(expectedWarnings) + \"\\nActual: \" + actualWarnings,",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ESTestCase.java",
"status": "modified"
},
{
"diff": "@@ -263,7 +263,7 @@ void checkWarningHeaders(final List<String> warningHeaders) {\n final List<String> missing = new ArrayList<>();\n // LinkedHashSet so that missing expected warnings come back in a predictable order which is nice for testing\n final Set<String> expected =\n- new LinkedHashSet<>(expectedWarningHeaders.stream().map(DeprecationLogger::escape).collect(Collectors.toList()));\n+ new LinkedHashSet<>(expectedWarningHeaders.stream().map(DeprecationLogger::escapeAndEncode).collect(Collectors.toList()));\n for (final String header : warningHeaders) {\n final Matcher matcher = WARNING_HEADER_PATTERN.matcher(header);\n final boolean matches = matcher.matches();",
"filename": "test/framework/src/main/java/org/elasticsearch/test/rest/yaml/section/DoSection.java",
"status": "modified"
}
]
} |
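For reference, here is a minimal standalone sketch of the percent-encoding approach applied in the PR above: characters outside the RFC 7230 quoted-string set are peeled off in runs, UTF-8 encoded, and written as %XX escapes. The class name, the `String.format("%02X", ...)` formatting, and the `main` demo are illustrative only; the actual Elasticsearch implementation lives in `DeprecationLogger`, as shown in the diff.

```java
import java.nio.charset.StandardCharsets;
import java.util.BitSet;

public final class WarningValueEncoder {

    // characters allowed in an RFC 7230 quoted-string (plus '\' and '"', which are escaped separately)
    private static final BitSet DOES_NOT_NEED_ENCODING = new BitSet(0x100);
    static {
        DOES_NOT_NEED_ENCODING.set('\t');
        DOES_NOT_NEED_ENCODING.set(' ');
        DOES_NOT_NEED_ENCODING.set('!');
        DOES_NOT_NEED_ENCODING.set('\\');
        DOES_NOT_NEED_ENCODING.set('"');
        DOES_NOT_NEED_ENCODING.set(0x23, 0x25);  // '#', '$' -- skip '%' so it gets percent-encoded too
        DOES_NOT_NEED_ENCODING.set(0x26, 0x5C);  // '&' .. '['
        DOES_NOT_NEED_ENCODING.set(0x5D, 0x7F);  // ']' .. '~'
        DOES_NOT_NEED_ENCODING.set(0x80, 0x100); // obs-text
    }

    static String encode(final String s) {
        final StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); ) {
            final char c = s.charAt(i);
            if (DOES_NOT_NEED_ENCODING.get(c)) {
                // valid character: copy it through unchanged
                sb.append(c);
                i++;
            } else {
                // peel off the longest run of characters that need encoding and percent-encode its UTF-8 bytes
                final int start = i;
                do {
                    i++;
                } while (i < s.length() && DOES_NOT_NEED_ENCODING.get(s.charAt(i)) == false);
                for (final byte b : s.substring(start, i).getBytes(StandardCharsets.UTF_8)) {
                    sb.append('%').append(String.format("%02X", b & 0xFF));
                }
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("this message contains a newline\n")); // ...newline%0A
        System.out.println(encode("100%"));                              // 100%25
    }
}
```

Unlike the production code, this sketch does not bother returning the original string instance when no encoding is needed; it always rebuilds the value.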
{
"body": "If the master disconnects from the cluster after initiating snapshot, but just before the snapshot switches from INIT to STARTED state, the snapshot can get indefinitely stuck in the INIT state. This error is specific to v5.x+ and was triggered by [keeping the master node that stepped down in the node list](https://github.com/elastic/elasticsearch/pull/22049), the cleanup logic in snapshot/restore assumed that if master steps down it is always removed from the the node list. We need to change the cleanup logic to be triggered even if no nodes left the cluster.\r\n",
"comments": [],
"number": 27180,
"title": "Snapshot process can get stuck in INIT state"
} | {
"body": "If the master disconnects from the cluster after initiating snapshot, but just before the snapshot switches from INIT to STARTED state, the snapshot can get indefinitely stuck in the INIT state. This error is specific to v5.x+ and was triggered by keeping the master node that stepped down in the node list, the cleanup logic in snapshot/restore assumed that if master steps down it is always removed from the the node list. This commit changes the logic to trigger cleanup even if no nodes left the cluster.\r\n \r\nCloses #27180",
"number": 27214,
"review_comments": [
{
"body": "can listener be null here? It is marked as Nullable above?",
"created_at": "2017-11-02T12:11:40Z"
},
{
"body": "should this be a timeout exception. can you assert on that?",
"created_at": "2017-11-02T12:24:13Z"
},
{
"body": "I don't understand this comment. What's the issue with repo initialization here? When the disruption triggers, then there is no more repo writing done by the old master node AFAICS?",
"created_at": "2017-11-02T12:28:13Z"
},
{
"body": "Yes! Good catch! Fixing it.",
"created_at": "2017-11-02T15:13:16Z"
},
{
"body": " There is a race condition in writing a list of incompatible snapshot in getRepositoryData method that occurs on empty repositories. This method is called in the START phase on the former master and during clean up on the new master if this file doesn't exist in the repo, which happens in the repo. It shouldn't cause any issues in the real life, but it makes test to fail occasionally due to asserts. We will definitely need to address it at some point of time, but I don't think we should do it as part of this PR. ",
"created_at": "2017-11-02T15:25:30Z"
},
{
"body": "Good point.",
"created_at": "2017-11-02T15:25:50Z"
}
],
"title": "Fix snapshot getting stuck in INIT state"
} | {
"commits": [
{
"message": "Fix snapshot getting stuck in INIT state\n\nIf the master disconnects from the cluster after initiating snapshot, but just before the snapshot switches from INIT to STARTED state, the snapshot can get indefinitely stuck in the INIT state. This error is specific to v5.x+ and was triggered by keeping the master node that stepped down in the node list, the cleanup logic in snapshot/restore assumed that if master steps down it is always removed from the the node list. This commit changes the logic to trigger cleanup even if no nodes left the cluster.\n\nCloses #27180"
},
{
"message": "Improve handling of clean up on the disconnected master node"
},
{
"message": "Address @ywelsch's comments"
}
],
"files": [
{
"diff": "@@ -140,7 +140,9 @@ public void move(String source, String target) throws IOException {\n Path targetPath = path.resolve(target);\n // If the target file exists then Files.move() behaviour is implementation specific\n // the existing file might be replaced or this method fails by throwing an IOException.\n- assert !Files.exists(targetPath);\n+ if (Files.exists(targetPath)) {\n+ throw new FileAlreadyExistsException(\"blob [\" + targetPath + \"] already exists, cannot overwrite\");\n+ }\n Files.move(sourcePath, targetPath, StandardCopyOption.ATOMIC_MOVE);\n IOUtils.fsync(path, true);\n }",
"filename": "core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java",
"status": "modified"
},
{
"diff": "@@ -425,6 +425,15 @@ public void onFailure(String source, Exception e) {\n removeSnapshotFromClusterState(snapshot.snapshot(), null, e, new CleanupAfterErrorListener(snapshot, true, userCreateSnapshotListener, e));\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // We are not longer a master - we shouldn't try to do any cleanup\n+ // The new master will take care of it\n+ logger.warn(\"[{}] failed to create snapshot - no longer a master\", snapshot.snapshot().getSnapshotId());\n+ userCreateSnapshotListener.onFailure(\n+ new SnapshotException(snapshot.snapshot(), \"master changed during snapshot initialization\"));\n+ }\n+\n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n // The userCreateSnapshotListener.onResponse() notifies caller that the snapshot was accepted\n@@ -473,6 +482,10 @@ public void onFailure(Exception e) {\n cleanupAfterError(e);\n }\n \n+ public void onNoLongerMaster(String source) {\n+ userCreateSnapshotListener.onFailure(e);\n+ }\n+\n private void cleanupAfterError(Exception exception) {\n if(snapshotCreated) {\n try {\n@@ -628,7 +641,8 @@ private SnapshotShardFailure findShardFailure(List<SnapshotShardFailure> shardFa\n public void applyClusterState(ClusterChangedEvent event) {\n try {\n if (event.localNodeMaster()) {\n- if (event.nodesRemoved()) {\n+ // We don't remove old master when master flips anymore. So, we need to check for change in master\n+ if (event.nodesRemoved() || event.previousState().nodes().isLocalNodeElectedMaster() == false) {\n processSnapshotsOnRemovedNodes(event);\n }\n if (event.routingTableChanged()) {\n@@ -981,7 +995,7 @@ private void removeSnapshotFromClusterState(final Snapshot snapshot, final Snaps\n * @param listener listener to notify when snapshot information is removed from the cluster state\n */\n private void removeSnapshotFromClusterState(final Snapshot snapshot, final SnapshotInfo snapshotInfo, final Exception failure,\n- @Nullable ActionListener<SnapshotInfo> listener) {\n+ @Nullable CleanupAfterErrorListener listener) {\n clusterService.submitStateUpdateTask(\"remove snapshot metadata\", new ClusterStateUpdateTask() {\n \n @Override\n@@ -1013,6 +1027,13 @@ public void onFailure(String source, Exception e) {\n }\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ if (listener != null) {\n+ listener.onNoLongerMaster(source);\n+ }\n+ }\n+\n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n for (SnapshotCompletionListener listener : snapshotCompletionListeners) {\n@@ -1183,9 +1204,16 @@ public void onSnapshotCompletion(Snapshot completedSnapshot, SnapshotInfo snapsh\n if (completedSnapshot.equals(snapshot)) {\n logger.debug(\"deleted snapshot completed - deleting files\");\n removeListener(this);\n- threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(() ->\n- deleteSnapshot(completedSnapshot.getRepository(), completedSnapshot.getSnapshotId().getName(),\n- listener, true)\n+ threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(() -> {\n+ try {\n+ deleteSnapshot(completedSnapshot.getRepository(), completedSnapshot.getSnapshotId().getName(),\n+ listener, true);\n+\n+ } catch (Exception ex) {\n+ logger.warn((Supplier<?>) () ->\n+ new ParameterizedMessage(\"[{}] failed to delete snapshot\", snapshot), ex);\n+ }\n+ }\n );\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,173 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.discovery;\n+\n+import org.elasticsearch.action.ActionFuture;\n+import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.cluster.ClusterChangedEvent;\n+import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.SnapshotsInProgress;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.snapshots.SnapshotInfo;\n+import org.elasticsearch.snapshots.SnapshotMissingException;\n+import org.elasticsearch.snapshots.SnapshotState;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.disruption.NetworkDisruption;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n+\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.instanceOf;\n+\n+/**\n+ * Tests snapshot operations during disruptions.\n+ */\n+@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0, transportClientRatio = 0, autoMinMasterNodes = false)\n+@TestLogging(\"org.elasticsearch.snapshot:TRACE\")\n+public class SnapshotDisruptionIT extends AbstractDisruptionTestCase {\n+\n+ public void testDisruptionOnSnapshotInitialization() throws Exception {\n+ final Settings settings = Settings.builder()\n+ .put(DEFAULT_SETTINGS)\n+ .put(DiscoverySettings.COMMIT_TIMEOUT_SETTING.getKey(), \"30s\") // wait till cluster state is committed\n+ .build();\n+ final String idxName = \"test\";\n+ configureCluster(settings, 4, null, 2);\n+ final List<String> allMasterEligibleNodes = internalCluster().startMasterOnlyNodes(3);\n+ final String dataNode = internalCluster().startDataOnlyNode();\n+ ensureStableCluster(4);\n+\n+ createRandomIndex(idxName);\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.builder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ // Writing incompatible snapshot can cause this test to fail due to a race 
condition in repo initialization\n+ // by the current master and the former master. It is not causing any issues in real life scenario, but\n+ // might make this test to fail. We are going to complete initialization of the snapshot to prevent this failures.\n+ logger.info(\"--> initializing the repository\");\n+ assertEquals(SnapshotState.SUCCESS, client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\")\n+ .setWaitForCompletion(true).setIncludeGlobalState(true).setIndices().get().getSnapshotInfo().state());\n+\n+ final String masterNode1 = internalCluster().getMasterName();\n+ Set<String> otherNodes = new HashSet<>();\n+ otherNodes.addAll(allMasterEligibleNodes);\n+ otherNodes.remove(masterNode1);\n+ otherNodes.add(dataNode);\n+\n+ NetworkDisruption networkDisruption =\n+ new NetworkDisruption(new NetworkDisruption.TwoPartitions(Collections.singleton(masterNode1), otherNodes),\n+ new NetworkDisruption.NetworkUnresponsive());\n+ internalCluster().setDisruptionScheme(networkDisruption);\n+\n+ ClusterService clusterService = internalCluster().clusterService(masterNode1);\n+ CountDownLatch disruptionStarted = new CountDownLatch(1);\n+ clusterService.addListener(new ClusterStateListener() {\n+ @Override\n+ public void clusterChanged(ClusterChangedEvent event) {\n+ SnapshotsInProgress snapshots = event.state().custom(SnapshotsInProgress.TYPE);\n+ if (snapshots != null && snapshots.entries().size() > 0) {\n+ if (snapshots.entries().get(0).state() == SnapshotsInProgress.State.INIT) {\n+ // The snapshot started, we can start disruption so the INIT state will arrive to another master node\n+ logger.info(\"--> starting disruption\");\n+ networkDisruption.startDisrupting();\n+ clusterService.removeListener(this);\n+ disruptionStarted.countDown();\n+ }\n+ }\n+ }\n+ });\n+\n+ logger.info(\"--> starting snapshot\");\n+ ActionFuture<CreateSnapshotResponse> future = client(masterNode1).admin().cluster()\n+ .prepareCreateSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(false).setIndices(idxName).execute();\n+\n+ logger.info(\"--> waiting for disruption to start\");\n+ assertTrue(disruptionStarted.await(1, TimeUnit.MINUTES));\n+\n+ logger.info(\"--> wait until the snapshot is done\");\n+ assertBusy(() -> {\n+ SnapshotsInProgress snapshots = dataNodeClient().admin().cluster().prepareState().setLocal(true).get().getState()\n+ .custom(SnapshotsInProgress.TYPE);\n+ if (snapshots != null && snapshots.entries().size() > 0) {\n+ logger.info(\"Current snapshot state [{}]\", snapshots.entries().get(0).state());\n+ fail(\"Snapshot is still running\");\n+ } else {\n+ logger.info(\"Snapshot is no longer in the cluster state\");\n+ }\n+ }, 1, TimeUnit.MINUTES);\n+\n+ logger.info(\"--> verify that snapshot was successful or no longer exist\");\n+ assertBusy(() -> {\n+ try {\n+ GetSnapshotsResponse snapshotsStatusResponse = dataNodeClient().admin().cluster().prepareGetSnapshots(\"test-repo\")\n+ .setSnapshots(\"test-snap-2\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(SnapshotState.SUCCESS, snapshotInfo.state());\n+ assertEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ assertEquals(0, snapshotInfo.failedShards());\n+ logger.info(\"--> done verifying\");\n+ } catch (SnapshotMissingException exception) {\n+ logger.info(\"--> snapshot doesn't exist\");\n+ }\n+ }, 1, TimeUnit.MINUTES);\n+\n+ logger.info(\"--> stopping disrupting\");\n+ networkDisruption.stopDisrupting();\n+ ensureStableCluster(4, 
masterNode1);\n+ logger.info(\"--> done\");\n+\n+ try {\n+ future.get();\n+ } catch (Exception ex) {\n+ logger.info(\"--> got exception from hanged master\", ex);\n+ Throwable cause = ex.getCause();\n+ assertThat(cause, instanceOf(MasterNotDiscoveredException.class));\n+ cause = cause.getCause();\n+ assertThat(cause, instanceOf(Discovery.FailedToCommitClusterStateException.class));\n+ }\n+ }\n+\n+ private void createRandomIndex(String idxName) throws ExecutionException, InterruptedException {\n+ assertAcked(prepareCreate(idxName, 0, Settings.builder().put(\"number_of_shards\", between(1, 20))\n+ .put(\"number_of_replicas\", 0)));\n+ logger.info(\"--> indexing some data\");\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(idxName, \"type1\", Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n+ }\n+ indexRandom(true, builders);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/discovery/SnapshotDisruptionIT.java",
"status": "added"
}
]
} |
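The core of the fix above is the changed trigger condition in SnapshotsService#applyClusterState. The following is a simplified sketch extracted from the diff; the wrapper class and method name are illustrative, and Elasticsearch's `ClusterChangedEvent` is assumed to be on the classpath.

```java
import org.elasticsearch.cluster.ClusterChangedEvent;

// Cleanup used to run only when nodes were removed from the cluster, which missed the
// case where the old master stays in the node list after stepping down. Checking whether
// the local node has just become master covers that case as well.
final class SnapshotCleanupTrigger {

    static boolean shouldProcessSnapshotsOnRemovedNodes(ClusterChangedEvent event) {
        if (event.localNodeMaster() == false) {
            return false; // only the elected master cleans up snapshot cluster state
        }
        // since 5.x the stepped-down master is kept in the node list, so a master flip
        // is not visible as a removed node and must be detected explicitly
        final boolean masterChanged = event.previousState().nodes().isLocalNodeElectedMaster() == false;
        return event.nodesRemoved() || masterChanged;
    }
}
```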
{
"body": "Describe the feature:\r\n\r\nElasticsearch version (bin/elasticsearch --version):\r\n5.0.1, Build: 080bb47/2016-11-11T22:08:49.812Z\r\n\r\nPlugins installed: []\r\n\r\nJVM version (java -version):\r\n1.8.0_131\r\nOS version (uname -a if on a Unix-like system):\r\nCentOS release 6.5\r\nDescription of the problem including expected versus actual behavior:\r\nResults from inner_hits named \"a\" should be returned only if my_parent's key equal 3.\r\nSteps to reproduce:\r\n\r\nPlease include a minimal but complete recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"scaled_float_field\": {\r\n \"type\": \"scaled_float\",\r\n \"scaling_factor\": 100\r\n }\r\n }\r\n }\r\n }\r\n}\r\nPOST _bulk\r\n{\"update\": {\"_index\": \"test\",\"_type\": \"type\",\"_id\": \"1\"}}\r\n{\"doc\": {\"scaled_float_field\": 0.1},\"doc_as_upsert\": true}\r\nPOST test/_search\r\n{\r\n \"query\": {\r\n \"range\": {\r\n \"scaled_float_field\": {\r\n \"lt\": 0.1\r\n }\r\n }\r\n }\r\n}\r\n```\r\nresult\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"scaled_float_field\": 0.1\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nAs Document mentions [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/number.html), scaled_float is stored as long type. Why it also has precision problem?",
"comments": [
{
"body": "This is a bug indeed. Range queries are supposed to work as if the field was mapped as a `double` field that only contains rounded values.",
"created_at": "2017-10-31T14:38:13Z"
},
{
"body": "Even though this is considered a bug, I'd like to point out that relying on equality on doubles may be a source of problems for other reasons, especially since 0.1 cannot be represented accurately as a double.",
"created_at": "2017-10-31T14:39:45Z"
}
],
"number": 27189,
"title": "scaled_float has precision problem"
} | {
"body": "This fixes #27189 I think:\r\n\r\n* Fixed by switching the order of operations between `nextDown`, `nextUp` and multiplication by the scaling factor to reduce rounding errors. Now the error incurred by `nextUp` and `nextDown` doesn't propagate into the multiplication by the scaling factor, whereas without this change the error incurred when multiplying two `double` can cancel out the `Math.nextDown` step as seen in the example below \r\n * Added unit tests for the example in #27189 in which we were hit by \r\n\r\n```java\r\nMath.floor(Math.nextDown(0.1) * 100.0) = 10.0\r\n```\r\n\r\nwhile\r\n\r\n```java\r\nMath.floor(Math.nextDown(0.1 * 100.0)) = 9.0\r\n```\r\n\r\n",
"number": 27207,
"review_comments": [
{
"body": "I expanded your assertions a bit if you are ok with them:\r\n\r\n```\r\n public void testRoundsUpperBoundCorrectly() {\r\n ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType();\r\n ft.setName(\"scaled_float\");\r\n ft.setScalingFactor(100.0);\r\n Query scaledFloatQ = ft.rangeQuery(null, 0.1, true, false, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(null, 0.1, true, true, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(null, 0.095, true, false, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(null, 0.095, true, true, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(null, 0.105, true, false, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(null, 0.105, true, true, null);\r\n assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\r\n }\r\n\r\n public void testRoundsLowerBoundCorrectly() {\r\n ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType();\r\n ft.setName(\"scaled_float\");\r\n ft.setScalingFactor(100.0);\r\n Query scaledFloatQ = ft.rangeQuery(-0.1, null, false, true, null);\r\n assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(-0.1, null, true, true, null);\r\n assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(-0.095, null, false, true, null);\r\n assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(-0.095, null, true, true, null);\r\n assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(-0.105, null, false, true, null);\r\n assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n scaledFloatQ = ft.rangeQuery(-0.105, null, true, true, null);\r\n assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\r\n }\r\n```",
"created_at": "2017-11-03T14:39:18Z"
},
{
"body": "@jpountz thanks for reviewing, I like adding some more assertions :) I'll just add those to this PR?",
"created_at": "2017-11-03T14:50:25Z"
},
{
"body": "Yes please!",
"created_at": "2017-11-03T14:56:46Z"
}
],
"title": "Fixed rounding of bounds in scaled float comparison"
} | {
"commits": [
{
"message": " #27189 Fixed rounding of bounds in scaled float comparison"
},
{
"message": " #27189 more assertions from CR"
}
],
"files": [
{
"diff": "@@ -256,19 +256,19 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n failIfNotIndexed();\n Long lo = null;\n if (lowerTerm != null) {\n- double dValue = parse(lowerTerm);\n+ double dValue = parse(lowerTerm) * scalingFactor;\n if (includeLower == false) {\n dValue = Math.nextUp(dValue);\n }\n- lo = Math.round(Math.ceil(dValue * scalingFactor));\n+ lo = Math.round(Math.ceil(dValue));\n }\n Long hi = null;\n if (upperTerm != null) {\n- double dValue = parse(upperTerm);\n+ double dValue = parse(upperTerm) * scalingFactor;\n if (includeUpper == false) {\n dValue = Math.nextDown(dValue);\n }\n- hi = Math.round(Math.floor(dValue * scalingFactor));\n+ hi = Math.round(Math.floor(dValue));\n }\n Query query = NumberFieldMapper.NumberType.LONG.rangeQuery(name(), lo, hi, true, true, hasDocValues());\n if (boost() != 1f) {",
"filename": "modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -124,6 +124,42 @@ public void testRangeQuery() throws IOException {\n IOUtils.close(reader, dir);\n }\n \n+ public void testRoundsUpperBoundCorrectly() {\n+ ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType();\n+ ft.setName(\"scaled_float\");\n+ ft.setScalingFactor(100.0);\n+ Query scaledFloatQ = ft.rangeQuery(null, 0.1, true, false, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(null, 0.1, true, true, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(null, 0.095, true, false, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(null, 0.095, true, true, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 9]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(null, 0.105, true, false, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(null, 0.105, true, true, null);\n+ assertEquals(\"scaled_float:[-9223372036854775808 TO 10]\", scaledFloatQ.toString());\n+ }\n+\n+ public void testRoundsLowerBoundCorrectly() {\n+ ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType();\n+ ft.setName(\"scaled_float\");\n+ ft.setScalingFactor(100.0);\n+ Query scaledFloatQ = ft.rangeQuery(-0.1, null, false, true, null);\n+ assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(-0.1, null, true, true, null);\n+ assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(-0.095, null, false, true, null);\n+ assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(-0.095, null, true, true, null);\n+ assertEquals(\"scaled_float:[-9 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(-0.105, null, false, true, null);\n+ assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ scaledFloatQ = ft.rangeQuery(-0.105, null, true, true, null);\n+ assertEquals(\"scaled_float:[-10 TO 9223372036854775807]\", scaledFloatQ.toString());\n+ }\n+\n public void testValueForSearch() {\n ScaledFloatFieldMapper.ScaledFloatFieldType ft = new ScaledFloatFieldMapper.ScaledFloatFieldType();\n ft.setName(\"scaled_float\");",
"filename": "modules/mapper-extras/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java",
"status": "modified"
}
]
} |
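The two `Math.floor` expressions from the PR description can be reproduced with plain Java. The standalone demo below (not part of the codebase) shows why the order of `nextDown` and the scaling multiplication matters for an exclusive upper bound like `lt: 0.1` with a scaling factor of 100.

```java
public final class ScaledFloatRoundingDemo {
    public static void main(String[] args) {
        final double scalingFactor = 100.0;
        final double upperBound = 0.1; // exclusive upper bound, as in the `lt: 0.1` range query

        // old order: nudge the raw value down first, then scale -- the multiplication's own
        // rounding error swallows the one-ulp adjustment and the bound is no longer exclusive
        final double oldOrder = Math.floor(Math.nextDown(upperBound) * scalingFactor);

        // new order: scale first, then nudge down -- the adjustment survives and the
        // encoded long bound correctly excludes documents with value 0.1
        final double newOrder = Math.floor(Math.nextDown(upperBound * scalingFactor));

        System.out.println(oldOrder); // 10.0 (a document with value 0.1 would wrongly match `lt: 0.1`)
        System.out.println(newOrder); // 9.0
    }
}
```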
{
"body": "This issue (https://github.com/elastic/elasticsearch/issues/18091) reported against inner hits prompted a breaking change on 5.0+ (https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking_50_search_changes.html#_inner_hits):\r\n\r\n>Nested inner hits will now no longer include _index, _type and _id keys. For nested inner hits these values are always the same as the _index, _type and _id keys of the root search hit.\r\n\r\nHowever, it looks like we missed one use case when implementing the breaking change. The above also affects top hits aggregation under a nested agg.\r\n\r\nTake this query as an example:\r\n\r\n```\r\nGET some_index/_search\r\n{\r\n \"size\": 0,\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"aggregations\": {\r\n \"nested_0\": {\r\n \"nested\": {\r\n \"path\": \"something.concepts\"\r\n },\r\n \"aggregations\": {\r\n \"top_hits_1\": {\r\n \"top_hits\": {\r\n \"sort\": [\r\n {\r\n \"something.concepts.count\": {\r\n \"order\": \"desc\"\r\n }\r\n }\r\n ],\r\n \"from\": 0,\r\n \"size\": 1,\r\n \"version\": false,\r\n \"explain\": false\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nWill return something like the following on 2.x:\r\n\r\n```\r\n \"hits\": [\r\n {\r\n \"_index\": \"some_index\",\r\n \"_type\": \"type\",\r\n \"_id\": \"1\",\r\n \"_nested\": {\r\n \"field\": \"something\",\r\n \"offset\": 0,\r\n \"_nested\": {\r\n \"field\": \"concepts\",\r\n \"offset\": 0\r\n }\r\n },\r\n \"_score\": null,\r\n \"_source\": {\r\n \"keywords\": \"a\",\r\n \"count\": 999\r\n },\r\n \"sort\": [\r\n 999\r\n ]\r\n }\r\n ]\r\n```\r\n\r\nAnd on 5.x, it returns:\r\n\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"aggregations\": {\r\n \"nested_0\": {\r\n \"doc_count\": 4,\r\n \"top_hits_1\": {\r\n \"hits\": {\r\n \"total\": 4,\r\n \"max_score\": null,\r\n \"hits\": [\r\n {\r\n \"_nested\": {\r\n \"field\": \"something\",\r\n \"offset\": 0,\r\n \"_nested\": {\r\n \"field\": \"concepts\",\r\n \"offset\": 0\r\n }\r\n },\r\n \"_score\": null,\r\n \"_source\": {\r\n \"keywords\": \"a\",\r\n \"count\": 999\r\n },\r\n \"sort\": [\r\n 999\r\n ]\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nOn 5.0+, how does one determine the \"root search hit\" within the top hits aggregation now? It will be helpful if we can include the root search hit's _id, _index, and _type (well, maybe not type in the future).\r\n\r\n@martijnvg \r\n",
"comments": [
{
"body": "@martijnvg Will you include this fix in a 5.x release as well?",
"created_at": "2018-01-24T16:46:58Z"
},
{
"body": "@elwerene This has been backported to the 5.6 branch and this fix will be included in the next 5.6.x release.",
"created_at": "2018-02-05T14:38:17Z"
},
{
"body": "@martijnvg @elwerene _id field is again not there in version 6.1.2. Any reason?",
"created_at": "2018-04-05T13:06:16Z"
}
],
"number": 27053,
"title": "Include _index and _id for root search hit of nested -> top hits aggregation"
} | {
"body": "~~Nested search hits in inner hits do not need the _id and _index attributes,\r\nbecause it is clear from looking at the root search hit what id and\r\nindex these nested search hits have.~~\r\n\r\n~~However in the case of top_hits aggregation the nested search hits are\r\ndirectly returned and are not grouped by a root or parent document, so\r\nit is important to include the _id and _index attributes in order to know\r\nto what documents these nested search hits belong to.~~\r\n\r\nAlways include the _index and _id of the root document for nested hits.\r\nThis information is not really needed for nested hits inside inner hits,\r\nbut for nested hits inside `top_hits` aggregation this information is important,\r\nbecause nested hits are returned outside of the context of their root documents.\r\nAlso always include the _index for parent/child inner hits.\r\n\r\nPR for #27053 ",
"number": 27201,
"review_comments": [
{
"body": "Can you add a randomization to SearchHitTests#createTestItem() that randomly sets includeIdAndIndex on the nestedIdentity, so we are sure we test parsing this?",
"created_at": "2017-11-01T15:17:14Z"
},
{
"body": "What confused me a bit about this comment is that it first sounded like we need to incluse index/id in the nested Identity, but with flag is really there to give a hint about the xContent output it seems. It feels a bit weird to me that this flag introduces some kind of \"output\" directive to NestedIdentity about fields that are not really part of itself but the surrounding SearchHit, or did I misunderstand something?",
"created_at": "2017-11-01T15:26:11Z"
},
{
"body": "No, what you write is correct. I didn't find a better way of indicating whether `SearchHit` should serialise the _index and _id fields.\r\n\r\nAnother approach is to always serialise the _index and _id field in the case of nested search hits. This would make the code cleaner, but the _index and _id are then also included when nested search hit is an inner hit of a nested parent document and in that case the _index and _id will be serialized multiple times. However I think this might be an ok tradeoff as the _id and _index fields are only a small part of the actual search hit.",
"created_at": "2017-11-01T16:30:14Z"
},
{
"body": "I agree, I would lean towards always including _id/_index then, but I don't know which discussion led to removing the redundancy in the first place. Maybe it was something that people really liked to have to reduce the overall response size.",
"created_at": "2017-11-01T16:37:51Z"
},
{
"body": "The issue that was raised that caused the _id and _index to be removed was: #18091\r\nThis issue is more about the fact that the _index / _id wasn't always available, so it was more about dealing with an inconsistency than removing these attributes. I think with this in mind it is ok to return _id and _index attributes to nested search hits. (as long as these attributes are always serialized for nested search hits)",
"created_at": "2017-11-02T06:57:24Z"
},
{
"body": "@cbuescher We just discussed and we are ok with bringing back the _index and _id fields for any nested being returned irregardless of its context (top_hits or inner hits)",
"created_at": "2017-11-20T16:19:25Z"
},
{
"body": "Looks like the index and id rendering could be pulled out regardless of whether nestedIdentity is set or not now. Also wondering why we render type information in the \"else\" branch but not for the nestedIdentity case here, should this still be there in both cases until types are removed completely?",
"created_at": "2017-11-27T11:01:05Z"
},
{
"body": "@cbuescher Good point. I'll also include _type for nested hits. So that this code becomes cleaner and the normal hits and nested hits become more consistent.",
"created_at": "2017-11-27T11:07:47Z"
}
],
"title": "Always include the _index and _id for nested search hits."
} | {
"commits": [
{
"message": "Include the _index, _type and _id to nested search hits in the top_hits and inner_hits response.\nAlso include _type and _id for parent/child hits inside inner hits.\n\nIn the case of top_hits aggregation the nested search hits are\ndirectly returned and are not grouped by a root or parent document, so\nit is important to include the _id and _index attributes in order to know\nto what documents these nested search hits belong to.\n\nCloses #27053"
}
],
"files": [
{
"diff": "@@ -328,6 +328,14 @@ public SearchShardTarget getShard() {\n }\n \n public void shard(SearchShardTarget target) {\n+ if (innerHits != null) {\n+ for (SearchHits innerHits : innerHits.values()) {\n+ for (SearchHit innerHit : innerHits) {\n+ innerHit.shard(target);\n+ }\n+ }\n+ }\n+\n this.shard = target;\n if (target != null) {\n this.index = target.getIndex();\n@@ -414,18 +422,17 @@ public XContentBuilder toInnerXContent(XContentBuilder builder, Params params) t\n builder.field(Fields._SHARD, shard.getShardId());\n builder.field(Fields._NODE, shard.getNodeIdText());\n }\n+ if (index != null) {\n+ builder.field(Fields._INDEX, RemoteClusterAware.buildRemoteIndexName(clusterAlias, index));\n+ }\n+ if (type != null) {\n+ builder.field(Fields._TYPE, type);\n+ }\n+ if (id != null) {\n+ builder.field(Fields._ID, id);\n+ }\n if (nestedIdentity != null) {\n nestedIdentity.toXContent(builder, params);\n- } else {\n- if (index != null) {\n- builder.field(Fields._INDEX, RemoteClusterAware.buildRemoteIndexName(clusterAlias, index));\n- }\n- if (type != null) {\n- builder.field(Fields._TYPE, type);\n- }\n- if (id != null) {\n- builder.field(Fields._ID, id);\n- }\n }\n if (version != -1) {\n builder.field(Fields._VERSION, version);\n@@ -840,9 +847,9 @@ public static final class NestedIdentity implements Writeable, ToXContentFragmen\n private static final String FIELD = \"field\";\n private static final String OFFSET = \"offset\";\n \n- private Text field;\n- private int offset;\n- private NestedIdentity child;\n+ private final Text field;\n+ private final int offset;\n+ private final NestedIdentity child;\n \n public NestedIdentity(String field, int offset, NestedIdentity child) {\n this.field = new Text(field);",
"filename": "core/src/main/java/org/elasticsearch/search/SearchHit.java",
"status": "modified"
},
{
"diff": "@@ -323,6 +323,9 @@ Top hits response snippet with a nested hit, which resides in the first slot of\n \"max_score\": 0.2876821,\n \"hits\": [\n {\n+ \"_index\": \"sales\",\n+ \"_type\" : \"product\",\n+ \"_id\": \"1\",\n \"_nested\": {\n \"field\": \"comments\", <1>\n \"offset\": 0 <2>",
"filename": "docs/reference/aggregations/metrics/tophits-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -152,6 +152,9 @@ An example of a response snippet that could be generated from the above search r\n \"max_score\": 1.0,\n \"hits\": [\n {\n+ \"_index\": \"test\",\n+ \"_type\": \"doc\",\n+ \"_id\": \"1\",\n \"_nested\": {\n \"field\": \"comments\",\n \"offset\": 1\n@@ -278,6 +281,9 @@ Response not included in text but tested for completeness sake.\n \"max_score\": 1.0444683,\n \"hits\": [\n {\n+ \"_index\": \"test\",\n+ \"_type\": \"doc\",\n+ \"_id\": \"1\",\n \"_nested\": {\n \"field\": \"comments\",\n \"offset\": 1\n@@ -394,6 +400,9 @@ Which would look like:\n \"max_score\": 0.6931472,\n \"hits\": [\n {\n+ \"_index\": \"test\",\n+ \"_type\": \"doc\",\n+ \"_id\": \"1\",\n \"_nested\": {\n \"field\": \"comments\",\n \"offset\": 1,\n@@ -505,6 +514,7 @@ An example of a response snippet that could be generated from the above search r\n \"max_score\": 1.0,\n \"hits\": [\n {\n+ \"_index\": \"test\",\n \"_type\": \"doc\",\n \"_id\": \"2\",\n \"_score\": 1.0,",
"filename": "docs/reference/search/request/inner-hits.asciidoc",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,6 @@ setup:\n - match: { hits.total: 1 }\n - match: { hits.hits.0._index: \"test\" }\n - match: { hits.hits.0._id: \"1\" }\n- - is_false: hits.hits.0.inner_hits.child.hits.hits.0._index\n+ - match: { hits.hits.0.inner_hits.child.hits.hits.0._index: \"test\"}\n - match: { hits.hits.0.inner_hits.child.hits.hits.0._id: \"2\" }\n - is_false: hits.hits.0.inner_hits.child.hits.hits.0._nested",
"filename": "modules/parent-join/src/test/resources/rest-api-spec/test/11_parent_child.yml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,82 @@\n+---\n+\"top_hits aggregation with nested documents\":\n+ - skip:\n+ version: \"5.99.99 - \"\n+ reason: \"5.x nodes don't include index or id in nested top hits\"\n+ - do:\n+ indices.create:\n+ index: my-index\n+ body:\n+ settings:\n+ number_of_shards: 1\n+ number_of_replicas: 0\n+ mappings:\n+ doc:\n+ properties:\n+ users:\n+ type: nested\n+\n+ - do:\n+ index:\n+ index: my-index\n+ type: doc\n+ id: 1\n+ refresh: true\n+ body: |\n+ {\n+ \"group\" : \"fans\",\n+ \"users\" : [\n+ {\n+ \"first\" : \"John\",\n+ \"last\" : \"Smith\"\n+ },\n+ {\n+ \"first\" : \"Alice\",\n+ \"last\" : \"White\"\n+ }\n+ ]\n+ }\n+\n+ - do:\n+ index:\n+ index: my-index\n+ type: doc\n+ id: 2\n+ refresh: true\n+ body: |\n+ {\n+ \"group\" : \"fans\",\n+ \"users\" : [\n+ {\n+ \"first\" : \"Mark\",\n+ \"last\" : \"Doe\"\n+ }\n+ ]\n+ }\n+\n+ - do:\n+ search:\n+ body:\n+ aggs:\n+ to-users:\n+ nested:\n+ path: users\n+ aggs:\n+ users:\n+ top_hits:\n+ sort: \"users.last.keyword\"\n+\n+ - match: { hits.total: 2 }\n+ - length: { aggregations.to-users.users.hits.hits: 3 }\n+ - match: { aggregations.to-users.users.hits.hits.0._id: \"2\" }\n+ - match: { aggregations.to-users.users.hits.hits.0._index: my-index }\n+ - match: { aggregations.to-users.users.hits.hits.0._nested.field: users }\n+ - match: { aggregations.to-users.users.hits.hits.0._nested.offset: 0 }\n+ - match: { aggregations.to-users.users.hits.hits.1._id: \"1\" }\n+ - match: { aggregations.to-users.users.hits.hits.1._index: my-index }\n+ - match: { aggregations.to-users.users.hits.hits.1._nested.field: users }\n+ - match: { aggregations.to-users.users.hits.hits.1._nested.offset: 0 }\n+ - match: { aggregations.to-users.users.hits.hits.2._id: \"1\" }\n+ - match: { aggregations.to-users.users.hits.hits.2._index: my-index }\n+ - match: { aggregations.to-users.users.hits.hits.2._nested.field: users }\n+ - match: { aggregations.to-users.users.hits.hits.2._nested.offset: 1 }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/200_top_hits_metric.yml",
"status": "added"
},
{
"diff": "@@ -34,9 +34,9 @@ setup:\n - match: { hits.hits.0._index: \"test\" }\n - match: { hits.hits.0._type: \"type_1\" }\n - match: { hits.hits.0._id: \"1\" }\n- - is_false: hits.hits.0.inner_hits.nested_field.hits.hits.0._index\n- - is_false: hits.hits.0.inner_hits.nested_field.hits.hits.0._type\n- - is_false: hits.hits.0.inner_hits.nested_field.hits.hits.0._id\n+ - match: { hits.hits.0.inner_hits.nested_field.hits.hits.0._index: \"test\" }\n+ - match: { hits.hits.0.inner_hits.nested_field.hits.hits.0._type: \"type1\" }\n+ - match: { hits.hits.0.inner_hits.nested_field.hits.hits.0._id: \"1\" }\n - match: { hits.hits.0.inner_hits.nested_field.hits.hits.0._nested.field: \"nested_field\" }\n - match: { hits.hits.0.inner_hits.nested_field.hits.hits.0._nested.offset: 0 }\n - is_false: hits.hits.0.inner_hits.nested_field.hits.hits.0._nested.child",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.inner_hits/10_basic.yml",
"status": "modified"
}
]
} |
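With `_index`, `_type`, and `_id` now present on nested hits, a consumer of a nested `top_hits` response can recover the root document and the nested path directly from each hit. The helper below is hypothetical and not part of Elasticsearch; it assumes the hit has already been parsed into a `Map` by whatever JSON library the client uses, with the `_nested` structure (`field`, `offset`, optional inner `_nested`) shown in the responses above.

```java
import java.util.HashMap;
import java.util.Map;

public final class NestedHitCoordinates {

    // Walks a single hit from a nested top_hits response and renders
    // "index/type/id -> field[offset] -> ..." for the root document and nested path.
    @SuppressWarnings("unchecked")
    static String describe(Map<String, Object> hit) {
        final StringBuilder sb = new StringBuilder();
        sb.append(hit.get("_index")).append('/').append(hit.get("_type")).append('/').append(hit.get("_id"));
        Map<String, Object> nested = (Map<String, Object>) hit.get("_nested");
        while (nested != null) {
            sb.append(" -> ").append(nested.get("field")).append('[').append(nested.get("offset")).append(']');
            nested = (Map<String, Object>) nested.get("_nested");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        final Map<String, Object> nested = new HashMap<>();
        nested.put("field", "users");
        nested.put("offset", 1);
        final Map<String, Object> hit = new HashMap<>();
        hit.put("_index", "my-index");
        hit.put("_type", "doc");
        hit.put("_id", "1");
        hit.put("_nested", nested);
        System.out.println(describe(hit)); // my-index/doc/1 -> users[1]
    }
}
```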
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 7.0.0-alpha1-SNAPSHOT, Build: 9e36764/2017-10-03T12:16:42.018Z\r\n\r\n**Plugins installed**: [x-pack]\r\n\r\n**JVM version**: 1.8.0_144\r\n\r\n**OS version**: macOS 10.13\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nYou get different results for e.g. a distinct `sum` aggregation vs. the `sum` within the `stats` aggregation. For a `doc_count` of `0`, the `sum` aggregation returns `0` whereas the `sum` within the `stats` aggregation is `null`.\r\n\r\n**Steps to reproduce**:\r\n\r\n1. Create some documents:\r\n\r\n```\r\nPUT my_test/test/1\r\n{\r\n \"category\" : \"c1\",\r\n \"value\": 1\r\n}\r\nPUT my_test/test/2\r\n{\r\n \"category\" : \"c2\"\r\n}\r\nPUT my_test/test/3\r\n{\r\n \"category\" : \"c3\",\r\n \"value\": 1\r\n}\r\nPUT my_test/test/4\r\n{\r\n \"category\" : \"c3\",\r\n \"value\": 1\r\n}\r\n```\r\n\r\n2. Run a nested aggregation:\r\n\r\n```\r\nPOST my_test/_search\r\n{\r\n \"size\": 0,\r\n \"aggregations\": {\r\n \"categories\": {\r\n \"terms\": {\r\n \"field\": \"category.keyword\"\r\n },\r\n \"aggs\": {\r\n \"my_min\": {\r\n \"min\": {\r\n \"field\": \"value\"\r\n }\r\n },\r\n \"my_max\": {\r\n \"max\": {\r\n \"field\": \"value\"\r\n }\r\n },\r\n \"my_avg\": {\r\n \"avg\": {\r\n \"field\": \"value\"\r\n }\r\n },\r\n \"my_sum\": {\r\n \"sum\": {\r\n \"field\": \"value\"\r\n }\r\n },\r\n \"my_stats\": {\r\n \"stats\": {\r\n \"field\": \"value\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n3. The result looks like this:\r\n\r\n```\r\n \"aggregations\": {\r\n \"categories\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"c3\",\r\n \"doc_count\": 2,\r\n \"my_stats\": {\r\n \"count\": 2,\r\n \"min\": 1,\r\n \"max\": 1,\r\n \"avg\": 1,\r\n \"sum\": 2\r\n },\r\n \"my_max\": {\r\n \"value\": 1\r\n },\r\n \"my_avg\": {\r\n \"value\": 1\r\n },\r\n \"my_min\": {\r\n \"value\": 1\r\n },\r\n \"my_sum\": {\r\n \"value\": 2\r\n }\r\n },\r\n {\r\n \"key\": \"c1\",\r\n \"doc_count\": 1,\r\n \"my_stats\": {\r\n \"count\": 1,\r\n \"min\": 1,\r\n \"max\": 1,\r\n \"avg\": 1,\r\n \"sum\": 1\r\n },\r\n \"my_max\": {\r\n \"value\": 1\r\n },\r\n \"my_avg\": {\r\n \"value\": 1\r\n },\r\n \"my_min\": {\r\n \"value\": 1\r\n },\r\n \"my_sum\": {\r\n \"value\": 1\r\n }\r\n },\r\n {\r\n \"key\": \"c2\",\r\n \"doc_count\": 1,\r\n \"my_stats\": {\r\n \"count\": 0,\r\n \"min\": null,\r\n \"max\": null,\r\n \"avg\": null,\r\n \"sum\": null\r\n },\r\n \"my_max\": {\r\n \"value\": null\r\n },\r\n \"my_avg\": {\r\n \"value\": null\r\n },\r\n \"my_min\": {\r\n \"value\": null\r\n },\r\n \"my_sum\": {\r\n \"value\": 0\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n```\r\n\r\nThe document in category `c2` doesn't include the `value` field. All the aggregation results have a value of `null` whereas only the `sum` aggregation has a `value` of `0`.",
"comments": [
{
"body": "We should change the Stats aggregation so the value of `stats.sum` when no documents are collected is 0 like `value_count`",
"created_at": "2017-10-09T08:17:27Z"
},
{
"body": "I will be happy to fix this issue! Please assign it to me",
"created_at": "2017-10-18T08:58:21Z"
},
{
"body": "@PammyS We are not able to assign issues to users that are not part of the Elastic org but we would love it if you are able to work on this fix. Please feel most welcome to open a PR. Thanks",
"created_at": "2017-10-23T07:59:48Z"
}
],
"number": 26893,
"title": "`sum` and `stats.sum` return different values when `doc_count` is `0`."
} | {
"body": "Closes #26893 \r\n\r\nI would appreciate it if your can take a look at this. @colings86 😃 ",
"number": 27193,
"review_comments": [],
"title": "Render sum as zero if count is zero for stats aggregation"
} | {
"commits": [
{
"message": "Rander sum as zero if count is zero for stats aggregation (#26893)"
}
],
"files": [
{
"diff": "@@ -192,7 +192,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n builder.nullField(Fields.MIN);\n builder.nullField(Fields.MAX);\n builder.nullField(Fields.AVG);\n- builder.nullField(Fields.SUM);\n+ builder.field(Fields.SUM, 0.0d);\n }\n otherStatsToXContent(builder, params);\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java",
"status": "modified"
},
{
"diff": "@@ -19,13 +19,18 @@\n package org.elasticsearch.search.aggregations.metrics;\n \n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.ParsedAggregation;\n import org.elasticsearch.search.aggregations.metrics.stats.InternalStats;\n import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.test.InternalAggregationTestCase;\n \n+import java.io.IOException;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n@@ -80,7 +85,7 @@ static void assertStats(InternalStats aggregation, ParsedStats parsed) {\n long count = aggregation.getCount();\n assertEquals(count, parsed.getCount());\n // for count == 0, fields are rendered as `null`, so we test that we parse to default values used also in the reduce phase\n- assertEquals(count > 0 ? aggregation.getMin() : Double.POSITIVE_INFINITY , parsed.getMin(), 0);\n+ assertEquals(count > 0 ? aggregation.getMin() : Double.POSITIVE_INFINITY, parsed.getMin(), 0);\n assertEquals(count > 0 ? aggregation.getMax() : Double.NEGATIVE_INFINITY, parsed.getMax(), 0);\n assertEquals(count > 0 ? aggregation.getSum() : 0, parsed.getSum(), 0);\n assertEquals(count > 0 ? aggregation.getAvg() : 0, parsed.getAvg(), 0);\n@@ -153,5 +158,55 @@ protected InternalStats mutateInstance(InternalStats instance) {\n }\n return new InternalStats(name, count, sum, min, max, formatter, pipelineAggregators, metaData);\n }\n+\n+ public void testDoXContentBody() throws IOException {\n+ // count is greater than zero\n+ double min = randomDoubleBetween(-1000000, 1000000, true);\n+ double max = randomDoubleBetween(-1000000, 1000000, true);\n+ double sum = randomDoubleBetween(-1000000, 1000000, true);\n+ int count = randomIntBetween(1, 10);\n+ DocValueFormat format = randomNumericDocValueFormat();\n+ InternalStats internalStats = createInstance(\"stats\", count, sum, min, max, format, Collections.emptyList(), null);\n+ XContentBuilder builder = JsonXContent.contentBuilder().prettyPrint();\n+ builder.startObject();\n+ internalStats.doXContentBody(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+\n+ String expected = \"{\\n\" +\n+ \" \\\"count\\\" : \" + count + \",\\n\" +\n+ \" \\\"min\\\" : \" + min + \",\\n\" +\n+ \" \\\"max\\\" : \" + max + \",\\n\" +\n+ \" \\\"avg\\\" : \" + internalStats.getAvg() + \",\\n\" +\n+ \" \\\"sum\\\" : \" + sum;\n+ if (format != DocValueFormat.RAW) {\n+ expected += \",\\n\"+\n+ \" \\\"min_as_string\\\" : \\\"\" + format.format(internalStats.getMin()) + \"\\\",\\n\" +\n+ \" \\\"max_as_string\\\" : \\\"\" + format.format(internalStats.getMax()) + \"\\\",\\n\" +\n+ \" \\\"avg_as_string\\\" : \\\"\" + format.format(internalStats.getAvg()) + \"\\\",\\n\" +\n+ \" \\\"sum_as_string\\\" : \\\"\" + format.format(internalStats.getSum()) + \"\\\"\";\n+ }\n+ expected += \"\\n}\";\n+ assertEquals(expected, builder.string());\n+\n+ // count is zero\n+ format = randomNumericDocValueFormat();\n+ min = 0.0;\n+ max = 0.0;\n+ sum = 0.0;\n+ count = 0;\n+ internalStats = createInstance(\"stats\", count, sum, min, max, format, Collections.emptyList(), null);\n+ builder = JsonXContent.contentBuilder().prettyPrint();\n+ 
builder.startObject();\n+ internalStats.doXContentBody(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+\n+ assertEquals(\"{\\n\" +\n+ \" \\\"count\\\" : 0,\\n\" +\n+ \" \\\"min\\\" : null,\\n\" +\n+ \" \\\"max\\\" : null,\\n\" +\n+ \" \\\"avg\\\" : null,\\n\" +\n+ \" \\\"sum\\\" : 0.0\\n\" +\n+ \"}\", builder.string());\n+ }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/InternalStatsTests.java",
"status": "modified"
}
]
} |
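A hedged sketch of what the empty bucket from the reproduction in the record above should look like after this change: with a `doc_count` of `0`, `min`, `max`, and `avg` still render as `null`, while `sum` now renders as `0.0`, matching the `builder.field(Fields.SUM, 0.0d)` change and the new `testDoXContentBody` expectation. Only the affected bucket fragment is shown; the key and `my_sum` value are taken from the issue's example.

```
{
  "key": "c2",
  "doc_count": 1,
  "my_stats": {
    "count": 0,
    "min": null,
    "max": null,
    "avg": null,
    "sum": 0.0
  },
  "my_sum": {
    "value": 0
  }
}
```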
{
"body": "https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json specifies params `node_id` and `parent_task` instead of accepted `nodes` and `parent_task_id`.\r\n\r\nThis is present in `5.6`, `6.0`, and `master` branches.\r\n\r\nThanks, @nfsec, for discovering this in https://github.com/elastic/elasticsearch-py/pull/659",
"comments": [
{
"body": "I'd like to give this one a try. \r\nIt looks like the parameter names have been changed (in the code and in the docs, but not in the `rest-api-spec`). As the tests do not use these parameters, no error has been detected till now.",
"created_at": "2017-10-27T21:26:43Z"
}
],
"number": 27124,
"title": "[API] Tasks.list spec doesn't reflect reality"
} | {
"body": "Modify parameters names to bring the `tasks` `rest-api-spec`up to date with the code base.\r\n\r\nFixes #27124",
"number": 27163,
"review_comments": [],
"title": "Fix inconsistencies in the rest api specs for `tasks`"
} | {
"commits": [
{
"message": "modify parameters names to reflect the changes done in the code base"
}
],
"files": [
{
"diff": "@@ -12,7 +12,7 @@\n }\n },\n \"params\": {\n- \"node_id\": {\n+ \"nodes\": {\n \"type\": \"list\",\n \"description\": \"A comma-separated list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes\"\n },\n@@ -24,7 +24,7 @@\n \"type\": \"string\",\n \"description\": \"Cancel tasks with specified parent node.\"\n },\n- \"parent_task\": {\n+ \"parent_task_id\": {\n \"type\" : \"string\",\n \"description\" : \"Cancel tasks with specified parent task id (node_id:task_number). Set to -1 to cancel all.\"\n }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/tasks.cancel.json",
"status": "modified"
},
{
"diff": "@@ -7,7 +7,7 @@\n \"paths\": [\"/_tasks\"],\n \"parts\": {},\n \"params\": {\n- \"node_id\": {\n+ \"nodes\": {\n \"type\": \"list\",\n \"description\": \"A comma-separated list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes\"\n },\n@@ -23,7 +23,7 @@\n \"type\": \"string\",\n \"description\": \"Return tasks with specified parent node.\"\n },\n- \"parent_task\": {\n+ \"parent_task_id\": {\n \"type\" : \"string\",\n \"description\" : \"Return tasks with specified parent task id (node_id:task_number). Set to -1 to return all.\"\n },",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/tasks.list.json",
"status": "modified"
}
]
} |
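A brief sketch of requests using the parameter names the spec now documents (`nodes` and `parent_task_id`), which the issue reports were already the names accepted by the API; the node names and task id are placeholders.

```
GET _tasks?nodes=node-1,node-2

GET _tasks?parent_task_id=node-1:123

POST _tasks/_cancel?parent_task_id=node-1:123
```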
{
"body": "Came across this bug today on master and I have been unable to reduce to a simple case.\r\nLucene throws an exception accessing norms only when given the following conditions:\r\n* Only when query is for term A OR term B (doesn't fail with either of the terms independently)\r\n* Only when using top docs (works OK with hits)\r\n* Only when the docs matching the query sit in an index with other docs.\r\n\r\nThe gist with the data and queries to reproduce this is here: \r\nhttps://gist.github.com/markharwood/3bdf60db7887ec5eb7a0a8fd7074cfae\r\nI have reduced the number of docs ingested to the smallest set of fields and the smallest number of records I could before the error is revealed.\r\n\r\nError thrown is \r\n\r\n\tCaused by: java.lang.IndexOutOfBoundsException: 2147483647\r\n\t\tat java.nio.DirectByteBuffer.get(DirectByteBuffer.java:253) ~[?:1.8.0_144]\r\n\t\tat org.apache.lucene.store.ByteBufferGuard.getByte(ByteBufferGuard.java:118) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.readByte(ByteBufferIndexInput.java:385) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.codecs.lucene70.Lucene70NormsProducer$2.longValue(Lucene70NormsProducer.java:218) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.similarities.BM25Similarity$BM25DocScorer.score(BM25Similarity.java:253) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.TermScorer.score(TermScorer.java:66) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.TopScoreDocCollector$SimpleTopScoreDocCollector$1.collect(TopScoreDocCollector.java:64) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.elasticsearch.search.aggregations.metrics.tophits.TopHitsAggregator$1.collect(TopHitsAggregator.java:132) ~[1/:?]\r\n\t\tat org.elasticsearch.search.aggregations.LeafBucketCollector.collect(LeafBucketCollector.java:82) ~[1/:?]\r\n\t\tat org.apache.lucene.search.MultiCollector$MultiLeafCollector.collect(MultiCollector.java:174) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BooleanScorer.scoreDocument(BooleanScorer.java:189) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BooleanScorer.scoreMatches(BooleanScorer.java:202) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BooleanScorer.scoreWindowIntoBitSetAndReplay(BooleanScorer.java:216) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat 
org.apache.lucene.search.BooleanScorer.scoreWindowMultipleScorers(BooleanScorer.java:260) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BooleanScorer.scoreWindow(BooleanScorer.java:305) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:317) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:186) ~[1/:?]\r\n\t\tat org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:462) ~[lucene-core-7.1.0-snapshot-f33ed4ba12a.jar:7.1.0-snapshot-f33ed4ba12a f33ed4ba12aaf215628d010daaa0e271b8ab3d1f - mvg - 2017-10-02 17:18:30]\r\n\t\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:272) ~[1/:?]\r\n\t\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:110) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesService.lambda$16(IndicesService.java:1122) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesService.lambda$17(IndicesService.java:1175) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:160) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:1) ~[1/:?]\r\n\t\tat org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:412) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:116) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1181) ~[1/:?]\r\n\t\tat org.elasticsearch.indices.IndicesService.loadIntoContext(IndicesService.java:1121) ~[1/:?]\r\n\t\tat org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:300) ~[1/:?]\r\n\t\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:335) ~[1/:?]\r\n\t\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:311) ~[1/:?]\r\n",
"comments": [
{
"body": "@jimczi @jpountz - you may be interested in this one.",
"created_at": "2017-10-26T17:29:07Z"
},
{
"body": "The Lucene50PostingsReader BlockDocsEnum class is returning a docID of max int (aka NO_MORE_DOCS) at the point of failure. Looks like something is iterating more than it should",
"created_at": "2017-10-26T17:43:46Z"
},
{
"body": "I'll look!",
"created_at": "2017-10-26T20:19:55Z"
}
],
"number": 27131,
"title": "Lucene error - IndexOutOfBoundsException accessing norms"
} | {
"body": "It is required in order to work correctly with bulk scorer implementations\r\nthat change the scorer during the collection process. Otherwise sub collectors\r\nmight call `Scorer.score()` on the wrong scorer.\r\n\r\nTagging as a non-issue since the bug was introduced in #26753 which is\r\nnot released yet.\r\n\r\nCloses #27131\r\n",
"number": 27138,
"review_comments": [],
"title": "TopHitsAggregator must propagate calls to `setScorer`."
} | {
"commits": [
{
"message": "TopHitsAggregator must propagate calls to `setScorer`.\n\nIt is required in order to work correctly with bulk scorer implementations\nthat change the scorer during the collection process. Otherwise sub collectors\nmight call `Scorer.score()` on the wrong scorer.\n\nCloses #27131"
}
],
"files": [
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.search.aggregations.metrics.tophits;\n \n import com.carrotsearch.hppc.LongObjectHashMap;\n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n+\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.LeafCollector;\n@@ -93,6 +95,9 @@ public LeafBucketCollector getLeafCollector(LeafReaderContext ctx, LeafBucketCol\n public void setScorer(Scorer scorer) throws IOException {\n this.scorer = scorer;\n super.setScorer(scorer);\n+ for (ObjectCursor<LeafCollector> cursor : leafCollectors.values()) {\n+ cursor.value.setScorer(scorer);\n+ }\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -21,15 +21,22 @@\n import org.apache.lucene.analysis.core.KeywordAnalyzer;\n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.Field.Store;\n import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.IndexWriter;\n import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.index.Term;\n import org.apache.lucene.queryparser.classic.QueryParser;\n+import org.apache.lucene.search.BooleanClause.Occur;\n+import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.index.mapper.KeywordFieldMapper;\n@@ -39,6 +46,7 @@\n import org.elasticsearch.search.SearchHits;\n import org.elasticsearch.search.aggregations.Aggregation;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilders;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -148,4 +156,47 @@ private Document document(String id, String... stringValues) {\n }\n return document;\n }\n+\n+ public void testSetScorer() throws Exception {\n+ Directory directory = newDirectory();\n+ IndexWriter w = new IndexWriter(directory, newIndexWriterConfig()\n+ // only merge adjacent segments\n+ .setMergePolicy(newLogMergePolicy()));\n+ // first window (see BooleanScorer) has matches on one clause only\n+ for (int i = 0; i < 2048; ++i) {\n+ Document doc = new Document();\n+ doc.add(new StringField(\"_id\", Uid.encodeId(Integer.toString(i)), Store.YES));\n+ if (i == 1000) { // any doc in 0..2048\n+ doc.add(new StringField(\"string\", \"bar\", Store.NO));\n+ }\n+ w.addDocument(doc);\n+ }\n+ // second window has matches in two clauses\n+ for (int i = 0; i < 2048; ++i) {\n+ Document doc = new Document();\n+ doc.add(new StringField(\"_id\", Uid.encodeId(Integer.toString(2048 + i)), Store.YES));\n+ if (i == 500) { // any doc in 0..2048\n+ doc.add(new StringField(\"string\", \"baz\", Store.NO));\n+ } else if (i == 1500) {\n+ doc.add(new StringField(\"string\", \"bar\", Store.NO));\n+ }\n+ w.addDocument(doc);\n+ }\n+\n+ w.forceMerge(1); // we need all docs to be in the same segment\n+\n+ IndexReader reader = DirectoryReader.open(w);\n+ w.close();\n+\n+ IndexSearcher searcher = new IndexSearcher(reader);\n+ Query query = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(\"string\", \"bar\")), Occur.SHOULD)\n+ .add(new TermQuery(new Term(\"string\", \"baz\")), Occur.SHOULD)\n+ .build();\n+ AggregationBuilder agg = AggregationBuilders.topHits(\"top_hits\");\n+ TopHits result = searchAndReduce(searcher, query, agg, STRING_FIELD_TYPE);\n+ assertEquals(3, result.getHits().totalHits);\n+ reader.close();\n+ directory.close();\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregatorTests.java",
"status": "modified"
},
{
"diff": "@@ -91,6 +91,7 @@\n public abstract class AggregatorTestCase extends ESTestCase {\n private static final String NESTEDFIELD_PREFIX = \"nested_\";\n private List<Releasable> releasables = new ArrayList<>();\n+ private static final String TYPE_NAME = \"type\";\n \n /** Create a factory for the given aggregation builder. */\n protected AggregatorFactory<?> createAggregatorFactory(AggregationBuilder aggregationBuilder,\n@@ -104,6 +105,7 @@ protected AggregatorFactory<?> createAggregatorFactory(AggregationBuilder aggreg\n MapperService mapperService = mapperServiceMock();\n when(mapperService.getIndexSettings()).thenReturn(indexSettings);\n when(mapperService.hasNested()).thenReturn(false);\n+ when(mapperService.types()).thenReturn(Collections.singleton(TYPE_NAME));\n when(searchContext.mapperService()).thenReturn(mapperService);\n IndexFieldDataService ifds = new IndexFieldDataService(indexSettings,\n new IndicesFieldDataCache(Settings.EMPTY, new IndexFieldDataCache.Listener() {\n@@ -115,7 +117,7 @@ public Object answer(InvocationOnMock invocationOnMock) throws Throwable {\n }\n });\n \n- SearchLookup searchLookup = new SearchLookup(mapperService, ifds::getForField, new String[]{\"type\"});\n+ SearchLookup searchLookup = new SearchLookup(mapperService, ifds::getForField, new String[]{TYPE_NAME});\n when(searchContext.lookup()).thenReturn(searchLookup);\n \n QueryShardContext queryShardContext = queryShardContextMock(mapperService, fieldTypes, circuitBreakerService);",
"filename": "test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java",
"status": "modified"
}
]
} |
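For illustration, a minimal request shaped like the failing case described in the issue above and exercised by `testSetScorer`: two `term` clauses combined with `should`, plus a `top_hits` aggregation, so the bulk scorer can switch scorers mid-collection. The index name, field name, and terms are hypothetical (the terms mirror the "bar"/"baz" values used in the test); the actual failure also depends on segment layout and document counts, which this sketch does not reproduce.

```
POST my-index/_search
{
  "size": 0,
  "query": {
    "bool": {
      "should": [
        { "term": { "string": "bar" } },
        { "term": { "string": "baz" } }
      ]
    }
  },
  "aggs": {
    "top": {
      "top_hits": { "size": 3 }
    }
  }
}
```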
{
"body": "**Elasticsearch version**: 5.3.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**:\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\n**OS version**: Windows 10\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\"max_score\" is null for query with field collapsing.\r\n\r\n**Steps to reproduce**:\r\n 1. Test data:\r\n\r\n```\r\nPOST http://localhost:9200/_bulk\r\n\r\n{\"index\":{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\"}}\r\n{\"id\":1,\"name\":\"one\"}\r\n```\r\n\r\n 2. Test query:\r\n\r\n```\r\nPOST http://localhost:9200/test/test/_search\r\n\r\n{\r\n\t\"query\" : {\r\n\t\t\"multi_match\" : {\r\n\t\t\t\"fields\" : \"name\",\r\n\t\t\t\"query\" : \"one\"\r\n\t\t}\r\n\t},\r\n\t\r\n\t\"collapse\" : {\r\n\t\t\"field\" : \"id\",\r\n\t\t\"inner_hits\" : {\r\n\t\t\t\"name\" : \"some_name\"\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\n**Describe the feature**:\r\n\r\nThe response contains max_score. And it's null.\r\n\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": null,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"test\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.2876821,\r\n \"_source\": {\r\n \"id\": 1,\r\n \"name\": \"one\"\r\n },\r\n \"fields\": {\r\n \"id\": [\r\n 1\r\n ]\r\n },\r\n \"inner_hits\": {\r\n \"some_name\": {\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.2876821,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"test\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.2876821,\r\n \"_source\": {\r\n \"id\": 1,\r\n \"name\": \"one\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```",
"comments": [
{
"body": "@Chakrygin Thanks for raising this. I'm managed to reproduce this using your steps above, thanks for the clear instructions.\r\n\r\n@jimczi are you able to look at this?",
"created_at": "2017-03-31T07:50:36Z"
},
{
"body": "Thanks @Chakrygin \r\nThe `max_score` is filled if you set `track_scores` in the request:\r\n\r\n`````\r\nPOST http://localhost:9200/test/test/_search\r\n{\r\n \"track_scores\": true,\r\n \"query\" : {\r\n\t\t\"multi_match\" : {\r\n\t\t\t\"fields\" : \"name\",\r\n\t\t\t\"query\" : \"one\"\r\n\t\t}\r\n\t},\r\n\t\r\n\t\"collapse\" : {\r\n\t\t\"field\" : \"id\",\r\n\t\t\"inner_hits\" : {\r\n\t\t\t\"name\" : \"some_name\"\r\n\t\t}\r\n\t}\r\n}\r\n``````\r\n\r\nThough we should always return `max_score` when the sort is based on relevancy like the standard search does. I'll work on a patch.",
"created_at": "2017-03-31T08:01:35Z"
}
],
"number": 23840,
"title": "\"max_score\" is null for query with field collapsing "
} | {
"body": "This change makes sure that we track score when sort is set to relevancy only.\r\nIn this case we always track max score like normal search does.\r\n\r\nCloses #23840\r\n",
"number": 27122,
"review_comments": [],
"title": "Fix max score tracking with field collapsing"
} | {
"commits": [
{
"message": "Fix max score tracking with field collapsing\n\nThis change makes sure that we track score when sort is set to relevancy only.\nIn this case we always track max score like normal search does.\n\nCloses #23840"
}
],
"files": [
{
"diff": "@@ -283,9 +283,10 @@ static TopDocsCollectorContext createTopDocsCollectorContext(SearchContext searc\n return new ScrollingTopDocsCollectorContext(searchContext.scrollContext(),\n searchContext.sort(), numDocs, searchContext.trackScores(), searchContext.numberOfShards());\n } else if (searchContext.collapse() != null) {\n+ boolean trackScores = searchContext.sort() == null ? true : searchContext.trackScores();\n int numDocs = Math.min(searchContext.from() + searchContext.size(), totalNumDocs);\n return new CollapsingTopDocsCollectorContext(searchContext.collapse(),\n- searchContext.sort(), numDocs, searchContext.trackScores());\n+ searchContext.sort(), numDocs, trackScores);\n } else {\n int numDocs = Math.min(searchContext.from() + searchContext.size(), totalNumDocs);\n final boolean rescore = searchContext.rescore().isEmpty() == false;",
"filename": "core/src/main/java/org/elasticsearch/search/query/TopDocsCollectorContext.java",
"status": "modified"
},
{
"diff": "@@ -54,6 +54,8 @@\n import java.util.List;\n import java.util.Set;\n \n+import static org.hamcrest.core.IsEqual.equalTo;\n+\n public class CollapsingTopDocsCollectorTests extends ESTestCase {\n private static class SegmentSearcher extends IndexSearcher {\n private final List<LeafReaderContext> ctx;\n@@ -82,12 +84,15 @@ interface CollapsingDocValuesProducer<T extends Comparable> {\n }\n \n <T extends Comparable> void assertSearchCollapse(CollapsingDocValuesProducer<T> dvProducers, boolean numeric) throws IOException {\n- assertSearchCollapse(dvProducers, numeric, true);\n- assertSearchCollapse(dvProducers, numeric, false);\n+ assertSearchCollapse(dvProducers, numeric, true, true);\n+ assertSearchCollapse(dvProducers, numeric, true, false);\n+ assertSearchCollapse(dvProducers, numeric, false, true);\n+ assertSearchCollapse(dvProducers, numeric, false, false);\n }\n \n private <T extends Comparable> void assertSearchCollapse(CollapsingDocValuesProducer<T> dvProducers,\n- boolean numeric, boolean multivalued) throws IOException {\n+ boolean numeric, boolean multivalued,\n+ boolean trackMaxScores) throws IOException {\n final int numDocs = randomIntBetween(1000, 2000);\n int maxGroup = randomIntBetween(2, 500);\n final Directory dir = newDirectory();\n@@ -118,14 +123,14 @@ private <T extends Comparable> void assertSearchCollapse(CollapsingDocValuesProd\n final CollapsingTopDocsCollector collapsingCollector;\n if (numeric) {\n collapsingCollector =\n- CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, false);\n+ CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, trackMaxScores);\n } else {\n collapsingCollector =\n- CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, false);\n+ CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, trackMaxScores);\n }\n \n TopFieldCollector topFieldCollector =\n- TopFieldCollector.create(sort, totalHits, true, false, false);\n+ TopFieldCollector.create(sort, totalHits, true, trackMaxScores, trackMaxScores);\n \n searcher.search(new MatchAllDocsQuery(), collapsingCollector);\n searcher.search(new MatchAllDocsQuery(), topFieldCollector);\n@@ -136,6 +141,11 @@ private <T extends Comparable> void assertSearchCollapse(CollapsingDocValuesProd\n assertEquals(totalHits, collapseTopFieldDocs.totalHits);\n assertEquals(totalHits, topDocs.scoreDocs.length);\n assertEquals(totalHits, topDocs.totalHits);\n+ if (trackMaxScores) {\n+ assertThat(collapseTopFieldDocs.getMaxScore(), equalTo(topDocs.getMaxScore()));\n+ } else {\n+ assertThat(collapseTopFieldDocs.getMaxScore(), equalTo(Float.NaN));\n+ }\n \n Set<Object> seen = new HashSet<>();\n // collapse field is the last sort\n@@ -186,14 +196,14 @@ private <T extends Comparable> void assertSearchCollapse(CollapsingDocValuesProd\n }\n \n final CollapseTopFieldDocs[] shardHits = new CollapseTopFieldDocs[subSearchers.length];\n- final Weight weight = searcher.createNormalizedWeight(new MatchAllDocsQuery(), false);\n+ final Weight weight = searcher.createNormalizedWeight(new MatchAllDocsQuery(), true);\n for (int shardIDX = 0; shardIDX < subSearchers.length; shardIDX++) {\n final SegmentSearcher subSearcher = subSearchers[shardIDX];\n final CollapsingTopDocsCollector c;\n if (numeric) {\n- c = CollapsingTopDocsCollector.createNumeric(collapseField.getField(), sort, expectedNumGroups, false);\n+ c = CollapsingTopDocsCollector.createNumeric(collapseField.getField(), 
sort, expectedNumGroups, trackMaxScores);\n } else {\n- c = CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, false);\n+ c = CollapsingTopDocsCollector.createKeyword(collapseField.getField(), sort, expectedNumGroups, trackMaxScores);\n }\n subSearcher.search(weight, c);\n shardHits[shardIDX] = c.getTopDocs();",
"filename": "core/src/test/java/org/apache/lucene/grouping/CollapsingTopDocsCollectorTests.java",
"status": "modified"
}
]
} |
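A sketch of the response fragment the fix in the record above aims for: when the request sorts by relevance only (no explicit `sort`), the collapsed top-level hits now track scores, so `max_score` is populated instead of `null`. The score is copied from the issue's example and is otherwise illustrative.

```
"hits": {
  "total": 1,
  "max_score": 0.2876821,
  "hits": [
    {
      "_index": "test",
      "_type": "test",
      "_id": "1",
      "_score": 0.2876821
    }
  ]
}
```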
{
"body": "**Elasticsearch version**: 5.3.3\r\n\r\n**Plugins installed**: [discovery-ec2, repository-s3, x-pack]\r\n\r\n**JVM version** : 1.8.0_131 Java HotSpot(TM) 64-Bit Server VM\r\n\r\n**OS version** : 3.13.0-121-generic AWS Ubuntu 14.04 LTS\r\n\r\n**Description of the problem including expected versus actual behavior**: I recently started using the new [field collapsing](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/search-request-collapse.html) functionality and I noticed that the search `post_filter` is *not* applied to the `inner_hits`. This was surprising behavior to me because the `post_filter` applies to the collapsed search hits just fine and I did not see anything that said it wouldn't work. However I'm not 100% sure if this would be a bug or enhancement.\r\n\r\nWe use the `post_filter` for the typical faceted navigation use case and we wanted to leverage the new field `collapse` with `inner_hits` to add more variety to our search results. I guess a workaround would be to use a `top_hits` + `filter` aggregation to do the [field collapse](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/search-aggregations-metrics-top-hits-aggregation.html#_field_collapse_example)? Are there any significant performance differences between using the `top_hits` aggregation vs the new `collapse` functionality?\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\nPOST test-index/test-type/1\r\n{\r\n \"value\":1,\r\n \"common\":1\r\n}\r\n\r\nPOST test-index/test-type/2\r\n{\r\n \"value\":2,\r\n \"common\":1\r\n}\r\n\r\nPOST test-index/test-type/_search\r\n{\r\n \"collapse\": {\r\n \"field\": \"common\",\r\n \"inner_hits\": {\r\n \"name\": \"common\"\r\n }\r\n },\r\n \"post_filter\": {\r\n \"term\": {\r\n \"value\": 1\r\n }\r\n }\r\n}\r\n```\r\n\r\nNote the search result `inner_hits` contains both doc 1 and 2, but I'm expecting to see only doc 1 because the `post_filter` does not match doc 2.\r\n\r\n",
"comments": [
{
"body": "@jimczi I think the reason why this happens is because the collapse collector is executed before the filtered collector in `QueryPhase#execute(...)`? (line 259 `collectors.addFirst(topDocsFactory);` )",
"created_at": "2017-09-15T07:22:14Z"
},
{
"body": "@martijnvg the collector chain is reversed when it is built from the factories so the filtered collector is **before** the top docs collector and works fine on the collapsed hits. This issue is about applying the `post_filter` to the expand phase (the `inner_hits` retrieval), currently it is just ignored.\r\nI agree that it should be applied so it can be considered as a bug. \r\n\r\n> Are there any significant performance differences between using the top_hits aggregation vs the new collapse functionality?\r\n\r\nYes, the field collapsing is only applied to the `top_hits` of the query so it should be faster than using an combo agg `terms + top_hits`.\r\n",
"created_at": "2017-09-15T07:31:17Z"
},
{
"body": "@jimczi Ah I forgot that the chain is reversed. Thx for the explanation.",
"created_at": "2017-09-15T07:33:16Z"
}
],
"number": 26649,
"title": "post filter not applied to field collapsing inner hits?"
} | {
"body": "This change adds some missing options to the expand query that builds the inner hits for field collapsing.\r\nThe following options are now applied to the inner_hits query:\r\n * post_filters\r\n * preferences\r\n * routing\r\n\r\nCloses #27079\r\nCloses #26649",
"number": 27118,
"review_comments": [
{
"body": "nit: looks like searchType cannot be null",
"created_at": "2017-10-26T09:28:31Z"
},
{
"body": "nit: maybe the null check isn't necessary",
"created_at": "2017-10-26T09:29:24Z"
},
{
"body": "nit: maybe the null check isn't necessary",
"created_at": "2017-10-26T09:29:28Z"
}
],
"title": "Apply missing request options to the expand phase"
} | {
"commits": [
{
"message": "Apply missing request options to the expand phase\n\nThis change adds some missing options to the expand query that builds the inner hits for field collapsing.\nThe following options are now applied to the inner_hits query:\n * post_filters\n * preferences\n * routing\n\nCloses #27079\nCloses #26649"
},
{
"message": "iter"
}
],
"files": [
{
"diff": "@@ -88,10 +88,9 @@ public void run() throws IOException {\n }\n for (InnerHitBuilder innerHitBuilder : innerHitBuilders) {\n SearchSourceBuilder sourceBuilder = buildExpandSearchSourceBuilder(innerHitBuilder)\n- .query(groupQuery);\n- SearchRequest groupRequest = new SearchRequest(searchRequest.indices())\n- .types(searchRequest.types())\n- .source(sourceBuilder);\n+ .query(groupQuery)\n+ .postFilter(searchRequest.source().postFilter());\n+ SearchRequest groupRequest = buildExpandSearchRequest(searchRequest, sourceBuilder);\n multiRequest.add(groupRequest);\n }\n }\n@@ -120,6 +119,21 @@ public void run() throws IOException {\n }\n }\n \n+ private SearchRequest buildExpandSearchRequest(SearchRequest orig, SearchSourceBuilder sourceBuilder) {\n+ SearchRequest groupRequest = new SearchRequest(orig.indices())\n+ .types(orig.types())\n+ .source(sourceBuilder)\n+ .indicesOptions(orig.indicesOptions())\n+ .requestCache(orig.requestCache())\n+ .preference(orig.preference())\n+ .routing(orig.routing())\n+ .searchType(orig.searchType());\n+ if (orig.isMaxConcurrentShardRequestsSet()) {\n+ groupRequest.setMaxConcurrentShardRequests(orig.getMaxConcurrentShardRequests());\n+ }\n+ return groupRequest;\n+ }\n+\n private SearchSourceBuilder buildExpandSearchSourceBuilder(InnerHitBuilder options) {\n SearchSourceBuilder groupSource = new SearchSourceBuilder();\n groupSource.from(options.getFrom());",
"filename": "core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.text.Text;\n@@ -242,4 +243,43 @@ public void run() throws IOException {\n assertNotNull(reference.get());\n assertEquals(1, mockSearchPhaseContext.phasesExecuted.get());\n }\n+\n+ public void testExpandRequestOptions() throws IOException {\n+ MockSearchPhaseContext mockSearchPhaseContext = new MockSearchPhaseContext(1);\n+ mockSearchPhaseContext.searchTransport = new SearchTransportService(\n+ Settings.builder().put(\"search.remote.connect\", false).build(), null, null) {\n+\n+ @Override\n+ void sendExecuteMultiSearch(MultiSearchRequest request, SearchTask task, ActionListener<MultiSearchResponse> listener) {\n+ final QueryBuilder postFilter = QueryBuilders.existsQuery(\"foo\");\n+ assertTrue(request.requests().stream().allMatch((r) -> \"foo\".equals(r.preference())));\n+ assertTrue(request.requests().stream().allMatch((r) -> \"baz\".equals(r.routing())));\n+ assertTrue(request.requests().stream().allMatch((r) -> postFilter.equals(r.source().postFilter())));\n+ }\n+ };\n+ mockSearchPhaseContext.getRequest().source(new SearchSourceBuilder()\n+ .collapse(\n+ new CollapseBuilder(\"someField\")\n+ .setInnerHits(new InnerHitBuilder().setName(\"foobarbaz\"))\n+ )\n+ .postFilter(QueryBuilders.existsQuery(\"foo\")))\n+ .preference(\"foobar\")\n+ .routing(\"baz\");\n+\n+ SearchHits hits = new SearchHits(new SearchHit[0], 1, 1.0f);\n+ InternalSearchResponse internalSearchResponse = new InternalSearchResponse(hits, null, null, null, false, null, 1);\n+ AtomicReference<SearchResponse> reference = new AtomicReference<>();\n+ ExpandSearchPhase phase = new ExpandSearchPhase(mockSearchPhaseContext, internalSearchResponse, r ->\n+ new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ reference.set(mockSearchPhaseContext.buildSearchResponse(r, null));\n+ }\n+ }\n+ );\n+ phase.run();\n+ mockSearchPhaseContext.assertNoFailure();\n+ assertNotNull(reference.get());\n+ assertEquals(1, mockSearchPhaseContext.phasesExecuted.get());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/search/ExpandSearchPhaseTests.java",
"status": "modified"
}
]
} |
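A sketch of the inner hits expected for the issue's reproduction once the expand phase copies the `post_filter` (along with `preference` and `routing`) onto the search request it generates: only document 1, which matches the `{"term": {"value": 1}}` post filter, should appear. The fragment is illustrative and omits scores and other response fields.

```
"inner_hits": {
  "common": {
    "hits": {
      "total": 1,
      "hits": [
        {
          "_index": "test-index",
          "_type": "test-type",
          "_id": "1",
          "_source": { "value": 1, "common": 1 }
        }
      ]
    }
  }
}
```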
{
"body": "**Elasticsearch version**: `Version: 5.6.0, Build: 781a835/2017-09-07T03:09:58.087Z, JVM: 1.8.0_144`\r\n\r\n**Plugins installed**: `[\"analysis-phonetic\"]`\r\n\r\n**JVM version**: `java version \"1.8.0_144\"`\r\n\r\n**OS version** : `16.7.0 Darwin Kernel Version 16.7.0 (OS X 10.12.6)`\r\n\r\n**Description of the problem including expected versus actual behavior**: Beider-Morse encoding fails silently (returns original string as token) if the languageset is not specified.\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\ncurl -XPUT 'http://localhost:9200/phonetictest?pretty' -d'{\r\n \"settings\": {\r\n \"analysis\": {\r\n \"filter\": {\r\n \"beider_morse_filter\": { \r\n \"type\": \"phonetic\",\r\n \"encoder\": \"beider_morse\",\r\n \"name_type\": \"generic\"\r\n }\r\n },\r\n \"analyzer\": {\r\n \"my_beider_morse\": {\r\n \"tokenizer\": \"standard\",\r\n \"filter\": \"beider_morse_filter\" \r\n }\r\n }\r\n }\r\n }\r\n}'\r\n\r\ncurl -XGET 'http://localhost:9200/phonetictest/_analyze?pretty&analyzer=my_beider_morse' -d'ABADIAS'\r\n```\r\n\r\nIncorrectly returns:\r\n\r\n```\r\n{\r\n \"tokens\" : [\r\n {\r\n \"token\" : \"ABADIAS\",\r\n \"start_offset\" : 0,\r\n \"end_offset\" : 7,\r\n \"type\" : \"<ALPHANUM>\",\r\n \"position\" : 0\r\n }\r\n ]\r\n}\r\n```\r\n\r\nExpected token list based on the current BMPM PHP code at http://stevemorse.org/phoneticinfo.htm :\r\n\r\n```\r\nabadias abadia abadios abadio abodias abodia abodios abodio abYdias abYdios avadias avadios avodias avodios obadias obadia obadios obadio obodias obodia obodios obodio obYdias obYdios ovadias ovadios ovodias ovodios Ybadias Ybadios Ybodias Ybodios YbYdias YbYdios abadiaS abadioS abodiaS abodioS obadiaS obadioS obodiaS obodioS\r\n```\r\n\r\nSimilar failures occurred with all other attempts.",
"comments": [
{
"body": "The problem seems to be because of `this.languageset = settings.getAsArray(\"languageset\");` in `PhoneticTokenFilterFactory` which returns an empty array rather than null when no languageset is defined. Would you like to work on a fix?",
"created_at": "2017-09-28T11:17:30Z"
},
{
"body": "Submitted pull request to fix this: https://github.com/elastic/elasticsearch/pull/26848",
"created_at": "2017-10-01T15:11:16Z"
}
],
"number": 26771,
"title": "Beider_morse phonetic encoder silently fails when languageset not specified"
} | {
"body": "Currently, when we create a BeiderMorseFilter with an unspecified `languageset`,\r\nthe filter will not guess the language, which should be the default behaviour.\r\nThis change fixes this and adds a simple test for the cases with and without\r\nprovided `languageset` settings.\r\n\r\nCloses #26771",
"number": 27112,
"review_comments": [],
"title": "Fix beidermorse phonetic token filter for unspecified `languageset`"
} | {
"commits": [
{
"message": "Fix beidermorse phonetic token filter for unspecified `languageset`\n\nCurrently, when we create a BeiderMorseFilter with an unspecified `languageset`,\nthe filter will not guess the language, which should be the default behaviour.\nThis change fixes this and adds a simple test for the cases with and without\nprovided `languageset` settings.\n\nCloses #26771"
}
],
"files": [
{
"diff": "@@ -19,9 +19,6 @@\n \n package org.elasticsearch.index.analysis;\n \n-import java.util.HashSet;\n-import java.util.List;\n-\n import org.apache.commons.codec.Encoder;\n import org.apache.commons.codec.language.Caverphone1;\n import org.apache.commons.codec.language.Caverphone2;\n@@ -45,6 +42,9 @@\n import org.elasticsearch.index.analysis.phonetic.KoelnerPhonetik;\n import org.elasticsearch.index.analysis.phonetic.Nysiis;\n \n+import java.util.HashSet;\n+import java.util.List;\n+\n public class PhoneticTokenFilterFactory extends AbstractTokenFilterFactory {\n \n private final Encoder encoder;\n@@ -116,11 +116,11 @@ public PhoneticTokenFilterFactory(IndexSettings indexSettings, Environment envir\n public TokenStream create(TokenStream tokenStream) {\n if (encoder == null) {\n if (ruletype != null && nametype != null) {\n- if (languageset != null) {\n- final LanguageSet languages = LanguageSet.from(new HashSet<>(languageset));\n- return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true), languages);\n+ LanguageSet langset = null;\n+ if (languageset != null && languageset.size() > 0) {\n+ langset = LanguageSet.from(new HashSet<>(languageset));\n }\n- return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true));\n+ return new BeiderMorseFilter(tokenStream, new PhoneticEngine(nametype, ruletype, true), langset);\n }\n if (maxcodelength > 0) {\n return new DoubleMetaphoneFilter(tokenStream, maxcodelength, !replace);",
"filename": "plugins/analysis-phonetic/src/main/java/org/elasticsearch/index/analysis/PhoneticTokenFilterFactory.java",
"status": "modified"
},
{
"diff": "@@ -19,26 +19,57 @@\n \n package org.elasticsearch.index.analysis;\n \n+import org.apache.lucene.analysis.BaseTokenStreamTestCase;\n+import org.apache.lucene.analysis.Tokenizer;\n+import org.apache.lucene.analysis.core.WhitespaceTokenizer;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.plugin.analysis.AnalysisPhoneticPlugin;\n import org.elasticsearch.test.ESTestCase;\n import org.hamcrest.MatcherAssert;\n+import org.junit.Before;\n \n import java.io.IOException;\n+import java.io.StringReader;\n \n import static org.hamcrest.Matchers.instanceOf;\n \n public class SimplePhoneticAnalysisTests extends ESTestCase {\n- public void testPhoneticTokenFilterFactory() throws IOException {\n+\n+ private TestAnalysis analysis;\n+\n+ @Before\n+ public void setup() throws IOException {\n String yaml = \"/org/elasticsearch/index/analysis/phonetic-1.yml\";\n Settings settings = Settings.builder().loadFromStream(yaml, getClass().getResourceAsStream(yaml), false)\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n .build();\n- TestAnalysis analysis = createTestAnalysis(new Index(\"test\", \"_na_\"), settings, new AnalysisPhoneticPlugin());\n+ this.analysis = createTestAnalysis(new Index(\"test\", \"_na_\"), settings, new AnalysisPhoneticPlugin());\n+ }\n+\n+ public void testPhoneticTokenFilterFactory() throws IOException {\n TokenFilterFactory filterFactory = analysis.tokenFilter.get(\"phonetic\");\n MatcherAssert.assertThat(filterFactory, instanceOf(PhoneticTokenFilterFactory.class));\n }\n+\n+ public void testPhoneticTokenFilterBeiderMorseNoLanguage() throws IOException {\n+ TokenFilterFactory filterFactory = analysis.tokenFilter.get(\"beidermorsefilter\");\n+ Tokenizer tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(\"ABADIAS\"));\n+ String[] expected = new String[] { \"abYdias\", \"abYdios\", \"abadia\", \"abadiaS\", \"abadias\", \"abadio\", \"abadioS\", \"abadios\", \"abodia\",\n+ \"abodiaS\", \"abodias\", \"abodio\", \"abodioS\", \"abodios\", \"avadias\", \"avadios\", \"avodias\", \"avodios\", \"obadia\", \"obadiaS\",\n+ \"obadias\", \"obadio\", \"obadioS\", \"obadios\", \"obodia\", \"obodiaS\", \"obodias\", \"obodioS\" };\n+ BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected);\n+ }\n+\n+ public void testPhoneticTokenFilterBeiderMorseWithLanguage() throws IOException {\n+ TokenFilterFactory filterFactory = analysis.tokenFilter.get(\"beidermorsefilterfrench\");\n+ Tokenizer tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(\"Rimbault\"));\n+ String[] expected = new String[] { \"rimbD\", \"rimbDlt\", \"rimba\", \"rimbalt\", \"rimbo\", \"rimbolt\", \"rimbu\", \"rimbult\", \"rmbD\", \"rmbDlt\",\n+ \"rmba\", \"rmbalt\", \"rmbo\", \"rmbolt\", \"rmbu\", \"rmbult\" };\n+ BaseTokenStreamTestCase.assertTokenStreamContents(filterFactory.create(tokenizer), expected);\n+ }\n }",
"filename": "plugins/analysis-phonetic/src/test/java/org/elasticsearch/index/analysis/SimplePhoneticAnalysisTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,10 @@ index:\n beidermorsefilter:\n type: phonetic\n encoder: beidermorse\n+ beidermorsefilterfrench:\n+ type: phonetic\n+ encoder: beidermorse\n+ languageset : [ \"french\" ]\n koelnerphonetikfilter:\n type: phonetic\n encoder: koelnerphonetik",
"filename": "plugins/analysis-phonetic/src/test/resources/org/elasticsearch/index/analysis/phonetic-1.yml",
"status": "modified"
}
]
} |
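For comparison with the fix in the record above (which makes the filter guess the language when `languageset` is omitted), a sketch of pinning the language set explicitly, mirroring the `beidermorsefilterfrench` filter added in the test YAML; the index and filter names are placeholders.

```
PUT /phonetictest
{
  "settings": {
    "analysis": {
      "filter": {
        "beider_morse_french": {
          "type": "phonetic",
          "encoder": "beider_morse",
          "name_type": "generic",
          "languageset": ["french"]
        }
      },
      "analyzer": {
        "my_beider_morse_french": {
          "tokenizer": "standard",
          "filter": "beider_morse_french"
        }
      }
    }
  }
}
```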
{
"body": "This commit removes some leniency from the plugin service which skips hidden files in the plugins directory. We really want to ensure the integrity of the plugin folder, so hasta la vista leniency.\r\n\r\nRelates #12465\r\n",
"comments": [
{
"body": "LGTM",
"created_at": "2017-04-08T05:08:01Z"
},
{
"body": "Thanks @rjernst.",
"created_at": "2017-04-08T22:22:52Z"
},
{
"body": "I think it's important to point out that a reason that this leniency is so dangerous is because when we install a plugin we explode it into a hidden temporary directory in the plugins folder. If something blows up during plugin installation (which we try to clean up gracefully), this hidden folder will remain. Leniency allows this to go undetected, and that is bad for the user. This leniency is a bug.",
"created_at": "2017-04-13T02:29:52Z"
},
{
"body": "But this doesn't help users understand what to do - this has occurred for me after initial install. Instead of a solution I get a stack trace and some google treasure hunting until I stumble on this issue. Not a great first impression of ElasticSearch.",
"created_at": "2017-10-01T07:21:22Z"
}
],
"number": 23982,
"title": "Remove hidden file leniency from plugin service"
} | {
"body": "Finder creates these files if you browse a directory there. These files are really annoying, but it's an incredible pain for users that these files are created unbeknownst to them, and then they get in the way of Elasticsearch starting. This commit adds leniency on macOS only to skip these files.\r\n\r\nRelates #23982\r\n",
"number": 27108,
"review_comments": [],
"title": "Ignore .DS_Store files on macOS"
} | {
"commits": [
{
"message": "Ignore .DS_Store files on macOS\n\nFinder creates these files if you browse a directory there. These files\nare really annoying, but it's an incredible pain for users that these\nfiles are created unbeknownst to them, and then they get in the way of\nElasticsearch starting. This commit adds leniency on macOS only to skip\nthese files."
},
{
"message": "Remove unused import"
},
{
"message": "Fix test name"
},
{
"message": "One more test"
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.util.Constants;\n import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.plugins.Platforms;\n import org.elasticsearch.plugins.PluginInfo;\n@@ -73,6 +74,9 @@ void spawnNativePluginControllers(final Environment environment) throws IOExcept\n */\n try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginsFile)) {\n for (final Path plugin : stream) {\n+ if (FileSystemUtils.isDesktopServicesStore(plugin)) {\n+ continue;\n+ }\n final PluginInfo info = PluginInfo.readFromProperties(plugin);\n final Path spawnPath = Platforms.nativeControllerPath(plugin);\n if (!Files.isRegularFile(spawnPath)) {",
"filename": "core/src/main/java/org/elasticsearch/bootstrap/Spawner.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.io;\n \n import org.apache.logging.log4j.Logger;\n+import org.apache.lucene.util.Constants;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.SuppressForbidden;\n@@ -65,6 +66,16 @@ public static boolean isHidden(Path path) {\n return fileName.toString().startsWith(\".\");\n }\n \n+ /**\n+ * Check whether the file denoted by the given path is a desktop services store created by Finder on macOS.\n+ *\n+ * @param path the path\n+ * @return true if the current system is macOS and the specified file appears to be a desktop services store file\n+ */\n+ public static boolean isDesktopServicesStore(final Path path) {\n+ return Constants.MAC_OS_X && Files.isRegularFile(path) && \".DS_Store\".equals(path.getFileName().toString());\n+ }\n+\n /**\n * Appends the path to the given base and strips N elements off the path if strip is > 0.\n */",
"filename": "core/src/main/java/org/elasticsearch/common/io/FileSystemUtils.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.component.LifecycleComponent;\n import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n@@ -326,6 +327,9 @@ static Set<Bundle> getPluginBundles(Path pluginsDirectory) throws IOException {\n \n try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginsDirectory)) {\n for (Path plugin : stream) {\n+ if (FileSystemUtils.isDesktopServicesStore(plugin)) {\n+ continue;\n+ }\n logger.trace(\"--- adding plugin [{}]\", plugin.toAbsolutePath());\n final PluginInfo info;\n try {",
"filename": "core/src/main/java/org/elasticsearch/plugins/PluginsService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.io;\n \n+import org.apache.lucene.util.Constants;\n import org.apache.lucene.util.LuceneTestCase.SuppressFileSystems;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Before;\n@@ -34,6 +35,8 @@\n import java.nio.file.StandardOpenOption;\n import java.util.Arrays;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n /**\n * Unit tests for {@link org.elasticsearch.common.io.FileSystemUtils}.\n */\n@@ -137,4 +140,16 @@ public void testOpenFileURLStream() throws IOException {\n assertArrayEquals(expectedBytes, actualBytes);\n }\n }\n+\n+ public void testIsDesktopServicesStoreFile() throws IOException {\n+ final Path path = createTempDir();\n+ final Path desktopServicesStore = path.resolve(\".DS_Store\");\n+ Files.createFile(desktopServicesStore);\n+ assertThat(FileSystemUtils.isDesktopServicesStore(desktopServicesStore), equalTo(Constants.MAC_OS_X));\n+\n+ Files.delete(desktopServicesStore);\n+ Files.createDirectory(desktopServicesStore);\n+ assertFalse(FileSystemUtils.isDesktopServicesStore(desktopServicesStore));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/common/io/FileSystemUtilsTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.plugins;\n \n+import org.apache.lucene.util.Constants;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.settings.Settings;\n@@ -27,6 +28,7 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n+import java.nio.file.FileSystemException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.Arrays;\n@@ -36,6 +38,7 @@\n \n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.hasToString;\n+import static org.hamcrest.Matchers.instanceOf;\n \n @LuceneTestCase.SuppressFileSystems(value = \"ExtrasFS\")\n public class PluginsServiceTests extends ESTestCase {\n@@ -124,6 +127,28 @@ public void testHiddenFiles() throws IOException {\n assertThat(e, hasToString(containsString(expected)));\n }\n \n+ public void testDesktopServicesStoreFiles() throws IOException {\n+ final Path home = createTempDir();\n+ final Settings settings =\n+ Settings.builder()\n+ .put(Environment.PATH_HOME_SETTING.getKey(), home)\n+ .build();\n+ final Path plugins = home.resolve(\"plugins\");\n+ Files.createDirectories(plugins);\n+ final Path desktopServicesStore = plugins.resolve(\".DS_Store\");\n+ Files.createFile(desktopServicesStore);\n+ if (Constants.MAC_OS_X) {\n+ @SuppressWarnings(\"unchecked\") final PluginsService pluginsService = newPluginsService(settings);\n+ assertNotNull(pluginsService);\n+ } else {\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, () -> newPluginsService(settings));\n+ assertThat(e, hasToString(containsString(\"Could not load plugin descriptor for existing plugin [.DS_Store]\")));\n+ assertNotNull(e.getCause());\n+ assertThat(e.getCause(), instanceOf(FileSystemException.class));\n+ assertThat(e.getCause(), hasToString(containsString(\"Not a directory\")));\n+ }\n+ }\n+\n public void testStartupWithRemovingMarker() throws IOException {\n final Path home = createTempDir();\n final Settings settings =",
"filename": "core/src/test/java/org/elasticsearch/plugins/PluginsServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import java.io.IOException;\n import java.io.InputStreamReader;\n import java.nio.charset.StandardCharsets;\n+import java.nio.file.FileSystemException;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.nio.file.attribute.PosixFileAttributeView;\n@@ -40,8 +41,10 @@\n import java.util.Set;\n import java.util.concurrent.TimeUnit;\n \n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasSize;\n+import static org.hamcrest.Matchers.hasToString;\n \n /**\n * Create a simple \"daemon controller\", put it in the right place and check that it runs.\n@@ -189,6 +192,29 @@ public void testControllerSpawnWithIncorrectDescriptor() throws IOException {\n equalTo(\"plugin [test_plugin] does not have permission to fork native controller\"));\n }\n \n+ public void testSpawnerHandlingOfDesktopServicesStoreFiles() throws IOException {\n+ final Path esHome = createTempDir().resolve(\"home\");\n+ final Settings settings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), esHome.toString()).build();\n+\n+ final Environment environment = new Environment(settings);\n+\n+ Files.createDirectories(environment.pluginsFile());\n+\n+ final Path desktopServicesStore = environment.pluginsFile().resolve(\".DS_Store\");\n+ Files.createFile(desktopServicesStore);\n+\n+ final Spawner spawner = new Spawner();\n+ if (Constants.MAC_OS_X) {\n+ // if the spawner were not skipping the Desktop Services Store files on macOS this would explode\n+ spawner.spawnNativePluginControllers(environment);\n+ } else {\n+ // we do not ignore these files on non-macOS systems\n+ final FileSystemException e =\n+ expectThrows(FileSystemException.class, () -> spawner.spawnNativePluginControllers(environment));\n+ assertThat(e, hasToString(containsString(\"Not a directory\")));\n+ }\n+ }\n+\n private void createControllerProgram(final Path outputFile) throws IOException {\n final Path outputDir = outputFile.getParent();\n Files.createDirectories(outputDir);",
"filename": "qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java",
"status": "modified"
}
]
} |
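The two tests added above (in `PluginsServiceTests` and `SpawnerNoBootstrapTests`) exercise the same behavior: scans of the plugins directory must skip the `.DS_Store` metadata files that Finder drops on macOS instead of treating them as plugins, while other platforms still fail loudly on them. The sketch below is a minimal illustration of that kind of filter, not the production code; `PluginDirScanner`, `findPluginDirs`, and the `onMacOs` flag are hypothetical names (the tests key the expected behavior off `Constants.MAC_OS_X`).

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

final class PluginDirScanner {

    // Collect the entries of the plugins directory. On macOS, silently drop Desktop Services
    // Store files so they are neither read as plugin descriptors nor spawned as native
    // controllers; elsewhere they are returned and the caller fails on them, as the tests assert.
    static List<Path> findPluginDirs(final Path plugins, final boolean onMacOs) throws IOException {
        final List<Path> entries = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(plugins)) {
            for (final Path entry : stream) {
                if (onMacOs && ".DS_Store".equals(entry.getFileName().toString())) {
                    continue; // Finder creates these files in any directory it has browsed
                }
                entries.add(entry);
            }
        }
        return entries;
    }
}
```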
{
"body": "When a search is executing locally over many shards, we can stack overflow during query phase execution. This happens due to callbacks that occur after a phase completes for a shard and we move to the same phase on another shard. If all the shards for the query are local to the local node then we will never go async and these callbacks will end up as recursive calls. With sufficiently many shards, this will end up as a stack overflow. This commit addresses this by truncating the stack by forking to another thread on the executor for the phase.\r\n\r\nCloses #27042\r\n",
"comments": [
{
"body": "I'm not very happy with this solution. It's problem we need to solve but blindly spawning threads is not a satisfying solution for me. I wonder if we need a better solution than the recursion we have at least for the skipShard code path. The `performPhaseOnShard` is threaded on the search end so we should be find along those line. The skipShard one can be done differently and leave the responsibility of advancing to the next execution?",
"created_at": "2017-10-21T20:29:11Z"
},
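The failure mode described in the pull request is easy to reproduce in isolation: when a per-shard callback fires synchronously on the calling thread and itself kicks off the next shard, every shard adds a pair of stack frames, so a purely local search over enough shards overflows the stack. The toy program below uses hypothetical names, not the real search code, and exists only to show the recursive shape.

```java
final class RecursiveCallbackDemo {

    interface Listener {
        void onResponse(int shardIndex);
    }

    // A purely "local" shard answers on the calling thread, so this call does not unwind
    // until the listener has walked every remaining shard.
    static void performPhaseOnShard(final int shardIndex, final int totalShards, final Listener listener) {
        listener.onResponse(shardIndex);
    }

    public static void main(final String[] args) {
        final int totalShards = 1_000_000;
        final Listener listener = new Listener() {
            @Override
            public void onResponse(final int shardIndex) {
                if (shardIndex + 1 < totalShards) {
                    // recursion in disguise: onResponse -> performPhaseOnShard -> onResponse -> ...
                    performPhaseOnShard(shardIndex + 1, totalShards, this);
                }
            }
        };
        performPhaseOnShard(0, totalShards, listener); // throws StackOverflowError long before the last shard
    }
}
```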
{
"body": "I agree Simon, I am not happy with the solution either but I posted this to kick start a discussion. My first solution was exactly what you suggest, a solution for the skip shard path, but it’s not only skip shard that’s a problem if a search phase does not go async. This is revealed by the test I modified here which fails independently of the skip shards (see the random boolean returned in the can match phase) without this change.",
"created_at": "2017-10-21T21:12:02Z"
},
{
"body": "> It's problem we need to solve but blindly spawning threads is not a satisfying solution for me.\r\n\r\nAlso, I want to be clear about one thing here: practically we are not spawning threads, we are using the executor for the action which is typically backed by the search thread pool (although not always).",
"created_at": "2017-10-21T22:16:09Z"
},
{
"body": ">Also, I want to be clear about one thing here: practically we are not spawning threads, we are using the executor for the action which is typically backed by the search thread pool (although not always).\r\n\r\nagreed, sorry if that statement was confusion or harsh. \r\n\r\nWhat about something like this:\r\n\r\n```diff\r\n\r\ndiff --git a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java\r\nindex a68d1d599c..4c92a6accd 100644\r\n--- a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java\r\n+++ b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java\r\n@@ -26,12 +26,14 @@ import org.elasticsearch.action.support.TransportActions;\r\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\r\n import org.elasticsearch.cluster.routing.ShardRouting;\r\n import org.elasticsearch.common.Nullable;\r\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\r\n import org.elasticsearch.common.util.concurrent.AtomicArray;\r\n import org.elasticsearch.search.SearchPhaseResult;\r\n import org.elasticsearch.search.SearchShardTarget;\r\n import org.elasticsearch.transport.ConnectTransportException;\r\n \r\n import java.io.IOException;\r\n+import java.util.concurrent.Executor;\r\n import java.util.concurrent.atomic.AtomicInteger;\r\n import java.util.stream.Stream;\r\n \r\n@@ -51,9 +53,10 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n private final AtomicInteger totalOps = new AtomicInteger();\r\n private final AtomicInteger shardExecutionIndex = new AtomicInteger(0);\r\n private final int maxConcurrentShardRequests;\r\n+ private final Executor executor;\r\n \r\n InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator<SearchShardIterator> shardsIts, Logger logger,\r\n- int maxConcurrentShardRequests) {\r\n+ int maxConcurrentShardRequests, Executor executor) {\r\n super(name);\r\n this.request = request;\r\n this.shardsIts = shardsIts;\r\n@@ -63,7 +66,8 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result\r\n // we process hence we add one for the non active partition here.\r\n this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty();\r\n- this.maxConcurrentShardRequests = Math.min(maxConcurrentShardRequests, shardsIts.size());\r\n+ this.executor = executor;\r\n+ this.maxConcurrentShardRequests = Math.min(maxConcurrentShardRequests, shardsIts.size());\r\n }\r\n \r\n private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId,\r\n@@ -128,14 +132,23 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n \r\n @Override\r\n public final void run() throws IOException {\r\n- boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);\r\n- assert success;\r\n- for (int i = 0; i < maxConcurrentShardRequests; i++) {\r\n- SearchShardIterator shardRoutings = shardsIts.get(i);\r\n+ int numSkip = 0;\r\n+ for (SearchShardIterator shardRoutings : shardsIts) {\r\n if (shardRoutings.skip()) {\r\n skipShard(shardRoutings);\r\n- } else {\r\n- performPhaseOnShard(i, shardRoutings, shardRoutings.nextOrNull());\r\n+ numSkip++;\r\n+ }\r\n+ }\r\n+ int numRemaining = shardsIts.size() - numSkip;\r\n+ if (numRemaining != 0) {\r\n+ int 
maxConcurrentShardRequests = Math.min(this.maxConcurrentShardRequests, numRemaining);\r\n+ boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);\r\n+ assert success;\r\n+ for (int i = 0; i < maxConcurrentShardRequests; i++) {\r\n+ SearchShardIterator shardRoutings = shardsIts.get(i);\r\n+ if (shardRoutings.skip() == false) {\r\n+ performPhaseOnShard(i, shardRoutings, shardRoutings.nextOrNull());\r\n+ }\r\n }\r\n }\r\n }\r\n@@ -144,9 +157,7 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n final int index = shardExecutionIndex.getAndIncrement();\r\n if (index < shardsIts.size()) {\r\n SearchShardIterator shardRoutings = shardsIts.get(index);\r\n- if (shardRoutings.skip()) {\r\n- skipShard(shardRoutings);\r\n- } else {\r\n+ if (shardRoutings.skip() == false) {\r\n performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull());\r\n }\r\n }\r\n@@ -154,31 +165,58 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n \r\n \r\n private void performPhaseOnShard(final int shardIndex, final SearchShardIterator shardIt, final ShardRouting shard) {\r\n+ Thread origin = Thread.currentThread();\r\n if (shard == null) {\r\n- onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()));\r\n+ maybeFork(origin, () -> onShardFailure(shardIndex, null, null, shardIt,\r\n+ new NoShardAvailableActionException(shardIt.shardId())));\r\n } else {\r\n try {\r\n executePhaseOnShard(shardIt, shard, new SearchActionListener<FirstResult>(new SearchShardTarget(shard.currentNodeId(),\r\n shardIt.shardId(), shardIt.getClusterAlias(), shardIt.getOriginalIndices()), shardIndex) {\r\n @Override\r\n public void innerOnResponse(FirstResult result) {\r\n- onShardResult(result, shardIt);\r\n+ maybeFork(origin, () -> onShardResult(result, shardIt));\r\n }\r\n \r\n @Override\r\n public void onFailure(Exception t) {\r\n- onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t);\r\n+ maybeFork(origin, () -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t));\r\n }\r\n });\r\n } catch (ConnectTransportException | IllegalArgumentException ex) {\r\n // we are getting the connection early here so we might run into nodes that are not connected. in that case we move on to\r\n // the next shard. 
previously when using discovery nodes here we had a special case for null when a node was not connected\r\n // at all which is not not needed anymore.\r\n- onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex);\r\n+ maybeFork(origin, () -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex));\r\n }\r\n }\r\n }\r\n \r\n+ private void maybeFork(Thread origin, Runnable runnable) {\r\n+ if (origin == Thread.currentThread()) {\r\n+ // if we are still on the same thread as we started off with\r\n+ // we fork on the threadpool to prevent stack overflow exceptions this\r\n+ // search phase requires to be forked\r\n+ executor.execute(new AbstractRunnable() {\r\n+ @Override\r\n+ public void onFailure(Exception e) {\r\n+ }\r\n+\r\n+ @Override\r\n+ protected void doRun() throws Exception {\r\n+ runnable.run();\r\n+ }\r\n+\r\n+ @Override\r\n+ public boolean isForceExecution() {\r\n+ return true; // very important we can't get rejected here!\r\n+ }\r\n+ });\r\n+ } else {\r\n+ runnable.run();\r\n+ }\r\n+ }\r\n+\r\n private void onShardResult(FirstResult result, SearchShardIterator shardIt) {\r\n assert result.getShardIndex() != -1 : \"shard index is not set\";\r\n assert result.getSearchShardTarget() != null : \"search shard target must not be null\";\r\n@@ -204,7 +242,7 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\r\n } else if (xTotalOps > expectedTotalOps) {\r\n throw new AssertionError(\"unexpected higher total ops [\" + xTotalOps + \"] compared to expected [\"\r\n + expectedTotalOps + \"]\");\r\n- } else {\r\n+ } else if (shardsIt.skip() == false){\r\n maybeExecuteNext();\r\n }\r\n }\r\n```",
"created_at": "2017-10-22T15:03:52Z"
},
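The core of the diff above is the `maybeFork` helper: a callback that is still running on the thread that started the phase is handed to the executor, which truncates the recursive chain, while a callback that already arrived on a different thread runs inline. Stripped of the `AbstractRunnable` and force-execution details of the real patch, the idea reduces to roughly the following sketch (class and field names here are illustrative):

```java
import java.util.concurrent.Executor;

final class ForkOnSameThread {

    private final Executor executor;

    ForkOnSameThread(final Executor executor) {
        this.executor = executor;
    }

    // origin is the thread that issued the shard request. If the callback comes back on that
    // same thread, we are executing synchronously and would otherwise keep deepening the stack,
    // so hand off to the executor; if we are already on another thread, just run inline.
    void maybeFork(final Thread origin, final Runnable runnable) {
        if (origin == Thread.currentThread()) {
            executor.execute(runnable); // the chain restarts on a fresh frame, so stack depth stays bounded
        } else {
            runnable.run();
        }
    }
}
```

In the actual diff the forked task is an `AbstractRunnable` whose `isForceExecution()` returns `true`, so a full queue cannot reject it -- the same property whose loss through `TimedRunnable` wrapping is what #27095 below fixes.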
{
"body": "I did something similar to this yesterday (and I have done something similar in `TransportMultiSearchAction` previously) but I was worried you would reject it for being too much. I am good with something like this if you are. I will push tomorrow.",
"created_at": "2017-10-22T17:33:02Z"
},
{
"body": "> I did something similar to this yesterday (and I have done something similar in TransportMultiSearchAction previously) but I was worried you would reject it for being too much. I am good with something like this if you are. I will push tomorrow.\r\n\r\n++",
"created_at": "2017-10-22T20:01:24Z"
},
{
"body": "> I wonder if we should add a test with a massive amount of shards that trigger all these paths randomly. It can be a test that just makes sure nothing breaks. WDYT?\r\n\r\nI think we have this in `CanMatchPreFilterSearchPhaseTests#testLotsOfShards` that I added previously. I added a little more randomization to cover some additional code paths, I think we are covered here pretty well now?",
"created_at": "2017-10-23T20:38:14Z"
},
{
"body": "Thanks @s1monw, I will merge this when CI is green.",
"created_at": "2017-10-23T20:47:46Z"
},
{
"body": "> I think we have this in CanMatchPreFilterSearchPhaseTests#testLotsOfShards that I added previously. I added a little more randomization to cover some additional code paths, I think we are covered here pretty well now?\r\n\r\nI think the biggest issue is that is passes `shardsIt.size()` as `maxConcurrentShardRequests` which never triggers the codepaths that are critical for this stack overflow issue?",
"created_at": "2017-10-23T20:52:20Z"
},
{
"body": "@s1monw For the can match phase, yes, but note that there is a second phase here now which bounds max concurrent shard requests (and thus fails with stack overflow without the change to production code in this pull request).",
"created_at": "2017-10-23T21:01:13Z"
},
{
"body": "> @s1monw For the can match phase, yes, but note that there is a second phase here now which bounds max concurrent shard requests (and thus fails with stack overflow without the change to production code in this pull request).\r\n\r\nI missed that LGTM",
"created_at": "2017-10-23T21:05:09Z"
},
{
"body": "The test failure on this branch is explained by #27095.",
"created_at": "2017-10-24T14:14:26Z"
},
{
"body": "The reindex retry tests were super-tricky. Many thanks to @nik9000 for helping on this one. ❤️ ",
"created_at": "2017-10-26T02:12:59Z"
},
{
"body": "@s1monw is there a plan to have a patch for the old version of Elasticsearch? We used Elasticsearch v5.4.2 and hit such issue in production, but we don't want to upgrade to v5.6.4. Is there a way to workaround the issue or apply a patch? it is easy to merge your fix to v5.4.2? ",
"created_at": "2018-02-28T03:33:31Z"
}
],
"number": 27069,
"title": "Avoid stack overflow on search phases"
} | {
"body": "If timed runnable wraps an abstract runnable, then it should delegate to the abstract runnable otherwise force execution and handling rejections is dropped on the floor. Thus, timed runnable should itself be an abstract runnable delegating all methods to the wrapped runnable in cases when it is an abstract runnable. This commit causes this to be the case.\r\n\r\nRelates #27069\r\n",
"number": 27095,
"review_comments": [],
"title": "Timed runnable should delegate to abstract runnable"
} | {
"commits": [
{
"message": "Timed runnable should delegate to abstract runnable\n\nIf timed runnable wraps an abstract runnable, then it should delegate to\nthe abstract runnable otherwise force execution and handling rejections\nis dropped on the floor. Thus, timed runnable should itself be an\nabstract runnable delegating all methods to the wrapped runnable in\ncases when it is an abstract runnable. This commit causes this to be the\ncase."
},
{
"message": "Finally"
},
{
"message": "Newline"
},
{
"message": "Finally"
},
{
"message": "Remove newline"
}
],
"files": [
{
"diff": "@@ -23,19 +23,19 @@\n * A class used to wrap a {@code Runnable} that allows capturing the time of the task since creation\n * through execution as well as only execution time.\n */\n-class TimedRunnable implements Runnable {\n+class TimedRunnable extends AbstractRunnable {\n private final Runnable original;\n private final long creationTimeNanos;\n private long startTimeNanos;\n private long finishTimeNanos = -1;\n \n- TimedRunnable(Runnable original) {\n+ TimedRunnable(final Runnable original) {\n this.original = original;\n this.creationTimeNanos = System.nanoTime();\n }\n \n @Override\n- public void run() {\n+ public void doRun() {\n try {\n startTimeNanos = System.nanoTime();\n original.run();\n@@ -44,6 +44,32 @@ public void run() {\n }\n }\n \n+ @Override\n+ public void onRejection(final Exception e) {\n+ if (original instanceof AbstractRunnable) {\n+ ((AbstractRunnable) original).onRejection(e);\n+ }\n+ }\n+\n+ @Override\n+ public void onAfter() {\n+ if (original instanceof AbstractRunnable) {\n+ ((AbstractRunnable) original).onAfter();\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(final Exception e) {\n+ if (original instanceof AbstractRunnable) {\n+ ((AbstractRunnable) original).onFailure(e);\n+ }\n+ }\n+\n+ @Override\n+ public boolean isForceExecution() {\n+ return original instanceof AbstractRunnable && ((AbstractRunnable) original).isForceExecution();\n+ }\n+\n /**\n * Return the time since this task was created until it finished running.\n * If the task is still running or has not yet been run, returns -1.\n@@ -67,4 +93,5 @@ long getTotalExecutionNanos() {\n }\n return finishTimeNanos - startTimeNanos;\n }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/common/util/concurrent/TimedRunnable.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,117 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util.concurrent;\n+\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.concurrent.RejectedExecutionException;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public final class TimedRunnableTests extends ESTestCase {\n+\n+ public void testTimedRunnableDelegatesToAbstractRunnable() {\n+ final boolean isForceExecution = randomBoolean();\n+ final AtomicBoolean onAfter = new AtomicBoolean();\n+ final AtomicReference<Exception> onRejection = new AtomicReference<>();\n+ final AtomicReference<Exception> onFailure = new AtomicReference<>();\n+ final AtomicBoolean doRun = new AtomicBoolean();\n+\n+ final AbstractRunnable runnable = new AbstractRunnable() {\n+ @Override\n+ public boolean isForceExecution() {\n+ return isForceExecution;\n+ }\n+\n+ @Override\n+ public void onAfter() {\n+ onAfter.set(true);\n+ }\n+\n+ @Override\n+ public void onRejection(final Exception e) {\n+ onRejection.set(e);\n+ }\n+\n+ @Override\n+ public void onFailure(final Exception e) {\n+ onFailure.set(e);\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n+ doRun.set(true);\n+ }\n+ };\n+\n+ final TimedRunnable timedRunnable = new TimedRunnable(runnable);\n+\n+ assertThat(timedRunnable.isForceExecution(), equalTo(isForceExecution));\n+\n+ timedRunnable.onAfter();\n+ assertTrue(onAfter.get());\n+\n+ final Exception rejection = new RejectedExecutionException();\n+ timedRunnable.onRejection(rejection);\n+ assertThat(onRejection.get(), equalTo(rejection));\n+\n+ final Exception failure = new Exception();\n+ timedRunnable.onFailure(failure);\n+ assertThat(onFailure.get(), equalTo(failure));\n+\n+ timedRunnable.run();\n+ assertTrue(doRun.get());\n+ }\n+\n+ public void testTimedRunnableDelegatesRunInFailureCase() {\n+ final AtomicBoolean onAfter = new AtomicBoolean();\n+ final AtomicReference<Exception> onFailure = new AtomicReference<>();\n+ final AtomicBoolean doRun = new AtomicBoolean();\n+\n+ final Exception exception = new Exception();\n+\n+ final AbstractRunnable runnable = new AbstractRunnable() {\n+ @Override\n+ public void onAfter() {\n+ onAfter.set(true);\n+ }\n+\n+ @Override\n+ public void onFailure(final Exception e) {\n+ onFailure.set(e);\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n+ doRun.set(true);\n+ throw exception;\n+ }\n+ };\n+\n+ final TimedRunnable timedRunnable = new TimedRunnable(runnable);\n+ timedRunnable.run();\n+ assertTrue(doRun.get());\n+ assertThat(onFailure.get(), equalTo(exception));\n+ assertTrue(onAfter.get());\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/util/concurrent/TimedRunnableTests.java",
"status": "added"
}
]
} |
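The consequence of a non-delegating wrapper is easy to demonstrate with toy types: once a task with `AbstractRunnable`-style overrides is hidden behind a plain `Runnable`, anything that inspects the task -- whether to honor force execution or to route a rejection -- sees only the wrapper's defaults. The sketch below uses simplified stand-ins, not the Elasticsearch classes:

```java
final class DelegationSketch {

    // Simplified stand-in for AbstractRunnable.
    abstract static class Task implements Runnable {
        boolean isForceExecution() {
            return false;
        }
    }

    // A timing wrapper that does not extend Task and does not delegate: the wrapped task's
    // overrides become invisible to whoever executes the wrapper.
    static final class NaiveTimedWrapper implements Runnable {
        private final Runnable original;

        NaiveTimedWrapper(final Runnable original) {
            this.original = original;
        }

        @Override
        public void run() {
            final long start = System.nanoTime();
            try {
                original.run();
            } finally {
                System.out.println("took " + (System.nanoTime() - start) + " ns");
            }
        }
    }

    public static void main(final String[] args) {
        final Task mustRun = new Task() {
            @Override
            public void run() {
            }

            @Override
            boolean isForceExecution() {
                return true; // this task must not be rejected
            }
        };
        final Runnable wrapped = new NaiveTimedWrapper(mustRun);
        // An executor that checks force execution sees only the wrapper, so the request is lost:
        final boolean force = wrapped instanceof Task && ((Task) wrapped).isForceExecution();
        System.out.println(force); // false
    }
}
```

Making the wrapper itself extend the abstract type and forward `isForceExecution`, `onRejection`, `onAfter`, and `onFailure`, as the diff above does, restores the wrapped task's semantics.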
{
"body": "When Scrolling, `ShardSearchFailures` returned by ElasticSearch return no `index`, this in turn makes `RestHighLevelClient` fail to parse them, because `Index` [requires name not to be null](https://github.com/elastic/elasticsearch/blob/5.6/core/src/main/java/org/elasticsearch/index/Index.java#L53)\r\n\r\nWe're seeing the following exceptions:\r\n```\r\njava.io.IOException: Unable to parse response body for Response{requestLine=GET /_search/scroll HTTP/1.1, host=http://unrouted.ds-apicore-misc-01.ds:80, response=HTTP/1.1 200 OK}\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:394)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:361)\r\n\tat org.elasticsearch.client.RestHighLevelClient.searchScroll(RestHighLevelClient.java:321)\r\n\t...\r\nCaused by: java.lang.NullPointerException\r\n\tat java.util.Objects.requireNonNull(Objects.java:203)\r\n\tat org.elasticsearch.index.Index.<init>(Index.java:53)\r\n\tat org.elasticsearch.action.search.ShardSearchFailure.fromXContent(ShardSearchFailure.java:213)\r\n\tat org.elasticsearch.action.search.SearchResponse.fromXContent(SearchResponse.java:297)\r\n\tat org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:505)\r\n\tat org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$2(RestHighLevelClient.java:361)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:392)\r\n\t... 39 more\r\n```\r\n\r\nWe're in the process of migrating our `TransportClients` to `RestHighLevelClient`, when using `TransportClient` this exceptions did not occur.\r\n\r\nOn `TransportClient` this translates to having a response with the following shard failures:\r\n```\r\nshard [_na], reason [RemoteTransportException[[i-0ad52147150c19404][10.64.92.24:9300][indices:data/read/search[phase/query/scroll]]]; nested: SearchContextMissingException[No search context found for id [5862]]; ], cause [SearchContextMissingException[No search context found for id [5862]]\r\n```\r\n\r\n**Elasticsearch version** 5.6.3\r\n\r\nEdit:\r\n\r\nThis apparently seems to happen because on Scrolls the `shardTarget` of a `ShardSearchFailure` can be [null](https://github.com/elastic/elasticsearch/blob/5.6/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java#L170), but when parsing itself with `fromXContent` [it always tries to instantiate one with null values](https://github.com/elastic/elasticsearch/blob/5.6/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java#L214). ",
"comments": [
{
"body": "hi @sschepens thanks a lot for opening this issue and for the analysis that alright headed me in the right direction. We did have an integration test for the `SearchContextMissingException` case but it didn't cover partial failures, where the target comes back null. I am going to open a PR with a fix.",
"created_at": "2017-10-23T10:44:17Z"
},
{
"body": "thanks @javanna !",
"created_at": "2017-10-26T13:52:55Z"
}
],
"number": 27055,
"title": "RestHighLevelClient fails to parse ShardSearchFailure on Scroll"
} | {
"body": "Turns out that `ShardSearchTarget` is nullable, hence its fields may not be printed out as part of `ShardSearchFailure#toXContent`, in which case `fromXContent` cannot parse it back. We would previously try to create the object with all of its fields set to null, but `Index` complains about it in the constructor. Also made sure that this code path is covered by our unit tests in `ShardSearchFailureTests`.\r\n\r\nCloses #27055",
"number": 27078,
"review_comments": [],
"title": "Make ShardSearchTarget optional when parsing ShardSearchFailure"
} | {
"commits": [
{
"message": "Make ShardSearchTarget optional when parsing ShardSearchFailure\n\nTurns out that `ShardSearchTarget` is nullable, hence its fields may not be printed out as part of `ShardSearchFailure#toXContent`, in which case `fromXContent` cannot parse it back. We would previously try to create the object with all of its fields set to null, but `Index` complains about it in the constructor. Also made sure that this code path is covered by our unit tests in `ShardSearchFailureTests`.\n\nCloses #27055"
}
],
"files": [
{
"diff": "@@ -131,7 +131,8 @@ public String reason() {\n \n @Override\n public String toString() {\n- return \"shard [\" + (shardTarget == null ? \"_na\" : shardTarget) + \"], reason [\" + reason + \"], cause [\" + (cause == null ? \"_na\" : ExceptionsHelper.stackTrace(cause)) + \"]\";\n+ return \"shard [\" + (shardTarget == null ? \"_na\" : shardTarget) + \"], reason [\" + reason + \"], cause [\" +\n+ (cause == null ? \"_na\" : ExceptionsHelper.stackTrace(cause)) + \"]\";\n }\n \n public static ShardSearchFailure readShardSearchFailure(StreamInput in) throws IOException {\n@@ -210,9 +211,12 @@ public static ShardSearchFailure fromXContent(XContentParser parser) throws IOEx\n parser.skipChildren();\n }\n }\n- return new ShardSearchFailure(exception,\n- new SearchShardTarget(nodeId,\n- new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE));\n+ SearchShardTarget searchShardTarget = null;\n+ if (nodeId != null) {\n+ searchShardTarget = new SearchShardTarget(nodeId,\n+ new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE);\n+ }\n+ return new ShardSearchFailure(exception, searchShardTarget);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java",
"status": "modified"
},
{
"diff": "@@ -175,7 +175,7 @@ public void testFromXContentWithFailures() throws IOException {\n ShardSearchFailure parsedFailure = parsed.getShardFailures()[i];\n ShardSearchFailure originalFailure = failures[i];\n assertEquals(originalFailure.index(), parsedFailure.index());\n- assertEquals(originalFailure.shard().getNodeId(), parsedFailure.shard().getNodeId());\n+ assertEquals(originalFailure.shard(), parsedFailure.shard());\n assertEquals(originalFailure.shardId(), parsedFailure.shardId());\n String originalMsg = originalFailure.getCause().getMessage();\n assertEquals(parsedFailure.getCause().getMessage(), \"Elasticsearch exception [type=parsing_exception, reason=\" +",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchResponseTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.search;\n \n import org.elasticsearch.action.OriginalIndices;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -40,12 +41,14 @@ public class ShardSearchFailureTests extends ESTestCase {\n public static ShardSearchFailure createTestItem() {\n String randomMessage = randomAlphaOfLengthBetween(3, 20);\n Exception ex = new ParsingException(0, 0, randomMessage , new IllegalArgumentException(\"some bad argument\"));\n- String nodeId = randomAlphaOfLengthBetween(5, 10);\n- String indexName = randomAlphaOfLengthBetween(5, 10);\n- String indexUuid = randomAlphaOfLengthBetween(5, 10);\n- int shardId = randomInt();\n- return new ShardSearchFailure(ex,\n- new SearchShardTarget(nodeId, new ShardId(new Index(indexName, indexUuid), shardId), null, null));\n+ SearchShardTarget searchShardTarget = null;\n+ if (randomBoolean()) {\n+ String nodeId = randomAlphaOfLengthBetween(5, 10);\n+ String indexName = randomAlphaOfLengthBetween(5, 10);\n+ searchShardTarget = new SearchShardTarget(nodeId,\n+ new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), randomInt()), null, null);\n+ }\n+ return new ShardSearchFailure(ex, searchShardTarget);\n }\n \n public void testFromXContent() throws IOException {\n@@ -80,10 +83,10 @@ private void doFromXContentTestWithRandomFields(boolean addRandomFields) throws\n assertNull(parser.nextToken());\n }\n assertEquals(response.index(), parsed.index());\n- assertEquals(response.shard().getNodeId(), parsed.shard().getNodeId());\n+ assertEquals(response.shard(), parsed.shard());\n assertEquals(response.shardId(), parsed.shardId());\n \n- /**\n+ /*\n * we cannot compare the cause, because it will be wrapped in an outer\n * ElasticSearchException best effort: try to check that the original\n * message appears somewhere in the rendered xContent",
"filename": "core/src/test/java/org/elasticsearch/action/search/ShardSearchFailureTests.java",
"status": "modified"
}
]
} |
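The fix in this record comes down to a null guard during parsing: the shard target is only reconstructed when its fields were actually present in the response, because the `Index` constructor rejects a null name. A condensed, stand-alone sketch of that pattern, using simplified stand-in classes rather than the real parser, might look like this:

```java
import java.util.Objects;

final class ShardFailureParsingSketch {

    static final class Index {
        final String name;

        Index(final String name) {
            this.name = Objects.requireNonNull(name, "index name must not be null");
        }
    }

    static final class ShardTarget {
        final String nodeId;
        final Index index;
        final int shardId;

        ShardTarget(final String nodeId, final Index index, final int shardId) {
            this.nodeId = nodeId;
            this.index = index;
            this.shardId = shardId;
        }
    }

    // Scroll failures may carry no shard target at all; building one unconditionally from
    // parsed-but-absent fields trips the Objects.requireNonNull above. Guard on a field that
    // is only present when the target was actually serialized.
    static ShardTarget targetFromParsedFields(final String nodeId, final String indexName, final int shardId) {
        if (nodeId == null) {
            return null; // the failure simply has no target, as with scroll context misses
        }
        return new ShardTarget(nodeId, new Index(indexName), shardId);
    }
}
```

Guarding on the node id mirrors the actual change, which leaves `searchShardTarget` null exactly in the scroll case reported in the issue.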