issue (dict) | pr (dict) | pr_details (dict)
---|---|---|
{
"body": "Right now we break if:\r\n\r\n* `JAVA_HOME` end with a trailing slash\r\n* Path is already quoted in the variable.\r\n\r\nWe should either attempt to fix, or detect and exit with a more appropriate message explaining the issue and the fix.\r\n\r\ncc @elastic/microsoft ",
"comments": [
{
"body": "Hi @Mpdreamz,\r\nI noticed this issue has been open for a while, and since it was a quick fix I just put together a solution in PR #27077. Could you please take a look? Thanks!",
"created_at": "2017-10-23T10:06:52Z"
},
{
"body": "I quickly tested with `JAVA_HOME` ending with `/` or `\\` and the service is correctly installed and started.\r\n\r\n@Mpdreamz what issue did you encounter with `/`?",
"created_at": "2017-10-23T10:17:36Z"
},
{
"body": "Here's another path that breaks it. It splits the path after the parens - the \"and\" in the output is the \"and\" from the path after \"(parens)\". Sorry for the screenshot, virtualbox copy and paste isn't working for me today\r\n\r\n<img width=\"933\" alt=\"screen shot 2018-03-12 at 4 03 27 pm\" src=\"https://user-images.githubusercontent.com/29205940/37314076-5d0513ca-260f-11e8-899d-a9cd6be6fd29.png\">\r\n",
"created_at": "2018-03-12T23:07:47Z"
},
{
"body": "Hm.... there was some work done in this area ( the problem here is `)` ) #26916 and #27012\r\n@andyb-elastic which version are you running with ?",
"created_at": "2018-03-13T13:59:07Z"
},
{
"body": "That was on master at 2d1d6503a4d034acd89e5fe31f03e39513976056",
"created_at": "2018-03-13T16:32:45Z"
}
],
"number": 23774,
"title": "%JAVA_HOME% resiliency on Windows"
} | {
"body": "Currently if `JAVA_HOME` contains quotes the Windows service cannot be installed (and started).\r\n\r\nRelates to #23774",
"number": 27077,
"review_comments": [
{
"body": "quotes",
"created_at": "2018-03-16T13:20:46Z"
},
{
"body": "Perhaps merge these two if statements to one with e.g:\r\n\r\n```cmd\r\nSET JAVA_HOME_UNQUOTED=%JAVA_HOME:\"=%\r\nIF not \"!foo!\"==\"!JAVA_HOME_UNQUOTED!\" (\r\n```\r\n\r\nSince a double quote is [a reserved character in filenames on windows](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396)",
"created_at": "2018-03-16T13:58:15Z"
},
{
"body": "That makes things simpler :) \r\nThanks!",
"created_at": "2018-03-16T15:21:21Z"
},
{
"body": "The tests need to be numbered, so that the execution order is set. Notice the `testNNN` prefix on all the other tests.",
"created_at": "2019-02-28T21:37:41Z"
},
{
"body": "This should be an assumeTrue for windows. Otherwise the test will show up as \"passing\" for other distributions, even though it hasn't run anything since all the checks below are within a windows block.",
"created_at": "2019-04-26T03:55:56Z"
}
],
"title": "Fix handling of `JAVA_HOME` containg quotes"
} | {
"commits": [
{
"message": "Fix handling of JAVA_HOME containg quotes"
},
{
"message": "If JAVA_HOME contains quotes, the service will not be installed"
},
{
"message": "the check if JAVA_HOME contains qoutes is performed in elasticsearch-env"
},
{
"message": "addressing reviewers remarks"
},
{
"message": "Merge remote-tracking branch 'origin/master' into JAVA_HOME_quotes"
},
{
"message": "add a package test"
}
],
"files": [
{
"diff": "@@ -1,3 +1,5 @@\n+setlocal enabledelayedexpansion\n+\n set SCRIPT=%0\n \n rem determine Elasticsearch home; to do this, we strip from the path until we\n@@ -16,6 +18,20 @@ for %%I in (\"%ES_HOME%..\") do set ES_HOME=%%~dpfI\n rem now set the classpath\n set ES_CLASSPATH=!ES_HOME!\\lib\\*\n \n+rem check that JAVA_HOME does not contain quotes\n+if defined JAVA_HOME (\n+ if \"%JAVA_HOME:\"=%\"==\"\" (\n+ echo JAVA_HOME is empty. Specify a valid path for JAVA_HOME\n+ exit /b 1\n+ )\n+\n+ set \"JAVA_HOME_UNQUOTED=%JAVA_HOME:\"=%\"\n+ if not \"!JAVA_HOME!\"==\"!JAVA_HOME_UNQUOTED!\" (\n+ echo JAVA_HOME cannot contain quotes (\"). Remove the quotes from JAVA_HOME and try again.\n+ exit /b 1\n+ )\n+)\n+\n rem now set the path to java\n if defined JAVA_HOME (\n set JAVA=\"%JAVA_HOME%\\bin\\java.exe\"",
"filename": "distribution/src/bin/elasticsearch-env.bat",
"status": "modified"
},
{
"diff": "@@ -318,4 +318,51 @@ public void test100RepairIndexCliPackaging() {\n }\n }\n \n+ public void testAbortWhenJavaHomeContainsQuotes() {\n+ assumeThat(installation, is(notNullValue()));\n+\n+ final Installation.Executables bin = installation.executables();\n+ final Shell sh = new Shell();\n+\n+ Platforms.onWindows(() -> {\n+ final String originalJava = sh.run(\"$Env:JAVA_HOME\").stdout.trim();\n+\n+ final String quote = \"\\\\\\\"\";\n+ final String emptyJava = quote + quote;\n+\n+ // this won't persist to another session so we don't have to reset anything\n+ final Result emptyJavaResult = sh.runIgnoreExitCode(\n+ \"$Env:JAVA_HOME = '\" + emptyJava + \"'; \" +\n+ bin.elasticsearch\n+ );\n+\n+ assertThat(emptyJavaResult.exitCode, is(1));\n+ assertThat(emptyJavaResult.stdout, containsString(\"JAVA_HOME is empty. Specify a valid path for JAVA_HOME\"));\n+\n+ final String quotedJava = quote + originalJava + quote;\n+\n+ // this won't persist to another session so we don't have to reset anything\n+ final Result quotedJavaResult = sh.runIgnoreExitCode(\n+ \"$Env:JAVA_HOME = '\" + quotedJava + \"'; \" +\n+ bin.elasticsearch\n+ );\n+\n+ assertThat(quotedJavaResult.exitCode, is(1));\n+ assertThat(quotedJavaResult.stdout,\n+ containsString(\"JAVA_HOME cannot contain quotes (\\\"). Remove the quotes from JAVA_HOME and try again.\"));\n+\n+ final String singleQuoteJava = originalJava + quote;\n+\n+ // this won't persist to another session so we don't have to reset anything\n+ final Result singleQuoteJavaResult = sh.runIgnoreExitCode(\n+ \"$Env:JAVA_HOME = '\" + singleQuoteJava + \"'; \" +\n+ bin.elasticsearch\n+ );\n+\n+ assertThat(singleQuoteJavaResult.exitCode, is(1));\n+ assertThat(singleQuoteJavaResult.stdout,\n+ containsString(\"JAVA_HOME cannot contain quotes (\\\"). Remove the quotes from JAVA_HOME and try again.\"));\n+ });\n+ }\n+\n }",
"filename": "qa/vagrant/src/main/java/org/elasticsearch/packaging/test/ArchiveTestCase.java",
"status": "modified"
}
]
} |
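For readers who don't parse Windows batch fluently, the `elasticsearch-env.bat` change in PR #27077 above amounts to three checks on `JAVA_HOME`: treat a value that is empty after stripping quotes as an error, reject a value that still differs from its quote-stripped form, and otherwise proceed. Below is a minimal Java sketch of that same logic; the class name is invented for illustration, and the real check is the batch snippet shown in the diff.

```java
// Illustrative re-statement of the JAVA_HOME validation performed by
// distribution/src/bin/elasticsearch-env.bat (see the diff above); not part of Elasticsearch.
public final class JavaHomeCheck {

    /** Returns an error message, or null when JAVA_HOME is acceptable. */
    static String validate(String javaHome) {
        if (javaHome == null) {
            return null; // mirrors the "if defined JAVA_HOME" guard: nothing to check
        }
        String unquoted = javaHome.replace("\"", ""); // batch equivalent: %JAVA_HOME:"=%
        if (unquoted.isEmpty()) {
            return "JAVA_HOME is empty. Specify a valid path for JAVA_HOME";
        }
        if (!unquoted.equals(javaHome)) {
            return "JAVA_HOME cannot contain quotes (\"). Remove the quotes from JAVA_HOME and try again.";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validate("\"\""));         // empty once quotes are stripped -> error
        System.out.println(validate("\"C:\\jdk8\"")); // quoted path -> error
        System.out.println(validate("C:\\jdk8"));     // plain path -> null (ok)
    }
}
```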
{
"body": "As discussed [here](https://github.com/elastic/elasticsearch/issues/21903), the Java API supports specifying a pipeline in a bulk upsert, but the REST API either does not (or is not documented).\r\n\r\n```\r\nPUT _ingest/pipeline/timestamps\r\n{\r\n \"description\": \"_timestamp\",\r\n \"processors\": [\r\n {\r\n \"set\": {\r\n \"field\": \"_source.timestamp_created\",\r\n \"value\": \"{{_ingest.timestamp}}\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\nPOST my_index/test_type/_bulk?pipeline=timestamps\r\n{\"index\":{\"_id\":\"1\"}}\r\n{\"field1\":\"val1\", \"counter\":0}\r\n{\"update\":{\"_id\":\"2\"}}\r\n{\"script\":{\"inline\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\", \"counter\":0}}\r\n```\r\n\r\nThe upserted record didn't hit the pipeline:\r\n\r\n```\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"my_index\",\r\n \"_type\": \"test_type\",\r\n \"_id\": \"2\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"field1\": \"upserted_val\",\r\n \"counter\": 1\r\n }\r\n },\r\n {\r\n \"_index\": \"my_index\",\r\n \"_type\": \"test_type\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"timestamp_created\": \"Fri Jul 07 10:47:07 EDT 2017\",\r\n \"field1\": \"val1\",\r\n \"counter\": 0\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```",
"comments": [
{
"body": "Please, we have news about this issue?\r\nWe would like use this feature on our application 😊 \r\n\r\nThank's!",
"created_at": "2017-10-13T18:10:03Z"
},
{
"body": "Gave this a go in https://github.com/elastic/elasticsearch/pull/27075 :)",
"created_at": "2017-10-24T08:37:30Z"
},
{
"body": "fixed in `master` via #27075 ",
"created_at": "2017-10-25T17:04:15Z"
}
],
"number": 25601,
"title": "Add pipeline support to REST API for bulk upsert"
} | {
"body": "Tried fixing #25601 here and needed to adjust 2 things:\r\n\r\nIn `BulkRequest` the default pipeline read from the URL parameter in `org.elasticsearch.rest.action.document.RestBulkAction#prepareRequest` was not propagated down to `upsertRequest`:\r\n* Fixed that\r\n* Created `org.elasticsearch.rest.action.document.RestBulkActionTests#testBulkPipelineUpsert` (sorry for adding a new test class, couldn't find any other class to put this into, tried to stick with the style I found elsewhere as best as I could though :))\r\n\r\nWith this fixed, the pipeline was still only executed on `indexRequest` in `org.elasticsearch.common.util.concurrent.AbstractRunnable#doRun`:\r\n\r\n* changed that to run for upsert requested found in `UpdateRequest` as well\r\n* Added `org.elasticsearch.ingest.IngestClientIT#testBulkWithUpsert` to verify that this works for doc-type upserts\r\n * script type ones I couldn't find a straightforward way to write a UT/IT for, unfortunately. Manually verified the example from #25601 to work though + the only branching here comes from the ternery`indexRequest = updateRequest.docAsUpsert() ? updateRequest.doc() : updateRequest.upsertRequest();` and that is used the exact same way in `org.elasticsearch.action.update.UpdateHelper#prepareUpsert` so I guess this should be fine?\r\n\r\nCloses #25601",
"number": 27075,
"review_comments": [
{
"body": "Thanks, great new test.",
"created_at": "2017-10-25T16:53:07Z"
},
{
"body": "Nit: maybe fix the indentation to allign everything. Then again, code formatting will probably destroy this again soon, so no big deal...",
"created_at": "2017-10-25T16:54:11Z"
}
],
"title": "Add pipeline support for REST API bulk upsert"
} | {
"commits": [
{
"message": " #25601 Add pipeline support for REST API bulk upsert"
}
],
"files": [
{
"diff": "@@ -429,6 +429,7 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n if (upsertRequest != null) {\n upsertRequest.version(version);\n upsertRequest.versionType(versionType);\n+ upsertRequest.setPipeline(defaultPipeline);\n }\n IndexRequest doc = updateRequest.doc();\n if (doc != null) {",
"filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.DocWriteRequest;\n import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterStateApplier;\n import org.elasticsearch.common.Strings;\n@@ -81,17 +82,21 @@ public void onFailure(Exception e) {\n @Override\n protected void doRun() throws Exception {\n for (DocWriteRequest actionRequest : actionRequests) {\n- if ((actionRequest instanceof IndexRequest)) {\n- IndexRequest indexRequest = (IndexRequest) actionRequest;\n- if (Strings.hasText(indexRequest.getPipeline())) {\n- try {\n- innerExecute(indexRequest, getPipeline(indexRequest.getPipeline()));\n- //this shouldn't be needed here but we do it for consistency with index api\n- // which requires it to prevent double execution\n- indexRequest.setPipeline(null);\n- } catch (Exception e) {\n- itemFailureHandler.accept(indexRequest, e);\n- }\n+ IndexRequest indexRequest = null;\n+ if (actionRequest instanceof IndexRequest) {\n+ indexRequest = (IndexRequest) actionRequest;\n+ } else if (actionRequest instanceof UpdateRequest) {\n+ UpdateRequest updateRequest = (UpdateRequest) actionRequest;\n+ indexRequest = updateRequest.docAsUpsert() ? updateRequest.doc() : updateRequest.upsertRequest();\n+ }\n+ if (indexRequest != null && Strings.hasText(indexRequest.getPipeline())) {\n+ try {\n+ innerExecute(indexRequest, getPipeline(indexRequest.getPipeline()));\n+ //this shouldn't be needed here but we do it for consistency with index api\n+ // which requires it to prevent double execution\n+ indexRequest.setPipeline(null);\n+ } catch (Exception e) {\n+ itemFailureHandler.accept(indexRequest, e);\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/ingest/PipelineExecutionService.java",
"status": "modified"
},
{
"diff": "@@ -36,11 +36,16 @@\n import org.elasticsearch.action.ingest.SimulatePipelineRequest;\n import org.elasticsearch.action.ingest.SimulatePipelineResponse;\n import org.elasticsearch.action.ingest.WritePipelineResponse;\n+import org.elasticsearch.action.support.replication.TransportReplicationActionTests;\n+import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.client.Requests;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.script.Script;\n+import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.test.ESIntegTestCase;\n \n import java.util.Arrays;\n@@ -169,6 +174,43 @@ public void testBulkWithIngestFailures() throws Exception {\n }\n }\n \n+ public void testBulkWithUpsert() throws Exception {\n+ createIndex(\"index\");\n+\n+ BytesReference source = jsonBuilder().startObject()\n+ .field(\"description\", \"my_pipeline\")\n+ .startArray(\"processors\")\n+ .startObject()\n+ .startObject(\"test\")\n+ .endObject()\n+ .endObject()\n+ .endArray()\n+ .endObject().bytes();\n+ PutPipelineRequest putPipelineRequest = new PutPipelineRequest(\"_id\", source, XContentType.JSON);\n+ client().admin().cluster().putPipeline(putPipelineRequest).get();\n+\n+ BulkRequest bulkRequest = new BulkRequest();\n+ IndexRequest indexRequest = new IndexRequest(\"index\", \"type\", \"1\").setPipeline(\"_id\");\n+ indexRequest.source(Requests.INDEX_CONTENT_TYPE, \"field1\", \"val1\");\n+ bulkRequest.add(indexRequest);\n+ UpdateRequest updateRequest = new UpdateRequest(\"index\", \"type\", \"2\");\n+ updateRequest.doc(\"{}\", Requests.INDEX_CONTENT_TYPE);\n+ updateRequest.upsert(\"{\\\"field1\\\":\\\"upserted_val\\\"}\", XContentType.JSON).upsertRequest().setPipeline(\"_id\");\n+ bulkRequest.add(updateRequest);\n+\n+ BulkResponse response = client().bulk(bulkRequest).actionGet();\n+\n+ assertThat(response.getItems().length, equalTo(bulkRequest.requests().size()));\n+ Map<String, Object> inserted = client().prepareGet(\"index\", \"type\", \"1\")\n+ .get().getSourceAsMap();\n+ assertThat(inserted.get(\"field1\"), equalTo(\"val1\"));\n+ assertThat(inserted.get(\"processed\"), equalTo(true));\n+ Map<String, Object> upserted = client().prepareGet(\"index\", \"type\", \"2\")\n+ .get().getSourceAsMap();\n+ assertThat(upserted.get(\"field1\"), equalTo(\"upserted_val\"));\n+ assertThat(upserted.get(\"processed\"), equalTo(true));\n+ }\n+\n public void test() throws Exception {\n BytesReference source = jsonBuilder().startObject()\n .field(\"description\", \"my_pipeline\")",
"filename": "core/src/test/java/org/elasticsearch/ingest/IngestClientIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.rest.action.document;\n+\n+import java.util.HashMap;\n+import java.util.Map;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.bulk.BulkRequest;\n+import org.elasticsearch.action.update.UpdateRequest;\n+import org.elasticsearch.client.node.NodeClient;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.rest.RestChannel;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.rest.RestRequest;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.rest.FakeRestRequest;\n+import org.hamcrest.CustomMatcher;\n+import org.mockito.Mockito;\n+\n+import static org.mockito.Matchers.any;\n+import static org.mockito.Matchers.argThat;\n+import static org.mockito.Mockito.mock;\n+\n+/**\n+ * Tests for {@link RestBulkAction}.\n+ */\n+public class RestBulkActionTests extends ESTestCase {\n+\n+ public void testBulkPipelineUpsert() throws Exception {\n+ final NodeClient mockClient = mock(NodeClient.class);\n+ final Map<String, String> params = new HashMap<>();\n+ params.put(\"pipeline\", \"timestamps\");\n+ new RestBulkAction(settings(Version.CURRENT).build(), mock(RestController.class))\n+ .handleRequest(\n+ new FakeRestRequest.Builder(\n+ xContentRegistry()).withPath(\"my_index/my_type/_bulk\").withParams(params)\n+ .withContent(\n+ new BytesArray(\n+ \"{\\\"index\\\":{\\\"_id\\\":\\\"1\\\"}}\\n\" +\n+ \"{\\\"field1\\\":\\\"val1\\\"}\\n\" +\n+ \"{\\\"update\\\":{\\\"_id\\\":\\\"2\\\"}}\\n\" +\n+ \"{\\\"script\\\":{\\\"source\\\":\\\"ctx._source.counter++;\\\"},\\\"upsert\\\":{\\\"field1\\\":\\\"upserted_val\\\"}}\\n\"\n+ ),\n+ XContentType.JSON\n+ ).withMethod(RestRequest.Method.POST).build(),\n+ mock(RestChannel.class), mockClient\n+ );\n+ Mockito.verify(mockClient)\n+ .bulk(argThat(new CustomMatcher<BulkRequest>(\"Pipeline in upsert request\") {\n+ @Override\n+ public boolean matches(final Object item) {\n+ BulkRequest request = (BulkRequest) item;\n+ UpdateRequest update = (UpdateRequest) request.requests().get(1);\n+ return \"timestamps\".equals(update.upsertRequest().getPipeline());\n+ }\n+ }), any());\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java",
"status": "added"
}
]
} |
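Taken together, the two PR #27075 changes above mean a bulk upsert can carry an ingest pipeline just like a plain index request. The sketch below shows the client-side usage in the 6.x Java API style of `IngestClientIT#testBulkWithUpsert`; the index/type/pipeline names simply mirror that test, and `BulkUpsertPipelineExample` is an invented wrapper class.

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.xcontent.XContentType;

public final class BulkUpsertPipelineExample {

    /** Builds a bulk request in which both the indexed doc and the upserted doc use pipeline "_id". */
    static BulkRequest buildBulk() {
        BulkRequest bulkRequest = new BulkRequest();

        // Plain index request: pipelines already worked here before the fix.
        IndexRequest indexRequest = new IndexRequest("index", "type", "1").setPipeline("_id");
        indexRequest.source(Requests.INDEX_CONTENT_TYPE, "field1", "val1");
        bulkRequest.add(indexRequest);

        // Upsert: the pipeline is set on the upsert document; before #27075 it was silently ignored.
        UpdateRequest updateRequest = new UpdateRequest("index", "type", "2");
        updateRequest.doc("{}", Requests.INDEX_CONTENT_TYPE);
        updateRequest.upsert("{\"field1\":\"upserted_val\"}", XContentType.JSON)
                .upsertRequest()
                .setPipeline("_id");
        bulkRequest.add(updateRequest);
        return bulkRequest;
    }

    static void run(Client client) {
        client.bulk(buildBulk()).actionGet(); // document "2" now runs through pipeline "_id" on upsert
    }
}
```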
{
"body": "Say your mappings look like this:\r\n\r\n```\r\n{\r\n \"properties\": {\r\n \"foo\": {\r\n \"type\": \"nested\",\r\n \"include_in_root\": true,\r\n \"include_in_parent\": true\r\n }\r\n }\r\n}\r\n```\r\n\r\nThen we make sure to only copy to the root if it is different from the parent document. However if you start having more than one level of nesting:\r\n\r\n```\r\n{\r\n \"properties\": {\r\n \"foo\": {\r\n \"type\": \"nested\",\r\n \"include_in_root\": true,\r\n \"include_in_parent\": true,\r\n \"properties\": {\r\n \"bar\": {\r\n \"type\": \"nested\",\r\n \"include_in_root\": true,\r\n \"include_in_parent\": true\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThen `foo.bar` will copy fields to both the root and parent since parent != root. And then foo will copy fields to its parent, which is the root as well. So fields of `foo.bar` will end up twice in the root document.",
"comments": [
{
"body": "Hello,\r\nThe duplicated fields you mean is which part exactly ? I've had a look at the document fields of indexed data and also the objectMappers in the `DocumentMapper`, it seems I didn't find the duplications ?",
"created_at": "2017-10-16T21:13:06Z"
},
{
"body": "Gave this a go in https://github.com/elastic/elasticsearch/pull/27072 :)",
"created_at": "2017-10-24T08:36:51Z"
}
],
"number": 26990,
"title": "More than one level of nesting with include_in_parent + include_in_root can lead to duplicate fields"
} | {
"body": "Fixes #26990 (I think :))\r\n\r\nFixed this by traversing the tree of `Nested` instances in the `Builder` for `RootObjectMapper` since it seemed to be the one place before instantiating the mappers where I can mutate builders and have the full tree of `Nested` available to me.\r\nSo my logic was:\r\n\r\n* The duplicate fields arise from a child nested by more than one level, that is directly included in the root and also transitively included via a chain of parent(s).\r\n => Set all `include_in_root` to `false`, that are already covered by transitive chains of `include_in_parent == true`",
"number": 27072,
"review_comments": [
{
"body": "I don't think it is correct: `included` should also `true` if `nested.isIncludeInRoot` is true? Otherwise we might recurse on a wrong value.",
"created_at": "2017-10-27T07:08:33Z"
},
{
"body": "However I agree that we should test for `if (nested.isNested() && nested.isIncludeInParent())` here.",
"created_at": "2017-10-27T07:09:11Z"
},
{
"body": "should we rename to something like `includeInRootViaParent` to make it clearer what this is about?",
"created_at": "2017-10-27T08:55:51Z"
}
],
"title": "Prevent duplicate fields when mixing parent and root nested includes"
} | {
"commits": [
{
"message": " #26990 prevent duplicate fields when mixing parent and root nested includes"
},
{
"message": "#26990 prevent duplicate fields when mixing parent and root nested includes (follow up)"
},
{
"message": "#26990 prevent duplicate fields when mixing parent and root nested includes (follow up)"
}
],
"files": [
{
"diff": "@@ -74,6 +74,38 @@ public Builder dynamicTemplates(Collection<DynamicTemplate> templates) {\n return this;\n }\n \n+ @Override\n+ public RootObjectMapper build(BuilderContext context) {\n+ fixRedundantIncludes(this, true);\n+ return super.build(context);\n+ }\n+\n+ /**\n+ * Removes redundant root includes in {@link ObjectMapper.Nested} trees to avoid duplicate\n+ * fields on the root mapper when {@code isIncludeInRoot} is {@code true} for a node that is\n+ * itself included into a parent node, for which either {@code isIncludeInRoot} is\n+ * {@code true} or which is transitively included in root by a chain of nodes with\n+ * {@code isIncludeInParent} returning {@code true}.\n+ * @param omb Builder whose children to check.\n+ * @param parentIncluded True iff node is a child of root or a node that is included in\n+ * root\n+ */\n+ private static void fixRedundantIncludes(ObjectMapper.Builder omb, boolean parentIncluded) {\n+ for (Object mapper : omb.mappersBuilders) {\n+ if (mapper instanceof ObjectMapper.Builder) {\n+ ObjectMapper.Builder child = (ObjectMapper.Builder) mapper;\n+ Nested nested = child.nested;\n+ boolean isNested = nested.isNested();\n+ boolean includeInRootViaParent = parentIncluded && isNested && nested.isIncludeInParent();\n+ boolean includedInRoot = isNested && nested.isIncludeInRoot();\n+ if (includeInRootViaParent && includedInRoot) {\n+ child.nested = Nested.newNested(true, false);\n+ }\n+ fixRedundantIncludes(child, includeInRootViaParent || includedInRoot);\n+ }\n+ }\n+ }\n+\n @Override\n protected ObjectMapper createMapper(String name, String fullPath, boolean enabled, Nested nested, Dynamic dynamic,\n Map<String, Mapper> mappers, @Nullable Settings settings) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/RootObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,9 @@\n \n package org.elasticsearch.index.mapper;\n \n+import java.util.HashMap;\n+import java.util.HashSet;\n+import org.apache.lucene.index.IndexableField;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n@@ -333,6 +336,67 @@ public void testMultiRootAndNested1() throws Exception {\n assertThat(doc.docs().get(6).getFields(\"nested1.nested2.field2\").length, equalTo(4));\n }\n \n+ /**\n+ * Checks that multiple levels of nested includes where a node is both directly and transitively\n+ * included in root by {@code include_in_root} and a chain of {@code include_in_parent} does not\n+ * lead to duplicate fields on the root document.\n+ */\n+ public void testMultipleLevelsIncludeRoot1() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"nested1\").field(\"type\", \"nested\").field(\"include_in_root\", true).field(\"include_in_parent\", true).startObject(\"properties\")\n+ .startObject(\"nested2\").field(\"type\", \"nested\").field(\"include_in_root\", true).field(\"include_in_parent\", true)\n+ .endObject().endObject().endObject()\n+ .endObject().endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(\"type\", new CompressedXContent(mapping));\n+\n+ ParsedDocument doc = docMapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().startArray(\"nested1\")\n+ .startObject().startArray(\"nested2\").startObject().field(\"foo\", \"bar\")\n+ .endObject().endArray().endObject().endArray()\n+ .endObject()\n+ .bytes(),\n+ XContentType.JSON));\n+\n+ final Collection<IndexableField> fields = doc.rootDoc().getFields();\n+ assertThat(fields.size(), equalTo(new HashSet<>(fields).size()));\n+ }\n+\n+ /**\n+ * Same as {@link NestedObjectMapperTests#testMultipleLevelsIncludeRoot1()} but tests for the\n+ * case where the transitive {@code include_in_parent} and redundant {@code include_in_root}\n+ * happen on a chain of nodes that starts from a parent node that is not directly connected to\n+ * root by a chain of {@code include_in_parent}, i.e. 
that has {@code include_in_parent} set to\n+ * {@code false} and {@code include_in_root} set to {@code true}.\n+ */\n+ public void testMultipleLevelsIncludeRoot2() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"nested1\").field(\"type\", \"nested\")\n+ .field(\"include_in_root\", true).field(\"include_in_parent\", true).startObject(\"properties\")\n+ .startObject(\"nested2\").field(\"type\", \"nested\")\n+ .field(\"include_in_root\", true).field(\"include_in_parent\", false).startObject(\"properties\")\n+ .startObject(\"nested3\").field(\"type\", \"nested\")\n+ .field(\"include_in_root\", true).field(\"include_in_parent\", true)\n+ .endObject().endObject().endObject().endObject().endObject()\n+ .endObject().endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(\"type\", new CompressedXContent(mapping));\n+\n+ ParsedDocument doc = docMapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().startArray(\"nested1\")\n+ .startObject().startArray(\"nested2\")\n+ .startObject().startArray(\"nested3\").startObject().field(\"foo\", \"bar\")\n+ .endObject().endArray().endObject().endArray().endObject().endArray()\n+ .endObject()\n+ .bytes(),\n+ XContentType.JSON));\n+\n+ final Collection<IndexableField> fields = doc.rootDoc().getFields();\n+ assertThat(fields.size(), equalTo(new HashSet<>(fields).size()));\n+ }\n+\n public void testNestedArrayStrict() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n .startObject(\"nested1\").field(\"type\", \"nested\").field(\"dynamic\", \"strict\").startObject(\"properties\")",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java",
"status": "modified"
}
]
} |
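The heart of the PR #27072 fix is the recursive `fixRedundantIncludes` walk shown in the `RootObjectMapper` diff above: a nested node that is already reachable from the root through an unbroken `include_in_parent` chain does not also need `include_in_root`. A standalone sketch of that traversal over a toy tree follows; the `Node` class and `FixRedundantIncludesSketch` are invented for illustration and are not Elasticsearch types.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a nested-mapping tree, used only to illustrate the PR #27072 traversal.
final class Node {
    final String name;
    boolean includeInParent;
    boolean includeInRoot;
    final List<Node> children = new ArrayList<>();

    Node(String name, boolean includeInParent, boolean includeInRoot) {
        this.name = name;
        this.includeInParent = includeInParent;
        this.includeInRoot = includeInRoot;
    }
}

public final class FixRedundantIncludesSketch {

    /**
     * Clears include_in_root on nodes that are already copied into the root transitively,
     * i.e. via a chain of include_in_parent == true reaching the root (or a node included in root).
     * parentIncluded is true for children of the root or of any node already included in root.
     */
    static void fixRedundantIncludes(Node node, boolean parentIncluded) {
        for (Node child : node.children) {
            boolean includeInRootViaParent = parentIncluded && child.includeInParent;
            boolean includedInRoot = child.includeInRoot;
            if (includeInRootViaParent && includedInRoot) {
                child.includeInRoot = false; // copying again from root would duplicate fields
            }
            fixRedundantIncludes(child, includeInRootViaParent || includedInRoot);
        }
    }

    public static void main(String[] args) {
        // nested1 and nested1.nested2 both set include_in_parent and include_in_root,
        // as in the issue #26990 example.
        Node root = new Node("root", false, false);
        Node nested1 = new Node("nested1", true, true);
        Node nested2 = new Node("nested2", true, true);
        nested1.children.add(nested2);
        root.children.add(nested1);

        fixRedundantIncludes(root, true);
        // Both flags are now false: each node is already copied to the root through its parent chain.
        System.out.println("nested1.include_in_root = " + nested1.includeInRoot); // false
        System.out.println("nested2.include_in_root = " + nested2.includeInRoot); // false
    }
}
```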
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.0.0 RC-1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\n```\r\njava version \"1.8.0_144\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_144-b01)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)\r\n```\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n`Linux elasticsearch-data-hot-002 4.11.0-1011-azure #11-Ubuntu SMP Tue Sep 19 19:03:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWe have a script (In node.js) that runs a scroll request against the server with a configureable `size` parameter.\r\nWhen running the job with `size` set to `20000` (Which is above the `10000` limit), Elasticsearch crashed.\r\n\r\nImportant notes:\r\n1. we have `4230` active shards (`2115` primaries), all hosted on 3 machines with 64GB each.\r\n2. the search request was directed to all indices, but was using a `@timestamp` range filter that would have \"skipped\" most of the shards (Should only search in 6 shards)\r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1. Have a cluster with a lot of shards\r\n 2. Run a 20k sized scroll search\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-10-18T10:35:35,145][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer\r\n\tat org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)\r\n\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:69)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n[2017-10-18T10:35:35,143][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-data-hot-002] [.kibana][0], node[B93cHubSRa2i3ogHi7NoMA], [P], s[STARTED], a[id=9K0VQRssSkahsPMqylNdbg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false], types=[customers-raw-*], routing='null', preference='null', requestCache=null, scroll=Scroll{keepAlive=1m}, maxConcurrentShardRequests=25, batchedReduceSize=512, preFilterShardSize=128, source={\r\n \"size\" : 20000,\r\n \"query\" : {\r\n \"bool\" : {\r\n \"filter\" : [\r\n {\r\n \"range\" : {\r\n \"@timestamp\" : {\r\n \"from\" : 0,\r\n \"to\" : 600000,\r\n \"include_lower\" : true,\r\n \"include_upper\" : false,\r\n \"format\" : \"epoch_millis\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n }\r\n ],\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [elasticsearch-data-cold-001][10.0.0.37:9300][indices:data/read/search[phase/query]]\r\nCaused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Batch size is too large, size must be less than or equal to: [10000] but was [20000]. 
Scroll batch sizes cost as much memory as result windows so they are controlled by the [index.max_result_window] index level setting.\r\n\tat org.elasticsearch.search.DefaultSearchContext.preProcess(DefaultSearchContext.java:212) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.query.QueryPhase.preProcess(QueryPhase.java:90) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:542) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:506) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:302) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:288) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:284) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.search.SearchService$3.doRun(SearchService.java:964) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\n[2017-10-18T10:35:35,183][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elasticsearch-data-hot-002] fatal error in thread [Thread-14593], exiting\r\njava.lang.StackOverflowError: null\r\n\tat org.elasticsearch.cluster.routing.PlainShardsIterator.remaining(PlainShardsIterator.java:50) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:197) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat [... this four-frame cycle (skipShard -> maybeExecuteNext -> successfulShardExecution -> skipShard) repeats verbatim many more times in the stack trace ...]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:148) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:208) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.skipShard(InitialSearchPhase.java:324) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.skipShard(AbstractSearchAsyncAction.java:321) ~[elasticsearch-6.0.0-rc1.jar:6.0.0-rc1]\r\n```\r\n",
"comments": [
{
"body": "Please ask this question on discuss.elastic.co where we can give a better support.\r\nThis space is only for confirmed issues or feature requests which have been discussed on discuss.elastic.co.\r\n\r\nThanks!\r\n\r\nHere the error is explicit:\r\n\r\n> Batch size is too large, size must be less than or equal to: [10000] but was [20000]\r\n\r\n",
"created_at": "2017-10-18T12:04:06Z"
},
{
"body": "This is not a question. This is a bug.",
"created_at": "2017-10-18T12:16:08Z"
},
{
"body": "@dadoonet I know that it said the scroll size is too big, but why crash the entire process?",
"created_at": "2017-10-18T12:18:43Z"
},
{
"body": "Yeah I'm sorry. I read it too fast and missed that super important part: \r\n\r\n> Elasticsearch crashed.\r\n\r\n(shame on me)",
"created_at": "2017-10-18T12:19:43Z"
},
{
"body": "@shaharmor It’s a bug, we are recursing too deeply when there are a lot of skipped shards.",
"created_at": "2017-10-18T12:44:03Z"
}
],
"number": 27042,
"title": "[6RC1] - Fatal error when trying to do a big scroll request"
} | {
"body": "When a search is executing locally over many shards, we can stack overflow during query phase execution. This happens due to callbacks that occur after a phase completes for a shard and we move to the same phase on another shard. If all the shards for the query are local to the local node then we will never go async and these callbacks will end up as recursive calls. With sufficiently many shards, this will end up as a stack overflow. This commit addresses this by truncating the stack by forking to another thread on the executor for the phase.\r\n\r\nCloses #27042\r\n",
"number": 27069,
"review_comments": [
{
"body": "this can be an assertion, right? no need to check since we checked above?",
"created_at": "2017-10-23T19:18:20Z"
},
{
"body": "I also think we should make this an assertion",
"created_at": "2017-10-23T19:18:34Z"
},
{
"body": "why do you check here?",
"created_at": "2017-10-23T19:19:12Z"
},
{
"body": "can we use `== false`?",
"created_at": "2017-10-23T19:20:13Z"
},
{
"body": "It’s leftover from before I decided to split the shard groups into those that are skipped and those that are not. I pushed a commit to address this one apparently while you were mid-review but it looks like you caught more. I will upgrade to assertions.",
"created_at": "2017-10-23T19:37:50Z"
},
{
"body": "Yes.",
"created_at": "2017-10-23T19:38:05Z"
},
{
"body": "Yes, it’s leftover.",
"created_at": "2017-10-23T19:38:36Z"
},
{
"body": "Yes.",
"created_at": "2017-10-23T19:39:04Z"
}
],
"title": "Avoid stack overflow on search phases"
} | {
"commits": [
{
"message": "Avoid stack overflow on search phases\n\nWhen a search is executing locally over many shards, we can stack\noverflow during query phase execution. This happens due to callbacks\nthat occur after a phase completes for a shard and we move to the same\nphase on another shard. If all the shards for the query are local to the\nlocal node then we will never go async and these callbacks will end up\nas recursive calls. With sufficiently many shards, this will end up as a\nstack overflow. This commit addresses this by truncating the stack by\nforking to another thread on the executor for the phase."
},
{
"message": "Only fork if needed"
},
{
"message": "Remove leftover"
},
{
"message": "Remove more leftovers"
},
{
"message": "Beef up testing"
},
{
"message": "<3"
},
{
"message": "Fix comment"
},
{
"message": "Fix NPE in test"
},
{
"message": "Remove import"
},
{
"message": "Randomize max concurrency"
},
{
"message": "Merge branch 'master' into initial-search-phase-async-on-next\n\n* master:\n Timed runnable should delegate to abstract runnable\n Expose adaptive replica selection stats in /_nodes/stats API\n Remove dangerous `ByteBufStreamInput` methods (#27076)\n Blacklist Gradle 4.2 and above\n Remove duplicated test (#27091)\n Update numbers to reflect 4-byte UTF-8-encoded characters (#27083)\n test: avoid generating duplicate multiple fields (#27080)\n Reduce the default number of cached queries. (#26949)\n removed unused import\n ingest: date processor should not fail if timestamp is specified as json number"
},
{
"message": "wip"
},
{
"message": "fix clients"
},
{
"message": "wip"
},
{
"message": "Magic"
},
{
"message": "Docs and be more careful with the client"
},
{
"message": "Merge branch 'master' into initial-search-phase-async-on-next\n\n* master:\n Ignore .DS_Store files on macOS\n Docs: Fix ingest geoip config location (#27110)\n Adjust SHA-512 supported format on plugin install\n Make ShardSearchTarget optional when parsing ShardSearchFailure (#27078)\n [Docs] Clarify mapping `index` option default (#27104)\n Decouple BulkProcessor from ThreadPool (#26727)\n Stats to record how often the ClusterState diff mechanism is used successfully (#26973)\n Tie-break shard path decision based on total number of shards on path (#27039)"
},
{
"message": "remove file"
}
],
"files": [
{
"diff": "@@ -77,7 +77,7 @@ protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportS\n ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,\n TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,\n SearchTask task, SearchPhaseResults<Result> resultConsumer, int maxConcurrentShardRequests) {\n- super(name, request, shardsIts, logger, maxConcurrentShardRequests);\n+ super(name, request, shardsIts, logger, maxConcurrentShardRequests, executor);\n this.timeProvider = timeProvider;\n this.logger = logger;\n this.searchTransportService = searchTransportService;",
"filename": "core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java",
"status": "modified"
},
{
"diff": "@@ -26,12 +26,15 @@\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n import org.elasticsearch.search.SearchPhaseResult;\n import org.elasticsearch.search.SearchShardTarget;\n-import org.elasticsearch.transport.ConnectTransportException;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.concurrent.Executor;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.stream.Stream;\n \n@@ -45,45 +48,58 @@\n */\n abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends SearchPhase {\n private final SearchRequest request;\n+ private final GroupShardsIterator<SearchShardIterator> toSkipShardsIts;\n private final GroupShardsIterator<SearchShardIterator> shardsIts;\n private final Logger logger;\n private final int expectedTotalOps;\n private final AtomicInteger totalOps = new AtomicInteger();\n private final AtomicInteger shardExecutionIndex = new AtomicInteger(0);\n private final int maxConcurrentShardRequests;\n+ private final Executor executor;\n \n InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator<SearchShardIterator> shardsIts, Logger logger,\n- int maxConcurrentShardRequests) {\n+ int maxConcurrentShardRequests, Executor executor) {\n super(name);\n this.request = request;\n- this.shardsIts = shardsIts;\n+ final List<SearchShardIterator> toSkipIterators = new ArrayList<>();\n+ final List<SearchShardIterator> iterators = new ArrayList<>();\n+ for (final SearchShardIterator iterator : shardsIts) {\n+ if (iterator.skip()) {\n+ toSkipIterators.add(iterator);\n+ } else {\n+ iterators.add(iterator);\n+ }\n+ }\n+ this.toSkipShardsIts = new GroupShardsIterator<>(toSkipIterators);\n+ this.shardsIts = new GroupShardsIterator<>(iterators);\n this.logger = logger;\n // we need to add 1 for non active partition, since we count it in the total. This means for each shard in the iterator we sum up\n // it's number of active shards but use 1 as the default if no replica of a shard is active at this point.\n // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result\n // we process hence we add one for the non active partition here.\n this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty();\n this.maxConcurrentShardRequests = Math.min(maxConcurrentShardRequests, shardsIts.size());\n+ this.executor = executor;\n }\n \n private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId,\n final SearchShardIterator shardIt, Exception e) {\n // we always add the shard failure for a specific shard instance\n // we do make sure to clean it on a successful response from a shard\n SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId(), shardIt.getClusterAlias(),\n- shardIt.getOriginalIndices());\n+ shardIt.getOriginalIndices());\n onShardFailure(shardIndex, shardTarget, e);\n \n if (totalOps.incrementAndGet() == expectedTotalOps) {\n if (logger.isDebugEnabled()) {\n if (e != null && !TransportActions.isShardNotAvailableException(e)) {\n logger.debug(\n- (Supplier<?>) () -> new ParameterizedMessage(\n- \"{}: Failed to execute [{}]\",\n- shard != null ? 
shard.shortSummary() :\n- shardIt.shardId(),\n- request),\n- e);\n+ (Supplier<?>) () -> new ParameterizedMessage(\n+ \"{}: Failed to execute [{}]\",\n+ shard != null ? shard.shortSummary() :\n+ shardIt.shardId(),\n+ request),\n+ e);\n } else if (logger.isTraceEnabled()) {\n logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"{}: Failed to execute [{}]\", shard, request), e);\n }\n@@ -94,32 +110,27 @@ private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard,\n final boolean lastShard = nextShard == null;\n // trace log this exception\n logger.trace(\n- (Supplier<?>) () -> new ParameterizedMessage(\n- \"{}: Failed to execute [{}] lastShard [{}]\",\n- shard != null ? shard.shortSummary() : shardIt.shardId(),\n- request,\n- lastShard),\n- e);\n+ (Supplier<?>) () -> new ParameterizedMessage(\n+ \"{}: Failed to execute [{}] lastShard [{}]\",\n+ shard != null ? shard.shortSummary() : shardIt.shardId(),\n+ request,\n+ lastShard),\n+ e);\n if (!lastShard) {\n- try {\n- performPhaseOnShard(shardIndex, shardIt, nextShard);\n- } catch (Exception inner) {\n- inner.addSuppressed(e);\n- onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, inner);\n- }\n+ performPhaseOnShard(shardIndex, shardIt, nextShard);\n } else {\n maybeExecuteNext(); // move to the next execution if needed\n // no more shards active, add a failure\n if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception\n if (e != null && !TransportActions.isShardNotAvailableException(e)) {\n logger.debug(\n- (Supplier<?>) () -> new ParameterizedMessage(\n- \"{}: Failed to execute [{}] lastShard [{}]\",\n- shard != null ? shard.shortSummary() :\n- shardIt.shardId(),\n- request,\n- lastShard),\n- e);\n+ (Supplier<?>) () -> new ParameterizedMessage(\n+ \"{}: Failed to execute [{}] lastShard [{}]\",\n+ shard != null ? 
shard.shortSummary() :\n+ shardIt.shardId(),\n+ request,\n+ lastShard),\n+ e);\n }\n }\n }\n@@ -128,53 +139,90 @@ private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard,\n \n @Override\n public final void run() throws IOException {\n- boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);\n- assert success;\n- for (int i = 0; i < maxConcurrentShardRequests; i++) {\n- SearchShardIterator shardRoutings = shardsIts.get(i);\n- if (shardRoutings.skip()) {\n- skipShard(shardRoutings);\n- } else {\n- performPhaseOnShard(i, shardRoutings, shardRoutings.nextOrNull());\n+ for (final SearchShardIterator iterator : toSkipShardsIts) {\n+ assert iterator.skip();\n+ skipShard(iterator);\n+ }\n+ if (shardsIts.size() > 0) {\n+ int maxConcurrentShardRequests = Math.min(this.maxConcurrentShardRequests, shardsIts.size());\n+ final boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);\n+ assert success;\n+ for (int index = 0; index < maxConcurrentShardRequests; index++) {\n+ final SearchShardIterator shardRoutings = shardsIts.get(index);\n+ assert shardRoutings.skip() == false;\n+ performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull());\n }\n }\n }\n \n private void maybeExecuteNext() {\n final int index = shardExecutionIndex.getAndIncrement();\n if (index < shardsIts.size()) {\n- SearchShardIterator shardRoutings = shardsIts.get(index);\n- if (shardRoutings.skip()) {\n- skipShard(shardRoutings);\n- } else {\n- performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull());\n- }\n+ final SearchShardIterator shardRoutings = shardsIts.get(index);\n+ performPhaseOnShard(index, shardRoutings, shardRoutings.nextOrNull());\n }\n }\n \n \n+ private void maybeFork(final Thread thread, final Runnable runnable) {\n+ if (thread == Thread.currentThread()) {\n+ fork(runnable);\n+ } else {\n+ runnable.run();\n+ }\n+ }\n+\n+ private void fork(final Runnable runnable) {\n+ executor.execute(new AbstractRunnable() {\n+ @Override\n+ public void onFailure(Exception e) {\n+\n+ }\n+\n+ @Override\n+ protected void doRun() throws Exception {\n+ runnable.run();\n+ }\n+\n+ @Override\n+ public boolean isForceExecution() {\n+ // we can not allow a stuffed queue to reject execution here\n+ return true;\n+ }\n+ });\n+ }\n+\n private void performPhaseOnShard(final int shardIndex, final SearchShardIterator shardIt, final ShardRouting shard) {\n+ /*\n+ * We capture the thread that this phase is starting on. When we are called back after executing the phase, we are either on the\n+ * same thread (because we never went async, or the same thread was selected from the thread pool) or a different thread. If we\n+ * continue on the same thread in the case that we never went async and this happens repeatedly we will end up recursing deeply and\n+ * could stack overflow. To prevent this, we fork if we are called back on the same thread that execution started on and otherwise\n+ * we can continue (cf. 
InitialSearchPhase#maybeFork).\n+ */\n+ final Thread thread = Thread.currentThread();\n if (shard == null) {\n- onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId()));\n+ fork(() -> onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())));\n } else {\n try {\n executePhaseOnShard(shardIt, shard, new SearchActionListener<FirstResult>(new SearchShardTarget(shard.currentNodeId(),\n shardIt.shardId(), shardIt.getClusterAlias(), shardIt.getOriginalIndices()), shardIndex) {\n @Override\n public void innerOnResponse(FirstResult result) {\n- onShardResult(result, shardIt);\n+ maybeFork(thread, () -> onShardResult(result, shardIt));\n }\n \n @Override\n public void onFailure(Exception t) {\n- onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t);\n+ maybeFork(thread, () -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t));\n }\n });\n- } catch (ConnectTransportException | IllegalArgumentException ex) {\n- // we are getting the connection early here so we might run into nodes that are not connected. in that case we move on to\n- // the next shard. previously when using discovery nodes here we had a special case for null when a node was not connected\n- // at all which is not not needed anymore.\n- onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex);\n+ } catch (final Exception e) {\n+ /*\n+ * It is possible to run into connection exceptions here because we are getting the connection early and might run in to\n+ * nodes that are not connected. In this case, on shard failure will move us to the next shard copy.\n+ */\n+ fork(() -> onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, e));\n }\n }\n }\n@@ -204,7 +252,7 @@ private void successfulShardExecution(SearchShardIterator shardsIt) {\n } else if (xTotalOps > expectedTotalOps) {\n throw new AssertionError(\"unexpected higher total ops [\" + xTotalOps + \"] compared to expected [\"\n + expectedTotalOps + \"]\");\n- } else {\n+ } else if (shardsIt.skip() == false) {\n maybeExecuteNext();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java",
"status": "modified"
},
{
"diff": "@@ -24,9 +24,12 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n+import org.elasticsearch.search.SearchPhaseResult;\n+import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.internal.AliasFilter;\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.test.ESTestCase;\n@@ -38,11 +41,12 @@\n import java.util.Map;\n import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n import java.util.concurrent.atomic.AtomicReference;\n \n public class CanMatchPreFilterSearchPhaseTests extends ESTestCase {\n \n-\n public void testFilterShards() throws InterruptedException {\n \n final TransportSearchAction.SearchTimeProvider timeProvider = new TransportSearchAction.SearchTimeProvider(0, System.nanoTime(),\n@@ -185,6 +189,7 @@ public void testLotsOfShards() throws InterruptedException {\n lookup.put(\"node1\", new SearchAsyncActionTests.MockConnection(primaryNode));\n lookup.put(\"node2\", new SearchAsyncActionTests.MockConnection(replicaNode));\n \n+\n final SearchTransportService searchTransportService =\n new SearchTransportService(Settings.builder().put(\"search.remote.connect\", false).build(), null, null) {\n @Override\n@@ -197,11 +202,11 @@ public void sendCanMatch(\n }\n };\n \n- final AtomicReference<GroupShardsIterator<SearchShardIterator>> result = new AtomicReference<>();\n final CountDownLatch latch = new CountDownLatch(1);\n final OriginalIndices originalIndices = new OriginalIndices(new String[]{\"idx\"}, IndicesOptions.strictExpandOpenAndForbidClosed());\n final GroupShardsIterator<SearchShardIterator> shardsIter =\n- SearchAsyncActionTests.getShardsIter(\"idx\", originalIndices, 2048, randomBoolean(), primaryNode, replicaNode);\n+ SearchAsyncActionTests.getShardsIter(\"idx\", originalIndices, 4096, randomBoolean(), primaryNode, replicaNode);\n+ final ExecutorService executor = Executors.newFixedThreadPool(randomIntBetween(1, Runtime.getRuntime().availableProcessors()));\n final CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase(\n logger,\n searchTransportService,\n@@ -215,16 +220,38 @@ public void sendCanMatch(\n timeProvider,\n 0,\n null,\n- (iter) -> new SearchPhase(\"test\") {\n+ (iter) -> new InitialSearchPhase<SearchPhaseResult>(\"test\", null, iter, logger, randomIntBetween(1, 32), executor) {\n @Override\n- public void run() throws IOException {\n- result.set(iter);\n+ void onPhaseDone() {\n latch.countDown();\n- }});\n+ }\n+\n+ @Override\n+ void onShardFailure(final int shardIndex, final SearchShardTarget shardTarget, final Exception ex) {\n+\n+ }\n+\n+ @Override\n+ void onShardSuccess(final SearchPhaseResult result) {\n+\n+ }\n+\n+ @Override\n+ protected void executePhaseOnShard(\n+ final SearchShardIterator shardIt,\n+ final ShardRouting shard,\n+ final SearchActionListener<SearchPhaseResult> listener) {\n+ if (randomBoolean()) {\n+ listener.onResponse(new SearchPhaseResult() {});\n+ } else {\n+ listener.onFailure(new Exception(\"failure\"));\n+ }\n+ }\n+ });\n \n canMatchPhase.start();\n 
latch.await();\n-\n+ executor.shutdown();\n }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -50,6 +50,8 @@\n import java.util.Set;\n import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n \n@@ -285,6 +287,7 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori\n lookup.put(primaryNode.getId(), new MockConnection(primaryNode));\n lookup.put(replicaNode.getId(), new MockConnection(replicaNode));\n Map<String, AliasFilter> aliasFilters = Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY));\n+ final ExecutorService executor = Executors.newFixedThreadPool(randomIntBetween(1, Runtime.getRuntime().availableProcessors()));\n AbstractSearchAsyncAction asyncAction =\n new AbstractSearchAsyncAction<TestSearchPhaseResult>(\n \"test\",\n@@ -295,7 +298,7 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori\n return lookup.get(node); },\n aliasFilters,\n Collections.emptyMap(),\n- null,\n+ executor,\n request,\n responseListener,\n shardsIter,\n@@ -349,6 +352,7 @@ public void run() throws IOException {\n } else {\n assertTrue(nodeToContextMap.get(replicaNode).toString(), nodeToContextMap.get(replicaNode).isEmpty());\n }\n+ executor.shutdown();\n }\n \n static GroupShardsIterator<SearchShardIterator> getShardsIter(String index, OriginalIndices originalIndices, int numShards,",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,15 @@ run {\n setting 'reindex.remote.whitelist', '127.0.0.1:*'\n }\n \n+test {\n+ /*\n+ * We have to disable setting the number of available processors as tests in the\n+ * same JVM randomize processors and will step on each other if we allow them to\n+ * set the number of available processors as it's set-once in Netty.\n+ */\n+ systemProperty 'es.set.netty.runtime.available.processors', 'false'\n+}\n+\n dependencies {\n compile \"org.elasticsearch.client:elasticsearch-rest-client:${version}\"\n // for http - testing reindex from remote",
"filename": "modules/reindex/build.gradle",
"status": "modified"
},
{
"diff": "@@ -26,23 +26,25 @@\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.bulk.Retry;\n+import org.elasticsearch.client.Client;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.TransportAddress;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.plugins.Plugin;\n-import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.Netty4Plugin;\n import org.junit.After;\n-import org.junit.Before;\n \n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collection;\n import java.util.List;\n import java.util.concurrent.CyclicBarrier;\n+import java.util.function.Function;\n \n import static java.util.Collections.emptyMap;\n import static org.elasticsearch.index.reindex.ReindexTestCase.matcher;\n@@ -51,32 +53,15 @@\n import static org.hamcrest.Matchers.hasSize;\n \n /**\n- * Integration test for retry behavior. Useful because retrying relies on the way that the rest of Elasticsearch throws exceptions and unit\n- * tests won't verify that.\n+ * Integration test for retry behavior. Useful because retrying relies on the way that the\n+ * rest of Elasticsearch throws exceptions and unit tests won't verify that.\n */\n-public class RetryTests extends ESSingleNodeTestCase {\n+public class RetryTests extends ESIntegTestCase {\n \n private static final int DOC_COUNT = 20;\n \n private List<CyclicBarrier> blockedExecutors = new ArrayList<>();\n \n-\n- @Before\n- public void setUp() throws Exception {\n- super.setUp();\n- createIndex(\"source\");\n- // Build the test data. Don't use indexRandom because that won't work consistently with such small thread pools.\n- BulkRequestBuilder bulk = client().prepareBulk();\n- for (int i = 0; i < DOC_COUNT; i++) {\n- bulk.add(client().prepareIndex(\"source\", \"test\").setSource(\"foo\", \"bar \" + i));\n- }\n-\n- Retry retry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.exponentialBackoff(), client().threadPool());\n- BulkResponse response = retry.withBackoff(client()::bulk, bulk.request(), client().settings()).actionGet();\n- assertFalse(response.buildFailureMessage(), response.hasFailures());\n- client().admin().indices().prepareRefresh(\"source\").get();\n- }\n-\n @After\n public void forceUnblockAllExecutors() {\n for (CyclicBarrier barrier: blockedExecutors) {\n@@ -85,8 +70,15 @@ public void forceUnblockAllExecutors() {\n }\n \n @Override\n- protected Collection<Class<? extends Plugin>> getPlugins() {\n- return pluginList(\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return Arrays.asList(\n+ ReindexPlugin.class,\n+ Netty4Plugin.class);\n+ }\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> transportClientPlugins() {\n+ return Arrays.asList(\n ReindexPlugin.class,\n Netty4Plugin.class);\n }\n@@ -95,63 +87,123 @@ protected Collection<Class<? 
extends Plugin>> getPlugins() {\n * Lower the queue sizes to be small enough that both bulk and searches will time out and have to be retried.\n */\n @Override\n- protected Settings nodeSettings() {\n- Settings.Builder settings = Settings.builder().put(super.nodeSettings());\n- // Use pools of size 1 so we can block them\n- settings.put(\"thread_pool.bulk.size\", 1);\n- settings.put(\"thread_pool.search.size\", 1);\n- // Use queues of size 1 because size 0 is broken and because search requests need the queue to function\n- settings.put(\"thread_pool.bulk.queue_size\", 1);\n- settings.put(\"thread_pool.search.queue_size\", 1);\n- // Enable http so we can test retries on reindex from remote. In this case the \"remote\" cluster is just this cluster.\n- settings.put(NetworkModule.HTTP_ENABLED.getKey(), true);\n- // Whitelist reindexing from the http host we're going to use\n- settings.put(TransportReindexAction.REMOTE_CLUSTER_WHITELIST.getKey(), \"127.0.0.1:*\");\n- return settings.build();\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return Settings.builder().put(super.nodeSettings(nodeOrdinal)).put(nodeSettings()).build();\n+ }\n+\n+ final Settings nodeSettings() {\n+ return Settings.builder()\n+ // enable HTTP so we can test retries on reindex from remote; in this case the \"remote\" cluster is just this cluster\n+ .put(NetworkModule.HTTP_ENABLED.getKey(), true)\n+ // whitelist reindexing from the HTTP host we're going to use\n+ .put(TransportReindexAction.REMOTE_CLUSTER_WHITELIST.getKey(), \"127.0.0.1:*\")\n+ .build();\n }\n \n public void testReindex() throws Exception {\n- testCase(ReindexAction.NAME, ReindexAction.INSTANCE.newRequestBuilder(client()).source(\"source\").destination(\"dest\"),\n+ testCase(\n+ ReindexAction.NAME,\n+ client -> ReindexAction.INSTANCE.newRequestBuilder(client).source(\"source\").destination(\"dest\"),\n matcher().created(DOC_COUNT));\n }\n \n public void testReindexFromRemote() throws Exception {\n- NodeInfo nodeInfo = client().admin().cluster().prepareNodesInfo().get().getNodes().get(0);\n- TransportAddress address = nodeInfo.getHttp().getAddress().publishAddress();\n- RemoteInfo remote = new RemoteInfo(\"http\", address.getAddress(), address.getPort(), new BytesArray(\"{\\\"match_all\\\":{}}\"), null,\n- null, emptyMap(), RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT);\n- ReindexRequestBuilder request = ReindexAction.INSTANCE.newRequestBuilder(client()).source(\"source\").destination(\"dest\")\n- .setRemoteInfo(remote);\n- testCase(ReindexAction.NAME, request, matcher().created(DOC_COUNT));\n+ Function<Client, AbstractBulkByScrollRequestBuilder<?, ?>> function = client -> {\n+ /*\n+ * Use the master node for the reindex from remote because that node\n+ * doesn't have a copy of the data on it.\n+ */\n+ NodeInfo masterNode = null;\n+ for (NodeInfo candidate : client.admin().cluster().prepareNodesInfo().get().getNodes()) {\n+ if (candidate.getNode().isMasterNode()) {\n+ masterNode = candidate;\n+ }\n+ }\n+ assertNotNull(masterNode);\n+\n+ TransportAddress address = masterNode.getHttp().getAddress().publishAddress();\n+ RemoteInfo remote = new RemoteInfo(\"http\", address.getAddress(), address.getPort(), new BytesArray(\"{\\\"match_all\\\":{}}\"), null,\n+ null, emptyMap(), RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT);\n+ ReindexRequestBuilder request = ReindexAction.INSTANCE.newRequestBuilder(client).source(\"source\").destination(\"dest\")\n+ .setRemoteInfo(remote);\n+ return request;\n+ };\n+ 
testCase(ReindexAction.NAME, function, matcher().created(DOC_COUNT));\n }\n \n public void testUpdateByQuery() throws Exception {\n- testCase(UpdateByQueryAction.NAME, UpdateByQueryAction.INSTANCE.newRequestBuilder(client()).source(\"source\"),\n+ testCase(UpdateByQueryAction.NAME, client -> UpdateByQueryAction.INSTANCE.newRequestBuilder(client).source(\"source\"),\n matcher().updated(DOC_COUNT));\n }\n \n public void testDeleteByQuery() throws Exception {\n- testCase(DeleteByQueryAction.NAME, DeleteByQueryAction.INSTANCE.newRequestBuilder(client()).source(\"source\")\n+ testCase(DeleteByQueryAction.NAME, client -> DeleteByQueryAction.INSTANCE.newRequestBuilder(client).source(\"source\")\n .filter(QueryBuilders.matchAllQuery()), matcher().deleted(DOC_COUNT));\n }\n \n- private void testCase(String action, AbstractBulkByScrollRequestBuilder<?, ?> request, BulkIndexByScrollResponseMatcher matcher)\n+ private void testCase(\n+ String action,\n+ Function<Client, AbstractBulkByScrollRequestBuilder<?, ?>> request,\n+ BulkIndexByScrollResponseMatcher matcher)\n throws Exception {\n+ /*\n+ * These test cases work by stuffing the search and bulk queues of a single node and\n+ * making sure that we read and write from that node. Because of some \"fun\" with the\n+ * way that searches work, we need at least one more node to act as the coordinating\n+ * node for the search request. If we didn't do this then the searches would get stuck\n+ * in the queue anyway because we force queue portions of the coordinating node's\n+ * actions. This is not a big deal in normal operations but a real pain when you are\n+ * intentionally stuffing queues hoping for a failure.\n+ */\n+\n+ final Settings nodeSettings = Settings.builder()\n+ // use pools of size 1 so we can block them\n+ .put(\"thread_pool.bulk.size\", 1)\n+ .put(\"thread_pool.search.size\", 1)\n+ // use queues of size 1 because size 0 is broken and because search requests need the queue to function\n+ .put(\"thread_pool.bulk.queue_size\", 1)\n+ .put(\"thread_pool.search.queue_size\", 1)\n+ .put(\"node.attr.color\", \"blue\")\n+ .build();\n+ final String node = internalCluster().startDataOnlyNode(nodeSettings);\n+ final Settings indexSettings =\n+ Settings.builder()\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"index.number_of_replicas\", 0)\n+ .put(\"index.routing.allocation.include.color\", \"blue\")\n+ .build();\n+\n+ // Create the source index on the node with small thread pools so we can block them.\n+ client().admin().indices().prepareCreate(\"source\").setSettings(indexSettings).execute().actionGet();\n+ // Not all test cases use the dest index but those that do require that it be on the node will small thread pools\n+ client().admin().indices().prepareCreate(\"dest\").setSettings(indexSettings).execute().actionGet();\n+ // Build the test data. 
Don't use indexRandom because that won't work consistently with such small thread pools.\n+ BulkRequestBuilder bulk = client().prepareBulk();\n+ for (int i = 0; i < DOC_COUNT; i++) {\n+ bulk.add(client().prepareIndex(\"source\", \"test\").setSource(\"foo\", \"bar \" + i));\n+ }\n+\n+ Retry retry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.exponentialBackoff(), client().threadPool());\n+ BulkResponse initialBulkResponse = retry.withBackoff(client()::bulk, bulk.request(), client().settings()).actionGet();\n+ assertFalse(initialBulkResponse.buildFailureMessage(), initialBulkResponse.hasFailures());\n+ client().admin().indices().prepareRefresh(\"source\").get();\n+\n logger.info(\"Blocking search\");\n- CyclicBarrier initialSearchBlock = blockExecutor(ThreadPool.Names.SEARCH);\n+ CyclicBarrier initialSearchBlock = blockExecutor(ThreadPool.Names.SEARCH, node);\n \n+ AbstractBulkByScrollRequestBuilder<?, ?> builder = request.apply(internalCluster().masterClient());\n // Make sure we use more than one batch so we have to scroll\n- request.source().setSize(DOC_COUNT / randomIntBetween(2, 10));\n+ builder.source().setSize(DOC_COUNT / randomIntBetween(2, 10));\n \n logger.info(\"Starting request\");\n- ActionFuture<BulkByScrollResponse> responseListener = request.execute();\n+ ActionFuture<BulkByScrollResponse> responseListener = builder.execute();\n \n try {\n logger.info(\"Waiting for search rejections on the initial search\");\n assertBusy(() -> assertThat(taskStatus(action).getSearchRetries(), greaterThan(0L)));\n \n logger.info(\"Blocking bulk and unblocking search so we start to get bulk rejections\");\n- CyclicBarrier bulkBlock = blockExecutor(ThreadPool.Names.BULK);\n+ CyclicBarrier bulkBlock = blockExecutor(ThreadPool.Names.BULK, node);\n initialSearchBlock.await();\n \n logger.info(\"Waiting for bulk rejections\");\n@@ -161,7 +213,7 @@ private void testCase(String action, AbstractBulkByScrollRequestBuilder<?, ?> re\n long initialSearchRejections = taskStatus(action).getSearchRetries();\n \n logger.info(\"Blocking search and unblocking bulk so we should get search rejections for the scroll\");\n- CyclicBarrier scrollBlock = blockExecutor(ThreadPool.Names.SEARCH);\n+ CyclicBarrier scrollBlock = blockExecutor(ThreadPool.Names.SEARCH, node);\n bulkBlock.await();\n \n logger.info(\"Waiting for search rejections for the scroll\");\n@@ -187,8 +239,8 @@ private void testCase(String action, AbstractBulkByScrollRequestBuilder<?, ?> re\n * Blocks the named executor by getting its only thread running a task blocked on a CyclicBarrier and fills the queue with a noop task.\n * So requests to use this queue should get {@link EsRejectedExecutionException}s.\n */\n- private CyclicBarrier blockExecutor(String name) throws Exception {\n- ThreadPool threadPool = getInstanceFromNode(ThreadPool.class);\n+ private CyclicBarrier blockExecutor(String name, String node) throws Exception {\n+ ThreadPool threadPool = internalCluster().getInstance(ThreadPool.class, node);\n CyclicBarrier barrier = new CyclicBarrier(2);\n logger.info(\"Blocking the [{}] executor\", name);\n threadPool.executor(name).execute(() -> {\n@@ -211,6 +263,11 @@ private CyclicBarrier blockExecutor(String name) throws Exception {\n * Fetch the status for a task of type \"action\". Fails if there aren't exactly one of that type of task running.\n */\n private BulkByScrollTask.Status taskStatus(String action) {\n+ /*\n+ * We always use the master client because we always start the test requests on the\n+ * master. 
We do this simply to make sure that the test request is not started on the\n+ * node who's queue we're manipulating.\n+ */\n ListTasksResponse response = client().admin().cluster().prepareListTasks().setActions(action).setDetailed(true).get();\n assertThat(response.getTasks(), hasSize(1));\n return (BulkByScrollTask.Status) response.getTasks().get(0).getStatus();",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java",
"status": "modified"
}
]
} |
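The fix in PR #27069 above comes down to one pattern: capture the thread on which a shard's phase was started and, when the completion callback arrives on that same thread (meaning the call never went async), hand the continuation to an executor instead of recursing in place. Below is a minimal, self-contained sketch of that pattern, not the actual `InitialSearchPhase` code; the class and names (`ForkOnSameThreadDemo`, `processNextShard`, and `executePhaseOnShard` as a plain `Consumer`) are hypothetical stand-ins.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

public class ForkOnSameThreadDemo {

    // Enough iterations to overflow the stack if every completion recursed inline.
    private static final int SHARDS = 100_000;

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        AtomicInteger nextShard = new AtomicInteger();

        // Stand-in for a shard-level phase that completes synchronously, i.e. a shard
        // that is local to the coordinating node and never goes async.
        Consumer<Runnable> executePhaseOnShard = onResponse -> onResponse.run();

        Runnable[] processNextShard = new Runnable[1];
        processNextShard[0] = () -> {
            if (nextShard.getAndIncrement() >= SHARDS) {
                System.out.println("processed " + SHARDS + " shards with a bounded stack");
                executor.shutdown();
                return;
            }
            final Thread startingThread = Thread.currentThread();
            executePhaseOnShard.accept(() -> {
                if (Thread.currentThread() == startingThread) {
                    // Callback arrived on the thread that issued the request: the call
                    // never went async, so fork to the executor to truncate the stack.
                    executor.execute(processNextShard[0]);
                } else {
                    // We were already handed off to another thread; continue inline.
                    processNextShard[0].run();
                }
            });
        };

        executor.execute(processNextShard[0]);
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Without the same-thread check, the inline `run()` branch would add a few frames per shard and would typically overflow the default thread stack long before 100,000 iterations; with the check, every synchronous completion is bounced off the executor, so the stack depth stays constant. The real change also marks the forked task as force-execution so a saturated queue cannot reject it (see the `isForceExecution` override in the diff above).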
{
"body": "**Elasticsearch version**\r\n```\r\ncurl -s localhost:9200/\r\n{\r\n \"name\" : \"es-c2-m1\",\r\n \"cluster_name\" : \"es-c2\",\r\n \"cluster_uuid\" : \"zc5Ak5LoRRy_zfaC39JTFA\",\r\n \"version\" : {\r\n \"number\" : \"5.5.2\",\r\n \"build_hash\" : \"b2f0c09\",\r\n \"build_date\" : \"2017-08-14T12:33:14.154Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"6.6.0\"\r\n },\r\n \"tagline\" : \"You Know, for Search\"\r\n}\r\n```\r\n\r\n**Plugins installed**: []\r\n```\r\n \"plugins\" : [\r\n {\r\n \"name\" : \"repository-gcs\",\r\n \"version\" : \"5.5.2\",\r\n \"description\" : \"The GCS repository plugin adds Google Cloud Storage support for repositories.\",\r\n \"classname\" : \"org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin\",\r\n \"has_native_controller\" : false\r\n }\r\n ],\r\n```\r\n\r\n**JVM version** (`java -version`):\r\n```\r\njava -version\r\njava version \"1.8.0_144\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_144-b01)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)\r\n```\r\n\r\n**OS version**\r\n```\r\nuname -a\r\nLinux es-c2-m1 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u3 (2017-08-06) x86_64 GNU/Linux\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nRequest indices via a curl, like so:\r\n```\r\ncurl localhost:9200/_cat/indices\r\n```\r\n\r\nThe expected result is a table of indices in the cluster, but instead a null pointer exception is returned:\r\n```\r\ncurl localhost:9200/_cat/indices\r\n{\"error\":{\"root_cause\":[{\"type\":\"null_pointer_exception\",\"reason\":null}],\"type\":\"null_pointer_exception\",\"reason\":null},\"status\":500}\r\n```\r\n\r\n**Steps to reproduce**:\r\n 1. ssh into any node in the cluster, master or data node\r\n 2. 
run command `curl localhost:9200/_cat/indices`\r\n\r\n**Provide logs (if relevant)**\r\nThe following stack trace shows up in the logs right after the curl command is issued:\r\n\r\n```\r\n[2017-10-18T22:52:40,381][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [es-c2-m1] failed to execute [indices:monitor/stats] on node [6QkTt8qWSiKU1sZSTDa9dg]\r\norg.elasticsearch.transport.RemoteTransportException: [es-c2-4][10.240.0.234:9300][indices:monitor/stats[n]]\r\nCaused by: java.lang.IllegalStateException: Negative longs unsupported, use writeLong or writeZLong for negative numbers [-4134408331751]\r\n\tat org.elasticsearch.common.io.stream.StreamOutput.writeVLong(StreamOutput.java:219) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.index.search.stats.SearchStats$Stats.writeTo(SearchStats.java:211) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.index.search.stats.SearchStats.writeTo(SearchStats.java:353) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.common.io.stream.StreamOutput.writeOptionalStreamable(StreamOutput.java:723) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.admin.indices.stats.CommonStats.writeTo(CommonStats.java:255) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.admin.indices.stats.ShardStats.writeTo(ShardStats.java:102) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.common.io.stream.StreamOutput.writeOptionalStreamable(StreamOutput.java:723) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$NodeResponse.writeTo(TransportBroadcastByNodeAction.java:574) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.buildMessage(TcpTransport.java:1235) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:1184) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:1165) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:67) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:61) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:60) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:111) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:425) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:399) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1544) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat 
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\n[2017-10-18T22:52:40,383][WARN ][r.suppressed ] path: /_cat/indices, params: {format=json}\r\njava.lang.NullPointerException: null\r\n\tat org.elasticsearch.rest.action.cat.RestIndicesAction.buildTable(RestIndicesAction.java:368) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:116) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:113) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.rest.action.RestResponseListener.processResponse(RestResponseListener.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.rest.action.RestActionListener.onResponse(RestActionListener.java:47) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.onCompletion(TransportBroadcastByNodeAction.java:391) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.onNodeFailure(TransportBroadcastByNodeAction.java:376) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction$1.handleException(TransportBroadcastByNodeAction.java:335) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1067) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.lambda$handleException$16(TcpTransport.java:1467) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:110) [elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1465) [elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1457) [elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1401) [elasticsearch-5.5.2.jar:5.5.2]\r\n\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-5.5.2.jar:5.5.2]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) 
[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.11.Final.jar:4.1.11.Final]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\n```\r\n",
"comments": [
{
"body": "@mdmarek What was the status of your cluster when this was happening?\r\n\r\nWe recently fixed (https://github.com/elastic/elasticsearch/pull/26953) an issue (https://github.com/elastic/elasticsearch/issues/26942) where a NPE would occur if a primary shard was missing (and the cluster was red).\r\n",
"created_at": "2017-10-19T05:48:02Z"
},
{
"body": "@tvernum Thank you for responding. I saw those other issues, which is why I posted this one. The version of Elasticsearch we are using, 5.5.2, seems like it should be covered by those fixes.\r\n\r\nIn light of that, I though that this might be a regression, and the issue is not actually fixed.\r\n\r\nTo answer your question, the cluster is in green, and was in green when I ran those commands. I just ran it again, and get the same null pointer exception, here is the sequence of commands:\r\n\r\n```\r\nes-c2-m1:~$ curl -s 'localhost:9200/_cluster/health?pretty'\r\n{\r\n \"cluster_name\" : \"es-c2\",\r\n \"status\" : \"green\",\r\n \"timed_out\" : false,\r\n \"number_of_nodes\" : 18,\r\n \"number_of_data_nodes\" : 15,\r\n \"active_primary_shards\" : 500,\r\n \"active_shards\" : 1000,\r\n \"relocating_shards\" : 0,\r\n \"initializing_shards\" : 0,\r\n \"unassigned_shards\" : 0,\r\n \"delayed_unassigned_shards\" : 0,\r\n \"number_of_pending_tasks\" : 0,\r\n \"number_of_in_flight_fetch\" : 0,\r\n \"task_max_waiting_in_queue_millis\" : 0,\r\n \"active_shards_percent_as_number\" : 100.0\r\n}\r\n\r\nes-c2-m1:~$ curl -s 'localhost:9200/_cat/indices?pretty'\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"null_pointer_exception\",\r\n \"reason\" : null\r\n }\r\n ],\r\n \"type\" : \"null_pointer_exception\",\r\n \"reason\" : null\r\n },\r\n \"status\" : 500\r\n}\r\n```\r\nThis is an active cluster, so I can reproduce this issue at will. We are calling that path to fill a status page, unfortunately we just get the NPE.\r\n\r\nAny ideas?",
"created_at": "2017-10-19T14:21:15Z"
},
{
"body": "@mdmarek \r\n\r\nThat PR was merged only into 5.6.4, which is unreleased. It is happening because of this line:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/b2f0c096c18393e016cb0ad89fcb9db6bdcd5907/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java#L211\r\n\r\nFor some reason the `scrollTimeInMillis` is negative for at least one index for some reason. Given that you have 500 primaries, I have no idea which index is at fault here (and it may be more than one), but are you able to execute\r\n\r\n```shell\r\n$ curl -s 'localhost:9200/_stats?pretty'\r\n```\r\n\r\nI would kind of be surprised if you can, but I am curious if there is something else at play.",
"created_at": "2017-10-19T14:27:32Z"
},
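A minimal sketch of the failure mode referenced in the comment above: the stats serialization presumably writes `scrollTimeInMillis` with something like `StreamOutput#writeVLong`, and variable-length encoding only accepts non-negative values, so a negative accumulated scroll time aborts the response with the `illegal_state_exception` shown in the later comments. The class and method below are hypothetical stand-ins for illustration, not the Elasticsearch source.

```java
// Illustrative sketch only; VLongSketch and this writeVLong are stand-ins, not Elasticsearch code.
public final class VLongSketch {

    // Mirrors the guard a variable-length long encoder applies before writing the value.
    static void writeVLong(long value) {
        if (value < 0) {
            throw new IllegalStateException(
                "Negative longs unsupported, use writeLong or writeZLong for negative numbers [" + value + "]");
        }
        // ... the non-negative value would be emitted seven bits at a time here ...
    }

    public static void main(String[] args) {
        writeVLong(6732998278166L);   // a positive scroll time serializes fine
        writeVLong(-7481122573017L);  // throws, matching the failed-shard reasons in the _stats output below
    }
}
```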
{
"body": "@pickypg Oddly enough I can run that command, here is the sequence:\r\n\r\n```\r\nes-c2-m1:~$ curl -s 'localhost:9200/_stats?pretty' >/tmp/out.tmp\r\nes-c2-m1:~$ head -20 /tmp/out.tmp\r\n{\r\n \"_shards\" : {\r\n \"total\" : 1000,\r\n \"successful\" : 134,\r\n \"failed\" : 866,\r\n \"failures\" : [\r\n {\r\n \"shard\" : 1,\r\n \"index\" : \"ents_4_content\",\r\n \"status\" : \"INTERNAL_SERVER_ERROR\",\r\n \"reason\" : {\r\n \"type\" : \"failed_node_exception\",\r\n \"reason\" : \"Failed node [pguZHKx-SYeYMKmchXBkRg]\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_state_exception\",\r\n \"reason\" : \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7481122573017]\"\r\n }\r\n }\r\n },\r\n {\r\n```",
"created_at": "2017-10-19T14:32:32Z"
},
{
"body": "@tvernum @pickypg Thank you for clarifying which version will contain the fix.",
"created_at": "2017-10-19T14:33:51Z"
},
{
"body": "Can you share the output of `GET /_stats/search?level=shards`?",
"created_at": "2017-10-19T16:01:49Z"
},
{
"body": "@jasontedor Here is the output of `GET /_stats/search?level=shards`\r\n\r\n<details>\r\n```\r\n{\r\n \"_shards\": {\r\n \"total\": 1000,\r\n \"successful\": 134,\r\n \"failed\": 866,\r\n \"failures\": [\r\n {\r\n \"shard\": 1,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [pguZHKx-SYeYMKmchXBkRg]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7119592664645]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 6,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [9EcwT69NTAeOzOmHF8gptw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7089754353819]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 8,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [4lglZut5TjS2cNudPg45Sw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-2919678814984]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 4,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [6QkTt8qWSiKU1sZSTDa9dg]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-3145018936215]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 6,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [uxPGcUtFSKW3vZH9cg3Cbw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-3198465973812]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 9,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [aSlKFokMSnaon-r64Z23lg]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7083790164540]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 5,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [Y8RvPwP0QVqtcDJ54lI21A]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7206350567482]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 9,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [sr4zNmGURbmNceloV9GyGw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7121368275320]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 3,\r\n \"index\": 
\"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [S4PCLAFDSJ2HOYtA3S_XwA]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7033974871860]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 3,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [On8stbaqQxuZftfR3_fZpg]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7149733983654]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 1,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [BZ_Egw_kTVuYG9Y_BAPnNw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7064001399089]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 4,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [9-RQY3OdR7WSwr4gUAp3DQ]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7017593270581]\"\r\n }\r\n }\r\n },\r\n {\r\n \"shard\": 7,\r\n \"index\": \"ents_3_4_y\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\",\r\n \"reason\": {\r\n \"type\": \"failed_node_exception\",\r\n \"reason\": \"Failed node [BLq015s9S3etcGOU_zgsZw]\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"Negative longs unsupported, use writeLong or writeZLong for negative numbers [-7069575020223]\"\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"_all\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 24961,\r\n \"query_total\": 22019227,\r\n \"query_time_in_millis\": 4164165371,\r\n \"query_current\": 0,\r\n \"fetch_total\": 9082805,\r\n \"fetch_time_in_millis\": 31758662,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 11971703,\r\n \"scroll_time_in_millis\": 6732998278166,\r\n \"scroll_current\": 24961,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 25097,\r\n \"query_total\": 31313815,\r\n \"query_time_in_millis\": 7734204280,\r\n \"query_current\": 1,\r\n \"fetch_total\": 9219369,\r\n \"fetch_time_in_millis\": 59449550,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 11979447,\r\n \"scroll_time_in_millis\": 6985543196357,\r\n \"scroll_current\": 25097,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n }\r\n },\r\n \"indices\": {\r\n \"ents_6_13_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 70055,\r\n \"query_time_in_millis\": 18536431,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8166,\r\n \"fetch_time_in_millis\": 917631,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 28,\r\n \"scroll_time_in_millis\": 387169423,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n 
\"query_total\": 70055,\r\n \"query_time_in_millis\": 18536431,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8166,\r\n \"fetch_time_in_millis\": 917631,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 28,\r\n \"scroll_time_in_millis\": 387169423,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 70055,\r\n \"query_time_in_millis\": 18536431,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8166,\r\n \"fetch_time_in_millis\": 917631,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 28,\r\n \"scroll_time_in_millis\": 387169423,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kQ/A==\",\r\n \"generation\": 1646,\r\n \"user_data\": {\r\n \"translog_uuid\": \"5QBW0Lo_TeysHSlu7yJWgA\",\r\n \"translog_generation\": \"387\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 12256607\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_3_4_y\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 24805,\r\n \"query_total\": 12112042,\r\n \"query_time_in_millis\": 193116785,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8951180,\r\n \"fetch_time_in_millis\": 3785634,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 11964160,\r\n \"scroll_time_in_millis\": 6489243259157,\r\n \"scroll_current\": 24805,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 24805,\r\n \"query_total\": 12112042,\r\n \"query_time_in_millis\": 193116785,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8951180,\r\n \"fetch_time_in_millis\": 3785634,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 11964160,\r\n \"scroll_time_in_millis\": 6489243259157,\r\n \"scroll_current\": 24805,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 12376,\r\n \"query_total\": 5995976,\r\n \"query_time_in_millis\": 90467872,\r\n \"query_current\": 0,\r\n \"fetch_total\": 3015211,\r\n \"fetch_time_in_millis\": 1370466,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 5924678,\r\n \"scroll_time_in_millis\": 3037886493901,\r\n \"scroll_current\": 12376,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kXCA==\",\r\n \"generation\": 2224,\r\n \"user_data\": {\r\n \"translog_uuid\": \"1XaaAaGPSTSaGLH72WjJFQ\",\r\n \"sync_id\": \"AV82ngQYlqM5HImF_H9p\",\r\n \"translog_generation\": \"410\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1445029\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"8\": [\r\n {\r\n \"routing\": {\r\n 
\"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 12429,\r\n \"query_total\": 6116066,\r\n \"query_time_in_millis\": 102648913,\r\n \"query_current\": 0,\r\n \"fetch_total\": 5935969,\r\n \"fetch_time_in_millis\": 2415168,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 6039482,\r\n \"scroll_time_in_millis\": 3451356765256,\r\n \"scroll_current\": 12429,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG51Q==\",\r\n \"generation\": 2401,\r\n \"user_data\": {\r\n \"translog_uuid\": \"fzRHDUmQT1GoX9qET-4fxA\",\r\n \"sync_id\": \"AV82Yz7eid_0uQmXPj9d\",\r\n \"translog_generation\": \"454\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1449188\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_7_8_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"65\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8dIoA==\",\r\n \"generation\": 230,\r\n \"user_data\": {\r\n \"translog_uuid\": \"-cj_C_XhTRC0wFC0S51Pdw\",\r\n \"sync_id\": \"AV5UQHdPlLek85mYTPdQ\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7555930\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"1\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n 
\"id\": \"4/1+onbVnAD/YXChg8YXzg==\",\r\n \"generation\": 244,\r\n \"user_data\": {\r\n \"translog_uuid\": \"fc_-26JfSg-ecyQZJfeu9A\",\r\n \"sync_id\": \"AV5UQJZqJW5FltHgXVik\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7644883\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YYXQ==\",\r\n \"generation\": 247,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mvx6YmvPSSSetn3lqR42jA\",\r\n \"sync_id\": \"AV5UQIV5cUukG21P8EeS\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7700547\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"68\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YYAQ==\",\r\n \"generation\": 246,\r\n \"user_data\": {\r\n \"translog_uuid\": \"jEvaMiZnQdCpQs-1tM0-dA\",\r\n \"sync_id\": \"AV5UQJsPJW5FltHgXVit\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7666363\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"69\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZvQ==\",\r\n \"generation\": 246,\r\n \"user_data\": {\r\n \"translog_uuid\": \"KZmcQlMWRHGEFNLcnAkI4g\",\r\n \"sync_id\": \"AV5UQJCjka25Xoui3nIX\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7613698\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n 
}\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZ3g==\",\r\n \"generation\": 245,\r\n \"user_data\": {\r\n \"translog_uuid\": \"-8bOe7tWQaC-zYfm1IakPA\",\r\n \"sync_id\": \"AV5UQHeRlLek85mYTPdR\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7655792\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YXhA==\",\r\n \"generation\": 250,\r\n \"user_data\": {\r\n \"translog_uuid\": \"w1TeU4ueRwCqScm402aUJg\",\r\n \"sync_id\": \"AV5UQJBJka25Xoui3nIW\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7658835\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"10\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YXgw==\",\r\n \"generation\": 242,\r\n \"user_data\": {\r\n \"translog_uuid\": \"KMrA2huUQOKyjuSGbbZzMQ\",\r\n \"sync_id\": \"AV5UQHaKlLek85mYTPdO\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7635035\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"11\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n 
\"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZEg==\",\r\n \"generation\": 246,\r\n \"user_data\": {\r\n \"translog_uuid\": \"VH8cRE1dS_2zG84QiP4Xgw\",\r\n \"sync_id\": \"AV5UQIwqJW5FltHgXVif\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7692069\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"12\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YYNA==\",\r\n \"generation\": 247,\r\n \"user_data\": {\r\n \"translog_uuid\": \"HEN6QFMOTduLKFax_WeTew\",\r\n \"sync_id\": \"AV5UQIascUukG21P8EeU\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7658060\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"76\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/7cdQ==\",\r\n \"generation\": 235,\r\n \"user_data\": {\r\n \"translog_uuid\": \"so-Jav2JSaCoRYHo-X996A\",\r\n \"sync_id\": \"AV5zBTMOlog0vERXgZA6\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7659458\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"13\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KDw==\",\r\n \"generation\": 238,\r\n \"user_data\": {\r\n \"translog_uuid\": \"AWseoT9dTxKWStC_esqeyg\",\r\n \"sync_id\": \"AV5UQHYr52joHGdOrCRn\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n 
},\r\n \"num_docs\": 7649976\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"15\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YXqQ==\",\r\n \"generation\": 247,\r\n \"user_data\": {\r\n \"translog_uuid\": \"b3O-opslSv6t4FbWS091oA\",\r\n \"sync_id\": \"AV5UQHuUlLek85mYTPdT\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7641698\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"19\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KEg==\",\r\n \"generation\": 253,\r\n \"user_data\": {\r\n \"translog_uuid\": \"6XLxPJPgQZuM98fks37dVg\",\r\n \"sync_id\": \"AV5UQJEoka25Xoui3nIZ\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7662624\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"20\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YXgg==\",\r\n \"generation\": 238,\r\n \"user_data\": {\r\n \"translog_uuid\": \"hg5uGQElS6ODnHgIfVqFBw\",\r\n \"sync_id\": \"AV5UQHxglLek85mYTPdW\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7632816\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"21\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n 
\"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KEA==\",\r\n \"generation\": 242,\r\n \"user_data\": {\r\n \"translog_uuid\": \"hIA-xQioQeuDkdW7iBtI1Q\",\r\n \"sync_id\": \"AV5UQI7-JW5FltHgXVih\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7621345\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"23\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YXhQ==\",\r\n \"generation\": 238,\r\n \"user_data\": {\r\n \"translog_uuid\": \"TbYsXtR2Sty7ygtHpJUBxA\",\r\n \"sync_id\": \"AV5UQHrJ52joHGdOrCRt\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7613728\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"24\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YaQA==\",\r\n \"generation\": 243,\r\n \"user_data\": {\r\n \"translog_uuid\": \"D7yIamFsQdeZTPfSKoxo1w\",\r\n \"sync_id\": \"AV5UQJOeka25Xoui3nId\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7554978\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"26\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KEw==\",\r\n \"generation\": 245,\r\n \"user_data\": {\r\n 
\"translog_uuid\": \"2HO-SEXCQdOXpJssULjnCw\",\r\n \"sync_id\": \"AV5UQJoPJW5FltHgXViq\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7582593\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"35\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KFQ==\",\r\n \"generation\": 239,\r\n \"user_data\": {\r\n \"translog_uuid\": \"1MKoaletS4y2OT37MP1Iyw\",\r\n \"sync_id\": \"AV5UQIJilLek85mYTPda\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7655228\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"37\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KFA==\",\r\n \"generation\": 237,\r\n \"user_data\": {\r\n \"translog_uuid\": \"EDzX8BqFR0aBmKrQgZY6QQ\",\r\n \"sync_id\": \"AV5UQH8n52joHGdOrCRy\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7606003\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"39\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KFw==\",\r\n \"generation\": 243,\r\n \"user_data\": {\r\n \"translog_uuid\": \"ibH-NNSsT7mJz0P0Q5-n-Q\",\r\n \"sync_id\": \"AV5UQJVYka25Xoui3nIf\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7660596\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"42\": [\r\n {\r\n \"routing\": {\r\n \"state\": 
\"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KGg==\",\r\n \"generation\": 236,\r\n \"user_data\": {\r\n \"translog_uuid\": \"vjuV1A4XQt2RZacnwllq2g\",\r\n \"sync_id\": \"AV5UQICl52joHGdOrCRz\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7639535\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"43\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KEQ==\",\r\n \"generation\": 251,\r\n \"user_data\": {\r\n \"translog_uuid\": \"DVIFzlucQm-qYApODNiJyQ\",\r\n \"sync_id\": \"AV5UQJeOJW5FltHgXVin\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7636299\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"44\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KGQ==\",\r\n \"generation\": 247,\r\n \"user_data\": {\r\n \"translog_uuid\": \"PbxWpC3GRRiN8IBrv8AcJg\",\r\n \"sync_id\": \"AV5UQJSSka25Xoui3nIe\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7713600\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"48\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 
0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KHA==\",\r\n \"generation\": 241,\r\n \"user_data\": {\r\n \"translog_uuid\": \"adl2N8bYRFCwhx__8xTuDg\",\r\n \"sync_id\": \"AV5UQJ1rJW5FltHgXViy\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7682123\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"56\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAACtQ==\",\r\n \"generation\": 245,\r\n \"user_data\": {\r\n \"translog_uuid\": \"M3dGkopsSe6Q6xdTkzeCUw\",\r\n \"sync_id\": \"AV5UQI10cUukG21P8Eed\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7630314\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_1_48_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 3235010,\r\n \"query_time_in_millis\": 940332448,\r\n \"query_current\": 0,\r\n \"fetch_total\": 53344,\r\n \"fetch_time_in_millis\": 13603562,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 3402,\r\n \"scroll_time_in_millis\": 121923656684,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 5,\r\n \"query_total\": 6729842,\r\n \"query_time_in_millis\": 1790369911,\r\n \"query_current\": 0,\r\n \"fetch_total\": 111494,\r\n \"fetch_time_in_millis\": 28204095,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 7102,\r\n \"scroll_time_in_millis\": 259169526708,\r\n \"scroll_current\": 5,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 411716,\r\n \"query_time_in_millis\": 116823237,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7445,\r\n \"fetch_time_in_millis\": 1969813,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 470,\r\n \"scroll_time_in_millis\": 16804390357,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gpow==\",\r\n \"generation\": 285,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Jzal-WHKRFSKFrXeHUD-iQ\",\r\n \"sync_id\": \"AV8FbZwalqM5HImF_H59\",\r\n \"translog_generation\": \"53\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34630424\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": 
\"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"1\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 483921,\r\n \"query_time_in_millis\": 152963777,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8213,\r\n \"fetch_time_in_millis\": 2102384,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 502,\r\n \"scroll_time_in_millis\": 18020283621,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gpng==\",\r\n \"generation\": 311,\r\n \"user_data\": {\r\n \"translog_uuid\": \"xHC_WlM_TdiQdkc9BO9Mgw\",\r\n \"sync_id\": \"AV8FbSbDlqM5HImF_H58\",\r\n \"translog_generation\": \"54\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34661532\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 421730,\r\n \"query_time_in_millis\": 104987277,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6830,\r\n \"fetch_time_in_millis\": 1715876,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 453,\r\n \"scroll_time_in_millis\": 16308840300,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gpqg==\",\r\n \"generation\": 251,\r\n \"user_data\": {\r\n \"translog_uuid\": \"yyIbmy9CQYuGN-O_wtQGMg\",\r\n \"sync_id\": \"AV8FbeW4i0Xr2w4yldT7\",\r\n \"translog_generation\": \"37\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34691608\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"34\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 573340,\r\n \"query_time_in_millis\": 131785098,\r\n \"query_current\": 0,\r\n \"fetch_total\": 9808,\r\n \"fetch_time_in_millis\": 2364748,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 551,\r\n \"scroll_time_in_millis\": 21442512040,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZow==\",\r\n \"generation\": 271,\r\n \"user_data\": {\r\n \"translog_uuid\": \"2nINUmIgQhey8-u0FryjDw\",\r\n \"sync_id\": \"AV8Fbe-Za4wR0iT4ArkZ\",\r\n \"translog_generation\": \"56\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34691631\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"35\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 510137,\r\n 
\"query_time_in_millis\": 132949425,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8346,\r\n \"fetch_time_in_millis\": 2090292,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 541,\r\n \"scroll_time_in_millis\": 20299066644,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZlw==\",\r\n \"generation\": 269,\r\n \"user_data\": {\r\n \"translog_uuid\": \"ApayZgCpQFGUqPiOO-gbLw\",\r\n \"sync_id\": \"AV8FbaCWoNoocWt5elYI\",\r\n \"translog_generation\": \"59\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34630762\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"37\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 423382,\r\n \"query_time_in_millis\": 110693532,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6685,\r\n \"fetch_time_in_millis\": 1730436,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 440,\r\n \"scroll_time_in_millis\": 15682725184,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gprQ==\",\r\n \"generation\": 276,\r\n \"user_data\": {\r\n \"translog_uuid\": \"pH-Zz-a6Rx2ved6BNJpsww\",\r\n \"sync_id\": \"AV8FberFlqM5HImF_H5_\",\r\n \"translog_generation\": \"55\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34668325\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"43\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 528232,\r\n \"query_time_in_millis\": 155454596,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8861,\r\n \"fetch_time_in_millis\": 2239158,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 552,\r\n \"scroll_time_in_millis\": 20181725857,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZoA==\",\r\n \"generation\": 259,\r\n \"user_data\": {\r\n \"translog_uuid\": \"5sCpgeOKSLK6LW2ORZnNEg\",\r\n \"sync_id\": \"AV8FbeLyid_0uQmXPj6V\",\r\n \"translog_generation\": \"63\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34615373\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"14\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 415757,\r\n \"query_time_in_millis\": 123268466,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6433,\r\n \"fetch_time_in_millis\": 1632453,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 417,\r\n \"scroll_time_in_millis\": 14682680796,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n 
\"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gprg==\",\r\n \"generation\": 341,\r\n \"user_data\": {\r\n \"translog_uuid\": \"7q8nEzuzQS6HKgXOYcKQOQ\",\r\n \"sync_id\": \"AV8FbesxlqM5HImF_H6A\",\r\n \"translog_generation\": \"57\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34681296\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"17\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 452721,\r\n \"query_time_in_millis\": 110009265,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7460,\r\n \"fetch_time_in_millis\": 1861363,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 477,\r\n \"scroll_time_in_millis\": 16610103210,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gppQ==\",\r\n \"generation\": 242,\r\n \"user_data\": {\r\n \"translog_uuid\": \"LiiXIe5ASl27tsre9VD7bg\",\r\n \"sync_id\": \"AV8FbaijPCfS-z5uQQaH\",\r\n \"translog_generation\": \"38\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34628157\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"20\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 515425,\r\n \"query_time_in_millis\": 122564259,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8597,\r\n \"fetch_time_in_millis\": 2124466,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 559,\r\n \"scroll_time_in_millis\": 20617423268,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZng==\",\r\n \"generation\": 288,\r\n \"user_data\": {\r\n \"translog_uuid\": \"hAa0Mm4MRGKhJ35Kt5neRw\",\r\n \"sync_id\": \"AV8FbdABVr3ne82Za7iz\",\r\n \"translog_generation\": \"55\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34633030\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"24\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 508971,\r\n \"query_time_in_millis\": 115980954,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8551,\r\n \"fetch_time_in_millis\": 2259243,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 559,\r\n \"scroll_time_in_millis\": 21116656773,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZnA==\",\r\n \"generation\": 286,\r\n \"user_data\": {\r\n \"translog_uuid\": \"p9FUWmWoTka2qTBxxRCc7Q\",\r\n \"sync_id\": \"AV8Fbc1XbZIB-PSTgrVb\",\r\n \"translog_generation\": \"54\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n 
\"num_docs\": 34729716\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"25\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 517285,\r\n \"query_time_in_millis\": 181221188,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8396,\r\n \"fetch_time_in_millis\": 2127525,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 563,\r\n \"scroll_time_in_millis\": 19745980179,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZkw==\",\r\n \"generation\": 442,\r\n \"user_data\": {\r\n \"translog_uuid\": \"TYN31bpxTOeQZ4BiLyfXWg\",\r\n \"sync_id\": \"AV8FbR_iid_0uQmXPj6U\",\r\n \"translog_generation\": \"231\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34716987\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"26\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 454717,\r\n \"query_time_in_millis\": 99907652,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7311,\r\n \"fetch_time_in_millis\": 1801793,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 458,\r\n \"scroll_time_in_millis\": 16805870690,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8gppw==\",\r\n \"generation\": 290,\r\n \"user_data\": {\r\n \"translog_uuid\": \"LByFOxK-QlOiWIhpjCMcCQ\",\r\n \"sync_id\": \"AV8Fba_tlqM5HImF_H5-\",\r\n \"translog_generation\": \"56\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34641873\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n },\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 512508,\r\n \"query_time_in_millis\": 131761185,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8558,\r\n \"fetch_time_in_millis\": 2184545,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 560,\r\n \"scroll_time_in_millis\": 20851267789,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMADZmQ==\",\r\n \"generation\": 270,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mTQTSSe8Q1WwqwW599NoOQ\",\r\n \"sync_id\": \"AV8Fba_tlqM5HImF_H5-\",\r\n \"translog_generation\": \"56\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34641873\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_2_7_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 85487,\r\n \"query_time_in_millis\": 
99506,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 13,\r\n \"scroll_time_in_millis\": 86707719,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 127066,\r\n \"query_time_in_millis\": 148752,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 24,\r\n \"scroll_time_in_millis\": 162598725,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"1\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 41579,\r\n \"query_time_in_millis\": 49246,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 11,\r\n \"scroll_time_in_millis\": 75891006,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZ0g==\",\r\n \"generation\": 10,\r\n \"user_data\": {\r\n \"translog_uuid\": \"OGURvCujR6eMVZaXFC8E4Q\",\r\n \"sync_id\": \"AV4XMp1nJW5FltHgXVWM\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 7115\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 41596,\r\n \"query_time_in_millis\": 47276,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 7,\r\n \"scroll_time_in_millis\": 46980625,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/7cMA==\",\r\n \"generation\": 11,\r\n \"user_data\": {\r\n \"translog_uuid\": \"DtGcQlCwSPKlDulrfuyCgg\",\r\n \"sync_id\": \"AV4XMpNHlLek85mYTPRv\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 6863\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"6\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 43891,\r\n \"query_time_in_millis\": 52230,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 6,\r\n \"scroll_time_in_millis\": 39727094,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KIw==\",\r\n \"generation\": 9,\r\n \"user_data\": {\r\n 
\"translog_uuid\": \"gbbV11QZSLG-DlDf7LRESg\",\r\n \"sync_id\": \"AV5zDmrVcUukG21P8Emf\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 6909\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_3_9_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 19,\r\n \"query_total\": 2786247,\r\n \"query_time_in_millis\": 2110793090,\r\n \"query_current\": 0,\r\n \"fetch_total\": 16140,\r\n \"fetch_time_in_millis\": 1194542,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1059,\r\n \"scroll_time_in_millis\": 28524594499,\r\n \"scroll_current\": 19,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 30,\r\n \"query_total\": 4689404,\r\n \"query_time_in_millis\": 3891258570,\r\n \"query_current\": 1,\r\n \"fetch_total\": 31624,\r\n \"fetch_time_in_millis\": 2104799,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1816,\r\n \"scroll_time_in_millis\": 48575820958,\r\n \"scroll_current\": 30,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"16\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 256933,\r\n \"query_time_in_millis\": 193007177,\r\n \"query_current\": 0,\r\n \"fetch_total\": 277,\r\n \"fetch_time_in_millis\": 92526,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 148,\r\n \"scroll_time_in_millis\": 3727190199,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kQ3Q==\",\r\n \"generation\": 238,\r\n \"user_data\": {\r\n \"translog_uuid\": \"o1Kc1K6qSRqyZG2AwkJNAQ\",\r\n \"translog_generation\": \"69\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75721517\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"33\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 260746,\r\n \"query_time_in_millis\": 230245814,\r\n \"query_current\": 0,\r\n \"fetch_total\": 69,\r\n \"fetch_time_in_millis\": 11599,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 160,\r\n \"scroll_time_in_millis\": 3851959544,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG1dA==\",\r\n \"generation\": 212,\r\n \"user_data\": {\r\n \"translog_uuid\": \"yiAAnapLTgWVYsPHtyO8gw\",\r\n \"translog_generation\": \"69\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75804569\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n 
\"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 434263,\r\n \"query_time_in_millis\": 326229660,\r\n \"query_current\": 0,\r\n \"fetch_total\": 4948,\r\n \"fetch_time_in_millis\": 251004,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 124,\r\n \"scroll_time_in_millis\": 3552223939,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kLFg==\",\r\n \"generation\": 242,\r\n \"user_data\": {\r\n \"translog_uuid\": \"8yctL7V1TLqc5ZXR5cYkbQ\",\r\n \"translog_generation\": \"40\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75800375\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"3\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 3,\r\n \"query_total\": 523635,\r\n \"query_time_in_millis\": 553237253,\r\n \"query_current\": 0,\r\n \"fetch_total\": 9872,\r\n \"fetch_time_in_millis\": 478248,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 137,\r\n \"scroll_time_in_millis\": 4225186204,\r\n \"scroll_current\": 3,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kLfQ==\",\r\n \"generation\": 236,\r\n \"user_data\": {\r\n \"translog_uuid\": \"pA8R02FNQfKToHhZvXpbRg\",\r\n \"translog_generation\": \"50\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75833382\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"36\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 3,\r\n \"query_total\": 351380,\r\n \"query_time_in_millis\": 362730214,\r\n \"query_current\": 1,\r\n \"fetch_total\": 286,\r\n \"fetch_time_in_millis\": 79966,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 167,\r\n \"scroll_time_in_millis\": 4112961757,\r\n \"scroll_current\": 3,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAGy0g==\",\r\n \"generation\": 232,\r\n \"user_data\": {\r\n \"translog_uuid\": \"ELszoBitThGvY5EaZ-zdJg\",\r\n \"translog_generation\": \"67\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75820319\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"38\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 3,\r\n \"query_total\": 273670,\r\n \"query_time_in_millis\": 276328994,\r\n \"query_current\": 0,\r\n \"fetch_total\": 76,\r\n \"fetch_time_in_millis\": 11894,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 157,\r\n \"scroll_time_in_millis\": 3894136701,\r\n \"scroll_current\": 3,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": 
\"8iVAkcB8oYjxqOblMAG3Xw==\",\r\n \"generation\": 215,\r\n \"user_data\": {\r\n \"translog_uuid\": \"VaL0kg-zQa-n7XCu8yWAWg\",\r\n \"translog_generation\": \"47\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75809796\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"23\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 3,\r\n \"query_total\": 535275,\r\n \"query_time_in_millis\": 295995625,\r\n \"query_current\": 0,\r\n \"fetch_total\": 5131,\r\n \"fetch_time_in_millis\": 334756,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 153,\r\n \"scroll_time_in_millis\": 4454653613,\r\n \"scroll_current\": 3,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kQ3g==\",\r\n \"generation\": 234,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mq3nkZQfSi6pNWJ802rOBQ\",\r\n \"translog_generation\": \"65\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75762755\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"8\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 3,\r\n \"query_total\": 514561,\r\n \"query_time_in_millis\": 396999656,\r\n \"query_current\": 0,\r\n \"fetch_total\": 5119,\r\n \"fetch_time_in_millis\": 345696,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 133,\r\n \"scroll_time_in_millis\": 3990309769,\r\n \"scroll_current\": 3,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kQ1g==\",\r\n \"generation\": 252,\r\n \"user_data\": {\r\n \"translog_uuid\": \"WkRuDaLiSWy0vaUG-p_Tkw\",\r\n \"translog_generation\": \"73\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75839784\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"24\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 320209,\r\n \"query_time_in_millis\": 261939359,\r\n \"query_current\": 0,\r\n \"fetch_total\": 302,\r\n \"fetch_time_in_millis\": 89145,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 172,\r\n \"scroll_time_in_millis\": 4266717858,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAGytQ==\",\r\n \"generation\": 239,\r\n \"user_data\": {\r\n \"translog_uuid\": \"talGGknJTIaak8ktV3JR2Q\",\r\n \"translog_generation\": \"47\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75775502\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"41\": [\r\n {\r\n \"routing\": {\r\n \"state\": 
\"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 552185,\r\n \"query_time_in_millis\": 455161794,\r\n \"query_current\": 0,\r\n \"fetch_total\": 5151,\r\n \"fetch_time_in_millis\": 307503,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 155,\r\n \"scroll_time_in_millis\": 4547729479,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG1ew==\",\r\n \"generation\": 212,\r\n \"user_data\": {\r\n \"translog_uuid\": \"ZCISpnXuRqGIt8O8WJsUYw\",\r\n \"translog_generation\": \"58\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75709074\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"11\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 318243,\r\n \"query_time_in_millis\": 233296892,\r\n \"query_current\": 0,\r\n \"fetch_total\": 317,\r\n \"fetch_time_in_millis\": 92454,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 155,\r\n \"scroll_time_in_millis\": 3872371812,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kQ1w==\",\r\n \"generation\": 240,\r\n \"user_data\": {\r\n \"translog_uuid\": \"OUkDrN7JSWeE8z4JynDnFQ\",\r\n \"translog_generation\": \"67\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75713812\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"43\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 5,\r\n \"query_total\": 348304,\r\n \"query_time_in_millis\": 306086132,\r\n \"query_current\": 0,\r\n \"fetch_total\": 76,\r\n \"fetch_time_in_millis\": 10008,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 155,\r\n \"scroll_time_in_millis\": 4080380083,\r\n \"scroll_current\": 5,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG1dg==\",\r\n \"generation\": 273,\r\n \"user_data\": {\r\n \"translog_uuid\": \"HASUozmuR-Wxx4PL1dzv-w\",\r\n \"translog_generation\": \"103\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 75773149\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_12_5_w\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 449,\r\n \"query_time_in_millis\": 114,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n 
\"query_total\": 1348,\r\n \"query_time_in_millis\": 548,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 449,\r\n \"query_time_in_millis\": 114,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8hPbw==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"UOB0F3LxQKeyNTLImpcyrw\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 448,\r\n \"query_time_in_millis\": 300,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG6UA==\",\r\n \"generation\": 388,\r\n \"user_data\": {\r\n \"translog_uuid\": \"nN-SowS6S3-L9YKBJLMqhA\",\r\n \"sync_id\": \"AV82dC3XbZIB-PSTgr_E\",\r\n \"translog_generation\": \"194\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 451,\r\n \"query_time_in_millis\": 134,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kWEQ==\",\r\n \"generation\": 195,\r\n \"user_data\": {\r\n \"translog_uuid\": \"gP2XbpSxTX246jK2cucjkw\",\r\n \"sync_id\": \"AV82dCHSoytK9aq0No4V\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_12_3_w\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n 
\"query_total\": 20,\r\n \"query_time_in_millis\": 4,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 20,\r\n \"query_time_in_millis\": 4,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 8,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8behQ==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"79gNyS_STpqduPc5QHuRvw\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 5,\r\n \"query_time_in_millis\": 3,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL//jzg==\",\r\n \"generation\": 52,\r\n \"user_data\": {\r\n \"translog_uuid\": \"CQdRRRRlSu6MWi9yucDKTg\",\r\n \"sync_id\": \"AV7l71Gyh-dLxXyAjV6y\",\r\n \"translog_generation\": \"26\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"6\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 7,\r\n \"query_time_in_millis\": 1,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/+Ygg==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"QTEUqo3LRxGIg3kyDrbRiQ\",\r\n \"translog_generation\": \"1\",\r\n 
\"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_4_3_y\": {\r\n \"primaries\": {},\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 2,\r\n \"query_total\": 10950,\r\n \"query_time_in_millis\": 94858,\r\n \"query_current\": 0,\r\n \"fetch_total\": 4041,\r\n \"fetch_time_in_millis\": 4352,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 22,\r\n \"scroll_time_in_millis\": 2460263691,\r\n \"scroll_current\": 2,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"4\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 2694,\r\n \"query_time_in_millis\": 41840,\r\n \"query_current\": 0,\r\n \"fetch_total\": 1539,\r\n \"fetch_time_in_millis\": 821,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 9,\r\n \"scroll_time_in_millis\": 1037911156,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kXBg==\",\r\n \"generation\": 6446,\r\n \"user_data\": {\r\n \"translog_uuid\": \"l4CoKQYsQimtGFy18dzTmg\",\r\n \"sync_id\": \"AV82nenhvesQcWp5-W8P\",\r\n \"translog_generation\": \"1808\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 297952\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 1,\r\n \"query_total\": 8256,\r\n \"query_time_in_millis\": 53018,\r\n \"query_current\": 0,\r\n \"fetch_total\": 2502,\r\n \"fetch_time_in_millis\": 3531,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 13,\r\n \"scroll_time_in_millis\": 1422352535,\r\n \"scroll_current\": 1,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG7Mg==\",\r\n \"generation\": 6153,\r\n \"user_data\": {\r\n \"translog_uuid\": \"RnBXf1kGQ1eBktofvo_qVg\",\r\n \"sync_id\": \"AV82nQmdoytK9aq0No4g\",\r\n \"translog_generation\": \"1889\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 302650\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_4_19_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 7,\r\n \"query_time_in_millis\": 5361,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 22,\r\n \"query_time_in_millis\": 17952,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n 
\"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 672,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUBA==\",\r\n \"generation\": 597,\r\n \"user_data\": {\r\n \"translog_uuid\": \"rsV2mnR4Tc-8VApBchcRNg\",\r\n \"sync_id\": \"AV7viWm-i0Xr2w4yldPW\",\r\n \"translog_generation\": \"150\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22970328\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"1\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 588,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUAw==\",\r\n \"generation\": 604,\r\n \"user_data\": {\r\n \"translog_uuid\": \"krwNZKDpQcetN-WaSJeH5A\",\r\n \"sync_id\": \"AV7viWlii0Xr2w4yldPV\",\r\n \"translog_generation\": \"153\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22938062\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"65\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 873,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJoA==\",\r\n \"generation\": 563,\r\n \"user_data\": {\r\n \"translog_uuid\": \"K6yDgoNEQo2V9d278bK9Xw\",\r\n \"sync_id\": \"AV7viWwDoNoocWt5elNG\",\r\n \"translog_generation\": \"206\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22843605\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n 
\"open_contexts\": 0,\r\n \"query_total\": 2,\r\n \"query_time_in_millis\": 1929,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUFw==\",\r\n \"generation\": 610,\r\n \"user_data\": {\r\n \"translog_uuid\": \"r6783q1JSc2lC934awMh9g\",\r\n \"sync_id\": \"AV7viXlePCfS-z5uQQPD\",\r\n \"translog_generation\": \"154\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 23029720\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"3\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 725,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eT+g==\",\r\n \"generation\": 594,\r\n \"user_data\": {\r\n \"translog_uuid\": \"rowStT3VQ7a8RmrWXoKWxQ\",\r\n \"sync_id\": \"AV7vh7e2lqM5HImF_H4H\",\r\n \"translog_generation\": \"147\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22953345\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"69\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJtQ==\",\r\n \"generation\": 582,\r\n \"user_data\": {\r\n \"translog_uuid\": \"8nhArQdlRaeDyxKQrhN-PQ\",\r\n \"sync_id\": \"AV7viXZ8id_0uQmXPj5D\",\r\n \"translog_generation\": \"213\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22893533\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"70\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 924,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": 
\"8iVAkcB8oYjxqOblMABJsA==\",\r\n \"generation\": 578,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mxTIXlrpQI2KcyzTAQXY0w\",\r\n \"sync_id\": \"AV7viXNsbZIB-PSTgrFD\",\r\n \"translog_generation\": \"214\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22939618\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 650,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUFg==\",\r\n \"generation\": 577,\r\n \"user_data\": {\r\n \"translog_uuid\": \"UmZnDr4ET9OTqWThgysnHA\",\r\n \"sync_id\": \"AV7viXiwvesQcWp5-WAo\",\r\n \"translog_generation\": \"22\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22961235\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"75\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 819,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJuA==\",\r\n \"generation\": 592,\r\n \"user_data\": {\r\n \"translog_uuid\": \"5OSO6fwcSWOgmUq7mkDXuA\",\r\n \"sync_id\": \"AV7viXj1id_0uQmXPj5F\",\r\n \"translog_generation\": \"219\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22939934\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"83\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 2,\r\n \"query_time_in_millis\": 1771,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJtg==\",\r\n \"generation\": 600,\r\n \"user_data\": {\r\n \"translog_uuid\": \"qY74hs_YRR-9Gw_WDzw8Ng\",\r\n \"sync_id\": \"AV7viXdPbZIB-PSTgrFH\",\r\n \"translog_generation\": \"220\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 23002687\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n 
\"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"88\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJ6A==\",\r\n \"generation\": 579,\r\n \"user_data\": {\r\n \"translog_uuid\": \"T1_-_nGrRmeQjBt5Bram8w\",\r\n \"sync_id\": \"AV7viXTboytK9aq0Nn0I\",\r\n \"translog_generation\": \"216\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22847300\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"89\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 913,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJtA==\",\r\n \"generation\": 578,\r\n \"user_data\": {\r\n \"translog_uuid\": \"6ou-hIRcTsiqpaqINg77JQ\",\r\n \"sync_id\": \"AV7viXWMoytK9aq0Nn0K\",\r\n \"translog_generation\": \"213\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22784743\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"28\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 744,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eT+Q==\",\r\n \"generation\": 589,\r\n \"user_data\": {\r\n \"translog_uuid\": \"MGgqvP3NRaapwvRg30uLrA\",\r\n \"sync_id\": \"AV7vh7MSlqM5HImF_H4G\",\r\n \"translog_generation\": \"158\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22809503\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"30\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n 
\"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUBg==\",\r\n \"generation\": 596,\r\n \"user_data\": {\r\n \"translog_uuid\": \"QLeVnlWPTym9nntbGERvRg\",\r\n \"sync_id\": \"AV7viWpQi0Xr2w4yldPX\",\r\n \"translog_generation\": \"150\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 23068484\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"33\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJyA==\",\r\n \"generation\": 595,\r\n \"user_data\": {\r\n \"translog_uuid\": \"4qViWMMkTLmE-ApzUUQ_QA\",\r\n \"sync_id\": \"AV7viXUMa4wR0iT4Ariy\",\r\n \"translog_generation\": \"221\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 23114999\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"98\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJsw==\",\r\n \"generation\": 592,\r\n \"user_data\": {\r\n \"translog_uuid\": \"WbZK7TJXT9G-nFhKxoQsRw\",\r\n \"sync_id\": \"AV7viXU_oytK9aq0Nn0J\",\r\n \"translog_generation\": \"221\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22984821\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"35\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 2,\r\n \"query_time_in_millis\": 1821,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUEQ==\",\r\n \"generation\": 611,\r\n \"user_data\": {\r\n \"translog_uuid\": \"WNDWxOnEQUmLBZhWyEeC-w\",\r\n \"sync_id\": \"AV7viXYFlqM5HImF_H4K\",\r\n 
\"translog_generation\": \"158\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22860318\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"99\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 2,\r\n \"query_time_in_millis\": 1926,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJoQ==\",\r\n \"generation\": 585,\r\n \"user_data\": {\r\n \"translog_uuid\": \"vtoLNMV7R9eJxH_pIewLYQ\",\r\n \"sync_id\": \"AV7viWynVr3ne82Za7WR\",\r\n \"translog_generation\": \"214\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22858965\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"36\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 651,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eU2w==\",\r\n \"generation\": 600,\r\n \"user_data\": {\r\n \"translog_uuid\": \"vIqpwpobQwSalsz-rHguCg\",\r\n \"sync_id\": \"AV7viXYFlqM5HImF_H4L\",\r\n \"translog_generation\": \"155\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22905445\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"37\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 2,\r\n \"query_time_in_millis\": 1675,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eT9A==\",\r\n \"generation\": 574,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Vaq0XRmwRkm5Svs2jKlnGQ\",\r\n \"sync_id\": \"AV7vh64SbZIB-PSTgrE9\",\r\n \"translog_generation\": \"151\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22927068\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"38\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n 
\"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUEA==\",\r\n \"generation\": 605,\r\n \"user_data\": {\r\n \"translog_uuid\": \"db8LbK9CSZiMgon8a1Gi8A\",\r\n \"sync_id\": \"AV7viXV2lqM5HImF_H4J\",\r\n \"translog_generation\": \"157\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22909091\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"41\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eT+A==\",\r\n \"generation\": 590,\r\n \"user_data\": {\r\n \"translog_uuid\": \"STC2D0xwTr6gka2o5Eq1WA\",\r\n \"sync_id\": \"AV7vh7JclqM5HImF_H4F\",\r\n \"translog_generation\": \"152\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22792954\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"42\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 601,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUFQ==\",\r\n \"generation\": 591,\r\n \"user_data\": {\r\n \"translog_uuid\": \"MUEmmhxQSe2KA9fqEGadhg\",\r\n \"sync_id\": \"AV7viXZtlqM5HImF_H4M\",\r\n \"translog_generation\": \"151\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22825263\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"45\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n 
\"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJrQ==\",\r\n \"generation\": 586,\r\n \"user_data\": {\r\n \"translog_uuid\": \"IrESOAm4R7Otv6jBeHirTA\",\r\n \"sync_id\": \"AV7viXLUbZIB-PSTgrFA\",\r\n \"translog_generation\": \"213\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22881983\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"46\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8eUDw==\",\r\n \"generation\": 576,\r\n \"user_data\": {\r\n \"translog_uuid\": \"UFN_xANQTRGShNRiCJwl-A\",\r\n \"sync_id\": \"AV7viXVhlqM5HImF_H4I\",\r\n \"translog_generation\": \"154\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22855972\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"55\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJtw==\",\r\n \"generation\": 577,\r\n \"user_data\": {\r\n \"translog_uuid\": \"tygehjYpRyOSrRu8k5gMhw\",\r\n \"sync_id\": \"AV7viXjLid_0uQmXPj5E\",\r\n \"translog_generation\": \"213\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22941314\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"58\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1,\r\n \"query_time_in_millis\": 670,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMABJrA==\",\r\n \"generation\": 572,\r\n \"user_data\": {\r\n \"translog_uuid\": \"XEWNKpFSTuq3jrxV--4F5Q\",\r\n \"sync_id\": \"AV7viXLHmkWRzar0hS-Z\",\r\n \"translog_generation\": \"211\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 22734647\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n 
\"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_12_4_w\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 581,\r\n \"query_time_in_millis\": 137,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 1695,\r\n \"query_time_in_millis\": 596,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 579,\r\n \"query_time_in_millis\": 170,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/+Y8A==\",\r\n \"generation\": 2,\r\n \"user_data\": {\r\n \"translog_uuid\": \"JFAV0EiNTRCmoSn3tIhyBg\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"2\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 581,\r\n \"query_time_in_millis\": 137,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8be3w==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"2zkMzdG_SSmZtC8RohvxbA\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 535,\r\n \"query_time_in_millis\": 289,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8hPtw==\",\r\n 
\"generation\": 413,\r\n \"user_data\": {\r\n \"translog_uuid\": \"x4ucOnIvQC-AbmYhhWO8ig\",\r\n \"sync_id\": \"AV8MvrROvesQcWp5-WaO\",\r\n \"translog_generation\": \"168\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_2_4_w\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 5802,\r\n \"query_time_in_millis\": 2630,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 11897,\r\n \"query_time_in_millis\": 5459,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 6095,\r\n \"query_time_in_millis\": 2829,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KDQ==\",\r\n \"generation\": 9,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mmfqXWPBRm2vPvQd3jDfgQ\",\r\n \"sync_id\": \"AV4XSzgVlLek85mYTPRw\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 51\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"6\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 5802,\r\n \"query_time_in_millis\": 2630,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZ/w==\",\r\n \"generation\": 9,\r\n \"user_data\": {\r\n \"translog_uuid\": \"467dLpczT3eBBYU8ozUcMg\",\r\n \"sync_id\": \"AV5zDsyacUukG21P8Emi\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 39\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_6_5_y\": {\r\n \"primaries\": {},\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 30295,\r\n 
\"query_time_in_millis\": 136064,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6338,\r\n \"fetch_time_in_millis\": 10017,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 198,\r\n \"scroll_time_in_millis\": 1507879168,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 12249,\r\n \"query_time_in_millis\": 49239,\r\n \"query_current\": 0,\r\n \"fetch_total\": 2875,\r\n \"fetch_time_in_millis\": 4728,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 75,\r\n \"scroll_time_in_millis\": 649038506,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG6yQ==\",\r\n \"generation\": 2869,\r\n \"user_data\": {\r\n \"translog_uuid\": \"fDPXjUMHR7y5_veMsrAxrw\",\r\n \"sync_id\": \"AV82g0YhvesQcWp5-W8I\",\r\n \"translog_generation\": \"648\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 67813\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"7\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 12226,\r\n \"query_time_in_millis\": 49862,\r\n \"query_current\": 0,\r\n \"fetch_total\": 2955,\r\n \"fetch_time_in_millis\": 4650,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 56,\r\n \"scroll_time_in_millis\": 597874415,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kW/w==\",\r\n \"generation\": 2984,\r\n \"user_data\": {\r\n \"translog_uuid\": \"edOlqy-KTR66ML_UaLbDig\",\r\n \"sync_id\": \"AV82nRcebZIB-PSTgr_N\",\r\n \"translog_generation\": \"518\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 69150\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 5820,\r\n \"query_time_in_millis\": 36963,\r\n \"query_current\": 0,\r\n \"fetch_total\": 508,\r\n \"fetch_time_in_millis\": 639,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 67,\r\n \"scroll_time_in_millis\": 260966247,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG7NA==\",\r\n \"generation\": 3017,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Ixad_29_Rdq53FNMlsxYeg\",\r\n \"sync_id\": \"AV82nR0ZoytK9aq0No4h\",\r\n \"translog_generation\": \"750\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 69536\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n 
}\r\n },\r\n \"ents_5_6_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 39905,\r\n \"query_time_in_millis\": 2828175,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1,\r\n \"scroll_time_in_millis\": 3627250,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 179112,\r\n \"query_time_in_millis\": 12992975,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 3,\r\n \"scroll_time_in_millis\": 10911247,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"32\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14542,\r\n \"query_time_in_millis\": 1034325,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1,\r\n \"scroll_time_in_millis\": 3641999,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KJg==\",\r\n \"generation\": 10,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Wuxqe7X1RcmIjjPfmD5G_g\",\r\n \"sync_id\": \"AV5VJyS0cUukG21P8Eem\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8659261\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"34\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 13261,\r\n \"query_time_in_millis\": 962987,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YYZQ==\",\r\n \"generation\": 21,\r\n \"user_data\": {\r\n \"translog_uuid\": \"oBWJq8mCRpmBFAyYeGSYAA\",\r\n \"sync_id\": \"AV7OTWJwlqM5HImF_HzV\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8697254\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"6\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 13219,\r\n \"query_time_in_millis\": 896082,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n 
\"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YaDg==\",\r\n \"generation\": 18,\r\n \"user_data\": {\r\n \"translog_uuid\": \"UuF6OgUJTfWpqF1N6OUMqw\",\r\n \"sync_id\": \"AV5VJxqUJW5FltHgXVi8\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8676217\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"8\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 13331,\r\n \"query_time_in_millis\": 884042,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YX+w==\",\r\n \"generation\": 13,\r\n \"user_data\": {\r\n \"translog_uuid\": \"I-hy39CeSpmTotnVf5evSw\",\r\n \"sync_id\": \"AV5VJyaF52joHGdOrCR_\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8664765\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"11\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 13290,\r\n \"query_time_in_millis\": 898971,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZ0Q==\",\r\n \"generation\": 20,\r\n \"user_data\": {\r\n \"translog_uuid\": \"y2_J-5IET96dIz3DwCiKFg\",\r\n \"sync_id\": \"AV5k8K5VJW5FltHgXVlg\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8803180\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"14\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14604,\r\n \"query_time_in_millis\": 1083427,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KKQ==\",\r\n \"generation\": 15,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Umv3qbcFQ0aHjCd5l62pzQ\",\r\n \"sync_id\": \"AV5ttrBRka25Xoui3nMe\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8761139\r\n },\r\n \"shard_path\": {\r\n \"state_path\": 
\"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"18\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14493,\r\n \"query_time_in_millis\": 1088142,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KJQ==\",\r\n \"generation\": 11,\r\n \"user_data\": {\r\n \"translog_uuid\": \"aM7FbSRVToWwDIjh5gD--g\",\r\n \"sync_id\": \"AV5VJyef52joHGdOrCSG\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8726097\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"21\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14496,\r\n \"query_time_in_millis\": 1077037,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KJA==\",\r\n \"generation\": 16,\r\n \"user_data\": {\r\n \"translog_uuid\": \"PNAaa4kdTx29vvtf0nn4Zw\",\r\n \"sync_id\": \"AV5VJxuSJW5FltHgXVjB\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8693625\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"22\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14523,\r\n \"query_time_in_millis\": 1101627,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KHw==\",\r\n \"generation\": 13,\r\n \"user_data\": {\r\n \"translog_uuid\": \"JLALPdkgTIKZLct_8cD4Ag\",\r\n \"sync_id\": \"AV5zBTMPlog0vERXgZA8\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8768465\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"23\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14558,\r\n \"query_time_in_millis\": 1105596,\r\n 
\"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1,\r\n \"scroll_time_in_millis\": 3641998,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KJw==\",\r\n \"generation\": 11,\r\n \"user_data\": {\r\n \"translog_uuid\": \"lhribt2cSVunSllevrEh3g\",\r\n \"sync_id\": \"AV5VJydo52joHGdOrCSE\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8646982\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"25\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 14545,\r\n \"query_time_in_millis\": 1067445,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5KIA==\",\r\n \"generation\": 17,\r\n \"user_data\": {\r\n \"translog_uuid\": \"1N0NMGgcRk-BG0uJ62GgIg\",\r\n \"sync_id\": \"AV6mQ2VdvesQcWp5-VSQ\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8655146\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"26\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 10937,\r\n \"query_time_in_millis\": 812148,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8dJFA==\",\r\n \"generation\": 18,\r\n \"user_data\": {\r\n \"translog_uuid\": \"e9lXXSU_TOepaAzkt_l_4g\",\r\n \"sync_id\": \"AV5VJxtiJW5FltHgXVjA\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8687651\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"31\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 13313,\r\n \"query_time_in_millis\": 981146,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 1,\r\n \"scroll_time_in_millis\": 3627250,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8YZdA==\",\r\n \"generation\": 20,\r\n \"user_data\": {\r\n 
\"translog_uuid\": \"r-YacbSAR2GZhnr6CDlgRA\",\r\n \"sync_id\": \"AV7OaWiwlqM5HImF_Hzr\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 8762992\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_1_49_z\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 135,\r\n \"query_total\": 3683439,\r\n \"query_time_in_millis\": 898450558,\r\n \"query_current\": 0,\r\n \"fetch_total\": 53975,\r\n \"fetch_time_in_millis\": 12257293,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 3040,\r\n \"scroll_time_in_millis\": 92829263434,\r\n \"scroll_current\": 135,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 255,\r\n \"query_total\": 7349793,\r\n \"query_time_in_millis\": 1827525207,\r\n \"query_current\": 0,\r\n \"fetch_total\": 106526,\r\n \"fetch_time_in_millis\": 24423022,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 6094,\r\n \"scroll_time_in_millis\": 184025767280,\r\n \"scroll_current\": 255,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"32\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 14,\r\n \"query_total\": 490126,\r\n \"query_time_in_millis\": 155637127,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6502,\r\n \"fetch_time_in_millis\": 1619071,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 420,\r\n \"scroll_time_in_millis\": 12607288961,\r\n \"scroll_current\": 14,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVmg==\",\r\n \"generation\": 89,\r\n \"user_data\": {\r\n \"translog_uuid\": \"xyzM5QOERVyBTSO80gfZmw\",\r\n \"sync_id\": \"AV82Xno3mkWRzar0hTcB\",\r\n \"translog_generation\": \"71\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34925759\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 20,\r\n \"query_total\": 471362,\r\n \"query_time_in_millis\": 118237928,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7813,\r\n \"fetch_time_in_millis\": 1627154,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 438,\r\n \"scroll_time_in_millis\": 14481715689,\r\n \"scroll_current\": 20,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVog==\",\r\n \"generation\": 105,\r\n \"user_data\": {\r\n \"translog_uuid\": \"TOcLUZaZQiKg_IN5WRIZDg\",\r\n \"sync_id\": \"AV82XoXDlqM5HImF_H9m\",\r\n \"translog_generation\": \"90\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34868613\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"1\": [\r\n {\r\n \"routing\": {\r\n \"state\": 
\"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 18,\r\n \"query_total\": 524646,\r\n \"query_time_in_millis\": 159778918,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7619,\r\n \"fetch_time_in_millis\": 1808635,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 437,\r\n \"scroll_time_in_millis\": 13363989998,\r\n \"scroll_current\": 18,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5vw==\",\r\n \"generation\": 92,\r\n \"user_data\": {\r\n \"translog_uuid\": \"g91K_utATwesLH5IZlncbA\",\r\n \"sync_id\": \"AV82XnYZi0Xr2w4yldfA\",\r\n \"translog_generation\": \"72\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34907935\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"4\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 16,\r\n \"query_total\": 537399,\r\n \"query_time_in_millis\": 114678478,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7993,\r\n \"fetch_time_in_millis\": 1793439,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 450,\r\n \"scroll_time_in_millis\": 13137239896,\r\n \"scroll_current\": 16,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVnw==\",\r\n \"generation\": 93,\r\n \"user_data\": {\r\n \"translog_uuid\": \"ghocHy3kSd-1s_utEAVh8Q\",\r\n \"sync_id\": \"AV82XoRKbZIB-PSTgr-5\",\r\n \"translog_generation\": \"76\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34934669\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"38\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 18,\r\n \"query_total\": 534772,\r\n \"query_time_in_millis\": 130814957,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7506,\r\n \"fetch_time_in_millis\": 1774317,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 441,\r\n \"scroll_time_in_millis\": 13379675252,\r\n \"scroll_current\": 18,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5ww==\",\r\n \"generation\": 92,\r\n \"user_data\": {\r\n \"translog_uuid\": \"IJypdJ2STmeIPiqextBBPQ\",\r\n \"sync_id\": \"AV82XnqLGAP3MK8u2szL\",\r\n \"translog_generation\": \"73\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34958662\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"6\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 18,\r\n \"query_total\": 547504,\r\n \"query_time_in_millis\": 135701210,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8118,\r\n \"fetch_time_in_millis\": 1818072,\r\n 
\"fetch_current\": 0,\r\n \"scroll_total\": 438,\r\n \"scroll_time_in_millis\": 13252114858,\r\n \"scroll_current\": 18,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5vg==\",\r\n \"generation\": 88,\r\n \"user_data\": {\r\n \"translog_uuid\": \"Q1I6WQ1oTW-FRA3L0U9LLA\",\r\n \"sync_id\": \"AV82XnXh4L-bC6XVFNnY\",\r\n \"translog_generation\": \"72\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34888074\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"39\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 22,\r\n \"query_total\": 516461,\r\n \"query_time_in_millis\": 121401275,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7306,\r\n \"fetch_time_in_millis\": 1727939,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 431,\r\n \"scroll_time_in_millis\": 13293642904,\r\n \"scroll_current\": 22,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5xQ==\",\r\n \"generation\": 109,\r\n \"user_data\": {\r\n \"translog_uuid\": \"wlxJnmUqSCy5n7F7Qf6BLw\",\r\n \"sync_id\": \"AV82XoWsid_0uQmXPj9c\",\r\n \"translog_generation\": \"91\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34882908\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"9\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 17,\r\n \"query_total\": 544766,\r\n \"query_time_in_millis\": 134918199,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7844,\r\n \"fetch_time_in_millis\": 1717818,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 433,\r\n \"scroll_time_in_millis\": 12486364917,\r\n \"scroll_current\": 17,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5vA==\",\r\n \"generation\": 176,\r\n \"user_data\": {\r\n \"translog_uuid\": \"UZ9MbR7KRBa0-KMwTNwMnA\",\r\n \"sync_id\": \"AV82XnK2id_0uQmXPj9a\",\r\n \"translog_generation\": \"159\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34889058\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"13\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 17,\r\n \"query_total\": 484965,\r\n \"query_time_in_millis\": 113477495,\r\n \"query_current\": 0,\r\n \"fetch_total\": 6974,\r\n \"fetch_time_in_millis\": 1595211,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 449,\r\n \"scroll_time_in_millis\": 13364882801,\r\n \"scroll_current\": 17,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVmA==\",\r\n \"generation\": 89,\r\n \"user_data\": {\r\n 
\"translog_uuid\": \"NsGZDbxyT2OKO8dirzyyQA\",\r\n \"sync_id\": \"AV82XnUIs0Cpny6BzFuS\",\r\n \"translog_generation\": \"72\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34874957\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"45\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 15,\r\n \"query_total\": 548822,\r\n \"query_time_in_millis\": 134851387,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7805,\r\n \"fetch_time_in_millis\": 1783830,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 430,\r\n \"scroll_time_in_millis\": 12671082305,\r\n \"scroll_current\": 15,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVlg==\",\r\n \"generation\": 109,\r\n \"user_data\": {\r\n \"translog_uuid\": \"cuHHmYqaTkWqH4LH6_YcYQ\",\r\n \"sync_id\": \"AV82XnEhlqM5HImF_H9j\",\r\n \"translog_generation\": \"92\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34893685\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"15\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 23,\r\n \"query_total\": 556362,\r\n \"query_time_in_millis\": 121115763,\r\n \"query_current\": 0,\r\n \"fetch_total\": 8280,\r\n \"fetch_time_in_millis\": 1866016,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 450,\r\n \"scroll_time_in_millis\": 13978504629,\r\n \"scroll_current\": 23,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVoQ==\",\r\n \"generation\": 106,\r\n \"user_data\": {\r\n \"translog_uuid\": \"KZTC9IA4TBSoCqM1i1JmQg\",\r\n \"sync_id\": \"AV82XoWUlqM5HImF_H9l\",\r\n \"translog_generation\": \"90\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34891681\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"24\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 17,\r\n \"query_total\": 528699,\r\n \"query_time_in_millis\": 148656802,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7564,\r\n \"fetch_time_in_millis\": 1770155,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 428,\r\n \"scroll_time_in_millis\": 12523258684,\r\n \"scroll_current\": 17,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5yQ==\",\r\n \"generation\": 103,\r\n \"user_data\": {\r\n \"translog_uuid\": \"MBeVLPx2RXK8z4oXfFVEQQ\",\r\n \"sync_id\": \"AV82XnNKid_0uQmXPj9b\",\r\n \"translog_generation\": \"87\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34958863\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n 
\"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"25\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 19,\r\n \"query_total\": 546942,\r\n \"query_time_in_millis\": 118986464,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7839,\r\n \"fetch_time_in_millis\": 1756984,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 419,\r\n \"scroll_time_in_millis\": 12091312080,\r\n \"scroll_current\": 19,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAG5wg==\",\r\n \"generation\": 126,\r\n \"user_data\": {\r\n \"translog_uuid\": \"BoH1XumqRAewXTiecOsNhg\",\r\n \"sync_id\": \"AV82XnqAoNoocWt5el0s\",\r\n \"translog_generation\": \"110\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34951763\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"30\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 21,\r\n \"query_total\": 516967,\r\n \"query_time_in_millis\": 119269204,\r\n \"query_current\": 0,\r\n \"fetch_total\": 7363,\r\n \"fetch_time_in_millis\": 1764381,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 430,\r\n \"scroll_time_in_millis\": 13394694306,\r\n \"scroll_current\": 21,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8kVoA==\",\r\n \"generation\": 107,\r\n \"user_data\": {\r\n \"translog_uuid\": \"yGN0xKvcSWCzInj7nChwFw\",\r\n \"sync_id\": \"AV82XoWNlqM5HImF_H9k\",\r\n \"translog_generation\": \"92\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 34882192\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"snapshot\": {\r\n \"primaries\": {},\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"4\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/5N8Q==\",\r\n \"generation\": 10,\r\n \"user_data\": {\r\n \"translog_uuid\": \"WUXODS0bSXeB4MU_89Cvlg\",\r\n \"sync_id\": \"AV5zcgKnjMH1VtblFcjh\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n 
\"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"ents_12_2_w\": {\r\n \"primaries\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 183,\r\n \"query_time_in_millis\": 132,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"total\": {\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 274,\r\n \"query_time_in_millis\": 168,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n }\r\n },\r\n \"shards\": {\r\n \"0\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 91,\r\n \"query_time_in_millis\": 26,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8beBQ==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"B7hIVpgiTzy_UPgCxiUm8w\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"5\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"wVqtKh8TShW9SCkALv40ww\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 91,\r\n \"query_time_in_millis\": 36,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"4/1+onbVnAD/YXChg8cj6g==\",\r\n \"generation\": 58,\r\n \"user_data\": {\r\n \"translog_uuid\": \"mGPk3-JNSHWtBb9pTlGuoQ\",\r\n \"sync_id\": \"AV7lgDhHVr3ne82Za7Q-\",\r\n \"translog_generation\": \"29\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 1\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"7\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": false,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 0,\r\n \"query_time_in_millis\": 0,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n 
\"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblMAACYw==\",\r\n \"generation\": 2,\r\n \"user_data\": {\r\n \"translog_uuid\": \"_wfBWMxySZ2Jq4aERS2Vew\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ],\r\n \"8\": [\r\n {\r\n \"routing\": {\r\n \"state\": \"STARTED\",\r\n \"primary\": true,\r\n \"node\": \"UrnbjGiySZmdi8fBxNll_g\",\r\n \"relocating_node\": null\r\n },\r\n \"search\": {\r\n \"open_contexts\": 0,\r\n \"query_total\": 92,\r\n \"query_time_in_millis\": 106,\r\n \"query_current\": 0,\r\n \"fetch_total\": 0,\r\n \"fetch_time_in_millis\": 0,\r\n \"fetch_current\": 0,\r\n \"scroll_total\": 0,\r\n \"scroll_time_in_millis\": 0,\r\n \"scroll_current\": 0,\r\n \"suggest_total\": 0,\r\n \"suggest_time_in_millis\": 0,\r\n \"suggest_current\": 0\r\n },\r\n \"commit\": {\r\n \"id\": \"8iVAkcB8oYjxqOblL/+X3A==\",\r\n \"generation\": 1,\r\n \"user_data\": {\r\n \"translog_uuid\": \"XTlnjVnVTbK3pmtpEi_BQw\",\r\n \"translog_generation\": \"1\",\r\n \"max_unsafe_auto_id_timestamp\": \"-1\"\r\n },\r\n \"num_docs\": 0\r\n },\r\n \"shard_path\": {\r\n \"state_path\": \"/mnt/es/nodes/0\",\r\n \"data_path\": \"/mnt/es/nodes/0\",\r\n \"is_custom_data_path\": false\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n```\r\n</details>",
"created_at": "2017-10-19T23:04:45Z"
},
{
"body": "Thanks for reporting, I see the issue. I will open a fix shortly.",
"created_at": "2017-10-21T10:09:25Z"
},
{
"body": "I opened #27068.",
"created_at": "2017-10-21T10:31:06Z"
},
{
"body": "I also got this NPE via /_cat/indices?v with v5.4.0\r\nAny possibility to prevent this without upgrade to 5.6.4 or 6.0.0 ?",
"created_at": "2018-06-04T06:29:23Z"
},
{
"body": "I've run into the same issue on v5.4.1. Is there a possible fix short of upgrading?",
"created_at": "2019-03-21T22:11:48Z"
}
],
"number": 27046,
"title": "Curl /_cat/indices causes null pointer exception"
} | {
"body": "Today we internally accumulate elapsed scroll time in nanoseconds. The problem here is that this can reasonably overflow. For example, on a system with scrolls that are open for ten minutes on average, after sixteen million scrolls the largest value that can be represented by a long will be exceeded. To address this, we switch to internally representing scrolls using microseconds as this enables with the same number of scrolls scrolls that are open for seven days on average, or with the same average elapsed time sixteen billion scrolls which will never happen (executing one scroll a second until sixteen billion have executed would not occur until more than five-hundred years had elapsed).\r\n\r\nCloses #27046\r\n\r\n",
"number": 27068,
"review_comments": [],
"title": "Keep cumulative elapsed scroll time in microseconds"
} | {
"commits": [
{
"message": "Keep cumulative elapsed scroll time in microseconds\n\nToday we internally accumulate elapsed scroll time in nanoseconds. The\nproblem here is that this can reasonably overflow. For example, on a\nsystem with scrolls that are open for ten minutes on average, after\nsixteen million scrolls the largest value that can be represented by a\nlong will be executed. To address this, we switch to internally\nrepresenting scrolls using microseconds as this enables with the same\nnumber of scrolls scrolls that are open for seven days on average, or\nwith the same average elapsed time sixteen billion scrolls which will\nnever happen (executing one scroll a second until sixteen billion have\nexecuted would not occur until more than five-hundred years had\nelapsed)."
}
],
"files": [
{
"diff": "@@ -180,12 +180,19 @@ public void onNewScrollContext(SearchContext context) {\n public void onFreeScrollContext(SearchContext context) {\n totalStats.scrollCurrent.dec();\n assert totalStats.scrollCurrent.count() >= 0;\n- totalStats.scrollMetric.inc(System.nanoTime() - context.getOriginNanoTime());\n+ totalStats.scrollMetric.inc(TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - context.getOriginNanoTime()));\n }\n \n static final class StatsHolder {\n public final MeanMetric queryMetric = new MeanMetric();\n public final MeanMetric fetchMetric = new MeanMetric();\n+ /* We store scroll statistics in microseconds because with nanoseconds we run the risk of overflowing the total stats if there are\n+ * many scrolls. For example, on a system with 2^24 scrolls that have been executed, each executing for 2^10 seconds, then using\n+ * nanoseconds would require a numeric representation that can represent at least 2^24 * 2^10 * 10^9 > 2^24 * 2^10 * 2^29 = 2^63\n+ * which exceeds the largest value that can be represented by a long. By using microseconds, we enable capturing one-thousand\n+ * times as many scrolls (i.e., billions of scrolls which at one per second would take 32 years to occur), or scrolls that execute\n+ * for one-thousand times as long (i.e., scrolls that execute for almost twelve days on average).\n+ */\n public final MeanMetric scrollMetric = new MeanMetric();\n public final MeanMetric suggestMetric = new MeanMetric();\n public final CounterMetric queryCurrent = new CounterMetric();\n@@ -197,7 +204,7 @@ public SearchStats.Stats stats() {\n return new SearchStats.Stats(\n queryMetric.count(), TimeUnit.NANOSECONDS.toMillis(queryMetric.sum()), queryCurrent.count(),\n fetchMetric.count(), TimeUnit.NANOSECONDS.toMillis(fetchMetric.sum()), fetchCurrent.count(),\n- scrollMetric.count(), TimeUnit.NANOSECONDS.toMillis(scrollMetric.sum()), scrollCurrent.count(),\n+ scrollMetric.count(), TimeUnit.MICROSECONDS.toMillis(scrollMetric.sum()), scrollCurrent.count(),\n suggestMetric.count(), TimeUnit.NANOSECONDS.toMillis(suggestMetric.sum()), suggestCurrent.count()\n );\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/stats/ShardSearchStats.java",
"status": "modified"
}
]
} |
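A quick way to sanity-check the overflow argument in PR #27068 above is to plug its example figures into a comparison against `Long.MAX_VALUE`. The sketch below is illustrative only (it is not part of the PR or of the Elasticsearch code base, and the class name is invented for the example), but it shows why a nanosecond accumulator overflows at roughly 2^24 ten-minute scrolls while a microsecond accumulator keeps about three orders of magnitude of headroom:

```java
// Illustrative sketch (not from PR #27068): re-derive the overflow figures
// quoted in the PR body: ~2^24 scrolls, each open for ten minutes on average.
public final class ScrollTimeOverflowCheck {
    public static void main(String[] args) {
        double scrolls = Math.pow(2, 24);        // ~16.8 million scrolls
        double perScrollSeconds = 10 * 60;       // ten minutes per scroll, on average

        // Accumulating in nanoseconds needs ~1.0e19, which exceeds Long.MAX_VALUE (~9.2e18).
        double totalNanos = scrolls * perScrollSeconds * 1e9;
        System.out.printf("nanoseconds needed:  %.3e (Long.MAX_VALUE = %.3e)%n",
                totalNanos, (double) Long.MAX_VALUE);

        // Accumulating in microseconds needs only ~1.0e16, leaving roughly 900x headroom.
        double totalMicros = scrolls * perScrollSeconds * 1e6;
        System.out.printf("microseconds needed: %.3e%n", totalMicros);
    }
}
```

Running it prints a nanosecond total just above 10^19 and a microsecond total near 10^16, which is the same reasoning the PR encodes in the comment on `scrollMetric` in `ShardSearchStats`.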
{
"body": "**Elasticsearch version**: 5.4.1\r\n\r\n**Plugins installed**: x-pack\r\n\r\n**JVM version**:\r\njava version \"1.8.0_65\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_65-b17)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)\r\n\r\n**OS version**:\r\n16.6.0 Darwin Kernel Version 16.6.0: Fri Apr 14 16:21:16 PDT 2017; root:xnu-3789.60.24~6/RELEASE_X86_64 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nCreating a snapshot where one of the indices does not exist should result in a failed snapshot unless you have `ignore_unavailable` set to true. Rather, the snapshot succeeds and there is no indication of the missing index.\r\n\r\n``` json\r\nPUT /_snapshot/my_backup\r\n{\r\n \"type\": \"fs\",\r\n \"settings\": {\r\n \"compress\": true,\r\n \"location\": \"/Users/jared/tmp/blah\"\r\n }\r\n}\r\n\r\nPOST exists/test/1\r\n{\"hello\": true}\r\n\r\nPUT /_snapshot/my_backup/snapshot_1\r\n{\"indices\":\"exists,missing\"}\r\n\r\nGET /_snapshot/my_backup/snapshot_1/_status\r\n\r\n# return\r\n{\r\n \"snapshots\": [\r\n {\r\n \"snapshot\": \"snapshot_1\",\r\n \"repository\": \"my_backup\",\r\n \"uuid\": \"Zp8WS3YdST-c65Ku_r9sVA\",\r\n \"state\": \"SUCCESS\",\r\n \"shards_stats\": {\r\n \"initializing\": 0,\r\n \"started\": 0,\r\n \"finalizing\": 0,\r\n \"done\": 5,\r\n \"failed\": 0,\r\n \"total\": 5\r\n },\r\n \"stats\": {\r\n \"number_of_files\": 8,\r\n \"processed_files\": 8,\r\n \"total_size_in_bytes\": 3203,\r\n \"processed_size_in_bytes\": 3203,\r\n \"start_time_in_millis\": 1498150300101,\r\n \"time_in_millis\": 36\r\n },\r\n \"indices\": {\r\n \"exists\": {\r\n \"shards_stats\": {\r\n \"initializing\": 0,\r\n \"started\": 0,\r\n \"finalizing\": 0,\r\n \"done\": 5,\r\n \"failed\": 0,\r\n \"total\": 5\r\n },\r\n \"stats\": {\r\n \"number_of_files\": 8,\r\n \"processed_files\": 8,\r\n \"total_size_in_bytes\": 3203,\r\n \"processed_size_in_bytes\": 3203,\r\n \"start_time_in_millis\": 1498150300101,\r\n \"time_in_millis\": 36\r\n },\r\n \"shards\": {\r\n ... \r\n }\r\n }\r\n }\r\n }\r\n ]\r\n}\r\n",
"comments": [
{
"body": "Hello,\r\n\r\nI've had a look at the index name resolver in `TransportCreateSnapshotAction` and it seems it works well. What is very strange is that I created a test function in `SharedClusterSnapshotRestoreIT` in which I just passed a missing index for the creation of a snapshot:\r\n```\r\npublic void testSnapshotOnMissingIndex() throws Exception {\r\n final Client client = client();\r\n\r\n logger.info(\"--> creating repository\");\r\n assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\r\n .setType(\"fs\").setSettings(Settings.builder().put(\"location\", randomRepoPath())));\r\n\r\n logger.info(\"--> snapshot\");\r\n client.admin().cluster()\r\n .prepareCreateSnapshot(\"test-repo\", \"test-snap\")\r\n .setIndices(\"missing\")\r\n .get();\r\n}\r\n```\r\nAnd it does throw an `IndexNotFoundException`:\r\n```\r\n[missing] IndexNotFoundException[no such index]\r\n\tat __randomizedtesting.SeedInfo.seed([3D91A5BCCC7B5F6C:9E614A94BC995C9B]:0)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:664)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:626)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:583)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:162)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:141)\r\n\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:74)\r\n\tat org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.checkBlock(TransportCreateSnapshotAction.java:69)\r\n\tat org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.checkBlock(TransportCreateSnapshotAction.java:1)\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:135)\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:127)\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:105)\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:1)\r\n\tat org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:168)\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)\r\n\tat org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:64)\r\n\tat org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:1)\r\n\tat org.elasticsearch.transport.AssertingTransportInterceptor$1.messageReceived(AssertingTransportInterceptor.java:76)\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66)\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1527)\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n\tat 
org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:139)\r\n\tat org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1484)\r\n\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1354)\r\n\tat org.elasticsearch.transport.MockTcpTransport.readMessage(MockTcpTransport.java:170)\r\n\tat org.elasticsearch.transport.MockTcpTransport.access$7(MockTcpTransport.java:148)\r\n\tat org.elasticsearch.transport.MockTcpTransport$MockChannel$1.lambda$0(MockTcpTransport.java:348)\r\n\tat org.elasticsearch.common.util.CancellableThreads.executeIO(CancellableThreads.java:105)\r\n\tat org.elasticsearch.transport.MockTcpTransport$MockChannel$1.doRun(MockTcpTransport.java:348)\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n```\r\nBut when I test it against a running Elasticsearch, the request passes with no error message and the missing index is simply ignored, as shown above. I don't understand this, because in the REST layer we don't do any index resolution and the indices should be passed as-is when creating a `CreateSnapshotRequest`. Where might the problem lie?",
"created_at": "2017-08-09T12:32:17Z"
},
{
"body": "I've made some investigation, seems like the default value of `ignore_unavailable ` is `true`. Need more effort to figure it out. @abeyad I'm new to es, could you please provide some hints? Thanks in advance. 😄 ",
"created_at": "2017-09-20T13:16:18Z"
}
],
"number": 25359,
"title": "snapshot succeeds with missing index defined"
} | {
"body": "The default value for `ignore_unavailable` did not match what was documented when using the REST APIs for snapshot creation and restore. This PR sets the default value of `ignore_unavailable` to `false`, [the way it is documented](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) and ensures it's the same when using either REST API or transport client.\r\n\r\nCloses #25359 ",
"number": 27056,
"review_comments": [
{
"body": "I think this should be fixed by falling back to `indicesOptions` instead (which is equal to `IndicesOptions.strictExpandOpen()` by default)",
"created_at": "2017-11-15T11:50:03Z"
}
],
"title": "Fix default value of ignore_unavailable for snapshot REST API (#25359)"
} | {
"commits": [
{
"message": "Fix creating snapshot on missing index (#25359)"
},
{
"message": "Fix default indicesOptions in parsing RestoreSnapshotRequest"
},
{
"message": "Add REST test"
}
],
"files": [
{
"diff": "@@ -380,8 +380,9 @@ public boolean includeGlobalState() {\n * @param source snapshot definition\n * @return this request\n */\n+ @SuppressWarnings(\"unchecked\")\n public CreateSnapshotRequest source(Map<String, Object> source) {\n- for (Map.Entry<String, Object> entry : ((Map<String, Object>) source).entrySet()) {\n+ for (Map.Entry<String, Object> entry : source.entrySet()) {\n String name = entry.getKey();\n if (name.equals(\"indices\")) {\n if (entry.getValue() instanceof String) {\n@@ -402,7 +403,7 @@ public CreateSnapshotRequest source(Map<String, Object> source) {\n includeGlobalState = nodeBooleanValue(entry.getValue(), \"include_global_state\");\n }\n }\n- indicesOptions(IndicesOptions.fromMap((Map<String, Object>) source, IndicesOptions.lenientExpandOpen()));\n+ indicesOptions(IndicesOptions.fromMap(source, indicesOptions));\n return this;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java",
"status": "modified"
},
{
"diff": "@@ -505,6 +505,7 @@ public Settings indexSettings() {\n * @param source restore definition\n * @return this request\n */\n+ @SuppressWarnings(\"unchecked\")\n public RestoreSnapshotRequest source(Map<String, Object> source) {\n for (Map.Entry<String, Object> entry : source.entrySet()) {\n String name = entry.getKey();\n@@ -558,7 +559,7 @@ public RestoreSnapshotRequest source(Map<String, Object> source) {\n }\n }\n }\n- indicesOptions(IndicesOptions.fromMap((Map<String, Object>) source, IndicesOptions.lenientExpandOpen()));\n+ indicesOptions(IndicesOptions.fromMap(source, indicesOptions));\n return this;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,7 @@ public void testRestoreSnapshotRequestParsing() throws IOException {\n \n XContentBuilder builder = jsonBuilder().startObject();\n \n- if(randomBoolean()) {\n+ if (randomBoolean()) {\n builder.field(\"indices\", \"foo,bar,baz\");\n } else {\n builder.startArray(\"indices\");\n@@ -76,6 +76,10 @@ public void testRestoreSnapshotRequestParsing() throws IOException {\n builder.value(\"set3\");\n builder.endArray();\n }\n+ boolean includeIgnoreUnavailable = randomBoolean();\n+ if (includeIgnoreUnavailable) {\n+ builder.field(\"ignore_unavailable\", indicesOptions.ignoreUnavailable());\n+ }\n \n BytesReference bytes = builder.endObject().bytes();\n \n@@ -89,15 +93,18 @@ public void testRestoreSnapshotRequestParsing() throws IOException {\n assertEquals(partial, request.partial());\n assertEquals(\"val1\", request.settings().get(\"set1\"));\n assertArrayEquals(request.ignoreIndexSettings(), new String[]{\"set2\", \"set3\"});\n-\n+ boolean expectedIgnoreAvailable = includeIgnoreUnavailable\n+ ? indicesOptions.ignoreUnavailable()\n+ : IndicesOptions.strictExpandOpen().ignoreUnavailable();\n+ assertEquals(expectedIgnoreAvailable, request.indicesOptions().ignoreUnavailable());\n }\n \n public void testCreateSnapshotRequestParsing() throws IOException {\n CreateSnapshotRequest request = new CreateSnapshotRequest(\"test-repo\", \"test-snap\");\n \n XContentBuilder builder = jsonBuilder().startObject();\n \n- if(randomBoolean()) {\n+ if (randomBoolean()) {\n builder.field(\"indices\", \"foo,bar,baz\");\n } else {\n builder.startArray(\"indices\");\n@@ -134,6 +141,10 @@ public void testCreateSnapshotRequestParsing() throws IOException {\n builder.value(\"set3\");\n builder.endArray();\n }\n+ boolean includeIgnoreUnavailable = randomBoolean();\n+ if (includeIgnoreUnavailable) {\n+ builder.field(\"ignore_unavailable\", indicesOptions.ignoreUnavailable());\n+ }\n \n BytesReference bytes = builder.endObject().bytes();\n \n@@ -144,6 +155,10 @@ public void testCreateSnapshotRequestParsing() throws IOException {\n assertArrayEquals(request.indices(), new String[]{\"foo\", \"bar\", \"baz\"});\n assertEquals(partial, request.partial());\n assertEquals(\"val1\", request.settings().get(\"set1\"));\n+ boolean expectedIgnoreAvailable = includeIgnoreUnavailable\n+ ? indicesOptions.ignoreUnavailable()\n+ : IndicesOptions.strictExpandOpen().ignoreUnavailable();\n+ assertEquals(expectedIgnoreAvailable, request.indicesOptions().ignoreUnavailable());\n }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SnapshotRequestsTests.java",
"status": "modified"
},
{
"diff": "@@ -37,3 +37,38 @@ setup:\n snapshot: test_snapshot\n \n - match: { acknowledged: true }\n+\n+---\n+\"Create a snapshot for missing index\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: ignore_unavailable default is false in 7.0.0\n+\n+ - do:\n+ catch: missing\n+ snapshot.create:\n+ repository: test_repo_create_1\n+ snapshot: test_snapshot_1\n+ wait_for_completion: true\n+ body: |\n+ { \"indices\": \"missing_1\" }\n+\n+ - do:\n+ snapshot.create:\n+ repository: test_repo_create_1\n+ snapshot: test_snapshot_2\n+ wait_for_completion: true\n+ body: |\n+ { \"indices\": \"missing_2\", \"ignore_unavailable\": true }\n+\n+ - match: { snapshot.snapshot: test_snapshot_2 }\n+ - match: { snapshot.state : SUCCESS }\n+ - match: { snapshot.shards.successful: 0 }\n+ - match: { snapshot.shards.failed : 0 }\n+\n+ - do:\n+ snapshot.delete:\n+ repository: test_repo_create_1\n+ snapshot: test_snapshot_2\n+\n+ - match: { acknowledged: true }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/snapshot.create/10_basic.yml",
"status": "modified"
}
]
} |
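The record above aligns the REST default of `ignore_unavailable` with the documentation, so a create-snapshot request that names a missing index now fails unless leniency is asked for explicitly. A rough sketch of the two behaviours, mirroring the YAML REST test in the diff and assuming an already-registered repository; `my_backup`, the snapshot names, and `missing_index` are placeholders:

```
# Fails with index_not_found_exception now that ignore_unavailable defaults to false
curl -XPUT "localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'{ "indices": "missing_index" }'

# Explicitly opting back into the old lenient behaviour
curl -XPUT "localhost:9200/_snapshot/my_backup/snap_2?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'{ "indices": "missing_index", "ignore_unavailable": true }'
```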
{
"body": "When querying across multiple fields of different types in Query String Query, if there is a mismatch between the data type of the query string and the data type of the queried field, then an exception occurs. When a similar search is run via `multi_match`, no issue is observed. Tested under ES 5.2.0\r\n\r\n```\r\nPUT /tester22/\r\n{\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"Product\": {\r\n \"properties\": {\r\n \"Id\": {\r\n \"type\": \"string\"\r\n },\r\n \"IsBlue\": {\r\n \"type\": \"boolean\"\r\n },\r\n \"Inventory\": {\r\n \"type\": \"long\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST /tester22/test\r\n{\r\n \"Product\":{\r\n \"Id\": \"asdf\",\r\n \"IsBlue\" : false,\r\n \"Inventory\" : 234\r\n }\r\n}\r\n```\r\n\r\n\r\nResults in `number_format_exception`:\r\n```\r\nGET /tester22/_search\r\n{\r\n \"query\": {\r\n \"query_string\": {\r\n \"query\": \"Product.\\\\*:asdf\"\r\n }\r\n }\r\n}\r\n```\r\n\r\nWorks seamlessly:\r\n```\r\nGET /tester22/_search\r\n{\r\n \"query\": {\r\n \"multi_match\": {\r\n \"query\": \"asdf\",\r\n \"type\": \"cross_fields\", \r\n \"operator\": \"and\",\r\n \"fields\": [ \"Product.*\" ]\r\n }\r\n }\r\n}\r\n\r\n```",
"comments": [
{
"body": "Good catch, it seems that nested fields expanded by a wildcard does not respect the `lenient` parameter of the query.\r\nBy default `lenient` is false for the `match` and `query_string` query so in theory all your example should throw an exception.\r\nI can replicate the problem on 2.x.",
"created_at": "2017-02-16T15:51:18Z"
},
{
"body": "@jimczi Lenient parameter is ignored in `multi_match` query only for `cross_fields` type ([code](https://github.com/elastic/elasticsearch/blob/4bce7271659889d839388d7df5c61a6d2a5c3c7a/core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java#L245)).\r\nIs this desirable behavior? There is [test case](https://github.com/elastic/elasticsearch/blob/cdd7c1e6c26474721a1513c83ab3ca6473e9f9ef/core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java#L133) for this behavior.",
"created_at": "2017-10-13T21:11:45Z"
},
{
"body": "@alexshadow007 thanks for looking, `lenient` should not be ignored for `cross_fields`, this is a bug. \r\nThis issue is marked with `adoptme` so feel free to open a PR for it. ",
"created_at": "2017-10-18T06:41:32Z"
}
],
"number": 23210,
"title": "Multi-match type exceptions in Query String Query"
} | {
"body": "Closes #23210\r\n",
"number": 27045,
"review_comments": [],
"title": "Handle leniency for cross_fields type in multi_match query"
} | {
"commits": [
{
"message": "Handle leniency for cross_fields type in multi_match query"
},
{
"message": "Fix test for cross_fields type in multi_match query"
}
],
"files": [
{
"diff": "@@ -43,6 +43,8 @@\n import java.util.Map;\n import java.util.Objects;\n \n+import static org.elasticsearch.common.lucene.search.Queries.newLenientFieldQuery;\n+\n public class MultiMatchQuery extends MatchQuery {\n \n private Float groupTieBreaker = null;\n@@ -204,15 +206,15 @@ public Query blendTerms(Term[] terms, MappedFieldType fieldType) {\n for (int i = 0; i < terms.length; i++) {\n values[i] = terms[i].bytes();\n }\n- return MultiMatchQuery.blendTerms(context, values, commonTermsCutoff, tieBreaker, blendedFields);\n+ return MultiMatchQuery.blendTerms(context, values, commonTermsCutoff, tieBreaker, lenient, blendedFields);\n }\n \n @Override\n public Query blendTerm(Term term, MappedFieldType fieldType) {\n if (blendedFields == null) {\n return super.blendTerm(term, fieldType);\n }\n- return MultiMatchQuery.blendTerm(context, term.bytes(), commonTermsCutoff, tieBreaker, blendedFields);\n+ return MultiMatchQuery.blendTerm(context, term.bytes(), commonTermsCutoff, tieBreaker, lenient, blendedFields);\n }\n \n @Override\n@@ -227,12 +229,12 @@ public Query termQuery(MappedFieldType fieldType, BytesRef value) {\n }\n \n static Query blendTerm(QueryShardContext context, BytesRef value, Float commonTermsCutoff, float tieBreaker,\n- FieldAndFieldType... blendedFields) {\n- return blendTerms(context, new BytesRef[] {value}, commonTermsCutoff, tieBreaker, blendedFields);\n+ boolean lenient, FieldAndFieldType... blendedFields) {\n+ return blendTerms(context, new BytesRef[] {value}, commonTermsCutoff, tieBreaker, lenient, blendedFields);\n }\n \n static Query blendTerms(QueryShardContext context, BytesRef[] values, Float commonTermsCutoff, float tieBreaker,\n- FieldAndFieldType... blendedFields) {\n+ boolean lenient, FieldAndFieldType... blendedFields) {\n List<Query> queries = new ArrayList<>();\n Term[] terms = new Term[blendedFields.length * values.length];\n float[] blendedBoost = new float[blendedFields.length * values.length];\n@@ -242,19 +244,12 @@ static Query blendTerms(QueryShardContext context, BytesRef[] values, Float comm\n Query query;\n try {\n query = ft.fieldType.termQuery(term, context);\n- } catch (IllegalArgumentException e) {\n- // the query expects a certain class of values such as numbers\n- // of ip addresses and the value can't be parsed, so ignore this\n- // field\n- continue;\n- } catch (ElasticsearchParseException parseException) {\n- // date fields throw an ElasticsearchParseException with the\n- // underlying IAE as the cause, ignore this field if that is\n- // the case\n- if (parseException.getCause() instanceof IllegalArgumentException) {\n- continue;\n+ } catch (RuntimeException e) {\n+ if (lenient) {\n+ query = newLenientFieldQuery(ft.fieldType.name(), e);\n+ } else {\n+ throw e;\n }\n- throw parseException;\n }\n float boost = ft.boost;\n while (query instanceof BoostQuery) {",
"filename": "core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n@@ -110,7 +111,7 @@ public void testBlendTerms() {\n Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n- new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n+ new BytesRef(\"baz\"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n \n@@ -126,11 +127,11 @@ public void testBlendTermsWithFieldBoosts() {\n Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n- new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n+ new BytesRef(\"baz\"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n \n- public void testBlendTermsUnsupportedValue() {\n+ public void testBlendTermsUnsupportedValueWithLenient() {\n FakeFieldType ft1 = new FakeFieldType();\n ft1.setName(\"foo\");\n FakeFieldType ft2 = new FakeFieldType() {\n@@ -142,13 +143,29 @@ public Query termQuery(Object value, QueryShardContext context) {\n ft2.setName(\"bar\");\n Term[] terms = new Term[] { new Term(\"foo\", \"baz\") };\n float[] boosts = new float[] {2};\n- Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n+ Query expected = new DisjunctionMaxQuery(Arrays.asList(\n+ Queries.newMatchNoDocsQuery(\"failed [\" + ft2.name() + \"] query, caused by illegal_argument_exception:[null]\"),\n+ BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f)\n+ ), 1f);\n Query actual = MultiMatchQuery.blendTerm(\n indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n- new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n+ new BytesRef(\"baz\"), null, 1f, true, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n \n+ public void testBlendTermsUnsupportedValueWithoutLenient() {\n+ FakeFieldType ft = new FakeFieldType() {\n+ @Override\n+ public Query termQuery(Object value, QueryShardContext context) {\n+ throw new IllegalArgumentException();\n+ }\n+ };\n+ ft.setName(\"bar\");\n+ expectThrows(IllegalArgumentException.class, () -> MultiMatchQuery.blendTerm(\n+ indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n+ new BytesRef(\"baz\"), null, 1f, false, new FieldAndFieldType(ft, 1)));\n+ }\n+\n public void testBlendNoTermQuery() {\n FakeFieldType ft1 = new FakeFieldType();\n ft1.setName(\"foo\");\n@@ -170,7 +187,7 @@ public Query termQuery(Object value, QueryShardContext context) {\n ), 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, 
null),\n- new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n+ new BytesRef(\"baz\"), null, 1f, false, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -472,6 +472,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"skill\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n .analyzer(\"category\")\n+ .lenient(true)\n .operator(Operator.AND))).get();\n assertHitCount(searchResponse, 1L);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n@@ -480,6 +481,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"skill\", \"int-field\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n .analyzer(\"category\")\n+ .lenient(true)\n .operator(Operator.AND))).get();\n assertHitCount(searchResponse, 1L);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n@@ -488,6 +490,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"skill\", \"full_name\", \"first_name\", \"last_name\", \"category\", \"int-field\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n .analyzer(\"category\")\n+ .lenient(true)\n .operator(Operator.AND))).get();\n assertHitCount(searchResponse, 1L);\n assertFirstHit(searchResponse, hasId(\"theone\"));\n@@ -496,6 +499,7 @@ public void testCrossFieldMode() throws ExecutionException, InterruptedException\n searchResponse = client().prepareSearch(\"test\")\n .setQuery(randomizeType(multiMatchQuery(\"captain america 15\", \"first_name\", \"last_name\", \"skill\")\n .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .lenient(true)\n .analyzer(\"category\"))).get();\n assertFirstHit(searchResponse, hasId(\"theone\"));\n ",
"filename": "core/src/test/java/org/elasticsearch/search/query/MultiMatchQueryIT.java",
"status": "modified"
}
]
} |
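The change in the record above makes the `cross_fields` type honour the `lenient` flag: with `lenient` left at its default of `false`, the type-mismatch exception is now surfaced instead of the offending field being silently dropped, while `lenient: true` keeps the query running and effectively skips the field that cannot parse the value. A minimal sketch against the `tester22` index from the issue; host and port are assumed defaults:

```
curl -XGET "localhost:9200/tester22/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "multi_match": {
      "query": "asdf",
      "type": "cross_fields",
      "operator": "and",
      "lenient": true,
      "fields": [ "Product.*" ]
    }
  }
}'
```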
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): 5.5 (tested on 5.5.1)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): osx\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen you have a deprecated setting configured, then the cluster update settings API is returning all of those settings on every request, regardless if the setting you specified as part of the request is affected.\r\n\r\nHowever when trying to unset an already unset setting, the header suddenly vanishes.\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\nbin/elasticsearch -Eingest.new_date_format=false\r\n```\r\n\r\nSet another setting, see the correct header returned for the setting specified on startup\r\n\r\n```\r\n# curl -v -X PUT localhost:9200/_cluster/settings -d '{ \"transient\" : { \"script.max_compilations_per_minute\" : 15 } }' --header \"Content-Type: application/json\"\r\n* Trying 127.0.0.1...\r\n* TCP_NODELAY set\r\n* Connected to localhost (127.0.0.1) port 9200 (#0)\r\n> PUT /_cluster/settings HTTP/1.1\r\n> Host: localhost:9200\r\n> User-Agent: curl/7.54.0\r\n> Accept: */*\r\n> Content-Type: application/json\r\n> Content-Length: 63\r\n>\r\n* upload completely sent off: 63 out of 63 bytes\r\n< HTTP/1.1 200 OK\r\n< Warning: 299 Elasticsearch-5.5.1-19c13d0 \"[ingest.new_date_format] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.\" \"Tue, 29 Aug 2017 12:29:56 GMT\"\r\n< content-type: application/json; charset=UTF-8\r\n< content-length: 97\r\n<\r\n* Connection #0 to host localhost left intact\r\n{\"acknowledged\":true,\"persistent\":{},\"transient\":{\"script\":{\"max_compilations_per_minute\":\"15\"}}}%\r\n```\r\n\r\nRemove the script compilation setting by setting it to `null`, see the header for the startup setting being returned\r\n\r\n```\r\n# curl -v -X PUT localhost:9200/_cluster/settings -d '{ \"transient\" : { \"script.max_compilations_per_minute\" : null } }' --header \"Content-Type: application/json\"\r\n* Trying 127.0.0.1...\r\n* TCP_NODELAY set\r\n* Connected to localhost (127.0.0.1) port 9200 (#0)\r\n> PUT /_cluster/settings HTTP/1.1\r\n> Host: localhost:9200\r\n> User-Agent: curl/7.54.0\r\n> Accept: */*\r\n> Content-Type: application/json\r\n> Content-Length: 65\r\n>\r\n* upload completely sent off: 65 out of 65 bytes\r\n< HTTP/1.1 200 OK\r\n< Warning: 299 Elasticsearch-5.5.1-19c13d0 \"[ingest.new_date_format] setting was deprecated in Elasticsearch and will be removed in a future release! 
See the breaking changes documentation for the next major version.\" \"Tue, 29 Aug 2017 12:31:06 GMT\"\r\n< content-type: application/json; charset=UTF-8\r\n< content-length: 52\r\n<\r\n* Connection #0 to host localhost left intact\r\n{\"acknowledged\":true,\"persistent\":{},\"transient\":{}}%\r\n```\r\n\r\nDo the above call for a second time, see no headers\r\n\r\n```\r\ncurl -v -X PUT localhost:9200/_cluster/settings -d '{ \"transient\" : { \"script.max_compilations_per_minute\" : null } }' --header \"Content-Type: application/json\"\r\n* Trying 127.0.0.1...\r\n* TCP_NODELAY set\r\n* Connected to localhost (127.0.0.1) port 9200 (#0)\r\n> PUT /_cluster/settings HTTP/1.1\r\n> Host: localhost:9200\r\n> User-Agent: curl/7.54.0\r\n> Accept: */*\r\n> Content-Type: application/json\r\n> Content-Length: 65\r\n>\r\n* upload completely sent off: 65 out of 65 bytes\r\n< HTTP/1.1 200 OK\r\n< content-type: application/json; charset=UTF-8\r\n< content-length: 52\r\n<\r\n* Connection #0 to host localhost left intact\r\n{\"acknowledged\":true,\"persistent\":{},\"transient\":{}}%\r\n```",
"comments": [],
"number": 26419,
"title": "Deprecation headers are not returned when deleting a non existing setting"
} | {
"body": "When executing a cluster settings update that leaves the cluster state unchanged, we skip validation and this avoids deprecation logging for deprecated settings in the cluster state. This commit addresses this by running validation even if the settings are unchanged.\r\n\r\nCloses #26419\r\n",
"number": 27017,
"review_comments": [
{
"body": "The code in this block is unchanged, only moved with an indentation change (from moving it under an `if` block).",
"created_at": "2017-10-15T01:00:46Z"
}
],
"title": "Emit settings deprecation logging on empty update"
} | {
"commits": [
{
"message": "Emit settings deprecation logging on empty update\n\nWhen executing a cluster settings update that leaves the cluster state\nunchanged, we skip validation and this avoids deprecation logging for\ndeprecated settings in the cluster state. This commit addresses this by\nrunning validation even if the settings are unchanged."
},
{
"message": "Remove import"
}
],
"files": [
{
"diff": "@@ -58,35 +58,40 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin\n persistentSettings.put(currentState.metaData().persistentSettings());\n changed |= clusterSettings.updateDynamicSettings(persistentToApply, persistentSettings, persistentUpdates, \"persistent\");\n \n- if (!changed) {\n- return currentState;\n- }\n-\n- MetaData.Builder metaData = MetaData.builder(currentState.metaData())\n- .persistentSettings(persistentSettings.build())\n- .transientSettings(transientSettings.build());\n+ final ClusterState clusterState;\n+ if (changed) {\n+ MetaData.Builder metaData = MetaData.builder(currentState.metaData())\n+ .persistentSettings(persistentSettings.build())\n+ .transientSettings(transientSettings.build());\n \n- ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n- boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings())\n- || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings());\n- if (updatedReadOnly) {\n- blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK);\n- } else {\n- blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK);\n- }\n- boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings())\n- || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings());\n- if (updatedReadOnlyAllowDelete) {\n- blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK);\n+ ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n+ boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings())\n+ || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings());\n+ if (updatedReadOnly) {\n+ blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK);\n+ } else {\n+ blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK);\n+ }\n+ boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings())\n+ || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings());\n+ if (updatedReadOnlyAllowDelete) {\n+ blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK);\n+ } else {\n+ blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK);\n+ }\n+ clusterState = builder(currentState).metaData(metaData).blocks(blocks).build();\n } else {\n- blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK);\n+ clusterState = currentState;\n }\n- ClusterState build = builder(currentState).metaData(metaData).blocks(blocks).build();\n- Settings settings = build.metaData().settings();\n- // now we try to apply things and if they are invalid we fail\n- // this dryRun will validate & parse settings but won't actually apply them.\n+\n+ /*\n+ * Now we try to apply things and if they are invalid we fail. This dry run will validate, parse settings, and trigger deprecation\n+ * logging, but will not actually apply them.\n+ */\n+ final Settings settings = clusterState.metaData().settings();\n clusterSettings.validateUpdate(settings);\n- return build;\n+\n+ return clusterState;\n }\n \n ",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java",
"status": "modified"
},
{
"diff": "@@ -23,10 +23,15 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;\n import org.elasticsearch.common.settings.ClusterSettings;\n+import org.elasticsearch.common.settings.Setting;\n+import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESTestCase;\n \n+import java.util.Set;\n import java.util.concurrent.atomic.AtomicReference;\n+import java.util.stream.Collectors;\n+import java.util.stream.Stream;\n \n public class SettingsUpdaterTests extends ESTestCase {\n \n@@ -132,4 +137,30 @@ public void testClusterBlock() {\n assertEquals(clusterState.blocks().global().size(), 0);\n \n }\n+\n+ public void testDeprecationLogging() {\n+ Setting<String> deprecatedSetting =\n+ Setting.simpleString(\"deprecated.setting\", Property.Dynamic, Property.NodeScope, Property.Deprecated);\n+ final Settings settings = Settings.builder().put(\"deprecated.setting\", \"foo\").build();\n+ final Set<Setting<?>> settingsSet =\n+ Stream.concat(ClusterSettings.BUILT_IN_CLUSTER_SETTINGS.stream(), Stream.of(deprecatedSetting)).collect(Collectors.toSet());\n+ final ClusterSettings clusterSettings = new ClusterSettings(settings, settingsSet);\n+ clusterSettings.addSettingsUpdateConsumer(deprecatedSetting, s -> {});\n+ final SettingsUpdater settingsUpdater = new SettingsUpdater(clusterSettings);\n+ final ClusterState clusterState =\n+ ClusterState.builder(new ClusterName(\"foo\")).metaData(MetaData.builder().persistentSettings(settings).build()).build();\n+\n+ final Settings toApplyDebug = Settings.builder().put(\"logger.org.elasticsearch\", \"debug\").build();\n+ final ClusterState afterDebug = settingsUpdater.updateSettings(clusterState, toApplyDebug, Settings.EMPTY);\n+ assertSettingDeprecationsAndWarnings(new Setting<?>[] { deprecatedSetting });\n+\n+ final Settings toApplyUnset = Settings.builder().putNull(\"logger.org.elasticsearch\").build();\n+ final ClusterState afterUnset = settingsUpdater.updateSettings(afterDebug, toApplyUnset, Settings.EMPTY);\n+ assertSettingDeprecationsAndWarnings(new Setting<?>[] { deprecatedSetting });\n+\n+ // we also check that if no settings are changed, deprecation logging still occurs\n+ settingsUpdater.updateSettings(afterUnset, toApplyUnset, Settings.EMPTY);\n+ assertSettingDeprecationsAndWarnings(new Setting<?>[] { deprecatedSetting });\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java",
"status": "modified"
}
]
} |
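With the change above, validation (and therefore deprecation logging) runs even when the settings update leaves the cluster state untouched. A small sketch of the no-op request from the report; the setting names are taken from the issue and only serve as an example:

```
# Unsetting a transient setting that is already unset leaves the cluster state unchanged,
# but the response should now still carry the Warning header for any deprecated setting
# present in the state, e.g. ingest.new_date_format.
curl -v -XPUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' \
  -d'{ "transient": { "script.max_compilations_per_minute": null } }'
```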
{
"body": "If the ES_HOME contains parentheses, the service cannot be installed.\r\n\r\nFixes #26454\r\n",
"comments": [
{
"body": "Hi @olcbean, we have found your signature in our records, but it seems like you have signed with a different e-mail than the one used in yout Git [commit](https://github.com/elastic/elasticsearch/pull/26916.patch). Can you please add both of these e-mails into your Github profile (they can be hidden), so we can match your e-mails to your Github profile?",
"created_at": "2017-10-06T16:06:15Z"
},
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-10-06T16:06:15Z"
},
{
"body": "Thanks @olcbean.",
"created_at": "2017-10-10T13:24:45Z"
},
{
"body": "@jasontedor thanks for merging this!\r\n\r\nThere is a tiny difference between `5.6` and `6.0+`. I just verified that applying only this fix to 5.6 still results in an error when the service in installed (if the path contains parentheses). I opened another PR #27012 which is only applicable to 5.6 (on top of this one). Can you please have a look at it?",
"created_at": "2017-10-14T10:42:19Z"
}
],
"number": 26916,
"title": "Fix handling of Windows paths containing parentheses"
} | {
"body": "If the `ES_HOME` contains parentheses, the service cannot be installed.\r\n\r\nThis fix extends #26916 and is applicable only to `5.x`\r\n\r\nRelates to : #26454\r\n\r\nCC @jasontedor \r\n",
"number": 27012,
"review_comments": [],
"title": "Fix handling of paths containing parentheses in 5.6"
} | {
"commits": [
{
"message": "Fix handling of paths containing parentheses in 5.6"
}
],
"files": [
{
"diff": "@@ -167,9 +167,7 @@ if exist \"%JAVA_HOME%\\bin\\client\\jvm.dll\" (\n )\n \n :foundJVM\n-if \"%ES_JVM_OPTIONS%\" == \"\" (\n-set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n-)\n+if \"%ES_JVM_OPTIONS%\" == \"\" set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n \n if not \"%ES_JAVA_OPTS%\" == \"\" set ES_JAVA_OPTS=%ES_JAVA_OPTS: =;%\n ",
"filename": "distribution/src/main/resources/bin/elasticsearch-service.bat",
"status": "modified"
}
]
} |
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.5\r\n\r\n**Plugins installed**: [] EMC SourceOne 7.2.5 \r\n\r\n**JVM version** (`java -version`): jre1.8.0_45\r\n\r\n\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Windows Server 2012R\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nUnable to install Elastic 5.5 using elasticsearch-service.bat\r\n**Steps to reproduce**:\r\n\r\n\r\n1. Open CMD in Admin mode\r\n2. cd to \"C:\\Program Files (x86)\\EMC SourceOne\\EXPBA\\bin\\Elastic\\elasticsearch\\bin\\\"\r\n3. run service.bat install or elasticsearch-service.bat install (depending on ES version)\r\n\r\nIssue is that with 5.5 and elasticsearch-service.bat that generates an error due to spaces in the path.\r\n\r\n**Provide logs (if relevant)**: none available\r\n\r\nWORKAROUND:\r\n\r\nin CMD line, change to cd `C:\\PROGRA~2\\EMCSOU~1\\EXPBA\\bin\\Elastic\\elasticsearch\\bin\\`\r\nUsing 8.3 format folder names solves the problem.\r\n\r\n\r\n",
"comments": [],
"number": 26454,
"title": "elasticsearch-service.bat that generates an error with install"
} | {
"body": "If the `ES_HOME` contains parentheses, the service cannot be installed.\r\n\r\nThis fix extends #26916 and is applicable only to `5.x`\r\n\r\nRelates to : #26454\r\n\r\nCC @jasontedor \r\n",
"number": 27012,
"review_comments": [],
"title": "Fix handling of paths containing parentheses in 5.6"
} | {
"commits": [
{
"message": "Fix handling of paths containing parentheses in 5.6"
}
],
"files": [
{
"diff": "@@ -167,9 +167,7 @@ if exist \"%JAVA_HOME%\\bin\\client\\jvm.dll\" (\n )\n \n :foundJVM\n-if \"%ES_JVM_OPTIONS%\" == \"\" (\n-set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n-)\n+if \"%ES_JVM_OPTIONS%\" == \"\" set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n \n if not \"%ES_JAVA_OPTS%\" == \"\" set ES_JAVA_OPTS=%ES_JAVA_OPTS: =;%\n ",
"filename": "distribution/src/main/resources/bin/elasticsearch-service.bat",
"status": "modified"
}
]
} |
{
"body": "This issue is similar to #26890, but affects the DateProcessor instead of the DateIndexNameProcessor.\r\n\r\nError is\r\n```\r\nCaused by: java.lang.IllegalArgumentException: field [json.timeMillis] of type [java.lang.Long] cannot be cast to [java.lang.String]\r\n at org.elasticsearch.ingest.IngestDocument.cast(IngestDocument.java:542)\r\n at org.elasticsearch.ingest.IngestDocument.getFieldValue(IngestDocument.java:107)\r\n at org.elasticsearch.ingest.common.DateProcessor.execute(DateProcessor.java:67)\r\n at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:100)\r\n```\r\n\r\nTo reproduce (tested with local build 7.0.0-alpha1-SNAPSHOT, commitId d97b21d1da627678f5a97673c742f2088fa65a6b), follow the exact same steps as #26890, except that the definition of the pipeline is:\r\n```json\r\ncurl -XPUT \"http://localhost:9200/_ingest/pipeline/bugTimestampPipeline\" -H 'Content-Type: application/json' -d'\r\n {\r\n \"description\": \"bugTimestampPipeline\",\r\n \"processors\" : [\r\n {\r\n \"date\" : {\r\n \"field\" : \"json.timeMillis\",\r\n \"target_field\" : \"json.timeHuman\",\r\n \"formats\" : [ \"UNIX_MS\", \"yyyy-MM-dd HH:mm:ss.SSSZ\" ]\r\n }\r\n }\r\n ]\r\n }'\r\n```",
"comments": [
{
"body": "@iksnalybok Thanks for reporting this! This bug will be fixed soon.",
"created_at": "2017-10-12T13:08:35Z"
},
{
"body": "It works like a charm. Thanks.",
"created_at": "2017-10-25T10:03:20Z"
}
],
"number": 26967,
"title": "DateProcessor does not support unix epoch format"
} | {
"body": "PR for #26967",
"number": 26986,
"review_comments": [
{
"body": "do we really want to be lenient with nulls?",
"created_at": "2017-10-13T10:36:13Z"
},
{
"body": "@jpountz `IngestDocument#getFieldValue(...)` fails if there is no value for the specified field, so I think the current code is good?",
"created_at": "2017-10-13T12:37:20Z"
},
{
"body": "Does it also fail if there is a value but this value is `null`?",
"created_at": "2017-10-13T12:57:29Z"
},
{
"body": "No, it doesn't fail here, but later at line 81, because the date can't be parsed. That is the same behaviour as before when null was specified, only now the `lastException` exception message is different. (`java.lang.NumberFormatException: For input string: \"null\"` instead of `java.lang.NumberFormatException: For input string: null`)",
"created_at": "2017-10-16T05:55:22Z"
},
{
"body": "@jpountz I've added an if check here and in `DateIndexNameProcessor` too.",
"created_at": "2017-10-18T15:51:25Z"
}
],
"title": "date processor should not fail if timestamp is specified as json number"
} | {
"commits": [
{
"message": "ingest: date processor should not fail if timestamp is specified as json number\n\nCloses #26967"
}
],
"files": [
{
"diff": "@@ -63,7 +63,12 @@ public final class DateIndexNameProcessor extends AbstractProcessor {\n @Override\n public void execute(IngestDocument ingestDocument) throws Exception {\n // Date can be specified as a string or long:\n- String date = Objects.toString(ingestDocument.getFieldValue(field, Object.class));\n+ Object obj = ingestDocument.getFieldValue(field, Object.class);\n+ String date = null;\n+ if (obj != null) {\n+ // Not use Objects.toString(...) here, because null gets changed to \"null\" which may confuse some date parsers\n+ date = obj.toString();\n+ }\n \n DateTime dateTime = null;\n Exception lastException = null;",
"filename": "modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java",
"status": "modified"
},
{
"diff": "@@ -30,10 +30,10 @@\n import org.joda.time.format.ISODateTimeFormat;\n \n import java.util.ArrayList;\n-import java.util.IllformedLocaleException;\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Objects;\n import java.util.function.Function;\n \n public final class DateProcessor extends AbstractProcessor {\n@@ -64,7 +64,12 @@ public final class DateProcessor extends AbstractProcessor {\n \n @Override\n public void execute(IngestDocument ingestDocument) {\n- String value = ingestDocument.getFieldValue(field, String.class);\n+ Object obj = ingestDocument.getFieldValue(field, Object.class);\n+ String value = null;\n+ if (obj != null) {\n+ // Not use Objects.toString(...) here, because null gets changed to \"null\" which may confuse some date parsers\n+ value = obj.toString();\n+ }\n \n DateTime dateTime = null;\n Exception lastException = null;",
"filename": "modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateProcessor.java",
"status": "modified"
},
{
"diff": "@@ -134,6 +134,12 @@ public void testUnixMs() {\n IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document);\n dateProcessor.execute(ingestDocument);\n assertThat(ingestDocument.getFieldValue(\"date_as_date\", String.class), equalTo(\"1970-01-01T00:16:40.500Z\"));\n+\n+ document = new HashMap<>();\n+ document.put(\"date_as_string\", 1000500L);\n+ ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document);\n+ dateProcessor.execute(ingestDocument);\n+ assertThat(ingestDocument.getFieldValue(\"date_as_date\", String.class), equalTo(\"1970-01-01T00:16:40.500Z\"));\n }\n \n public void testUnix() {",
"filename": "modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateProcessorTests.java",
"status": "modified"
}
]
} |
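Since the date processor now also accepts the timestamp as a JSON number, a quick way to verify the behaviour is the simulate API rather than a full index round-trip. A hedged sketch; the pipeline definition and field names follow the issue, and the sample millisecond value is arbitrary:

```
curl -XPOST "localhost:9200/_ingest/pipeline/_simulate" -H 'Content-Type: application/json' -d'
{
  "pipeline": {
    "processors": [
      {
        "date": {
          "field": "json.timeMillis",
          "target_field": "json.timeHuman",
          "formats": [ "UNIX_MS" ]
        }
      }
    ]
  },
  "docs": [
    { "_source": { "json": { "timeMillis": 1000500 } } }
  ]
}'
```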
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.1.1 - 5.4.1\r\n\r\n**Plugins installed**: [analysis-icu, elasticsearch-position-similarity]\r\n\r\n**JVM version** (`java -version`):\r\nopenjdk version \"1.8.0_131\"\r\nOpenJDK Runtime Environment (IcedTea 3.4.0) (suse-10.10.3-x86_64)\r\nOpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux oviken 4.4.36-8-default #1 SMP Fri Dec 9 16:18:38 UTC 2016 (3ec5648) x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nUpgrading an index using a custom similarity fails in org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility with the exception \"Unknown Similarity type\". If the old and the new elasticserver have the same custom similarity installed, upgrading should work.\r\n\r\nLooking at the code, it is no surprise that this does not work, as checkMappingsCompatibility creates a SimilarityService with an empty list of additional similarities. It seems something akin to the fake analyzerMap is necessary for custom similarities?\r\n\r\nThis is a follow up to https://discuss.elastic.co/t/upgrading-index-with-custom-similarity-not-working/76678 as by now I'm quite sure this is a bug with elasticsearch an not with my similarity plugin.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Start elasticsearch version 5.1.1 with https://github.com/sdauletau/elasticsearch-position-similarity installed (I've attached the exact zip I build and used)\r\n[elasticsearch-position-similarity-5.1.1.zip](https://github.com/elastic/elasticsearch/files/1094449/elasticsearch-position-similarity-5.1.1.zip)\r\n\r\n 2. Create an index using this similarity:\r\n```\r\ncurl -s -XPUT \"http://localhost:9200/test_index\" -d '\r\n{\r\n \"settings\": {\r\n \"similarity\": {\r\n \"positionSimilarity\": {\r\n \"type\": \"position-similarity\"\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\n 3. Stop elasticsearch, upgrade to 5.1.2, install the same plugin for this version (I again attached the zip I used)\r\n[elasticsearch-position-similarity-5.1.2.zip](https://github.com/elastic/elasticsearch/files/1094446/elasticsearch-position-similarity-5.1.2.zip)\r\n\r\n4. Start elasticsearch again\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n[elasticsearch.txt](https://github.com/elastic/elasticsearch/files/1094451/elasticsearch.txt)\r\n\r\nhttps://gist.github.com/xabbu42/66dd46bd3cc15244cede8ceadf884fd8\r\n",
"comments": [
{
"body": "This looks like a bug in how similarities are registered. Currently they are added inside `Plugin.onIndexModule`. But this is too late when the node is starting up with existing similarites in index settings.\r\n\r\nI think we should move similarities out to a pull based plugin interface, and have the index get a registry of similarity impls. @s1monw wdyt?",
"created_at": "2017-06-29T20:29:23Z"
},
{
"body": "We ran into the same issue when trying to upgrade from 2.3 to 5.2 (both in place and from snapshot). We were thinking of patching and building ourselves, but a proper fix did not look very straight forward to our unfamiliar eyes since the similarities are not loaded until an IndexService is created and that looked a little tough to just mock/fake in this scenario. ",
"created_at": "2017-08-29T02:04:14Z"
},
{
"body": "I was also planning to try a patch myself using a similar workaround as is already done with analyzers in checkMappingsCompatibility. But the comments of rjernst suggest that the problem is more complex and so I'm waiting on further input before trying anything myself.",
"created_at": "2017-09-05T16:56:33Z"
},
{
"body": "@xabbu42 I think your idea of using a similar workaround as we do with analyzers is the correct solution. My earlier comment was missing some context and would be something nice to have, but is a separate issue.",
"created_at": "2017-09-07T02:37:00Z"
},
{
"body": "In general, I would advocate a settings that allows an index with a problematic custom similarity to be loaded anyway (with kind of a no-op similarity). This enables us to manipulate the index-definition afterwards.",
"created_at": "2017-09-14T15:22:35Z"
},
{
"body": "@rjernst I'd like to have this fix in the 5.x branch as well. I've ported it and it works for me locally. Can I open a PR?",
"created_at": "2018-02-17T01:05:30Z"
}
],
"number": 25350,
"title": "Upgrading an index using a custom similarity fails"
} | {
"body": "Use a fake similarity map that always returns a value in MetaDataIndexUpgradeService.checkMappingsCompatibility instead of an empty map.\r\n\r\nCloses #25350",
"number": 26985,
"review_comments": [
{
"body": "I realize the analyzer hack does this, but it is inconsistent with what is returned above. I would rather throw a UEO here so that if entrySet begins to be used, we will catch this rather than get weird behavior. (And we should, in a followup or in this PR, fix the other entrySet() impl as well).",
"created_at": "2017-10-12T20:41:54Z"
},
{
"body": "I would add a comment here that having a dummy map works because we assume any similarity providers were known before the upgrade, so allowing any key below is ok.",
"created_at": "2017-10-12T20:52:30Z"
},
{
"body": "I did test the patch with a elasticsearch server without the custom similarity installed. The server starts but does not recover the index with an appropriate error message. I did not test restoring a snapshot from an old version. But as far as I understand the situation, we do not assume all used similarities are actually known here, but just defer the error to a point when we actually know all available similarities.",
"created_at": "2017-10-13T12:21:44Z"
},
{
"body": "I can do that but need to know what an UEO is (I'm not a Java programmer).",
"created_at": "2017-10-13T12:24:01Z"
},
{
"body": "@xabbu42 I think that @rjernst meant \"UOE\" as in `UnsupportedOperationException`.",
"created_at": "2017-10-13T12:26:29Z"
},
{
"body": "Yes, Jason is correct on what I meant.",
"created_at": "2017-10-13T15:59:37Z"
},
{
"body": "I get test failures if make this change. For example:\r\n\r\n```\r\nSuite: org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeServiceTests\r\n 1> [2017-10-14T16:43:29,230][INFO ][o.e.c.m.MetaDataIndexUpgradeServiceTests] [testPluginUpgradeFailure]: before test\r\n 1> [2017-10-14T16:43:29,233][INFO ][o.e.c.m.MetaDataIndexUpgradeServiceTests] [testPluginUpgradeFailure]: after test\r\n 2> REPRODUCE WITH: gradle :core:test -Dtests.seed=67F6816A4DF09ADB -Dtests.class=org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeServiceTests -Dtests.method=\"testPluginUpgradeFailure\" -Dtests.security.manager=true -Dtests.locale=sr -Dtests.timezone=Asia/Katmandu\r\nFAILURE 0.03s | MetaDataIndexUpgradeServiceTests.testPluginUpgradeFailure <<< FAILURES!\r\n > Throwable #1: org.junit.ComparisonFailure: expected:<[unable to upgrade the mappings for the index [[foo/BOOM]]]> but was:<[Cannot upgrade index foo]>\r\n > at __randomizedtesting.SeedInfo.seed([67F6816A4DF09ADB:37D52F4B7D45621E]:0)\r\n > at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeServiceTests.testPluginUpgradeFailure(MetaDataIndexUpgradeServiceTests.java:141)\r\n > at java.lang.Thread.run(Thread.java:748)\r\n 1> [2017-10-14T16:43:29,258][INFO ][o.e.c.m.MetaDataIndexUpgradeServiceTests] [testPluginUpgrade]: before test\r\n 1> [2017-10-14T16:43:29,263][INFO ][o.e.c.m.MetaDataIndexUpgradeServiceTests] [testPluginUpgrade]: after test\r\n```\r\n",
"created_at": "2017-10-14T11:04:04Z"
},
{
"body": "Defer is what I meant by \"assume\". We allow any similarity name to be looked up without error. We don't actually use it, we just need to ensure the lookup does not fail. This is ok because we assume whatever was present when ES last shut down had all known similarities. ",
"created_at": "2017-11-10T18:21:58Z"
},
{
"body": "I think this happens because `IndexAnalyzers` iterates over the analyzers/normalizers when in `close()`. Go ahead and add back the empty map, but please add a comment noting the reason?",
"created_at": "2017-11-10T19:22:59Z"
}
],
"title": "Fix upgrading indices which use a custom similarity plugin."
} | {
"commits": [
{
"message": "Fix upgrading indices which use a custom similarity plugin."
},
{
"message": "add/combine comments for the fake similarity map"
},
{
"message": "Fix wrong tabs"
},
{
"message": "Merge branch 'master' into review/26985"
}
],
"files": [
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.similarity.SimilarityService;\n+import org.elasticsearch.index.similarity.SimilarityProvider;\n import org.elasticsearch.indices.mapper.MapperRegistry;\n \n import java.util.AbstractMap;\n@@ -132,26 +133,52 @@ private static boolean isSupportedVersion(IndexMetaData indexMetaData, Version m\n */\n private void checkMappingsCompatibility(IndexMetaData indexMetaData) {\n try {\n- // We cannot instantiate real analysis server at this point because the node might not have\n- // been started yet. However, we don't really need real analyzers at this stage - so we can fake it\n+\n+ // We cannot instantiate real analysis server or similiarity service at this point because the node\n+ // might not have been started yet. However, we don't really need real analyzers or similarities at\n+ // this stage - so we can fake it using constant maps accepting every key.\n+ // This is ok because all used similarities and analyzers for this index were known before the upgrade.\n+ // Missing analyzers and similarities plugin will still trigger the apropriate error during the\n+ // actual upgrade.\n+\n IndexSettings indexSettings = new IndexSettings(indexMetaData, this.settings);\n- SimilarityService similarityService = new SimilarityService(indexSettings, null, Collections.emptyMap());\n+\n+ final Map<String, SimilarityProvider.Factory> similarityMap = new AbstractMap<String, SimilarityProvider.Factory>() {\n+ @Override\n+ public boolean containsKey(Object key) {\n+ return true;\n+ }\n+\n+ @Override\n+ public SimilarityProvider.Factory get(Object key) {\n+ assert key instanceof String : \"key must be a string but was: \" + key.getClass();\n+ return SimilarityService.BUILT_IN.get(SimilarityService.DEFAULT_SIMILARITY);\n+ }\n+\n+ // this entrySet impl isn't fully correct but necessary as SimilarityService will iterate\n+ // over all similarities\n+ @Override\n+ public Set<Entry<String, SimilarityProvider.Factory>> entrySet() {\n+ return Collections.emptySet();\n+ }\n+ };\n+ SimilarityService similarityService = new SimilarityService(indexSettings, null, similarityMap);\n final NamedAnalyzer fakeDefault = new NamedAnalyzer(\"default\", AnalyzerScope.INDEX, new Analyzer() {\n @Override\n protected TokenStreamComponents createComponents(String fieldName) {\n throw new UnsupportedOperationException(\"shouldn't be here\");\n }\n });\n- // this is just a fake map that always returns the same value for any possible string key\n- // also the entrySet impl isn't fully correct but we implement it since internally\n- // IndexAnalyzers will iterate over all analyzers to close them.\n+\n final Map<String, NamedAnalyzer> analyzerMap = new AbstractMap<String, NamedAnalyzer>() {\n @Override\n public NamedAnalyzer get(Object key) {\n assert key instanceof String : \"key must be a string but was: \" + key.getClass();\n return new NamedAnalyzer((String)key, AnalyzerScope.INDEX, fakeDefault.analyzer());\n }\n \n+ // this entrySet impl isn't fully correct but necessary as IndexAnalyzers will iterate\n+ // over all analyzers to close them\n @Override\n public Set<Entry<String, NamedAnalyzer>> entrySet() {\n return Collections.emptySet();",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java",
"status": "modified"
}
]
} |
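As background for what the mappings-compatibility check has to resolve: a custom similarity is declared by name under the index settings (and typically referenced from a field mapping); on startup the upgrade service rebuilds a SimilarityService per index, and before this fix it did so with no plugin similarities registered, hence the "Unknown Similarity type" failure. A sketch of the wiring, reusing the plugin's `position-similarity` type from the issue; the mapping type `test` and field `title` are made up for illustration:

```
curl -XPUT "localhost:9200/test_index" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "similarity": {
      "positionSimilarity": { "type": "position-similarity" }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": { "type": "text", "similarity": "positionSimilarity" }
      }
    }
  }
}'
```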
{
"body": "As part of #18567 relative paths were no longer used to make nested hits more consistent with normal hits, but the _source of nested document was forgotten. Only if the nested _source was filtered the full field names / paths were used.\r\n\r\nCloses #23090\r\n\r\nI wonder if we should backport this change to 6.0 branch? It is a breaking change.",
"comments": [
{
"body": "Thanks @martijnvg! Please backport if possible since 6x is still beta.",
"created_at": "2017-08-10T00:28:44Z"
},
{
"body": "+1 to backport to 6.0",
"created_at": "2017-08-10T11:22:37Z"
}
],
"number": 26102,
"title": "Unfiltered nested source should keep its full path"
} | {
"body": "Due to a change happened via #26102 to make the nested source consistent\r\nwith or without source filtering, the _source of a nested inner hit was\r\nalways wrapped in the parent path. This turned out to be not ideal for\r\nusers relying on the nested source, as it would require additional parsing\r\non the client side. This change fixes this, the _source of nested inner hits\r\nis now no longer wrapped by parent json objects, irregardless of whether \r\nthe _source is included as is or source filtering is used.\r\n\r\nInternally source filtering and highlighting relies on the fact that the\r\n_source of nested inner hits are accessible by its full field path, so\r\nin order to now break this, the conversion of the _source into its binary\r\nform is performed in FetchSourceSubPhase, after any potential source filtering\r\nis performed to make sure the structure of _source of the nested inner hit\r\nis consistent irregardless if source filtering is performed.\r\n\r\nPR for #26944\r\n",
"number": 26982,
"review_comments": [
{
"body": "Do we need to check `source.internalSourceRef() == null` before returning to ensure that for non-root hits its actually been set by the end of this method?",
"created_at": "2017-10-17T16:20:11Z"
},
{
"body": "`source.internalSourceRef()` could be null here now?",
"created_at": "2017-10-17T16:20:37Z"
},
{
"body": "Yes, I think so. In case of regular hits the source has already been set in the FetchPhase and I guess this check exists to ensure that the binary representation of the _source has really been set.",
"created_at": "2017-10-18T07:09:42Z"
},
{
"body": "It is null in the case `rootHit` is false, which can't be the case here. ",
"created_at": "2017-10-18T07:10:59Z"
},
{
"body": "I'd call this `nestedHit` and make it final",
"created_at": "2017-10-18T08:47:46Z"
},
{
"body": "It looks to me like it duplicates the logic of creating a XContentBuilder in a given type and then write the filtered source as map. Could it be something like this?\r\n\r\n``` \r\n...\r\nObject value = source.filter(fetchSourceContext);\r\ntry {\r\n if (nestedHit) {\r\n value = getNestedSource((Map<String, Object>) value, hitContext);\r\n }\r\n final int initialCapacity = Math.min(1024, source.internalSourceRef().length()); // deal with null here\r\n try (BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity)) {\r\n XContentBuilder builder = new XContentBuilder(source.sourceContentType().xContent(), streamOutput);\r\n builder.value(value);\r\n hitContext.hit().sourceRef(builder.bytes());\r\n }\r\n...\r\n```\r\n",
"created_at": "2017-10-18T08:52:49Z"
}
],
"title": "Return the _source of inner hit nested as is without wrapping it into its full path context"
} | {
"commits": [
{
"message": "Return the _source of inner hit nested as is without wrapping it into its full path context\n\nDue to a change happened via #26102 to make the nested source consistent\nwith or without source filtering, the _source of a nested inner hit was\nalways wrapped in the parent path. This turned out to be not ideal for\nusers relying on the nested source, as it would require additional parsing\non the client side. This change fixes this, the _source of nested inner hits\nis now no longer wrapped by parent json objects, irregardless of whether\nthe _source is included as is or source filtering is used.\n\nInternally source filtering and highlighting relies on the fact that the\n_source of nested inner hits are accessible by its full field path, so\nin order to now break this, the conversion of the _source into its binary\nform is performed in FetchSourceSubPhase, after any potential source filtering\nis performed to make sure the structure of _source of the nested inner hit\nis consistent irregardless if source filtering is performed.\n\nPR for #26944\n\nCloses #26944"
}
],
"files": [
{
"diff": "@@ -301,8 +301,6 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI\n }\n context.lookup().source().setSource(nestedSourceAsMap);\n XContentType contentType = tuple.v1();\n- BytesReference nestedSource = contentBuilder(contentType).map(nestedSourceAsMap).bytes();\n- context.lookup().source().setSource(nestedSource);\n context.lookup().source().setSourceContentType(contentType);\n }\n return new SearchHit(nestedTopDocId, uid.id(), documentMapper.typeText(), nestedIdentity, searchFields);",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -20,13 +20,20 @@\n package org.elasticsearch.search.fetch.subphase;\n \n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.lookup.SourceLookup;\n \n import java.io.IOException;\n+import java.io.UncheckedIOException;\n+import java.util.Map;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.contentBuilder;\n \n public final class FetchSourceSubPhase implements FetchSubPhase {\n \n@@ -35,29 +42,40 @@ public void hitExecute(SearchContext context, HitContext hitContext) {\n if (context.sourceRequested() == false) {\n return;\n }\n+ final boolean nestedHit = hitContext.hit().getNestedIdentity() != null;\n SourceLookup source = context.lookup().source();\n FetchSourceContext fetchSourceContext = context.fetchSourceContext();\n assert fetchSourceContext.fetchSource();\n- if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) {\n- hitContext.hit().sourceRef(source.internalSourceRef());\n- return;\n+ if (nestedHit == false) {\n+ if (fetchSourceContext.includes().length == 0 && fetchSourceContext.excludes().length == 0) {\n+ hitContext.hit().sourceRef(source.internalSourceRef());\n+ return;\n+ }\n+ if (source.internalSourceRef() == null) {\n+ throw new IllegalArgumentException(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n+ \"for index [\" + context.indexShard().shardId().getIndexName() + \"]\");\n+ }\n }\n \n- if (source.internalSourceRef() == null) {\n- throw new IllegalArgumentException(\"unable to fetch fields from _source field: _source is disabled in the mappings \" +\n- \"for index [\" + context.indexShard().shardId().getIndexName() + \"]\");\n+ Object value = source.filter(fetchSourceContext);\n+ if (nestedHit) {\n+ value = getNestedSource((Map<String, Object>) value, hitContext);\n }\n-\n- final Object value = source.filter(fetchSourceContext);\n try {\n- final int initialCapacity = Math.min(1024, source.internalSourceRef().length());\n+ final int initialCapacity = nestedHit ? 1024 : Math.min(1024, source.internalSourceRef().length());\n BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity);\n XContentBuilder builder = new XContentBuilder(source.sourceContentType().xContent(), streamOutput);\n builder.value(value);\n hitContext.hit().sourceRef(builder.bytes());\n } catch (IOException e) {\n throw new ElasticsearchException(\"Error filtering source\", e);\n }\n+ }\n \n+ private Map<String, Object> getNestedSource(Map<String, Object> sourceAsMap, HitContext hitContext) {\n+ for (SearchHit.NestedIdentity o = hitContext.hit().getNestedIdentity(); o != null; o = o.getChild()) {\n+ sourceAsMap = (Map<String, Object>) sourceAsMap.get(o.getField().string());\n+ }\n+ return sourceAsMap;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/FetchSourceSubPhase.java",
"status": "modified"
},
{
"diff": "@@ -729,7 +729,7 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(1L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(1));\n+ assertThat(extractValue(\"date\", searchHits.getAt(0).getSourceAsMap()), equalTo(1));\n \n bucket = terms.getBucketByKey(\"b\");\n assertThat(bucket.getDocCount(), equalTo(2L));\n@@ -738,10 +738,10 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(2L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n- assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(2));\n+ assertThat(extractValue(\"date\", searchHits.getAt(0).getSourceAsMap()), equalTo(2));\n assertThat(searchHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(1).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(extractValue(\"comments.date\", searchHits.getAt(1).getSourceAsMap()), equalTo(3));\n+ assertThat(extractValue(\"date\", searchHits.getAt(1).getSourceAsMap()), equalTo(3));\n \n bucket = terms.getBucketByKey(\"c\");\n assertThat(bucket.getDocCount(), equalTo(1L));\n@@ -750,7 +750,7 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(1L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n- assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(4));\n+ assertThat(extractValue(\"date\", searchHits.getAt(0).getSourceAsMap()), equalTo(4));\n }\n \n public void testTopHitsInSecondLayerNested() throws Exception {\n@@ -803,49 +803,49 @@ public void testTopHitsInSecondLayerNested() throws Exception {\n assertThat(topReviewers.getHits().getHits().length, equalTo(7));\n \n assertThat(topReviewers.getHits().getAt(0).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(0).getSourceAsMap()), equalTo(\"user a\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(0).getSourceAsMap()), equalTo(\"user a\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n \n assertThat(topReviewers.getHits().getAt(1).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(1).getSourceAsMap()), equalTo(\"user b\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(1).getSourceAsMap()), equalTo(\"user b\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getOffset(), equalTo(0));\n 
assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getOffset(), equalTo(1));\n \n assertThat(topReviewers.getHits().getAt(2).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(2).getSourceAsMap()), equalTo(\"user c\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(2).getSourceAsMap()), equalTo(\"user c\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getOffset(), equalTo(2));\n \n assertThat(topReviewers.getHits().getAt(3).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(3).getSourceAsMap()), equalTo(\"user c\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(3).getSourceAsMap()), equalTo(\"user c\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getOffset(), equalTo(0));\n \n assertThat(topReviewers.getHits().getAt(4).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(4).getSourceAsMap()), equalTo(\"user d\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(4).getSourceAsMap()), equalTo(\"user d\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getOffset(), equalTo(1));\n \n assertThat(topReviewers.getHits().getAt(5).getId(), equalTo(\"1\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(5).getSourceAsMap()), equalTo(\"user e\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(5).getSourceAsMap()), equalTo(\"user e\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getOffset(), equalTo(2));\n \n assertThat(topReviewers.getHits().getAt(6).getId(), equalTo(\"2\"));\n- assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(6).getSourceAsMap()), equalTo(\"user f\"));\n+ assertThat(extractValue(\"name\", topReviewers.getHits().getAt(6).getSourceAsMap()), equalTo(\"user f\"));\n 
assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n@@ -901,7 +901,7 @@ public void testNestedFetchFeatures() {\n assertThat(field.getValue().toString(), equalTo(\"5\"));\n \n assertThat(searchHit.getSourceAsMap().size(), equalTo(1));\n- assertThat(extractValue(\"comments.message\", searchHit.getSourceAsMap()), equalTo(\"some comment\"));\n+ assertThat(extractValue(\"message\", searchHit.getSourceAsMap()), equalTo(\"some comment\"));\n }\n \n public void testTopHitsInNested() throws Exception {\n@@ -934,7 +934,7 @@ public void testTopHitsInNested() throws Exception {\n for (int j = 0; j < 3; j++) {\n assertThat(searchHits.getAt(j).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(j).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat(extractValue(\"comments.id\", searchHits.getAt(j).getSourceAsMap()), equalTo(0));\n+ assertThat(extractValue(\"id\", searchHits.getAt(j).getSourceAsMap()), equalTo(0));\n \n HighlightField highlightField = searchHits.getAt(j).getHighlightFields().get(\"comments.message\");\n assertThat(highlightField.getFragments().length, equalTo(1));",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java",
"status": "modified"
},
{
"diff": "@@ -596,9 +596,9 @@ public void testNestedSource() throws Exception {\n client().prepareIndex(\"index1\", \"message\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"message\", \"quick brown fox\")\n .startArray(\"comments\")\n- .startObject().field(\"message\", \"fox eat quick\").endObject()\n- .startObject().field(\"message\", \"fox ate rabbit x y z\").endObject()\n- .startObject().field(\"message\", \"rabbit got away\").endObject()\n+ .startObject().field(\"message\", \"fox eat quick\").field(\"x\", \"y\").endObject()\n+ .startObject().field(\"message\", \"fox ate rabbit x y z\").field(\"x\", \"y\").endObject()\n+ .startObject().field(\"message\", \"rabbit got away\").field(\"x\", \"y\").endObject()\n .endArray()\n .endObject()).get();\n refresh();\n@@ -614,9 +614,11 @@ public void testNestedSource() throws Exception {\n assertHitCount(response, 1);\n \n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(2L));\n- assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap()),\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap().size(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap().get(\"message\"),\n equalTo(\"fox eat quick\"));\n- assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap()),\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap().size(), equalTo(1));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap().get(\"message\"),\n equalTo(\"fox ate rabbit x y z\"));\n \n response = client().prepareSearch()\n@@ -627,9 +629,11 @@ public void testNestedSource() throws Exception {\n assertHitCount(response, 1);\n \n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(2L));\n- assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap()),\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap().size(), equalTo(2));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap().get(\"message\"),\n equalTo(\"fox eat quick\"));\n- assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap()),\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap().size(), equalTo(2));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap().get(\"message\"),\n equalTo(\"fox ate rabbit x y z\"));\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java",
"status": "modified"
},
{
"diff": "@@ -329,10 +329,8 @@ Top hits response snippet with a nested hit, which resides in the first slot of\n },\n \"_score\": 0.2876821,\n \"_source\": {\n- \"comments\": {\n- \"comment\": \"This car could have better brakes\", <3>\n- \"username\": \"baddriver007\"\n- }\n+ \"comment\": \"This car could have better brakes\", <3>\n+ \"username\": \"baddriver007\"\n }\n }\n ]",
"filename": "docs/reference/aggregations/metrics/tophits-aggregation.asciidoc",
"status": "modified"
},
{
"diff": "@@ -158,10 +158,8 @@ An example of a response snippet that could be generated from the above search r\n },\n \"_score\": 1.0,\n \"_source\": {\n- \"comments\" : {\n- \"author\": \"nik9000\",\n- \"number\": 2\n- }\n+ \"author\": \"nik9000\",\n+ \"number\": 2\n }\n }\n ]\n@@ -406,12 +404,8 @@ Which would look like:\n },\n \"_score\": 0.6931472,\n \"_source\": {\n- \"comments\": {\n- \"votes\": {\n- \"value\": 1,\n- \"voter\": \"kimchy\"\n- }\n- }\n+ \"value\": 1,\n+ \"voter\": \"kimchy\"\n }\n }\n ]",
"filename": "docs/reference/search/request/inner-hits.asciidoc",
"status": "modified"
}
]
} |
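The FetchSourceSubPhase change in the record above walks the hit's nested identity chain to pull the matching nested object out of the parent `_source` map, so a nested hit's `_source` no longer repeats the enclosing field names (as the updated doc snippets show). The standalone Java sketch below only illustrates that traversal idea; the class and method names are invented for the sketch, and it ignores the offset handling the real `getNestedSource` needs when a nested field holds an array of objects.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration: drill down from the parent _source map to the
// nested object named by the hit's nested path ("comments", "comments.reviewers", ...).
public final class NestedSourceSketch {

    @SuppressWarnings("unchecked")
    static Map<String, Object> nestedSource(Map<String, Object> sourceAsMap, List<String> nestedPath) {
        Map<String, Object> current = sourceAsMap;
        for (String field : nestedPath) {
            // The real code also uses the NestedIdentity offset to pick one
            // element when the field holds a list; assume a single object here.
            current = (Map<String, Object>) current.get(field);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> comment = new LinkedHashMap<>();
        comment.put("message", "some comment");

        Map<String, Object> source = new LinkedHashMap<>();
        source.put("title", "a post");
        source.put("comments", comment);

        // Prints {message=some comment}: only the nested object, without the
        // "comments" wrapper that the responses previously repeated.
        System.out.println(nestedSource(source, List.of("comments")));
    }
}
```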
{
"body": "Spinoff from #14121...\n\nToday, when ES detects it's using too much heap vs the configured indexing buffer (default 10% of JVM heap) it opens a new searcher to force Lucene to move the bytes to disk, clear version map, etc.\n\nBut this has the unexpected side effect of making newly indexed/deleted documents visible to future searches, which is not nice for users who are trying to prevent that, e.g. #3593.\n\nAs @uschindler suggested in that issue, I think ES should have two separate searchers from the engine: one for search visibility, only ever refreshed according to the user's wishes, and another, used internally for freeing up heap, version map lookups, etc. Lucene will be efficient about this, sharing segment readers across those two searchers.\n\nI haven't started on this (need to finish #14121 first!) so if someone wants to take it, please feel free!\n",
"comments": [
{
"body": "I'll try to tackle this ... it doesn't look too hard, given the changes in #14121 which already begins separate \"write indexing buffer to disk\" from \"refresh\".\n",
"created_at": "2016-01-06T14:46:27Z"
},
{
"body": "Note that with this change, a refresh only happens when the user expects it to: on the periodic (default: every 1 second) interval, or when refresh API is explicitly invoked.\n\nBut this is a biggish change to ES's behavior vs today, e.g. `flush`, `forceMerge`, moving indexing buffers to disk because they are too big, etc. does NOT refresh, and a good number of tests are angry because of this ... so I'm slowly inserting `refresh()` for such tests.\n\nIt also has implications for transient disk usage, since ES will \"secretly\" refresh less often, meaning we hold segments, which may now be merged or deleted, open for longer. Users who disable `refresh_interval` (set to -1) need to be careful to invoke refresh API at important times (after `flush` or `forceMerge`).\n\nStill I think it is important we make ES's semantics/behavior crisp and well defined: `refresh`, and `refresh` alone, makes recent index changes visible to searches. No other operation should do this as an \"accidental\" side effect.\n",
"created_at": "2016-01-06T18:56:38Z"
},
{
"body": "Just a note - since ES moves shard around on it’s own will, if we want to support this, we’ll have to make sure shard relocation (i.e., copy all the files) maintains this semantics. This will be tricky for many reasons - for example, the user may issue a refresh command when the target shard is not yet ready to receive it (engine closed). Today we refresh the target at the end of every recovery for this reason. Have a “refresh when I say and only when I say” is much more complicated then the current “refresh at least when I say” (but whenever you want as well) semantics. I’m not sure it’s worth the complexity imho.\n\n> On 06 Jan 2016, at 19:56, Michael McCandless notifications@github.com wrote:\n> \n> Note that with this change, a refresh only happens when the user expects it to: on the periodic (default: every 1 second) interval, or when refresh API is explicitly invoked.\n> \n> But this is a biggish change to ES's behavior vs today, e.g. flush, forceMerge, moving indexing buffers to disk because they are too big, etc. does NOT refresh, and a good number of tests are angry because of this ... so I'm slowly inserting refresh() for such tests.\n> \n> It also has implications for transient disk usage, since ES will \"secretly\" refresh less often, meaning we hold segments, which may now be merged or deleted, open for longer. Users who disable refresh_interval (set to -1) need to be careful to invoke refresh API at important times (after flush or forceMerge).\n> \n> Still I think it is important we make ES's semantics/behavior crisp and well defined: refresh, and refresh alone, makes recent index changes visible to searches. No other operation should do this as an \"accidental\" side effect.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"created_at": "2016-01-06T19:11:02Z"
},
{
"body": "Thanks @bleskes, I agree this is too difficult to achieve \"perfectly\", and I think recovery should be unchanged here (refresh when the shard is done recovering).\n\nSimilarly, primary and each replica are in general searching slightly different of a shard today, i.e. when each refreshes every 1s by default, it's a different set of indexed docs that become visible, in general.\n\nFile-based replication would make this easier ;)\n\nSo I think those should remain out of scope, here, and we should still state that ES is a \"refresh at least when I say\", but with this issue \"less often when I don't say\" than today.\n\nOr are you saying we shouldn't even try to make any change here, i.e. leave the engine doing a normal search-visible refresh when e.g. it wants to free up heap used by version map?\n",
"created_at": "2016-01-06T22:40:34Z"
},
{
"body": "> So I think those should remain out of scope, here, and we should still state that ES is a \"refresh at least when I say\", but with this issue \"less often when I don't say\" than today.\n\n+1 to this - I think for the replication case we should refresh since we have to but for stuff like clearing version maps etc. we can improve the situation.\n",
"created_at": "2016-01-11T20:22:12Z"
}
],
"number": 15768,
"title": "Use separate searchers for \"search visibility\" vs \"move indexing buffer to disk\""
} | {
"body": "Today, when ES detects it's using too much heap vs the configured indexing\r\nbuffer (default 10% of JVM heap) it opens a new searcher to force Lucene to move\r\nthe bytes to disk, clear version map, etc.\r\n\r\nBut this has the unexpected side effect of making newly indexed/deleted\r\ndocuments visible to future searches, which is not nice for users who are trying\r\nto prevent that, e.g. #3593.\r\n\r\nThis is also an indirect spinoff from #26802 where we potentially pay a big\r\nprice on rebuilding caches etc. when updates / realtime-get is used. We are\r\nrefreshing the internal reader for realtime gets which causes for instance\r\nglobal ords to be rebuild. I think we can gain quite a bit if we'd use a reader\r\nthat is only used for GETs and not for searches etc. that way we can also solve\r\nproblems of searchers being refreshed unexpectedly aside of replica recovery /\r\nrelocation.\r\n\r\nCloses #15768\r\nCloses #26912",
"number": 26972,
"review_comments": [
{
"body": "shall we name these by scope? i.e., getScopeSearcherManager?",
"created_at": "2017-10-11T14:51:53Z"
},
{
"body": "Nit - now that we call `createSearchManager` twice, shall we move the following line out of that method and into this constructor? it feels unrelated.\r\n\r\n```\r\nlastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);\r\n```",
"created_at": "2017-10-11T14:54:29Z"
},
{
"body": "Another nit - `searcherFactory` -> `searcherFactoryForSearch`?",
"created_at": "2017-10-11T14:57:06Z"
},
{
"body": "shall we make the refreshExternal an EnumSet of SearcherScope?",
"created_at": "2017-10-11T15:01:51Z"
},
{
"body": "++",
"created_at": "2017-10-11T15:04:19Z"
},
{
"body": "I wonder if we can now always use this \"refresh\" path here rather than the `IndexWriter.flush` - refresh is now much lighter? (not sure what other side effects are there).",
"created_at": "2017-10-11T15:21:45Z"
},
{
"body": "Doesn't the comment about releasing segments in tryRenewSyncCommit holds here too?",
"created_at": "2017-10-11T15:23:04Z"
},
{
"body": "Isn't tryRenewSyncCommit already doing it in this case?",
"created_at": "2017-10-11T15:24:58Z"
},
{
"body": "same comment - if `tryRenewSyncCommit` returned true, it already refreshed, no?",
"created_at": "2017-10-11T15:26:16Z"
},
{
"body": "nit - should we use try-with-resources to avoid leaking on failure?",
"created_at": "2017-10-11T15:28:22Z"
},
{
"body": "+1 I think that's good",
"created_at": "2017-10-11T18:36:08Z"
},
{
"body": "I think I did some copy past mess here. I will fix",
"created_at": "2017-10-11T18:36:26Z"
},
{
"body": "see my comment above sorry for the noise",
"created_at": "2017-10-11T18:36:35Z"
},
{
"body": "will do",
"created_at": "2017-10-11T18:36:43Z"
},
{
"body": "typo: unknown",
"created_at": "2017-10-11T18:41:39Z"
},
{
"body": ":) it's a good sign you bumped into this, isn't this changing semantics - before when there were no docs, we won't output the Field Stats sub object",
"created_at": "2017-10-12T09:14:18Z"
},
{
"body": "with the move to internal external (rather then get/search), I think this method can go and that also means that the Engine.acquireSearch(source, scope) can be made protected.",
"created_at": "2017-10-12T09:21:31Z"
},
{
"body": "++ sneaky and nice. I think we can extend InternalEngineTests.testSimpleOperations with:\r\n\r\n```\r\n // but, we can still get it (in realtime)\r\n getResult = engine.get(newGet(true, doc), searcherFactory);\r\n assertThat(getResult.exists(), equalTo(true));\r\n assertThat(getResult.docIdAndVersion(), notNullValue());\r\n getResult.release();\r\n\r\n+ // but not real time is not yet visible\r\n+ getResult = engine.get(newGet(false, doc), searcherFactory);\r\n+ assertThat(getResult.exists(), equalTo(false));\r\n+ getResult.release();\r\n\r\n````",
"created_at": "2017-10-12T09:26:05Z"
},
{
"body": "Never mind. I see it now :(",
"created_at": "2017-10-12T09:28:46Z"
},
{
"body": "I don't see the fall through? ",
"created_at": "2017-10-12T09:33:10Z"
},
{
"body": "++",
"created_at": "2017-10-12T09:37:49Z"
},
{
"body": "OK. Now I see why the engine changes can be done in 6.x. I would opt for keeping as it was (i.e. full refresh on flush) and do as a small following in 7.0 to change the refresh command in the engine + mark it as breaking etc. Just a suggestion.",
"created_at": "2017-10-12T09:39:47Z"
},
{
"body": "expand the diff below and you will see all the void 💃 ",
"created_at": "2017-10-12T12:09:46Z"
}
],
"title": "Use separate searchers for \"search visibility\" vs \"move indexing buffer to disk"
} | {
"commits": [
{
"message": "Use separate searchers for \"search visibility\" vs \"move indexing buffer to disk\"\n\nToday, when ES detects it's using too much heap vs the configured indexing\nbuffer (default 10% of JVM heap) it opens a new searcher to force Lucene to move\nthe bytes to disk, clear version map, etc.\n\nBut this has the unexpected side effect of making newly indexed/deleted\ndocuments visible to future searches, which is not nice for users who are trying\nto prevent that, e.g. #3593.\n\nThis is also an indirect spinoff from #26802 where we potentially pay a big\nprice on rebuilding caches etc. when updates / realtime-get is used. We are\nrefreshing the internal reader for realtime gets which causes for instance\nglobal ords to be rebuild. I think we can gain quite a bit if we'd use a reader\nthat is only used for GETs and not for searches etc. that way we can also solve\nproblems of searchers being refreshed unexpectedly aside of replica recovery /\nrelocation.\n\nCloses #15768\nCloses #26912"
},
{
"message": "remove unused import"
},
{
"message": "fix test"
},
{
"message": "take it one step further and make only Engine#refresh(String) do an external refresh"
},
{
"message": "add a unittest that simulates what an upsert does"
},
{
"message": "fix GC simulation"
},
{
"message": "align asserts with its comments"
},
{
"message": "add additional test and clarify comment"
},
{
"message": "Merge branch 'master' into use_interal_reader_manager"
}
],
"files": [
{
"diff": "@@ -305,9 +305,9 @@ private void buildFieldStatistics(XContentBuilder builder, Terms curTerms) throw\n long sumDocFreq = curTerms.getSumDocFreq();\n int docCount = curTerms.getDocCount();\n long sumTotalTermFrequencies = curTerms.getSumTotalTermFreq();\n- if (docCount > 0) {\n- assert ((sumDocFreq > 0)) : \"docCount >= 0 but sumDocFreq ain't!\";\n- assert ((sumTotalTermFrequencies > 0)) : \"docCount >= 0 but sumTotalTermFrequencies ain't!\";\n+ if (docCount >= 0) {\n+ assert ((sumDocFreq >= 0)) : \"docCount >= 0 but sumDocFreq ain't!\";\n+ assert ((sumTotalTermFrequencies >= 0)) : \"docCount >= 0 but sumTotalTermFrequencies ain't!\";\n builder.startObject(FieldStrings.FIELD_STATISTICS);\n builder.field(FieldStrings.SUM_DOC_FREQ, sumDocFreq);\n builder.field(FieldStrings.DOC_COUNT, docCount);",
"filename": "core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsResponse.java",
"status": "modified"
},
{
"diff": "@@ -90,7 +90,7 @@\n import java.util.concurrent.locks.Lock;\n import java.util.concurrent.locks.ReentrantLock;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n-import java.util.function.Function;\n+import java.util.function.BiFunction;\n \n public abstract class Engine implements Closeable {\n \n@@ -465,8 +465,9 @@ public enum SyncedFlushResult {\n PENDING_OPERATIONS\n }\n \n- protected final GetResult getFromSearcher(Get get, Function<String, Searcher> searcherFactory) throws EngineException {\n- final Searcher searcher = searcherFactory.apply(\"get\");\n+ protected final GetResult getFromSearcher(Get get, BiFunction<String, SearcherScope, Searcher> searcherFactory,\n+ SearcherScope scope) throws EngineException {\n+ final Searcher searcher = searcherFactory.apply(\"get\", scope);\n final DocIdAndVersion docIdAndVersion;\n try {\n docIdAndVersion = VersionsAndSeqNoResolver.loadDocIdAndVersion(searcher.reader(), get.uid());\n@@ -494,23 +495,40 @@ protected final GetResult getFromSearcher(Get get, Function<String, Searcher> se\n }\n }\n \n- public abstract GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException;\n+ public abstract GetResult get(Get get, BiFunction<String, SearcherScope, Searcher> searcherFactory) throws EngineException;\n+\n \n /**\n * Returns a new searcher instance. The consumer of this\n * API is responsible for releasing the returned searcher in a\n * safe manner, preferably in a try/finally block.\n *\n+ * @param source the source API or routing that triggers this searcher acquire\n+ *\n * @see Searcher#close()\n */\n public final Searcher acquireSearcher(String source) throws EngineException {\n+ return acquireSearcher(source, SearcherScope.EXTERNAL);\n+ }\n+\n+ /**\n+ * Returns a new searcher instance. The consumer of this\n+ * API is responsible for releasing the returned searcher in a\n+ * safe manner, preferably in a try/finally block.\n+ *\n+ * @param source the source API or routing that triggers this searcher acquire\n+ * @param scope the scope of this searcher ie. if the searcher will be used for get or search purposes\n+ *\n+ * @see Searcher#close()\n+ */\n+ public final Searcher acquireSearcher(String source, SearcherScope scope) throws EngineException {\n boolean success = false;\n /* Acquire order here is store -> manager since we need\n * to make sure that the store is not closed before\n * the searcher is acquired. 
*/\n store.incRef();\n try {\n- final SearcherManager manager = getSearcherManager(); // can never be null\n+ final SearcherManager manager = getSearcherManager(source, scope); // can never be null\n /* This might throw NPE but that's fine we will run ensureOpen()\n * in the catch block and throw the right exception */\n final IndexSearcher searcher = manager.acquire();\n@@ -536,6 +554,10 @@ public final Searcher acquireSearcher(String source) throws EngineException {\n }\n }\n \n+ public enum SearcherScope {\n+ EXTERNAL, INTERNAL\n+ }\n+\n /** returns the translog for this engine */\n public abstract Translog getTranslog();\n \n@@ -768,7 +790,7 @@ public final boolean refreshNeeded() {\n the store is closed so we need to make sure we increment it here\n */\n try {\n- return getSearcherManager().isSearcherCurrent() == false;\n+ return getSearcherManager(\"refresh_needed\", SearcherScope.EXTERNAL).isSearcherCurrent() == false;\n } catch (IOException e) {\n logger.error(\"failed to access searcher manager\", e);\n failEngine(\"failed to access searcher manager\", e);\n@@ -1306,7 +1328,7 @@ public void release() {\n }\n }\n \n- protected abstract SearcherManager getSearcherManager();\n+ protected abstract SearcherManager getSearcherManager(String source, SearcherScope scope);\n \n /**\n * Method to close the engine while the write lock is held.",
"filename": "core/src/main/java/org/elasticsearch/index/engine/Engine.java",
"status": "modified"
},
{
"diff": "@@ -93,7 +93,7 @@\n import java.util.concurrent.atomic.AtomicLong;\n import java.util.concurrent.locks.Lock;\n import java.util.concurrent.locks.ReentrantLock;\n-import java.util.function.Function;\n+import java.util.function.BiFunction;\n import java.util.function.LongSupplier;\n \n public class InternalEngine extends Engine {\n@@ -108,20 +108,18 @@ public class InternalEngine extends Engine {\n \n private final IndexWriter indexWriter;\n \n- private final SearcherFactory searcherFactory;\n- private final SearcherManager searcherManager;\n+ private final SearcherManager externalSearcherManager;\n+ private final SearcherManager internalSearcherManager;\n \n private final Lock flushLock = new ReentrantLock();\n private final ReentrantLock optimizeLock = new ReentrantLock();\n \n // A uid (in the form of BytesRef) to the version map\n // we use the hashed variant since we iterate over it and check removal and additions on existing keys\n- private final LiveVersionMap versionMap;\n+ private final LiveVersionMap versionMap = new LiveVersionMap();\n \n private final KeyedLock<BytesRef> keyedLock = new KeyedLock<>();\n \n- private final AtomicBoolean versionMapRefreshPending = new AtomicBoolean();\n-\n private volatile SegmentInfos lastCommittedSegmentInfos;\n \n private final IndexThrottle throttle;\n@@ -153,7 +151,6 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n maxUnsafeAutoIdTimestamp.set(Long.MAX_VALUE);\n }\n this.uidField = engineConfig.getIndexSettings().isSingleType() ? IdFieldMapper.NAME : UidFieldMapper.NAME;\n- this.versionMap = new LiveVersionMap();\n final TranslogDeletionPolicy translogDeletionPolicy = new TranslogDeletionPolicy(\n engineConfig.getIndexSettings().getTranslogRetentionSize().getBytes(),\n engineConfig.getIndexSettings().getTranslogRetentionAge().getMillis()\n@@ -163,15 +160,15 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n store.incRef();\n IndexWriter writer = null;\n Translog translog = null;\n- SearcherManager manager = null;\n+ SearcherManager externalSearcherManager = null;\n+ SearcherManager internalSearcherManager = null;\n EngineMergeScheduler scheduler = null;\n boolean success = false;\n try {\n this.lastDeleteVersionPruneTimeMSec = engineConfig.getThreadPool().relativeTimeInMillis();\n \n mergeScheduler = scheduler = new EngineMergeScheduler(engineConfig.getShardId(), engineConfig.getIndexSettings());\n throttle = new IndexThrottle();\n- this.searcherFactory = new SearchFactory(logger, isClosed, engineConfig);\n try {\n final SeqNoStats seqNoStats;\n switch (openMode) {\n@@ -215,20 +212,21 @@ public InternalEngine(EngineConfig engineConfig) throws EngineException {\n throw e;\n }\n }\n- manager = createSearcherManager();\n- this.searcherManager = manager;\n- this.versionMap.setManager(searcherManager);\n+ internalSearcherManager = createSearcherManager(new SearcherFactory(), false);\n+ externalSearcherManager = createSearcherManager(new SearchFactory(logger, isClosed, engineConfig), true);\n+ this.internalSearcherManager = internalSearcherManager;\n+ this.externalSearcherManager = externalSearcherManager;\n+ internalSearcherManager.addListener(versionMap);\n assert pendingTranslogRecovery.get() == false : \"translog recovery can't be pending before we set it\";\n // don't allow commits until we are done with recovering\n pendingTranslogRecovery.set(openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG);\n for (ReferenceManager.RefreshListener listener: 
engineConfig.getRefreshListeners()) {\n- searcherManager.addListener(listener);\n+ this.externalSearcherManager.addListener(listener);\n }\n success = true;\n } finally {\n if (success == false) {\n- IOUtils.closeWhileHandlingException(writer, translog, manager, scheduler);\n- versionMap.clear();\n+ IOUtils.closeWhileHandlingException(writer, translog, externalSearcherManager, internalSearcherManager, scheduler);\n if (isClosed.get() == false) {\n // failure we need to dec the store reference\n store.decRef();\n@@ -345,6 +343,7 @@ private void recoverFromTranslogInternal() throws IOException {\n logger.trace(\"flushing post recovery from translog. ops recovered [{}]. committed translog id [{}]. current id [{}]\",\n opsRecovered, translogGeneration == null ? null : translogGeneration.translogFileGeneration, translog.currentFileGeneration());\n flush(true, true);\n+ refresh(\"translog_recovery\");\n } else if (translog.isCurrent(translogGeneration) == false) {\n commitIndexWriter(indexWriter, translog, lastCommittedSegmentInfos.getUserData().get(Engine.SYNC_COMMIT_ID));\n refreshLastCommittedSegmentInfos();\n@@ -441,14 +440,16 @@ private String loadOrGenerateHistoryUUID(final IndexWriter writer, boolean force\n return uuid;\n }\n \n- private SearcherManager createSearcherManager() throws EngineException {\n+ private SearcherManager createSearcherManager(SearcherFactory searcherFactory, boolean readSegmentsInfo) throws EngineException {\n boolean success = false;\n SearcherManager searcherManager = null;\n try {\n try {\n final DirectoryReader directoryReader = ElasticsearchDirectoryReader.wrap(DirectoryReader.open(indexWriter), shardId);\n searcherManager = new SearcherManager(directoryReader, searcherFactory);\n- lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);\n+ if (readSegmentsInfo) {\n+ lastCommittedSegmentInfos = readLastCommittedSegmentInfos(searcherManager, store);\n+ }\n success = true;\n return searcherManager;\n } catch (IOException e) {\n@@ -468,10 +469,11 @@ private SearcherManager createSearcherManager() throws EngineException {\n }\n \n @Override\n- public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException {\n+ public GetResult get(Get get, BiFunction<String, SearcherScope, Searcher> searcherFactory) throws EngineException {\n assert Objects.equals(get.uid().field(), uidField) : get.uid().field();\n try (ReleasableLock ignored = readLock.acquire()) {\n ensureOpen();\n+ SearcherScope scope;\n if (get.realtime()) {\n VersionValue versionValue = versionMap.getUnderLock(get.uid());\n if (versionValue != null) {\n@@ -482,12 +484,16 @@ public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws\n throw new VersionConflictEngineException(shardId, get.type(), get.id(),\n get.versionType().explainConflictForReads(versionValue.version, get.version()));\n }\n- refresh(\"realtime_get\");\n+ refresh(\"realtime_get\", SearcherScope.INTERNAL);\n }\n+ scope = SearcherScope.INTERNAL;\n+ } else {\n+ // we expose what has been externally expose in a point in time snapshot via an explicit refresh\n+ scope = SearcherScope.EXTERNAL;\n }\n \n // no version, get the version from the index, we know that we refresh on flush\n- return getFromSearcher(get, searcherFactory);\n+ return getFromSearcher(get, searcherFactory, scope);\n }\n }\n \n@@ -1187,17 +1193,34 @@ private NoOpResult innerNoOp(final NoOp noOp) throws IOException {\n \n @Override\n public void refresh(String source) throws EngineException {\n+ 
refresh(source, SearcherScope.EXTERNAL);\n+ }\n+\n+ final void refresh(String source, SearcherScope scope) throws EngineException {\n // we obtain a read lock here, since we don't want a flush to happen while we are refreshing\n // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it)\n try (ReleasableLock lock = readLock.acquire()) {\n ensureOpen();\n- searcherManager.maybeRefreshBlocking();\n+ switch (scope) {\n+ case EXTERNAL:\n+ // even though we maintain 2 managers we really do the heavy-lifting only once.\n+ // the second refresh will only do the extra work we have to do for warming caches etc.\n+ externalSearcherManager.maybeRefreshBlocking();\n+ // the break here is intentional we never refresh both internal / external together\n+ break;\n+ case INTERNAL:\n+ internalSearcherManager.maybeRefreshBlocking();\n+ break;\n+\n+ default:\n+ throw new IllegalArgumentException(\"unknown scope: \" + scope);\n+ }\n } catch (AlreadyClosedException e) {\n failOnTragicEvent(e);\n throw e;\n } catch (Exception e) {\n try {\n- failEngine(\"refresh failed\", e);\n+ failEngine(\"refresh failed source[\" + source + \"]\", e);\n } catch (Exception inner) {\n e.addSuppressed(inner);\n }\n@@ -1208,36 +1231,20 @@ public void refresh(String source) throws EngineException {\n // We check for pruning in each delete request, but we also prune here e.g. in case a delete burst comes in and then no more deletes\n // for a long time:\n maybePruneDeletedTombstones();\n- versionMapRefreshPending.set(false);\n mergeScheduler.refreshConfig();\n }\n \n @Override\n public void writeIndexingBuffer() throws EngineException {\n-\n // we obtain a read lock here, since we don't want a flush to happen while we are writing\n // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it)\n try (ReleasableLock lock = readLock.acquire()) {\n ensureOpen();\n-\n- // TODO: it's not great that we secretly tie searcher visibility to \"freeing up heap\" here... really we should keep two\n- // searcher managers, one for searching which is only refreshed by the schedule the user requested (refresh_interval, or invoking\n- // refresh API), and another for version map interactions. 
See #15768.\n final long versionMapBytes = versionMap.ramBytesUsedForRefresh();\n final long indexingBufferBytes = indexWriter.ramBytesUsed();\n-\n- final boolean useRefresh = versionMapRefreshPending.get() || (indexingBufferBytes / 4 < versionMapBytes);\n- if (useRefresh) {\n- // The version map is using > 25% of the indexing buffer, so we do a refresh so the version map also clears\n- logger.debug(\"use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])\",\n- new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));\n- refresh(\"write indexing buffer\");\n- } else {\n- // Most of our heap is used by the indexing buffer, so we do a cheaper (just writes segments, doesn't open a new searcher) IW.flush:\n- logger.debug(\"use IndexWriter.flush to write indexing buffer (heap size=[{}]) since version map is small (heap size=[{}])\",\n- new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));\n- indexWriter.flush();\n- }\n+ logger.debug(\"use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])\",\n+ new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));\n+ refresh(\"write indexing buffer\", SearcherScope.INTERNAL);\n } catch (AlreadyClosedException e) {\n failOnTragicEvent(e);\n throw e;\n@@ -1302,10 +1309,11 @@ final boolean tryRenewSyncCommit() {\n maybeFailEngine(\"renew sync commit\", ex);\n throw new EngineException(shardId, \"failed to renew sync commit\", ex);\n }\n- if (renewed) { // refresh outside of the write lock\n- refresh(\"renew sync commit\");\n+ if (renewed) {\n+ // refresh outside of the write lock\n+ // we have to refresh internal searcher here to ensure we release unreferenced segments.\n+ refresh(\"renew sync commit\", SearcherScope.INTERNAL);\n }\n-\n return renewed;\n }\n \n@@ -1347,7 +1355,7 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti\n commitIndexWriter(indexWriter, translog, null);\n logger.trace(\"finished commit for flush\");\n // we need to refresh in order to clear older version values\n- refresh(\"version_table_flush\");\n+ refresh(\"version_table_flush\", SearcherScope.INTERNAL);\n translog.trimUnreferencedReaders();\n } catch (Exception e) {\n throw new FlushFailedEngineException(shardId, e);\n@@ -1651,8 +1659,9 @@ protected final void closeNoLock(String reason, CountDownLatch closedLatch) {\n assert rwl.isWriteLockedByCurrentThread() || failEngineLock.isHeldByCurrentThread() : \"Either the write lock must be held or the engine must be currently be failing itself\";\n try {\n this.versionMap.clear();\n+ internalSearcherManager.removeListener(versionMap);\n try {\n- IOUtils.close(searcherManager);\n+ IOUtils.close(externalSearcherManager, internalSearcherManager);\n } catch (Exception e) {\n logger.warn(\"Failed to close SearcherManager\", e);\n }\n@@ -1684,8 +1693,15 @@ protected final void closeNoLock(String reason, CountDownLatch closedLatch) {\n }\n \n @Override\n- protected SearcherManager getSearcherManager() {\n- return searcherManager;\n+ protected SearcherManager getSearcherManager(String source, SearcherScope scope) {\n+ switch (scope) {\n+ case INTERNAL:\n+ return internalSearcherManager;\n+ case EXTERNAL:\n+ return externalSearcherManager;\n+ default:\n+ throw new IllegalStateException(\"unknown scope: \" + scope);\n+ }\n }\n \n private Releasable acquireLock(BytesRef uid) {\n@@ -1698,7 +1714,7 @@ private Releasable acquireLock(Term uid) {\n \n private long 
loadCurrentVersionFromIndex(Term uid) throws IOException {\n assert incrementIndexVersionLookup();\n- try (Searcher searcher = acquireSearcher(\"load_version\")) {\n+ try (Searcher searcher = acquireSearcher(\"load_version\", SearcherScope.INTERNAL)) {\n return VersionsAndSeqNoResolver.loadVersion(searcher.reader(), uid);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -59,8 +59,6 @@ private static class Maps {\n \n private volatile Maps maps = new Maps();\n \n- private ReferenceManager<?> mgr;\n-\n /** Bytes consumed for each BytesRef UID:\n * In this base value, we account for the {@link BytesRef} object itself as\n * well as the header of the byte[] array it holds, and some lost bytes due\n@@ -98,21 +96,6 @@ private static class Maps {\n /** Tracks bytes used by tombstones (deletes) */\n final AtomicLong ramBytesUsedTombstones = new AtomicLong();\n \n- /** Sync'd because we replace old mgr. */\n- synchronized void setManager(ReferenceManager<?> newMgr) {\n- if (mgr != null) {\n- mgr.removeListener(this);\n- }\n- mgr = newMgr;\n-\n- // In case InternalEngine closes & opens a new IndexWriter/SearcherManager, all deletes are made visible, so we clear old and\n- // current here. This is safe because caller holds writeLock here (so no concurrent adds/deletes can be happeninge):\n- maps = new Maps();\n-\n- // So we are notified when reopen starts and finishes\n- mgr.addListener(this);\n- }\n-\n @Override\n public void beforeRefresh() throws IOException {\n // Start sending all updates after this point to the new\n@@ -249,11 +232,6 @@ synchronized void clear() {\n // and this will lead to an assert trip. Presumably it's fine if our ramBytesUsedTombstones is non-zero after clear since the index\n // is being closed:\n //ramBytesUsedTombstones.set(0);\n-\n- if (mgr != null) {\n- mgr.removeListener(this);\n- mgr = null;\n- }\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/index/engine/LiveVersionMap.java",
"status": "modified"
},
{
"diff": "@@ -1005,6 +1005,7 @@ public Engine.CommitId flush(FlushRequest request) {\n }\n final long time = System.nanoTime();\n final Engine.CommitId commitId = engine.flush(force, waitIfOngoing);\n+ engine.refresh(\"flush\"); // TODO this is technically wrong we should remove this in 7.0\n flushMetric.inc(System.nanoTime() - time);\n return commitId;\n }\n@@ -1032,8 +1033,12 @@ public void forceMerge(ForceMergeRequest forceMerge) throws IOException {\n if (logger.isTraceEnabled()) {\n logger.trace(\"force merge with {}\", forceMerge);\n }\n- getEngine().forceMerge(forceMerge.flush(), forceMerge.maxNumSegments(),\n+ Engine engine = getEngine();\n+ engine.forceMerge(forceMerge.flush(), forceMerge.maxNumSegments(),\n forceMerge.onlyExpungeDeletes(), false, false);\n+ if (forceMerge.flush()) {\n+ engine.refresh(\"force_merge\"); // TODO this is technically wrong we should remove this in 7.0\n+ }\n }\n \n /**\n@@ -1046,9 +1051,12 @@ public org.apache.lucene.util.Version upgrade(UpgradeRequest upgrade) throws IOE\n }\n org.apache.lucene.util.Version previousVersion = minimumCompatibleVersion();\n // we just want to upgrade the segments, not actually forge merge to a single segment\n- getEngine().forceMerge(true, // we need to flush at the end to make sure the upgrade is durable\n+ final Engine engine = getEngine();\n+ engine.forceMerge(true, // we need to flush at the end to make sure the upgrade is durable\n Integer.MAX_VALUE, // we just want to upgrade the segments, not actually optimize to a single segment\n false, true, upgrade.upgradeOnlyAncientSegments());\n+ engine.refresh(\"upgrade\"); // TODO this is technically wrong we should remove this in 7.0\n+\n org.apache.lucene.util.Version version = minimumCompatibleVersion();\n if (logger.isTraceEnabled()) {\n logger.trace(\"upgraded segments for {} from version {} to version {}\", shardId, previousVersion, version);\n@@ -1127,11 +1135,14 @@ public void failShard(String reason, @Nullable Exception e) {\n // fail the engine. This will cause this shard to also be removed from the node's index service.\n getEngine().failEngine(reason, e);\n }\n-\n public Engine.Searcher acquireSearcher(String source) {\n+ return acquireSearcher(source, Engine.SearcherScope.EXTERNAL);\n+ }\n+\n+ private Engine.Searcher acquireSearcher(String source, Engine.SearcherScope scope) {\n readAllowed();\n final Engine engine = getEngine();\n- final Engine.Searcher searcher = engine.acquireSearcher(source);\n+ final Engine.Searcher searcher = engine.acquireSearcher(source, scope);\n boolean success = false;\n try {\n final Engine.Searcher wrappedSearcher = searcherWrapper == null ? searcher : searcherWrapper.wrap(searcher);",
"filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java",
"status": "modified"
},
{
"diff": "@@ -86,6 +86,7 @@\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver;\n import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.DocIdAndSeqNo;\n@@ -942,7 +943,7 @@ public void testConcurrentGetAndFlush() throws Exception {\n engine.index(indexForDoc(doc));\n \n final AtomicReference<Engine.GetResult> latestGetResult = new AtomicReference<>();\n- final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n latestGetResult.set(engine.get(newGet(true, doc), searcherFactory));\n final AtomicBoolean flushFinished = new AtomicBoolean(false);\n final CyclicBarrier barrier = new CyclicBarrier(2);\n@@ -977,7 +978,7 @@ public void testSimpleOperations() throws Exception {\n MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0));\n searchResult.close();\n \n- final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n \n // create a document\n Document document = testDocumentWithTextField();\n@@ -1002,6 +1003,12 @@ public void testSimpleOperations() throws Exception {\n assertThat(getResult.docIdAndVersion(), notNullValue());\n getResult.release();\n \n+ // but not real time is not yet visible\n+ getResult = engine.get(newGet(false, doc), searcherFactory);\n+ assertThat(getResult.exists(), equalTo(false));\n+ getResult.release();\n+\n+\n // refresh and it should be there\n engine.refresh(\"test\");\n \n@@ -1237,6 +1244,7 @@ public void testRenewSyncFlush() throws Exception {\n assertTrue(engine.tryRenewSyncCommit());\n assertEquals(1, engine.segments(false).size());\n } else {\n+ engine.refresh(\"test\");\n assertBusy(() -> assertEquals(1, engine.segments(false).size()));\n }\n assertEquals(store.readLastCommittedSegmentsInfo().getUserData().get(Engine.SYNC_COMMIT_ID), syncId);\n@@ -1311,6 +1319,38 @@ public void testVersioningNewCreate() throws IOException {\n assertThat(indexResult.getVersion(), equalTo(1L));\n }\n \n+ /**\n+ * simulates what an upsert / update API does\n+ */\n+ public void testVersionedUpdate() throws IOException {\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n+\n+ ParsedDocument doc = testParsedDocument(\"1\", null, testDocument(), B_1, null);\n+ Engine.Index create = new Engine.Index(newUid(doc), doc, Versions.MATCH_DELETED);\n+ Engine.IndexResult indexResult = engine.index(create);\n+ assertThat(indexResult.getVersion(), equalTo(1L));\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) {\n+ assertEquals(1, get.version());\n+ }\n+\n+ Engine.Index update_1 = new Engine.Index(newUid(doc), doc, 1);\n+ Engine.IndexResult update_1_result = engine.index(update_1);\n+ assertThat(update_1_result.getVersion(), equalTo(2L));\n+\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) {\n+ assertEquals(2, get.version());\n+ }\n+\n+ Engine.Index update_2 = new Engine.Index(newUid(doc), doc, 2);\n+ Engine.IndexResult 
update_2_result = engine.index(update_2);\n+ assertThat(update_2_result.getVersion(), equalTo(3L));\n+\n+ try (Engine.GetResult get = engine.get(new Engine.Get(true, doc.type(), doc.id(), create.uid()), searcherFactory)) {\n+ assertEquals(3, get.version());\n+ }\n+\n+ }\n+\n public void testVersioningNewIndex() throws IOException {\n ParsedDocument doc = testParsedDocument(\"1\", null, testDocument(), B_1, null);\n Engine.Index index = indexForDoc(doc);\n@@ -1337,12 +1377,14 @@ public void testForceMerge() throws IOException {\n assertEquals(numDocs, test.reader().numDocs());\n }\n engine.forceMerge(true, 1, false, false, false);\n+ engine.refresh(\"test\");\n assertEquals(engine.segments(true).size(), 1);\n \n ParsedDocument doc = testParsedDocument(Integer.toString(0), null, testDocument(), B_1, null);\n Engine.Index index = indexForDoc(doc);\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, true, false, false); //expunge deletes\n+ engine.refresh(\"test\");\n \n assertEquals(engine.segments(true).size(), 1);\n try (Engine.Searcher test = engine.acquireSearcher(\"test\")) {\n@@ -1354,7 +1396,7 @@ public void testForceMerge() throws IOException {\n index = indexForDoc(doc);\n engine.delete(new Engine.Delete(index.type(), index.id(), index.uid()));\n engine.forceMerge(true, 10, false, false, false); //expunge deletes\n-\n+ engine.refresh(\"test\");\n assertEquals(engine.segments(true).size(), 1);\n try (Engine.Searcher test = engine.acquireSearcher(\"test\")) {\n assertEquals(numDocs - 2, test.reader().numDocs());\n@@ -1561,6 +1603,7 @@ private void assertOpsOnReplica(List<Engine.Operation> ops, InternalEngine repli\n }\n if (randomBoolean()) {\n engine.flush();\n+ engine.refresh(\"test\");\n }\n firstOp = false;\n }\n@@ -1716,11 +1759,12 @@ private int assertOpsOnPrimary(List<Engine.Operation> ops, long currentOpVersion\n }\n if (randomBoolean()) {\n engine.flush();\n+ engine.refresh(\"test\");\n }\n \n if (rarely()) {\n // simulate GC deletes\n- engine.refresh(\"gc_simulation\");\n+ engine.refresh(\"gc_simulation\", Engine.SearcherScope.INTERNAL);\n engine.clearDeletedTombstones();\n if (docDeleted) {\n lastOpVersion = Versions.NOT_FOUND;\n@@ -1805,6 +1849,7 @@ public void testNonInternalVersioningOnPrimary() throws IOException {\n }\n if (randomBoolean()) {\n engine.flush();\n+ engine.refresh(\"test\");\n }\n }\n \n@@ -1884,7 +1929,7 @@ class OpAndVersion {\n ParsedDocument doc = testParsedDocument(\"1\", null, testDocument(), bytesArray(\"\"), null);\n final Term uidTerm = newUid(doc);\n engine.index(indexForDoc(doc));\n- final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n for (int i = 0; i < thread.length; i++) {\n thread[i] = new Thread(() -> {\n startGun.countDown();\n@@ -2314,7 +2359,7 @@ public void testEnableGcDeletes() throws Exception {\n Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), newMergePolicy(), null))) {\n engine.config().setEnableGcDeletes(false);\n \n- final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n \n // Add document\n Document document = testDocument();\n@@ -2644,6 +2689,7 @@ public void testTranslogReplay() throws IOException {\n assertThat(indexResult.getVersion(), equalTo(1L));\n if (flush) {\n engine.flush();\n+ 
engine.refresh(\"test\");\n }\n \n doc = testParsedDocument(Integer.toString(randomId), null, testDocument(), new BytesArray(\"{}\"), null);\n@@ -3847,7 +3893,7 @@ public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOExceptio\n document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE));\n final ParsedDocument doc = testParsedDocument(\"1\", null, document, B_1, null);\n final Term uid = newUid(doc);\n- final Function<String, Searcher> searcherFactory = engine::acquireSearcher;\n+ final BiFunction<String, Engine.SearcherScope, Searcher> searcherFactory = engine::acquireSearcher;\n for (int i = 0; i < numberOfOperations; i++) {\n if (randomBoolean()) {\n final Engine.Index index = new Engine.Index(\n@@ -4203,4 +4249,58 @@ public void testFillUpSequenceIdGapsOnRecovery() throws IOException {\n IOUtils.close(recoveringEngine);\n }\n }\n+\n+\n+ public void assertSameReader(Searcher left, Searcher right) {\n+ List<LeafReaderContext> leftLeaves = ElasticsearchDirectoryReader.unwrap(left.getDirectoryReader()).leaves();\n+ List<LeafReaderContext> rightLeaves = ElasticsearchDirectoryReader.unwrap(right.getDirectoryReader()).leaves();\n+ assertEquals(rightLeaves.size(), leftLeaves.size());\n+ for (int i = 0; i < leftLeaves.size(); i++) {\n+ assertSame(leftLeaves.get(i).reader(), rightLeaves.get(0).reader());\n+ }\n+ }\n+\n+ public void assertNotSameReader(Searcher left, Searcher right) {\n+ List<LeafReaderContext> leftLeaves = ElasticsearchDirectoryReader.unwrap(left.getDirectoryReader()).leaves();\n+ List<LeafReaderContext> rightLeaves = ElasticsearchDirectoryReader.unwrap(right.getDirectoryReader()).leaves();\n+ if (rightLeaves.size() == leftLeaves.size()) {\n+ for (int i = 0; i < leftLeaves.size(); i++) {\n+ if (leftLeaves.get(i).reader() != rightLeaves.get(0).reader()) {\n+ return; // all is well\n+ }\n+ }\n+ fail(\"readers are same\");\n+ }\n+ }\n+\n+ public void testRefreshScopedSearcher() throws IOException {\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n+ assertSameReader(getSearcher, searchSearcher);\n+ }\n+ for (int i = 0; i < 10; i++) {\n+ final String docId = Integer.toString(i);\n+ final ParsedDocument doc =\n+ testParsedDocument(docId, null, testDocumentWithTextField(), SOURCE, null);\n+ Engine.Index primaryResponse = indexForDoc(doc);\n+ engine.index(primaryResponse);\n+ }\n+ assertTrue(engine.refreshNeeded());\n+ engine.refresh(\"test\", Engine.SearcherScope.INTERNAL);\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n+ assertEquals(10, getSearcher.reader().numDocs());\n+ assertEquals(0, searchSearcher.reader().numDocs());\n+ assertNotSameReader(getSearcher, searchSearcher);\n+ }\n+\n+ engine.refresh(\"test\", Engine.SearcherScope.EXTERNAL);\n+\n+ try (Searcher getSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.INTERNAL);\n+ Searcher searchSearcher = engine.acquireSearcher(\"test\", Engine.SearcherScope.EXTERNAL)){\n+ assertEquals(10, getSearcher.reader().numDocs());\n+ assertEquals(10, searchSearcher.reader().numDocs());\n+ assertSameReader(getSearcher, searchSearcher);\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java",
"status": "modified"
},
{
"diff": "@@ -1160,7 +1160,7 @@ public void testRefreshMetric() throws IOException {\n indexDoc(shard, \"test\", \"test\");\n try (Engine.GetResult ignored = shard.get(new Engine.Get(true, \"test\", \"test\",\n new Term(IdFieldMapper.NAME, Uid.encodeId(\"test\"))))) {\n- assertThat(shard.refreshStats().getTotal(), equalTo(refreshCount + 1));\n+ assertThat(shard.refreshStats().getTotal(), equalTo(refreshCount));\n }\n closeShards(shard);\n }",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -270,7 +270,6 @@ public void testConcurrentRefresh() throws Exception {\n * Uses a bunch of threads to index, wait for refresh, and non-realtime get documents to validate that they are visible after waiting\n * regardless of what crazy sequence of events causes the refresh listener to fire.\n */\n- @TestLogging(\"_root:debug,org.elasticsearch.index.engine.Engine.DW:trace\")\n public void testLotsOfThreads() throws Exception {\n int threadCount = between(3, 10);\n maxListeners = between(1, threadCount * 2);",
"filename": "core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java",
"status": "modified"
}
]
} |
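A note on the searcher-scope test additions in the row above: the assertions compare the leaf (segment) readers of two acquired searchers by object identity to decide whether they share the same point-in-time view. Below is a minimal sketch of that comparison using plain Lucene APIs — no Elasticsearch types are assumed, and the readers are taken to come from wherever the two searchers were acquired:

```java
import java.util.List;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;

public final class ReaderIdentity {
    // True when both readers are built from exactly the same segment readers,
    // i.e. they represent the same point-in-time view of the index.
    static boolean sameSegments(DirectoryReader left, DirectoryReader right) {
        List<LeafReaderContext> l = left.leaves();
        List<LeafReaderContext> r = right.leaves();
        if (l.size() != r.size()) {
            return false;
        }
        for (int i = 0; i < l.size(); i++) {
            if (l.get(i).reader() != r.get(i).reader()) { // identity, not equals()
                return false;
            }
        }
        return true;
    }
}
```

Identity comparison (`!=`) rather than `equals` is the point: refreshing one scope installs brand-new segment readers even when the set of visible documents is unchanged.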
{
"body": "Looking into #25737 I notice that the `*_script` rest-api-specs [1] definition of path parts as required is inconsistent with others rest-api-specs.\r\n\r\nFor example [index](https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/index.json) defines two different `paths` : `\"paths\": [\"/{index}/{type}\", \"/{index}/{type}/{id}\"]` and sets `index` and `type` to required, leaving `id` as an optional path part (as it is not in both `paths`).\r\nHowever `get_script` although defining only a single path : `\"paths\": [ \"/_scripts/{id}\" ]`, sets `id` *and* `lang` as required.\r\n\r\nMy interpretation of the `required : true` is that the path part *must* be in all paths. \r\n\r\nAs till now the `required` has not been enforced by the rest test runner, this has gone undetected. However for #25737 I am considering enforcing `required` both for path parts and parameters. This means I will run into more inconsistent definitions of the `required` for path parts.\r\n\r\nMy question here is: is my understanding of `required` in the rest-api-specs correct? If so, I would proceed and correct all inconsistent definitions of `required` I run into during the implementation of #25737.\r\n\r\n[1]*_script rest-api-specs:\r\n[delete_script](https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json)\r\n[get_script](https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json)\r\n[put_script](https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json)",
"comments": [
{
"body": "hi @olcbean I think your thoughts are spot on, if we validate use of `required` we also need to fix those tests. @jdconrad can you shed some light on the scripting APIs and these inconsistencies please?",
"created_at": "2017-10-09T11:09:06Z"
},
{
"body": "Sure. Sounds like a bug that was introduced due to maintaining backwards compatibility at some point. Now, that there's only a single path for each delete/get/put script commands it probably makes sense to add required for id.",
"created_at": "2017-10-09T15:53:14Z"
},
{
"body": "`context` also needs to be documented instead of `lang` here: https://github.com/elastic/elasticsearch/blob/master/rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json#L14-L18",
"created_at": "2017-10-10T13:25:02Z"
}
],
"number": 26923,
"title": "Inconsistent API spec for `*_script`"
} | {
"body": "Removing inconsistencies in the resp api specs for the *_script by setting some path parts to optional.\r\n\r\nFixes #26923\r\n\r\nCC @javanna @jdconrad @Mpdreamz ",
"number": 26971,
"review_comments": [
{
"body": "Its not quite the `Script language` as `string`. It's a new parameter that will cause the passed script to be compiled and evaluated in a certain `ScriptContext`. \r\n\r\nSee: https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/script/ScriptModule.java#L41-L52\r\n\r\nThis should be an `enum` with multiple values:\r\n\r\n`search`, `aggs`, `executable`, `aggs_executable`, `update`, `ingest`, `filter`, `similarity`, `similarity_weight` & `template`. \r\n\r\nThats the current list for `master` `6.0` and `6.x` have a slightly smaller list:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/6.0/core/src/main/java/org/elasticsearch/script/ScriptModule.java#L41-L52\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/6.x/core/src/main/java/org/elasticsearch/script/ScriptModule.java#L41-L52\r\n\r\n\r\n\r\n",
"created_at": "2017-10-11T15:47:17Z"
},
{
"body": "An enum is not correct. These values are pluggable.",
"created_at": "2017-10-11T18:32:36Z"
},
{
"body": "@Mpdreamz thanks for putting it in context!",
"created_at": "2017-10-11T20:26:38Z"
},
{
"body": "Thanks @rjernst, @olcbean LGTM now 👍 ",
"created_at": "2017-10-11T21:30:51Z"
}
],
"title": "Fix inconsistencies in the rest api specs for *_script"
} | {
"commits": [
{
"message": "fix inconsistencies in the rest api specs for *_script\nre-adding POST for put_script"
},
{
"message": "correct the desc for context in rest api spec for put_script"
}
],
"files": [
{
"diff": "@@ -41,6 +41,7 @@ public RestPutStoredScriptAction(Settings settings, RestController controller) {\n \n controller.registerHandler(POST, \"/_scripts/{id}\", this);\n controller.registerHandler(PUT, \"/_scripts/{id}\", this);\n+ controller.registerHandler(POST, \"/_scripts/{id}/{context}\", this);\n controller.registerHandler(PUT, \"/_scripts/{id}/{context}\", this);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java",
"status": "modified"
},
{
"diff": "@@ -10,11 +10,6 @@\n \"type\" : \"string\",\n \"description\" : \"Script ID\",\n \"required\" : true\n- },\n- \"lang\" : {\n- \"type\" : \"string\",\n- \"description\" : \"Script language\",\n- \"required\" : true\n }\n },\n \"params\" : {",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json",
"status": "modified"
},
{
"diff": "@@ -10,11 +10,6 @@\n \"type\" : \"string\",\n \"description\" : \"Script ID\",\n \"required\" : true\n- },\n- \"lang\" : {\n- \"type\" : \"string\",\n- \"description\" : \"Script language\",\n- \"required\" : true\n }\n },\n \"params\" : {",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/get_script.json",
"status": "modified"
},
{
"diff": "@@ -11,10 +11,9 @@\n \"description\" : \"Script ID\",\n \"required\" : true\n },\n- \"lang\" : {\n+ \"context\" : {\n \"type\" : \"string\",\n- \"description\" : \"Script language\",\n- \"required\" : true\n+ \"description\" : \"Script context\"\n }\n },\n \"params\" : {",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version**: Version: 5.6.1, Build: 667b497/2017-09-14T19:22:05.189Z\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_144\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux NodeB 4.4.0-1013-aws #22-Ubuntu SMP Fri Mar 31 15:41:31 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWe have a cluster with 6 nodes.\r\nProbably due to network flakiness, some nodes started to loose connection with each other, during several minutes.\r\nAt least 2 nodes lost connection with the master => `NodeB` & `NodeA`.\r\n\r\nCluster went red, and stayed as is even after all the nodes came back to the cluster.\r\n\r\n```\r\n[2017-10-06T01:42:07,068][INFO ][o.e.c.r.a.AllocationService] [NodeC] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[index_1][1]] ...]).\r\n[2017-10-06T01:35:35,929][INFO ][o.e.c.r.a.AllocationService] [NodeC] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[index_2][0]] ...]).\r\n[2017-10-06T01:21:20,590][INFO ][o.e.c.r.a.AllocationService] [NodeC] Cluster health status changed from [YELLOW] to [RED] (reason: [{NodeB}{ItIC8RvWQk-K9xv4EtUH-g}{doC-pFtIRpmn-PaywU9AFg}{IpNodeB}{IpNodeB:9300}{availability_zone=us-east-1b, tag=histo} failed to ping, tried [3] times, each with maximum [30s] timeout]).\r\n[2017-10-06T01:20:50,553][INFO ][o.e.c.r.a.AllocationService] [NodeC] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{NodeA}{tAZvKePkTqiqP44v4g4L7g}{areeqGx9RiO_q3_vip_fYA}{IpNodeA}{IpNodeA:9300}{availability_zone=us-east-1c, tag=histo} failed to ping, tried [3] times, each with maximum [30s] timeout]).\r\n```\r\n\r\nWhen executing a `/_cat/indices`:\r\n\r\n```\r\n[2017-10-06T01:33:27,125][WARN ][r.suppressed ] path: /_cat/indices, params: {}\r\njava.lang.NullPointerException: null\r\n at org.elasticsearch.rest.action.cat.RestIndicesAction.buildTable(RestIndicesAction.java:368) ~[elasticsearch-5.6.1.jar:5.6.1]\r\n at org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:116) ~[elasticsearch-5.6.1.jar:5.6.1]\r\n at org.elasticsearch.rest.action.cat.RestIndicesAction$1$1$1.buildResponse(RestIndicesAction.java:113) ~[elasticsearch-5.6.1.jar:5.6.1]\r\n at org.elasticsearch.rest.action.RestResponseListener.processResponse(RestResponseListener.java:37) ~[elasticsearch-5.6.1.jar:5.6.1]\r\n at org.elasticsearch.rest.action.RestActionListener.onResponse(RestActionListener.java:47) [elasticsearch-5.6.1.jar:5.6.1]\r\n```\r\n\r\nBugging line is the following: [RestIndicesAction.java#L368](https://github.com/elastic/elasticsearch/blob/v5.6.1/core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java#L368)\r\n\r\n^ A restart of the node NodeA at 01:35:16 fixed the issue.\r\n\r\n**Some relevant logs (not exhaustive):**\r\n\r\n```\r\n[2017-10-06T01:35:56,904][WARN ][o.e.c.a.s.ShardStateAction] [NodeC] [index_1][2] received shard failed for shard id [[index_1][2]], allocation id [hLAZTvTCRWGJ_vBnpc5xbg], primary term [4], message [mark copy as stale]\r\n[2017-10-06T01:35:35,981][WARN ][o.e.c.a.s.ShardStateAction] [NodeC] [index_2][0] received shard failed for shard id [[index_2][0]], allocation id [IvPPBlcRRnSQUA43s9v0qw], primary term [4], message [mark copy as stale]\r\n[2017-10-06T01:35:10,053][WARN ][o.e.c.a.s.ShardStateAction] [NodeC] [index_2][1] received shard failed for shard id [[index_2][1]], allocation id [xRch14rfR_OvfQHYvPul-g], 
primary term [2], message [mark copy as stale]\r\n[2017-10-06T01:35:09,840][WARN ][o.e.c.a.s.ShardStateAction] [NodeC] [index_2][1] received shard failed for shard id [[index_2][1]], allocation id [7JpkTXDnSR-Z54p3t9dlTQ], primary term [1], message [failed to perform indices:data/write/bulk[s] on replica [index_2][1], node[0dPW5AaBR--KS7JRNB32yA], [R], s[STARTED], a[id=7JpkTXDnSR-Z54p3t9dlTQ]], failure [RemoteTransportException[[NodeC][IpNodeC:9300][indices:data/write/bulk[s][r]]]; nested: IllegalStateException[active primary shard cannot be a replication target before relocation hand off [index_2][1], node[0dPW5AaBR--KS7JRNB32yA], [P], s[STARTED], a[id=7JpkTXDnSR-Z54p3t9dlTQ], state is [STARTED]]; ]\r\n[2017-10-06T01:35:09,840][WARN ][o.e.c.a.s.ShardStateAction] [NodeC] [index_2][1] received shard failed for shard id [[index_2][1]], allocation id [7JpkTXDnSR-Z54p3t9dlTQ], primary term [1], message [failed to perform indices:data/write/bulk[s] on replica [index_2][1], node[0dPW5AaBR--KS7JRNB32yA], [R], s[STARTED], a[id=7JpkTXDnSR-Z54p3t9dlTQ]], failure [RemoteTransportException[[NodeC][IpNodeC:9300][indices:data/write/bulk[s][r]]]; nested: IllegalStateException[active primary shard cannot be a replication target before relocation hand off [index_2][1], node[0dPW5AaBR--KS7JRNB32yA], [P], s[STARTED], a[id=7JpkTXDnSR-Z54p3t9dlTQ], state is [STARTED]]; ]\r\n[2017-10-06T01:35:09,894][WARN ][o.e.a.b.TransportShardBulkAction] [NodeF] [[index_2][1]] failed to perform indices:data/write/bulk[s] on replica [index_2][1], node[0dPW5AaBR--KS7JRNB32yA], [R], s[STARTED], a[id=7JpkTXDnSR-Z54p3t9dlTQ]\r\n[2017-10-06T01:35:04,107][WARN ][o.e.a.b.TransportShardBulkAction] [NodeD] [[index_2][0]] failed to perform indices:data/write/bulk[s] on replica [index_2][0], node[l-TN-YQMThO8V_srAwknTg], [R], s[STARTED], a[id=IvPPBlcRRnSQUA43s9v0qw]\r\n[2017-10-06T01:21:20,553][WARN ][o.e.d.z.PublishClusterStateAction] [NodeC] timed out waiting for all nodes to process published state [423] (timeout [30s], pending nodes: [{NodeD}{of6-ePXOT6uGk5TDKS1h-A}{IGu1YUCSRNiPOUgcq8HClw}{IpNodeD}{IpNodeD:9300}{availability_zone=us-east-1c, tag=fresh}, {NodeB}{ItIC8RvWQk-K9xv4EtUH-g}{doC-pFtIRpmn-PaywU9AFg}{IpNodeB}{IpNodeB:9300}{availability_zone=us-east-1b, tag=histo}, {NodeE}{_2uc635bS66TcqHVXjWpLA}{SzgLC8b0SpegMwaKLkPhgA}{IpNodeE}{IpNodeE:9300}{availability_zone=us-east-1a, tag=histo}])\r\n[2017-10-06T01:21:20,594][INFO ][o.e.c.s.ClusterService ] [NodeF] removed {{NodeB}{ItIC8RvWQk-K9xv4EtUH-g}{doC-pFtIRpmn-PaywU9AFg}{IpNodeB}{IpNodeB:9300}{availability_zone=us-east-1b, tag=histo},}, reason: zen-disco-receive(from master [master {NodeC}{0dPW5AaBR--KS7JRNB32yA}{bvYMHcw-QZ6xTN8SMaaMHw}{IpNodeC}{IpNodeC:9300}{availability_zone=us-east-1b, tag=fresh} committed version [424]])\r\n[2017-10-06T01:21:20,579][WARN ][o.e.c.s.ClusterService ] [NodeC] cluster state update task [zen-disco-node-failed({NodeA}{tAZvKePkTqiqP44v4g4L7g}{areeqGx9RiO_q3_vip_fYA}{IpNodeA}{IpNodeA:9300}{availability_zone=us-east-1c, tag=histo}), reason(failed to ping, tried [3] times, each with maximum [30s] timeout)[{NodeA}{tAZvKePkTqiqP44v4g4L7g}{areeqGx9RiO_q3_vip_fYA}{IpNodeA}{IpNodeA:9300}{availability_zone=us-east-1c, tag=histo} failed to ping, tried [3] times, each with maximum [30s] timeout]] took [30s] above the warn threshold of 30s\r\n```",
"comments": [
{
"body": "Is that an expected behaviour? Would it help against this kind of issue to have dedicated master nodes?\r\n\r\nIt's hard to say how we could reproduce unfortunately. Maybe you guys have a better idea.\r\n",
"created_at": "2017-10-10T09:18:50Z"
},
{
"body": "This happens when a primary shard is being relocated. I think, we can reproduce as follows.\r\n\r\n1. Have 2+ nodes running\r\n2. Create an index with `number_of_replicas >= 1, number_of_shards = 1`\r\n3. Continuously execute `GET /_cat/indices`\r\n4. Shutdown a node that contains the primary shard\r\n",
"created_at": "2017-10-10T15:48:38Z"
},
{
"body": "@jasontedor, Can I start working on this?",
"created_at": "2017-10-10T15:49:01Z"
},
{
"body": "@dnhatn Please do.",
"created_at": "2017-10-10T15:50:13Z"
}
],
"number": 26942,
"title": "NullPointerException on `/_cat/indices` when cluster RED"
} | {
"body": "When a node which contains the primary shard is unavailable, the primary\r\nstats (and the total stats) of an `IndexStats` will be empty for a short\r\nmoment (while the primary shard is being relocated). However, we assume\r\nthat these stats are always non-empty when handling `_cat/indices` in\r\nRestIndicesAction. This commit checks the content of these stats before\r\naccessing.\r\n\r\nCloses #26942",
"number": 26953,
"review_comments": [],
"title": "Fix NPE for /_cat/indices when no primary shard"
} | {
"commits": [
{
"message": "Fix NPE for /_cat/indices when no primary shard\n\nWhen a node which contains the primary shard is unavailable, the primary\nstats (and the total stats) of an `IndexStats` will be empty for a short\nmoment (while the primary shard is being relocated). However, we assume\nthat these stats are always non-empty when handling `_cat/indices` in\nRestIndicesAction. This commit checks the content of these stats before\naccessing.\n\nCloses #26942"
}
],
"files": [
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.stats.CommonStats;\n import org.elasticsearch.action.admin.indices.stats.IndexStats;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n@@ -363,189 +364,193 @@ Table buildTable(RestRequest request, Index[] indices, ClusterHealthResponse res\n }\n }\n \n+ final CommonStats primaryStats = indexStats == null ? new CommonStats() : indexStats.getPrimaries();\n+ final CommonStats totalStats = indexStats == null ? new CommonStats() : indexStats.getTotal();\n+\n table.startRow();\n table.addCell(state == IndexMetaData.State.OPEN ? (indexHealth == null ? \"red*\" : indexHealth.getStatus().toString().toLowerCase(Locale.ROOT)) : null);\n table.addCell(state.toString().toLowerCase(Locale.ROOT));\n table.addCell(indexName);\n table.addCell(index.getUUID());\n table.addCell(indexHealth == null ? null : indexHealth.getNumberOfShards());\n table.addCell(indexHealth == null ? null : indexHealth.getNumberOfReplicas());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getDocs().getDeleted());\n+\n+ table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getCount());\n+ table.addCell(primaryStats.getDocs() == null ? null : primaryStats.getDocs().getDeleted());\n \n table.addCell(indexMetaData.getCreationDate());\n table.addCell(new DateTime(indexMetaData.getCreationDate(), DateTimeZone.UTC));\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getStore().size());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getStore().size());\n+ table.addCell(totalStats.getStore() == null ? null : totalStats.getStore().size());\n+ table.addCell(primaryStats.getStore() == null ? null : primaryStats.getStore().size());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getCompletion().getSize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getCompletion().getSize());\n+ table.addCell(totalStats.getCompletion() == null ? null : totalStats.getCompletion().getSize());\n+ table.addCell(primaryStats.getCompletion() == null ? null : primaryStats.getCompletion().getSize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getMemorySize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getMemorySize());\n+ table.addCell(totalStats.getFieldData() == null ? null : totalStats.getFieldData().getMemorySize());\n+ table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getMemorySize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getFieldData().getEvictions());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFieldData().getEvictions());\n+ table.addCell(totalStats.getFieldData() == null ? null : totalStats.getFieldData().getEvictions());\n+ table.addCell(primaryStats.getFieldData() == null ? null : primaryStats.getFieldData().getEvictions());\n \n- table.addCell(indexStats == null ? 
null : indexStats.getTotal().getQueryCache().getMemorySize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getMemorySize());\n+ table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getMemorySize());\n+ table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getMemorySize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getQueryCache().getEvictions());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getQueryCache().getEvictions());\n+ table.addCell(totalStats.getQueryCache() == null ? null : totalStats.getQueryCache().getEvictions());\n+ table.addCell(primaryStats.getQueryCache() == null ? null : primaryStats.getQueryCache().getEvictions());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMemorySize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMemorySize());\n+ table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMemorySize());\n+ table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getMemorySize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getEvictions());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getEvictions());\n+ table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getEvictions());\n+ table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getEvictions());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getHitCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getHitCount());\n+ table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getHitCount());\n+ table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getHitCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRequestCache().getMissCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRequestCache().getMissCount());\n+ table.addCell(totalStats.getRequestCache() == null ? null : totalStats.getRequestCache().getMissCount());\n+ table.addCell(primaryStats.getRequestCache() == null ? null : primaryStats.getRequestCache().getMissCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getFlush().getTotal());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotal());\n+ table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotal());\n+ table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotal());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getFlush().getTotalTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getFlush().getTotalTime());\n+ table.addCell(totalStats.getFlush() == null ? null : totalStats.getFlush().getTotalTime());\n+ table.addCell(primaryStats.getFlush() == null ? null : primaryStats.getFlush().getTotalTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().current());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().current());\n+ table.addCell(totalStats.getGet() == null ? 
null : totalStats.getGet().current());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().current());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getTime());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getTime());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getCount());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getCount());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsTime());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsTime());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getExistsCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getExistsCount());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getExistsCount());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getExistsCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getMissingTime());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingTime());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getGet().getMissingCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getGet().getMissingCount());\n+ table.addCell(totalStats.getGet() == null ? null : totalStats.getGet().getMissingCount());\n+ table.addCell(primaryStats.getGet() == null ? null : primaryStats.getGet().getMissingCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCurrent());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCurrent());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteTime());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteTime());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getDeleteCount());\n- table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getIndexing().getTotal().getDeleteCount());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getDeleteCount());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getDeleteCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCurrent());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCurrent());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexTime());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexTime());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexCount());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexCount());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getIndexing().getTotal().getIndexFailedCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getIndexing().getTotal().getIndexFailedCount());\n+ table.addCell(totalStats.getIndexing() == null ? null : totalStats.getIndexing().getTotal().getIndexFailedCount());\n+ table.addCell(primaryStats.getIndexing() == null ? null : primaryStats.getIndexing().getTotal().getIndexFailedCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrent());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrent());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentNumDocs());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrentNumDocs());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentNumDocs());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentNumDocs());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getCurrentSize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getCurrentSize());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getCurrentSize());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getCurrentSize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotal());\n- table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getMerge().getTotal());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotal());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotal());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalNumDocs());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalNumDocs());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalNumDocs());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalNumDocs());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalSize());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalSize());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalSize());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalSize());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getMerge().getTotalTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getMerge().getTotalTime());\n+ table.addCell(totalStats.getMerge() == null ? null : totalStats.getMerge().getTotalTime());\n+ table.addCell(primaryStats.getMerge() == null ? null : primaryStats.getMerge().getTotalTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotal());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotal());\n+ table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotal());\n+ table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotal());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getTotalTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getTotalTime());\n+ table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getTotalTime());\n+ table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getTotalTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getRefresh().getListeners());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getRefresh().getListeners());\n+ table.addCell(totalStats.getRefresh() == null ? null : totalStats.getRefresh().getListeners());\n+ table.addCell(primaryStats.getRefresh() == null ? null : primaryStats.getRefresh().getListeners());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCurrent());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCurrent());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchTime());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchTime());\n+ table.addCell(primaryStats.getSearch() == null ? 
null : primaryStats.getSearch().getTotal().getFetchTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getFetchCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getFetchCount());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getFetchCount());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getFetchCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getOpenContexts());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getOpenContexts());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getOpenContexts());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getOpenContexts());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCurrent());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCurrent());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryTime());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryTime());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getQueryCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getQueryCount());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getQueryCount());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getQueryCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCurrent());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollCurrent());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCurrent());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollTime());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollTime());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getScrollTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getScrollCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getScrollCount());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getScrollCount());\n+ table.addCell(primaryStats.getSearch() == null ? 
null : primaryStats.getSearch().getTotal().getScrollCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getCount());\n+ table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getCount());\n+ table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getCount());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getMemory());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getMemory());\n+ table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getMemory());\n+ table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getMemory());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getIndexWriterMemory());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getIndexWriterMemory());\n+ table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getIndexWriterMemory());\n+ table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getIndexWriterMemory());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getVersionMapMemory());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getVersionMapMemory());\n+ table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getVersionMapMemory());\n+ table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getVersionMapMemory());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getBitsetMemory());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getBitsetMemory());\n+ table.addCell(totalStats.getSegments() == null ? null : totalStats.getSegments().getBitsetMemory());\n+ table.addCell(primaryStats.getSegments() == null ? null : primaryStats.getSegments().getBitsetMemory());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().current());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().current());\n+ table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().current());\n+ table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().current());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().total());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().total());\n+ table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().total());\n+ table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().total());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().totalTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().totalTime());\n+ table.addCell(totalStats.getWarmer() == null ? null : totalStats.getWarmer().totalTime());\n+ table.addCell(primaryStats.getWarmer() == null ? null : primaryStats.getWarmer().totalTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestCurrent());\n- table.addCell(indexStats == null ? 
null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCurrent());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCurrent());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCurrent());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestTime());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestTime());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestTime());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestTime());\n \n- table.addCell(indexStats == null ? null : indexStats.getTotal().getSearch().getTotal().getSuggestCount());\n- table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSearch().getTotal().getSuggestCount());\n+ table.addCell(totalStats.getSearch() == null ? null : totalStats.getSearch().getTotal().getSuggestCount());\n+ table.addCell(primaryStats.getSearch() == null ? null : primaryStats.getSearch().getTotal().getSuggestCount());\n \n table.addCell(indexStats == null ? null : indexStats.getTotal().getTotalMemory());\n table.addCell(indexStats == null ? null : indexStats.getPrimaries().getTotalMemory());",
"filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java",
"status": "modified"
},
{
"diff": "@@ -136,11 +136,14 @@ public void testBuildTable() {\n private IndicesStatsResponse randomIndicesStatsResponse(final Index[] indices) {\n List<ShardStats> shardStats = new ArrayList<>();\n for (final Index index : indices) {\n- for (int i = 0; i < 2; i++) {\n+ int numShards = randomInt(5);\n+ int primaryIdx = randomIntBetween(-1, numShards - 1); // -1 means there is no primary shard.\n+ for (int i = 0; i < numShards; i++) {\n ShardId shardId = new ShardId(index, i);\n+ boolean primary = (i == primaryIdx);\n Path path = createTempDir().resolve(\"indices\").resolve(index.getUUID()).resolve(String.valueOf(i));\n- ShardRouting shardRouting = ShardRouting.newUnassigned(shardId, i == 0,\n- i == 0 ? StoreRecoverySource.EMPTY_STORE_INSTANCE : PeerRecoverySource.INSTANCE,\n+ ShardRouting shardRouting = ShardRouting.newUnassigned(shardId, primary,\n+ primary ? StoreRecoverySource.EMPTY_STORE_INSTANCE : PeerRecoverySource.INSTANCE,\n new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null)\n );\n shardRouting = shardRouting.initialize(\"node-0\", null, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE);",
"filename": "core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java",
"status": "modified"
}
]
} |
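The shape of the `/_cat/indices` fix in the row above is worth calling out: substitute an empty stats object when the index currently has no started primary shard, then null-check each sub-statistic before dereferencing it, so the table renders empty cells instead of throwing. The following is a condensed, self-contained Java sketch of that pattern; the nested types are illustrative stand-ins, not the actual Elasticsearch classes:

```java
import java.util.ArrayList;
import java.util.List;

public class NullSafeRow {
    // Illustrative stand-ins for IndexStats / CommonStats / DocsStats.
    static class Docs { long count() { return 42; } }
    static class Common { Docs docs; }             // docs is null when the stat is unavailable
    static class IndexStats { Common primaries; }  // the whole object is null when no shard reported

    static List<Object> buildRow(IndexStats stats) {
        // Fall back to an empty Common when there is no started primary,
        // then guard every nested accessor individually.
        Common primary = stats == null ? new Common() : stats.primaries;
        List<Object> row = new ArrayList<>();
        row.add(primary.docs == null ? null : primary.docs.count());
        return row;
    }

    public static void main(String[] args) {
        System.out.println(buildRow(null)); // [null] instead of a NullPointerException

        IndexStats stats = new IndexStats();
        stats.primaries = new Common();
        stats.primaries.docs = new Docs();
        System.out.println(buildRow(stats)); // [42]
    }
}
```

Rendering `null` cells instead of throwing keeps the cat API usable while the cluster is red, which is exactly when operators are most likely to be running it.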
{
"body": "the client header is not set properly because of missing the i++.\r\n",
"comments": [
{
"body": "Can one of the admins verify this patch?",
"created_at": "2017-02-04T04:35:36Z"
},
{
"body": "@jasontedor \r\nAlready added the unit test for this fix.\r\nThere is no check that the numbers of the headers to set and the numbers of the headers in rest client are equal, because there is no public method to get the numbers of the headers in rest client .\r\nBut The Check in RestClient.builder() will result in NullPointerException when the headers are not set properly.\r\n",
"created_at": "2017-02-07T07:30:32Z"
},
{
"body": "@jasontedor Are you happy with the test which was added?",
"created_at": "2017-06-09T07:21:39Z"
},
{
"body": "Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?\n",
"created_at": "2017-06-09T07:21:40Z"
},
{
"body": "@slixurd is this something you're still interested in pursuing? Can you address the feedback?",
"created_at": "2017-08-15T20:54:14Z"
},
{
"body": "@slixurd As this PR seems to be stalled, I am going to close it. I have opened #26937 with the suggested tests, so that we can get this fix into 6.0. I hope you understand.",
"created_at": "2017-10-10T05:10:35Z"
}
],
"number": 22976,
"title": "fix the index of header when build rest client"
} | {
"body": "The headers passed to reindex were skipped except for the last one. This\r\ncommit fixes the copying of the headers, as well as adds a base test\r\ncase for rest client builders to access the headers within the built\r\nrest client.\r\n\r\nrelates #22976\r\n",
"number": 26937,
"review_comments": [],
"title": "Reindex: Fix headers in reindex action"
} | {
"commits": [
{
"message": "Reindex: Fix headers in reindex action\n\nThe headers passed to reindex were skipped except for the last one. This\ncommit fixes the copying of the headers, as well as adds a base test\ncase for rest client builders to access the headers within the built\nrest client.\n\nrelates #22976"
},
{
"message": "Fix test case to be abstract"
}
],
"files": [
{
"diff": "@@ -50,6 +50,7 @@\n import java.net.URI;\n import java.net.URISyntaxException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.Comparator;\n@@ -91,8 +92,9 @@ public class RestClient implements Closeable {\n private static final Log logger = LogFactory.getLog(RestClient.class);\n \n private final CloseableHttpAsyncClient client;\n- //we don't rely on default headers supported by HttpAsyncClient as those cannot be replaced\n- private final Header[] defaultHeaders;\n+ // We don't rely on default headers supported by HttpAsyncClient as those cannot be replaced.\n+ // These are package private for tests.\n+ final List<Header> defaultHeaders;\n private final long maxRetryTimeoutMillis;\n private final String pathPrefix;\n private final AtomicInteger lastHostIndex = new AtomicInteger(0);\n@@ -104,7 +106,7 @@ public class RestClient implements Closeable {\n HttpHost[] hosts, String pathPrefix, FailureListener failureListener) {\n this.client = client;\n this.maxRetryTimeoutMillis = maxRetryTimeoutMillis;\n- this.defaultHeaders = defaultHeaders;\n+ this.defaultHeaders = Collections.unmodifiableList(Arrays.asList(defaultHeaders));\n this.failureListener = failureListener;\n this.pathPrefix = pathPrefix;\n setHosts(hosts);",
"filename": "client/rest/src/main/java/org/elasticsearch/client/RestClient.java",
"status": "modified"
},
{
"diff": "@@ -201,7 +201,7 @@ static RestClient buildRestClient(RemoteInfo remoteInfo, long taskId, List<Threa\n Header[] clientHeaders = new Header[remoteInfo.getHeaders().size()];\n int i = 0;\n for (Map.Entry<String, String> header : remoteInfo.getHeaders().entrySet()) {\n- clientHeaders[i] = new BasicHeader(header.getKey(), header.getValue());\n+ clientHeaders[i++] = new BasicHeader(header.getKey(), header.getValue());\n }\n return RestClient.builder(new HttpHost(remoteInfo.getHost(), remoteInfo.getPort(), remoteInfo.getScheme()))\n .setDefaultHeaders(clientHeaders)",
"filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/TransportReindexAction.java",
"status": "modified"
},
{
"diff": "@@ -20,17 +20,21 @@\n package org.elasticsearch.index.reindex;\n \n import org.elasticsearch.client.RestClient;\n+import org.elasticsearch.client.RestClientBuilderTestCase;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashMap;\n import java.util.List;\n+import java.util.Map;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.synchronizedList;\n import static org.hamcrest.Matchers.hasSize;\n \n-public class ReindexFromRemoteBuildRestClientTests extends ESTestCase {\n+public class ReindexFromRemoteBuildRestClientTests extends RestClientBuilderTestCase {\n public void testBuildRestClient() throws Exception {\n RemoteInfo remoteInfo = new RemoteInfo(\"https\", \"localhost\", 9200, new BytesArray(\"ignored\"), null, null, emptyMap(),\n RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT);\n@@ -48,4 +52,22 @@ public void testBuildRestClient() throws Exception {\n client.close();\n }\n }\n+\n+ public void testHeaders() throws Exception {\n+ Map<String, String> headers = new HashMap<>();\n+ int numHeaders = randomIntBetween(1, 5);\n+ for (int i = 0; i < numHeaders; ++i) {\n+ headers.put(\"header\" + i, Integer.toString(i));\n+ }\n+ RemoteInfo remoteInfo = new RemoteInfo(\"https\", \"localhost\", 9200, new BytesArray(\"ignored\"), null, null,\n+ headers, RemoteInfo.DEFAULT_SOCKET_TIMEOUT, RemoteInfo.DEFAULT_CONNECT_TIMEOUT);\n+ long taskId = randomLong();\n+ List<Thread> threads = synchronizedList(new ArrayList<>());\n+ RestClient client = TransportReindexAction.buildRestClient(remoteInfo, taskId, threads);\n+ try {\n+ assertHeaders(client, headers);\n+ } finally {\n+ client.close();\n+ }\n+ }\n }",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexFromRemoteBuildRestClientTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.client;\n+\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+import joptsimple.internal.Strings;\n+import org.apache.http.Header;\n+import org.elasticsearch.test.ESTestCase;\n+\n+/**\n+ * A test case with access to internals of a RestClient.\n+ */\n+public abstract class RestClientBuilderTestCase extends ESTestCase {\n+ /** Checks the given rest client has the provided default headers. */\n+ public void assertHeaders(RestClient client, Map<String, String> expectedHeaders) {\n+ expectedHeaders = new HashMap<>(expectedHeaders); // copy so we can remove as we check\n+ for (Header header : client.defaultHeaders) {\n+ String name = header.getName();\n+ String expectedValue = expectedHeaders.remove(name);\n+ if (expectedValue == null) {\n+ fail(\"Found unexpected header in rest client: \" + name);\n+ }\n+ assertEquals(expectedValue, header.getValue());\n+ }\n+ if (expectedHeaders.isEmpty() == false) {\n+ fail(\"Missing expected headers in rest client: \" + Strings.join(expectedHeaders.keySet(), \", \"));\n+ }\n+ }\n+}",
"filename": "test/framework/src/main/java/org/elasticsearch/client/RestClientBuilderTestCase.java",
"status": "added"
}
]
} |
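The underlying bug in the row above is a plain array-fill mistake: the loop copies map entries into a fixed-size array but never advances the index, so every entry lands in slot 0 and the remaining slots stay `null`. Here is a minimal stand-alone Java sketch of the broken and fixed loops; plain strings stand in for the HTTP header objects, which are not needed to show the off-by-one:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderCopy {
    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Authorization", "Basic ...");
        headers.put("X-Found-Cluster", "my-cluster");

        // Broken variant: 'i' is never incremented, so only index 0 is ever
        // written and the second slot remains null.
        String[] broken = new String[headers.size()];
        int i = 0;
        for (Map.Entry<String, String> h : headers.entrySet()) {
            broken[i] = h.getKey() + ": " + h.getValue();
        }

        // Fixed variant: post-increment advances the slot on every iteration.
        String[] fixed = new String[headers.size()];
        int j = 0;
        for (Map.Entry<String, String> h : headers.entrySet()) {
            fixed[j++] = h.getKey() + ": " + h.getValue();
        }

        System.out.println("broken[1] = " + broken[1]); // null
        System.out.println("fixed[1]  = " + fixed[1]);  // X-Found-Cluster: my-cluster
    }
}
```

The NullPointerException mentioned in the original issue comes from those untouched `null` slots being handed on to the rest client builder.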
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.5\r\n\r\n**Plugins installed**: [] EMC SourceOne 7.2.5 \r\n\r\n**JVM version** (`java -version`): jre1.8.0_45\r\n\r\n\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Windows Server 2012R\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nUnable to install Elastic 5.5 using elasticsearch-service.bat\r\n**Steps to reproduce**:\r\n\r\n\r\n1. Open CMD in Admin mode\r\n2. cd to \"C:\\Program Files (x86)\\EMC SourceOne\\EXPBA\\bin\\Elastic\\elasticsearch\\bin\\\"\r\n3. run service.bat install or elasticsearch-service.bat install (depending on ES version)\r\n\r\nIssue is that with 5.5 and elasticsearch-service.bat that generates an error due to spaces in the path.\r\n\r\n**Provide logs (if relevant)**: none available\r\n\r\nWORKAROUND:\r\n\r\nin CMD line, change to cd `C:\\PROGRA~2\\EMCSOU~1\\EXPBA\\bin\\Elastic\\elasticsearch\\bin\\`\r\nUsing 8.3 format folder names solves the problem.\r\n\r\n\r\n",
"comments": [],
"number": 26454,
"title": "elasticsearch-service.bat that generates an error with install"
} | {
"body": "If the ES_HOME contains parentheses, the service cannot be installed.\r\n\r\nFixes #26454\r\n",
"number": 26916,
"review_comments": [],
"title": "Fix handling of Windows paths containing parentheses"
} | {
"commits": [
{
"message": "Fix handling of paths containing parentheses"
},
{
"message": "Merge branch 'master' into pr/26916\n\n* master: (22 commits)\n Allow only a fixed-size receive predictor (#26165)\n Add Homebrew instructions to getting started\n ingest: Fix bug that prevent date_index_name processor from accepting timestamps specified as a json number\n Scripting: Fix expressions to temporarily support filter scripts (#26824)\n Docs: Add note to contributing docs warning against tool based refactoring (#26936)\n Fix thread context handling of headers overriding (#26068)\n SearchWhileCreatingIndexIT: remove usage of _only_nodes\n update Lucene version for 6.0-RC2 version\n Calculate and cache result when advanceExact is called (#26920)\n Test query builder bwc against previous supported versions instead of just the current version.\n Set minimum_master_nodes on rolling-upgrade test (#26911)\n Return List instead of an array from settings (#26903)\n remove _primary and _replica shard preferences (#26791)\n fixing typo in datehistogram-aggregation.asciidoc (#26924)\n [API] Added the `terminate_after` parameter to the REST spec for \"Count\" API\n Setup debug logging for qa.full-cluster-restart\n Enable BWC testing against other remotes\n Use LF line endings in Painless generated files (#26822)\n [DOCS] Added info about snapshotting your data before an upgrade.\n Add documentation about disabling `_field_names`. (#26813)\n ..."
}
],
"files": [
{
"diff": "@@ -163,15 +163,15 @@ for %%a in (\"%ES_JAVA_OPTS:;=\",\"%\") do (\n @endlocal & set JVM_MS=%JVM_MS% & set JVM_MX=%JVM_MX% & set JVM_SS=%JVM_SS%\n \n if \"%JVM_MS%\" == \"\" (\n- echo minimum heap size not set; configure using -Xms via %ES_JVM_OPTIONS% or ES_JAVA_OPTS\n+ echo minimum heap size not set; configure using -Xms via \"%ES_JVM_OPTIONS%\" or ES_JAVA_OPTS\n goto:eof\n )\n if \"%JVM_MX%\" == \"\" (\n- echo maximum heap size not set; configure using -Xmx via %ES_JVM_OPTIONS% or ES_JAVA_OPTS\n+ echo maximum heap size not set; configure using -Xmx via \"%ES_JVM_OPTIONS%\" or ES_JAVA_OPTS\n goto:eof\n )\n if \"%JVM_SS%\" == \"\" (\n- echo thread stack size not set; configure using -Xss via %ES_JVM_OPTIONS% or ES_JAVA_OPTS\n+ echo thread stack size not set; configure using -Xss via \"%ES_JVM_OPTIONS%\" or ES_JAVA_OPTS\n goto:eof\n )\n ",
"filename": "distribution/src/main/resources/bin/elasticsearch-service.bat",
"status": "modified"
}
]
} |
{
"body": "Elasticsearch+Filebeat 6.0.0-rc1.\r\n\r\n[DateIndexNameProcessor](https://github.com/elastic/elasticsearch/blob/v6.0.0-rc1/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java) expects the date to be a String. At line 64: `String date = ingestDocument.getFieldValue(field, String.class);`.\r\n\r\nThe log4j2 json logger outputs timestamp as unix_ms epoch (e.g. `{\"timeMillis\":1507099254201,…}`).\r\nI want the log to be indexed in a daily index, on the day it was generated (vs. ingested), so I tell filebeat to use an elasticsearch `date_index_name` pipeline. It fails with the following exception:\r\n\r\n> Caused by: java.lang.IllegalArgumentException: field [json.timeMillis] of type [java.lang.Long] cannot be cast to [java.lang.String]\r\n at o.e.ingest.IngestDocument.cast(IngestDocument.java:542)\r\n at o.e.ingest.IngestDocument.getFieldValue(IngestDocument.java:107) \r\n at o.e.ingest.common.DateIndexNameProcessor.execute(DateIndexNameProcessor.java:64)\r\n at o.e.ingest.CompoundProcessor.execute(CompoundProcessor.java:100)\r\n\r\nReproducer:\r\n\r\n1. Start elasticsearch (unzip and start) and create the pipeline:\r\n\r\n```json\r\nPUT /_ingest/pipeline/bugTimestampPipeline\r\n {\r\n \"description\": \"bugTimestampPipeline\",\r\n \"processors\" : [\r\n {\r\n \"date_index_name\" : {\r\n \"field\" : \"json.timeMillis\",\r\n \"date_formats\" : [ \"UNIX_MS\" ],\r\n \"index_name_prefix\" : \"myDailyIndex-\",\r\n \"date_rounding\" : \"d\",\r\n \"index_name_format\" : \"yyyy.MM.dd\"\r\n }\r\n }\r\n ]\r\n }\r\n```\r\n\r\n2. Create the filebeat configuration, and run `filebeat --path.config confBugTimestamp -c filebeat-bugTimestamp.yml`:\r\n\r\n• `confBugTimestamp/fields.yml`: a copy of `<filebeatDir>/fields.yml`\r\n\r\n• `confBugTimestamp/filebeat-bugTimestamp.yml`\r\n\r\n```yaml\r\nsetup.kibana:\r\n host: \"localhost:5601\"\r\n\r\noutput.elasticsearch:\r\n hosts: [\"localhost:9200\"]\r\n pipeline: bugTimestampPipeline\r\n\r\nfilebeat.prospectors:\r\n- type: log\r\n enabled: true\r\n paths:\r\n - logsBugTimestamp/*.json.log\r\n json.keys_under_root: false\r\n json.add_error_key: true\r\n json.message_key: message\r\n close_inactive: 24h\r\n close_renamed: true # because Windows (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#close-renamed)\r\n close_removed: true # because Windows (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#close-removed)\r\n```\r\n\r\n3. copy `bugTimestamp.json.log` into `logsBugTimestamp/`\r\n\r\n• `bugTimestamp.json.log`\r\n\r\n```json\r\n{\"timeMillis\":1507099254201,\"level\":\"INFO\",\"message\":\"foobar\"}\r\n```\r\n",
"comments": [
{
"body": "@iksnalybok Thanks for sharing this bug! I'll fix this soon.",
"created_at": "2017-10-06T12:11:38Z"
}
],
"number": 26890,
"title": "DateIndexNameProcessor does not support unix epoch format"
} | {
"body": "PR for #26890",
"number": 26910,
"review_comments": [],
"title": "date_index_name processor should not fail if timestamp is specified as json number"
} | {
"commits": [
{
"message": "ingest: Fix bug that prevent date_index_name processor from accepting timestamps specified as a json number\n\nCloses #26890"
}
],
"files": [
{
"diff": "@@ -25,6 +25,7 @@\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Objects;\n import java.util.function.Function;\n \n import org.elasticsearch.ExceptionsHelper;\n@@ -61,7 +62,8 @@ public final class DateIndexNameProcessor extends AbstractProcessor {\n \n @Override\n public void execute(IngestDocument ingestDocument) throws Exception {\n- String date = ingestDocument.getFieldValue(field, String.class);\n+ // Date can be specified as a string or long:\n+ String date = Objects.toString(ingestDocument.getFieldValue(field, Object.class));\n \n DateTime dateTime = null;\n Exception lastException = null;",
"filename": "modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/DateIndexNameProcessor.java",
"status": "modified"
},
{
"diff": "@@ -62,6 +62,11 @@ public void testUnixMs()throws Exception {\n Collections.singletonMap(\"_field\", \"1000500\"));\n dateProcessor.execute(document);\n assertThat(document.getSourceAndMetadata().get(\"_index\"), equalTo(\"<events-{19700101||/m{yyyyMMdd|UTC}}>\"));\n+\n+ document = new IngestDocument(\"_index\", \"_type\", \"_id\", null, null,\n+ Collections.singletonMap(\"_field\", 1000500L));\n+ dateProcessor.execute(document);\n+ assertThat(document.getSourceAndMetadata().get(\"_index\"), equalTo(\"<events-{19700101||/m{yyyyMMdd|UTC}}>\"));\n }\n \n public void testUnix()throws Exception {",
"filename": "modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/DateIndexNameProcessorTests.java",
"status": "modified"
}
]
} |
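The essence of the fix in the diff above is to read the field as a plain Object and convert it with `Objects.toString`, so the processor accepts both a string and a JSON number. A self-contained sketch of that conversion follows; the map and field name are illustrative only, not the real `IngestDocument` API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Minimal sketch (not the actual processor class): reading the timestamp field as a
// generic Object and stringifying it accepts both String and JSON-number timestamps.
public class TimestampFieldSketch {
    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("timeMillis", 1507099254201L); // log4j2 JSON layout emits a number here

        // Requesting the value as String.class would fail for a Long;
        // Objects.toString() handles both representations uniformly.
        Object raw = source.get("timeMillis");
        String date = Objects.toString(raw, null);

        System.out.println(date); // "1507099254201", ready to be parsed as UNIX_MS
    }
}
```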
{
"body": "The FS-based repository does not honour the `readonly` setting when restoring from a broken snapshot (in fact, restoring a snapshot should not require any writes at all).\r\n\r\nA write can currently happen, however, if the repository has missing index folders.\r\n\r\nReproduction scenario:\r\n\r\n1) Create snapshot of an index\r\n2) Remove the folder in the repository under `indices`\r\n3) Try to restore the snapshot. This will try to recreate said folder.\r\n\r\nStack trace when using a read-only filesystem:\r\n\r\n```\r\n[2016-11-11T20:32:22,830][WARN ][o.e.s.RestoreService ] [node_sm0] [test-repo:test-snap] failed to restore snapshot\r\norg.elasticsearch.ElasticsearchException: failed to create blob container\r\n\tat org.elasticsearch.common.blobstore.fs.FsBlobStore.blobContainer(FsBlobStore.java:67) ~[main/:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.readSnapshotMetaData(BlobStoreRepository.java:599) ~[main/:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.getSnapshotMetaData(BlobStoreRepository.java:554) ~[main/:?]\r\n\tat org.elasticsearch.snapshots.RestoreService.restoreSnapshot(RestoreService.java:187) ~[main/:?]\r\n\tat org.elasticsearch.action.admin.cluster.snapshots.restore.TransportRestoreSnapshotAction.masterOperation(TransportRestoreSnapshotAction.java:89) ~[main/:?]\r\n\tat org.elasticsearch.action.admin.cluster.snapshots.restore.TransportRestoreSnapshotAction.masterOperation(TransportRestoreSnapshotAction.java:49) ~[main/:?]\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) ~[main/:?]\r\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:171) ~[main/:?]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[main/:?]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[main/:?]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_60]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_60]\r\n\tat java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_60]\r\nCaused by: java.nio.file.AccessDeniedException: /private/var/folders/68/3gzf12zs4qb0q_gfjw5lx1fm0000gn/T/org.elasticsearch.snapshots.SharedClusterSnapshotRestoreIT_73B55FFF46128756-001/tempDir-002/repos/prkihNFaMA/indices/Odv1GtZFSlaFEOwTZWQcdQ\r\n\tat sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]\r\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]\r\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]\r\n\tat sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:?]\r\n\tat org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:132) ~[lucene-test-framework-6.3.0-snapshot-a66a445.jar:6.3.0-snapshot-a66a445 a66a44513ee8191e25b477372094bfa846450316 - jpountz - 2016-11-03 16:32:47]\r\n\tat org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:132) ~[lucene-test-framework-6.3.0-snapshot-a66a445.jar:6.3.0-snapshot-a66a445 a66a44513ee8191e25b477372094bfa846450316 - jpountz - 2016-11-03 16:32:47]\r\n\tat org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:132) 
~[lucene-test-framework-6.3.0-snapshot-a66a445.jar:6.3.0-snapshot-a66a445 a66a44513ee8191e25b477372094bfa846450316 - jpountz - 2016-11-03 16:32:47]\r\n\tat org.apache.lucene.mockfile.FilterFileSystemProvider.createDirectory(FilterFileSystemProvider.java:132) ~[lucene-test-framework-6.3.0-snapshot-a66a445.jar:6.3.0-snapshot-a66a445 a66a44513ee8191e25b477372094bfa846450316 - jpountz - 2016-11-03 16:32:47]\r\n\tat java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_60]\r\n\tat java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_60]\r\n\tat java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_60]\r\n\tat org.elasticsearch.common.blobstore.fs.FsBlobStore.buildAndCreate(FsBlobStore.java:83) ~[main/:?]\r\n\tat org.elasticsearch.common.blobstore.fs.FsBlobStore.blobContainer(FsBlobStore.java:65) ~[main/:?]\r\n\t... 12 more\r\n```\r\n\r\nLine at fault:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/44ac5d057a8ceb6940c26275d9963bccb9f5065a/core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java#L65\r\n\r\nJust getting a BlobContainer for reading does an `mkdirs` on the root directory for that container.\r\n",
"comments": [
{
"body": "@ywelsch Can I work on this?",
"created_at": "2017-10-02T16:28:48Z"
},
{
"body": "@liketic sure",
"created_at": "2017-10-03T09:28:53Z"
}
],
"number": 21495,
"title": "FS-based repository does not honour readonly setting when restoring from a broken snapshot "
} | {
"body": "Closes #21495 \r\n\r\nFor `FsBlobStore` and `HdfsBlobStore`, if repository is read only, we can aware the read only setting and do not create directories if readonly is true.",
"number": 26909,
"review_comments": [],
"title": "Do not create directory on readonly repository (#21495)"
} | {
"commits": [
{
"message": "Do not create directories if repository is readonly (#21495)\n\nFor FsBlobStore and HdfsBlobStore, if the repository is read only, the blob store should aware the read only setting and do not create directories even it's not exists.\n\nDo not mkdirs if HDFS repository is read only"
}
],
"files": [
{
"diff": "@@ -39,10 +39,15 @@ public class FsBlobStore extends AbstractComponent implements BlobStore {\n \n private final int bufferSizeInBytes;\n \n+ private final boolean readOnly;\n+\n public FsBlobStore(Settings settings, Path path) throws IOException {\n super(settings);\n this.path = path;\n- Files.createDirectories(path);\n+ this.readOnly = settings.getAsBoolean(\"readonly\", false);\n+ if (!this.readOnly) {\n+ Files.createDirectories(path);\n+ }\n this.bufferSizeInBytes = (int) settings.getAsBytesSize(\"repositories.fs.buffer_size\", new ByteSizeValue(100, ByteSizeUnit.KB)).getBytes();\n }\n \n@@ -80,7 +85,9 @@ public void close() {\n \n private synchronized Path buildAndCreate(BlobPath path) throws IOException {\n Path f = buildPath(path);\n- Files.createDirectories(f);\n+ if (!readOnly) {\n+ Files.createDirectories(f);\n+ }\n return f;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -20,12 +20,14 @@\n \n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.common.blobstore.fs.FsBlobStore;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.repositories.ESBlobStoreTestCase;\n \n import java.io.IOException;\n+import java.nio.file.Files;\n import java.nio.file.Path;\n \n @LuceneTestCase.SuppressFileSystems(\"ExtrasFS\")\n@@ -35,4 +37,39 @@ protected BlobStore newBlobStore() throws IOException {\n Settings settings = randomBoolean() ? Settings.EMPTY : Settings.builder().put(\"buffer_size\", new ByteSizeValue(randomIntBetween(1, 100), ByteSizeUnit.KB)).build();\n return new FsBlobStore(settings, tempDir);\n }\n+\n+ public void testReadOnly() throws Exception {\n+ Settings settings = Settings.builder().put(\"readonly\", true).build();\n+ Path tempDir = createTempDir();\n+ Path path = tempDir.resolve(\"bar\");\n+\n+ try (FsBlobStore store = new FsBlobStore(settings, path)) {\n+ assertFalse(Files.exists(path));\n+ BlobPath blobPath = BlobPath.cleanPath().add(\"foo\");\n+ store.blobContainer(blobPath);\n+ Path storePath = store.path();\n+ for (String d : blobPath) {\n+ storePath = storePath.resolve(d);\n+ }\n+ assertFalse(Files.exists(storePath));\n+ }\n+\n+ settings = randomBoolean() ? Settings.EMPTY : Settings.builder().put(\"readonly\", false).build();\n+ try (FsBlobStore store = new FsBlobStore(settings, path)) {\n+ assertTrue(Files.exists(path));\n+ BlobPath blobPath = BlobPath.cleanPath().add(\"foo\");\n+ BlobContainer container = store.blobContainer(blobPath);\n+ Path storePath = store.path();\n+ for (String d : blobPath) {\n+ storePath = storePath.resolve(d);\n+ }\n+ assertTrue(Files.exists(storePath));\n+ assertTrue(Files.isDirectory(storePath));\n+\n+ byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16)));\n+ writeBlob(container, \"test\", new BytesArray(data));\n+ assertArrayEquals(readBlobFully(container, \"test\", data.length), data);\n+ assertTrue(container.blobExists(\"test\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/blobstore/FsBlobStoreTests.java",
"status": "modified"
},
{
"diff": "@@ -39,17 +39,21 @@ final class HdfsBlobStore implements BlobStore {\n private final FileContext fileContext;\n private final HdfsSecurityContext securityContext;\n private final int bufferSize;\n+ private final boolean readOnly;\n private volatile boolean closed;\n \n- HdfsBlobStore(FileContext fileContext, String path, int bufferSize) throws IOException {\n+ HdfsBlobStore(FileContext fileContext, String path, int bufferSize, boolean readOnly) throws IOException {\n this.fileContext = fileContext;\n this.securityContext = new HdfsSecurityContext(fileContext.getUgi());\n this.bufferSize = bufferSize;\n this.root = execute(fileContext1 -> fileContext1.makeQualified(new Path(path)));\n- try {\n- mkdirs(root);\n- } catch (FileAlreadyExistsException ok) {\n- // behaves like Files.createDirectories\n+ this.readOnly = readOnly;\n+ if (!readOnly) {\n+ try {\n+ mkdirs(root);\n+ } catch (FileAlreadyExistsException ok) {\n+ // behaves like Files.createDirectories\n+ }\n }\n }\n \n@@ -80,12 +84,14 @@ public BlobContainer blobContainer(BlobPath path) {\n \n private Path buildHdfsPath(BlobPath blobPath) {\n final Path path = translateToHdfsPath(blobPath);\n- try {\n- mkdirs(path);\n- } catch (FileAlreadyExistsException ok) {\n- // behaves like Files.createDirectories\n- } catch (IOException ex) {\n- throw new ElasticsearchException(\"failed to create blob container\", ex);\n+ if (!readOnly) {\n+ try {\n+ mkdirs(path);\n+ } catch (FileAlreadyExistsException ok) {\n+ // behaves like Files.createDirectories\n+ } catch (IOException ex) {\n+ throw new ElasticsearchException(\"failed to create blob container\", ex);\n+ }\n }\n return path;\n }",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -106,7 +106,7 @@ protected void doStart() {\n SpecialPermission.check();\n FileContext fileContext = AccessController.doPrivileged((PrivilegedAction<FileContext>)\n () -> createContext(uri, getMetadata().settings()));\n- blobStore = new HdfsBlobStore(fileContext, pathSetting, bufferSize);\n+ blobStore = new HdfsBlobStore(fileContext, pathSetting, bufferSize, isReadOnly());\n logger.debug(\"Using file-system [{}] for URI [{}], path [{}]\", fileContext.getDefaultFileSystem(), fileContext.getDefaultFileSystem().getUri(), pathSetting);\n } catch (IOException e) {\n throw new UncheckedIOException(String.format(Locale.ROOT, \"Cannot create HDFS repository for uri [%s]\", uri), e);",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,20 @@\n \n package org.elasticsearch.repositories.hdfs;\n \n+import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;\n+import org.apache.hadoop.conf.Configuration;\n+import org.apache.hadoop.fs.AbstractFileSystem;\n+import org.apache.hadoop.fs.FileContext;\n+import org.apache.hadoop.fs.Path;\n+import org.apache.hadoop.fs.UnsupportedFileSystemException;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.blobstore.BlobContainer;\n+import org.elasticsearch.common.blobstore.BlobPath;\n+import org.elasticsearch.common.blobstore.BlobStore;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.repositories.ESBlobStoreContainerTestCase;\n+\n+import javax.security.auth.Subject;\n import java.io.IOException;\n import java.lang.reflect.Constructor;\n import java.lang.reflect.InvocationTargetException;\n@@ -29,30 +43,28 @@\n import java.security.PrivilegedActionException;\n import java.security.PrivilegedExceptionAction;\n import java.util.Collections;\n-import javax.security.auth.Subject;\n \n-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;\n-import org.apache.hadoop.conf.Configuration;\n-import org.apache.hadoop.fs.AbstractFileSystem;\n-import org.apache.hadoop.fs.FileContext;\n-import org.apache.hadoop.fs.UnsupportedFileSystemException;\n-import org.elasticsearch.common.SuppressForbidden;\n-import org.elasticsearch.common.blobstore.BlobStore;\n-import org.elasticsearch.repositories.ESBlobStoreContainerTestCase;\n+import static org.elasticsearch.repositories.ESBlobStoreTestCase.randomBytes;\n+import static org.elasticsearch.repositories.ESBlobStoreTestCase.readBlobFully;\n+\n \n @ThreadLeakFilters(filters = {HdfsClientThreadLeakFilter.class})\n public class HdfsBlobStoreContainerTests extends ESBlobStoreContainerTestCase {\n \n @Override\n protected BlobStore newBlobStore() throws IOException {\n+ return new HdfsBlobStore(createTestContext(), \"temp\", 1024, false);\n+ }\n+\n+ private FileContext createTestContext() {\n FileContext fileContext;\n try {\n fileContext = AccessController.doPrivileged((PrivilegedExceptionAction<FileContext>)\n () -> createContext(new URI(\"hdfs:///\")));\n } catch (PrivilegedActionException e) {\n throw new RuntimeException(e.getCause());\n }\n- return new HdfsBlobStore(fileContext, \"temp\", 1024);\n+ return fileContext;\n }\n \n @SuppressForbidden(reason = \"lesser of two evils (the other being a bunch of JNI/classloader nightmares)\")\n@@ -69,7 +81,7 @@ private FileContext createContext(URI uri) {\n Class<?> clazz = Class.forName(\"org.apache.hadoop.security.User\");\n ctor = clazz.getConstructor(String.class);\n ctor.setAccessible(true);\n- } catch (ClassNotFoundException | NoSuchMethodException e) {\n+ } catch (ClassNotFoundException | NoSuchMethodException e) {\n throw new RuntimeException(e);\n }\n \n@@ -98,4 +110,33 @@ private FileContext createContext(URI uri) {\n }\n });\n }\n+\n+ public void testReadOnly() throws Exception {\n+ FileContext fileContext = createTestContext();\n+ // Constructor will not create dir if read only\n+ HdfsBlobStore hdfsBlobStore = new HdfsBlobStore(fileContext, \"dir\", 1024, true);\n+ FileContext.Util util = fileContext.util();\n+ Path root = fileContext.makeQualified(new Path(\"dir\"));\n+ assertFalse(util.exists(root));\n+ BlobPath blobPath = BlobPath.cleanPath().add(\"path\");\n+\n+ // blobContainer() will not create path if read only\n+ hdfsBlobStore.blobContainer(blobPath);\n+ Path hdfsPath = root;\n+ for 
(String p : blobPath) {\n+ hdfsPath = new Path(hdfsPath, p);\n+ }\n+ assertFalse(util.exists(hdfsPath));\n+\n+ // if not read only, directory will be created\n+ hdfsBlobStore = new HdfsBlobStore(fileContext, \"dir\", 1024, false);\n+ assertTrue(util.exists(root));\n+ BlobContainer container = hdfsBlobStore.blobContainer(blobPath);\n+ assertTrue(util.exists(hdfsPath));\n+\n+ byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16)));\n+ writeBlob(container, \"foo\", new BytesArray(data));\n+ assertArrayEquals(readBlobFully(container, \"foo\", data.length), data);\n+ assertTrue(container.blobExists(\"foo\"));\n+ }\n }",
"filename": "plugins/repository-hdfs/src/test/java/org/elasticsearch/repositories/hdfs/HdfsBlobStoreContainerTests.java",
"status": "modified"
},
{
"diff": "@@ -142,7 +142,7 @@ public void testVerifyOverwriteFails() throws IOException {\n }\n }\n \n- private void writeBlob(final BlobContainer container, final String blobName, final BytesArray bytesArray) throws IOException {\n+ protected void writeBlob(final BlobContainer container, final String blobName, final BytesArray bytesArray) throws IOException {\n try (InputStream stream = bytesArray.streamInput()) {\n container.writeBlob(blobName, stream, bytesArray.length());\n }",
"filename": "test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java",
"status": "modified"
},
{
"diff": "@@ -78,7 +78,7 @@ public static byte[] randomBytes(int length) {\n return data;\n }\n \n- private static void writeBlob(BlobContainer container, String blobName, BytesArray bytesArray) throws IOException {\n+ protected static void writeBlob(BlobContainer container, String blobName, BytesArray bytesArray) throws IOException {\n try (InputStream stream = bytesArray.streamInput()) {\n container.writeBlob(blobName, stream, bytesArray.length());\n }",
"filename": "test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreTestCase.java",
"status": "modified"
}
]
} |
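The pattern applied to both blob stores in the diffs above is simply to gate every directory-creating call on the repository's read-only flag. A minimal sketch of that idea, with illustrative class and method names rather than the actual Elasticsearch types:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch (not the real FsBlobStore API): directory creation is gated on a
// read-only flag so that restore-only access never writes to the repository path.
public class ReadOnlyStoreSketch {
    private final Path root;
    private final boolean readOnly;

    ReadOnlyStoreSketch(Path root, boolean readOnly) throws IOException {
        this.root = root;
        this.readOnly = readOnly;
        if (readOnly == false) {
            Files.createDirectories(root); // only writable stores create their root
        }
    }

    Path container(String name) throws IOException {
        Path dir = root.resolve(name);
        if (readOnly == false) {
            Files.createDirectories(dir); // skipped for read-only stores; the path is only resolved
        }
        return dir;
    }

    public static void main(String[] args) throws IOException {
        ReadOnlyStoreSketch store = new ReadOnlyStoreSketch(Paths.get("/tmp/repo"), true);
        System.out.println(store.container("indices")); // path is resolved but never created
    }
}
```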
{
"body": "**Elasticsearch version**: Version: 5.6.2, Build: 57e20f3/2017-09-23T13:16:45.703Z, JVM: 1.8.0_144\r\n\r\n**Plugins installed**: []\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen doing a cross-cluster search on a remote alias, the remote master will drop encounter a serialization error and will drop the connection with a remote node.\r\n\r\nThe client receives the following error: \r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"transport_serialization_exception\",\"reason\":\"Failed to deserialize response of type [org.elasticsearch.search.fetch.QueryFetchSearchResult]\"}],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[{\"shard\":0,\"index\":\"concrete-index\",\"node\":\"a1P-CIioRmeJ96CqoWo0gA\",\"reason\":{\"type\":\"transport_serialization_exception\",\"reason\":\"Failed to deserialize response of type [org.elasticsearch.search.fetch.QueryFetchSearchResult]\",\"caused_by\":{\"type\":\"index_out_of_bounds_exception\",\"reason\":\"readerIndex(45) + length(1) exceeds writerIndex(45): UnpooledDuplicatedByteBuf(ridx: 45, widx: 45, cap: 65536, unwrapped: PooledHeapByteBuf(ridx: 6, widx: 45, cap: 65536))\"}}}]},\"status\":500}\r\n```\r\n\r\nThis issue does not always reproduce but I have noticed the following:\r\n- For an index with a one shard and one replica, it happens if there are three cluster nodes but not two.\r\n- It can be possible to mitigate the problem by assigning certain nodes to master and making them gateway nodes.\r\n- The issue sometimes triggers on a concrete index although less reliably\r\n\r\n**Steps to reproduce**:\r\n\r\n1. Setup clusters\r\n\r\nLocal client cluster:\r\n```\r\n$ elasticsearch-5.6.2/bin/elasticsearch \\\r\n -Epath.data=./data.client \\\r\n -Epath.logs=logs.client \\\r\n -Ecluster.name=client \\\r\n -Etransport.tcp.port=9300 \\\r\n -Ehttp.port=9200 \\\r\n -d\r\n```\r\n\r\nRemote cluster:\r\n```\r\n$ elasticsearch-5.6.2/bin/elasticsearch \\\r\n -Epath.data=./data.remote1 \\\r\n -Epath.logs=./logs.remote1 \\\r\n -Ecluster.name=remote \\\r\n -Etransport.tcp.port=9310 \\\r\n -Ehttp.port=9210 \\\r\n -Ediscovery.zen.ping.unicast.hosts=localhost:9320 \\\r\n -d\r\n$ elasticsearch-5.6.2/bin/elasticsearch \\\r\n -Epath.data=./data.remote2 \\\r\n -Epath.logs=./logs.remote2 \\\r\n -Ecluster.name=remote \\\r\n -Etransport.tcp.port=9320 \\\r\n -Ehttp.port=9220 \\\r\n -Ediscovery.zen.ping.unicast.hosts=localhost:9310 \\\r\n -d\r\n$ elasticsearch-5.6.2/bin/elasticsearch \\\r\n -Epath.data=./data.remote3 \\\r\n -Epath.logs=./logs.remote3 \\\r\n -Ecluster.name=remote \\\r\n -Etransport.tcp.port=9330 \\\r\n -Ehttp.port=9230 \\\r\n -Ediscovery.zen.ping.unicast.hosts=localhost:9310 \\\r\n -d\r\n```\r\n\r\n2. Wait for the nodes to start\r\n3. Add a seed for cross cluster search to the local node\r\n```\r\ncurl -XPUT 'http://localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"transient\": {\r\n \"search.remote.remote.seeds\": \"127.0.0.1:9310\"\r\n }\r\n}'\r\n```\r\n4. Create the remote index\r\n```\r\ncurl -XPUT 'http://localhost:9210/concrete-index?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\" : {\r\n \"index\" : {\r\n \"number_of_shards\" : 1,\r\n \"number_of_replicas\" : 1\r\n }\r\n },\r\n \"aliases\" : {\r\n \"alias\" : {}\r\n }\r\n}'\r\n```\r\n\r\n5. 
Execute a search\r\n```\r\ncurl 'http://localhost:9200/remote:alias/_search?pretty'\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"transport_serialization_exception\",\r\n \"reason\" : \"Failed to deserialize response of type [org.elasticsearch.search.fetch.QueryFetchSearchResult]\"\r\n }\r\n ],\r\n \"type\" : \"search_phase_execution_exception\",\r\n \"reason\" : \"all shards failed\",\r\n \"phase\" : \"query\",\r\n \"grouped\" : true,\r\n \"failed_shards\" : [\r\n {\r\n \"shard\" : 0,\r\n \"index\" : \"concrete-index\",\r\n \"node\" : \"D7hySVoNTzK41TZdXsYElA\",\r\n \"reason\" : {\r\n \"type\" : \"transport_serialization_exception\",\r\n \"reason\" : \"Failed to deserialize response of type [org.elasticsearch.search.fetch.QueryFetchSearchResult]\",\r\n \"caused_by\" : {\r\n \"type\" : \"index_out_of_bounds_exception\",\r\n \"reason\" : \"readerIndex(45) + length(1) exceeds writerIndex(45): UnpooledDuplicatedByteBuf(ridx: 45, widx: 45, cap: 65536, unwrapped: PooledHeapByteBuf(ridx: 6, widx: 45, cap: 65536))\"\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"status\" : 500\r\n}\r\n```\r\n\r\n",
"comments": [
{
"body": "I have experimented with this issue a bit more and I have noticed that the behavior does not seem linked to aliases. Instead, it seems linked to whether the gateway node the client node communicates with has the data being requested. I'll update the issue description accordingly.",
"created_at": "2017-10-02T15:24:39Z"
},
{
"body": "thanks a lot for opening this issue @greentruff . I was just able to reproduce this. I will dig and find out what causes it.",
"created_at": "2017-10-02T15:25:46Z"
},
{
"body": "Ok thanks @javanna. If it helps at all, I was able to reproduce the issue more reliably by having a single gateway node which is not a data node.",
"created_at": "2017-10-02T15:27:39Z"
},
{
"body": "Attaching the stacktrace for future reference:\r\n\r\n```\r\nCaused by: java.lang.IndexOutOfBoundsException: readerIndex(45) + length(1) exceeds writerIndex(45): UnpooledDuplicatedByteBuf(ridx: 45, widx: 45, cap: 65536, unwrapped: PooledHeapByteBuf(ridx: 6, widx: 45, cap: 65536))\r\n\tat io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1396) ~[?:?]\r\n\tat io.netty.buffer.AbstractByteBuf.readByte(AbstractByteBuf.java:687) ~[?:?]\r\n\tat org.elasticsearch.transport.netty4.ByteBufStreamInput.readByte(ByteBufStreamInput.java:135) ~[?:?]\r\n\tat org.elasticsearch.common.io.stream.FilterStreamInput.readByte(FilterStreamInput.java:40) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.common.io.stream.StreamInput.readInt(StreamInput.java:173) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.common.io.stream.StreamInput.readLong(StreamInput.java:215) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.search.fetch.FetchSearchResult.readFrom(FetchSearchResult.java:90) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.search.fetch.FetchSearchResult.readFetchSearchResult(FetchSearchResult.java:83) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.search.fetch.QueryFetchSearchResult.readFrom(QueryFetchSearchResult.java:90) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n\tat org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1417) ~[elasticsearch-5.6.2.jar:5.6.2]\r\n```",
"created_at": "2017-10-02T15:28:16Z"
},
{
"body": "hi @greentruff thanks again for reporting this. Turns out it was happening only when search went to a single shard. We have an optimization there to perform query and fetch in the same round rather than in separate rounds, but the proxying layer didn't support that optimization hence whenever you went through a gateway node, the response couldn't be serialized back to the coordinating node.",
"created_at": "2017-10-05T06:54:39Z"
},
{
"body": "Thanks a lot for the speedy fix !",
"created_at": "2017-10-05T08:15:26Z"
},
{
"body": "Hi Javanna \r\nI face the issue in cross cluster search in my environment the error code shows 503 handshake issue .Herei am using the same authority certificates i am using both the clusters but still i am facing the error in cross cluster connection could you response ASAP",
"created_at": "2018-09-13T03:10:28Z"
}
],
"number": 26833,
"title": "Cross cluster search sometimes fails"
} | {
"body": "The single shard optimization that we have in our search api changes the type of response returned by the query transport action name based on the shard search request. if the request goes to one shard, we will do query and fetch at the same time, hence the response will be different. The proxying layer used in cross cluster search was not aware of this distinction, which causes serialization issues every time a cross cluster search request goes to a single shard and goes through a gateway node which has to forward the shard request to a data node. The coordinating node would then expect a `QueryFetchSearchResult` while the gateway would return a `QuerySearchResult`.\r\n\r\nCloses #26833",
"number": 26881,
"review_comments": [
{
"body": "do we really need to hold on to the request or can we invoke the function here? I really think we shouldn't hold on to the request?!",
"created_at": "2017-10-04T14:29:34Z"
}
],
"title": "Fix serialization errors when cross cluster search goes to a single shard"
} | {
"commits": [
{
"message": "Fix serialization errors when cross cluster search go to a single shard\n\nThe single shard optimization that we have in our search api changes the type of response returned by the query transport action name based on the shard search request. if the request goes to one shard, we will do query and fetch at the same time, hence the response will be different. The proxying layer used in cross cluster search was not aware of this distinction, which causes serialization issues every time a search request goes to a single shard and goes through a gateway node which has to forward the shard request to a data node.\n\nCloses #26833"
},
{
"message": "fix test"
},
{
"message": "don't hold on to the request"
}
],
"files": [
{
"diff": "@@ -40,6 +40,7 @@\n import org.elasticsearch.search.fetch.ShardFetchRequest;\n import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.search.query.QuerySearchRequest;\n import org.elasticsearch.search.query.QuerySearchResult;\n@@ -320,7 +321,8 @@ public void messageReceived(ScrollFreeContextRequest request, TransportChannel c\n channel.sendResponse(new SearchFreeContextResponse(freed));\n }\n });\n- TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME, SearchFreeContextResponse::new);\n+ TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME,\n+ (Supplier<TransportResponse>) SearchFreeContextResponse::new);\n transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, ThreadPool.Names.SAME, SearchFreeContextRequest::new,\n new TaskAwareTransportRequestHandler<SearchFreeContextRequest>() {\n @Override\n@@ -329,7 +331,8 @@ public void messageReceived(SearchFreeContextRequest request, TransportChannel c\n channel.sendResponse(new SearchFreeContextResponse(freed));\n }\n });\n- TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME, SearchFreeContextResponse::new);\n+ TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME,\n+ (Supplier<TransportResponse>) SearchFreeContextResponse::new);\n transportService.registerRequestHandler(CLEAR_SCROLL_CONTEXTS_ACTION_NAME, () -> TransportRequest.Empty.INSTANCE,\n ThreadPool.Names.SAME, new TaskAwareTransportRequestHandler<TransportRequest.Empty>() {\n @Override\n@@ -339,7 +342,7 @@ public void messageReceived(TransportRequest.Empty request, TransportChannel cha\n }\n });\n TransportActionProxy.registerProxyAction(transportService, CLEAR_SCROLL_CONTEXTS_ACTION_NAME,\n- () -> TransportResponse.Empty.INSTANCE);\n+ () -> TransportResponse.Empty.INSTANCE);\n \n transportService.registerRequestHandler(DFS_ACTION_NAME, ThreadPool.Names.SAME, ShardSearchTransportRequest::new,\n new TaskAwareTransportRequestHandler<ShardSearchTransportRequest>() {\n@@ -394,7 +397,8 @@ public void onFailure(Exception e) {\n });\n }\n });\n- TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME, QuerySearchResult::new);\n+ TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME,\n+ (request) -> ((ShardSearchRequest)request).numberOfShards() == 1 ? QueryFetchSearchResult::new : QuerySearchResult::new);\n \n transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, ThreadPool.Names.SEARCH, QuerySearchRequest::new,\n new TaskAwareTransportRequestHandler<QuerySearchRequest>() {\n@@ -455,7 +459,8 @@ public void messageReceived(ShardSearchTransportRequest request, TransportChanne\n channel.sendResponse(new CanMatchResponse(canMatch));\n }\n });\n- TransportActionProxy.registerProxyAction(transportService, QUERY_CAN_MATCH_NAME, CanMatchResponse::new);\n+ TransportActionProxy.registerProxyAction(transportService, QUERY_CAN_MATCH_NAME,\n+ (Supplier<TransportResponse>) CanMatchResponse::new);\n }\n \n public static final class CanMatchResponse extends SearchPhaseResult {",
"filename": "core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java",
"status": "modified"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.transport;\n \n-import org.apache.logging.log4j.util.Supplier;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -27,6 +26,8 @@\n \n import java.io.IOException;\n import java.io.UncheckedIOException;\n+import java.util.function.Function;\n+import java.util.function.Supplier;\n \n /**\n * TransportActionProxy allows an arbitrary action to be executed on a defined target node while the initial request is sent to a second\n@@ -41,19 +42,21 @@ private static class ProxyRequestHandler<T extends ProxyRequest> implements Tran\n \n private final TransportService service;\n private final String action;\n- private final Supplier<TransportResponse> responseFactory;\n+ private final Function<TransportRequest, Supplier<TransportResponse>> responseFunction;\n \n- ProxyRequestHandler(TransportService service, String action, Supplier<TransportResponse> responseFactory) {\n+ ProxyRequestHandler(TransportService service, String action, Function<TransportRequest,\n+ Supplier<TransportResponse>> responseFunction) {\n this.service = service;\n this.action = action;\n- this.responseFactory = responseFactory;\n+ this.responseFunction = responseFunction;\n }\n \n @Override\n public void messageReceived(T request, TransportChannel channel) throws Exception {\n DiscoveryNode targetNode = request.targetNode;\n TransportRequest wrappedRequest = request.wrapped;\n- service.sendRequest(targetNode, action, wrappedRequest, new ProxyResponseHandler<>(channel, responseFactory));\n+ service.sendRequest(targetNode, action, wrappedRequest,\n+ new ProxyResponseHandler<>(channel, responseFunction.apply(wrappedRequest)));\n }\n }\n \n@@ -126,12 +129,24 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n /**\n- * Registers a proxy request handler that allows to forward requests for the given action to another node.\n+ * Registers a proxy request handler that allows to forward requests for the given action to another node. To be used when the\n+ * response type changes based on the upcoming request (quite rare)\n+ */\n+ public static void registerProxyAction(TransportService service, String action,\n+ Function<TransportRequest, Supplier<TransportResponse>> responseFunction) {\n+ RequestHandlerRegistry requestHandler = service.getRequestHandler(action);\n+ service.registerRequestHandler(getProxyAction(action), () -> new ProxyRequest(requestHandler::newRequest), ThreadPool.Names.SAME,\n+ true, false, new ProxyRequestHandler<>(service, action, responseFunction));\n+ }\n+\n+ /**\n+ * Registers a proxy request handler that allows to forward requests for the given action to another node. To be used when the\n+ * response type is always the same (most of the cases).\n */\n public static void registerProxyAction(TransportService service, String action, Supplier<TransportResponse> responseSupplier) {\n RequestHandlerRegistry requestHandler = service.getRequestHandler(action);\n service.registerRequestHandler(getProxyAction(action), () -> new ProxyRequest(requestHandler::newRequest), ThreadPool.Names.SAME,\n- true, false, new ProxyRequestHandler<>(service, action, responseSupplier));\n+ true, false, new ProxyRequestHandler<>(service, action, request -> responseSupplier));\n }\n \n private static final String PROXY_ACTION_PREFIX = \"internal:transport/proxy/\";",
"filename": "core/src/main/java/org/elasticsearch/transport/TransportActionProxy.java",
"status": "modified"
},
{
"diff": "@@ -165,3 +165,14 @@\n - match: { hits.total: 2 }\n - match: { hits.hits.0._source.filter_field: 1 }\n - match: { hits.hits.0._index: \"my_remote_cluster:test_index\" }\n+\n+---\n+\"Single shard search gets properly proxied\":\n+\n+ - do:\n+ search:\n+ index: \"my_remote_cluster:single_doc_index\"\n+\n+ - match: { _shards.total: 1 }\n+ - match: { hits.total: 1 }\n+ - match: { hits.hits.0._index: \"my_remote_cluster:single_doc_index\"}",
"filename": "qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/10_basic.yml",
"status": "modified"
}
]
} |
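The shape of the fix above is that the proxy selects its response reader from the incoming request rather than once at registration time. A sketch of that idea with toy types follows; none of these are the actual transport classes:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Toy types standing in for the real transport classes, to show the shape of the fix:
// the response reader is chosen per request instead of being fixed at registration time.
public class ProxyResponseSketch {
    interface ShardRequest { int numberOfShards(); }
    interface SearchResult {}
    static class QuerySearchResult implements SearchResult {}
    static class QueryFetchSearchResult implements SearchResult {}

    public static void main(String[] args) {
        // A single-shard search runs query and fetch in one round, so the proxy must
        // expect the combined result; otherwise it expects the query-only result.
        Function<ShardRequest, Supplier<SearchResult>> responseFunction =
            request -> request.numberOfShards() == 1
                ? QueryFetchSearchResult::new
                : QuerySearchResult::new;

        ShardRequest singleShard = () -> 1;
        ShardRequest threeShards = () -> 3;

        System.out.println(responseFunction.apply(singleShard).get().getClass().getSimpleName());
        System.out.println(responseFunction.apply(threeShards).get().getClass().getSimpleName());
    }
}
```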
{
"body": "Today we represent each value of a list setting with it's own dedicated key that ends with the index of the value in the list. Aside of the obvious weirdness this has several issues especially if lists are massive since it causes massive runtime penalties when validating settings. Like a list of 100k words will literally cause a create index call to timeout and in-turn massive slowdown on all subsequent validations runs. \r\n\r\nThis change moves away from the current internal representation towards a single decoded string representation. A list is encoded into a JSON list internally in a fully backwards compatible way. A list is then a single key value pair in settings and all prefix based settings are internally converted to the new representation in the settings builder. This change also forbids to add a settings that ends with a `.0` which was internally used to detect a list setting. Once this has been rolled out for an entire major version all the internal `.0` handling can be removed since all settings will be converted.\r\n",
"comments": [
{
"body": "When discussing this many weeks ago, I suggested having the value of the internal map be a discriminated union. This could either simply be Object with the discrimination done by instanceof checks, or a simple POJO with a boolean+Object. I'm apprehensive about this PR because it means every iteration of a list setting requires creation of new strings (by parsing the \"list\" string). Why are these special marker strings to denote whether the value is a list any better than actually having a List and using instanceof? In either case, we have to ensure access to the internal map is removed, but the latter is much more transparent and easier to not mess up.",
"created_at": "2017-09-26T18:37:07Z"
},
{
"body": "I am closing this one since the encapsulation of Settings is so broken today I first have to clean up all the cruft around it to make it work. I think ultimately we can just have a `String,Object` map internally but before that we have to get rid of things like `getAsMap` on settings.",
"created_at": "2017-09-27T09:12:33Z"
}
],
"number": 26723,
"title": "Change format how settings represent lists / array"
} | {
"body": "Today we represent each value of a list setting with it's own dedicated key\r\nthat ends with the index of the value in the list. Aside of the obvious\r\nweirdness this has several issues especially if lists are massive since it\r\ncauses massive runtime penalties when validating settings. Like a list of 100k\r\nwords will literally cause a create index call to timeout and in-turn massive\r\nslowdown on all subsequent validations runs.\r\n\r\nWith this change we use a simple string list to represent the list. This change\r\nalso forbids to add a settings that ends with a .0 which was internally used to\r\ndetect a list setting. Once this has been rolled out for an entire major\r\nversion all the internal .0 handling can be removed since all settings will be\r\nconverted.\r\n\r\nRelates to #26723",
"number": 26878,
"review_comments": [
{
"body": "If we are still going to have getAsArray, maybe we should store as an array instead of List? Then this method is really just a check + cast.",
"created_at": "2017-10-04T17:41:53Z"
},
{
"body": "these comments are no longer relevant i think? The essentially part was because each list element was it's own settings value?",
"created_at": "2017-10-04T17:46:38Z"
},
{
"body": "so my plan was rather to change the return type in a followup to `List<String>` since I really don't want to return a mutable object. Makes sense?",
"created_at": "2017-10-04T18:45:15Z"
},
{
"body": "yeah correct I will remove it",
"created_at": "2017-10-04T18:46:12Z"
},
{
"body": "It does, I had not thought about the mutability of an array!",
"created_at": "2017-10-05T01:12:49Z"
}
],
"title": "Represent lists as actual lists inside Settings"
} | {
"commits": [
{
"message": "Represent lists as actual lists inside Settings\n\nToday we represent each value of a list setting with it's own dedicated key\nthat ends with the index of the value in the list. Aside of the obvious\nweirdness this has several issues especially if lists are massive since it\ncauses massive runtime penalties when validating settings. Like a list of 100k\nwords will literally cause a create index call to timeout and in-turn massive\nslowdown on all subsequent validations runs.\n\nWith this change we use a simple string list to represent the list. This change\nalso forbids to add a settings that ends with a .0 which was internally used to\ndetect a list setting. Once this has been rolled out for an entire major\nversion all the internal .0 handling can be removed since all settings will be\nconverted.\n\nRelates to #26723"
},
{
"message": "fix array parsing and javadocs"
},
{
"message": "cleanup settings a bit more"
},
{
"message": "fix settings serialization"
},
{
"message": "fix Settings.Builder#copy"
},
{
"message": "fix discovery tags"
},
{
"message": "remove stale comment"
},
{
"message": "Merge branch 'master' into fix_list_settings_for_real"
}
],
"files": [
{
"diff": "@@ -86,7 +86,7 @@ protected AbstractScopedSettings(Settings settings, Set<Setting<?>> settingsSet,\n \n protected void validateSettingKey(Setting setting) {\n if (isValidKey(setting.getKey()) == false && (setting.isGroupSetting() && isValidGroupKey(setting.getKey())\n- || isValidAffixKey(setting.getKey())) == false) {\n+ || isValidAffixKey(setting.getKey())) == false || setting.getKey().endsWith(\".0\")) {\n throw new IllegalArgumentException(\"illegal settings key: [\" + setting.getKey() + \"]\");\n }\n }\n@@ -534,7 +534,7 @@ private static boolean applyDeletes(Set<String> deletes, Settings.Builder builde\n boolean changed = false;\n for (String entry : deletes) {\n Set<String> keysToRemove = new HashSet<>();\n- Set<String> keySet = builder.internalMap().keySet();\n+ Set<String> keySet = builder.keys();\n for (String key : keySet) {\n if (Regex.simpleMatch(entry, key) && canRemove.test(key)) {\n // we have to re-check with canRemove here since we might have a wildcard expression foo.* that matches",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
{
"diff": "@@ -127,7 +127,7 @@ public Settings getValue(Settings current, Settings previous) {\n Settings.Builder builder = Settings.builder();\n builder.put(current.filter(loggerPredicate));\n for (String key : previous.keySet()) {\n- if (loggerPredicate.test(key) && builder.internalMap().containsKey(key) == false) {\n+ if (loggerPredicate.test(key) && builder.keys().contains(key) == false) {\n if (ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).exists(settings) == false) {\n builder.putNull(key);\n } else {",
"filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java",
"status": "modified"
},
{
"diff": "@@ -820,12 +820,6 @@ boolean hasComplexMatcher() {\n return true;\n }\n \n- @Override\n- public boolean exists(Settings settings) {\n- boolean exists = super.exists(settings);\n- return exists || settings.get(getKey() + \".0\") != null;\n- }\n-\n @Override\n public void diff(Settings.Builder builder, Settings source, Settings defaultSettings) {\n if (exists(source) == false) {",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -39,7 +39,6 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.ToXContentFragment;\n-import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -57,23 +56,18 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n-import java.util.Dictionary;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.List;\n-import java.util.Locale;\n import java.util.Map;\n import java.util.NoSuchElementException;\n-import java.util.Objects;\n import java.util.Set;\n import java.util.TreeMap;\n import java.util.concurrent.TimeUnit;\n import java.util.function.Function;\n import java.util.function.Predicate;\n import java.util.function.UnaryOperator;\n-import java.util.regex.Matcher;\n-import java.util.regex.Pattern;\n import java.util.stream.Collectors;\n import java.util.stream.Stream;\n \n@@ -87,10 +81,9 @@\n public final class Settings implements ToXContentFragment {\n \n public static final Settings EMPTY = new Builder().build();\n- private static final Pattern ARRAY_PATTERN = Pattern.compile(\"(.*)\\\\.\\\\d+$\");\n \n /** The raw settings from the full key to raw string value. */\n- private final Map<String, String> settings;\n+ private final Map<String, Object> settings;\n \n /** The secure settings storage associated with these settings. */\n private final SecureSettings secureSettings;\n@@ -104,7 +97,7 @@ public final class Settings implements ToXContentFragment {\n */\n private final SetOnce<Set<String>> keys = new SetOnce<>();\n \n- Settings(Map<String, String> settings, SecureSettings secureSettings) {\n+ Settings(Map<String, Object> settings, SecureSettings secureSettings) {\n // we use a sorted map for consistent serialization when using getAsMap()\n this.settings = Collections.unmodifiableSortedMap(new TreeMap<>(settings));\n this.secureSettings = secureSettings;\n@@ -120,7 +113,7 @@ SecureSettings getSecureSettings() {\n \n private Map<String, Object> getAsStructuredMap() {\n Map<String, Object> map = new HashMap<>(2);\n- for (Map.Entry<String, String> entry : settings.entrySet()) {\n+ for (Map.Entry<String, Object> entry : settings.entrySet()) {\n processSetting(map, \"\", entry.getKey(), entry.getValue());\n }\n for (Map.Entry<String, Object> entry : map.entrySet()) {\n@@ -133,7 +126,7 @@ private Map<String, Object> getAsStructuredMap() {\n return map;\n }\n \n- private void processSetting(Map<String, Object> map, String prefix, String setting, String value) {\n+ private void processSetting(Map<String, Object> map, String prefix, String setting, Object value) {\n int prefixLength = setting.indexOf('.');\n if (prefixLength == -1) {\n @SuppressWarnings(\"unchecked\") Map<String, Object> innerMap = (Map<String, Object>) map.get(prefix + setting);\n@@ -237,7 +230,7 @@ public Settings getAsSettings(String setting) {\n * @return The setting value, <tt>null</tt> if it does not exists.\n */\n public String get(String setting) {\n- return settings.get(setting);\n+ return toString(settings.get(setting));\n }\n \n /**\n@@ -373,83 +366,60 @@ public SizeValue getAsSize(String setting, SizeValue defaultValue) throws Settin\n }\n \n /**\n- * The values associated with a setting prefix as an array. 
The settings array is in the format of:\n- * <tt>settingPrefix.[index]</tt>.\n+ * The values associated with a setting key as an array.\n * <p>\n * It will also automatically load a comma separated list under the settingPrefix and merge with\n * the numbered format.\n *\n- * @param settingPrefix The setting prefix to load the array by\n+ * @param key The setting prefix to load the array by\n * @return The setting array values\n */\n- public String[] getAsArray(String settingPrefix) throws SettingsException {\n- return getAsArray(settingPrefix, Strings.EMPTY_ARRAY, true);\n+ public String[] getAsArray(String key) throws SettingsException {\n+ return getAsArray(key, Strings.EMPTY_ARRAY, true);\n }\n \n /**\n- * The values associated with a setting prefix as an array. The settings array is in the format of:\n- * <tt>settingPrefix.[index]</tt>.\n+ * The values associated with a setting key as an array.\n * <p>\n * If commaDelimited is true, it will automatically load a comma separated list under the settingPrefix and merge with\n * the numbered format.\n *\n- * @param settingPrefix The setting prefix to load the array by\n+ * @param key The setting key to load the array by\n * @return The setting array values\n */\n- public String[] getAsArray(String settingPrefix, String[] defaultArray) throws SettingsException {\n- return getAsArray(settingPrefix, defaultArray, true);\n+ public String[] getAsArray(String key, String[] defaultArray) throws SettingsException {\n+ return getAsArray(key, defaultArray, true);\n }\n \n /**\n- * The values associated with a setting prefix as an array. The settings array is in the format of:\n- * <tt>settingPrefix.[index]</tt>.\n+ * The values associated with a setting key as an array.\n * <p>\n * It will also automatically load a comma separated list under the settingPrefix and merge with\n * the numbered format.\n *\n- * @param settingPrefix The setting prefix to load the array by\n+ * @param key The setting key to load the array by\n * @param defaultArray The default array to use if no value is specified\n * @param commaDelimited Whether to try to parse a string as a comma-delimited value\n * @return The setting array values\n */\n- public String[] getAsArray(String settingPrefix, String[] defaultArray, Boolean commaDelimited) throws SettingsException {\n+ public String[] getAsArray(String key, String[] defaultArray, Boolean commaDelimited) throws SettingsException {\n List<String> result = new ArrayList<>();\n-\n- final String valueFromPrefix = get(settingPrefix);\n- final String valueFromPreifx0 = get(settingPrefix + \".0\");\n-\n- if (valueFromPrefix != null && valueFromPreifx0 != null) {\n- final String message = String.format(\n- Locale.ROOT,\n- \"settings object contains values for [%s=%s] and [%s=%s]\",\n- settingPrefix,\n- valueFromPrefix,\n- settingPrefix + \".0\",\n- valueFromPreifx0);\n- throw new IllegalStateException(message);\n- }\n-\n- if (get(settingPrefix) != null) {\n- if (commaDelimited) {\n- String[] strings = Strings.splitStringByCommaToArray(get(settingPrefix));\n+ final Object valueFromPrefix = settings.get(key);\n+ if (valueFromPrefix != null) {\n+ if (valueFromPrefix instanceof List) {\n+ result = ((List<String>) valueFromPrefix);\n+ } else if (commaDelimited) {\n+ String[] strings = Strings.splitStringByCommaToArray(get(key));\n if (strings.length > 0) {\n for (String string : strings) {\n result.add(string.trim());\n }\n }\n } else {\n- result.add(get(settingPrefix).trim());\n+ result.add(get(key).trim());\n }\n }\n \n- int counter = 
0;\n- while (true) {\n- String value = get(settingPrefix + '.' + (counter++));\n- if (value == null) {\n- break;\n- }\n- result.add(value.trim());\n- }\n if (result.isEmpty()) {\n return defaultArray;\n }\n@@ -550,7 +520,7 @@ public Set<String> names() {\n */\n public String toDelimitedString(char delimiter) {\n StringBuilder sb = new StringBuilder();\n- for (Map.Entry<String, String> entry : settings.entrySet()) {\n+ for (Map.Entry<String, Object> entry : settings.entrySet()) {\n sb.append(entry.getKey()).append(\"=\").append(entry.getValue()).append(delimiter);\n }\n return sb.toString();\n@@ -575,19 +545,52 @@ public int hashCode() {\n public static Settings readSettingsFromStream(StreamInput in) throws IOException {\n Builder builder = new Builder();\n int numberOfSettings = in.readVInt();\n- for (int i = 0; i < numberOfSettings; i++) {\n- builder.put(in.readString(), in.readOptionalString());\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ for (int i = 0; i < numberOfSettings; i++) {\n+ String key = in.readString();\n+ Object value = in.readGenericValue();\n+ if (value == null) {\n+ builder.putNull(key);\n+ } else if (value instanceof List) {\n+ builder.putArray(key, (List<String>) value);\n+ } else {\n+ builder.put(key, value.toString());\n+ }\n+ }\n+ } else {\n+ for (int i = 0; i < numberOfSettings; i++) {\n+ String key = in.readString();\n+ String value = in.readOptionalString();\n+ builder.put(key, value);\n+ }\n }\n return builder.build();\n }\n \n public static void writeSettingsToStream(Settings settings, StreamOutput out) throws IOException {\n // pull settings to exclude secure settings in size()\n- Set<Map.Entry<String, String>> entries = settings.settings.entrySet();\n- out.writeVInt(entries.size());\n- for (Map.Entry<String, String> entry : entries) {\n- out.writeString(entry.getKey());\n- out.writeOptionalString(entry.getValue());\n+ Set<Map.Entry<String, Object>> entries = settings.settings.entrySet();\n+ if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeVInt(entries.size());\n+ for (Map.Entry<String, Object> entry : entries) {\n+ out.writeString(entry.getKey());\n+ out.writeGenericValue(entry.getValue());\n+ }\n+ } else {\n+ int size = entries.stream().mapToInt(e -> e.getValue() instanceof List ? 
((List)e.getValue()).size() : 1).sum();\n+ out.writeVInt(size);\n+ for (Map.Entry<String, Object> entry : entries) {\n+ if (entry.getValue() instanceof List) {\n+ int idx = 0;\n+ for (String value : (List<String>)entry.getValue()) {\n+ out.writeString(entry.getKey() + \".\" + idx++);\n+ out.writeOptionalString(value);\n+ }\n+ } else {\n+ out.writeString(entry.getKey());\n+ out.writeOptionalString(toString(entry.getValue()));\n+ }\n+ }\n }\n }\n \n@@ -606,7 +609,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(entry.getKey(), entry.getValue());\n }\n } else {\n- for (Map.Entry<String, String> entry : settings.settings.entrySet()) {\n+ for (Map.Entry<String, Object> entry : settings.settings.entrySet()) {\n builder.field(entry.getKey(), entry.getValue());\n }\n }\n@@ -622,9 +625,7 @@ public static Settings fromXContent(XContentParser parser) throws IOException {\n return fromXContent(parser, true, false);\n }\n \n- private static Settings fromXContent(XContentParser parser, boolean allowNullValues,\n- boolean validateEndOfStream)\n- throws IOException {\n+ private static Settings fromXContent(XContentParser parser, boolean allowNullValues, boolean validateEndOfStream) throws IOException {\n if (parser.currentToken() == null) {\n parser.nextToken();\n }\n@@ -766,30 +767,30 @@ public static class Builder {\n public static final Settings EMPTY_SETTINGS = new Builder().build();\n \n // we use a sorted map for consistent serialization when using getAsMap()\n- private final Map<String, String> map = new TreeMap<>();\n+ private final Map<String, Object> map = new TreeMap<>();\n \n private SetOnce<SecureSettings> secureSettings = new SetOnce<>();\n \n private Builder() {\n \n }\n \n- public Map<String, String> internalMap() {\n- return this.map;\n+ public Set<String> keys() {\n+ return this.map.keySet();\n }\n \n /**\n * Removes the provided setting from the internal map holding the current list of settings.\n */\n public String remove(String key) {\n- return map.remove(key);\n+ return Settings.toString(map.remove(key));\n }\n \n /**\n * Returns a setting value based on the setting key.\n */\n public String get(String key) {\n- return map.get(key);\n+ return Settings.toString(map.get(key));\n }\n \n /** Return the current secure settings, or {@code null} if none have been set. */\n@@ -892,10 +893,17 @@ public Builder copy(String key, Settings source) {\n }\n \n public Builder copy(String key, String sourceKey, Settings source) {\n- if (source.keySet().contains(sourceKey) == false) {\n+ if (source.settings.containsKey(sourceKey) == false) {\n throw new IllegalArgumentException(\"source key not found in the source settings\");\n }\n- return put(key, source.get(sourceKey));\n+ final Object value = source.settings.get(sourceKey);\n+ if (value instanceof List) {\n+ return putArray(key, (List)value);\n+ } else if (value == null) {\n+ return putNull(key);\n+ } else {\n+ return put(key, Settings.toString(value));\n+ }\n }\n \n /**\n@@ -1027,16 +1035,7 @@ public Builder putArray(String setting, String... values) {\n */\n public Builder putArray(String setting, List<String> values) {\n remove(setting);\n- int counter = 0;\n- while (true) {\n- String value = map.remove(setting + '.' 
+ (counter++));\n- if (value == null) {\n- break;\n- }\n- }\n- for (int i = 0; i < values.size(); i++) {\n- put(setting + \".\" + i, values.get(i));\n- }\n+ map.put(setting, Collections.unmodifiableList(new ArrayList<>(values)));\n return this;\n }\n \n@@ -1069,55 +1068,41 @@ public Builder put(Settings settings) {\n * @param copySecureSettings if <code>true</code> all settings including secure settings are copied.\n */\n public Builder put(Settings settings, boolean copySecureSettings) {\n- removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings.settings);\n- map.putAll(settings.settings);\n+ Map<String, Object> settingsMap = new HashMap<>(settings.settings);\n+ processLegacyLists(settingsMap);\n+ map.putAll(settingsMap);\n if (copySecureSettings && settings.getSecureSettings() != null) {\n setSecureSettings(settings.getSecureSettings());\n }\n return this;\n }\n \n- /**\n- * Removes non array values from the existing map, if settings contains an array value instead\n- *\n- * Example:\n- * Existing map contains: {key:value}\n- * New map contains: {key:[value1,value2]} (which has been flattened to {}key.0:value1,key.1:value2})\n- *\n- * This ensure that that the 'key' field gets removed from the map in order to override all the\n- * data instead of merging\n- */\n- private void removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(Map<String, String> settings) {\n- List<String> prefixesToRemove = new ArrayList<>();\n- for (final Map.Entry<String, String> entry : settings.entrySet()) {\n- final Matcher matcher = ARRAY_PATTERN.matcher(entry.getKey());\n- if (matcher.matches()) {\n- prefixesToRemove.add(matcher.group(1));\n- } else if (map.keySet().stream().anyMatch(key -> key.startsWith(entry.getKey() + \".\"))) {\n- prefixesToRemove.add(entry.getKey());\n- }\n- }\n- for (String prefix : prefixesToRemove) {\n- Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();\n- while (iterator.hasNext()) {\n- Map.Entry<String, String> entry = iterator.next();\n- if (entry.getKey().startsWith(prefix + \".\") || entry.getKey().equals(prefix)) {\n- iterator.remove();\n+ private void processLegacyLists(Map<String, Object> map) {\n+ String[] array = map.keySet().toArray(new String[map.size()]);\n+ for (String key : array) {\n+ if (key.endsWith(\".0\")) { // let's only look at the head of the list and convert in order starting there.\n+ int counter = 0;\n+ String prefix = key.substring(0, key.lastIndexOf('.'));\n+ if (map.containsKey(prefix)) {\n+ throw new IllegalStateException(\"settings builder can't contain values for [\" + prefix + \"=\" + map.get(prefix)\n+ + \"] and [\" + key + \"=\" + map.get(key) + \"]\");\n+ }\n+ List<String> values = new ArrayList<>();\n+ while (true) {\n+ String listKey = prefix + '.' 
+ (counter++);\n+ String value = get(listKey);\n+ if (value == null) {\n+ map.put(prefix, values);\n+ break;\n+ } else {\n+ values.add(value);\n+ map.remove(listKey);\n+ }\n }\n }\n }\n }\n \n- /**\n- * Sets all the provided settings.\n- */\n- public Builder put(Dictionary<Object,Object> properties) {\n- for (Object key : Collections.list(properties.keys())) {\n- map.put(Objects.toString(key), Objects.toString(properties.get(key)));\n- }\n- return this;\n- }\n-\n /**\n * Loads settings from the actual string content that represents them using {@link #fromXContent(XContentParser)}\n */\n@@ -1195,7 +1180,7 @@ public String resolvePlaceholder(String placeholderName) {\n if (value != null) {\n return value;\n }\n- return map.get(placeholderName);\n+ return Settings.toString(map.get(placeholderName));\n }\n \n @Override\n@@ -1215,14 +1200,14 @@ public boolean shouldRemoveMissingPlaceholder(String placeholderName) {\n }\n };\n \n- Iterator<Map.Entry<String, String>> entryItr = map.entrySet().iterator();\n+ Iterator<Map.Entry<String, Object>> entryItr = map.entrySet().iterator();\n while (entryItr.hasNext()) {\n- Map.Entry<String, String> entry = entryItr.next();\n- if (entry.getValue() == null) {\n+ Map.Entry<String, Object> entry = entryItr.next();\n+ if (entry.getValue() == null || entry.getValue() instanceof List) {\n // a null value obviously can't be replaced\n continue;\n }\n- String value = propertyPlaceholder.replacePlaceholders(entry.getValue(), placeholderResolver);\n+ String value = propertyPlaceholder.replacePlaceholders(Settings.toString(entry.getValue()), placeholderResolver);\n // if the values exists and has length, we should maintain it in the map\n // otherwise, the replace process resolved into removing it\n if (Strings.hasLength(value)) {\n@@ -1240,10 +1225,10 @@ public boolean shouldRemoveMissingPlaceholder(String placeholderName) {\n * If a setting doesn't start with the prefix, the builder appends the prefix to such setting.\n */\n public Builder normalizePrefix(String prefix) {\n- Map<String, String> replacements = new HashMap<>();\n- Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();\n+ Map<String, Object> replacements = new HashMap<>();\n+ Iterator<Map.Entry<String, Object>> iterator = map.entrySet().iterator();\n while(iterator.hasNext()) {\n- Map.Entry<String, String> entry = iterator.next();\n+ Map.Entry<String, Object> entry = iterator.next();\n if (entry.getKey().startsWith(prefix) == false) {\n replacements.put(prefix + entry.getKey(), entry.getValue());\n iterator.remove();\n@@ -1258,30 +1243,31 @@ public Builder normalizePrefix(String prefix) {\n * set on this builder.\n */\n public Settings build() {\n+ processLegacyLists(map);\n return new Settings(map, secureSettings.get());\n }\n }\n \n // TODO We could use an FST internally to make things even faster and more compact\n- private static final class FilteredMap extends AbstractMap<String, String> {\n- private final Map<String, String> delegate;\n+ private static final class FilteredMap extends AbstractMap<String, Object> {\n+ private final Map<String, Object> delegate;\n private final Predicate<String> filter;\n private final String prefix;\n // we cache that size since we have to iterate the entire set\n // this is safe to do since this map is only used with unmodifiable maps\n private int size = -1;\n @Override\n- public Set<Entry<String, String>> entrySet() {\n- Set<Entry<String, String>> delegateSet = delegate.entrySet();\n- AbstractSet<Entry<String, String>> filterSet = new 
AbstractSet<Entry<String, String>>() {\n+ public Set<Entry<String, Object>> entrySet() {\n+ Set<Entry<String, Object>> delegateSet = delegate.entrySet();\n+ AbstractSet<Entry<String, Object>> filterSet = new AbstractSet<Entry<String, Object>>() {\n \n @Override\n- public Iterator<Entry<String, String>> iterator() {\n- Iterator<Entry<String, String>> iter = delegateSet.iterator();\n+ public Iterator<Entry<String, Object>> iterator() {\n+ Iterator<Entry<String, Object>> iter = delegateSet.iterator();\n \n- return new Iterator<Entry<String, String>>() {\n+ return new Iterator<Entry<String, Object>>() {\n private int numIterated;\n- private Entry<String, String> currentElement;\n+ private Entry<String, Object> currentElement;\n @Override\n public boolean hasNext() {\n if (currentElement != null) {\n@@ -1304,29 +1290,29 @@ public boolean hasNext() {\n }\n \n @Override\n- public Entry<String, String> next() {\n+ public Entry<String, Object> next() {\n if (currentElement == null && hasNext() == false) { // protect against no #hasNext call or not respecting it\n \n throw new NoSuchElementException(\"make sure to call hasNext first\");\n }\n- final Entry<String, String> current = this.currentElement;\n+ final Entry<String, Object> current = this.currentElement;\n this.currentElement = null;\n if (prefix == null) {\n return current;\n }\n- return new Entry<String, String>() {\n+ return new Entry<String, Object>() {\n @Override\n public String getKey() {\n return current.getKey().substring(prefix.length());\n }\n \n @Override\n- public String getValue() {\n+ public Object getValue() {\n return current.getValue();\n }\n \n @Override\n- public String setValue(String value) {\n+ public Object setValue(Object value) {\n throw new UnsupportedOperationException();\n }\n };\n@@ -1342,14 +1328,14 @@ public int size() {\n return filterSet;\n }\n \n- private FilteredMap(Map<String, String> delegate, Predicate<String> filter, String prefix) {\n+ private FilteredMap(Map<String, Object> delegate, Predicate<String> filter, String prefix) {\n this.delegate = delegate;\n this.filter = filter;\n this.prefix = prefix;\n }\n \n @Override\n- public String get(Object key) {\n+ public Object get(Object key) {\n if (key instanceof String) {\n final String theKey = prefix == null ? (String)key : prefix + key;\n if (filter.test(theKey)) {\n@@ -1437,4 +1423,9 @@ public String toString() {\n throw new UncheckedIOException(e);\n }\n }\n+\n+ private static String toString(Object o) {\n+ return o == null ? null : o.toString();\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -30,8 +30,6 @@\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.List;\n-import java.util.Map;\n-import java.util.Map.Entry;\n import java.util.Set;\n \n /**\n@@ -107,10 +105,10 @@ private static Settings filterSettings(Iterable<String> patterns, Settings setti\n }\n if (!simpleMatchPatternList.isEmpty()) {\n String[] simpleMatchPatterns = simpleMatchPatternList.toArray(new String[simpleMatchPatternList.size()]);\n- Iterator<Entry<String, String>> iterator = builder.internalMap().entrySet().iterator();\n+ Iterator<String> iterator = builder.keys().iterator();\n while (iterator.hasNext()) {\n- Map.Entry<String, String> current = iterator.next();\n- if (Regex.simpleMatch(simpleMatchPatterns, current.getKey())) {\n+ String key = iterator.next();\n+ if (Regex.simpleMatch(simpleMatchPatterns, key)) {\n iterator.remove();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/SettingsFilter.java",
"status": "modified"
},
{
"diff": "@@ -101,13 +101,8 @@ public static boolean isNoStopwords(Settings settings) {\n \n public static CharArraySet parseStemExclusion(Settings settings, CharArraySet defaultStemExclusion) {\n String value = settings.get(\"stem_exclusion\");\n- if (value != null) {\n- if (\"_none_\".equals(value)) {\n- return CharArraySet.EMPTY_SET;\n- } else {\n- // LUCENE 4 UPGRADE: Should be settings.getAsBoolean(\"stem_exclusion_case\", false)?\n- return new CharArraySet(Strings.commaDelimitedListToSet(value), false);\n- }\n+ if (\"_none_\".equals(value)) {\n+ return CharArraySet.EMPTY_SET;\n }\n String[] stemExclusion = settings.getAsArray(\"stem_exclusion\", null);\n if (stemExclusion != null) {\n@@ -164,7 +159,7 @@ public static CharArraySet parseWords(Environment env, Settings settings, String\n if (\"_none_\".equals(value)) {\n return CharArraySet.EMPTY_SET;\n } else {\n- return resolveNamedWords(Strings.commaDelimitedListToSet(value), namedWords, ignoreCase);\n+ return resolveNamedWords(Arrays.asList(settings.getAsArray(name)), namedWords, ignoreCase);\n }\n }\n List<String> pathLoadedWords = getWordList(env, settings, name);",
"filename": "core/src/main/java/org/elasticsearch/index/analysis/Analysis.java",
"status": "modified"
},
{
"diff": "@@ -134,7 +134,7 @@ static void initializeSettings(final Settings.Builder output, final Settings inp\n private static void finalizeSettings(Settings.Builder output, Terminal terminal) {\n // allow to force set properties based on configuration of the settings provided\n List<String> forcedSettings = new ArrayList<>();\n- for (String setting : output.internalMap().keySet()) {\n+ for (String setting : output.keys()) {\n if (setting.startsWith(\"force.\")) {\n forcedSettings.add(setting);\n }\n@@ -156,13 +156,13 @@ private static void finalizeSettings(Settings.Builder output, Terminal terminal)\n private static void replacePromptPlaceholders(Settings.Builder settings, Terminal terminal) {\n List<String> secretToPrompt = new ArrayList<>();\n List<String> textToPrompt = new ArrayList<>();\n- for (Map.Entry<String, String> entry : settings.internalMap().entrySet()) {\n- switch (entry.getValue()) {\n+ for (String key : settings.keys()) {\n+ switch (settings.get(key)) {\n case SECRET_PROMPT_VALUE:\n- secretToPrompt.add(entry.getKey());\n+ secretToPrompt.add(key);\n break;\n case TEXT_PROMPT_VALUE:\n- textToPrompt.add(entry.getKey());\n+ textToPrompt.add(key);\n break;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java",
"status": "modified"
},
{
"diff": "@@ -468,21 +468,21 @@ public void testDiff() throws IOException {\n ClusterSettings settings = new ClusterSettings(Settings.EMPTY, new HashSet<>(Arrays.asList(fooBar, fooBarBaz, foorBarQuux,\n someGroup, someAffix)));\n Settings diff = settings.diff(Settings.builder().put(\"foo.bar\", 5).build(), Settings.EMPTY);\n- assertEquals(4, diff.size()); // 4 since foo.bar.quux has 3 values essentially\n+ assertEquals(2, diff.size());\n assertThat(diff.getAsInt(\"foo.bar.baz\", null), equalTo(1));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"a\", \"b\", \"c\"});\n \n diff = settings.diff(\n Settings.builder().put(\"foo.bar\", 5).build(),\n Settings.builder().put(\"foo.bar.baz\", 17).putArray(\"foo.bar.quux\", \"d\", \"e\", \"f\").build());\n- assertEquals(4, diff.size()); // 4 since foo.bar.quux has 3 values essentially\n+ assertEquals(2, diff.size());\n assertThat(diff.getAsInt(\"foo.bar.baz\", null), equalTo(17));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"d\", \"e\", \"f\"});\n \n diff = settings.diff(\n Settings.builder().put(\"some.group.foo\", 5).build(),\n Settings.builder().put(\"some.group.foobar\", 17).put(\"some.group.foo\", 25).build());\n- assertEquals(6, diff.size()); // 6 since foo.bar.quux has 3 values essentially\n+ assertEquals(4, diff.size());\n assertThat(diff.getAsInt(\"some.group.foobar\", null), equalTo(17));\n assertNull(diff.get(\"some.group.foo\"));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"a\", \"b\", \"c\"});\n@@ -492,7 +492,7 @@ public void testDiff() throws IOException {\n diff = settings.diff(\n Settings.builder().put(\"some.prefix.foo.somekey\", 5).build(),\n Settings.builder().put(\"some.prefix.foobar.somekey\", 17).put(\"some.prefix.foo.somekey\", 18).build());\n- assertEquals(6, diff.size()); // 6 since foo.bar.quux has 3 values essentially\n+ assertEquals(4, diff.size());\n assertThat(diff.getAsInt(\"some.prefix.foobar.somekey\", null), equalTo(17));\n assertNull(diff.get(\"some.prefix.foo.somekey\"));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"a\", \"b\", \"c\"});\n@@ -518,7 +518,7 @@ public void testDiffWithAffixAndComplexMatcher() {\n diff = settings.diff(\n Settings.builder().put(\"foo.bar\", 5).build(),\n Settings.builder().put(\"foo.bar.baz\", 17).putArray(\"foo.bar.quux\", \"d\", \"e\", \"f\").build());\n- assertEquals(4, diff.size());\n+ assertEquals(2, diff.size());\n assertThat(diff.getAsInt(\"foo.bar.baz\", null), equalTo(17));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"d\", \"e\", \"f\"});\n \n@@ -548,7 +548,7 @@ public void testDiffWithAffixAndComplexMatcher() {\n .putArray(\"foo.bar.quux\", \"x\", \"y\", \"z\")\n .putArray(\"foo.baz.quux\", \"d\", \"e\", \"f\")\n .build());\n- assertEquals(9, diff.size());\n+ assertEquals(5, diff.size());\n assertThat(diff.getAsInt(\"some.prefix.foobar.somekey\", null), equalTo(17));\n assertNull(diff.get(\"some.prefix.foo.somekey\"));\n assertArrayEquals(diff.getAsArray(\"foo.bar.quux\", null), new String[] {\"x\", \"y\", \"z\"});",
"filename": "core/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -514,11 +514,11 @@ public void testListSettingAcceptsNumberSyntax() {\n List<String> input = Arrays.asList(\"test\", \"test1, test2\", \"test\", \",,,,\");\n Settings.Builder builder = Settings.builder().putArray(\"foo.bar\", input.toArray(new String[0]));\n // try to parse this really annoying format\n- for (String key : builder.internalMap().keySet()) {\n+ for (String key : builder.keys()) {\n assertTrue(\"key: \" + key + \" doesn't match\", listSetting.match(key));\n }\n builder = Settings.builder().put(\"foo.bar\", \"1,2,3\");\n- for (String key : builder.internalMap().keySet()) {\n+ for (String key : builder.keys()) {\n assertTrue(\"key: \" + key + \" doesn't match\", listSetting.match(key));\n }\n assertFalse(listSetting.match(\"foo_bar\"));",
"filename": "core/src/test/java/org/elasticsearch/common/settings/SettingTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.common.settings;\n \n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -28,6 +30,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.VersionUtils;\n import org.hamcrest.CoreMatchers;\n \n import java.io.ByteArrayInputStream;\n@@ -36,6 +39,7 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.Collections;\n+import java.util.HashMap;\n import java.util.Iterator;\n import java.util.Map;\n import java.util.NoSuchElementException;\n@@ -255,7 +259,7 @@ public void testThatArraysAreOverriddenCorrectly() throws IOException {\n .put(Settings.builder().put(\"value.data\", \"1\").build())\n .build();\n assertThat(settings.get(\"value.data\"), is(\"1\"));\n- assertThat(settings.get(\"value\"), is(nullValue()));\n+ assertThat(settings.get(\"value\"), is(\"[4, 5]\"));\n }\n \n public void testPrefixNormalization() {\n@@ -470,13 +474,18 @@ public void testWriteSettingsToStream() throws IOException {\n secureSettings.setString(\"test.key2.bog\", \"somethingsecure\");\n Settings.Builder builder = Settings.builder();\n builder.put(\"test.key1.baz\", \"blah1\");\n+ builder.putNull(\"test.key3.bar\");\n+ builder.putArray(\"test.key4.foo\", \"1\", \"2\");\n builder.setSecureSettings(secureSettings);\n- assertEquals(5, builder.build().size());\n+ assertEquals(7, builder.build().size());\n Settings.writeSettingsToStream(builder.build(), out);\n StreamInput in = StreamInput.wrap(out.bytes().toBytesRef().bytes);\n Settings settings = Settings.readSettingsFromStream(in);\n- assertEquals(1, settings.size());\n+ assertEquals(3, settings.size());\n assertEquals(\"blah1\", settings.get(\"test.key1.baz\"));\n+ assertNull(settings.get(\"test.key3.bar\"));\n+ assertTrue(settings.keySet().contains(\"test.key3.bar\"));\n+ assertArrayEquals(new String[] {\"1\", \"2\"}, settings.getAsArray(\"test.key4.foo\"));\n }\n \n public void testSecureSettingConflict() {\n@@ -487,14 +496,12 @@ public void testSecureSettingConflict() {\n }\n \n public void testGetAsArrayFailsOnDuplicates() {\n- final Settings settings =\n- Settings.builder()\n- .put(\"foobar.0\", \"bar\")\n- .put(\"foobar.1\", \"baz\")\n- .put(\"foobar\", \"foo\")\n- .build();\n- final IllegalStateException e = expectThrows(IllegalStateException.class, () -> settings.getAsArray(\"foobar\"));\n- assertThat(e, hasToString(containsString(\"settings object contains values for [foobar=foo] and [foobar.0=bar]\")));\n+ final IllegalStateException e = expectThrows(IllegalStateException.class, () -> Settings.builder()\n+ .put(\"foobar.0\", \"bar\")\n+ .put(\"foobar.1\", \"baz\")\n+ .put(\"foobar\", \"foo\")\n+ .build());\n+ assertThat(e, hasToString(containsString(\"settings builder can't contain values for [foobar=foo] and [foobar.0=bar]\")));\n }\n \n public void testToAndFromXContent() throws IOException {\n@@ -512,7 +519,7 @@ public void testToAndFromXContent() throws IOException {\n builder.endObject();\n XContentParser parser = createParser(builder);\n Settings build = Settings.fromXContent(parser);\n- assertEquals(7, build.size()); // each list element is it's own key hence 7 
and not 5\n+ assertEquals(5, build.size());\n assertArrayEquals(new String[] {\"1\", \"2\", \"3\"}, build.getAsArray(\"foo.bar.baz\"));\n assertEquals(2, build.getAsInt(\"foo.foobar\", 0).intValue());\n assertEquals(\"test\", build.get(\"rootfoo\"));\n@@ -531,8 +538,8 @@ public void testSimpleJsonSettings() throws Exception {\n assertThat(settings.getAsInt(\"test1.test2.value3\", -1), equalTo(2));\n \n // check array\n- assertThat(settings.get(\"test1.test3.0\"), equalTo(\"test3-1\"));\n- assertThat(settings.get(\"test1.test3.1\"), equalTo(\"test3-2\"));\n+ assertNull(settings.get(\"test1.test3.0\"));\n+ assertNull(settings.get(\"test1.test3.1\"));\n assertThat(settings.getAsArray(\"test1.test3\").length, equalTo(2));\n assertThat(settings.getAsArray(\"test1.test3\")[0], equalTo(\"test3-1\"));\n assertThat(settings.getAsArray(\"test1.test3\")[1], equalTo(\"test3-2\"));\n@@ -571,7 +578,7 @@ public void testToXContent() throws IOException {\n builder.startObject();\n test.toXContent(builder, new ToXContent.MapParams(Collections.emptyMap()));\n builder.endObject();\n- assertEquals(\"{\\\"foo\\\":{\\\"bar\\\":{\\\"0\\\":\\\"1\\\",\\\"1\\\":\\\"2\\\",\\\"2\\\":\\\"3\\\",\\\"baz\\\":\\\"test\\\"}}}\", builder.string());\n+ assertEquals(\"{\\\"foo\\\":{\\\"bar.baz\\\":\\\"test\\\",\\\"bar\\\":[\\\"1\\\",\\\"2\\\",\\\"3\\\"]}}\", builder.string());\n \n test = Settings.builder().putArray(\"foo.bar\", \"1\", \"2\", \"3\").build();\n builder = XContentBuilder.builder(XContentType.JSON.xContent());\n@@ -584,7 +591,7 @@ public void testToXContent() throws IOException {\n builder.startObject();\n test.toXContent(builder, new ToXContent.MapParams(Collections.singletonMap(\"flat_settings\", \"true\")));\n builder.endObject();\n- assertEquals(\"{\\\"foo.bar.0\\\":\\\"1\\\",\\\"foo.bar.1\\\":\\\"2\\\",\\\"foo.bar.2\\\":\\\"3\\\"}\", builder.string());\n+ assertEquals(\"{\\\"foo.bar\\\":[\\\"1\\\",\\\"2\\\",\\\"3\\\"]}\", builder.string());\n }\n \n public void testLoadEmptyStream() throws IOException {\n@@ -604,8 +611,8 @@ public void testSimpleYamlSettings() throws Exception {\n assertThat(settings.getAsInt(\"test1.test2.value3\", -1), equalTo(2));\n \n // check array\n- assertThat(settings.get(\"test1.test3.0\"), equalTo(\"test3-1\"));\n- assertThat(settings.get(\"test1.test3.1\"), equalTo(\"test3-2\"));\n+ assertNull(settings.get(\"test1.test3.0\"));\n+ assertNull(settings.get(\"test1.test3.1\"));\n assertThat(settings.getAsArray(\"test1.test3\").length, equalTo(2));\n assertThat(settings.getAsArray(\"test1.test3\")[0], equalTo(\"test3-1\"));\n assertThat(settings.getAsArray(\"test1.test3\")[1], equalTo(\"test3-2\"));\n@@ -638,4 +645,78 @@ public void testMissingValue() throws Exception {\n e.getMessage(),\n e.getMessage().contains(\"null-valued setting found for key [foo] found at line number [1], column number [5]\"));\n }\n+\n+ public void testReadLegacyFromStream() throws IOException {\n+ BytesStreamOutput output = new BytesStreamOutput();\n+ output.setVersion(VersionUtils.getPreviousVersion(Version.CURRENT));\n+ output.writeVInt(5);\n+ output.writeString(\"foo.bar.1\");\n+ output.writeOptionalString(\"1\");\n+ output.writeString(\"foo.bar.0\");\n+ output.writeOptionalString(\"0\");\n+ output.writeString(\"foo.bar.2\");\n+ output.writeOptionalString(\"2\");\n+ output.writeString(\"foo.bar.3\");\n+ output.writeOptionalString(\"3\");\n+ output.writeString(\"foo.bar.baz\");\n+ output.writeOptionalString(\"baz\");\n+ StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes()));\n+ 
in.setVersion(VersionUtils.getPreviousVersion(Version.CURRENT));\n+ Settings settings = Settings.readSettingsFromStream(in);\n+ assertEquals(2, settings.size());\n+ assertArrayEquals(new String[]{\"0\", \"1\", \"2\", \"3\"}, settings.getAsArray(\"foo.bar\"));\n+ assertEquals(\"baz\", settings.get(\"foo.bar.baz\"));\n+ }\n+\n+ public void testWriteLegacyOutput() throws IOException {\n+ BytesStreamOutput output = new BytesStreamOutput();\n+ output.setVersion(VersionUtils.getPreviousVersion(Version.CURRENT));\n+ Settings settings = Settings.builder().putArray(\"foo.bar\", \"0\", \"1\", \"2\", \"3\")\n+ .put(\"foo.bar.baz\", \"baz\").putNull(\"foo.null\").build();\n+ Settings.writeSettingsToStream(settings, output);\n+ StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes()));\n+ assertEquals(6, in.readVInt());\n+ Map<String, String> keyValues = new HashMap<>();\n+ for (int i = 0; i < 6; i++){\n+ keyValues.put(in.readString(), in.readOptionalString());\n+ }\n+ assertEquals(keyValues.get(\"foo.bar.0\"), \"0\");\n+ assertEquals(keyValues.get(\"foo.bar.1\"), \"1\");\n+ assertEquals(keyValues.get(\"foo.bar.2\"), \"2\");\n+ assertEquals(keyValues.get(\"foo.bar.3\"), \"3\");\n+ assertEquals(keyValues.get(\"foo.bar.baz\"), \"baz\");\n+ assertTrue(keyValues.containsKey(\"foo.null\"));\n+ assertNull(keyValues.get(\"foo.null\"));\n+\n+ in = StreamInput.wrap(BytesReference.toBytes(output.bytes()));\n+ in.setVersion(output.getVersion());\n+ Settings readSettings = Settings.readSettingsFromStream(in);\n+ assertEquals(3, readSettings.size());\n+ assertArrayEquals(new String[] {\"0\", \"1\", \"2\", \"3\"}, readSettings.getAsArray(\"foo.bar\"));\n+ assertEquals(readSettings.get(\"foo.bar.baz\"), \"baz\");\n+ assertTrue(readSettings.keySet().contains(\"foo.null\"));\n+ assertNull(readSettings.get(\"foo.null\"));\n+ }\n+\n+ public void testReadWriteArray() throws IOException {\n+ BytesStreamOutput output = new BytesStreamOutput();\n+ output.setVersion(Version.CURRENT);\n+ Settings settings = Settings.builder().putArray(\"foo.bar\", \"0\", \"1\", \"2\", \"3\").put(\"foo.bar.baz\", \"baz\").build();\n+ Settings.writeSettingsToStream(settings, output);\n+ StreamInput in = StreamInput.wrap(BytesReference.toBytes(output.bytes()));\n+ Settings build = Settings.readSettingsFromStream(in);\n+ assertEquals(2, build.size());\n+ assertArrayEquals(build.getAsArray(\"foo.bar\"), new String[] {\"0\", \"1\", \"2\", \"3\"});\n+ assertEquals(build.get(\"foo.bar.baz\"), \"baz\");\n+ }\n+\n+ public void testCopy() {\n+ Settings settings = Settings.builder().putArray(\"foo.bar\", \"0\", \"1\", \"2\", \"3\").put(\"foo.bar.baz\", \"baz\").putNull(\"test\").build();\n+ assertArrayEquals(new String[] {\"0\", \"1\", \"2\", \"3\"}, Settings.builder().copy(\"foo.bar\", settings).build().getAsArray(\"foo.bar\"));\n+ assertEquals(\"baz\", Settings.builder().copy(\"foo.bar.baz\", settings).build().get(\"foo.bar.baz\"));\n+ assertNull(Settings.builder().copy(\"foo.bar.baz\", settings).build().get(\"test\"));\n+ assertTrue(Settings.builder().copy(\"test\", settings).build().keySet().contains(\"test\"));\n+ IllegalArgumentException iae = expectThrows(IllegalArgumentException.class, () -> Settings.builder().copy(\"not_there\", settings));\n+ assertEquals(\"source key not found in the source settings\", iae.getMessage());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,50 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.analysis.common;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n+import java.util.Collection;\n+import java.util.Collections;\n+\n+public class MassiveWordListTests extends ESSingleNodeTestCase {\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> getPlugins() {\n+ return Collections.singleton(CommonAnalysisPlugin.class);\n+ }\n+\n+ public void testCreateIndexWithMassiveWordList() {\n+ String[] wordList = new String[100000];\n+ for (int i = 0; i < wordList.length; i++) {\n+ wordList[i] = \"hello world\";\n+ }\n+ client().admin().indices().prepareCreate(\"test\").setSettings(Settings.builder()\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"analysis.analyzer.test_analyzer.type\", \"custom\")\n+ .put(\"analysis.analyzer.test_analyzer.tokenizer\", \"standard\")\n+ .putArray(\"analysis.analyzer.test_analyzer.filter\", \"dictionary_decompounder\", \"lowercase\")\n+ .put(\"analysis.filter.dictionary_decompounder.type\", \"dictionary_decompounder\")\n+ .putArray(\"analysis.filter.dictionary_decompounder.word_list\", wordList)\n+ ).get();\n+ }\n+}",
"filename": "modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/MassiveWordListTests.java",
"status": "added"
},
{
"diff": "@@ -115,8 +115,8 @@ class HostType {\n * instances with a tag key set to stage, and a value of dev. Several tags set will require all of those tags to be set for the\n * instance to be included.\n */\n- Setting.AffixSetting<String> TAG_SETTING = Setting.prefixKeySetting(\"discovery.ec2.tag.\",\n- key -> Setting.simpleString(key, Property.NodeScope));\n+ Setting.AffixSetting<List<String>> TAG_SETTING = Setting.prefixKeySetting(\"discovery.ec2.tag.\",\n+ key -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.NodeScope));\n \n AmazonEC2 client();\n }",
"filename": "plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2Service.java",
"status": "modified"
},
{
"diff": "@@ -65,7 +65,7 @@ class AwsEc2UnicastHostsProvider extends AbstractComponent implements UnicastHos\n \n private final Set<String> groups;\n \n- private final Map<String, String> tags;\n+ private final Map<String, List<String>> tags;\n \n private final Set<String> availabilityZones;\n \n@@ -206,7 +206,7 @@ private DescribeInstancesRequest buildDescribeInstancesRequest() {\n new Filter(\"instance-state-name\").withValues(\"running\", \"pending\")\n );\n \n- for (Map.Entry<String, String> tagFilter : tags.entrySet()) {\n+ for (Map.Entry<String, List<String>> tagFilter : tags.entrySet()) {\n // for a given tag key, OR relationship for multiple different values\n describeInstancesRequest.withFilters(\n new Filter(\"tag:\" + tagFilter.getKey()).withValues(tagFilter.getValue())",
"filename": "plugins/discovery-ec2/src/main/java/org/elasticsearch/discovery/ec2/AwsEc2UnicastHostsProvider.java",
"status": "modified"
},
{
"diff": "@@ -416,7 +416,7 @@ public void randomIndexTemplate() throws IOException {\n randomSettingsBuilder.put(\"index.codec\", CodecService.LUCENE_DEFAULT_CODEC);\n }\n \n- for (String setting : randomSettingsBuilder.internalMap().keySet()) {\n+ for (String setting : randomSettingsBuilder.keys()) {\n assertThat(\"non index. prefix setting set on index template, its a node setting...\", setting, startsWith(\"index.\"));\n }\n // always default delayed allocation to 0 to make sure we have tests are not delayed",
"filename": "test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java",
"status": "modified"
}
]
} |
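> Editorial note on the row above: the diffs replace the flattened `key.0`, `key.1`, ... representation of list settings with real list values, converting any legacy numbered keys in `processLegacyLists` and rejecting maps that contain both `prefix` and `prefix.0`. The sketch below is not the PR code; it is a self-contained illustration of that conversion idea on a plain map, and every class and method name in it is invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: converts legacy "key.0", "key.1", ... entries into a single
// List value stored under "key", similar in spirit to the builder conversion above.
public final class LegacyListSettingsExample {

    static void convertLegacyLists(Map<String, Object> map) {
        // Copy the key set first, because we mutate the map while iterating.
        String[] keys = map.keySet().toArray(new String[0]);
        for (String key : keys) {
            if (key.endsWith(".0") == false) {
                continue; // only start a conversion at the head of a numbered list
            }
            String prefix = key.substring(0, key.length() - 2);
            if (map.containsKey(prefix)) {
                throw new IllegalStateException(
                    "both [" + prefix + "] and [" + key + "] are present");
            }
            List<String> values = new ArrayList<>();
            int counter = 0;
            while (true) {
                Object value = map.remove(prefix + "." + counter++);
                if (value == null) {
                    break; // no more consecutive numbered entries
                }
                values.add(value.toString());
            }
            map.put(prefix, values);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> settings = new TreeMap<>();
        settings.put("foo.bar.0", "a");
        settings.put("foo.bar.1", "b");
        settings.put("foo.bar.baz", "test");
        convertLegacyLists(settings);
        System.out.println(settings); // {foo.bar=[a, b], foo.bar.baz=test}
    }
}
```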
{
"body": "Performing these operations:\r\n\r\n```json\r\nPOST /test/doc/1\r\n{\"body\": \"foo\"}\r\n\r\n# make sure you have path.repo=/tmp\r\nPUT /_snapshot/my_backup\r\n{\r\n \"type\": \"fs\",\r\n \"settings\": {\r\n \"chunk_size\": null,\r\n \"location\": \"/tmp/backups\"\r\n }\r\n}\r\n\r\nPUT /_snapshot/my_backup/snapshot-1?wait_for_completion=true\r\n\r\n# the snapshot will show as successful\r\nGET /_snapshot/my_backup/_all\r\n\r\nPOST /_snapshot/my_backup/snapshot-1/_restore\r\n```\r\n\r\nOn doing the restore, the index will be corrupt and fails with all kinds of:\r\n\r\n```\r\n[2017-09-29T15:32:51,804][WARN ][o.e.c.a.s.ShardStateAction] [zS7gqMd] [test][4] received shard failed for shard id [[test][4]], allocation id [1m3SAW23RiORYN3paf5zRg], primary term [0], message [failed recovery], failure [RecoveryFailedException[[test][4]: Recovery failed on {zS7gqMd}{zS7gqMdkQgenxYl5YYMuPg}{fbrBGXZVSoOiHjueD2o3_Q}{127.0.0.1}{127.0.0.1:9300}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [snapshot-1/bBueY70xTW-w2r3yGoQrmA]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: CorruptIndexException[verification failed (hardware problem?) : expected=j36fb4 actual=null footer=null writtenLength=0 expectedLength=197 (resource=name [segments_1], length [197], checksum [j36fb4], writtenBy [6.2.0]) (resource=VerifyingIndexOutput(segments_1))]; ]\r\norg.elasticsearch.indices.recovery.RecoveryFailedException: [test][4]: Recovery failed on {zS7gqMd}{zS7gqMdkQgenxYl5YYMuPg}{fbrBGXZVSoOiHjueD2o3_Q}{127.0.0.1}{127.0.0.1:9300}\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1710) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]\r\nCaused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:305) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:238) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1317) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1706) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\t... 
4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed\r\n\tat org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:413) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:240) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:263) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:238) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1317) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1706) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\t... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [snapshot-1/bBueY70xTW-w2r3yGoQrmA]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:844) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:408) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:240) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:263) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:238) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1317) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1706) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\t... 4 more\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1524) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:842) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:408) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:240) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:263) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:238) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1317) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1706) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\t... 4 more\r\nCaused by: org.apache.lucene.index.CorruptIndexException: verification failed (hardware problem?) 
: expected=j36fb4 actual=null footer=null writtenLength=0 expectedLength=197 (resource=name [segments_1], length [197], checksum [j36fb4], writtenBy [6.2.0]) (resource=VerifyingIndexOutput(segments_1))\r\n\tat org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.verify(Store.java:1124) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.store.Store.verify(Store.java:464) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1586) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1521) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:842) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:408) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:240) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:263) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:238) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1317) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\tat org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1706) ~[elasticsearch-6.0.0-alpha1.jar:6.0.0-alpha1]\r\n\t... 4 more\r\n```",
"comments": [],
"number": 26843,
"title": "Creating a repository with chunk_size: null causes snapshots to be corrupted"
} | {
"body": "Specifying a negative value or null as a chunk_size in FS repository can lead to corrupt snapshots.\r\n\r\nCloses #26843\r\n",
"number": 26844,
"review_comments": [
{
"body": "Since the validation logic of the chunk_size setting depends on the repository implemention, we should also check in `BlobStoreRepository` that when `repository.chunkSize()` is not null then the `getBytes()` value is greater than 0?",
"created_at": "2017-10-02T09:57:19Z"
},
{
"body": "@tlrx Good point. I've pushed the fix for this. Could you take another look when you have a chance?",
"created_at": "2017-10-05T02:33:12Z"
},
{
"body": "@imotov Thanks! Sorry to nitpick but shouldn't we throw an IAE or something instead of just ignoring the partSize?",
"created_at": "2017-10-05T07:30:17Z"
},
{
"body": "Could you take another look?",
"created_at": "2017-10-05T21:10:16Z"
}
],
"title": "Snapshot/Restore: better handle incorrect chunk_size settings in FS repo"
} | {
"commits": [
{
"message": "Snapshot/Restore: better handle incorrect chunk_size settings in FS repo\n\nSpecifying a negative value or null as a chunk_size in FS repository can lead to corrupt snapshots.\n\nCloses #26843"
},
{
"message": "Add additional safety check for blob size"
},
{
"message": "Add additional check for blob chunk size"
}
],
"files": [
{
"diff": "@@ -66,7 +66,7 @@ public FileInfo(String name, StoreFileMetaData metaData, ByteSizeValue partSize)\n this.metadata = metaData;\n \n long partBytes = Long.MAX_VALUE;\n- if (partSize != null) {\n+ if (partSize != null && partSize.getBytes() > 0) {\n partBytes = partSize.getBytes();\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java",
"status": "modified"
},
{
"diff": "@@ -241,6 +241,10 @@ protected BlobStoreRepository(RepositoryMetaData metadata, Settings globalSettin\n BlobStoreIndexShardSnapshot::fromXContent, namedXContentRegistry, isCompress());\n indexShardSnapshotsFormat = new ChecksumBlobStoreFormat<>(SNAPSHOT_INDEX_CODEC, SNAPSHOT_INDEX_NAME_FORMAT,\n BlobStoreIndexShardSnapshots::fromXContent, namedXContentRegistry, isCompress());\n+ ByteSizeValue chunkSize = chunkSize();\n+ if (chunkSize != null && chunkSize.getBytes() <= 0) {\n+ throw new IllegalArgumentException(\"the chunk size cannot be negative: [\" + chunkSize + \"]\");\n+ }\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java",
"status": "modified"
},
{
"diff": "@@ -54,10 +54,10 @@ public class FsRepository extends BlobStoreRepository {\n new Setting<>(\"location\", \"\", Function.identity(), Property.NodeScope);\n public static final Setting<String> REPOSITORIES_LOCATION_SETTING =\n new Setting<>(\"repositories.fs.location\", LOCATION_SETTING, Function.identity(), Property.NodeScope);\n- public static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING =\n- Setting.byteSizeSetting(\"chunk_size\", new ByteSizeValue(-1), Property.NodeScope);\n- public static final Setting<ByteSizeValue> REPOSITORIES_CHUNK_SIZE_SETTING =\n- Setting.byteSizeSetting(\"repositories.fs.chunk_size\", new ByteSizeValue(-1), Property.NodeScope);\n+ public static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING = Setting.byteSizeSetting(\"chunk_size\",\n+ new ByteSizeValue(Long.MAX_VALUE), new ByteSizeValue(5), new ByteSizeValue(Long.MAX_VALUE), Property.NodeScope);\n+ public static final Setting<ByteSizeValue> REPOSITORIES_CHUNK_SIZE_SETTING = Setting.byteSizeSetting(\"repositories.fs.chunk_size\",\n+ new ByteSizeValue(Long.MAX_VALUE), new ByteSizeValue(5), new ByteSizeValue(Long.MAX_VALUE), Property.NodeScope);\n public static final Setting<Boolean> COMPRESS_SETTING = Setting.boolSetting(\"compress\", false, Property.NodeScope);\n public static final Setting<Boolean> REPOSITORIES_COMPRESS_SETTING =\n Setting.boolSetting(\"repositories.fs.compress\", false, Property.NodeScope);\n@@ -95,10 +95,8 @@ public FsRepository(RepositoryMetaData metadata, Environment environment,\n blobStore = new FsBlobStore(settings, locationFile);\n if (CHUNK_SIZE_SETTING.exists(metadata.settings())) {\n this.chunkSize = CHUNK_SIZE_SETTING.get(metadata.settings());\n- } else if (REPOSITORIES_CHUNK_SIZE_SETTING.exists(settings)) {\n- this.chunkSize = REPOSITORIES_CHUNK_SIZE_SETTING.get(settings);\n } else {\n- this.chunkSize = null;\n+ this.chunkSize = REPOSITORIES_CHUNK_SIZE_SETTING.get(settings);\n }\n this.compress = COMPRESS_SETTING.exists(metadata.settings()) ? COMPRESS_SETTING.get(metadata.settings()) : REPOSITORIES_COMPRESS_SETTING.get(settings);\n this.basePath = BlobPath.cleanPath();",
"filename": "core/src/main/java/org/elasticsearch/repositories/fs/FsRepository.java",
"status": "modified"
},
{
"diff": "@@ -104,6 +104,7 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.IndexSettings.INDEX_REFRESH_INTERVAL_SETTING;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAliasesExist;\n@@ -135,15 +136,29 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {\n MockRepository.Plugin.class);\n }\n \n+ private Settings randomRepoSettings() {\n+ Settings.Builder repoSettings = Settings.builder();\n+ repoSettings.put(\"location\", randomRepoPath());\n+ if (randomBoolean()) {\n+ repoSettings.put(\"compress\", randomBoolean());\n+ }\n+ if (randomBoolean()) {\n+ repoSettings.put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES);\n+ } else {\n+ if (randomBoolean()) {\n+ repoSettings.put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES);\n+ } else {\n+ repoSettings.put(\"chunk_size\", (String) null);\n+ }\n+ }\n+ return repoSettings.build();\n+ }\n+\n public void testBasicWorkFlow() throws Exception {\n Client client = client();\n \n logger.info(\"--> creating repository\");\n- assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"fs\").setSettings(Settings.builder()\n- .put(\"location\", randomRepoPath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\").setType(\"fs\").setSettings(randomRepoSettings()));\n \n createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n ensureGreen();\n@@ -308,11 +323,7 @@ public void testFreshIndexUUID() {\n Client client = client();\n \n logger.info(\"--> creating repository\");\n- assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"fs\").setSettings(Settings.builder()\n- .put(\"location\", randomRepoPath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\").setType(\"fs\").setSettings(randomRepoSettings()));\n \n createIndex(\"test\");\n String originalIndexUUID = client().admin().indices().prepareGetSettings(\"test\").get().getSetting(\"test\", IndexMetaData.SETTING_INDEX_UUID);\n@@ -356,11 +367,7 @@ public void testRestoreWithDifferentMappingsAndSettings() throws Exception {\n Client client = client();\n \n logger.info(\"--> creating repository\");\n- assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"fs\").setSettings(Settings.builder()\n- .put(\"location\", randomRepoPath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\").setType(\"fs\").setSettings(randomRepoSettings()));\n \n logger.info(\"--> create index with foo type\");\n assertAcked(prepareCreate(\"test-idx\", 2, Settings.builder()",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
}
]
} |
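> Editorial note on the row above: the fix bounds the FS repository `chunk_size` setting (5 bytes to `Long.MAX_VALUE`) and makes `BlobStoreRepository` reject a non-positive chunk size outright, instead of `FileInfo` silently treating it as a usable part size and corrupting the snapshot. The sketch below is not the Elasticsearch implementation; it is a minimal, hypothetical illustration of the same fail-fast idea, with all names made up for the example.

```java
// Illustrative sketch only; not the Elasticsearch implementation.
final class ChunkSizeExample {

    /** Returns a usable chunk size in bytes, treating null as "no chunking" (Long.MAX_VALUE). */
    static long validateChunkSizeBytes(Long configuredBytes) {
        if (configuredBytes == null) {
            return Long.MAX_VALUE; // unlimited: write each file as a single blob
        }
        if (configuredBytes <= 0) {
            // Fail fast at repository registration time rather than producing bad parts later.
            throw new IllegalArgumentException(
                "chunk_size must be positive but was [" + configuredBytes + "]");
        }
        return configuredBytes;
    }

    public static void main(String[] args) {
        System.out.println(validateChunkSizeBytes(null));         // 9223372036854775807
        System.out.println(validateChunkSizeBytes(104_857_600L)); // 104857600
        try {
            validateChunkSizeBytes(-1L);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // chunk_size must be positive but was [-1]
        }
    }
}
```

Validating once, up front, means an invalid value never reaches the code that splits files into blob parts.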
{
"body": "We are not following the Azure documentation about uploading blobs to Azure storage. https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage#upload-a-blob-into-a-container\r\n\r\nInstead we are using our own implementation which might cause some troubles and rarely some blobs can be not immediately commited just after we close the stream. Using the standard implementation provided by Azure team should allow us to benefit from all the magic Azure SDK team already wrote.\r\n\r\nAnd well... Let's just read the doc!",
"comments": [
{
"body": "@imotov Thanks for the review. I did some manual testings this morning and it does not work.\r\n\r\nApparently the file `master.dat-temp` is not written in the azure container... \r\nGetting an exception saying that the container does not exist although I can see it in the azure Web interface... \r\n\r\nI'm digging... Probably something stupid on my end. :) \r\n\r\n",
"created_at": "2017-09-25T09:52:14Z"
},
{
"body": "@imotov I worked on IT so we can now pass them when needed (still a manual operation).\r\nI tried to simplify and remove non needed things.\r\n\r\nI tested everything manually:\r\n\r\n* Install elasticsearch 7.0.0-alpha1-SNAPSHOT\r\n* Install repository-azure plugin\r\n* Run the following test:\r\n\r\n```sh\r\n# Clean test env\r\ncurl -XDELETE localhost:9200/foo?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1/snap1?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1?pretty\r\n\r\n# Create data\r\ncurl -XPUT localhost:9200/foo/doc/1?pretty -H 'Content-Type: application/json' -d '{\r\n \"foo\": \"bar\"\r\n}'\r\ncurl -XPOST localhost:9200/foo/_refresh?pretty\r\n\r\n# Create repository using default account\r\ncurl -XPUT localhost:9200/_snapshot/my_backup1?pretty -H 'Content-Type: application/json' -d '{\r\n \"type\": \"azure\"\r\n}'\r\n\r\n# Backup\r\ncurl -XPOST \"localhost:9200/_snapshot/my_backup1/snap1?pretty&wait_for_completion=true\"\r\n\r\n# Delete existing index\r\ncurl -XDELETE localhost:9200/foo?pretty\r\n\r\n# Restore using default account\r\ncurl -XPOST \"localhost:9200/_snapshot/my_backup1/snap1/_restore?pretty&wait_for_completion=true\"\r\n\r\n# Check\r\ncurl -XGET localhost:9200/foo/_search?pretty\r\n\r\n# Remove backup\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1/snap1?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1?pretty\r\n```\r\n\r\nEverything is correct. I'm going to test with a bigger dataset now and check everything works.\r\nCould you give a final review on the code please as I changed some code recently?\r\n\r\nThanks!",
"created_at": "2017-09-26T09:36:18Z"
},
{
"body": "I tested with much more data (300mb) and everything is working well.\r\nLMK! :) ",
"created_at": "2017-09-26T09:53:06Z"
},
{
"body": "@dadoonet would it make sense to base this tests on [`ESBlobStoreRepositoryIntegTestCase`](https://github.com/elastic/elasticsearch/blob/master/test/framework/src/main/java/org/elasticsearch/repositories/blobstore/ESBlobStoreRepositoryIntegTestCase.java)? I think this base class has most of the tests that we want to run a repo to ensure that it behaves reasonably. If you find it lacking something, I think it would make sense to extend it so all other repos would benefit ",
"created_at": "2017-09-26T19:18:16Z"
},
{
"body": "@imotov Great! I did not remember about that class. Yeah. Definitely better using it as well.\r\n\r\nI pushed new changes.",
"created_at": "2017-09-26T20:21:42Z"
},
{
"body": "I backported it on 6.x yet.\r\n\r\nI'm planning to backport on 6.0 but it's a bit harder as some PR have not been merged to 6.0 like #23518 and #23405.\r\n",
"created_at": "2017-09-28T11:52:40Z"
},
{
"body": "Backported to 6.0 as well with 9aa5595d199d41f7681d6814616dd73d52a61b66\r\n",
"created_at": "2017-09-29T13:59:02Z"
},
{
"body": "Backported to 5.6 with https://github.com/elastic/elasticsearch/pull/26839/commits/28f17a72f617bde54ee6e1071e1491e03740d967 (see #26839)",
"created_at": "2017-10-03T13:30:03Z"
}
],
"number": 26751,
"title": "Use Azure upload method instead of our own implementation"
} | {
"body": "* Use Azure upload method instead of our own implementation\r\n\r\nWe are not following the Azure documentation about uploading blobs to Azure storage. https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage#upload-a-blob-into-a-container\r\n\r\nInstead we are using our own implementation which might cause some troubles and rarely some blobs can be not immediately commited just after we close the stream. Using the standard implementation provided by Azure team should allow us to benefit from all the magic Azure SDK team already wrote.\r\n\r\nAnd well... Let's just read the doc!\r\n\r\n* Adapt integration tests\r\n* Simplify all the integration tests and extends ESBlobStoreRepositoryIntegTestCase tests\r\n\r\n * removes IT `testForbiddenContainerName()` as it is useless. The plugin does not create anymore the container but expects that the user has created it before registering the repository\r\n * merges 2 IT classes so all IT tests are ran from one single class\r\n * We don't remove/create anymore the container between each single test but only for the test suite\r\n\r\nBackport of #26751 in 5.6 branch\r\n",
"number": 26839,
"review_comments": [],
"title": "Use Azure upload method instead of our own implementation (#26751)"
} | {
"commits": [
{
"message": "Use Azure upload method instead of our own implementation (#26751)\n\n* Use Azure upload method instead of our own implementation\n\nWe are not following the Azure documentation about uploading blobs to Azure storage. https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage#upload-a-blob-into-a-container\n\nInstead we are using our own implementation which might cause some troubles and rarely some blobs can be not immediately commited just after we close the stream. Using the standard implementation provided by Azure team should allow us to benefit from all the magic Azure SDK team already wrote.\n\nAnd well... Let's just read the doc!\n\n* Adapt integration tests\n* Simplify all the integration tests and extends ESBlobStoreRepositoryIntegTestCase tests\n\n * removes IT `testForbiddenContainerName()` as it is useless. The plugin does not create anymore the container but expects that the user has created it before registering the repository\n * merges 2 IT classes so all IT tests are ran from one single class\n * We don't remove/create anymore the container between each single test but only for the test suite\n\nBackport of #26751 in 5.6 branch"
}
],
"files": [
{
"diff": "@@ -26,13 +26,10 @@\n import org.elasticsearch.common.blobstore.BlobMetaData;\n import org.elasticsearch.common.blobstore.BlobPath;\n import org.elasticsearch.common.blobstore.support.AbstractBlobContainer;\n-import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.logging.Loggers;\n-import org.elasticsearch.repositories.RepositoryException;\n \n import java.io.IOException;\n import java.io.InputStream;\n-import java.io.OutputStream;\n import java.net.HttpURLConnection;\n import java.net.URISyntaxException;\n import java.nio.file.FileAlreadyExistsException;\n@@ -99,24 +96,11 @@ public void writeBlob(String blobName, InputStream inputStream, long blobSize) t\n if (blobExists(blobName)) {\n throw new FileAlreadyExistsException(\"blob [\" + blobName + \"] already exists, cannot overwrite\");\n }\n- logger.trace(\"writeBlob({}, stream, {})\", blobName, blobSize);\n- try (OutputStream stream = createOutput(blobName)) {\n- Streams.copy(inputStream, stream);\n- }\n- }\n-\n- private OutputStream createOutput(String blobName) throws IOException {\n+ logger.trace(\"writeBlob({}, stream, {})\", buildKey(blobName), blobSize);\n try {\n- return new AzureOutputStream(blobStore.getOutputStream(blobStore.container(), buildKey(blobName)));\n- } catch (StorageException e) {\n- if (e.getHttpStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {\n- throw new NoSuchFileException(e.getMessage());\n- }\n- throw new IOException(e);\n- } catch (URISyntaxException e) {\n- throw new IOException(e);\n- } catch (IllegalArgumentException e) {\n- throw new RepositoryException(repositoryName, e.getMessage());\n+ blobStore.writeBlob(buildKey(blobName), inputStream, blobSize);\n+ } catch (URISyntaxException|StorageException e) {\n+ throw new IOException(\"Can not write blob \" + blobName, e);\n }\n }\n ",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobContainer.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,6 @@\n import com.microsoft.azure.storage.LocationMode;\n import com.microsoft.azure.storage.StorageException;\n import org.elasticsearch.cloud.azure.storage.AzureStorageService;\n-import org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage;\n import org.elasticsearch.cluster.metadata.RepositoryMetaData;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.blobstore.BlobContainer;\n@@ -34,11 +33,11 @@\n \n import java.io.IOException;\n import java.io.InputStream;\n-import java.io.OutputStream;\n import java.net.URISyntaxException;\n import java.util.Locale;\n import java.util.Map;\n \n+import static org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage;\n import static org.elasticsearch.cloud.azure.storage.AzureStorageSettings.getValue;\n import static org.elasticsearch.repositories.azure.AzureRepository.Repository;\n \n@@ -137,11 +136,6 @@ public InputStream getInputStream(String container, String blob) throws URISynta\n return this.client.getInputStream(this.accountName, this.locMode, container, blob);\n }\n \n- public OutputStream getOutputStream(String container, String blob) throws URISyntaxException, StorageException\n- {\n- return this.client.getOutputStream(this.accountName, this.locMode, container, blob);\n- }\n-\n public Map<String,BlobMetaData> listBlobsByPrefix(String container, String keyPath, String prefix) throws URISyntaxException, StorageException\n {\n return this.client.listBlobsByPrefix(this.accountName, this.locMode, container, keyPath, prefix);\n@@ -151,4 +145,9 @@ public void moveBlob(String container, String sourceBlob, String targetBlob) thr\n {\n this.client.moveBlob(this.accountName, this.locMode, container, sourceBlob, targetBlob);\n }\n+\n+ public void writeBlob(String blobName, InputStream inputStream, long blobSize)\n+ throws URISyntaxException, StorageException, IOException {\n+ this.client.writeBlob(this.accountName, this.locMode, container, blobName, inputStream, blobSize);\n+ }\n }",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/blobstore/AzureBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n \n import java.io.IOException;\n import java.io.InputStream;\n-import java.io.OutputStream;\n import java.net.URISyntaxException;\n import java.util.Map;\n import java.util.function.Function;\n@@ -90,12 +89,12 @@ final class Storage {\n InputStream getInputStream(String account, LocationMode mode, String container, String blob)\n throws URISyntaxException, StorageException, IOException;\n \n- OutputStream getOutputStream(String account, LocationMode mode, String container, String blob)\n- throws URISyntaxException, StorageException;\n-\n Map<String,BlobMetaData> listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix)\n throws URISyntaxException, StorageException;\n \n void moveBlob(String account, LocationMode mode, String container, String sourceBlob, String targetBlob)\n throws URISyntaxException, StorageException;\n+\n+ void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize) throws\n+ URISyntaxException, StorageException, IOException;\n }",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java",
"status": "modified"
},
{
"diff": "@@ -40,8 +40,8 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.repositories.RepositoryException;\n \n+import java.io.IOException;\n import java.io.InputStream;\n-import java.io.OutputStream;\n import java.net.URI;\n import java.net.URISyntaxException;\n import java.util.HashMap;\n@@ -257,13 +257,6 @@ public InputStream getInputStream(String account, LocationMode mode, String cont\n return client.getContainerReference(container).getBlockBlobReference(blob).openInputStream();\n }\n \n- @Override\n- public OutputStream getOutputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException {\n- logger.trace(\"writing container [{}], blob [{}]\", container, blob);\n- CloudBlobClient client = this.getSelectedClient(account, mode);\n- return client.getContainerReference(container).getBlockBlobReference(blob).openOutputStream();\n- }\n-\n @Override\n public Map<String, BlobMetaData> listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) throws URISyntaxException, StorageException {\n // NOTE: this should be here: if (prefix == null) prefix = \"\";\n@@ -314,4 +307,15 @@ public void moveBlob(String account, LocationMode mode, String container, String\n logger.debug(\"moveBlob container [{}], sourceBlob [{}], targetBlob [{}] -> done\", container, sourceBlob, targetBlob);\n }\n }\n+\n+ @Override\n+ public void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize)\n+ throws URISyntaxException, StorageException, IOException {\n+ logger.trace(\"writeBlob({}, stream, {})\", blobName, blobSize);\n+ CloudBlobClient client = this.getSelectedClient(account, mode);\n+ CloudBlobContainer blobContainer = client.getContainerReference(container);\n+ CloudBlockBlob blob = blobContainer.getBlockBlobReference(blobName);\n+ blob.upload(inputStream, blobSize);\n+ logger.trace(\"writeBlob({}, stream, {}) - done\", blobName, blobSize);\n+ }\n }",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java",
"status": "modified"
},
{
"diff": "@@ -20,36 +20,26 @@\n package org.elasticsearch.cloud.azure;\n \n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.settings.SettingsException;\n-\n-import java.io.IOException;\n \n public class AzureTestUtils {\n /**\n- * Read settings from file when running integration tests with ThirdParty annotation.\n- * elasticsearch.yml file path has to be set with -Dtests.config=/path/to/elasticsearch.yml.\n- * @return Settings from elasticsearch.yml integration test file (for 3rd party tests)\n+ * Mock settings from sysprops when running integration tests with ThirdParty annotation.\n+ * Start the tests with {@code -Dtests.azure.account=AzureStorageAccount and -Dtests.azure.key=AzureStorageKey}\n+ * @return Mock Settings from sysprops\n */\n- public static Settings readSettingsFromFile() {\n+ public static Settings generateMockSecureSettings() {\n Settings.Builder settings = Settings.builder();\n \n- // if explicit, just load it and don't load from env\n- try {\n- if (Strings.hasText(System.getProperty(\"tests.config\"))) {\n- try {\n- settings.loadFromPath(PathUtils.get((System.getProperty(\"tests.config\"))));\n- } catch (IOException e) {\n- throw new IllegalArgumentException(\"could not load azure tests config\", e);\n- }\n- } else {\n- throw new IllegalStateException(\"to run integration tests, you need to set -Dtests.thirdparty=true and \" +\n- \"-Dtests.config=/path/to/elasticsearch.yml\");\n- }\n- } catch (SettingsException exception) {\n- throw new IllegalStateException(\"your test configuration file is incorrect: \" + System.getProperty(\"tests.config\"), exception);\n+ if (Strings.isEmpty(System.getProperty(\"tests.azure.account\")) ||\n+ Strings.isEmpty(System.getProperty(\"tests.azure.key\"))) {\n+ throw new IllegalStateException(\"to run integration tests, you need to set -Dtests.thirdparty=true and \" +\n+ \"-Dtests.azure.account=azure-account -Dtests.azure.key=azure-key\");\n }\n+\n+ settings.put(\"cloud.azure.storage.default.account\", System.getProperty(\"tests.azure.account\"));\n+ settings.put(\"cloud.azure.storage.default.key\", System.getProperty(\"tests.azure.key\"));\n+\n return settings.build();\n }\n }",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/AzureTestUtils.java",
"status": "modified"
},
{
"diff": "@@ -25,13 +25,13 @@\n import org.elasticsearch.common.blobstore.support.PlainBlobMetaData;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.settings.Settings;\n \n import java.io.ByteArrayInputStream;\n import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n import java.io.InputStream;\n-import java.io.OutputStream;\n import java.net.URISyntaxException;\n import java.nio.file.NoSuchFileException;\n import java.util.Locale;\n@@ -84,13 +84,6 @@ public InputStream getInputStream(String account, LocationMode mode, String cont\n return new ByteArrayInputStream(blobs.get(blob).toByteArray());\n }\n \n- @Override\n- public OutputStream getOutputStream(String account, LocationMode mode, String container, String blob) throws URISyntaxException, StorageException {\n- ByteArrayOutputStream outputStream = new ByteArrayOutputStream();\n- blobs.put(blob, outputStream);\n- return outputStream;\n- }\n-\n @Override\n public Map<String, BlobMetaData> listBlobsByPrefix(String account, LocationMode mode, String container, String keyPath, String prefix) {\n MapBuilder<String, BlobMetaData> blobsBuilder = MapBuilder.newMapBuilder();\n@@ -120,6 +113,17 @@ public void moveBlob(String account, LocationMode mode, String container, String\n }\n }\n \n+ @Override\n+ public void writeBlob(String account, LocationMode mode, String container, String blobName, InputStream inputStream, long blobSize)\n+ throws URISyntaxException, StorageException {\n+ try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {\n+ blobs.put(blobName, outputStream);\n+ Streams.copy(inputStream, outputStream);\n+ } catch (IOException e) {\n+ throw new StorageException(\"MOCK\", \"Error while writing mock stream\", e);\n+ }\n+ }\n+\n /**\n * Test if the given String starts with the specified prefix,\n * ignoring upper/lower case.",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceMock.java",
"status": "modified"
},
{
"diff": "@@ -27,23 +27,24 @@\n import org.elasticsearch.common.blobstore.BlobStore;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.repositories.ESBlobStoreTestCase;\n-import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.ESIntegTestCase.ThirdParty;\n \n import java.io.IOException;\n import java.net.URISyntaxException;\n \n-import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile;\n+import static org.elasticsearch.cloud.azure.AzureTestUtils.generateMockSecureSettings;\n \n /**\n- * You must specify {@code -Dtests.thirdparty=true -Dtests.config=/path/to/elasticsearch.yml}\n- * in order to run these tests.\n+ * Those integration tests need an Azure access and must be run with\n+ * {@code -Dtests.thirdparty=true -Dtests.azure.account=AzureStorageAccount -Dtests.azure.key=AzureStorageKey}\n+ * options\n */\n-@ESIntegTestCase.ThirdParty\n+@ThirdParty\n public class AzureBlobStoreTests extends ESBlobStoreTestCase {\n @Override\n protected BlobStore newBlobStore() throws IOException {\n try {\n- Settings settings = readSettingsFromFile();\n+ Settings settings = generateMockSecureSettings();\n RepositoryMetaData metadata = new RepositoryMetaData(\"ittest\", \"azure\", Settings.EMPTY);\n AzureStorageService storageService = new AzureStorageServiceImpl(settings);\n AzureBlobStore blobStore = new AzureBlobStore(metadata, settings, storageService);",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureBlobStoreTests.java",
"status": "modified"
},
{
"diff": "@@ -28,212 +28,138 @@\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.ClusterAdminClient;\n-import org.elasticsearch.cloud.azure.AbstractAzureWithThirdPartyIntegTestCase;\n import org.elasticsearch.cloud.azure.storage.AzureStorageService;\n import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.repositories.RepositoryMissingException;\n import org.elasticsearch.repositories.RepositoryVerificationException;\n import org.elasticsearch.repositories.azure.AzureRepository.Repository;\n+import org.elasticsearch.repositories.blobstore.ESBlobStoreRepositoryIntegTestCase;\n import org.elasticsearch.snapshots.SnapshotMissingException;\n+import org.elasticsearch.snapshots.SnapshotRestoreException;\n import org.elasticsearch.snapshots.SnapshotState;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n+import org.elasticsearch.test.ESIntegTestCase.ThirdParty;\n import org.elasticsearch.test.store.MockFSDirectoryService;\n+import org.elasticsearch.test.store.MockFSIndexStore;\n import org.junit.After;\n-import org.junit.Before;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n \n import java.net.URISyntaxException;\n+import java.util.Arrays;\n+import java.util.Collection;\n import java.util.Locale;\n import java.util.concurrent.TimeUnit;\n \n-import static org.elasticsearch.cloud.azure.AzureTestUtils.readSettingsFromFile;\n+import static org.elasticsearch.cloud.azure.AzureTestUtils.generateMockSecureSettings;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n \n /**\n- * This test needs Azure to run and -Dtests.thirdparty=true to be set\n- * and -Dtests.config=/path/to/elasticsearch.yml\n- * @see AbstractAzureWithThirdPartyIntegTestCase\n+ * Those integration tests need an Azure access and must be run with\n+ * {@code -Dtests.thirdparty=true -Dtests.azure.account=AzureStorageAccount -Dtests.azure.key=AzureStorageKey}\n+ * options\n */\n @ClusterScope(\n scope = ESIntegTestCase.Scope.SUITE,\n supportsDedicatedMasters = false, numDataNodes = 1,\n transportClientRatio = 0.0)\n-public class AzureSnapshotRestoreTests extends AbstractAzureWithThirdPartyIntegTestCase {\n- private String getRepositoryPath() {\n- String testName = \"it-\" + getTestName();\n- return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n- }\n+@ThirdParty\n+public class AzureSnapshotRestoreTests extends ESBlobStoreRepositoryIntegTestCase {\n \n- public static String getContainerName() {\n- String testName = \"snapshot-itest-\".concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\n- return testName.contains(\" \") ? 
Strings.split(testName, \" \")[0] : testName;\n+ private static AzureStorageService getAzureStorageService() {\n+ return new AzureStorageServiceImpl(generateMockSecureSettings());\n }\n \n @Override\n- public Settings indexSettings() {\n- // During restore we frequently restore index to exactly the same state it was before, that might cause the same\n- // checksum file to be written twice during restore operation\n- return Settings.builder().put(super.indexSettings())\n- .put(MockFSDirectoryService.RANDOM_PREVENT_DOUBLE_WRITE_SETTING.getKey(), false)\n- .put(MockFSDirectoryService.RANDOM_NO_DELETE_OPEN_FILE_SETTING.getKey(), false)\n- .build();\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return Settings.builder()\n+ .put(generateMockSecureSettings())\n+ .put(super.nodeSettings(nodeOrdinal))\n+ .build();\n }\n \n- @Before @After\n- public final void wipeAzureRepositories() throws StorageException, URISyntaxException {\n- wipeRepositories();\n- cleanRepositoryFiles(\n- getContainerName(),\n- getContainerName().concat(\"-1\"),\n- getContainerName().concat(\"-2\"));\n+ private static String getContainerName() {\n+ /* Have a different name per test so that there is no possible race condition. As the long can be negative,\n+ * there mustn't be a hyphen between the 2 concatenated numbers\n+ * (can't have 2 consecutives hyphens on Azure containers)\n+ */\n+ String testName = \"snapshot-itest-\"\n+ .concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\n+ return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n }\n \n- public void testSimpleWorkflow() {\n- Client client = client();\n- logger.info(\"--> creating azure repository with path [{}]\", getRepositoryPath());\n- PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"azure\").setSettings(Settings.builder()\n- .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n- .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n- .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES)\n- ).get();\n- assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n-\n- createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n- ensureGreen();\n-\n- logger.info(\"--> indexing some data\");\n- for (int i = 0; i < 100; i++) {\n- index(\"test-idx-1\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n- index(\"test-idx-2\", \"doc\", Integer.toString(i), \"foo\", \"baz\" + i);\n- index(\"test-idx-3\", \"doc\", Integer.toString(i), \"foo\", \"baz\" + i);\n- }\n- refresh();\n- assertThat(client.prepareSearch(\"test-idx-1\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n- assertThat(client.prepareSearch(\"test-idx-2\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n- assertThat(client.prepareSearch(\"test-idx-3\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n-\n- logger.info(\"--> snapshot\");\n- CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n- .setWaitForCompletion(true).setIndices(\"test-idx-*\", \"-test-idx-3\").get();\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(),\n- equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n-\n- 
assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get().getSnapshots()\n- .get(0).state(), equalTo(SnapshotState.SUCCESS));\n-\n- logger.info(\"--> delete some data\");\n- for (int i = 0; i < 50; i++) {\n- client.prepareDelete(\"test-idx-1\", \"doc\", Integer.toString(i)).get();\n- }\n- for (int i = 50; i < 100; i++) {\n- client.prepareDelete(\"test-idx-2\", \"doc\", Integer.toString(i)).get();\n- }\n- for (int i = 0; i < 100; i += 2) {\n- client.prepareDelete(\"test-idx-3\", \"doc\", Integer.toString(i)).get();\n- }\n- refresh();\n- assertThat(client.prepareSearch(\"test-idx-1\").setSize(0).get().getHits().totalHits(), equalTo(50L));\n- assertThat(client.prepareSearch(\"test-idx-2\").setSize(0).get().getHits().totalHits(), equalTo(50L));\n- assertThat(client.prepareSearch(\"test-idx-3\").setSize(0).get().getHits().totalHits(), equalTo(50L));\n-\n- logger.info(\"--> close indices\");\n- client.admin().indices().prepareClose(\"test-idx-1\", \"test-idx-2\").get();\n-\n- logger.info(\"--> restore all indices from the snapshot\");\n- RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n- .setWaitForCompletion(true).get();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n-\n- ensureGreen();\n- assertThat(client.prepareSearch(\"test-idx-1\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n- assertThat(client.prepareSearch(\"test-idx-2\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n- assertThat(client.prepareSearch(\"test-idx-3\").setSize(0).get().getHits().totalHits(), equalTo(50L));\n+ @BeforeClass\n+ public static void createTestContainers() throws Exception {\n+ createTestContainer(getContainerName());\n+ // This is needed for testMultipleRepositories() test case\n+ createTestContainer(getContainerName() + \"-1\");\n+ createTestContainer(getContainerName() + \"-2\");\n+ }\n \n- // Test restore after index deletion\n- logger.info(\"--> delete indices\");\n- cluster().wipeIndices(\"test-idx-1\", \"test-idx-2\");\n- logger.info(\"--> restore one index after deletion\");\n- restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true)\n- .setIndices(\"test-idx-*\", \"-test-idx-2\").get();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n- ensureGreen();\n- assertThat(client.prepareSearch(\"test-idx-1\").setSize(0).get().getHits().totalHits(), equalTo(100L));\n- ClusterState clusterState = client.admin().cluster().prepareState().get().getState();\n- assertThat(clusterState.getMetaData().hasIndex(\"test-idx-1\"), equalTo(true));\n- assertThat(clusterState.getMetaData().hasIndex(\"test-idx-2\"), equalTo(false));\n+ @AfterClass\n+ public static void removeContainer() throws Exception {\n+ removeTestContainer(getContainerName());\n+ // This is needed for testMultipleRepositories() test case\n+ removeTestContainer(getContainerName() + \"-1\");\n+ removeTestContainer(getContainerName() + \"-2\");\n }\n \n /**\n- * For issue #51: https://github.com/elastic/elasticsearch-cloud-azure/issues/51\n+ * Create a test container in Azure\n+ * @param containerName container name to use\n */\n- public void testMultipleSnapshots() throws URISyntaxException, StorageException {\n- final String indexName = \"test-idx-1\";\n- final String typeName = \"doc\";\n- final String repositoryName = \"test-repo\";\n- final String 
snapshot1Name = \"test-snap-1\";\n- final String snapshot2Name = \"test-snap-2\";\n-\n- Client client = client();\n-\n- logger.info(\"creating index [{}]\", indexName);\n- createIndex(indexName);\n- ensureGreen();\n-\n- logger.info(\"indexing first document\");\n- index(indexName, typeName, Integer.toString(1), \"foo\", \"bar \" + Integer.toString(1));\n- refresh();\n- assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().totalHits(), equalTo(1L));\n-\n- logger.info(\"creating Azure repository with path [{}]\", getRepositoryPath());\n- PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(repositoryName)\n- .setType(\"azure\").setSettings(Settings.builder()\n- .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n- .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n- .put(Repository.BASE_PATH_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES)\n- ).get();\n- assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n-\n- logger.info(\"creating snapshot [{}]\", snapshot1Name);\n- CreateSnapshotResponse createSnapshotResponse1 = client.admin().cluster().prepareCreateSnapshot(repositoryName, snapshot1Name)\n- .setWaitForCompletion(true).setIndices(indexName).get();\n- assertThat(createSnapshotResponse1.getSnapshotInfo().successfulShards(), greaterThan(0));\n- assertThat(createSnapshotResponse1.getSnapshotInfo().successfulShards(),\n- equalTo(createSnapshotResponse1.getSnapshotInfo().totalShards()));\n-\n- assertThat(client.admin().cluster().prepareGetSnapshots(repositoryName).setSnapshots(snapshot1Name).get().getSnapshots()\n- .get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ private static void createTestContainer(String containerName) throws Exception {\n+ // It could happen that we run this test really close to a previous one\n+ // so we might need some time to be able to create the container\n+ assertBusy(() -> {\n+ getAzureStorageService().createContainer(\"default\", LocationMode.PRIMARY_ONLY, containerName);\n+ }, 30, TimeUnit.SECONDS);\n+ }\n \n- logger.info(\"indexing second document\");\n- index(indexName, typeName, Integer.toString(2), \"foo\", \"bar \" + Integer.toString(2));\n- refresh();\n- assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().totalHits(), equalTo(2L));\n+ /**\n+ * Remove a test container in Azure\n+ * @param containerName container name to use\n+ */\n+ private static void removeTestContainer(String containerName) throws URISyntaxException, StorageException {\n+ getAzureStorageService().removeContainer(\"default\", LocationMode.PRIMARY_ONLY, containerName);\n+ }\n \n- logger.info(\"creating snapshot [{}]\", snapshot2Name);\n- CreateSnapshotResponse createSnapshotResponse2 = client.admin().cluster().prepareCreateSnapshot(repositoryName, snapshot2Name)\n- .setWaitForCompletion(true).setIndices(indexName).get();\n- assertThat(createSnapshotResponse2.getSnapshotInfo().successfulShards(), greaterThan(0));\n- assertThat(createSnapshotResponse2.getSnapshotInfo().successfulShards(),\n- equalTo(createSnapshotResponse2.getSnapshotInfo().totalShards()));\n+ @Override\n+ protected Collection<Class<? 
extends Plugin>> nodePlugins() {\n+ return Arrays.asList(AzureRepositoryPlugin.class, MockFSIndexStore.TestPlugin.class);\n+ }\n \n- assertThat(client.admin().cluster().prepareGetSnapshots(repositoryName).setSnapshots(snapshot2Name).get().getSnapshots()\n- .get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ private String getRepositoryPath() {\n+ String testName = \"it-\" + getTestName();\n+ return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n+ }\n \n- logger.info(\"closing index [{}]\", indexName);\n- client.admin().indices().prepareClose(indexName).get();\n+ @Override\n+ public Settings indexSettings() {\n+ // During restore we frequently restore index to exactly the same state it was before, that might cause the same\n+ // checksum file to be written twice during restore operation\n+ return Settings.builder().put(super.indexSettings())\n+ .put(MockFSDirectoryService.RANDOM_PREVENT_DOUBLE_WRITE_SETTING.getKey(), false)\n+ .put(MockFSDirectoryService.RANDOM_NO_DELETE_OPEN_FILE_SETTING.getKey(), false)\n+ .build();\n+ }\n \n- logger.info(\"attempting restore from snapshot [{}]\", snapshot1Name);\n- RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(repositoryName, snapshot1Name)\n- .setWaitForCompletion(true).get();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n- ensureGreen();\n- assertThat(client.prepareSearch(indexName).setSize(0).get().getHits().totalHits(), equalTo(1L));\n+ @After\n+ public final void wipeAzureRepositories() {\n+ try {\n+ client().admin().cluster().prepareDeleteRepository(\"*\").get();\n+ } catch (RepositoryMissingException ignored) {\n+ }\n }\n \n public void testMultipleRepositories() {\n@@ -365,8 +291,6 @@ public void testListBlobs_26() throws StorageException, URISyntaxException {\n \n // Get all snapshots - should have one\n assertThat(client.prepareGetSnapshots(\"test-repo\").get().getSnapshots().size(), equalTo(1));\n-\n-\n }\n \n /**\n@@ -396,56 +320,6 @@ public void testGetDeleteNonExistingSnapshot_28() throws StorageException, URISy\n }\n }\n \n- /**\n- * For issue #21: https://github.com/elastic/elasticsearch-cloud-azure/issues/21\n- */\n- public void testForbiddenContainerName() throws Exception {\n- checkContainerName(\"\", false);\n- checkContainerName(\"es\", false);\n- checkContainerName(\"-elasticsearch\", false);\n- checkContainerName(\"elasticsearch--integration\", false);\n- checkContainerName(\"elasticsearch_integration\", false);\n- checkContainerName(\"ElAsTicsearch_integration\", false);\n- checkContainerName(\"123456789-123456789-123456789-123456789-123456789-123456789-1234\", false);\n- checkContainerName(\"123456789-123456789-123456789-123456789-123456789-123456789-123\", true);\n- checkContainerName(\"elasticsearch-integration\", true);\n- checkContainerName(\"elasticsearch-integration-007\", true);\n- }\n-\n- /**\n- * Create repository with wrong or correct container name\n- * @param container Container name we want to create\n- * @param correct Is this container name correct\n- */\n- private void checkContainerName(final String container, final boolean correct) throws Exception {\n- logger.info(\"--> creating azure repository with container name [{}]\", container);\n- // It could happen that we just removed from a previous test the same container so\n- // we can not create it yet.\n- assertBusy(() -> {\n- try {\n- PutRepositoryResponse putRepositoryResponse = 
client().admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"azure\").setSettings(Settings.builder()\n- .put(Repository.CONTAINER_SETTING.getKey(), container)\n- .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n- .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES)\n- ).get();\n- client().admin().cluster().prepareDeleteRepository(\"test-repo\").get();\n- try {\n- logger.info(\"--> remove container [{}]\", container);\n- cleanRepositoryFiles(container);\n- } catch (StorageException | URISyntaxException e) {\n- // We can ignore that as we just try to clean after the test\n- }\n- assertTrue(putRepositoryResponse.isAcknowledged() == correct);\n- } catch (RepositoryVerificationException e) {\n- if (correct) {\n- logger.debug(\" -> container is being removed. Let's wait a bit...\");\n- fail();\n- }\n- }\n- }, 5, TimeUnit.MINUTES);\n- }\n-\n /**\n * Test case for issue #23: https://github.com/elastic/elasticsearch-cloud-azure/issues/23\n */\n@@ -464,7 +338,7 @@ public void testNonExistingRepo_23() {\n try {\n client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"no-existing-snapshot\").setWaitForCompletion(true).get();\n fail(\"Shouldn't be here\");\n- } catch (SnapshotMissingException ex) {\n+ } catch (SnapshotRestoreException ex) {\n // Expected\n }\n }\n@@ -474,24 +348,9 @@ public void testNonExistingRepo_23() {\n */\n public void testRemoveAndCreateContainer() throws Exception {\n final String container = getContainerName().concat(\"-testremove\");\n- final AzureStorageService storageService = new AzureStorageServiceImpl(internalCluster().getDefaultSettings());\n \n- // It could happen that we run this test really close to a previous one\n- // so we might need some time to be able to create the container\n- assertBusy(() -> {\n- try {\n- storageService.createContainer(null, LocationMode.PRIMARY_ONLY, container);\n- logger.debug(\" -> container created...\");\n- } catch (URISyntaxException e) {\n- // Incorrect URL. This should never happen.\n- fail();\n- } catch (StorageException e) {\n- // It could happen. Let's wait for a while.\n- logger.debug(\" -> container is being removed. Let's wait a bit...\");\n- fail();\n- }\n- }, 30, TimeUnit.SECONDS);\n- storageService.removeContainer(null, LocationMode.PRIMARY_ONLY, container);\n+ createTestContainer(container);\n+ removeTestContainer(container);\n \n ClusterAdminClient client = client().admin().cluster();\n logger.info(\"--> creating azure repository while container is being removed\");\n@@ -507,30 +366,52 @@ public void testRemoveAndCreateContainer() throws Exception {\n }\n \n /**\n- * Deletes repositories, supports wildcard notation.\n+ * Test that you can snapshot on the primary repository and list the available snapshots\n+ * from the secondary repository.\n+ *\n+ * Note that this test requires an Azure storage account which must be a Read-access geo-redundant\n+ * storage (RA-GRS) account type.\n+ * @throws Exception If anything goes wrong\n */\n- public static void wipeRepositories(String... 
repositories) {\n- // if nothing is provided, delete all\n- if (repositories.length == 0) {\n- repositories = new String[]{\"*\"};\n- }\n- for (String repository : repositories) {\n- try {\n- client().admin().cluster().prepareDeleteRepository(repository).get();\n- } catch (RepositoryMissingException ex) {\n- // ignore\n- }\n- }\n+ public void testGeoRedundantStorage() throws Exception {\n+ Client client = client();\n+ logger.info(\"--> creating azure primary repository\");\n+ PutRepositoryResponse putRepositoryResponsePrimary = client.admin().cluster().preparePutRepository(\"primary\")\n+ .setType(\"azure\").setSettings(Settings.builder()\n+ .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n+ ).get();\n+ assertThat(putRepositoryResponsePrimary.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> start get snapshots on primary\");\n+ long startWait = System.currentTimeMillis();\n+ client.admin().cluster().prepareGetSnapshots(\"primary\").get();\n+ long endWait = System.currentTimeMillis();\n+ // definitely should be done in 30s, and if its not working as expected, it takes over 1m\n+ assertThat(endWait - startWait, lessThanOrEqualTo(30000L));\n+\n+ logger.info(\"--> creating azure secondary repository\");\n+ PutRepositoryResponse putRepositoryResponseSecondary = client.admin().cluster().preparePutRepository(\"secondary\")\n+ .setType(\"azure\").setSettings(Settings.builder()\n+ .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n+ .put(Repository.LOCATION_MODE_SETTING.getKey(), \"secondary_only\")\n+ ).get();\n+ assertThat(putRepositoryResponseSecondary.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> start get snapshots on secondary\");\n+ startWait = System.currentTimeMillis();\n+ client.admin().cluster().prepareGetSnapshots(\"secondary\").get();\n+ endWait = System.currentTimeMillis();\n+ logger.info(\"--> end of get snapshots on secondary. Took {} ms\", endWait - startWait);\n+ assertThat(endWait - startWait, lessThanOrEqualTo(30000L));\n }\n \n- /**\n- * Purge the test containers\n- */\n- public void cleanRepositoryFiles(String... containers) throws StorageException, URISyntaxException {\n- Settings settings = readSettingsFromFile();\n- AzureStorageService client = new AzureStorageServiceImpl(settings);\n- for (String container : containers) {\n- client.removeContainer(null, LocationMode.PRIMARY_ONLY, container);\n- }\n+ @Override\n+ protected void createTestRepository(String name) {\n+ assertAcked(client().admin().cluster().preparePutRepository(name)\n+ .setType(AzureRepository.TYPE)\n+ .setSettings(Settings.builder()\n+ .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n+ .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n+ .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n }\n }",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
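The Azure repository diffs above converge on one shape: `writeBlob` takes an `InputStream` plus a known size instead of handing callers an `OutputStream`, and the test mock simply buffers the incoming stream in memory. Below is a minimal, self-contained sketch of that mock-style write; `InMemoryBlobStore` and its members are hypothetical stand-ins, not the plugin's actual classes.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory blob store mirroring the mock writeBlob pattern above:
// the caller hands over an InputStream and the store buffers it under the blob name.
final class InMemoryBlobStore {
    private final Map<String, byte[]> blobs = new HashMap<>();

    // blobSize mirrors the real signature; this sketch simply reads the stream to EOF.
    void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = inputStream.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        blobs.put(blobName, out.toByteArray());
    }

    byte[] readBlob(String blobName) {
        return blobs.get(blobName);
    }

    public static void main(String[] args) throws IOException {
        InMemoryBlobStore store = new InMemoryBlobStore();
        store.writeBlob("blob-1", new ByteArrayInputStream("hello".getBytes()), 5);
        System.out.println(store.readBlob("blob-1").length); // prints 5
    }
}
```

In the production path shown in the diff, the container layer wraps `URISyntaxException` and `StorageException` into an `IOException`, so callers of `writeBlob` only have to handle IO failures.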
{
"body": "In the master branch, if I do a query with an expression script like:\r\n\r\n```json\r\n{\r\n \"query\": {\r\n \"function_score\": {\r\n \"query\": {\r\n \"constant_score\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"script\": {\r\n \"script\": {\r\n \"lang\": \"expression\",\r\n \"inline\": \"birth_date >= doc[\\\"birth_date\\\"].value\",\r\n \"params\": {\r\n \"birth_date\": 14\r\n }}}}]}}}}}}}\r\n```\r\n\r\nI get the following error:\r\n\r\n```\r\nCaused by: java.lang.IllegalArgumentException: painless does not know how to handle context [filter] \r\n at org.elasticsearch.script.expression.ExpressionScriptEngine.compile(ExpressionScriptEngine.java:111) ~[?:?] at org.elasticsearch.script.ScriptService.compile(ScriptService.java:296) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] at org.elasticsearch.index.query.ScriptQueryBuilder.doToQuery(ScriptQueryBuilder.java:130) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] \r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] \r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:405) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:379) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toFilter(AbstractQueryBuilder.java:119) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.ConstantScoreQueryBuilder.doToQuery(ConstantScoreQueryBuilder.java:136) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.doToQuery(FunctionScoreQueryBuilder.java:307) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.lambda$toQuery$2(QueryShardContext.java:304) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:316) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:303) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:669) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n```\r\n\r\nThe error also says \"painless\" instead of \"expressions\" from https://github.com/elastic/elasticsearch/blob/c0753235222dea250295f0caa2a2f7c332b056e7/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java#L111\r\n",
"comments": [
{
"body": "@rjernst - It appears this bug might be causing Kibana build failures on master. Any ETA on a fix?",
"created_at": "2017-09-05T13:15:31Z"
},
{
"body": "I'm going to mark this as a blocker for 6.1 since we disabled a sizable chunk of integration tests in Kibana in order to get builds passing. I *assume* this is a regression that needs to be fixed for 6.1 anyway, but I'm really just blocking on resolution one way or another.",
"created_at": "2017-09-06T17:02:52Z"
},
{
"body": "@epixa Looking back at this again, I'm actually not sure filters make sense for expressions. Expressions only know how to read numeric values, and return numeric values. There was previously hacky \"treat 0 as false and anything else as true\" code in filter scripts, but that was removed with my refactoring to create a filter script context.\r\n\r\nWhy can't Kibana use painless for filters? The same example script @dakrone gives in the issue description would work fine in painless.",
"created_at": "2017-09-07T02:53:53Z"
},
{
"body": "@rjernst Seems like a reasonable question to me, especially since we *want* people to use painless instead of lucene expressions for this stuff since it's designed more for these specific use cases rather than relying on hacky type coercion.\r\n\r\nThat said, unless we can guarantee complete compatibility between the behaviors of expression-based filters and painless-based filters, this is going to be a breaking change for a lot of Kibana users, so I think we should preserve the existing behavior until 7.0.\r\n\r\nPeople can filter on Kibana scripted fields, which can use either expressions or painless scripts. At the very least, we'll need to make changes to Kibana to make it so only painless scripted fields can be filtered on, we'll need to start throwing deprecation notices for the existing expression filters, and we probably want to add a migration mechanism to the upgrade assistant for people to convert their existing scripted fields over.\r\n\r\nIt's worth mentioning though, that we've never had any person (to my knowledge) that encountered unexpected behaviors with expression-based scripted fields in kibana. Was the 0->false coercion problematic from a performance or maintenance standpoint? Given the impact of the change on existing users and the amount of development that'll go into providing a bridge for those users going into 7.0, is it more practical for us to simply preserve the 0->false coercion as the documented behavior of how expressions work in a filter context?",
"created_at": "2017-09-07T14:47:37Z"
},
{
"body": "Part of the reason for the context work we have been doing in scripting is performance. When you do a coercion a million times (assuming one million docs being evaluated), the total time can be non-negligible. This is part of the reason expressions are currently faster in simple cases than painless. Once we have painless performance on par with expressions, I don't think there is any reason to keep expressions around. They were an early experiment in Lucene into doing scripted scoring, and will likely stay there for a long time. But having 2 languages in elasticsearch, especially one with limited functionality, is both confusing for users (\"which one should I use?\") and a maintenance burden on developers. Expanding on the latter, expressions require manual work for every new context we add. It is not simply a matter of \"preserving coercion\". There are a few classes necessary to be created and handled for every context expressions supports.\r\n\r\nSo I think beginning the journey to remove uses of expressions is well worth the time investment. I can add in a hack for 6.1, but I would like to remove it for 7.0 (ie remove filter script support for expressions then).",
"created_at": "2017-09-07T16:22:21Z"
},
{
"body": "+1 to add a workaround for now and removing filter support for expressions in 7.0 (or even remove expressions entirely?)",
"created_at": "2017-09-08T13:31:18Z"
},
{
"body": "@Bargs What do you think?",
"created_at": "2017-09-08T13:38:10Z"
},
{
"body": "The [benchmarks](https://elasticsearch-benchmarks.elastic.co/index.html#tracks/geonames/nightly/30d) still show expressions as being faster than painless. I think it'd make sense for us to wait for painless to catch up to expressions before we talk about removing it entirely.",
"created_at": "2017-09-08T13:42:00Z"
},
{
"body": "Is it worth potential confusion due to users wondering \"which one should I use\"?",
"created_at": "2017-09-08T14:05:21Z"
},
{
"body": "We've had that confusion for a long time though. I think the issue may be moot - we'll likely work to closing that performance gap anyway.",
"created_at": "2017-09-08T14:11:16Z"
},
{
"body": "In Kibana I think we either need to support expressions everywhere or not at all. Having some scripted fields that work with filtering and some that don't will be incredibly confusing to kibana users who didn't set up the scripted fields in the first place.\r\n\r\nRemoving expression support entirely will be a pretty big breaking change. Kibana maintainers will have to rewrite all of their scripts. I'm not sure how we could migrate them automatically. That might be ok as long as our reasons are good enough, breaking changes happen. But I think we need to be absolutely sure removing expressions doesn't make anything impossible that's already possible today. If expressions still outperform painless in certain scenarios, are there use cases where expressions are viable but painless is not?\r\n\r\nAs to confusion over having two languages, I don't think it's a problem, for Kibana at least. In Kibana we default scripted fields to painless and make it clear that's the recommended choice. ",
"created_at": "2017-09-08T14:15:06Z"
},
{
"body": "> I think the issue may be moot\r\n\r\n@nik9000 Not sure what you mean by that. Given the pervasiveness of expressions in Kibana described here, I think it is a worthwhile discussion to have. We need to be thinking far ahead on how to migrate users off of expressions. It is good that painless is the default. And in most cases I think an expressions should \"just work\" as a painless script, so I'm not that worried about transitioning. \r\n\r\nMy concern over continuing to support expressions as filter scripts is the possibility for confusion by users. Because expression only return a double, we have to interpret that double, and cannot distinguish between \"this was a boolean value\" and \"this was a double value\". For example, if a user had an expression like `doc['myfield'].value`, that would previously \"work\" as an expression filter script. But what does that mean? Implementation wise it would return true for non zero, but a user might think it means \"if the field exists\".\r\n\r\n> In Kibana I think we either need to support expressions everywhere or not at all. \r\n\r\n@Bargs This is simply not possible. Expressions already don't work in some contexts. For example, update scripts, reindex scripts, or anything else that doesn't return a numeric value. The only reason they worked before for filter scripts is this very old hack that existed within filter scripts which converted 0.0 to false and everything else to true.\r\n\r\nAs I said before, I can add a hack back in just for expressions for filter scripts, but I don't want to do so unless there is agreement and a plan of action to eliminate this hack long term. Regardless of when expressions are deprecated and removed overall, I don't want expressions supporting filter scripts because of the ambiguities I have described here.",
"created_at": "2017-09-08T16:00:54Z"
},
{
"body": "> @nik9000 Not sure what you mean by that.\r\n\r\nI meant that we are likely to close the performance gap significantly during the 6.x release cycle so we might be able to remove expressions entirely in 7.0 so my point about waiting until Painless catches up might not matter because it will catch up.\r\n\r\n\r\nI agree with your concern about expressions in filters. I find the tricks that kibana plays with scripts to be a bit tricky and this sort of 0-as-false thing plays along. I'd like to avoid it if we can but you are right that the transition path is going to be fun.\r\n\r\n\r\n\r\n> Expressions already don't work in some contexts. For example, update scripts, reindex scripts, or anything else that doesn't return a numeric value\r\n\r\nKibana has a slightly different meaning for the phrase \"script context\" then we do so we can have communications issues around this. One simplistic answer to this is \"kibana doesn't care about those contexts\". That isn't strictly true and is oversimplifying it gives you a sense as to why expressions work in all of kibanas script contexts.",
"created_at": "2017-09-08T16:58:10Z"
},
{
"body": "> One simplistic answer to this is \"kibana doesn't care about those contexts\".\r\n\r\nYes, thank you @nik9000, this is what I meant. I should have been more specific and said: \"In Kibana I think we either need to fully support expressions in \"*kibana* scripted fields\" or not at all. \"Kibana scripted fields\" aren't used for updating, reindexing, etc.\r\n\r\nSo just to clarify my thoughts: if we remove the ability to filter with expressions in Elasticsearch I think we should also remove expression support from \"Kibana scripted fields\" entirely. I'm ok with that if we're sure we're not leaving any users up the creek without a paddle. ",
"created_at": "2017-09-08T22:02:07Z"
},
{
"body": "@Bargs and I talked about this a bit, and he's going to proceed with deprecating expressions in Kibana scripted fields in 6.1 and removing them entirely from master.\r\n\r\n@rjernst Can you add a hack for this in 6.x so the existing behavior starts working again? Kibana is currently pinned to a month old commit of Elasticsearch in CI, so I'd like to undo that asap.",
"created_at": "2017-09-28T20:27:34Z"
},
{
"body": "Sure, this comment from you is enough of an agreement. I'll work on a PR soon. :)",
"created_at": "2017-09-28T21:46:42Z"
},
{
"body": "Awesome, thanks",
"created_at": "2017-09-28T21:55:14Z"
}
],
"number": 26429,
"title": "Expressions scripts in filter contexts throw exception"
} | {
"body": "This commit adds a hack converting 0.0 to false and non-zero to true for\r\nexpressions operating under a filter context.\r\n\r\ncloses #26429",
"number": 26824,
"review_comments": [],
"title": "Scripting: Fix expressions to temporarily support filter scripts"
} | {
"commits": [
{
"message": "Scripting: Fix expressions to temporarily support filter scripts\n\nThis commit adds a hack converting 0.0 to false and non-zero to true for\nexpressions operating under a filter context.\n\ncloses #26429"
},
{
"message": "Fix comment"
}
],
"files": [
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.script.ClassPermission;\n import org.elasticsearch.script.ExecutableScript;\n+import org.elasticsearch.script.FilterScript;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptEngine;\n import org.elasticsearch.script.ScriptException;\n@@ -107,6 +108,9 @@ protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundE\n } else if (context.instanceClazz.equals(ExecutableScript.class)) {\n ExecutableScript.Factory factory = (p) -> new ExpressionExecutableScript(expr, p);\n return context.factoryClazz.cast(factory);\n+ } else if (context.instanceClazz.equals(FilterScript.class)) {\n+ FilterScript.Factory factory = (p, lookup) -> newFilterScript(expr, lookup, p);\n+ return context.factoryClazz.cast(factory);\n }\n throw new IllegalArgumentException(\"expression engine does not know how to handle script context [\" + context.name + \"]\");\n }\n@@ -236,6 +240,27 @@ private SearchScript.LeafFactory newSearchScript(Expression expr, SearchLookup l\n return new ExpressionSearchScript(expr, bindings, specialValue, needsScores);\n }\n \n+ /**\n+ * This is a hack for filter scripts, which must return booleans instead of doubles as expression do.\n+ * See https://github.com/elastic/elasticsearch/issues/26429.\n+ */\n+ private FilterScript.LeafFactory newFilterScript(Expression expr, SearchLookup lookup, @Nullable Map<String, Object> vars) {\n+ SearchScript.LeafFactory searchLeafFactory = newSearchScript(expr, lookup, vars);\n+ return ctx -> {\n+ SearchScript script = searchLeafFactory.newInstance(ctx);\n+ return new FilterScript(vars, lookup, ctx) {\n+ @Override\n+ public boolean execute() {\n+ return script.runAsDouble() != 0.0;\n+ }\n+ @Override\n+ public void setDocument(int docid) {\n+ script.setDocument(docid);\n+ }\n+ };\n+ };\n+ }\n+\n /**\n * converts a ParseException at compile-time or link-time to a ScriptException\n */",
"filename": "modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java",
"status": "modified"
},
{
"diff": "@@ -700,4 +700,19 @@ public void testBoolean() throws Exception {\n assertEquals(2.0D, rsp.getHits().getAt(1).field(\"foo\").getValue(), 1.0D);\n assertEquals(2.0D, rsp.getHits().getAt(2).field(\"foo\").getValue(), 1.0D);\n }\n+\n+ public void testFilterScript() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen(\"test\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test\", \"doc\", \"1\").setSource(\"foo\", 1.0),\n+ client().prepareIndex(\"test\", \"doc\", \"2\").setSource(\"foo\", 0.0));\n+ SearchRequestBuilder builder = buildRequest(\"doc['foo'].value\");\n+ Script script = new Script(ScriptType.INLINE, \"expression\", \"doc['foo'].value\", Collections.emptyMap());\n+ builder.setQuery(QueryBuilders.boolQuery().filter(QueryBuilders.scriptQuery(script)));\n+ SearchResponse rsp = builder.get();\n+ assertSearchResponse(rsp);\n+ assertEquals(1, rsp.getHits().getTotalHits());\n+ assertEquals(1.0D, rsp.getHits().getAt(0).field(\"foo\").getValue(), 0.0D);\n+ }\n }",
"filename": "modules/lang-expression/src/test/java/org/elasticsearch/script/expression/MoreExpressionTests.java",
"status": "modified"
}
]
} |
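The fix recorded above cannot change what a Lucene expression produces (always a double), so it wraps the expression for the filter context and coerces the result: `0.0` becomes `false`, anything else becomes `true`. Here is a tiny standalone sketch of that coercion, detached from the Elasticsearch `FilterScript`/`SearchScript` machinery; the class and method names are hypothetical.

```java
import java.util.function.DoubleSupplier;

// Hypothetical illustration of the coercion applied in the PR: an expression can only
// produce a double, so the filter wrapper maps 0.0 -> false and everything else -> true.
final class ExpressionAsFilter {

    static boolean matches(DoubleSupplier expression) {
        return expression.getAsDouble() != 0.0;
    }

    public static void main(String[] args) {
        System.out.println(matches(() -> 1.0)); // true  -> document kept by the filter
        System.out.println(matches(() -> 0.0)); // false -> document dropped
    }
}
```

This is exactly the ambiguity raised in the discussion: the wrapper cannot tell a "boolean" expression from a numeric one, so `doc['myfield'].value` used as a filter means "the value is non-zero", not "the field exists".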
{
"body": "Hi, I am using elasticsarch `5.6.1`, I am trying to query the documents which `tag` is not empty by\r\n\r\n```\r\n{\"query\": {\"query_string\": {\"query\": \"tag:*\" }}}\r\n```\r\n\r\nit return all documents including those which tag is empty.\r\n\r\nhowerver, before I migrated from `2.4` to `5.6.1`, this query worked fine on `2.4`.\r\nIt only return documents which tag is not empty.\r\n\r\nIs there any default setting changed caused this query failed?\r\n\r\nIn the same time I tried below query with `?` before asterisk on current `5.6.1`\r\n\r\n```\r\n{\"query\": {\"query_string\": {\"query\": \"tag:?*\" }}}\r\n````\r\n\r\nit works as I expected, although I prefer not to modify all source code to this syntax.",
"comments": [
{
"body": "Am I correct that your `tag` field is mapped as `text`? I think this is because we now rewrite pure wildcards to `exists` queries for efficiency. But it might indeed perform differently on `text` fields.\r\n\r\n@jimczi What do you think?",
"created_at": "2017-09-28T10:36:04Z"
},
{
"body": "@jpountz , yes, the `tag` field is `text`",
"created_at": "2017-09-28T11:39:47Z"
},
{
"body": "> it return all documents including those which tag is empty.\r\n\r\nYes this is a change in behavior compared to 2.x. Note that the field is empty and not null, the following document would match `tag:*`:\r\n````\r\n{ \"tag\": \"\" }\r\n````\r\nand this one would **not**:\r\n````\r\n{ \"tag\": null }\r\n````\r\n\r\nThis is true for `keyword` and `text`field.\r\n\r\n> it works as I expected, although I prefer not to modify all source code to this syntax.\r\n\r\nCan you reindex your data with null values instead of empty ?\r\n`?*` works as expected but is also horribly costly if you have a lot of distinct terms.\r\nI'll add a note in the documentation, thanks for reporting @changsijay !\r\n\r\n\r\n",
"created_at": "2017-09-28T11:55:46Z"
}
],
"number": 26801,
"title": "query_string failed to search non empty string with single asterisk"
} | {
"body": "In 5.x pure wildcard queries `*` in `query_string` are rewritten to `exists` query for efficiency.\r\nThough this introduced a change in the documents that match such queries because\r\n`exists` query also return documents with an empty value for the field.\r\nThis commit clarifies this behavior for 5.x and beyond.\r\n\r\nCloses #26801\r\n",
"number": 26814,
"review_comments": [
{
"body": "This might be easier to understand if the order of the sentence was changed a bit: \"As a consequence, the wildcard field:* would match documents with an emtpy [...] like the following: ..ex1.. but the query would **not** match the following: ..ex2..",
"created_at": "2017-09-28T13:46:00Z"
}
],
"title": "Clarify pure wilcard matching with `query_string`"
} | {
"commits": [
{
"message": "Clarify pure wilcard matching with `query_string`\n\nIn 5.x pure wildcard queries `*` in `query_string` are rewritten to `exists` query for efficiency.\nThough this introduced a change in the document that match such queries because\n`exists` query also return documents with an empty value for the field.\nThis change clarifies this behavior for 5.x and beyond.\n\nCloses #26801"
},
{
"message": "review"
}
],
"files": [
{
"diff": "@@ -53,6 +53,25 @@ Be aware that wildcard queries can use an enormous amount of memory and\n perform very badly -- just think how many terms need to be queried to\n match the query string `\"a* b* c*\"`.\n \n+[WARNING]\n+=======\n+Pure wildcards `\\*` are rewritten to <<query-dsl-exists-query,`exists`>> queries for efficiency.\n+As a consequence, the wildcard `\"field:*\"` would match documents with an emtpy value\n+ like the following:\n+```\n+{\n+ \"field\": \"\"\n+}\n+```\n+\\... and would **not** match if the field is missing or set with an explicit null\n+value like the following:\n+```\n+{\n+ \"field\": null\n+}\n+```\n+=======\n+\n [WARNING]\n =======\n Allowing a wildcard at the beginning of a word (eg `\"*ing\"`) is particularly",
"filename": "docs/reference/query-dsl/query-string-syntax.asciidoc",
"status": "modified"
}
]
} |
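To make the behavior change above concrete: a pure wildcard such as `tag:*` is rewritten to an `exists` query, so a document indexed with an empty string still matches, while `tag:?*` requires at least one character in the value (at a much higher cost over many distinct terms). The sketch below models just those matching semantics in plain Java; it is an illustration of the rule described in the documentation diff, not Elasticsearch code.

```java
// Hypothetical model of the semantics discussed above: a pure wildcard behaves like an
// "exists" check (an empty string still counts as a value), while "?*" additionally
// requires at least one character in the field value.
final class WildcardSemantics {

    static boolean matchesPureWildcard(String fieldValue) {
        return fieldValue != null;                           // "" matches, null/missing does not
    }

    static boolean matchesQuestionMarkWildcard(String fieldValue) {
        return fieldValue != null && !fieldValue.isEmpty();  // "" no longer matches
    }

    public static void main(String[] args) {
        System.out.println(matchesPureWildcard(""));             // true
        System.out.println(matchesPureWildcard(null));           // false
        System.out.println(matchesQuestionMarkWildcard(""));     // false
        System.out.println(matchesQuestionMarkWildcard("web"));  // true
    }
}
```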
{
"body": "I encountered the following error:\r\n```\r\n{\r\n \"error\":{\r\n \"root_cause\":[\r\n {\r\n \"type\":\"query_phase_execution_exception\",\r\n \"reason\":\"Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.\"\r\n }\r\n ],\r\n \"type\":\"search_phase_execution_exception\",\r\n \"reason\":\"all shards failed\",\r\n \"phase\":\"query\",\r\n \"grouped\":true,\r\n \"failed_shards\":[...]\r\n },\r\n \"status\":500\r\n}\r\n```\r\nMy question is why the status is 500? It's more of client's fault, isn't it? Shouldn't it be 4XX?",
"comments": [
{
"body": "I would agree on this, thanks for pointing it out. I think the status code is an unwanted side effect of the exception used for this error message, and the exception that it inherits from.",
"created_at": "2017-09-27T11:32:46Z"
},
{
"body": "@javanna Can we just response a bad request status for `QueryPhaseExecutionException` ? thanks.",
"created_at": "2017-09-27T12:03:36Z"
},
{
"body": "@liketic yes that would be a better thing to do.",
"created_at": "2017-09-27T12:06:38Z"
},
{
"body": "Not sure though if 4xx should be associated with `QueryPhaseExecutionException` which can also be a server side problem. The solution may also be to use a different exception for these cases where we validate user input. ",
"created_at": "2017-09-27T12:07:40Z"
},
{
"body": "Some possible options:\r\n1) Bind 4xx status to `QueryPhaseExecutionException ` \r\n2) Replace `QueryPhaseExecutionException` in those cases with `IllegalArgumentException`\r\n3) Create a new exception extends `QueryPhaseExecutionException`\r\n4) find a another existing exception to replace `QueryPhaseExecutionException` in those places and response 4xx.\r\n\r\nDo you prefer which one? Thanks!\r\n ",
"created_at": "2017-09-27T12:43:00Z"
},
{
"body": "I would not associate 4xx to all `QueryPhaseExecutionException` either. IMHO the best is option 3, but 2 also works.",
"created_at": "2017-09-27T14:17:38Z"
}
],
"number": 26799,
"title": "Why the response status of the error \"Result window is too large\" is 500?"
} | {
"body": "Resolves #26799 . I'm trying to raise a `IllegalArgumentException` if the validation is failed. But I think it maybe not the best resolution, happy to hear your comments.",
"number": 26811,
"review_comments": [
{
"body": "I would also make the same change for \"Cannot use [sort] option in conjunction with [rescore].\" .",
"created_at": "2017-10-05T10:11:04Z"
},
{
"body": "we are not testing here the response code. Better to replace with `catch: bad_request`. That way we don't check the message, but we make sure that 400 is returned.",
"created_at": "2017-10-05T11:50:51Z"
},
{
"body": "this one will start a node, which makes it more of an integration test, would it be possible to make it extend `ESTestCase`?",
"created_at": "2017-10-06T16:33:04Z"
},
{
"body": "Updated. Please review again.",
"created_at": "2017-10-07T06:17:56Z"
},
{
"body": "I think we can skip the java exception names here given that this guide is read by all our users, also the ones that are not familiar with Java. Let's focus on response codes only. Also I would make the title shorter:\r\n\r\n```\r\n=== `_search/scroll` returns `400` for invalid requests\r\n\r\nThe `/_search/scroll` endpoint returns `400 - Bad request` when the request invalid, while it would previously return `500 - Internal Server Error` in such case.\r\n\r\n```\r\n\r\n",
"created_at": "2017-10-23T14:09:51Z"
},
{
"body": "could we have a unit test for this change too?",
"created_at": "2017-10-23T14:16:37Z"
},
{
"body": "thanks a lot for taking the time to write this!",
"created_at": "2017-10-23T14:17:19Z"
},
{
"body": "do you think it would be possible to have a unit test for this too?",
"created_at": "2017-10-23T14:18:20Z"
},
{
"body": "It's not easy to add unit test for this. The `SearchServiceTests` is also extend `ESSingleNodeTestCase` for other test cases. And this has been covered by the integration test. I can try to add a unit test if you think it's necessary. ",
"created_at": "2017-10-24T12:43:55Z"
},
{
"body": "Yes.",
"created_at": "2017-10-24T12:44:19Z"
},
{
"body": "can you use expectThrows here instead?",
"created_at": "2017-10-24T14:44:49Z"
},
{
"body": "thanks a lot for adding this test!",
"created_at": "2017-10-24T14:46:10Z"
},
{
"body": "my bad. Thanks.",
"created_at": "2017-10-24T15:06:35Z"
}
],
"title": "Raise IllegalArgumentException instead if query validation failed"
} | {
"commits": [
{
"message": "Raise IllegalArgumentException instead if query validation failed"
},
{
"message": "Skip testing sliced scroll with invalid arguments for versions before 6.99.99"
},
{
"message": "Update migration document"
},
{
"message": "Add unit test"
},
{
"message": "Simplify exception assertion"
}
],
"files": [
{
"diff": "@@ -24,7 +24,6 @@\n import org.apache.lucene.search.Collector;\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchTask;\n import org.elasticsearch.action.search.SearchType;\n@@ -81,7 +80,6 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.concurrent.ExecutorService;\n \n final class DefaultSearchContext extends SearchContext {\n \n@@ -200,29 +198,28 @@ public void preProcess(boolean rewrite) {\n \n if (resultWindow > maxResultWindow) {\n if (scrollContext == null) {\n- throw new QueryPhaseExecutionException(this,\n+ throw new IllegalArgumentException(\n \"Result window is too large, from + size must be less than or equal to: [\" + maxResultWindow + \"] but was [\"\n + resultWindow + \"]. See the scroll api for a more efficient way to request large data sets. \"\n + \"This limit can be set by changing the [\" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey()\n + \"] index level setting.\");\n }\n- throw new QueryPhaseExecutionException(this,\n+ throw new IllegalArgumentException(\n \"Batch size is too large, size must be less than or equal to: [\" + maxResultWindow + \"] but was [\" + resultWindow\n + \"]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [\"\n + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + \"] index level setting.\");\n }\n if (rescore != null) {\n if (sort != null) {\n- throw new QueryPhaseExecutionException(this, \"Cannot use [sort] option in conjunction with [rescore].\");\n+ throw new IllegalArgumentException(\"Cannot use [sort] option in conjunction with [rescore].\");\n }\n int maxWindow = indexService.getIndexSettings().getMaxRescoreWindow();\n for (RescoreContext rescoreContext: rescore) {\n if (rescoreContext.getWindowSize() > maxWindow) {\n- throw new QueryPhaseExecutionException(this, \"Rescore window [\" + rescoreContext.getWindowSize() + \"] is too large. \"\n+ throw new IllegalArgumentException(\"Rescore window [\" + rescoreContext.getWindowSize() + \"] is too large. \"\n + \"It must be less than [\" + maxWindow + \"]. This prevents allocating massive heaps for storing the results \"\n + \"to be rescored. This limit can be set by changing the [\" + IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey()\n + \"] index level setting.\");\n-\n }\n }\n }\n@@ -231,7 +228,7 @@ public void preProcess(boolean rewrite) {\n int sliceLimit = indexService.getIndexSettings().getMaxSlicesPerScroll();\n int numSlices = sliceBuilder.getMax();\n if (numSlices > sliceLimit) {\n- throw new QueryPhaseExecutionException(this, \"The number of slices [\" + numSlices + \"] is too large. It must \"\n+ throw new IllegalArgumentException(\"The number of slices [\" + numSlices + \"] is too large. It must \"\n + \"be less than [\" + sliceLimit + \"]. This limit can be set by changing the [\" +\n IndexSettings.MAX_SLICES_PER_SCROLL.getKey() + \"] index level setting.\");\n }",
"filename": "core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -650,7 +650,7 @@ public void freeAllScrollContexts() {\n \n private void contextScrollKeepAlive(SearchContext context, long keepAlive) throws IOException {\n if (keepAlive > maxKeepAlive) {\n- throw new QueryPhaseExecutionException(context,\n+ throw new IllegalArgumentException(\n \"Keep alive for scroll (\" + TimeValue.timeValueMillis(keepAlive).format() + \") is too large. \" +\n \"It must be less than (\" + TimeValue.timeValueMillis(maxKeepAlive).format() + \"). \" +\n \"This limit can be set by changing the [\" + MAX_KEEPALIVE_SETTING.getKey() + \"] cluster level setting.\");",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -189,7 +189,7 @@ protected AggregatorFactory<?> doBuild(SearchContext context, AggregatorFactory<\n throws IOException {\n int maxFilters = context.indexShard().indexSettings().getMaxAdjacencyMatrixFilters();\n if (filters.size() > maxFilters){\n- throw new QueryPhaseExecutionException(context,\n+ throw new IllegalArgumentException(\n \"Number of filters is too large, must be less than or equal to: [\" + maxFilters + \"] but was [\"\n + filters.size() + \"].\"\n + \"This limit can be set by changing the [\" + IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING.getKey()",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,178 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.QueryCachingPolicy;\n+import org.apache.lucene.search.Sort;\n+import org.apache.lucene.store.Directory;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.MockBigArrays;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.cache.IndexCache;\n+import org.elasticsearch.index.cache.query.QueryCache;\n+import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.query.AbstractQueryBuilder;\n+import org.elasticsearch.index.query.ParsedQuery;\n+import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.search.internal.AliasFilter;\n+import org.elasticsearch.search.internal.ScrollContext;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n+import org.elasticsearch.search.rescore.RescoreContext;\n+import org.elasticsearch.search.slice.SliceBuilder;\n+import org.elasticsearch.search.sort.SortAndFormats;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.UUID;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.mockito.Matchers.anyObject;\n+import static org.mockito.Matchers.anyString;\n+import static org.mockito.Matchers.eq;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+\n+\n+public class DefaultSearchContextTests extends ESTestCase {\n+\n+ public void testPreProcess() throws Exception {\n+ TimeValue timeout = new TimeValue(randomIntBetween(1, 100));\n+ ShardSearchRequest shardSearchRequest = mock(ShardSearchRequest.class);\n+ when(shardSearchRequest.searchType()).thenReturn(SearchType.DEFAULT);\n+ ShardId shardId = new ShardId(\"index\", UUID.randomUUID().toString(), 1);\n+ when(shardSearchRequest.shardId()).thenReturn(shardId);\n+ when(shardSearchRequest.types()).thenReturn(new String[]{});\n+\n+ IndexShard indexShard = mock(IndexShard.class);\n+ QueryCachingPolicy queryCachingPolicy = mock(QueryCachingPolicy.class);\n+ 
when(indexShard.getQueryCachingPolicy()).thenReturn(queryCachingPolicy);\n+\n+ int maxResultWindow = randomIntBetween(50, 100);\n+ int maxRescoreWindow = randomIntBetween(50, 100);\n+ int maxSlicesPerScroll = randomIntBetween(50, 100);\n+ Settings settings = Settings.builder()\n+ .put(\"index.max_result_window\", maxResultWindow)\n+ .put(\"index.max_slices_per_scroll\", maxSlicesPerScroll)\n+ .put(\"index.max_rescore_window\", maxRescoreWindow)\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2)\n+ .build();\n+\n+ IndexService indexService = mock(IndexService.class);\n+ IndexCache indexCache = mock(IndexCache.class);\n+ QueryCache queryCache = mock(QueryCache.class);\n+ when(indexCache.query()).thenReturn(queryCache);\n+ when(indexService.cache()).thenReturn(indexCache);\n+ QueryShardContext queryShardContext = mock(QueryShardContext.class);\n+ when(indexService.newQueryShardContext(eq(shardId.id()), anyObject(), anyObject(), anyString())).thenReturn(queryShardContext);\n+ MapperService mapperService = mock(MapperService.class);\n+ when(mapperService.hasNested()).thenReturn(randomBoolean());\n+ when(indexService.mapperService()).thenReturn(mapperService);\n+\n+ IndexMetaData indexMetaData = IndexMetaData.builder(\"index\").settings(settings).build();\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n+ when(indexService.getIndexSettings()).thenReturn(indexSettings);\n+\n+ BigArrays bigArrays = new MockBigArrays(Settings.EMPTY, new NoneCircuitBreakerService());\n+\n+ try (Directory dir = newDirectory();\n+ RandomIndexWriter w = new RandomIndexWriter(random(), dir);\n+ IndexReader reader = w.getReader();\n+ Engine.Searcher searcher = new Engine.Searcher(\"test\", new IndexSearcher(reader))) {\n+\n+ DefaultSearchContext context1 = new DefaultSearchContext(1L, shardSearchRequest, null, searcher, indexService,\n+ indexShard, bigArrays, null, timeout, null, null);\n+ context1.from(300);\n+\n+ // resultWindow greater than maxResultWindow and scrollContext is null\n+ IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false));\n+ assertThat(exception.getMessage(), equalTo(\"Result window is too large, from + size must be less than or equal to:\"\n+ + \" [\" + maxResultWindow + \"] but was [310]. See the scroll api for a more efficient way to request large data sets. \"\n+ + \"This limit can be set by changing the [\" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey()\n+ + \"] index level setting.\"));\n+\n+ // resultWindow greater than maxResultWindow and scrollContext isn't null\n+ context1.scrollContext(new ScrollContext());\n+ exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false));\n+ assertThat(exception.getMessage(), equalTo(\"Batch size is too large, size must be less than or equal to: [\"\n+ + maxResultWindow + \"] but was [310]. 
Scroll batch sizes cost as much memory as result windows so they are \"\n+ + \"controlled by the [\" + IndexSettings.MAX_RESULT_WINDOW_SETTING.getKey() + \"] index level setting.\"));\n+\n+ // resultWindow not greater than maxResultWindow and both rescore and sort are not null\n+ context1.from(0);\n+ DocValueFormat docValueFormat = mock(DocValueFormat.class);\n+ SortAndFormats sortAndFormats = new SortAndFormats(new Sort(), new DocValueFormat[]{docValueFormat});\n+ context1.sort(sortAndFormats);\n+\n+ RescoreContext rescoreContext = mock(RescoreContext.class);\n+ when(rescoreContext.getWindowSize()).thenReturn(500);\n+ context1.addRescore(rescoreContext);\n+\n+ exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false));\n+ assertThat(exception.getMessage(), equalTo(\"Cannot use [sort] option in conjunction with [rescore].\"));\n+\n+ // rescore is null but sort is not null and rescoreContext.getWindowSize() exceeds maxResultWindow\n+ context1.sort(null);\n+ exception = expectThrows(IllegalArgumentException.class, () -> context1.preProcess(false));\n+\n+ assertThat(exception.getMessage(), equalTo(\"Rescore window [\" + rescoreContext.getWindowSize() + \"] is too large. \"\n+ + \"It must be less than [\" + maxRescoreWindow + \"]. This prevents allocating massive heaps for storing the results \"\n+ + \"to be rescored. This limit can be set by changing the [\" + IndexSettings.MAX_RESCORE_WINDOW_SETTING.getKey()\n+ + \"] index level setting.\"));\n+\n+ // rescore is null but sliceBuilder is not null\n+ DefaultSearchContext context2 = new DefaultSearchContext(2L, shardSearchRequest, null, searcher, indexService,\n+ indexShard, bigArrays, null, timeout, null, null);\n+\n+ SliceBuilder sliceBuilder = mock(SliceBuilder.class);\n+ int numSlices = maxSlicesPerScroll + randomIntBetween(1, 100);\n+ when(sliceBuilder.getMax()).thenReturn(numSlices);\n+ context2.sliceBuilder(sliceBuilder);\n+\n+ exception = expectThrows(IllegalArgumentException.class, () -> context2.preProcess(false));\n+ assertThat(exception.getMessage(), equalTo(\"The number of slices [\" + numSlices + \"] is too large. It must \"\n+ + \"be less than [\" + maxSlicesPerScroll + \"]. This limit can be set by changing the [\" +\n+ IndexSettings.MAX_SLICES_PER_SCROLL.getKey() + \"] index level setting.\"));\n+\n+ // No exceptions should be thrown\n+ when(shardSearchRequest.getAliasFilter()).thenReturn(AliasFilter.EMPTY);\n+ when(shardSearchRequest.indexBoost()).thenReturn(AbstractQueryBuilder.DEFAULT_BOOST);\n+\n+ DefaultSearchContext context3 = new DefaultSearchContext(3L, shardSearchRequest, null, searcher, indexService,\n+ indexShard, bigArrays, null, timeout, null, null);\n+ ParsedQuery parsedQuery = ParsedQuery.parsedMatchAllQuery();\n+ context3.sliceBuilder(null).parsedQuery(parsedQuery).preProcess(false);\n+ assertEquals(context3.query(), context3.buildFilteredQuery(parsedQuery.query()));\n+ }\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/search/DefaultSearchContextTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,84 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.adjacency;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.query.QueryBuilder;\n+import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.search.aggregations.AggregatorFactories;\n+import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.TestSearchContext;\n+\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+\n+public class AdjacencyMatrixAggregationBuilderTests extends ESTestCase {\n+\n+\n+ public void testFilterSizeLimitation() throws Exception {\n+ // filter size grater than max size should thrown a exception\n+ QueryShardContext queryShardContext = mock(QueryShardContext.class);\n+ IndexShard indexShard = mock(IndexShard.class);\n+ Settings settings = Settings.builder()\n+ .put(\"index.max_adjacency_matrix_filters\", 2)\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2)\n+ .build();\n+ IndexMetaData indexMetaData = IndexMetaData.builder(\"index\").settings(settings).build();\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n+ when(indexShard.indexSettings()).thenReturn(indexSettings);\n+ SearchContext context = new TestSearchContext(queryShardContext, indexShard);\n+\n+ Map<String, QueryBuilder> filters = new HashMap<>(3);\n+ for (int i = 0; i < 3; i++) {\n+ QueryBuilder queryBuilder = mock(QueryBuilder.class);\n+ // return builder itself to skip rewrite\n+ when(queryBuilder.rewrite(queryShardContext)).thenReturn(queryBuilder);\n+ filters.put(\"filter\" + i, queryBuilder);\n+ }\n+ AdjacencyMatrixAggregationBuilder builder = new AdjacencyMatrixAggregationBuilder(\"dummy\", filters);\n+ IllegalArgumentException ex\n+ = expectThrows(IllegalArgumentException.class, () -> builder.doBuild(context, null, new AggregatorFactories.Builder()));\n+ assertThat(ex.getMessage(), equalTo(\"Number of filters is too large, must be less than or equal to: [2] but was [3].\"\n+ + \"This limit can be set by changing the [\" + 
IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING.getKey()\n+ + \"] index level setting.\"));\n+\n+ // filter size not grater than max size should return an instance of AdjacencyMatrixAggregatorFactory\n+ Map<String, QueryBuilder> emptyFilters = Collections.emptyMap();\n+\n+ AdjacencyMatrixAggregationBuilder aggregationBuilder = new AdjacencyMatrixAggregationBuilder(\"dummy\", emptyFilters);\n+ AggregatorFactory<?> factory = aggregationBuilder.doBuild(context, null, new AggregatorFactories.Builder());\n+ assertThat(factory instanceof AdjacencyMatrixAggregatorFactory, is(true));\n+ assertThat(factory.name(), equalTo(\"dummy\"));\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilderTests.java",
"status": "added"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.scroll;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.search.ClearScrollResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n@@ -37,7 +36,6 @@\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.search.SearchHit;\n-import org.elasticsearch.search.query.QueryPhaseExecutionException;\n import org.elasticsearch.search.sort.FieldSortBuilder;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ESIntegTestCase;\n@@ -575,10 +573,10 @@ public void testInvalidScrollKeepAlive() throws IOException {\n .setSize(1)\n .setScroll(TimeValue.timeValueHours(2))\n .execute().actionGet());\n- QueryPhaseExecutionException queryPhaseExecutionException =\n- (QueryPhaseExecutionException) ExceptionsHelper.unwrap(exc, QueryPhaseExecutionException.class);\n- assertNotNull(queryPhaseExecutionException);\n- assertThat(queryPhaseExecutionException.getMessage(), containsString(\"Keep alive for scroll (2 hours) is too large\"));\n+ IllegalArgumentException illegalArgumentException =\n+ (IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class);\n+ assertNotNull(illegalArgumentException);\n+ assertThat(illegalArgumentException.getMessage(), containsString(\"Keep alive for scroll (2 hours) is too large\"));\n \n SearchResponse searchResponse = client().prepareSearch()\n .setQuery(matchAllQuery())\n@@ -592,10 +590,10 @@ public void testInvalidScrollKeepAlive() throws IOException {\n exc = expectThrows(Exception.class,\n () -> client().prepareSearchScroll(searchResponse.getScrollId())\n .setScroll(TimeValue.timeValueHours(3)).get());\n- queryPhaseExecutionException =\n- (QueryPhaseExecutionException) ExceptionsHelper.unwrap(exc, QueryPhaseExecutionException.class);\n- assertNotNull(queryPhaseExecutionException);\n- assertThat(queryPhaseExecutionException.getMessage(), containsString(\"Keep alive for scroll (3 hours) is too large\"));\n+ illegalArgumentException =\n+ (IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class);\n+ assertNotNull(illegalArgumentException);\n+ assertThat(illegalArgumentException.getMessage(), containsString(\"Keep alive for scroll (3 hours) is too large\"));\n }\n \n private void assertToXContentResponse(ClearScrollResponse response, boolean succeed, int numFreed) throws IOException {",
"filename": "core/src/test/java/org/elasticsearch/search/scroll/SearchScrollIT.java",
"status": "modified"
},
{
"diff": "@@ -18,3 +18,7 @@ PUT /_cluster/settings\n --------------------------------------------------\n // CONSOLE\n \n+=== `_search/scroll` returns `400` for invalid requests\n+\n+The `/_search/scroll` endpoint returns `400 - Bad request` when the request invalid, while it would previously \n+return `500 - Internal Server Error` in such case.",
"filename": "docs/reference/migration/migrate_7_0/search.asciidoc",
"status": "modified"
},
{
"diff": "@@ -103,8 +103,12 @@ setup:\n \n ---\n \"Sliced scroll with invalid arguments\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: Prior versions return 500 rather than 404\n+\n - do:\n- catch: /query_phase_execution_exception.*The number of slices.*index.max_slices_per_scroll/\n+ catch: bad_request\n search:\n index: test_sliced_scroll\n size: 1",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/scroll/12_slices.yml",
"status": "modified"
},
{
"diff": "@@ -24,15 +24,12 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchTask;\n import org.elasticsearch.action.search.SearchType;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.BigArrays;\n-import org.elasticsearch.common.util.concurrent.ThreadContext;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n-import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.ObjectMapper;",
"filename": "test/framework/src/main/java/org/elasticsearch/test/TestSearchContext.java",
"status": "modified"
}
]
} |
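
The thread above turns on which exception a failed request-size validation should raise, because that choice drives the HTTP status code seen by clients. Below is a minimal sketch of the idea with hypothetical names (`ResultWindowCheck` and `checkResultWindow` are invented for illustration, not the actual Elasticsearch classes): throwing `IllegalArgumentException`, which Elasticsearch surfaces to REST clients as `400 Bad Request`, instead of an execution exception that was reported as `500`.

```java
// Hypothetical sketch of the validation pattern discussed above; names are illustrative only.
public class ResultWindowCheck {

    static void checkResultWindow(int from, int size, int maxResultWindow) {
        int resultWindow = from + size;
        if (resultWindow > maxResultWindow) {
            // An IllegalArgumentException signals a client error (HTTP 400 in Elasticsearch's
            // REST layer), whereas the previous QueryPhaseExecutionException came back as 500.
            throw new IllegalArgumentException(
                "Result window is too large, from + size must be less than or equal to: ["
                    + maxResultWindow + "] but was [" + resultWindow + "]");
        }
    }

    public static void main(String[] args) {
        checkResultWindow(0, 100, 10000);     // passes
        checkResultWindow(0, 100000, 10000);  // throws IllegalArgumentException
    }
}
```
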
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Feature request -->\r\n\r\n**Describe the feature**: `scroll_size` seems to be ignored in my request\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.0.0-beta2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): JDK 8\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Ubuntu\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n\r\nNo matter what value I set for `scroll_size`, the following request only updates 1,000 documents:\r\n```\r\nPOST crimes/_update_by_query?scroll_size=4000\r\n{\r\n \"query\" : {\r\n \"match\": {\r\n \"area.ward\": \"6\"\r\n }\r\n },\r\n \"script\" : {\r\n \"source\" : \"ctx._source.area.ward=10\"\r\n }\r\n}\r\n```\r\nHere is the response I get:\r\n```\r\n{\r\n \"took\": 112,\r\n \"timed_out\": false,\r\n \"total\": 1000,\r\n \"updated\": 1000,\r\n \"deleted\": 0,\r\n \"batches\": 1,\r\n \"version_conflicts\": 0,\r\n \"noops\": 0,\r\n \"retries\": {\r\n \"bulk\": 0,\r\n \"search\": 0\r\n },\r\n \"throttled_millis\": 0,\r\n \"requests_per_second\": -1,\r\n \"throttled_until_millis\": 0,\r\n \"failures\": []\r\n}\r\n```\r\n\r\nAm I doing something wrong? \r\n\r\n",
"comments": [
{
"body": "It looks like you have 1000 documents and a scroll size of 4000. This all looks correct. What is wrong?",
"created_at": "2017-09-25T15:07:29Z"
},
{
"body": "It's the other way around. I have over 4,000 documents that match that query, so I set scroll_size to 4000 so I could update them all in one request. But only 1,000 get updated at a time. I have to run the `_update_by_query` four times to get all documents updated, which is not what I expected. ",
"created_at": "2017-09-25T15:14:52Z"
},
{
"body": "I've reproduced this locally. The `size` parameter is getting set to the scroll size accidentally.",
"created_at": "2017-09-25T15:38:07Z"
},
{
"body": "@rfraposa I've renamed this to the actual issue.",
"created_at": "2017-09-25T15:49:32Z"
},
{
"body": "Note: this doesn't effect 5.6. I've not tracked down the patch that changed it but I'll put together a fix.",
"created_at": "2017-09-25T15:49:59Z"
},
{
"body": "This is the script I used to reproduce it:\r\n```\r\nfor i in $(seq 1 4000); do\r\n curl -HContent-Type:application/json -XPOST localhost:9200/test/test -d'{\"test\": \"test\"}'; echo\r\ndone\r\n\r\ncurl -XPOST localhost:9200/_refresh\r\n\r\ncurl -HContent-Type:application/json -XPOST 'localhost:9200/test/_update_by_query?refresh&pretty' -d'{\r\n \"query\": {\r\n \"match\": {\r\n \"test\": \"test\"\r\n }\r\n },\r\n \"script\": {\r\n \"inline\": \"ctx._source.foo=\\\"bar\\\"\"\r\n }\r\n}'\r\n```\r\n\r\nThat last one *should* return `4000` documents updated but it returns `1000`.",
"created_at": "2017-09-25T15:56:22Z"
},
{
"body": "Better reproduction:\r\n```\r\nfor i in $(seq 1 4000); do\r\n curl -HContent-Type:application/json -XPOST localhost:9200/test/test -d'{\"test\": \"test\"}'; echo\r\ndone\r\n\r\ncurl -XPOST localhost:9200/_refresh\r\n\r\ncurl -HContent-Type:application/json -XPOST 'localhost:9200/test/_update_by_query?refresh&pretty'\r\n\r\n\r\n#This one it ok!\r\ncurl -HContent-Type:application/json -XPOST 'localhost:9200/_reindex?refresh&pretty' -d'{\r\n \"source\": {\r\n \"index\": \"test\"\r\n },\r\n \"dest\": {\r\n \"index\": \"dest\"\r\n }\r\n}'\r\n\r\n\r\ncurl -HContent-Type:application/json -XPOST 'localhost:9200/dest/_delete_by_query?refresh&pretty' -d'{\r\n \"query\": {\r\n \"match_all\": {}\r\n }\r\n}'\r\n```\r\n\r\nShows that this effects `_update_by_query` and `_delete_by_query` but not `_reindex`.",
"created_at": "2017-09-25T15:59:18Z"
},
{
"body": "I dropped the `blocker` label because we're ok with having this be a \"known issue\". I have opened up a PR to fix it though.",
"created_at": "2017-09-25T18:26:20Z"
}
],
"number": 26761,
"title": "_update_by_query's default size accidentally changed to 1000"
} | {
"body": "We were accidentally defaulting it to the scroll size.\r\nUntwists some of the tricks that we play with parsing\r\nso that the size is no longer scrambled.\r\n\r\nCloses #26761\r\n",
"number": 26784,
"review_comments": [
{
"body": "can this be `searchReqeust.source()::size`",
"created_at": "2017-09-25T18:34:36Z"
},
{
"body": "I read your comment and though \"of course it can\" and implemented it and everything failed. I then remembered why I wrote it this way in the first place: `search.source()` is null at this time.",
"created_at": "2017-09-25T19:35:16Z"
},
{
"body": "oh hmm but it might be null later too no? should we protect against that?",
"created_at": "2017-09-25T19:58:46Z"
},
{
"body": "I can't be null later but I admit to it being weird. I'll leave a comment explaining.",
"created_at": "2017-09-25T20:06:09Z"
}
],
"title": "Fix update_by_query's default size parameter"
} | {
"commits": [
{
"message": "Fix update_by_query's default size parameter\n\nWe were accidentally defaulting it to the scroll size.\nUntwists some of the tricks that we play with parsing\nso that the size is no longer scrambled.\n\nCloses #26761"
},
{
"message": "Use method reference"
},
{
"message": "Revert \"Use method reference\"\n\nThis reverts commit 43975208aef0d7de405eeae093fe97e560a62664."
},
{
"message": "Add comment"
}
],
"files": [
{
"diff": "@@ -43,7 +43,7 @@ public abstract class AbstractBulkByScrollRequest<Self extends AbstractBulkByScr\n \n public static final int SIZE_ALL_MATCHES = -1;\n private static final TimeValue DEFAULT_SCROLL_TIMEOUT = timeValueMinutes(5);\n- private static final int DEFAULT_SCROLL_SIZE = 1000;\n+ static final int DEFAULT_SCROLL_SIZE = 1000;\n \n public static final int AUTO_SLICES = 0;\n public static final String AUTO_SLICES_VALUE = \"auto\";",
"filename": "core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java",
"status": "modified"
},
{
"diff": "@@ -44,6 +44,7 @@\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.Set;\n+import java.util.function.IntConsumer;\n \n import static org.elasticsearch.common.unit.TimeValue.parseTimeValue;\n import static org.elasticsearch.rest.RestRequest.Method.GET;\n@@ -73,8 +74,21 @@ public String getName() {\n @Override\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n SearchRequest searchRequest = new SearchRequest();\n+ /*\n+ * We have to pull out the call to `source().size(size)` because\n+ * _update_by_query and _delete_by_query uses this same parsing\n+ * path but sets a different variable when it sees the `size`\n+ * url parameter.\n+ *\n+ * Note that we can't use `searchRequest.source()::size` because\n+ * `searchRequest.source()` is null right now. We don't have to\n+ * guard against it being null in the IntConsumer because it can't\n+ * be null later. If that is confusing to you then you are in good\n+ * company.\n+ */\n+ IntConsumer setSize = size -> searchRequest.source().size(size);\n request.withContentOrSourceParamParserOrNull(parser ->\n- parseSearchRequest(searchRequest, request, parser));\n+ parseSearchRequest(searchRequest, request, parser, setSize));\n \n return channel -> client.search(searchRequest, new RestStatusToXContentListener<>(channel));\n }\n@@ -84,9 +98,11 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n *\n * @param requestContentParser body of the request to read. This method does not attempt to read the body from the {@code request}\n * parameter\n+ * @param setSize how the size url parameter is handled. {@code udpate_by_query} and regular search differ here.\n */\n public static void parseSearchRequest(SearchRequest searchRequest, RestRequest request,\n- XContentParser requestContentParser) throws IOException {\n+ XContentParser requestContentParser,\n+ IntConsumer setSize) throws IOException {\n \n if (searchRequest.source() == null) {\n searchRequest.source(new SearchSourceBuilder());\n@@ -118,7 +134,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r\n } else {\n searchRequest.searchType(searchType);\n }\n- parseSearchSource(searchRequest.source(), request);\n+ parseSearchSource(searchRequest.source(), request, setSize);\n searchRequest.requestCache(request.paramAsBoolean(\"request_cache\", null));\n \n String scroll = request.param(\"scroll\");\n@@ -136,7 +152,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r\n * Parses the rest request on top of the SearchSourceBuilder, preserving\n * values that are not overridden by the rest request.\n */\n- private static void parseSearchSource(final SearchSourceBuilder searchSourceBuilder, RestRequest request) {\n+ private static void parseSearchSource(final SearchSourceBuilder searchSourceBuilder, RestRequest request, IntConsumer setSize) {\n QueryBuilder queryBuilder = RestActions.urlParamsToQueryBuilder(request);\n if (queryBuilder != null) {\n searchSourceBuilder.query(queryBuilder);\n@@ -148,7 +164,7 @@ private static void parseSearchSource(final SearchSourceBuilder searchSourceBuil\n }\n int size = request.paramAsInt(\"size\", -1);\n if (size != -1) {\n- searchSourceBuilder.size(size);\n+ setSize.accept(size);\n }\n \n if (request.hasParam(\"explain\")) {",
"filename": "core/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.script.mustache;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.ParseField;\n@@ -94,7 +93,7 @@ public String getName() {\n public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {\n // Creates the search request with all required params\n SearchRequest searchRequest = new SearchRequest();\n- RestSearchAction.parseSearchRequest(searchRequest, request, null);\n+ RestSearchAction.parseSearchRequest(searchRequest, request, null, size -> searchRequest.source().size(size));\n \n // Creates the search template request\n SearchTemplateRequest searchTemplateRequest;",
"filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/RestSearchTemplateAction.java",
"status": "modified"
},
{
"diff": "@@ -26,12 +26,12 @@ esplugin {\n }\n \n integTestCluster {\n- // Whitelist reindexing from the local node so we can test it.\n+ // Whitelist reindexing from the local node so we can test reindex-from-remote.\n setting 'reindex.remote.whitelist', '127.0.0.1:*'\n }\n \n run {\n- // Whitelist reindexing from the local node so we can test it.\n+ // Whitelist reindexing from the local node so we can test reindex-from-remote.\n setting 'reindex.remote.whitelist', '127.0.0.1:*'\n }\n ",
"filename": "modules/reindex/build.gradle",
"status": "modified"
},
{
"diff": "@@ -49,14 +49,12 @@ protected void parseInternalRequest(Request internal, RestRequest restRequest,\n assert restRequest != null : \"RestRequest should not be null\";\n \n SearchRequest searchRequest = internal.getSearchRequest();\n- int scrollSize = searchRequest.source().size();\n \n try (XContentParser parser = extractRequestSpecificFields(restRequest, bodyConsumers)) {\n- RestSearchAction.parseSearchRequest(searchRequest, restRequest, parser);\n+ RestSearchAction.parseSearchRequest(searchRequest, restRequest, parser, internal::setSize);\n }\n \n- internal.setSize(searchRequest.source().size());\n- searchRequest.source().size(restRequest.paramAsInt(\"scroll_size\", scrollSize));\n+ searchRequest.source().size(restRequest.paramAsInt(\"scroll_size\", searchRequest.source().size()));\n \n String conflicts = restRequest.param(\"conflicts\");\n if (conflicts != null) {",
"filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.reindex;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.client.node.NodeClient;\n import org.elasticsearch.common.settings.Settings;",
"filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestDeleteByQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,97 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.reindex;\n+\n+import org.apache.http.entity.ContentType;\n+import org.apache.http.entity.StringEntity;\n+import org.elasticsearch.client.Response;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.test.rest.ESRestTestCase;\n+import org.junit.Before;\n+\n+import java.io.IOException;\n+import java.util.Map;\n+\n+import static java.util.Collections.emptyMap;\n+import static java.util.Collections.singletonMap;\n+import static org.hamcrest.Matchers.hasEntry;\n+\n+/**\n+ * Tests {@code _update_by_query}, {@code _delete_by_query}, and {@code _reindex}\n+ * of many documents over REST. It is important to test many documents to make\n+ * sure that we don't change the default behavior of touching <strong>all</strong>\n+ * documents in the request.\n+ */\n+public class ManyDocumentsIT extends ESRestTestCase {\n+ private final int count = between(150, 2000);\n+\n+ @Before\n+ public void setupTestIndex() throws IOException {\n+ StringBuilder bulk = new StringBuilder();\n+ for (int i = 0; i < count; i++) {\n+ bulk.append(\"{\\\"index\\\":{}}\\n\");\n+ bulk.append(\"{\\\"test\\\":\\\"test\\\"}\\n\");\n+ }\n+ client().performRequest(\"POST\", \"/test/test/_bulk\", singletonMap(\"refresh\", \"true\"),\n+ new StringEntity(bulk.toString(), ContentType.APPLICATION_JSON));\n+ }\n+\n+ public void testReindex() throws IOException {\n+ Map<String, Object> response = toMap(client().performRequest(\"POST\", \"/_reindex\", emptyMap(), new StringEntity(\n+ \"{\\\"source\\\":{\\\"index\\\":\\\"test\\\"}, \\\"dest\\\":{\\\"index\\\":\\\"des\\\"}}\",\n+ ContentType.APPLICATION_JSON)));\n+ assertThat(response, hasEntry(\"total\", count));\n+ assertThat(response, hasEntry(\"created\", count));\n+ }\n+\n+ public void testReindexFromRemote() throws IOException {\n+ Map<?, ?> nodesInfo = toMap(client().performRequest(\"GET\", \"/_nodes/http\"));\n+ nodesInfo = (Map<?, ?>) nodesInfo.get(\"nodes\");\n+ Map<?, ?> nodeInfo = (Map<?, ?>) nodesInfo.values().iterator().next();\n+ Map<?, ?> http = (Map<?, ?>) nodeInfo.get(\"http\");\n+ String remote = \"http://\"+ http.get(\"publish_address\");\n+ Map<String, Object> response = toMap(client().performRequest(\"POST\", \"/_reindex\", emptyMap(), new StringEntity(\n+ \"{\\\"source\\\":{\\\"index\\\":\\\"test\\\",\\\"remote\\\":{\\\"host\\\":\\\"\" + remote + \"\\\"}}, \\\"dest\\\":{\\\"index\\\":\\\"des\\\"}}\",\n+ ContentType.APPLICATION_JSON)));\n+ assertThat(response, hasEntry(\"total\", count));\n+ assertThat(response, hasEntry(\"created\", count));\n+ }\n+\n+\n+ public void testUpdateByQuery() throws IOException 
{\n+ Map<String, Object> response = toMap(client().performRequest(\"POST\", \"/test/_update_by_query\"));\n+ assertThat(response, hasEntry(\"total\", count));\n+ assertThat(response, hasEntry(\"updated\", count));\n+ }\n+\n+ public void testDeleteByQuery() throws IOException {\n+ Map<String, Object> response = toMap(client().performRequest(\"POST\", \"/test/_delete_by_query\", emptyMap(), new StringEntity(\n+ \"{\\\"query\\\":{\\\"match_all\\\":{}}}\",\n+ ContentType.APPLICATION_JSON)));\n+ assertThat(response, hasEntry(\"total\", count));\n+ assertThat(response, hasEntry(\"deleted\", count));\n+ }\n+\n+ static Map<String, Object> toMap(Response response) throws IOException {\n+ return XContentHelper.convertToMap(JsonXContent.jsonXContent, response.getEntity().getContent(), false);\n+ }\n+\n+}",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/ManyDocumentsIT.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.reindex;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.rest.FakeRestRequest;\n+\n+import java.io.IOException;\n+\n+import static java.util.Collections.emptyList;\n+import static org.mockito.Mockito.mock;\n+\n+public class RestDeleteByQueryActionTests extends ESTestCase {\n+ public void testParseEmpty() throws IOException {\n+ RestDeleteByQueryAction action = new RestDeleteByQueryAction(Settings.EMPTY, mock(RestController.class));\n+ DeleteByQueryRequest request = action.buildRequest(new FakeRestRequest.Builder(new NamedXContentRegistry(emptyList()))\n+ .build());\n+ assertEquals(AbstractBulkByScrollRequest.SIZE_ALL_MATCHES, request.getSize());\n+ assertEquals(AbstractBulkByScrollRequest.DEFAULT_SCROLL_SIZE, request.getSearchRequest().source().size());\n+ }\n+}",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestDeleteByQueryActionTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.reindex;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.rest.FakeRestRequest;\n+\n+import java.io.IOException;\n+\n+import static java.util.Collections.emptyList;\n+import static org.mockito.Mockito.mock;\n+\n+public class RestUpdateByQueryActionTests extends ESTestCase {\n+ public void testParseEmpty() throws IOException {\n+ RestUpdateByQueryAction action = new RestUpdateByQueryAction(Settings.EMPTY, mock(RestController.class));\n+ UpdateByQueryRequest request = action.buildRequest(new FakeRestRequest.Builder(new NamedXContentRegistry(emptyList()))\n+ .build());\n+ assertEquals(AbstractBulkByScrollRequest.SIZE_ALL_MATCHES, request.getSize());\n+ assertEquals(AbstractBulkByScrollRequest.DEFAULT_SCROLL_SIZE, request.getSearchRequest().source().size());\n+ }\n+}",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/RestUpdateByQueryActionTests.java",
"status": "added"
}
]
} |
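
The fix in the row above hinges on letting each REST handler decide what the `size` URL parameter means: the shared parser calls an `IntConsumer` instead of writing `size` straight into the search source (the diff wires `size -> searchRequest.source().size(size)` for plain search and `internal::setSize` for the by-query handlers). Below is a minimal, simplified sketch of that pattern with hypothetical names (`SizeParsingSketch`, `maxDocs`, `scrollBatchSize` are invented for illustration), showing why `scroll_size` no longer gets clobbered by `size`.

```java
import java.util.function.IntConsumer;

// Hypothetical, simplified model of the IntConsumer-based size handling described above.
public class SizeParsingSketch {
    int maxDocs = -1;           // "how many docs to process in total" (all matches by default)
    int scrollBatchSize = 1000; // default scroll batch size

    void parse(Integer sizeParam, Integer scrollSizeParam) {
        // The caller chooses what `size` means by supplying the consumer;
        // for an _update_by_query-style handler it sets the doc limit, not the batch size.
        IntConsumer setSize = s -> maxDocs = s;
        if (sizeParam != null) {
            setSize.accept(sizeParam);          // does not touch scrollBatchSize
        }
        if (scrollSizeParam != null) {
            scrollBatchSize = scrollSizeParam;  // only scroll_size controls the batch size
        }
    }

    public static void main(String[] args) {
        SizeParsingSketch sketch = new SizeParsingSketch();
        sketch.parse(null, 4000);
        System.out.println(sketch.maxDocs + " / " + sketch.scrollBatchSize); // -1 / 4000
    }
}
```
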
{
"body": "We are not following the Azure documentation about uploading blobs to Azure storage. https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage#upload-a-blob-into-a-container\r\n\r\nInstead we are using our own implementation which might cause some troubles and rarely some blobs can be not immediately commited just after we close the stream. Using the standard implementation provided by Azure team should allow us to benefit from all the magic Azure SDK team already wrote.\r\n\r\nAnd well... Let's just read the doc!",
"comments": [
{
"body": "@imotov Thanks for the review. I did some manual testings this morning and it does not work.\r\n\r\nApparently the file `master.dat-temp` is not written in the azure container... \r\nGetting an exception saying that the container does not exist although I can see it in the azure Web interface... \r\n\r\nI'm digging... Probably something stupid on my end. :) \r\n\r\n",
"created_at": "2017-09-25T09:52:14Z"
},
{
"body": "@imotov I worked on IT so we can now pass them when needed (still a manual operation).\r\nI tried to simplify and remove non needed things.\r\n\r\nI tested everything manually:\r\n\r\n* Install elasticsearch 7.0.0-alpha1-SNAPSHOT\r\n* Install repository-azure plugin\r\n* Run the following test:\r\n\r\n```sh\r\n# Clean test env\r\ncurl -XDELETE localhost:9200/foo?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1/snap1?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1?pretty\r\n\r\n# Create data\r\ncurl -XPUT localhost:9200/foo/doc/1?pretty -H 'Content-Type: application/json' -d '{\r\n \"foo\": \"bar\"\r\n}'\r\ncurl -XPOST localhost:9200/foo/_refresh?pretty\r\n\r\n# Create repository using default account\r\ncurl -XPUT localhost:9200/_snapshot/my_backup1?pretty -H 'Content-Type: application/json' -d '{\r\n \"type\": \"azure\"\r\n}'\r\n\r\n# Backup\r\ncurl -XPOST \"localhost:9200/_snapshot/my_backup1/snap1?pretty&wait_for_completion=true\"\r\n\r\n# Delete existing index\r\ncurl -XDELETE localhost:9200/foo?pretty\r\n\r\n# Restore using default account\r\ncurl -XPOST \"localhost:9200/_snapshot/my_backup1/snap1/_restore?pretty&wait_for_completion=true\"\r\n\r\n# Check\r\ncurl -XGET localhost:9200/foo/_search?pretty\r\n\r\n# Remove backup\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1/snap1?pretty\r\ncurl -XDELETE localhost:9200/_snapshot/my_backup1?pretty\r\n```\r\n\r\nEverything is correct. I'm going to test with a bigger dataset now and check everything works.\r\nCould you give a final review on the code please as I changed some code recently?\r\n\r\nThanks!",
"created_at": "2017-09-26T09:36:18Z"
},
{
"body": "I tested with much more data (300mb) and everything is working well.\r\nLMK! :) ",
"created_at": "2017-09-26T09:53:06Z"
},
{
"body": "@dadoonet would it make sense to base this tests on [`ESBlobStoreRepositoryIntegTestCase`](https://github.com/elastic/elasticsearch/blob/master/test/framework/src/main/java/org/elasticsearch/repositories/blobstore/ESBlobStoreRepositoryIntegTestCase.java)? I think this base class has most of the tests that we want to run a repo to ensure that it behaves reasonably. If you find it lacking something, I think it would make sense to extend it so all other repos would benefit ",
"created_at": "2017-09-26T19:18:16Z"
},
{
"body": "@imotov Great! I did not remember about that class. Yeah. Definitely better using it as well.\r\n\r\nI pushed new changes.",
"created_at": "2017-09-26T20:21:42Z"
},
{
"body": "I backported it on 6.x yet.\r\n\r\nI'm planning to backport on 6.0 but it's a bit harder as some PR have not been merged to 6.0 like #23518 and #23405.\r\n",
"created_at": "2017-09-28T11:52:40Z"
},
{
"body": "Backported to 6.0 as well with 9aa5595d199d41f7681d6814616dd73d52a61b66\r\n",
"created_at": "2017-09-29T13:59:02Z"
},
{
"body": "Backported to 5.6 with https://github.com/elastic/elasticsearch/pull/26839/commits/28f17a72f617bde54ee6e1071e1491e03740d967 (see #26839)",
"created_at": "2017-10-03T13:30:03Z"
}
],
"number": 26751,
"title": "Use Azure upload method instead of our own implementation"
} | {
"body": "While working on #26751 and doing some manual integration testing I found that this #22858 removed an important line of our code:\r\n\r\n`AzureRepository` overrides default `initializeSnapshot` method which creates metadata files and do other stuff.\r\n\r\nBut with PR #22858, I wrote:\r\n\r\n```java\r\n @Override\r\n public void initializeSnapshot(SnapshotId snapshotId, List<IndexId> indices, MetaData clusterMetadata) {\r\n if (blobStore.doesContainerExist(blobStore.container()) == false) {\r\n throw new IllegalArgumentException(\"The bucket [\" + blobStore.container() + \"] does not exist. Please create it before \" +\r\n \" creating an azure snapshot repository backed by it.\");\r\n }\r\n }\r\n```\r\n\r\ninstead of\r\n\r\n```java\r\n @Override\r\n public void initializeSnapshot(SnapshotId snapshotId, List<IndexId> indices, MetaData clusterMetadata) {\r\n if (blobStore.doesContainerExist(blobStore.container()) == false) {\r\n throw new IllegalArgumentException(\"The bucket [\" + blobStore.container() + \"] does not exist. Please create it before \" +\r\n \" creating an azure snapshot repository backed by it.\");\r\n }\r\n super.initializeSnapshot(snapshotId, indices, clusterMetadata);\r\n }\r\n```\r\n\r\nAs we never call `super.initializeSnapshot(...)` files are not created and we can't restore what we saved.\r\n\r\nCloses #26777.\r\n",
"number": 26778,
"review_comments": [],
"title": "Azure snapshots can not be restored anymore"
} | {
"commits": [
{
"message": "Azure snapshots can not be restored anymore\n\nWhile working on #26751 and doing some manual integration testing I found that this #22858 removed an important line of our code:\n\n`AzureRepository` overrides default `initializeSnapshot` method which creates metadata files and do other stuff.\n\nBut with PR #22858, I wrote:\n\n```java\n @Override\n public void initializeSnapshot(SnapshotId snapshotId, List<IndexId> indices, MetaData clusterMetadata) {\n if (blobStore.doesContainerExist(blobStore.container()) == false) {\n throw new IllegalArgumentException(\"The bucket [\" + blobStore.container() + \"] does not exist. Please create it before \" +\n \" creating an azure snapshot repository backed by it.\");\n }\n }\n```\n\ninstead of\n\n```java\n @Override\n public void initializeSnapshot(SnapshotId snapshotId, List<IndexId> indices, MetaData clusterMetadata) {\n if (blobStore.doesContainerExist(blobStore.container()) == false) {\n throw new IllegalArgumentException(\"The bucket [\" + blobStore.container() + \"] does not exist. Please create it before \" +\n \" creating an azure snapshot repository backed by it.\");\n }\n super.initializeSnapshot(snapshotId, indices, clusterMetadata);\n }\n```\n\nAs we never call `super.initializeSnapshot(...)` files are not created and we can't restore what we saved.\n\nCloses #26777."
}
],
"files": [
{
"diff": "@@ -157,6 +157,7 @@ public void initializeSnapshot(SnapshotId snapshotId, List<IndexId> indices, Met\n throw new IllegalArgumentException(\"The bucket [\" + blobStore.container() + \"] does not exist. Please create it before \" +\n \" creating an azure snapshot repository backed by it.\");\n }\n+ super.initializeSnapshot(snapshotId, indices, clusterMetadata);\n }\n \n @Override",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepository.java",
"status": "modified"
}
]
} |
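
The one-line patch above is an instance of a common override pitfall: adding a precondition check while dropping the call to the parent implementation, so the base behaviour (writing the snapshot metadata) is silently skipped. A minimal, hypothetical sketch of the restored pattern (class names invented for illustration; this is not the actual `AzureRepository` code):

```java
// Hypothetical sketch: an override that adds a precondition must still delegate to super.
public class SnapshotInitSketch {

    static class BaseRepository {
        void initializeSnapshot() {
            System.out.println("writing snapshot metadata"); // base behaviour that must run
        }
    }

    static class ContainerCheckingRepository extends BaseRepository {
        private final boolean containerExists;

        ContainerCheckingRepository(boolean containerExists) {
            this.containerExists = containerExists;
        }

        @Override
        void initializeSnapshot() {
            if (containerExists == false) {
                throw new IllegalArgumentException("The container does not exist");
            }
            // Without this delegation the check passes but no metadata is ever written,
            // which is why the snapshots could not be restored.
            super.initializeSnapshot();
        }
    }

    public static void main(String[] args) {
        new ContainerCheckingRepository(true).initializeSnapshot();
    }
}
```
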
{
"body": "In 6.0.0 RC1 BC1, when installing a plugin that gets downloaded, the plugin itself downloads fine, but the plugin installer fails to retrieve the plugin's .sha1 file. This appears to be because we switched to producing sha512 in the build system. \r\n\r\n\r\n```\r\n ES_JAVA_OPTS=\"-Des.plugins.staging=<staging build id>\" bin/elasticsearch-plugin install repository-s3\r\n-> Downloading repository-s3 from elastic\r\n[=================================================] 100% \r\nException in thread \"main\" java.io.FileNotFoundException: https://staging.elastic.co/6.0.0-rc1-c8f2d2ee/downloads/elasticsearch-plugins/repository-s3/repository-s3-6.0.0-rc1.zip.sha1\r\n\tat sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1872)\r\n\tat sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)\r\n\tat sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)\r\n\tat java.net.URL.openStream(URL.java:1045)\r\n\tat org.elasticsearch.plugins.InstallPluginCommand.downloadZipAndChecksum(InstallPluginCommand.java:372)\r\n\tat org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:221)\r\n\tat org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:211)\r\n\tat org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:202)\r\n\tat org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:69)\r\n\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)\r\n\tat org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:69)\r\n\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)\r\n\tat org.elasticsearch.cli.Command.main(Command.java:90)\r\n\tat org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)\r\n```",
"comments": [
{
"body": "@skearns64 I am hitting this when trying to build Docker images with 5.6.2-BC1:\r\n\r\n```\r\nStep 12/20 : RUN for PLUGIN in x-pack ingest-user-agent ingest-geoip; do eval ES_JAVA_OPTS=\"-Des.plugins.staging=cb620858\" elasticsearch-plugin install --batch \"$PLUGIN\"; done\r\n ---> Running in ad92634ba17f\r\n-> Downloading x-pack from elastic\r\n[=================================================] 100%?? \r\nException in thread \"main\" java.io.FileNotFoundException: https://staging.elastic.co/5.6.2-cb620858/downloads/elasticsearch-plugins/x-pack/x-pack-5.6.2.zip.sha1\r\n at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1872)\r\n at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)\r\n at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)\r\n at java.net.URL.openStream(URL.java:1045)\r\n at org.elasticsearch.plugins.InstallPluginCommand.downloadZipAndChecksum(InstallPluginCommand.java:375)\r\n at org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:225)\r\n at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:215)\r\n at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:201)\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)\r\n at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:69)\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)\r\n at org.elasticsearch.cli.Command.main(Command.java:90)\r\n at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)\r\n```\r\n\r\nI guess the PRs to move to sha512 haven't been backported to 5.6/5.5; does this warrant a new issue or just re-opening this, as I don't see a version label?",
"created_at": "2017-09-22T17:17:39Z"
},
{
"body": "@dliappis I will backport the change to 5.6.",
"created_at": "2017-09-22T17:19:47Z"
},
{
"body": "@dliappis I have backported this to 5.6.",
"created_at": "2017-09-22T18:18:51Z"
},
{
"body": "Thanks all for catching and working on this. I should have seen it coming when proposing the change and raised some downstream visibility. ⛵️ ",
"created_at": "2017-09-22T18:24:54Z"
},
{
"body": "@jasontedor Confirming that the latest `5.6.2-0836ee74` pulls plugins correctly and uses the right sha512 digests.",
"created_at": "2017-09-23T15:32:51Z"
}
],
"number": 26746,
"title": "Online plugin Installation looks for sha1, we now only produce sha512"
} | {
"body": "With 6.0 rc1 we now publish sha512 checksums for official plugins.\r\nHowever, in order to ease the pain for plugin authors, this commit adds\r\nbackcompat to still allow sha1 checksums. Also added tests for\r\nchecksums.\r\n\r\ncloses #26746",
"number": 26748,
"review_comments": [],
"title": "Plugins: Add backcompat for sha1 checksums"
} | {
"commits": [
{
"message": "Plugins: Add backcompat for sha1 checksums\n\nWith 6.0 rc1 we now publish sha512 checksums for official plugins.\nHowever, in order to ease the pain for plugin authors, this commit adds\nbackcompat to still allow sha1 checksums. Also added tests for\nchecksums.\n\ncloses #26746"
},
{
"message": "cleanup"
}
],
"files": [
{
"diff": "@@ -58,6 +58,7 @@\n import java.nio.file.attribute.PosixFileAttributes;\n import java.nio.file.attribute.PosixFilePermission;\n import java.nio.file.attribute.PosixFilePermissions;\n+import java.security.MessageDigest;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n@@ -218,15 +219,15 @@ private Path download(Terminal terminal, String pluginId, Path tmpDir) throws Ex\n if (OFFICIAL_PLUGINS.contains(pluginId)) {\n final String url = getElasticUrl(terminal, getStagingHash(), Version.CURRENT, pluginId, Platforms.PLATFORM_NAME);\n terminal.println(\"-> Downloading \" + pluginId + \" from elastic\");\n- return downloadZipAndChecksum(terminal, url, tmpDir);\n+ return downloadZipAndChecksum(terminal, url, tmpDir, false);\n }\n \n // now try as maven coordinates, a valid URL would only have a colon and slash\n String[] coordinates = pluginId.split(\":\");\n if (coordinates.length == 3 && pluginId.contains(\"/\") == false) {\n String mavenUrl = getMavenUrl(terminal, coordinates, Platforms.PLATFORM_NAME);\n terminal.println(\"-> Downloading \" + pluginId + \" from maven central\");\n- return downloadZipAndChecksum(terminal, mavenUrl, tmpDir);\n+ return downloadZipAndChecksum(terminal, mavenUrl, tmpDir, true);\n }\n \n // fall back to plain old URL\n@@ -312,8 +313,9 @@ private List<String> checkMisspelledPlugin(String pluginId) {\n }\n \n /** Downloads a zip from the url, into a temp file under the given temp dir. */\n+ // pkg private for tests\n @SuppressForbidden(reason = \"We use getInputStream to download plugins\")\n- private Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException {\n+ Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException {\n terminal.println(VERBOSE, \"Retrieving zip from \" + urlString);\n URL url = new URL(urlString);\n Path zip = Files.createTempFile(tmpDir, null, \".zip\");\n@@ -361,13 +363,26 @@ public void onProgress(int percent) {\n }\n }\n \n- /** Downloads a zip from the url, as well as a SHA1 checksum, and checks the checksum. */\n+ /** Downloads a zip from the url, as well as a SHA512 (or SHA1) checksum, and checks the checksum. */\n // pkg private for tests\n @SuppressForbidden(reason = \"We use openStream to download plugins\")\n- Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception {\n+ private Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir, boolean allowSha1) throws Exception {\n Path zip = downloadZip(terminal, urlString, tmpDir);\n pathsToDeleteOnShutdown.add(zip);\n- URL checksumUrl = new URL(urlString + \".sha1\");\n+ String checksumUrlString = urlString + \".sha512\";\n+ URL checksumUrl = openUrl(checksumUrlString);\n+ String digestAlgo = \"SHA-512\";\n+ if (checksumUrl == null && allowSha1) {\n+ // fallback to sha1, until 7.0, but with warning\n+ terminal.println(\"Warning: sha512 not found, falling back to sha1. This behavior is deprecated and will be removed in a \" +\n+ \"future release. 
Please update the plugin to use a sha512 checksum.\");\n+ checksumUrlString = urlString + \".sha1\";\n+ checksumUrl = openUrl(checksumUrlString);\n+ digestAlgo = \"SHA-1\";\n+ }\n+ if (checksumUrl == null) {\n+ throw new UserException(ExitCodes.IO_ERROR, \"Plugin checksum missing: \" + checksumUrlString);\n+ }\n final String expectedChecksum;\n try (InputStream in = checksumUrl.openStream()) {\n BufferedReader checksumReader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));\n@@ -378,15 +393,30 @@ Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) th\n }\n \n byte[] zipbytes = Files.readAllBytes(zip);\n- String gotChecksum = MessageDigests.toHexString(MessageDigests.sha1().digest(zipbytes));\n+ String gotChecksum = MessageDigests.toHexString(MessageDigest.getInstance(digestAlgo).digest(zipbytes));\n if (expectedChecksum.equals(gotChecksum) == false) {\n throw new UserException(ExitCodes.IO_ERROR,\n- \"SHA1 mismatch, expected \" + expectedChecksum + \" but got \" + gotChecksum);\n+ digestAlgo + \" mismatch, expected \" + expectedChecksum + \" but got \" + gotChecksum);\n }\n \n return zip;\n }\n \n+ /**\n+ * Creates a URL and opens a connection.\n+ *\n+ * If the URL returns a 404, {@code null} is returned, otherwise the open URL opject is returned.\n+ */\n+ // pkg private for tests\n+ URL openUrl(String urlString) throws Exception {\n+ URL checksumUrl = new URL(urlString);\n+ HttpURLConnection connection = (HttpURLConnection)checksumUrl.openConnection();\n+ if (connection.getResponseCode() == 404) {\n+ return null;\n+ }\n+ return checksumUrl;\n+ }\n+\n private Path unzip(Path zip, Path pluginsDir) throws IOException, UserException {\n // unzip plugin to a staging temp dir\n ",
"filename": "distribution/tools/plugin-cli/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java",
"status": "modified"
},
{
"diff": "@@ -24,11 +24,13 @@\n import com.google.common.jimfs.Jimfs;\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cli.ExitCodes;\n import org.elasticsearch.cli.MockTerminal;\n import org.elasticsearch.cli.Terminal;\n import org.elasticsearch.cli.UserException;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.hash.MessageDigests;\n import org.elasticsearch.common.io.FileSystemUtils;\n import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.io.PathUtilsForTesting;\n@@ -62,7 +64,7 @@\n import java.nio.file.attribute.PosixFileAttributes;\n import java.nio.file.attribute.PosixFilePermission;\n import java.nio.file.attribute.UserPrincipal;\n-import java.security.KeyStore;\n+import java.security.MessageDigest;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.HashSet;\n@@ -751,19 +753,33 @@ private void installPlugin(MockTerminal terminal, boolean isBatch) throws Except\n skipJarHellCommand.execute(terminal, pluginZip, isBatch, env.v2());\n }\n \n- public void assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash) throws Exception {\n+ public MockTerminal assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash,\n+ String shaExtension, Function<byte[], String> shaCalculator) throws Exception {\n Tuple<Path, Environment> env = createEnv(fs, temp);\n Path pluginDir = createPluginDir(temp);\n Path pluginZip = createPlugin(name, pluginDir, false);\n InstallPluginCommand command = new InstallPluginCommand() {\n @Override\n- Path downloadZipAndChecksum(Terminal terminal, String urlString, Path tmpDir) throws Exception {\n+ Path downloadZip(Terminal terminal, String urlString, Path tmpDir) throws IOException {\n assertEquals(url, urlString);\n Path downloadedPath = tmpDir.resolve(\"downloaded.zip\");\n Files.copy(pluginZip, downloadedPath);\n return downloadedPath;\n }\n @Override\n+ URL openUrl(String urlString) throws Exception {\n+ String expectedUrl = url + shaExtension;\n+ if (expectedUrl.equals(urlString)) {\n+ // calc sha an return file URL to it\n+ Path shaFile = temp.apply(\"shas\").resolve(\"downloaded.zip\" + shaExtension);\n+ byte[] zipbytes = Files.readAllBytes(pluginZip);\n+ String checksum = shaCalculator.apply(zipbytes);\n+ Files.write(shaFile, checksum.getBytes(StandardCharsets.UTF_8));\n+ return shaFile.toUri().toURL();\n+ }\n+ return null;\n+ }\n+ @Override\n boolean urlExists(Terminal terminal, String urlString) throws IOException {\n return urlString.equals(url);\n }\n@@ -776,8 +792,15 @@ void jarHellCheck(Path candidate, Path pluginsDir) throws Exception {\n // no jarhell check\n }\n };\n- installPlugin(pluginId, env.v1(), command);\n+ MockTerminal terminal = installPlugin(pluginId, env.v1(), command);\n assertPlugin(name, pluginDir, env.v2());\n+ return terminal;\n+ }\n+\n+ public void assertInstallPluginFromUrl(String pluginId, String name, String url, String stagingHash) throws Exception {\n+ MessageDigest digest = MessageDigest.getInstance(\"SHA-512\");\n+ assertInstallPluginFromUrl(pluginId, name, url, stagingHash, \".sha512\",\n+ bytes -> MessageDigests.toHexString(digest.digest(bytes)));\n }\n \n public void testOfficalPlugin() throws Exception {\n@@ -813,7 +836,59 @@ public void testMavenPlatformPlugin() throws Exception {\n assertInstallPluginFromUrl(\"mygroup:myplugin:1.0.0\", \"myplugin\", url, null);\n }\n 
\n- // TODO: test checksum (need maven/official below)\n+ public void testMavenSha1Backcompat() throws Exception {\n+ String url = \"https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip\";\n+ MessageDigest digest = MessageDigest.getInstance(\"SHA-1\");\n+ MockTerminal terminal = assertInstallPluginFromUrl(\"mygroup:myplugin:1.0.0\", \"myplugin\", url, null,\n+ \".sha1\", bytes -> MessageDigests.toHexString(digest.digest(bytes)));\n+ assertTrue(terminal.getOutput(), terminal.getOutput().contains(\"sha512 not found, falling back to sha1\"));\n+ }\n+\n+ public void testOfficialShaMissing() throws Exception {\n+ String url = \"https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-\" + Version.CURRENT + \".zip\";\n+ MessageDigest digest = MessageDigest.getInstance(\"SHA-1\");\n+ UserException e = expectThrows(UserException.class, () ->\n+ assertInstallPluginFromUrl(\"analysis-icu\", \"analysis-icu\", url, null, \".sha1\",\n+ bytes -> MessageDigests.toHexString(digest.digest(bytes))));\n+ assertEquals(ExitCodes.IO_ERROR, e.exitCode);\n+ assertEquals(\"Plugin checksum missing: \" + url + \".sha512\", e.getMessage());\n+ }\n+\n+ public void testMavenShaMissing() throws Exception {\n+ String url = \"https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip\";\n+ UserException e = expectThrows(UserException.class, () ->\n+ assertInstallPluginFromUrl(\"mygroup:myplugin:1.0.0\", \"myplugin\", url, null, \".dne\", bytes -> null));\n+ assertEquals(ExitCodes.IO_ERROR, e.exitCode);\n+ assertEquals(\"Plugin checksum missing: \" + url + \".sha1\", e.getMessage());\n+ }\n+\n+ public void testInvalidShaFile() throws Exception {\n+ String url = \"https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-\" + Version.CURRENT + \".zip\";\n+ MessageDigest digest = MessageDigest.getInstance(\"SHA-512\");\n+ UserException e = expectThrows(UserException.class, () ->\n+ assertInstallPluginFromUrl(\"analysis-icu\", \"analysis-icu\", url, null, \".sha512\",\n+ bytes -> MessageDigests.toHexString(digest.digest(bytes)) + \"\\nfoobar\"));\n+ assertEquals(ExitCodes.IO_ERROR, e.exitCode);\n+ assertTrue(e.getMessage(), e.getMessage().startsWith(\"Invalid checksum file\"));\n+ }\n+\n+ public void testSha512Mismatch() throws Exception {\n+ String url = \"https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-\" + Version.CURRENT + \".zip\";\n+ UserException e = expectThrows(UserException.class, () ->\n+ assertInstallPluginFromUrl(\"analysis-icu\", \"analysis-icu\", url, null, \".sha512\",\n+ bytes -> \"foobar\"));\n+ assertEquals(ExitCodes.IO_ERROR, e.exitCode);\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"SHA-512 mismatch, expected foobar\"));\n+ }\n+\n+ public void testSha1Mismatch() throws Exception {\n+ String url = \"https://repo1.maven.org/maven2/mygroup/myplugin/1.0.0/myplugin-1.0.0.zip\";\n+ UserException e = expectThrows(UserException.class, () ->\n+ assertInstallPluginFromUrl(\"mygroup:myplugin:1.0.0\", \"myplugin\", url, null,\n+ \".sha1\", bytes -> \"foobar\"));\n+ assertEquals(ExitCodes.IO_ERROR, e.exitCode);\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"SHA-1 mismatch, expected foobar\"));\n+ }\n \n public void testKeystoreNotRequired() throws Exception {\n Tuple<Path, Environment> env = createEnv(fs, temp);",
"filename": "distribution/tools/plugin-cli/src/test/java/org/elasticsearch/plugins/InstallPluginCommandTests.java",
"status": "modified"
}
]
} |
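A minimal sketch of the checksum-fallback pattern the record above describes: try the `.sha512` sidecar first, fall back to `.sha1` with a warning, and compare hex digests of the downloaded archive. This is illustrative only and not the Elasticsearch implementation; the class and method names, the warning wording, and the plain-JDK `HttpURLConnection`/`MessageDigest` usage are assumptions made for the example.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative sketch: verify a downloaded zip against a .sha512 sidecar, falling back to .sha1. */
final class ChecksumFallbackSketch {

    /** Returns the first token of the checksum file at urlString, or null if the server answers 404. */
    static String fetchChecksum(String urlString) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) new URL(urlString).openConnection();
        if (connection.getResponseCode() == 404) {
            return null;
        }
        try (InputStream in = connection.getInputStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8).trim();
            return body.split("\\s+")[0]; // checksum files may also contain the file name
        }
    }

    /** Hex-encodes the digest of data under the given algorithm, e.g. "SHA-512" or "SHA-1". */
    static String hexDigest(String algorithm, byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance(algorithm).digest(data);
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    static void verify(Path zip, String zipUrl) throws IOException, NoSuchAlgorithmException {
        String expected = fetchChecksum(zipUrl + ".sha512");
        String algorithm = "SHA-512";
        if (expected == null) {
            // sha1 fallback, kept only to ease the transition for older plugin releases
            System.err.println("Warning: sha512 not found, falling back to sha1");
            expected = fetchChecksum(zipUrl + ".sha1");
            algorithm = "SHA-1";
        }
        if (expected == null) {
            throw new IOException("Plugin checksum missing: " + zipUrl + ".sha512");
        }
        String actual = hexDigest(algorithm, Files.readAllBytes(zip));
        if (!expected.equalsIgnoreCase(actual)) {
            throw new IOException(algorithm + " mismatch, expected " + expected + " but got " + actual);
        }
    }
}
```

A caller would invoke something like `ChecksumFallbackSketch.verify(downloadedZip, zipUrl)` right after the download completes; digests are compared case-insensitively so either hex casing in the sidecar file is accepted.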
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 5.5.2, Build: b2f0c09/2017-08-14T12:33:14.154Z, JVM: 1.8.0_121\r\n\r\n**Plugins installed**:\r\n* repository-hdfs\r\n* x-pack\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-tdc1-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux HOSTNAME 3.0.101-0.113.TDC.1.R.0-default #1 SMP Fri Dec 9 04:51:20 PST 2016 (ca32437) x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nListing snapshots for the repository-hdfs plugin snapshot repository should work, but instead there is a `java.lang.SecurityException: access denied (\"java.lang.reflect.ReflectPermission\" \"suppressAccessChecks\")` from the JVM security manager.\r\n\r\n**Steps to reproduce**:\r\n 1. Install Elasticsearch\r\n 2. Install repository-hdfs plugin\r\n 3. Create Elasticsearch snapshot repository pointing to HDFS\r\n 4. Try to list snapshots from that repository (`curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' \"https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty\"`)\r\n\r\n**Provide logs (if relevant)**:\r\nStacktrace from missing security policy permission\r\n```\r\n...\r\n\r\n# first call to curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' \"https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty\"\r\n\r\norg.elasticsearch.transport.RemoteTransportException: [master-HOSTNAME][IP_ADDRESS:9301][cluster:admin/snapshot/get]\r\nCaused by: java.lang.SecurityException: access denied (\"java.lang.reflect.ReflectPermission\" \"suppressAccessChecks\")\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]\r\n at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]\r\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]\r\n at java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:128) ~[?:1.8.0_121]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:396) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getServerDefaults(Unknown Source) ~[?:?]\r\n at 
org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:640) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:1755) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1761) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:210) ~[?:?]\r\n at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:581) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2933) ~[?:?]\r\n at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:815) ~[?:?]\r\n at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:740) ~[?:?]\r\n at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:706) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:647) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:918) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:974) ~[?:?]\r\n at java.io.DataInputStream.read(DataInputStream.java:100) ~[?:1.8.0_121]\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:79) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:762) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:166) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]\r\n\r\n...\r\n\r\n# second call to curl -i -u USERNAME -H 'Accept: application/json' -H 'Content-Type: application/json' \"https://$(hostname -f):9200/_snapshot/CLUSTERNAME/_all?pretty\"\r\n\r\n[2017-09-01T14:08:10,615][WARN ][r.suppressed ] path: /_snapshot/CLUSTERNAME/_all, params: {pretty=, repository=CLUSTERNAME, 
snapshot=_all}\r\norg.elasticsearch.transport.RemoteTransportException: [master-HOSTNAME][IP_ADDRESS:9301][cluster:admin/snapshot/get]\r\nCaused by: java.lang.IllegalStateException\r\n at com.google.common.base.Preconditions.checkState(Preconditions.java:129) ~[?:?]\r\n at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:160) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[?:?]\r\n at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]\r\n at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1681) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1665) ~[?:?]\r\n at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:257) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1806) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1802) ~[?:?]\r\n at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1808) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1767) ~[?:?]\r\n at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1726) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]\r\n at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]\r\n at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]\r\n at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:930) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:908) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:746) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:140) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:97) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:55) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:166) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at 
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.2.jar:5.5.2]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_121]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_121]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]\r\n```",
"comments": [
{
"body": "I was able to fix this with the following:\r\n```\r\n# Fix for security permission error\r\ncat /home/elasticsearch/.java.policy\r\ngrant {\r\n permission java.lang.reflect.ReflectPermission \"suppressAccessChecks\";\r\n};\r\n```",
"created_at": "2017-09-05T20:58:13Z"
},
{
"body": "Interestingly the repository-hdfs plugin already has a Java security policy file that requests `suppressAccessChecks` permissions. https://github.com/elastic/elasticsearch/blob/master/plugins/repository-hdfs/src/main/plugin-metadata/plugin-security.policy#L26 It looks like from the stacktrace that the error doesn't go through any of the repository-hdfs classes however.",
"created_at": "2017-09-05T20:59:31Z"
},
{
"body": "As you point out, the problem here is that the call is coming from core where we do not and will not grant `suppressAccessChecks`.",
"created_at": "2017-09-05T21:01:33Z"
},
{
"body": "@jbaiera Would you take a look please?",
"created_at": "2017-09-05T21:01:59Z"
},
{
"body": "@jasontedor yea I agree that adding `suppressAccessChecks` to the whole JVM is not preferred but didn't have a good way to work around it in the short term. We are currently moving from 2.x to 5.x for Elasticsearch and this was a pain point on Friday. Since we are in a development environment, I can provide/test other things if necessary.",
"created_at": "2017-09-05T21:56:27Z"
},
{
"body": "If I were to guess, the bulk of this issue comes from trying to get server defaults for the encryption introduced in Hadoop.\r\n\r\n```\r\n at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:640) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:1755) ~[?:?]\r\n at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1761) ~[?:?]\r\n```\r\nhttps://github.com/apache/hadoop/blob/branch-2.8.1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L1760\r\n\r\nI didn't see a quick way to short circuit this process by adding a Hadoop conf setting saying that encryption is either enabled or not. It looks like this code has to go to the Namenode.\r\n\r\nI was going to try `webhdfs://` instead of `hdfs://` but saw that the repository-hdfs plugin explicitly checks for the scheme to be `hdfs://`.\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/master/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java#L87\r\n\r\nSome of these issues might go away if the plugin were to allow `webhdfs://` as an option since that is over HTTP instead of direct RPC calls. I saw #24455 is something related to that idea.",
"created_at": "2017-09-06T21:10:03Z"
},
{
"body": "Indeed the problem seems to actually be here: https://github.com/apache/hadoop/blob/branch-2.8.1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L640\r\n\r\n```\r\n serverDefaults = namenode.getServerDefaults();\r\n```\r\n\r\nSo it's trying to do reflection in `namenode` something that we do not allow to do that? I say this from\r\n\r\n```\r\nat com.sun.proxy.$Proxy34.getServerDefaults(Unknown Source) ~[?:?]\r\n```\r\n\r\nWonder if we could use the `dfs.trustedchannel.resolver.class` setting to force this to be bypassed: https://github.com/apache/hadoop/blob/branch-2.8.1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java#L206 ?",
"created_at": "2017-09-06T21:59:46Z"
},
{
"body": ">As you point out, the problem here is that the call is coming from core where we do not and will not grant suppressAccessChecks.\r\n\r\nThis is correct. Since there's a core utility in the stack trace (`org.elasticsearch.common.io.Streams.copy`) it's tripping up the security manager. What should really be happening is the stream should be wrapped in something that does the HDFS calls with privileges. Took a quick look and found that this seems to already be done in master with https://github.com/elastic/elasticsearch/pull/22793. Perhaps this just needs to be backported to the 5.x branches.",
"created_at": "2017-09-07T03:00:42Z"
},
{
"body": "#22793 looks like it could help. Elasticsearch 6.0 beta 2 would have that change right? I should be able to try it to see if it fixes this issue. If it works then would be great to see it backported to 5.x.",
"created_at": "2017-09-07T09:28:54Z"
},
{
"body": "It looks like #22793 is the only non test code change to repository-hdfs that isn't in the 5.x line.",
"created_at": "2017-09-07T09:52:45Z"
},
{
"body": "There's a reason that #22793 is not in 5.6. That change was predicated on removing socket permissions from core which was only done in 6.0+. Thus, we can not simply backport that change even if it picked cleanly (which it doesn't), it was engineered with different assumptions than are present in 5.6. It's plausible that some of the code in #22793 will help address the issue here but instead we have to:\r\n - have a failing test case on 5.6\r\n - pull only the relevant changes from #22793 into 5.6\r\n - prepare a new patch for 5.6 that goes through code review",
"created_at": "2017-09-07T13:25:30Z"
},
{
"body": "So I found the case where this can happen. I'm trying to boil it down to the simplest steps. From what I've gathered so far:\r\n* requires a readonly HDFS repository to be created and point to a location that isn't empty\r\n* requires that no other HDFS repository operations have been done already (like take a snapshot)\r\n* requires trying to list the readonly HDFS repository\r\n\r\nFor reference my test case is here:\r\n* https://github.com/risdenk/elasticsearch_hdfs_kerberos_testing\r\n* https://travis-ci.org/risdenk/elasticsearch_hdfs_kerberos_testing\r\n\r\nI've haven't had a chance to see if this is fixed in 6.x or master yet. I've been focusing on trying to recreate in a standalone test case with 5.5.2.",
"created_at": "2017-09-09T00:52:24Z"
},
{
"body": "Thank you so much @risdenk, that's awesome effort and much appreciated. @jbaiera will work closely with you on this starting next week. And 5.5.2 is definitely the right place to focus efforts (or the 5.6 branch is fine too).",
"created_at": "2017-09-09T00:59:54Z"
},
{
"body": "The code path differs for a readonly repository in this method:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L895\r\n\r\n**Non Read Only Path**\r\n\r\nThe `BlobStoreRepository.getRepositoryData()` method calls `BlobStoreRepository.latestIndexBlobId()` which in in the non readonly case will eventually call the `DFSClient.getServerDefaults()` method with the correct filecontext through `HdfsBlobContainer`. This eventually filters down to the correct HDFS blobstore implementation with `doPrivileged`. The `DFSClient.getServerDefaults()` is then cached for 60 minutes so the next call without the file context from `Streams.copy` works.\r\n\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L746\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L895\r\n* https://github.com/elastic/elasticsearch/blob/5.5/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java#L141\r\n* https://github.com/elastic/elasticsearch/blob/5.5/plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java#L129\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L762\r\n\r\n**Failure Read Only Path**\r\n\r\nThe `BlobStoreRepository.getRepositoryData()` method calls `BlobStoreRepository.latestIndexBlobId()` which in in the readonly case doesn't call out to HDFS it looks like. This then means no cached server detaults in `DFSClient.getServerDefaults()` and so the next call without the file context from `Streams.copy` fails to work.\r\n\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L746\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L895\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L921\r\n* https://github.com/elastic/elasticsearch/blob/5.5/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L762\r\n* Failure due to `DFSClient` and `suppressAccessChecks`",
"created_at": "2017-09-09T01:13:36Z"
},
{
"body": "Side note: interestingly travis ci doesn't seem to fail where as locally on my Mac it does. All the testing is in Docker so not sure what the difference is right now.",
"created_at": "2017-09-09T01:38:04Z"
},
{
"body": "For reference, I switched the base Elasticsearch image to `6.0.0-beta2` and could not reproduce the same error. The HDFS repository lists correctly even with readonly.",
"created_at": "2017-09-09T01:43:11Z"
},
{
"body": "@risdenk Thanks for all the work toward distilling the issue down to a few simple steps. This is incredibly helpful information. I'll start working toward getting a failing test built and then we'll know where to go from there.",
"created_at": "2017-09-12T14:25:42Z"
},
{
"body": "@jbaiera - Glad the information is helpful.\r\n\r\nI finally got the `gradle integTestSecure` working and even after editing the `secure_hdfs_repository` REST test yaml files I wasn't able to reproduce the problem. I can logically see how the code path I'm hitting in my environment is hit but seem to have a hard time producing the issue in the test suite.\r\n\r\nI ran into a few problems with Vagrant 2.0 and on a Mac the vagrant test fixture just stops. On a linux box the test suite works ok.",
"created_at": "2017-09-12T15:49:23Z"
},
{
"body": "@risdenk I wanted to check in and share what I've got on this: I also was running into issues with reproducing this via the rest integration test tooling. I decided to try and reproduce by standing up the local hdfs fixture with a local elasticsearch instance and re-tracing your provided steps. This did indeed reproduce the error you've described.\r\n\r\nThis raises the question why it doesn't cleanly reproduce in the rest integration test framework. Diving deeper uncovered what looks like two unrelated bugs (one with the repository, and one potentially in the HDFS client) but hasn't yielded any answers for why it doesn't reproduce. I'll update back here with more info as it's available.",
"created_at": "2017-09-12T21:27:29Z"
},
{
"body": "@jbaiera Glad I'm not crazy it doesn't reproduce in the integration test but it does if you stand up a cluster. Last night I was able to get it to reproduce once in the rest integration test framework.\r\n\r\nThe branch for this is here: https://github.com/risdenk/elasticsearch/tree/test_readonly_repository\r\n\r\nI created this from the 5.5 branch. https://github.com/elastic/elasticsearch/compare/5.5...risdenk:test_readonly_repository\r\n\r\n",
"created_at": "2017-09-13T14:53:36Z"
},
{
"body": "From that branch I was able to recreate the `access denied` with `gradle integTestSecure` from the `plugins/repository-hdfs` directory.",
"created_at": "2017-09-13T15:00:05Z"
},
{
"body": "Checking in again on this since it's been the parent of a few real brain teasers:\r\n\r\nI was able to get the rest test framework to reproduce the error, but only when the test case that reproduces it is run relatively early in the suite. This is a problem since our testing suite has no real sense of ordering (The test spec names with the leading numbers are a bit misleading). I found that one of the reasons that the integration test passes when the test case is run later is because the suppress access check permissions are being skipped since the methods in question have already been set accessible by privileged code in previous test cases.\r\n\r\nThat being said, since there is at least a test that reproduces it now, I'll just get a PR up for fixing it soon. We'll have to just accept that the ordering of the tests matters for reproducing it since the state of the HDFS Client doesn't give us much choice.\r\n",
"created_at": "2017-09-15T21:42:34Z"
},
{
"body": "> This is a problem since our testing suite has no real sense of ordering (The test spec names with the leading numbers are a bit misleading).\r\n\r\nAh that explains a lot! I couldn't figure out why the integration tests passed when I explicitly set the order with the leading numbers.\r\n\r\nDo the tests run in parallel as well? If each test setup and tore down the repository there shouldn't be any overlap since the HDFS client is per repository. ",
"created_at": "2017-09-16T15:09:22Z"
},
{
"body": "The overlap in this scenario comes from all the tests executing on the same JVM and class instances. By the time the reproducing test case runs, previous test cases in the suite have already set the methods in question to be accessible from privileged callsites. When the reproducing test case attempts to run, the methods that it would normally try to set accessible are already accessible. The code skips trying to set them, and thusly avoids the security exceptions.\r\n\r\nThat said, there are still some other discrepancies between running the rest test suite and running against a local cluster that I haven't pinned down yet, but my hunch so far has been that the differences are more likely caused by how rapid the calls are made than the substance of the calls themselves. The HDFS client is pretty sensitive when a security manager is installed.",
"created_at": "2017-09-18T16:19:15Z"
},
{
"body": "After getting the requisite vagrant testing support commits backported to the 5.x branches and shoring up the last of the changes needed, I've opened #26714 for this.",
"created_at": "2017-09-19T17:23:59Z"
},
{
"body": "Thanks @jbaiera!",
"created_at": "2017-09-19T18:04:08Z"
},
{
"body": "I have merged #26714 and backported to the 5.5.x branch. This should make it into a release soon.",
"created_at": "2017-09-21T15:59:02Z"
}
],
"number": 26513,
"title": "java.lang.SecurityException: access denied (\"java.lang.reflect.ReflectPermission\" \"suppressAccessChecks\") for :Plugin Repository HDFS"
} | {
"body": "This PR is specific to the 5.x line, as #22793 in master and 6.x, while unrelated, fixes this problem.\r\n\r\nWhen a user goes to list the available snapshots under a `readonly` HDFDS repository, before any other repository actions are performed, the requests will be met with a security exception. In this scenario, certain methods within the RPC layer have yet to be set accessible for usage in HDFS's dynamic-proxy-based RPC client. Normally, these methods would be set accessible during a privileged call in the validation step, but this process is skipped for `readonly` repositories. Instead, the security check is made to see if the code allows for `supressAccessChecks`. While the HDFS repository has these permissions, the core code base that is on the stack trace does not, and thus, a security exception is thrown for that permission.\r\n\r\nThis PR adds a reproducing test case for the behavior and backports the relevant portions of #22793 - Namely the HDFSPrivilegedInputStream. Additional validations of permissions within privileged blocks are added to the privileged input stream. These validations will be forward-ported to master in a different PR (link).\r\n\r\nRelates #26513 ",
"number": 26714,
"review_comments": [],
"title": "Fix permission errors when using Read Only HDFS Repository"
} | {
"commits": [
{
"message": "Add test to reproduce readonly repository bug.\n\nMiniHDFS will now start with an existing repository with a single snapshot contained within.\nReadonly Repository is created in tests and attempts to list the snapshots within this repo.\nCorrecting typos..."
},
{
"message": "Backport \"Add doPrivilege blocks for socket connect ops in repository-hdfs (#22793)\"\n\nOnly pulled the relevant changes - such as the Priveleged input stream implementation for HDFS."
},
{
"message": "Adding special permission checks to the HDFS privileged stream.\nLimiting the permissions during privileged executions to the same ones used by the rest of the privileged code."
}
],
"files": [
{
"diff": "@@ -25,30 +25,37 @@\n import org.apache.hadoop.fs.Options.CreateOpts;\n import org.apache.hadoop.fs.Path;\n import org.apache.hadoop.fs.PathFilter;\n+import org.elasticsearch.SpecialPermission;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.blobstore.BlobMetaData;\n import org.elasticsearch.common.blobstore.BlobPath;\n import org.elasticsearch.common.blobstore.support.AbstractBlobContainer;\n import org.elasticsearch.common.blobstore.support.PlainBlobMetaData;\n import org.elasticsearch.repositories.hdfs.HdfsBlobStore.Operation;\n \n+import java.io.FilterInputStream;\n import java.io.IOException;\n import java.io.InputStream;\n import java.nio.file.FileAlreadyExistsException;\n import java.nio.file.NoSuchFileException;\n+import java.security.AccessController;\n+import java.security.PrivilegedActionException;\n+import java.security.PrivilegedExceptionAction;\n import java.util.Collections;\n import java.util.EnumSet;\n import java.util.LinkedHashMap;\n import java.util.Map;\n \n final class HdfsBlobContainer extends AbstractBlobContainer {\n private final HdfsBlobStore store;\n+ private final HdfsSecurityContext securityContext;\n private final Path path;\n private final int bufferSize;\n \n- HdfsBlobContainer(BlobPath blobPath, HdfsBlobStore store, Path path, int bufferSize) {\n+ HdfsBlobContainer(BlobPath blobPath, HdfsBlobStore store, Path path, int bufferSize, HdfsSecurityContext hdfsSecurityContext) {\n super(blobPath);\n this.store = store;\n+ this.securityContext = hdfsSecurityContext;\n this.path = path;\n this.bufferSize = bufferSize;\n }\n@@ -101,7 +108,10 @@ public InputStream readBlob(String blobName) throws IOException {\n return store.execute(new Operation<InputStream>() {\n @Override\n public InputStream run(FileContext fileContext) throws IOException {\n- return fileContext.open(new Path(path, blobName), bufferSize);\n+ // FSDataInputStream can open connections on read() or skip() so we wrap in\n+ // HDFSPrivilegedInputSteam which will ensure that underlying methods will\n+ // be called with the proper privileges.\n+ return new HDFSPrivilegedInputSteam(fileContext.open(new Path(path, blobName), bufferSize), securityContext);\n }\n });\n }\n@@ -161,4 +171,59 @@ public boolean accept(Path path) {\n public Map<String, BlobMetaData> listBlobs() throws IOException {\n return listBlobsByPrefix(null);\n }\n+\n+ /**\n+ * Exists to wrap underlying InputStream methods that might need to make connections or\n+ * perform actions within doPrivileged blocks. 
The HDFS Client performs a lot underneath\n+ * the FSInputStream, including making connections and executing reflection based RPC calls.\n+ */\n+ private static class HDFSPrivilegedInputSteam extends FilterInputStream {\n+\n+ private final HdfsSecurityContext securityContext;\n+\n+ HDFSPrivilegedInputSteam(InputStream in, HdfsSecurityContext hdfsSecurityContext) {\n+ super(in);\n+ this.securityContext = hdfsSecurityContext;\n+ }\n+\n+ public int read() throws IOException {\n+ return doPrivilegedOrThrow(in::read);\n+ }\n+\n+ public int read(byte b[]) throws IOException {\n+ return doPrivilegedOrThrow(() -> in.read(b));\n+ }\n+\n+ public int read(byte b[], int off, int len) throws IOException {\n+ return doPrivilegedOrThrow(() -> in.read(b, off, len));\n+ }\n+\n+ public long skip(long n) throws IOException {\n+ return doPrivilegedOrThrow(() -> in.skip(n));\n+ }\n+\n+ public int available() throws IOException {\n+ return doPrivilegedOrThrow(() -> in.available());\n+ }\n+\n+ public synchronized void reset() throws IOException {\n+ doPrivilegedOrThrow(() -> {\n+ in.reset();\n+ return null;\n+ });\n+ }\n+\n+ private <T> T doPrivilegedOrThrow(PrivilegedExceptionAction<T> action) throws IOException {\n+ SecurityManager sm = System.getSecurityManager();\n+ if (sm != null) {\n+ // unprivileged code such as scripts do not have SpecialPermission\n+ sm.checkPermission(new SpecialPermission());\n+ }\n+ try {\n+ return AccessController.doPrivileged(action, null, securityContext.getRestrictedExecutionPermissions());\n+ } catch (PrivilegedActionException e) {\n+ throw (IOException) e.getCause();\n+ }\n+ }\n+ }\n }",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobContainer.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,7 @@ public String toString() {\n \n @Override\n public BlobContainer blobContainer(BlobPath path) {\n- return new HdfsBlobContainer(path, this, buildHdfsPath(path), bufferSize);\n+ return new HdfsBlobContainer(path, this, buildHdfsPath(path), bufferSize, this.securityContext);\n }\n \n private Path buildHdfsPath(BlobPath blobPath) {",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -139,7 +139,7 @@ private FileContext createContext(URI uri, Settings repositorySettings) {\n hadoopConfiguration.setBoolean(\"fs.hdfs.impl.disable.cache\", true);\n \n // Create the filecontext with our user information\n- // This will correctly configure the filecontext to have our UGI as it's internal user.\n+ // This will correctly configure the filecontext to have our UGI as its internal user.\n return ugi.doAs((PrivilegedAction<FileContext>) () -> {\n try {\n AbstractFileSystem fs = AbstractFileSystem.get(uri, hadoopConfiguration);",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsRepository.java",
"status": "modified"
},
{
"diff": "@@ -26,8 +26,6 @@\n import java.nio.file.Path;\n import java.security.Permission;\n import java.util.Arrays;\n-import java.util.Locale;\n-import java.util.function.Supplier;\n import javax.security.auth.AuthPermission;\n import javax.security.auth.PrivateCredentialPermission;\n import javax.security.auth.kerberos.ServicePermission;\n@@ -41,7 +39,7 @@\n * Oversees all the security specific logic for the HDFS Repository plugin.\n *\n * Keeps track of the current user for a given repository, as well as which\n- * permissions to grant the blob store restricted execution methods.\n+ * permissions to grant to privileged methods inside the BlobStore.\n */\n class HdfsSecurityContext {\n \n@@ -56,7 +54,9 @@ class HdfsSecurityContext {\n // 1) hadoop dynamic proxy is messy with access rules\n new ReflectPermission(\"suppressAccessChecks\"),\n // 2) allow hadoop to add credentials to our Subject\n- new AuthPermission(\"modifyPrivateCredentials\")\n+ new AuthPermission(\"modifyPrivateCredentials\"),\n+ // 3) RPC Engine requires this for re-establishing pooled connections over the lifetime of the client\n+ new PrivateCredentialPermission(\"org.apache.hadoop.security.Credentials * \\\"*\\\"\", \"read\")\n };\n \n // If Security is enabled, we need all the following elevated permissions:",
"filename": "plugins/repository-hdfs/src/main/java/org/elasticsearch/repositories/hdfs/HdfsSecurityContext.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,29 @@\n+# Integration tests for HDFS Repository plugin\n+#\n+# Tests retrieving information about snapshot\n+#\n+---\n+\"Get a snapshot - readonly\":\n+ # Create repository\n+ - do:\n+ snapshot.create_repository:\n+ repository: test_snapshot_repository_ro\n+ body:\n+ type: hdfs\n+ settings:\n+ uri: \"hdfs://localhost:9999\"\n+ path: \"/user/elasticsearch/existing/readonly-repository\"\n+ readonly: true\n+\n+ # List snapshot info\n+ - do:\n+ snapshot.get:\n+ repository: test_snapshot_repository_ro\n+ snapshot: \"_all\"\n+\n+ - length: { snapshots: 1 }\n+\n+ # Remove our repository\n+ - do:\n+ snapshot.delete_repository:\n+ repository: test_snapshot_repository_ro",
"filename": "plugins/repository-hdfs/src/test/resources/rest-api-spec/test/hdfs_repository/30_snapshot_readonly.yaml",
"status": "added"
},
{
"diff": "@@ -0,0 +1,31 @@\n+# Integration tests for HDFS Repository plugin\n+#\n+# Tests retrieving information about snapshot\n+#\n+---\n+\"Get a snapshot - readonly\":\n+ # Create repository\n+ - do:\n+ snapshot.create_repository:\n+ repository: test_snapshot_repository_ro\n+ body:\n+ type: hdfs\n+ settings:\n+ uri: \"hdfs://localhost:9998\"\n+ path: \"/user/elasticsearch/existing/readonly-repository\"\n+ security:\n+ principal: \"elasticsearch@BUILD.ELASTIC.CO\"\n+ readonly: true\n+\n+ # List snapshot info\n+ - do:\n+ snapshot.get:\n+ repository: test_snapshot_repository_ro\n+ snapshot: \"_all\"\n+\n+ - length: { snapshots: 1 }\n+\n+ # Remove our repository\n+ - do:\n+ snapshot.delete_repository:\n+ repository: test_snapshot_repository_ro",
"filename": "plugins/repository-hdfs/src/test/resources/rest-api-spec/test/secure_hdfs_repository/30_snapshot_readonly.yaml",
"status": "added"
},
{
"diff": "@@ -19,7 +19,9 @@\n \n package hdfs;\n \n+import java.io.File;\n import java.lang.management.ManagementFactory;\n+import java.net.URL;\n import java.nio.charset.StandardCharsets;\n import java.nio.file.Files;\n import java.nio.file.Path;\n@@ -29,9 +31,11 @@\n import java.util.Arrays;\n import java.util.List;\n \n+import org.apache.commons.io.FileUtils;\n import org.apache.hadoop.conf.Configuration;\n import org.apache.hadoop.fs.CommonConfigurationKeysPublic;\n import org.apache.hadoop.fs.FileSystem;\n+import org.apache.hadoop.fs.FileUtil;\n import org.apache.hadoop.fs.permission.AclEntry;\n import org.apache.hadoop.fs.permission.AclEntryType;\n import org.apache.hadoop.fs.permission.FsAction;\n@@ -100,15 +104,35 @@ public static void main(String[] args) throws Exception {\n }\n MiniDFSCluster dfs = builder.build();\n \n- // Set the elasticsearch user directory up\n- if (UserGroupInformation.isSecurityEnabled()) {\n- FileSystem fs = dfs.getFileSystem();\n- org.apache.hadoop.fs.Path esUserPath = new org.apache.hadoop.fs.Path(\"/user/elasticsearch\");\n+ // Configure contents of the filesystem\n+ org.apache.hadoop.fs.Path esUserPath = new org.apache.hadoop.fs.Path(\"/user/elasticsearch\");\n+ try (FileSystem fs = dfs.getFileSystem()) {\n+\n+ // Set the elasticsearch user directory up\n fs.mkdirs(esUserPath);\n- List<AclEntry> acls = new ArrayList<>();\n- acls.add(new AclEntry.Builder().setType(AclEntryType.USER).setName(\"elasticsearch\").setPermission(FsAction.ALL).build());\n- fs.modifyAclEntries(esUserPath, acls);\n- fs.close();\n+ if (UserGroupInformation.isSecurityEnabled()) {\n+ List<AclEntry> acls = new ArrayList<>();\n+ acls.add(new AclEntry.Builder().setType(AclEntryType.USER).setName(\"elasticsearch\").setPermission(FsAction.ALL).build());\n+ fs.modifyAclEntries(esUserPath, acls);\n+ }\n+\n+ // Install a pre-existing repository into HDFS\n+ String directoryName = \"readonly-repository\";\n+ String archiveName = directoryName + \".tar.gz\";\n+ URL readOnlyRepositoryArchiveURL = MiniHDFS.class.getClassLoader().getResource(archiveName);\n+ if (readOnlyRepositoryArchiveURL != null) {\n+ Path tempDirectory = Files.createTempDirectory(MiniHDFS.class.getName());\n+ File readOnlyRepositoryArchive = tempDirectory.resolve(archiveName).toFile();\n+ FileUtils.copyURLToFile(readOnlyRepositoryArchiveURL, readOnlyRepositoryArchive);\n+ FileUtil.unTar(readOnlyRepositoryArchive, tempDirectory.toFile());\n+\n+ fs.copyFromLocalFile(true, true,\n+ new org.apache.hadoop.fs.Path(tempDirectory.resolve(directoryName).toAbsolutePath().toUri()),\n+ esUserPath.suffix(\"/existing/\" + directoryName)\n+ );\n+\n+ FileUtils.deleteDirectory(tempDirectory.toFile());\n+ }\n }\n \n // write our PID file",
"filename": "test/fixtures/hdfs-fixture/src/main/java/hdfs/MiniHDFS.java",
"status": "modified"
},
{
"diff": "",
"filename": "test/fixtures/hdfs-fixture/src/main/resources/readonly-repository.tar.gz",
"status": "added"
}
]
} |
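The fix in the record above wraps the HDFS input stream so each read runs inside a `doPrivileged` block limited to an explicit permission list. Below is a minimal sketch of that wrapper pattern, not the actual repository-hdfs code; the class name and the trimmed-down permission set (only `suppressAccessChecks` is shown) are assumptions made for the example.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.ReflectPermission;
import java.security.AccessController;
import java.security.Permission;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

/** Illustrative sketch: run each read of a wrapped stream inside a restricted doPrivileged block. */
final class PrivilegedReadStreamSketch extends FilterInputStream {

    // Only the permissions the underlying client is assumed to need; everything else stays denied.
    private static final Permission[] RESTRICTED_PERMISSIONS = {
        new ReflectPermission("suppressAccessChecks")
    };

    PrivilegedReadStreamSketch(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        return doPrivilegedOrThrow(in::read);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return doPrivilegedOrThrow(() -> in.read(b, off, len));
    }

    @Override
    public long skip(long n) throws IOException {
        return doPrivilegedOrThrow(() -> in.skip(n));
    }

    private <T> T doPrivilegedOrThrow(PrivilegedExceptionAction<T> action) throws IOException {
        try {
            // The explicit permission list limits what this privileged frame asserts.
            return AccessController.doPrivileged(action, null, RESTRICTED_PERMISSIONS);
        } catch (PrivilegedActionException e) {
            // The actions above only throw IOException, so the wrapped cause is safe to cast.
            throw (IOException) e.getCause();
        }
    }
}
```

Wrapping the stream once at creation time, for example `new PrivilegedReadStreamSketch(openedHdfsStream)` in a hypothetical caller, keeps privileged execution confined to the plugin even when core utilities later read from the stream without their own elevated permissions.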
{
"body": "It manifests itself by the following messages that might linger in the log files for a while after upgrade if x-pack is installed.\r\n\r\n```\r\n[2017-09-13T13:24:57,961][INFO ][o.e.c.m.TemplateUpgradeService] [node1] Starting template upgrade to version 5.6.0, 2 templates will be updated and 0 will be removed \r\n[2017-09-13T13:24:57,985][INFO ][o.e.c.m.TemplateUpgradeService] [node1] Finished upgrading templates to version 5.6.0 \r\n```\r\n\r\nThe problem is occurring because during application of templates the order of elements in the template mapping can get shuffled causing the follow-up check if update is need to fail. I am working on the fix.",
"comments": [
{
"body": "I'm still getting this with 6.2.1:\r\n```\r\n[2018-02-01T10:09:09,146][INFO ][o.e.c.m.TemplateUpgradeService] [analyzer01] Starting template upgrade to version 6.1.2, 1 templates will be updated and 0 will be removed\r\n[2018-02-01T10:09:09,260][INFO ][o.e.c.m.TemplateUpgradeService] [analyzer01] Finished upgrading templates to version 6.1.2\r\n[2018-02-01T10:09:18,168][INFO ][o.e.c.m.TemplateUpgradeService] [analyzer01] Starting template upgrade to version 6.1.2, 1 templates will be updated and 0 will be removed\r\n[2018-02-01T10:09:18,277][INFO ][o.e.c.m.TemplateUpgradeService] [analyzer01] Finished upgrading templates to version 6.1.2\r\n```",
"created_at": "2018-02-01T08:14:54Z"
},
{
"body": "I am encountering this issue when running the [rest-api YAML tests](https://github.com/elastic/elasticsearch/tree/master/rest-api-spec/src/main/resources/rest-api-spec/test) with the Ruby client in Docker. \r\n\r\nThe behavior observed: The rest api tests were passing when I ran elasticsearch and the tests outside docker. But when I ran both elasticsearch and the tests in docker, ES stopped responding halfway through the tests. I thought maybe Docker/ES were running out of memory. [Here](https://clients-ci.elastic.co/job/elastic+elasticsearch-ruby+pull-request/28/ELASTICSEARCH_VERSION=6.5.0,RUBY_TEST_VERSION=2.6.1,TEST_SUITE=rest_api,label=linux/console) is an example of the error on Jenkins. I've put the error in [this gist](https://gist.github.com/estolfo/e9a2bd7daa1ddb40a74e60d910a5a9b0) as well, in case the Jenkins job is no longer available when this issue is investigated.\r\n\r\nAfter inspecting the `pending_tasks` queue and correlating the tasks with parts of the Elasticsearch codebase, I found that the `TemplateUpgradeService` was running in between each test when we were deleting all index templates. The code run between each test was: ` $client.indices.delete_template(name: '*')`\r\n\r\nI’m guessing the cause of the issue is that the `TemplateUpgradeService` can get itself into a deadlock if called too often. The only way I was able to resolve this was to not call `delete_template `with `name: '*'` and instead delete specific templates in between tests.\r\n\r\nPlease let me know if there's any other information you need to investigate or if I can help reproduce the issue.",
"created_at": "2019-02-13T16:20:28Z"
}
],
"number": 26673,
"title": "TemplateUpgradeService get stuck in repeatedly upgrading templates after upgrade to 5.6.0"
} | {
"body": "TemplateUpgradeService might get stuck in repeatedly upgrading templates after upgrade to 5.6.0. This is caused by shuffling mappings definition in the template during template serialization. This commit makes the template serialization consistent.\r\n\r\nCloses #26673\r\n",
"number": 26698,
"review_comments": [],
"title": "Upgrade API: fix excessive logging and unnecessary template updates"
} | {
"commits": [
{
"message": "Upgrade API: fix excessive logging and unnecessary template updates\n\nTemplateUpgradeService might get stuck in repeatedly upgrading templates after upgrade to 5.6.0. This is caused by shuffling mappings definition in the template during template serialization. This commit makes the template serialization consistent.\n\nCloses #26673"
}
],
"files": [
{
"diff": "@@ -26,7 +26,6 @@\n import org.elasticsearch.cluster.Diff;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesArray;\n-import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.compress.CompressedXContent;\n@@ -405,7 +404,7 @@ public static void toInnerXContent(IndexTemplateMetaData indexTemplateMetaData,\n builder.startObject(\"mappings\");\n for (ObjectObjectCursor<String, CompressedXContent> cursor : indexTemplateMetaData.mappings()) {\n byte[] mappingSource = cursor.value.uncompressed();\n- Map<String, Object> mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), false).v2();\n+ Map<String, Object> mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), true).v2();\n if (mapping.size() == 1 && mapping.containsKey(cursor.key)) {\n // the type name is the root value, reduce it\n mapping = (Map<String, Object>) mapping.get(cursor.key);",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java",
"status": "modified"
},
{
"diff": "@@ -123,11 +123,11 @@ public void clusterChanged(ClusterChangedEvent event) {\n lastTemplateMetaData = templates;\n Optional<Tuple<Map<String, BytesReference>, Set<String>>> changes = calculateTemplateChanges(templates);\n if (changes.isPresent()) {\n- logger.info(\"Starting template upgrade to version {}, {} templates will be updated and {} will be removed\",\n- Version.CURRENT,\n- changes.get().v1().size(),\n- changes.get().v2().size());\n if (updatesInProgress.compareAndSet(0, changes.get().v1().size() + changes.get().v2().size())) {\n+ logger.info(\"Starting template upgrade to version {}, {} templates will be updated and {} will be removed\",\n+ Version.CURRENT,\n+ changes.get().v1().size(),\n+ changes.get().v2().size());\n threadPool.generic().execute(() -> updateTemplates(changes.get().v1(), changes.get().v2()));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/TemplateUpgradeService.java",
"status": "modified"
},
{
"diff": "@@ -20,17 +20,27 @@\n \n import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Base64;\n import java.util.Collections;\n \n+import static java.util.Collections.singletonMap;\n import static org.elasticsearch.cluster.metadata.AliasMetaData.newAliasMetaDataBuilder;\n+import static org.hamcrest.CoreMatchers.equalTo;\n \n public class IndexTemplateMetaDataTests extends ESTestCase {\n \n@@ -78,4 +88,36 @@ public void testIndexTemplateMetaData510() throws IOException {\n }\n }\n \n+ public void testIndexTemplateMetaDataXContentRoundTrip() throws Exception {\n+ ToXContent.Params params = new ToXContent.MapParams(singletonMap(\"reduce_mappings\", \"true\"));\n+\n+ String template = \"{\\\"index_patterns\\\" : [ \\\".test-*\\\" ],\\\"order\\\" : 1000,\" +\n+ \"\\\"settings\\\" : {\\\"number_of_shards\\\" : 1,\\\"number_of_replicas\\\" : 0},\" +\n+ \"\\\"mappings\\\" : {\\\"doc\\\" :\" +\n+ \"{\\\"properties\\\":{\\\"\" +\n+ randomAlphaOfLength(10) + \"\\\":{\\\"type\\\":\\\"text\\\"},\\\"\" +\n+ randomAlphaOfLength(10) + \"\\\":{\\\"type\\\":\\\"keyword\\\"}}\" +\n+ \"}}}\";\n+\n+ BytesReference templateBytes = new BytesArray(template);\n+ final IndexTemplateMetaData indexTemplateMetaData;\n+ try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, templateBytes, XContentType.JSON)) {\n+ indexTemplateMetaData = IndexTemplateMetaData.Builder.fromXContent(parser, \"test\");\n+ }\n+\n+ final BytesReference templateBytesRoundTrip;\n+ try (XContentBuilder builder = XContentBuilder.builder(JsonXContent.jsonXContent)) {\n+ builder.startObject();\n+ IndexTemplateMetaData.Builder.toXContent(indexTemplateMetaData, builder, params);\n+ builder.endObject();\n+ templateBytesRoundTrip = builder.bytes();\n+ }\n+\n+ final IndexTemplateMetaData indexTemplateMetaDataRoundTrip;\n+ try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, templateBytesRoundTrip, XContentType.JSON)) {\n+ indexTemplateMetaDataRoundTrip = IndexTemplateMetaData.Builder.fromXContent(parser, \"test\");\n+ }\n+ assertThat(indexTemplateMetaData, equalTo(indexTemplateMetaDataRoundTrip));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaDataTests.java",
"status": "modified"
}
]
} |
{
"body": "Right now the below aggregation is not possible even though the 'nested_agg' does return a single bucket and nested aggregation is a single bucket aggregation. \n\nBelow sample aggregation generates an error message that says 'nested_agg' is not a single bucket aggregation and can not be in the order path.\n\n```\n{\nbuckets: {\nterms: {\n field: 'docId',\n order: {'nested_agg>sum_value': 'desc'}\n},\naggs: {\n nested_agg: {\n nested: {\n path: 'my_nested_object'\n },\n aggs: {\n sum_value: {\n sum: {field: 'my_nested_object.value'}\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "@colings86 any thoughts on this?\n",
"created_at": "2016-02-29T00:24:56Z"
},
{
"body": "I managed to reproduce this on the master branch and now know why this is happening but I don't have a solution as to how we can fix it short of just documenting that you can't order by an aggregation within a nested aggregation.\n\nThe issue is that the `NestedAggregatorFactory.createInternal()` method calls `AggregatorFactory.asMultiBucketAggregator()`. This creates a wrapper around the `NestedAggregator` that will create a separate instance of `NestedAggregator` for each parent bucket. We do this in the `NestedAggregator` to ensure the doc ids are delivered in order because with a single instance and multi-valued nested fields we could get documents not in order. Some of the aggregations rely on the fact that documents are collected in order. For example, we could collect (doc1, bucket1), (doc2, bucket1), (doc1, bucket2), (doc2, bucket2) which would be out of order, so by having separate instances we are guaranteeing docId order since each instance will only collect one bucket.\n\nI tried to change the `AggregationPath.validate()` method to use the underlying aggregator (the first instance of it at least) but then it fails later because we need to retrieve the value from the aggregator and there is no way of getting the value from a particular instance form the wrapper.\n",
"created_at": "2016-02-29T13:25:24Z"
},
{
"body": "I managed to somehow face this issue again. The \"path\" parameter in moving average can not point to a nested aggregation because nested aggregation is not a single bucket aggregation.\n",
"created_at": "2016-04-03T01:40:14Z"
},
{
"body": "Is there any way to get around this 'issue'?? I'm running into the same issues\n",
"created_at": "2016-04-29T13:34:17Z"
},
{
"body": "@clintongormley @colings86 Any update on this please? We are badly stuck without this...\n",
"created_at": "2016-06-28T13:54:08Z"
},
{
"body": "+1\n\nmy question on stackoverflow\nhttp://stackoverflow.com/questions/38089711/how-can-i-sort-aggregations-buckets-by-metric-results\n",
"created_at": "2016-06-29T06:29:25Z"
},
{
"body": "+1 for this as well. Very big use case scenario for us.\n",
"created_at": "2016-07-06T17:28:10Z"
},
{
"body": "+1 as this is showstoper for us to upgrade from es v1 to es v2\n",
"created_at": "2016-07-12T07:57:32Z"
},
{
"body": "We were bitten by the same thing. FWIW, we worked around it temporarily by ordering the aggregations in the application code after they are returned from ES.\n",
"created_at": "2016-07-12T08:12:59Z"
},
{
"body": "This missing feature is preventing us from upgrading to ES 2.X. Is there any plans to support this in the near future?\n",
"created_at": "2016-08-26T13:55:36Z"
},
{
"body": "@clintongormley Nested architecture is an important functionality in ES. Most companies build atleast at minimum some sort of functionality with nested mappings. This bug renders ES useless. Any updates?\n",
"created_at": "2016-08-26T14:03:59Z"
},
{
"body": "+1 sorting after the fact in our application isn't a viable option due to number of results. \n",
"created_at": "2016-10-06T19:12:09Z"
},
{
"body": "I have made a fix to sort which has nested aggregations in path. Also you might have multi-value buckets in path (you should just specify bucket key in path like \"colors.red>stats.variance\").\r\nI might create a pull request or just give a link to the commit in fork of ES 5.1.2 if anyone is interested. ",
"created_at": "2016-12-20T19:31:26Z"
},
{
"body": "That would be great, or link in your fork?\n\nOp di 20 dec. 2016 20:32 schreef idozorenko <notifications@github.com>:\n\n> I have made a fix to sort which has nested aggregations in path. Also you\n> might have multi-value buckets in path (you should just specify bucket key\n> in path like \"colors.red>stats.variance\").\n> I might create a pull request or just give a link to the commit in fork of\n> ES 5.1.2 if anyone is interested.\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/16838#issuecomment-268335539>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AH4yOnC_e-o0zAsgVPxT6MWYR8jdUBwNks5rKC03gaJpZM4HkeIN>\n> .\n>\n",
"created_at": "2016-12-20T19:34:27Z"
},
{
"body": ">I might create a pull request\r\n\r\n:+1:",
"created_at": "2016-12-21T06:43:02Z"
},
{
"body": "As I'm not a contributor, I will just share my commit to ES 5.1 branch here. Please let me know if you have any questions.\r\n\r\nhttps://github.com/elastic/elasticsearch/commit/8f601a3c241cb652a889870d93fd32b3d226ef41\r\n",
"created_at": "2016-12-21T12:46:34Z"
},
{
"body": "@idozorenko feel free to submit a PR so that we can review the code - thanks",
"created_at": "2016-12-21T12:48:50Z"
},
{
"body": "Does this problem also occur in reverse_nested aggs? (not direct nested)",
"created_at": "2017-05-06T02:04:12Z"
},
{
"body": "yes",
"created_at": "2017-05-06T07:49:00Z"
},
{
"body": "+1 for this issue. We have exact same problem and same query is running with AWS ES 1.5. \r\n\r\nCan any one tell us about the status for this bug fix? This is very critical feature for us and can not move forward without this functionality? Does any one suggest to use ES 1.5 instead of 5.X version? (Personally i do not think we should do this)",
"created_at": "2017-05-08T11:00:35Z"
},
{
"body": "We didn't want to wait for a fix or to upgrade so we ended up restructuring\nour data to be a parent/child relationship vs nested. So far so good.\n\nOn Mon, May 8, 2017 at 5:01 AM, akashmpatel91 <notifications@github.com>\nwrote:\n\n> +1 for this issue. We have exact same problem and same query is running\n> with AWS ES 1.5.\n>\n> Can any one tell us about the status for this bug fix? This is very\n> critical feature for us and can not move forward without this\n> functionality? Does any one suggest to use ES 1.5 instead of 5.X version?\n> (Personally i do not think we should do this)\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/16838#issuecomment-299837273>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABAejttN4mvEjco7X7BmwNBNVOOoopaQks5r3vX7gaJpZM4HkeIN>\n> .\n>\n",
"created_at": "2017-05-08T17:45:16Z"
},
{
"body": "@brettahale, Please note that \"parent-child relations can make queries hundreds of times slower\" as per ES documentation https://www.elastic.co/guide/en/elasticsearch/reference/master/tune-for-search-speed.html. \r\n\r\nWe did POC and it is correct, we are handling 5-6 billions documents and query is taking 6-7 sec to return results. With nested document query is returning results in 400 ms.",
"created_at": "2017-05-08T19:18:59Z"
},
{
"body": "Agreed, wasn't ideal but we were able to deliver our feature. I'd like to\nsee a fix here as well but after a chat with ES support, it sounded like a\nfoundational change that wasn't likely going to get fixed anytime soon.\n\nOn Mon, May 8, 2017 at 1:19 PM, akashmpatel91 <notifications@github.com>\nwrote:\n\n> @brettahale <https://github.com/brettahale>, Please note that\n> \"parent-child relations can make queries hundreds of times slower\" as per\n> ES documentation https://www.elastic.co/guide/en/elasticsearch/reference/\n> master/tune-for-search-speed.html.\n>\n> We did POC and it is correct, we are handling 5-6 billions documents and\n> query is taking 6-7 sec to return results. With nested document query is\n> returning results in 400 ms.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/16838#issuecomment-299963988>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABAeju3EHsROM1hWyII36f5pPpPeVK0bks5r32rQgaJpZM4HkeIN>\n> .\n>\n",
"created_at": "2017-05-08T23:46:15Z"
},
{
"body": "Hello team, any update on this?",
"created_at": "2017-05-18T18:43:13Z"
},
{
"body": "Our workaround was to copy some fields from the parent docs into its nested docs, so we can still make terms aggregation on the nested docs but sort by fields \"found\" in the parent docs. This allowed us to omit the reverse_nested aggregation between the bucket and the sub bucket. Not ideal, but works in our cases. Of course, a fix would be much appreciated.\r\n\r\nEdit:\r\nApparently, there was no need for a workaround in my case.\r\nSee the comment below.",
"created_at": "2017-06-12T18:12:47Z"
},
{
"body": "@colings86: Actually I did managed to use Terms aggregation, a sub Reverse_Nested aggregation and a sub Cardinality aggregation for ordering. Something like that:\r\n```\r\n{\r\n \"aggs\" : {\r\n \"AllMovieNames\" : {\r\n \"terms\" : { \"field\" : \"MovieName\" },\r\n \"order\": {\r\n \"backToActors>distinctActorsCount\":\"desc\"\r\n },\r\n \"size\": 10\r\n },\r\n\t\t\"aggs\":\r\n\t\t{\r\n\t\t\t\"backToActors\":{\t\t\r\n\t\t\t\t\"reverse_nested\":{},\r\n\t\t\t\t\"aggs\":{\r\n\t\t\t\t\t\"distinctActorsCount\":{\r\n\t\t\t\t\t\t\"cardinality\":{\r\n\t\t\t\t\t\t\t\"field\":\"ActorName\"\r\n\t\t\t\t\t\t}\r\n\t\t\t\t\t}\r\n\t\t\t\t}\t\t\t\r\n\t\t\t}\t\r\n\t\t}\t\t\t\r\n }\t\r\n}\r\n```\r\nNo exception message was thrown, and the order was just as expected.\r\n\r\nAre you sure the problem occur in both Nested and Reverse_Nested sub aggregation? \r\nI'm using ElasticSearch 5.3.2.",
"created_at": "2017-06-13T18:34:06Z"
},
{
"body": "@IdanWo sorry, actually you are right, this problem doesn't occur on the `reverse_nested` aggregation, the only single bucket aggregation it should affect is the `nested` aggregation because thats the only single bucket aggregation that uses `AggregatorFactory.asMultiBucketAggregator()`",
"created_at": "2017-06-20T10:58:33Z"
},
{
"body": "I'm another victim of this insidious bug:\r\n\r\nSituation:\r\n\r\n- Document ROOT with two nested documents NESTED1 and NESTED2.\r\n- Term aggregation over a field in ROOT.NESTED1 (nested aggregation -to NESTED1- then term aggregation)\r\n- Sum aggregation over a field in ROOT.NESTED2 (inside the previous term aggregation, reverse nested aggregation -back to ROOT-, nested aggregation -to NESTED2- then sum aggregation)\r\n\r\nI cannot use the sum aggregation to sort the term aggregation because an error is thrown saying that the nested aggregation -to NESTED2- does not returns a single-bucket aggregation\r\n\r\n**Can someone update us with the status of this bug?**\r\n\r\n(I'm using ElasticSearch 5.4)",
"created_at": "2017-07-18T14:04:04Z"
},
{
"body": "Hello Team, any update on this Bug fix? It is not working only in 5.x version. Can you please provide ETA for this bug to be fixed? This is very important feature for term aggregation and blocking many clients.",
"created_at": "2017-08-04T06:31:46Z"
},
{
"body": "Hi All, I was able to get around this by doing something similar to the following. For this, there was only one nested value that will match the interval condition - but you could get creative :)\r\n\r\nMapping:\r\n```\r\n \"trendingpopularityjson\": {\r\n \"type\": \"nested\",\r\n \"include_in_parent\": true,\r\n \"properties\": {\r\n \"interval\": {\r\n \"type\": \"integer\"\r\n },\r\n \"trendingpopularity\": {\r\n \"type\": \"integer\"\r\n }\r\n }\r\n }\r\n```\r\n\r\nAggregation to sum inside. This would avoid the nested aggregation - making it easy :\r\n\r\n```\r\n \"Trend\": {\r\n \"sum\": {\r\n \"script\": {\r\n \"inline\": \"def d = doc['trendingpopularityjson.interval']; for (int i = 0; i < d.length; ++i) { if (d[i] == params.interval) { return doc['trendingpopularityjson.trendingpopularity'][i] } }\",\r\n \"params\": {\r\n \"interval\": 2\r\n },\r\n \"lang\": \"painless\"\r\n }\r\n }\r\n }\r\n```",
"created_at": "2017-08-04T20:49:20Z"
}
],
"number": 16838,
"title": "Sort term aggregation with nested aggregation in order path"
} | {
"body": "The nested aggregator now buffers all bucket ords per parent document and\r\nemits all bucket ords for a parent document's nested document once. This way\r\nthe nested documents document DocIdSetIterator gets used once per bucket\r\ninstead of wrapping the nested aggregator inside a multi bucket aggregator,\r\nwhich was the current solution upto now. This allows sorting by buckets\r\nunder a nested bucket.\r\n\r\nPR for #16838",
"number": 26683,
"review_comments": [
{
"body": "I guess this should be removed?",
"created_at": "2017-09-18T10:25:10Z"
},
{
"body": "I guess this should be removed?",
"created_at": "2017-09-18T10:28:22Z"
},
{
"body": "can you just call `doPostCollection()`? ",
"created_at": "2017-09-18T14:18:31Z"
},
{
"body": ":+1: ",
"created_at": "2017-09-19T10:07:44Z"
}
],
"title": "Allow aggregation sorting via nested aggregation"
} | {
"commits": [
{
"message": "aggs: Allow aggregation sorting via nested aggregation.\n\nThe nested aggregator now buffers all bucket ords per parent document and\nemits all bucket ords for a parent document's nested document once. This way\nthe nested documents document DocIdSetIterator gets used once per bucket\ninstead of wrapping the nested aggregator inside a multi bucket aggregator,\nwhich was the current solution upto now. This allows sorting by buckets\nunder a nested bucket.\n\nCloses #16838"
}
],
"files": [
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.bucket.nested;\n \n+import com.carrotsearch.hppc.LongArrayList;\n import org.apache.lucene.index.IndexReaderContext;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n@@ -51,14 +52,19 @@ class NestedAggregator extends BucketsAggregator implements SingleBucketAggregat\n \n private final BitSetProducer parentFilter;\n private final Query childFilter;\n+ private final boolean collectsFromSingleBucket;\n+\n+ private BufferingNestedLeafBucketCollector bufferingNestedLeafBucketCollector;\n \n NestedAggregator(String name, AggregatorFactories factories, ObjectMapper parentObjectMapper, ObjectMapper childObjectMapper,\n- SearchContext context, Aggregator parentAggregator,\n- List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) throws IOException {\n+ SearchContext context, Aggregator parentAggregator,\n+ List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData,\n+ boolean collectsFromSingleBucket) throws IOException {\n super(name, factories, context, parentAggregator, pipelineAggregators, metaData);\n Query parentFilter = parentObjectMapper != null ? parentObjectMapper.nestedTypeFilter() : Queries.newNonNestedFilter();\n this.parentFilter = context.bitsetFilterCache().getBitSetProducer(parentFilter);\n this.childFilter = childObjectMapper.nestedTypeFilter();\n+ this.collectsFromSingleBucket = collectsFromSingleBucket;\n }\n \n @Override\n@@ -71,26 +77,38 @@ public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx, final L\n \n final BitSet parentDocs = parentFilter.getBitSet(ctx);\n final DocIdSetIterator childDocs = childDocsScorer != null ? childDocsScorer.iterator() : null;\n- return new LeafBucketCollectorBase(sub, null) {\n- @Override\n- public void collect(int parentDoc, long bucket) throws IOException {\n- // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent\n- // doc), so we can skip:\n- if (parentDoc == 0 || parentDocs == null || childDocs == null) {\n- return;\n- }\n+ if (collectsFromSingleBucket) {\n+ return new LeafBucketCollectorBase(sub, null) {\n+ @Override\n+ public void collect(int parentDoc, long bucket) throws IOException {\n+ // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent\n+ // doc), so we can skip:\n+ if (parentDoc == 0 || parentDocs == null || childDocs == null) {\n+ return;\n+ }\n \n- final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n- int childDocId = childDocs.docID();\n- if (childDocId <= prevParentDoc) {\n- childDocId = childDocs.advance(prevParentDoc + 1);\n- }\n+ final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n+ int childDocId = childDocs.docID();\n+ if (childDocId <= prevParentDoc) {\n+ childDocId = childDocs.advance(prevParentDoc + 1);\n+ }\n \n- for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n- collectBucket(sub, childDocId, bucket);\n+ for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n+ collectBucket(sub, childDocId, bucket);\n+ }\n }\n- }\n- };\n+ };\n+ } else {\n+ doPostCollection();\n+ return bufferingNestedLeafBucketCollector = new BufferingNestedLeafBucketCollector(sub, parentDocs, childDocs);\n+ }\n+ }\n+\n+ @Override\n+ protected void doPostCollection() throws IOException {\n+ if (bufferingNestedLeafBucketCollector != null) {\n+ 
bufferingNestedLeafBucketCollector.postCollect();\n+ }\n }\n \n @Override\n@@ -104,4 +122,63 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalNested(name, 0, buildEmptySubAggregations(), pipelineAggregators(), metaData());\n }\n \n+ class BufferingNestedLeafBucketCollector extends LeafBucketCollectorBase {\n+\n+ final BitSet parentDocs;\n+ final LeafBucketCollector sub;\n+ final DocIdSetIterator childDocs;\n+ final LongArrayList bucketBuffer = new LongArrayList();\n+\n+ int currentParentDoc = -1;\n+\n+ BufferingNestedLeafBucketCollector(LeafBucketCollector sub, BitSet parentDocs, DocIdSetIterator childDocs) {\n+ super(sub, null);\n+ this.sub = sub;\n+ this.parentDocs = parentDocs;\n+ this.childDocs = childDocs;\n+ }\n+\n+ @Override\n+ public void collect(int parentDoc, long bucket) throws IOException {\n+ // if parentDoc is 0 then this means that this parent doesn't have child docs (b/c these appear always before the parent\n+ // doc), so we can skip:\n+ if (parentDoc == 0 || parentDocs == null || childDocs == null) {\n+ return;\n+ }\n+\n+ if (currentParentDoc != parentDoc) {\n+ processChildBuckets(currentParentDoc, bucketBuffer);\n+ currentParentDoc = parentDoc;\n+ }\n+ bucketBuffer.add(bucket);\n+ }\n+\n+ void processChildBuckets(int parentDoc, LongArrayList buckets) throws IOException {\n+ if (bucketBuffer.isEmpty()) {\n+ return;\n+ }\n+\n+\n+ final int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n+ int childDocId = childDocs.docID();\n+ if (childDocId <= prevParentDoc) {\n+ childDocId = childDocs.advance(prevParentDoc + 1);\n+ }\n+\n+ for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n+ final long[] buffer = buckets.buffer;\n+ final int size = buckets.size();\n+ for (int i = 0; i < size; i++) {\n+ collectBucket(sub, childDocId, buffer[i]);\n+ }\n+ }\n+ bucketBuffer.clear();\n+ }\n+\n+ void postCollect() throws IOException {\n+ processChildBuckets(currentParentDoc, bucketBuffer);\n+ }\n+\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -48,13 +48,11 @@ class NestedAggregatorFactory extends AggregatorFactory<NestedAggregatorFactory>\n @Override\n public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBucket, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) throws IOException {\n- if (collectsFromSingleBucket == false) {\n- return asMultiBucketAggregator(this, context, parent);\n- }\n if (childObjectMapper == null) {\n return new Unmapped(name, context, parent, pipelineAggregators, metaData);\n }\n- return new NestedAggregator(name, factories, parentObjectMapper, childObjectMapper, context, parent, pipelineAggregators, metaData);\n+ return new NestedAggregator(name, factories, parentObjectMapper, childObjectMapper, context, parent,\n+ pipelineAggregators, metaData, collectsFromSingleBucket);\n }\n \n private static final class Unmapped extends NonCollectingAggregator {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -91,10 +91,6 @@ public boolean needsScores() {\n public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx,\n final LeafBucketCollector sub) throws IOException {\n \n- for (LongObjectPagedHashMap.Cursor<TopDocsAndLeafCollector> cursor : topDocsCollectors) {\n- cursor.value.leafCollector = cursor.value.topLevelCollector.getLeafCollector(ctx);\n- }\n-\n return new LeafBucketCollectorBase(sub, null) {\n \n Scorer scorer;\n@@ -103,6 +99,11 @@ public LeafBucketCollector getLeafCollector(final LeafReaderContext ctx,\n public void setScorer(Scorer scorer) throws IOException {\n this.scorer = scorer;\n for (LongObjectPagedHashMap.Cursor<TopDocsAndLeafCollector> cursor : topDocsCollectors) {\n+ // Instantiate the leaf collector not in the getLeafCollector(...) method or in the constructor of this\n+ // anonymous class. Otherwise in the case this leaf bucket collector gets invoked with post collection\n+ // then we already have moved on to the next reader and then we may encounter assertion errors or\n+ // incorrect results.\n+ cursor.value.leafCollector = cursor.value.topLevelCollector.getLeafCollector(ctx);\n cursor.value.leafCollector.setScorer(scorer);\n }\n super.setScorer(scorer);",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsAggregator.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexWriterConfig;\n@@ -34,21 +35,33 @@\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.mapper.TypeFieldMapper;\n import org.elasticsearch.index.mapper.UidFieldMapper;\n import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n import org.elasticsearch.search.aggregations.metrics.max.InternalMax;\n+import org.elasticsearch.search.aggregations.metrics.max.Max;\n import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.min.Min;\n+import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder;\n import org.elasticsearch.search.aggregations.metrics.sum.InternalSum;\n import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.List;\n+import java.util.Locale;\n import java.util.stream.DoubleStream;\n \n public class NestedAggregatorTests extends AggregatorTestCase {\n@@ -314,6 +327,189 @@ public void testResetRootDocId() throws Exception {\n }\n }\n \n+ public void testNestedOrdering() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {\n+ iw.addDocuments(generateBook(\"1\", new String[]{\"a\"}, new int[]{12, 13, 14}));\n+ iw.addDocuments(generateBook(\"2\", new String[]{\"b\"}, new int[]{5, 50}));\n+ iw.addDocuments(generateBook(\"3\", new String[]{\"c\"}, new int[]{39, 19}));\n+ iw.addDocuments(generateBook(\"4\", new String[]{\"d\"}, new int[]{2, 1, 3}));\n+ iw.addDocuments(generateBook(\"5\", new String[]{\"a\"}, new int[]{70, 10}));\n+ iw.addDocuments(generateBook(\"6\", new String[]{\"e\"}, new int[]{23, 21}));\n+ iw.addDocuments(generateBook(\"7\", new String[]{\"e\", \"a\"}, new int[]{8, 8}));\n+ iw.addDocuments(generateBook(\"8\", new String[]{\"f\"}, new int[]{12, 14}));\n+ iw.addDocuments(generateBook(\"9\", new String[]{\"g\", \"c\", \"e\"}, new int[]{18, 8}));\n+ }\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n+ MappedFieldType fieldType1 = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ fieldType1.setName(\"num_pages\");\n+ MappedFieldType fieldType2 = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType2.setHasDocValues(true);\n+ 
fieldType2.setName(\"author\");\n+\n+ TermsAggregationBuilder termsBuilder = new TermsAggregationBuilder(\"authors\", ValueType.STRING)\n+ .field(\"author\").order(BucketOrder.aggregation(\"chapters>num_pages.value\", true));\n+ NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(\"chapters\", \"nested_chapters\");\n+ MaxAggregationBuilder maxAgg = new MaxAggregationBuilder(\"num_pages\").field(\"num_pages\");\n+ nestedBuilder.subAggregation(maxAgg);\n+ termsBuilder.subAggregation(nestedBuilder);\n+\n+ Terms terms = search(newSearcher(indexReader, false, true),\n+ new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2);\n+\n+ assertEquals(7, terms.getBuckets().size());\n+ assertEquals(\"authors\", terms.getName());\n+\n+ Terms.Bucket bucket = terms.getBuckets().get(0);\n+ assertEquals(\"d\", bucket.getKeyAsString());\n+ Max numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(3, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(1);\n+ assertEquals(\"f\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(14, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(2);\n+ assertEquals(\"g\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(18, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(3);\n+ assertEquals(\"e\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(23, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(4);\n+ assertEquals(\"c\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(39, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(5);\n+ assertEquals(\"b\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(50, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(6);\n+ assertEquals(\"a\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(70, (int) numPages.getValue());\n+\n+ // reverse order:\n+ termsBuilder = new TermsAggregationBuilder(\"authors\", ValueType.STRING)\n+ .field(\"author\").order(BucketOrder.aggregation(\"chapters>num_pages.value\", false));\n+ nestedBuilder = new NestedAggregationBuilder(\"chapters\", \"nested_chapters\");\n+ maxAgg = new MaxAggregationBuilder(\"num_pages\").field(\"num_pages\");\n+ nestedBuilder.subAggregation(maxAgg);\n+ termsBuilder.subAggregation(nestedBuilder);\n+\n+ terms = search(newSearcher(indexReader, false, true), new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2);\n+\n+ assertEquals(7, terms.getBuckets().size());\n+ assertEquals(\"authors\", terms.getName());\n+\n+ bucket = terms.getBuckets().get(0);\n+ assertEquals(\"a\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(70, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(1);\n+ assertEquals(\"b\", bucket.getKeyAsString());\n+ numPages = ((Nested) 
bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(50, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(2);\n+ assertEquals(\"c\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(39, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(3);\n+ assertEquals(\"e\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(23, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(4);\n+ assertEquals(\"g\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(18, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(5);\n+ assertEquals(\"f\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(14, (int) numPages.getValue());\n+\n+ bucket = terms.getBuckets().get(6);\n+ assertEquals(\"d\", bucket.getKeyAsString());\n+ numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(3, (int) numPages.getValue());\n+ }\n+ }\n+ }\n+\n+ public void testNestedOrdering_random() throws IOException {\n+ int numBooks = randomIntBetween(32, 512);\n+ List<Tuple<String, int[]>> books = new ArrayList<>();\n+ for (int i = 0; i < numBooks; i++) {\n+ int numChapters = randomIntBetween(1, 8);\n+ int[] chapters = new int[numChapters];\n+ for (int j = 0; j < numChapters; j++) {\n+ chapters[j] = randomIntBetween(2, 64);\n+ }\n+ books.add(Tuple.tuple(String.format(Locale.ROOT, \"%03d\", i), chapters));\n+ }\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {\n+ int id = 0;\n+ for (Tuple<String, int[]> book : books) {\n+ iw.addDocuments(generateBook(\n+ String.format(Locale.ROOT, \"%03d\", id), new String[]{book.v1()}, book.v2())\n+ );\n+ id++;\n+ }\n+ }\n+ for (Tuple<String, int[]> book : books) {\n+ Arrays.sort(book.v2());\n+ }\n+ books.sort((o1, o2) -> {\n+ int cmp = Integer.compare(o1.v2()[0], o2.v2()[0]);\n+ if (cmp == 0) {\n+ return o1.v1().compareTo(o2.v1());\n+ } else {\n+ return cmp;\n+ }\n+ });\n+ try (IndexReader indexReader = wrap(DirectoryReader.open(directory))) {\n+ MappedFieldType fieldType1 = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ fieldType1.setName(\"num_pages\");\n+ MappedFieldType fieldType2 = new KeywordFieldMapper.KeywordFieldType();\n+ fieldType2.setHasDocValues(true);\n+ fieldType2.setName(\"author\");\n+\n+ TermsAggregationBuilder termsBuilder = new TermsAggregationBuilder(\"authors\", ValueType.STRING)\n+ .size(books.size()).field(\"author\")\n+ .order(BucketOrder.compound(BucketOrder.aggregation(\"chapters>num_pages.value\", true), BucketOrder.key(true)));\n+ NestedAggregationBuilder nestedBuilder = new NestedAggregationBuilder(\"chapters\", \"nested_chapters\");\n+ MinAggregationBuilder minAgg = new MinAggregationBuilder(\"num_pages\").field(\"num_pages\");\n+ nestedBuilder.subAggregation(minAgg);\n+ termsBuilder.subAggregation(nestedBuilder);\n+\n+ Terms terms = search(newSearcher(indexReader, false, true),\n+ new MatchAllDocsQuery(), termsBuilder, fieldType1, fieldType2);\n+\n+ assertEquals(books.size(), terms.getBuckets().size());\n+ 
assertEquals(\"authors\", terms.getName());\n+\n+ for (int i = 0; i < books.size(); i++) {\n+ Tuple<String, int[]> book = books.get(i);\n+ Terms.Bucket bucket = terms.getBuckets().get(i);\n+ assertEquals(book.v1(), bucket.getKeyAsString());\n+ Min numPages = ((Nested) bucket.getAggregations().get(\"chapters\")).getAggregations().get(\"num_pages\");\n+ assertEquals(book.v2()[0], (int) numPages.getValue());\n+ }\n+ }\n+ }\n+ }\n+\n private double generateMaxDocs(List<Document> documents, int numNestedDocs, int id, String path, String fieldName) {\n return DoubleStream.of(generateDocuments(documents, numNestedDocs, id, path, fieldName))\n .max().orElse(Double.NEGATIVE_INFINITY);\n@@ -340,4 +536,26 @@ private double[] generateDocuments(List<Document> documents, int numNestedDocs,\n return values;\n }\n \n+ private List<Document> generateBook(String id, String[] authors, int[] numPages) {\n+ List<Document> documents = new ArrayList<>();\n+\n+ for (int numPage : numPages) {\n+ Document document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"book#\" + id, UidFieldMapper.Defaults.NESTED_FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"__nested_chapters\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new SortedNumericDocValuesField(\"num_pages\", numPage));\n+ documents.add(document);\n+ }\n+\n+ Document document = new Document();\n+ document.add(new Field(UidFieldMapper.NAME, \"book#\" + id, UidFieldMapper.Defaults.FIELD_TYPE));\n+ document.add(new Field(TypeFieldMapper.NAME, \"book\", TypeFieldMapper.Defaults.FIELD_TYPE));\n+ for (String author : authors) {\n+ document.add(new SortedSetDocValuesField(\"author\", new BytesRef(author)));\n+ }\n+ documents.add(document);\n+\n+ return documents;\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregatorTests.java",
"status": "modified"
}
]
} |
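The record above is about ordering a `terms` aggregation by a metric that sits under a `nested` aggregation, which PR #26683 enables. Below is a minimal request-body sketch of that pattern, assuming a hypothetical index whose mapping has a keyword `author` field on the parent document and a nested field `nested_chapters` with a numeric `num_pages` sub-field (the field names mirror the PR's test fixture; the index and mapping themselves are assumptions, not part of the record):

```json
{
  "size": 0,
  "aggs": {
    "authors": {
      "terms": {
        "field": "author",
        "order": { "chapters>num_pages": "desc" }
      },
      "aggs": {
        "chapters": {
          "nested": { "path": "nested_chapters" },
          "aggs": {
            "num_pages": { "max": { "field": "num_pages" } }
          }
        }
      }
    }
  }
}
```

Sent as the body of a `_search` request, this orders the author buckets by the maximum `num_pages` value found in each author's nested chapter documents, the kind of ordering that previously failed with the "not a single bucket aggregation" error quoted in the issue.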
{
"body": "**Elasticsearch version**:\r\n5.5.1\r\n\r\n**Plugins installed**:\r\ningest-geoip, ingest-user-agent, x-pack\r\n\r\n**JVM version**:\r\nopenjdk version \"1.8.0_141\"\r\nOpenJDK Runtime Environment (build 1.8.0_141-b16)\r\nOpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)\r\n\r\n**OS version**:\r\nLinux e012087cca9f 4.9.36-moby #1 SMP Wed Jul 12 15:29:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n_count API doesn't return a 400 status \"parsing_exception\" error when sending data matching `\".*\"` regex pattern.\r\n\r\n**Steps to reproduce**:\r\n```\r\ndocker run -p 9200:9200 -e \"http.host=0.0.0.0\" -e \"transport.host=127.0.0.1\" -d docker.elastic.co/elasticsearch/elasticsearch:5.5.1\r\n\r\ncurl -XGET -u elastic:changeme -H 'Content-Type: application/json' http://localhost:9200/*/_count -d '\"\"'\r\n\r\n# (don't forget to kill the container to save your CPU :-)\r\n```\r\n\r\nFYI:\r\n- _search works with the same data (`\"\"`) returning a 400 status \"parsing_exception\" error\r\n- _count returns same error on data like `a`, `\"`, etc. but the problem occurs with data matching `\".*\"` regex pattern",
"comments": [
{
"body": "@Matthour this is indeed interesting, I just tried it on 5.2.2 and there `_count` and `_search` still give the same respone (a 400 with \"parse_exception\") when the request body is `\"\"`.\r\nStarting with 5.3.0, `_search` still returns 400 (\"parse_exception\") but `_count` doesn't return.",
"created_at": "2017-08-08T13:13:57Z"
}
],
"number": 26083,
"title": "Node blocked due to malformed JSON on _count API"
} | {
"body": "Took a shot at #26083 here :) \r\n\r\nThe problem is that `parser.nextToken()` will return `null` when hitting the end a JSON string. So if the String is partly valid like an empty string:\r\n\r\n```java\r\nfor (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) {\r\n```\r\n\r\nwill loop forever because `nextToken()` will keep returning `null`.\r\nYou can see this by one core going to 100% load while a request like:\r\n\r\n```sh\r\ncurl -XGET -H 'Content-Type: application/json' http://localhost:9200/\\*/_count -d '\"\"'\r\n```\r\n\r\nblocks forever.\r\n\r\nIn `5.2` this seems to have been caught in the logic guessing the content type of the request and the above example returned:\r\n\r\n```sh\r\n{\"error\":{\"root_cause\":[{\"type\":\"parse_exception\",\"reason\":\"Failed to derive xcontent\"}],\"type\":\"parse_exception\",\"reason\":\"Failed to derive xcontent\"},\"status\":400}%\r\n```\r\n\r\nSince this is no longer the cause of the problem, I took a shot at a more appropriate error and it now returns:\r\n\r\n```sh\r\n\"error\":{\"root_cause\":[{\"type\":\"parsing_exception\",\"reason\":\"Failed to parse\",\"line\":1,\"col\":1}],\"type\":\"parsing_exception\",\"reason\":\"Failed to parse\",\"line\":1,\"col\":1,\"caused_by\":{\"type\":\"e_o_f_exception\",\"reason\":\"Unexpected end of JSON request body.\"}},\"status\":400}%\r\n```\r\n\r\nFixes #26083",
"number": 26680,
"review_comments": [
{
"body": "I'd rather like to have a check outside the parsing loop that asserts that the first token the parser emmits is XContentParser.Token.START_OBJECT. I think its save to assume we are expecting full json objects. I'd also just throw a ParsingException in that case.",
"created_at": "2017-09-18T11:02:43Z"
},
{
"body": "Can you add a test for the cases `\"\\\"someString\\\"\"` and maybe also make sure that we don't barf for the empty String, but simply return null like in the case of \"{}\"?",
"created_at": "2017-09-18T11:07:12Z"
},
{
"body": "@cbuescher but if we just verify that the first token is` XContentParser.Token.START_OBJECT`, we'll still see this infinite loop for `\"{\"` for example. Do you mean we want to accept that?\r\nIs it really a good idea to have the behavior of `org.elasticsearch.rest.action.RestActions#parseTopLevelQueryBuilder` be to loop forever on part of a valid JSON request? :)",
"created_at": "2017-09-18T11:29:12Z"
},
{
"body": "> we'll still see this infinite loop for \"{\" for example.\r\n\r\nI'd expect the parser to throw an exception on this?\r\n\r\n> Is it really a good idea to have the behavior of org.elasticsearch.rest.action.RestActions#parseTopLevelQueryBuilder be to loop forever on part of a valid JSON request? :)\r\n\r\nOf course not, and this is not what @cbuescher said. \r\n",
"created_at": "2017-09-18T11:38:22Z"
},
{
"body": "@tlrx\r\n\r\n> I'd expect the parser to throw an exception on this?\r\n\r\nCurrently, it doesn't throw on half a valid JSON request that's where my confusion was coming from. Also just goes into the infinite loop then.",
"created_at": "2017-09-18T11:40:31Z"
},
{
"body": "I just implemented the tests @cbuescher asked for below (good call obviously it started throwing on empty string now too :)).\r\nThe best fix I see that includes the empty string case as well would be to do this:\r\n\r\n```java\r\n if (parser.nextToken() == null) {\r\n return null;\r\n }\r\n for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) {\r\n if (token == null) {\r\n throw new ParsingException(parser.getTokenLocation(), \"Failed to parse\");\r\n }\r\n```\r\n\r\nif the first token isn't a `XContentParser.Token.START_OBJECT` the parser throws eventually anyways and we handle the empty string case without much code, how about it? :)\r\n",
"created_at": "2017-09-18T11:44:18Z"
},
{
"body": "updated the PR with the above suggestion",
"created_at": "2017-09-18T11:48:46Z"
},
{
"body": "Extended tests to cover this :)",
"created_at": "2017-09-18T11:48:55Z"
},
{
"body": "That won't loop forever, as far as I can see this will run into a JsonEOFException because if the unmatched brackets eventually.",
"created_at": "2017-09-18T11:55:04Z"
},
{
"body": "@cbuescher I tried it out by hand:\r\n\r\n```sh\r\ncurl -XGET -H 'Content-Type: application/json' http://localhost:9200/\\*/_count -d '\"{\"'\r\n```\r\n\r\nblocks forever",
"created_at": "2017-09-18T11:56:42Z"
},
{
"body": "Worse yet, it prevents ES from exiting even after I kill `curl`",
"created_at": "2017-09-18T11:58:01Z"
},
{
"body": "Thats a String token though...",
"created_at": "2017-09-18T11:59:09Z"
},
{
"body": "Wops my bad :) you're right, '{' dies just fine.\r\nSo basically we have `return null` if the first token is `null` (to cover the empty string case), throw if it's not null and not `ContentParser.Token.START_OBJECT` and leave the loop untouched? :)",
"created_at": "2017-09-18T12:02:06Z"
},
{
"body": "++",
"created_at": "2017-09-18T12:04:59Z"
},
{
"body": "And because this is really a bit confusing (I was suprised by some of this myself), maybe add some of the above cases in tests. Or wdyt @tlrx ",
"created_at": "2017-09-18T12:06:03Z"
},
{
"body": "@cbuescher I agree",
"created_at": "2017-09-18T12:08:13Z"
},
{
"body": "@cbuescher @tlrx done :) Added all the discussed cases here https://github.com/elastic/elasticsearch/pull/26680/files#diff-2098a5df6345871e4c3fdb934d3f9f49R72 and moved checks before the loop.",
"created_at": "2017-09-18T12:10:56Z"
},
{
"body": "Can you make the message a little bit more descriptive, since we know whats wrong in this case?",
"created_at": "2017-09-18T12:55:13Z"
},
{
"body": "If the error message is different, maybe you can differentiate between the case where we detect a missing START_OBJECT and the case where the underlying Jackson parser throws.",
"created_at": "2017-09-18T12:57:40Z"
},
{
"body": "@cbuescher sure, how about `\"Invalid JSON object, first token must be '{' but was 'VALUE_STRING'\"` ? (or whatever other token type it may be) I liked this better than printing the string back, which is hard to interpret for accidental string literals like in the \"'{'\" example if that makes sense? :)",
"created_at": "2017-09-18T13:47:26Z"
},
{
"body": "@cbuescher sure already done locally (as well as the broken indentation in the test), just waiting for an answer on the above :)",
"created_at": "2017-09-18T13:47:39Z"
},
{
"body": "It doesn't have to be JSON, we support Yaml, Cbor and Smile, so better to say we expected the beginning of an object or something along those lines.",
"created_at": "2017-09-18T14:10:38Z"
},
{
"body": "Maybe do sth. similar as this: https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java#L962",
"created_at": "2017-09-18T14:22:37Z"
},
{
"body": "yea that looks perfect, sec adjusting :)",
"created_at": "2017-09-18T14:25:37Z"
}
],
"title": "Fixed incomplete JSON body on count request making org.elasticsearch.rest.action.RestActions#parseTopLevelQueryBuilder go into endless loop"
} | {
"commits": [
{
"message": " #26083 Invalid JSON request body caused endless loop"
}
],
"files": [
{
"diff": "@@ -243,6 +243,15 @@ public RestResponse buildResponse(NodesResponse response, XContentBuilder builde\n private static QueryBuilder parseTopLevelQueryBuilder(XContentParser parser) {\n try {\n QueryBuilder queryBuilder = null;\n+ XContentParser.Token first = parser.nextToken();\n+ if (first == null) {\n+ return null;\n+ } else if (first != XContentParser.Token.START_OBJECT) {\n+ throw new ParsingException(\n+ parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.START_OBJECT +\n+ \"] but found [\" + first + \"]\", parser.getTokenLocation()\n+ );\n+ }\n for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) {\n if (token == XContentParser.Token.FIELD_NAME) {\n String fieldName = parser.currentName();",
"filename": "core/src/main/java/org/elasticsearch/rest/action/RestActions.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.rest.action;\n \n+import com.fasterxml.jackson.core.io.JsonEOFException;\n+import java.util.Arrays;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n@@ -59,10 +61,32 @@ public void testParseTopLevelBuilder() throws IOException {\n }\n \n public void testParseTopLevelBuilderEmptyObject() throws IOException {\n- String requestBody = \"{}\";\n- try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) {\n- QueryBuilder query = RestActions.getQueryContent(parser);\n- assertNull(query);\n+ for (String requestBody : Arrays.asList(\"{}\", \"\")) {\n+ try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) {\n+ QueryBuilder query = RestActions.getQueryContent(parser);\n+ assertNull(query);\n+ }\n+ }\n+ }\n+\n+ public void testParseTopLevelBuilderMalformedJson() throws IOException {\n+ for (String requestBody : Arrays.asList(\"\\\"\\\"\", \"\\\"someString\\\"\", \"\\\"{\\\"\")) {\n+ try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) {\n+ ParsingException exception =\n+ expectThrows(ParsingException.class, () -> RestActions.getQueryContent(parser));\n+ assertEquals(\"Expected [START_OBJECT] but found [VALUE_STRING]\", exception.getMessage());\n+ }\n+ }\n+ }\n+\n+ public void testParseTopLevelBuilderIncompleteJson() throws IOException {\n+ for (String requestBody : Arrays.asList(\"{\", \"{ \\\"query\\\" :\")) {\n+ try (XContentParser parser = createParser(JsonXContent.jsonXContent, requestBody)) {\n+ ParsingException exception =\n+ expectThrows(ParsingException.class, () -> RestActions.getQueryContent(parser));\n+ assertEquals(\"Failed to parse\", exception.getMessage());\n+ assertEquals(JsonEOFException.class, exception.getRootCause().getClass());\n+ }\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/rest/action/RestActionsTests.java",
"status": "modified"
}
]
} |
{
"body": "`http.publish_host` is resolved while starting the node but I'm not entirely sure this makes sense.\r\n\r\nThis setting is a great way to advertise CNAME to http clients that might not be resolvable on the network the nodes themselves live on or the firewall they are behind.\r\n\r\n```yaml\r\nhttp.publish_host: elastic.co\r\n```\r\n\r\n#### GET /_nodes/http?pretty=true\r\n\r\n```json\r\n{\r\n \"_nodes\" : {\r\n \"total\" : 1,\r\n \"successful\" : 1,\r\n \"failed\" : 0\r\n },\r\n \"cluster_name\" : \"testtesttest\",\r\n \"nodes\" : {\r\n \"PTUZtqXjSY-mjPiQPWGXPg\" : {\r\n \"name\" : \"PTUZtqX\",\r\n \"transport_address\" : \"10.2.42.55:9300\",\r\n \"host\" : \"10.2.42.55\",\r\n \"ip\" : \"10.2.42.55\",\r\n \"version\" : \"5.0.0\",\r\n \"build_hash\" : \"253032b\",\r\n \"roles\" : [\r\n \"master\",\r\n \"data\",\r\n \"ingest\"\r\n ],\r\n \"http\" : {\r\n \"bound_address\" : [\r\n \"127.0.0.1:9200\",\r\n \"[::1]:9200\",\r\n \"10.2.42.55:9200\"\r\n ],\r\n \"publish_address\" : \"35.160.254.14:9200\",\r\n \"max_content_length_in_bytes\" : 104857600\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nIt also advertises the resolved IP and not the CNAME which might not play well with DNS caching, the lazy transport resolving coming in 5.1 as per #21630 does not seem to extend to these settings.\r\n\r\n",
"comments": [
{
"body": "Ok digging a bit further back it seems something has changed in how this is returned \r\n\r\nElasticsearch 2.1.1 used to return:\r\n\r\n```json\r\n\"publish_address\" : \"elastic.co/35.160.254.14:9200\"\r\n```\r\n\r\nBut in 2.4.6 and 5.x the cname part is dropped\r\n\r\n```json\r\n\"publish_address\" : \"35.160.254.14:9200\"\r\n````\r\n\r\nIn the .net client sniffing we give precedence to the part returned before the `/` as the hostname for the calls that we do.",
"created_at": "2016-12-07T13:46:39Z"
},
{
"body": "This behavior can create issues in environments where sniffing is used in conjunction with ssl certificates.\r\n\r\nWe currently have set up Logstash with ssl certificate checking and sniffing enabled. However our server certificates only contain the fqdns, not the server ip addresses, which I believe is not unusual (even if I don't condone it). \r\nSince ES always returns the ip address in publish_address, Logstash will try connecting to the ip address and reject the certificate, as it is only valid for the fqdn.\r\n\r\nI think it is fine for the publish_address to be resolved to an ip if it is inherited from network_host. But if it is explicitly configured to be something else I think we should presume that the user knows what he/she is trying to accomplish and return the configured string. If that breaks communication - well, shouldn't have fiddled with it :)\r\n\r\n",
"created_at": "2017-05-05T11:22:57Z"
},
{
"body": "Since the user here explicitly asked for `publish_host` to be set, there is a clear expectation of what should happen - the node should be reached via that value (0). By resolving it into an IP address elasticsearch made that impossible in many cases like when using ssl. This is clearly a bug.\r\n\r\n0 - https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings",
"created_at": "2017-07-07T16:16:42Z"
},
{
"body": "I'm happy to work on this if everybody agrees on the behavior as stated above:\r\n- keep behavior as is if publish_address is not set\r\n- return the configured value if publish_address is set\r\n",
"created_at": "2017-07-18T08:14:19Z"
},
{
"body": "Also impacts sniffing in the Java client: https://github.com/elastic/elasticsearch/issues/26125",
"created_at": "2017-08-10T00:39:34Z"
},
{
"body": "@javanna / @jasontedor is there any plan to address this issue in ES? This is a pretty nasty situation we're in, where clients can't use TLS in the predominant way its used, with domains, when using sniffing, which is a feature we highly encourage our users to use?\r\n\r\nShould I direct this issue to someone in particular?",
"created_at": "2017-08-10T19:58:52Z"
},
{
"body": "You can direct it to me. I'm on the tail-end of a vacation and will be available again next Monday; I'll take a look when I'm back. ",
"created_at": "2017-08-10T20:12:34Z"
},
{
"body": "Hey Party People,\r\nAny word on this? \r\nThanks\r\n-Kobar",
"created_at": "2017-08-16T16:53:48Z"
},
{
"body": "picking this up as discussed with @jasontedor ",
"created_at": "2018-08-10T15:42:56Z"
},
{
"body": "I've set jvm option `-Des.http.cname_in_publish_address=true` in `/etc/elasticsearch/instance01/jvm.options` with elasticsearch 6.5.0 and this still doesn't work for me. :(\r\n\r\nI only have the IP address being advertised, not the hostname I've set as `http.publish_host`",
"created_at": "2018-11-22T20:19:18Z"
},
{
"body": "Sorry about that. This was an oversight on my end. I forgot to back port the fix to `6.x` early enough for it to go into `6.5`. I handled this now in #35838 last night but it will only make it into `6.6` now.",
"created_at": "2018-11-23T08:31:38Z"
},
{
"body": "Ah, shame. It can't make it into 6.5.2? It made it into the [6.5.0 release notes](https://www.elastic.co/guide/en/elasticsearch/reference/current/release-notes-6.5.0.html#bug-6.5.0).",
"created_at": "2018-11-23T08:38:10Z"
},
{
"body": "> Ah, shame. It can't make it into 6.5.2? It made it into the 6.5.0 release notes.\r\n\r\nUnfortunately not. Sorry again for the the oversight.",
"created_at": "2018-11-23T08:42:54Z"
}
],
"number": 22029,
"title": "Should http.publish_host resolve the CNAME configured."
} | {
"body": "Fix #22029 report configured `http.publish_host` unresolved and verbatim to user. \r\n\r\nThe node still needs to be able to resolve the host name at startup so for certain network topologies this could still not be a solution. I think we'd need a `publish_host_unresolved` setting for that that sidesteps `InetAddress` all together, but will park that for a new issue. \r\n\r\nThe verbatim `publish_host` now shows up both on startup in console: \r\n\r\n> publish_address {elastic.co:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}\r\n\r\nand in the Nodes Info Api used by clients for sniffing (response truncated).\r\n\r\n```json\r\n{\r\n\t\"cluster_name\": \"qa_smoke-test-http-publish-host_integTestCluster\",\r\n\t\"nodes\": {\r\n\t\t\"1TrB9eQlQmm_9qHHqGoagA\": {\r\n\t\t\t\"name\": \"node-1\",\r\n\t\t\t\"http\": {\r\n\t\t\t\t\"bound_address\": [\r\n\t\t\t\t\t\"127.0.0.1:52922\",\r\n\t\t\t\t\t\"[::1]:52923\"\r\n\t\t\t\t],\r\n\t\t\t\t\"publish_address\": \"localhost:52922\"\r\n\t\t\t}\r\n\t\t},\r\n\t\t\"g-hAtMH_QcypO3OUO13lSg\": {\r\n\t\t\t\"name\": \"node-0\",\r\n\t\t\t\"http\": {\r\n\t\t\t\t\"bound_address\": [\r\n\t\t\t\t\t\"127.0.0.1:52808\",\r\n\t\t\t\t\t\"[::1]:52809\"\r\n\t\t\t\t],\r\n\t\t\t\t\"publish_address\": \"localhost:52808\"\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n}\r\n```\r\n\r\nRun new YAML test only:\r\n\r\n> gradle :qa:smoke-test-http-publish-host:integTestRunner -Dtests.class=org.elasticsearch.smoketest.SmokeTestHttpPublishHostClientYamlTestSuiteIT -Dtests.method=\"test {yaml=smoke_test_http_publish_host/10_should_return_verbatim/nodes info http publish host should be verbatim}\"\r\n\r\n\r\n\r\n\r\n",
"number": 26634,
"review_comments": [
{
"body": "Why do we need a sub-project for this, what's wrong with using the existing HTTP testing sub-project?",
"created_at": "2017-09-13T21:29:05Z"
},
{
"body": "I wanted a cluster with two nodes each setting `publish_host`. Not sure how i can bolt that on to the the existing HTTP testing sub project as part of the `integTest` gradle configuration, happy to learn how though :smile:",
"created_at": "2017-09-13T21:33:13Z"
},
{
"body": "You can start a second cluster after the first cluster stops. Look at `smoke-test-client` for an example where a second cluster is started after the first cluster stops.",
"created_at": "2017-09-13T21:42:53Z"
},
{
"body": "I may need a little more hand holding where exactly I should add my `*IntegCluster`, will pick it up again tomorrow.",
"created_at": "2017-09-13T21:51:24Z"
},
{
"body": "Also, can you explain why two nodes are needed?",
"created_at": "2017-09-13T23:18:58Z"
},
{
"body": "I am more than happy to help, reach out tomorrow.",
"created_at": "2017-09-13T23:19:08Z"
},
{
"body": "I wanted two nodes to be sure no information is lost on the transport layer, since I already needed a separate cluster to boot with the `publish_host` setting this seemed like the easiest way forward. Happy to drop down to one and write an isolated unit test for binary transport. \r\n\r\nWill ping you today, definitely need a nudge to see where I need to be adding what 👍 ",
"created_at": "2017-09-15T10:05:19Z"
},
{
"body": "I do not think we should be testing the transport layer here, that should happen elsewhere. Let's use one node.",
"created_at": "2017-09-15T10:23:49Z"
}
],
"title": "Fix http publish host"
} | {
"commits": [
{
"message": "Fix #22029 report configured unresolved http.publish_host verbatim to user, shows up both on startup in console and in the Nodes Info Api used by clients for sniffing"
},
{
"message": "Send verbatim representation over transport layer\n\nInclude yaml test for publish_host returning the host name as configured"
},
{
"message": "Updated YAML test description and normalized method arg across overloads"
},
{
"message": "adhere to right margin of 140"
},
{
"message": "add new module to settings.gradle"
}
],
"files": [
{
"diff": "@@ -88,13 +88,41 @@ public static String format(InetSocketAddress address) {\n return format(address.getAddress(), address.getPort());\n }\n \n- // note, we don't validate port, because we only allow InetSocketAddress\n+ /**\n+ * Formats a network address and port for display purposes.\n+ * Allowing the hostname to be passed verbatim as configured externally.\n+ * <p>\n+ * When {@code verbatimHost} is not null or empty it will be used as the hostname.\n+ * </p>\n+ * Otherwise this formats the address with {@link #format(InetAddress)}\n+ * and appends the port number. IPv6 addresses will be bracketed.\n+ * <p>\n+ * Example output:\n+ * <ul>\n+ * <li>With hostname: {@code elastic.co:9300}</li>\n+ * <li>IPv4: {@code 127.0.0.1:9300}</li>\n+ * <li>IPv6: {@code [::1]:9300}</li>\n+ * </ul>\n+ * @param address IPv4 or IPv6 address with port\n+ * @param verbatimHost String representing the host name.\n+ * @return formatted string\n+ */\n+ public static String format(InetSocketAddress address, String verbatimHost) {\n+ return format(address.getAddress(), address.getPort(), verbatimHost);\n+ }\n+\n static String format(InetAddress address, int port) {\n+ return format(address, port, null);\n+ }\n+ // note, we don't validate port, because we only allow InetSocketAddress\n+ static String format(InetAddress address, int port, String verbatimHost) {\n Objects.requireNonNull(address);\n \n StringBuilder builder = new StringBuilder();\n-\n- if (port != -1 && address instanceof Inet6Address) {\n+ if (verbatimHost != null && !verbatimHost.isEmpty()) {\n+ builder.append(verbatimHost);\n+ }\n+ else if (port != -1 && address instanceof Inet6Address) {\n builder.append(InetAddresses.toUriString(address));\n } else {\n builder.append(InetAddresses.toAddrString(address));",
"filename": "core/src/main/java/org/elasticsearch/common/network/NetworkAddress.java",
"status": "modified"
},
{
"diff": "@@ -22,9 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.network.NetworkAddress;\n \n import java.io.IOException;\n+import java.net.Inet6Address;\n import java.net.InetAddress;\n import java.net.InetSocketAddress;\n import java.net.UnknownHostException;\n@@ -49,19 +51,25 @@ public final class TransportAddress implements Writeable {\n }\n \n private final InetSocketAddress address;\n+ private final String verbatimAddress;\n \n public TransportAddress(InetAddress address, int port) {\n this(new InetSocketAddress(address, port));\n }\n \n public TransportAddress(InetSocketAddress address) {\n+ this(address, null);\n+ }\n+\n+ public TransportAddress(InetSocketAddress address, String verbatimAddress) {\n if (address == null) {\n throw new IllegalArgumentException(\"InetSocketAddress must not be null\");\n }\n if (address.getAddress() == null) {\n throw new IllegalArgumentException(\"Address must be resolved but wasn't - InetSocketAddress#getAddress() returned null\");\n }\n this.address = address;\n+ this.verbatimAddress = verbatimAddress;\n }\n \n /**\n@@ -75,6 +83,7 @@ public TransportAddress(StreamInput in) throws IOException {\n final InetAddress inetAddress = InetAddress.getByAddress(host, a);\n int port = in.readInt();\n this.address = new InetSocketAddress(inetAddress, port);\n+ this.verbatimAddress = in.readOptionalString();\n }\n \n @Override\n@@ -87,6 +96,7 @@ public void writeTo(StreamOutput out) throws IOException {\n // these only make sense with respect to the local machine, and will only formulate\n // the address incorrectly remotely.\n out.writeInt(address.getPort());\n+ out.writeOptionalString(this.verbatimAddress);\n }\n \n /**\n@@ -126,6 +136,6 @@ public int hashCode() {\n \n @Override\n public String toString() {\n- return NetworkAddress.format(address);\n+ return NetworkAddress.format(address, this.verbatimAddress);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java",
"status": "modified"
},
{
"diff": "@@ -80,6 +80,7 @@\n import java.io.IOException;\n import java.net.InetAddress;\n import java.net.InetSocketAddress;\n+import java.net.UnknownHostException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.List;\n@@ -328,10 +329,13 @@ private BoundTransportAddress createBoundHttpAddress() {\n } catch (Exception e) {\n throw new BindTransportException(\"Failed to resolve publish address\", e);\n }\n-\n final int publishPort = resolvePublishPort(settings, boundAddresses, publishInetAddress);\n final InetSocketAddress publishAddress = new InetSocketAddress(publishInetAddress, publishPort);\n- return new BoundTransportAddress(boundAddresses.toArray(new TransportAddress[0]), new TransportAddress(publishAddress));\n+ List<String> publishHosts = SETTING_HTTP_PUBLISH_HOST.get(settings);\n+ final String verbatimPublishAdress = publishHosts.isEmpty() ? null : publishHosts.get(0);\n+\n+ final TransportAddress publishTransportAdress = new TransportAddress(publishAddress, verbatimPublishAdress);\n+ return new BoundTransportAddress(boundAddresses.toArray(new TransportAddress[0]), publishTransportAdress);\n }\n \n // package private for tests",
"filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,31 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+apply plugin: 'elasticsearch.standalone-rest-test'\n+apply plugin: 'elasticsearch.rest-test'\n+\n+integTest {\n+}\n+\n+integTestCluster {\n+ numNodes = 2\n+ // localhost should be available to all and not be resolved\n+ setting 'http.publish_host', 'localhost'\n+}\n+",
"filename": "qa/smoke-test-http-publish-host/build.gradle",
"status": "added"
},
{
"diff": "@@ -0,0 +1,42 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.smoketest;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Name;\n+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;\n+\n+import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;\n+import org.apache.lucene.util.TimeUnits;\n+import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;\n+import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;\n+\n+@TimeoutSuite(millis = 40 * TimeUnits.MINUTE) // some of the windows test VMs are slow as hell\n+public class SmokeTestHttpPublishHostClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {\n+\n+ public SmokeTestHttpPublishHostClientYamlTestSuiteIT(@Name(\"yaml\") ClientYamlTestCandidate testCandidate) {\n+ super(testCandidate);\n+ }\n+\n+ @ParametersFactory\n+ public static Iterable<Object[]> parameters() throws Exception {\n+ return ESClientYamlSuiteTestCase.createParameters();\n+ }\n+}\n+",
"filename": "qa/smoke-test-http-publish-host/src/test/java/org/elasticsearch/smoketest/SmokeTestHttpPublishHostClientYamlTestSuiteIT.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,37 @@\n+---\n+\"nodes info http publish host should be verbatim\":\n+ - do:\n+ cluster.health:\n+ wait_for_nodes: 2\n+\n+ - is_true: cluster_name\n+ - is_false: timed_out\n+ - gte: { number_of_nodes: 2 }\n+ - gte: { number_of_data_nodes: 2 }\n+\n+ - do:\n+ cluster.state: {}\n+\n+ - set: { master_node: master }\n+\n+ - do:\n+ nodes.info:\n+ metric: [ http ]\n+\n+ - is_true: nodes.$master.http.publish_address\n+ - set: { nodes.$master.http.publish_address: host_name_master }\n+\n+ - match:\n+ $host_name_master : |\n+ /^localhost:\\d+$/\n+\n+ # Can only reliably get the long node id for the master node from the cluster.state API\n+ # All the cat API's return a truncated version\n+\n+ # The YAML tests will need a foreach(key in path) construct to test all the publish_address's for each node\n+ #\n+ #- is_true: nodes.1.http.publish_address\n+ #- set: { nodes.1.http.publish_address: host_name_1 }\n+ #- match:\n+ # $host_name_1 : |\n+ # /^localhost:\\d+$/",
"filename": "qa/smoke-test-http-publish-host/src/test/resources/rest-api-spec/test/smoke_test_http_publish_host/10_should_return_verbatim.yml",
"status": "added"
},
{
"diff": "@@ -75,6 +75,7 @@ List projects = [\n 'qa:smoke-test-ingest-with-all-dependencies',\n 'qa:smoke-test-ingest-disabled',\n 'qa:smoke-test-multinode',\n+ 'qa:smoke-test-http-publish-host',\n 'qa:smoke-test-plugins',\n 'qa:smoke-test-reindex-with-all-modules',\n 'qa:smoke-test-tribe-node',",
"filename": "settings.gradle",
"status": "modified"
}
]
} |
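For readers skimming this record: the core of the change above is a small formatting decision in `NetworkAddress`. Below is a minimal standalone Java sketch of that decision (the class and method here are invented for illustration and are not the actual Elasticsearch code): if a verbatim `http.publish_host` was configured it is echoed as-is, otherwise the resolved address is printed, with IPv6 addresses bracketed.

```java
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.InetSocketAddress;

// Standalone sketch of the publish-address formatting added in the PR above;
// it mirrors the branch in NetworkAddress.format(InetSocketAddress, String) using only JDK types.
public class PublishAddressFormatSketch {

    static String format(InetSocketAddress address, String verbatimHost) {
        StringBuilder builder = new StringBuilder();
        if (verbatimHost != null && !verbatimHost.isEmpty()) {
            builder.append(verbatimHost); // configured http.publish_host, passed through unresolved
        } else if (address.getAddress() instanceof Inet6Address) {
            builder.append('[').append(address.getAddress().getHostAddress()).append(']');
        } else {
            builder.append(address.getAddress().getHostAddress());
        }
        return builder.append(':').append(address.getPort()).toString();
    }

    public static void main(String[] args) throws Exception {
        InetSocketAddress bound = new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9200);
        System.out.println(format(bound, null));          // 127.0.0.1:9200
        System.out.println(format(bound, "elastic.co"));  // elastic.co:9200 (verbatim publish_host)
    }
}
```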
{
"body": "\r\n**Elasticsearch High Level REST Client version** 5.6.0-SNAPSHOT\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n`RestHighLevelClient` javadoc states that\r\n\r\n> Can be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through plugins, or to add support for custom response sections, again added to Elasticsearch through plugins.\r\n\r\nI would expect to be able to call all the protected methods if I am sub-classing from another package, but actually I currently need access to the `Request` class in order to do so.\r\n\r\nI am not willing to write my extension code in `org.elasticsearch.client` package in order to be able to extend it. The public/protected API should be sufficient to do so in my own package.\r\n\r\n**Steps to reproduce**:\r\nThere are two test cases that shows that `RestHighLevelClient` can be extended (`RestHighLevelClientExtTests` and `CustomRestHighLevelClientTests`), but the subclasses (`RestHighLevelClientExt` and `CustomRestClient`) are defined in the same package as `RestHighLevelClient`, if you move them in another package they no longer compile.\r\n",
"comments": [
{
"body": "Thanks, this is a good point. I will label this for discussion but I think it makes sense to allow this.",
"created_at": "2017-08-31T13:29:06Z"
},
{
"body": "You are right @tsachev this needs to be fixed.",
"created_at": "2017-09-11T15:00:01Z"
}
],
"number": 26455,
"title": "RestHighLevelClient cannot be sub classed outside of the `org.elasticsearch.client` package."
} | {
"body": "`Request` class is currently package protected, making it difficult for\r\nthe users to extend the `RestHighLevelClient` and to use its protected\r\nmethods to execute requests. This commit makes the `Request` class public\r\nand changes few methods of `RestHighLevelClient` to be protected.\r\n\r\ncloses #26455",
"number": 26627,
"review_comments": [
{
"body": "can we also test the visibility of the class and its constructor by checking their modifiers?",
"created_at": "2017-09-14T12:29:56Z"
},
{
"body": "sorry I didn't notice this before, but why changing the package here? It would be nice to be able to test everything we need with reflection, rather than changing package... unless there's another reason why this was done.",
"created_at": "2017-09-18T12:19:38Z"
},
{
"body": "No other reason, just to make it more obvious that ` CustomRestClient` extends the high level rest client while being located in a different package.",
"created_at": "2017-09-18T12:22:14Z"
},
{
"body": "Please let's not make an arbitrary (and inconsistent) change like that. We do not use \"extension\" packages anywhere else.",
"created_at": "2017-09-18T17:03:37Z"
}
],
"title": "Make RestHighLevelClient's Request class public"
} | {
"commits": [
{
"message": "Make RestHighLevelClient's Request class public\n\nRequest class is currently package protected, making it difficult for\nthe users to extend the RestHighLevelClient and to use its protected\nmethods to execute requests. This commit makes the Request class public\nand changes few methods of RestHighLevelClient to be protected.\n\ncloses #26455"
},
{
"message": "Apply feedback"
},
{
"message": "Remove package"
}
],
"files": [
{
"diff": "@@ -63,30 +63,47 @@\n import java.util.HashMap;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Objects;\n import java.util.StringJoiner;\n \n-final class Request {\n+public final class Request {\n \n static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON;\n \n- final String method;\n- final String endpoint;\n- final Map<String, String> params;\n- final HttpEntity entity;\n+ private final String method;\n+ private final String endpoint;\n+ private final Map<String, String> parameters;\n+ private final HttpEntity entity;\n \n- Request(String method, String endpoint, Map<String, String> params, HttpEntity entity) {\n- this.method = method;\n- this.endpoint = endpoint;\n- this.params = params;\n+ public Request(String method, String endpoint, Map<String, String> parameters, HttpEntity entity) {\n+ this.method = Objects.requireNonNull(method, \"method cannot be null\");\n+ this.endpoint = Objects.requireNonNull(endpoint, \"endpoint cannot be null\");\n+ this.parameters = Objects.requireNonNull(parameters, \"parameters cannot be null\");\n this.entity = entity;\n }\n \n+ public String getMethod() {\n+ return method;\n+ }\n+\n+ public String getEndpoint() {\n+ return endpoint;\n+ }\n+\n+ public Map<String, String> getParameters() {\n+ return parameters;\n+ }\n+\n+ public HttpEntity getEntity() {\n+ return entity;\n+ }\n+\n @Override\n public String toString() {\n return \"Request{\" +\n \"method='\" + method + '\\'' +\n \", endpoint='\" + endpoint + '\\'' +\n- \", params=\" + params +\n+ \", params=\" + parameters +\n \", hasBody=\" + (entity != null) +\n '}';\n }\n@@ -233,7 +250,7 @@ static Request bulk(BulkRequest bulkRequest) throws IOException {\n \n static Request exists(GetRequest getRequest) {\n Request request = get(getRequest);\n- return new Request(HttpHead.METHOD_NAME, request.endpoint, request.params, null);\n+ return new Request(HttpHead.METHOD_NAME, request.endpoint, request.parameters, null);\n }\n \n static Request get(GetRequest getRequest) {\n@@ -381,7 +398,7 @@ static String endpoint(String... parts) {\n * @return the {@link ContentType}\n */\n @SuppressForbidden(reason = \"Only allowed place to convert a XContentType to a ContentType\")\n- static ContentType createContentType(final XContentType xContentType) {\n+ public static ContentType createContentType(final XContentType xContentType) {\n return ContentType.create(xContentType.mediaTypeWithoutParameters(), (Charset) null);\n }\n ",
"filename": "client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java",
"status": "modified"
},
{
"diff": "@@ -425,7 +425,7 @@ protected <Req extends ActionRequest, Resp> Resp performRequest(Req request,\n Request req = requestConverter.apply(request);\n Response response;\n try {\n- response = client.performRequest(req.method, req.endpoint, req.params, req.entity, headers);\n+ response = client.performRequest(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), headers);\n } catch (ResponseException e) {\n if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) {\n try {\n@@ -474,7 +474,7 @@ protected <Req extends ActionRequest, Resp> void performRequestAsync(Req request\n }\n \n ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores);\n- client.performRequestAsync(req.method, req.endpoint, req.params, req.entity, responseListener, headers);\n+ client.performRequestAsync(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), responseListener, headers);\n }\n \n <Resp> ResponseListener wrapResponseListener(CheckedFunction<Response, Resp, IOException> responseConverter,\n@@ -522,7 +522,7 @@ public void onFailure(Exception exception) {\n * that wraps the original {@link ResponseException}. The potential exception obtained while parsing is added to the returned\n * exception as a suppressed exception. This method is guaranteed to not throw any exception eventually thrown while parsing.\n */\n- ElasticsearchStatusException parseResponseException(ResponseException responseException) {\n+ protected ElasticsearchStatusException parseResponseException(ResponseException responseException) {\n Response response = responseException.getResponse();\n HttpEntity entity = response.getEntity();\n ElasticsearchStatusException elasticsearchException;\n@@ -542,8 +542,8 @@ ElasticsearchStatusException parseResponseException(ResponseException responseEx\n return elasticsearchException;\n }\n \n- <Resp> Resp parseEntity(\n- HttpEntity entity, CheckedFunction<XContentParser, Resp, IOException> entityParser) throws IOException {\n+ protected <Resp> Resp parseEntity(final HttpEntity entity,\n+ final CheckedFunction<XContentParser, Resp, IOException> entityParser) throws IOException {\n if (entity == null) {\n throw new IllegalStateException(\"Response body expected but not returned\");\n }",
"filename": "client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,12 @@\n import org.apache.http.Header;\n import org.apache.http.HttpEntity;\n import org.apache.http.HttpHost;\n-import org.apache.http.HttpResponse;\n import org.apache.http.ProtocolVersion;\n import org.apache.http.RequestLine;\n import org.apache.http.client.methods.HttpGet;\n import org.apache.http.entity.ByteArrayEntity;\n import org.apache.http.entity.ContentType;\n import org.apache.http.message.BasicHeader;\n-import org.apache.http.message.BasicHttpResponse;\n import org.apache.http.message.BasicRequestLine;\n import org.apache.http.message.BasicStatusLine;\n import org.apache.lucene.util.BytesRef;\n@@ -38,6 +36,12 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.main.MainRequest;\n import org.elasticsearch.action.main.MainResponse;\n+import org.elasticsearch.action.support.PlainActionFuture;\n+import org.elasticsearch.client.Request;\n+import org.elasticsearch.client.Response;\n+import org.elasticsearch.client.ResponseListener;\n+import org.elasticsearch.client.RestClient;\n+import org.elasticsearch.client.RestHighLevelClient;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.common.SuppressForbidden;\n import org.elasticsearch.common.xcontent.XContentHelper;\n@@ -48,18 +52,22 @@\n import java.io.IOException;\n import java.lang.reflect.Method;\n import java.lang.reflect.Modifier;\n+import java.util.Arrays;\n import java.util.Collections;\n+import java.util.List;\n+import java.util.stream.Collectors;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.emptySet;\n-import static org.elasticsearch.client.ESRestHighLevelClientTestCase.execute;\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n import static org.mockito.Matchers.any;\n import static org.mockito.Matchers.anyMapOf;\n import static org.mockito.Matchers.anyObject;\n import static org.mockito.Matchers.anyVararg;\n import static org.mockito.Matchers.eq;\n import static org.mockito.Mockito.doAnswer;\n import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n \n /**\n * Test and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints.\n@@ -92,31 +100,45 @@ public void testCustomEndpoint() throws IOException {\n final MainRequest request = new MainRequest();\n final Header header = new BasicHeader(\"node_name\", randomAlphaOfLengthBetween(1, 10));\n \n- MainResponse response = execute(request, restHighLevelClient::custom, restHighLevelClient::customAsync, header);\n+ MainResponse response = restHighLevelClient.custom(request, header);\n assertEquals(header.getValue(), response.getNodeName());\n \n- response = execute(request, restHighLevelClient::customAndParse, restHighLevelClient::customAndParseAsync, header);\n+ response = restHighLevelClient.customAndParse(request, header);\n assertEquals(header.getValue(), response.getNodeName());\n }\n \n+ public void testCustomEndpointAsync() throws Exception {\n+ final MainRequest request = new MainRequest();\n+ final Header header = new BasicHeader(\"node_name\", randomAlphaOfLengthBetween(1, 10));\n+\n+ PlainActionFuture<MainResponse> future = PlainActionFuture.newFuture();\n+ restHighLevelClient.customAsync(request, future, header);\n+ assertEquals(header.getValue(), future.get().getNodeName());\n+\n+ future = PlainActionFuture.newFuture();\n+ restHighLevelClient.customAndParseAsync(request, future, header);\n+ assertEquals(header.getValue(), future.get().getNodeName());\n+ }\n+\n /**\n * The 
{@link RestHighLevelClient} must declare the following execution methods using the <code>protected</code> modifier\n * so that they can be used by subclasses to implement custom logic.\n */\n @SuppressForbidden(reason = \"We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods\")\n public void testMethodsVisibility() throws ClassNotFoundException {\n- String[] methodNames = new String[]{\"performRequest\", \"performRequestAndParseEntity\", \"performRequestAsync\",\n- \"performRequestAsyncAndParseEntity\"};\n- for (String methodName : methodNames) {\n- boolean found = false;\n- for (Method method : RestHighLevelClient.class.getDeclaredMethods()) {\n- if (method.getName().equals(methodName)) {\n- assertTrue(\"Method \" + methodName + \" must be protected\", Modifier.isProtected(method.getModifiers()));\n- found = true;\n- }\n- }\n- assertTrue(\"Failed to find method \" + methodName, found);\n- }\n+ final String[] methodNames = new String[]{\"performRequest\",\n+ \"performRequestAsync\",\n+ \"performRequestAndParseEntity\",\n+ \"performRequestAsyncAndParseEntity\",\n+ \"parseEntity\",\n+ \"parseResponseException\"};\n+\n+ final List<String> protectedMethods = Arrays.stream(RestHighLevelClient.class.getDeclaredMethods())\n+ .filter(method -> Modifier.isProtected(method.getModifiers()))\n+ .map(Method::getName)\n+ .collect(Collectors.toList());\n+\n+ assertThat(protectedMethods, containsInAnyOrder(methodNames));\n }\n \n /**\n@@ -135,15 +157,20 @@ private Void mockPerformRequestAsync(Header httpHeader, ResponseListener respons\n * Mocks the synchronous request execution like if it was executed by Elasticsearch.\n */\n private Response mockPerformRequest(Header httpHeader) throws IOException {\n+ final Response mockResponse = mock(Response.class);\n+ when(mockResponse.getHost()).thenReturn(new HttpHost(\"localhost\", 9200));\n+\n ProtocolVersion protocol = new ProtocolVersion(\"HTTP\", 1, 1);\n- HttpResponse httpResponse = new BasicHttpResponse(new BasicStatusLine(protocol, 200, \"OK\"));\n+ when(mockResponse.getStatusLine()).thenReturn(new BasicStatusLine(protocol, 200, \"OK\"));\n \n MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, \"_na\", Build.CURRENT, true);\n BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef();\n- httpResponse.setEntity(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON));\n+ when(mockResponse.getEntity()).thenReturn(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON));\n \n RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol);\n- return new Response(requestLine, new HttpHost(\"localhost\", 9200), httpResponse);\n+ when(mockResponse.getRequestLine()).thenReturn(requestLine);\n+\n+ return mockResponse;\n }\n \n /**",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n \n import org.apache.http.HttpEntity;\n import org.apache.http.entity.ByteArrayEntity;\n+import org.apache.http.entity.ContentType;\n+import org.apache.http.entity.StringEntity;\n import org.apache.http.util.EntityUtils;\n import org.elasticsearch.action.DocWriteRequest;\n import org.elasticsearch.action.bulk.BulkRequest;\n@@ -64,6 +66,8 @@\n \n import java.io.IOException;\n import java.io.InputStream;\n+import java.lang.reflect.Constructor;\n+import java.lang.reflect.Modifier;\n import java.util.HashMap;\n import java.util.Locale;\n import java.util.Map;\n@@ -77,20 +81,50 @@\n \n public class RequestTests extends ESTestCase {\n \n+ public void testConstructor() throws Exception {\n+ final String method = randomFrom(\"GET\", \"PUT\", \"POST\", \"HEAD\", \"DELETE\");\n+ final String endpoint = randomAlphaOfLengthBetween(1, 10);\n+ final Map<String, String> parameters = singletonMap(randomAlphaOfLength(5), randomAlphaOfLength(5));\n+ final HttpEntity entity = randomBoolean() ? new StringEntity(randomAlphaOfLengthBetween(1, 100), ContentType.TEXT_PLAIN) : null;\n+\n+ NullPointerException e = expectThrows(NullPointerException.class, () -> new Request(null, endpoint, parameters, entity));\n+ assertEquals(\"method cannot be null\", e.getMessage());\n+\n+ e = expectThrows(NullPointerException.class, () -> new Request(method, null, parameters, entity));\n+ assertEquals(\"endpoint cannot be null\", e.getMessage());\n+\n+ e = expectThrows(NullPointerException.class, () -> new Request(method, endpoint, null, entity));\n+ assertEquals(\"parameters cannot be null\", e.getMessage());\n+\n+ final Request request = new Request(method, endpoint, parameters, entity);\n+ assertEquals(method, request.getMethod());\n+ assertEquals(endpoint, request.getEndpoint());\n+ assertEquals(parameters, request.getParameters());\n+ assertEquals(entity, request.getEntity());\n+\n+ final Constructor<?>[] constructors = Request.class.getConstructors();\n+ assertEquals(\"Expected only 1 constructor\", 1, constructors.length);\n+ assertTrue(\"Request constructor is not public\", Modifier.isPublic(constructors[0].getModifiers()));\n+ }\n+\n+ public void testClassVisibility() throws Exception {\n+ assertTrue(\"Request class is not public\", Modifier.isPublic(Request.class.getModifiers()));\n+ }\n+\n public void testPing() {\n Request request = Request.ping();\n- assertEquals(\"/\", request.endpoint);\n- assertEquals(0, request.params.size());\n- assertNull(request.entity);\n- assertEquals(\"HEAD\", request.method);\n+ assertEquals(\"/\", request.getEndpoint());\n+ assertEquals(0, request.getParameters().size());\n+ assertNull(request.getEntity());\n+ assertEquals(\"HEAD\", request.getMethod());\n }\n \n public void testInfo() {\n Request request = Request.info();\n- assertEquals(\"/\", request.endpoint);\n- assertEquals(0, request.params.size());\n- assertNull(request.entity);\n- assertEquals(\"GET\", request.method);\n+ assertEquals(\"/\", request.getEndpoint());\n+ assertEquals(0, request.getParameters().size());\n+ assertNull(request.getEntity());\n+ assertEquals(\"GET\", request.getMethod());\n }\n \n public void testGet() {\n@@ -124,10 +158,10 @@ public void testDelete() throws IOException {\n }\n \n Request request = Request.delete(deleteRequest);\n- assertEquals(\"/\" + index + \"/\" + type + \"/\" + id, request.endpoint);\n- assertEquals(expectedParams, request.params);\n- assertEquals(\"DELETE\", request.method);\n- assertNull(request.entity);\n+ assertEquals(\"/\" + index + \"/\" + type + 
\"/\" + id, request.getEndpoint());\n+ assertEquals(expectedParams, request.getParameters());\n+ assertEquals(\"DELETE\", request.getMethod());\n+ assertNull(request.getEntity());\n }\n \n public void testExists() {\n@@ -200,10 +234,10 @@ private static void getAndExistsTest(Function<GetRequest, Request> requestConver\n }\n }\n Request request = requestConverter.apply(getRequest);\n- assertEquals(\"/\" + index + \"/\" + type + \"/\" + id, request.endpoint);\n- assertEquals(expectedParams, request.params);\n- assertNull(request.entity);\n- assertEquals(method, request.method);\n+ assertEquals(\"/\" + index + \"/\" + type + \"/\" + id, request.getEndpoint());\n+ assertEquals(expectedParams, request.getParameters());\n+ assertNull(request.getEntity());\n+ assertEquals(method, request.getMethod());\n }\n \n public void testIndex() throws IOException {\n@@ -267,16 +301,16 @@ public void testIndex() throws IOException {\n \n Request request = Request.index(indexRequest);\n if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) {\n- assertEquals(\"/\" + index + \"/\" + type + \"/\" + id + \"/_create\", request.endpoint);\n+ assertEquals(\"/\" + index + \"/\" + type + \"/\" + id + \"/_create\", request.getEndpoint());\n } else if (id != null) {\n- assertEquals(\"/\" + index + \"/\" + type + \"/\" + id, request.endpoint);\n+ assertEquals(\"/\" + index + \"/\" + type + \"/\" + id, request.getEndpoint());\n } else {\n- assertEquals(\"/\" + index + \"/\" + type, request.endpoint);\n+ assertEquals(\"/\" + index + \"/\" + type, request.getEndpoint());\n }\n- assertEquals(expectedParams, request.params);\n- assertEquals(method, request.method);\n+ assertEquals(expectedParams, request.getParameters());\n+ assertEquals(method, request.getMethod());\n \n- HttpEntity entity = request.entity;\n+ HttpEntity entity = request.getEntity();\n assertTrue(entity instanceof ByteArrayEntity);\n assertEquals(indexRequest.getContentType().mediaTypeWithoutParameters(), entity.getContentType().getValue());\n try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) {\n@@ -367,11 +401,11 @@ public void testUpdate() throws IOException {\n }\n \n Request request = Request.update(updateRequest);\n- assertEquals(\"/\" + index + \"/\" + type + \"/\" + id + \"/_update\", request.endpoint);\n- assertEquals(expectedParams, request.params);\n- assertEquals(\"POST\", request.method);\n+ assertEquals(\"/\" + index + \"/\" + type + \"/\" + id + \"/_update\", request.getEndpoint());\n+ assertEquals(expectedParams, request.getParameters());\n+ assertEquals(\"POST\", request.getMethod());\n \n- HttpEntity entity = request.entity;\n+ HttpEntity entity = request.getEntity();\n assertTrue(entity instanceof ByteArrayEntity);\n \n UpdateRequest parsedUpdateRequest = new UpdateRequest();\n@@ -485,12 +519,12 @@ public void testBulk() throws IOException {\n }\n \n Request request = Request.bulk(bulkRequest);\n- assertEquals(\"/_bulk\", request.endpoint);\n- assertEquals(expectedParams, request.params);\n- assertEquals(\"POST\", request.method);\n- assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n- byte[] content = new byte[(int) request.entity.getContentLength()];\n- try (InputStream inputStream = request.entity.getContent()) {\n+ assertEquals(\"/_bulk\", request.getEndpoint());\n+ assertEquals(expectedParams, request.getParameters());\n+ assertEquals(\"POST\", request.getMethod());\n+ assertEquals(xContentType.mediaTypeWithoutParameters(), 
request.getEntity().getContentType().getValue());\n+ byte[] content = new byte[(int) request.getEntity().getContentLength()];\n+ try (InputStream inputStream = request.getEntity().getContent()) {\n Streams.readFully(inputStream, content);\n }\n \n@@ -541,7 +575,7 @@ public void testBulkWithDifferentContentTypes() throws IOException {\n bulkRequest.add(new DeleteRequest(\"index\", \"type\", \"2\"));\n \n Request request = Request.bulk(bulkRequest);\n- assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n+ assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());\n }\n {\n XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);\n@@ -551,7 +585,7 @@ public void testBulkWithDifferentContentTypes() throws IOException {\n bulkRequest.add(new DeleteRequest(\"index\", \"type\", \"2\"));\n \n Request request = Request.bulk(bulkRequest);\n- assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n+ assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());\n }\n {\n XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);\n@@ -563,7 +597,7 @@ public void testBulkWithDifferentContentTypes() throws IOException {\n }\n \n Request request = Request.bulk(new BulkRequest().add(updateRequest));\n- assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n+ assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());\n }\n {\n BulkRequest bulkRequest = new BulkRequest();\n@@ -712,12 +746,12 @@ public void testSearch() throws Exception {\n endpoint.add(type);\n }\n endpoint.add(\"_search\");\n- assertEquals(endpoint.toString(), request.endpoint);\n- assertEquals(expectedParams, request.params);\n+ assertEquals(endpoint.toString(), request.getEndpoint());\n+ assertEquals(expectedParams, request.getParameters());\n if (searchSourceBuilder == null) {\n- assertNull(request.entity);\n+ assertNull(request.getEntity());\n } else {\n- assertToXContentBody(searchSourceBuilder, request.entity);\n+ assertToXContentBody(searchSourceBuilder, request.getEntity());\n }\n }\n \n@@ -728,11 +762,11 @@ public void testSearchScroll() throws IOException {\n searchScrollRequest.scroll(randomPositiveTimeValue());\n }\n Request request = Request.searchScroll(searchScrollRequest);\n- assertEquals(\"GET\", request.method);\n- assertEquals(\"/_search/scroll\", request.endpoint);\n- assertEquals(0, request.params.size());\n- assertToXContentBody(searchScrollRequest, request.entity);\n- assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n+ assertEquals(\"GET\", request.getMethod());\n+ assertEquals(\"/_search/scroll\", request.getEndpoint());\n+ assertEquals(0, request.getParameters().size());\n+ assertToXContentBody(searchScrollRequest, request.getEntity());\n+ assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());\n }\n \n public void testClearScroll() throws IOException {\n@@ -742,11 +776,11 @@ public void testClearScroll() throws IOException {\n clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10));\n }\n Request request = Request.clearScroll(clearScrollRequest);\n- assertEquals(\"DELETE\", request.method);\n- assertEquals(\"/_search/scroll\", 
request.endpoint);\n- assertEquals(0, request.params.size());\n- assertToXContentBody(clearScrollRequest, request.entity);\n- assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());\n+ assertEquals(\"DELETE\", request.getMethod());\n+ assertEquals(\"/_search/scroll\", request.getEndpoint());\n+ assertEquals(0, request.getParameters().size());\n+ assertToXContentBody(clearScrollRequest, request.getEntity());\n+ assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());\n }\n \n private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException {",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java",
"status": "modified"
},
{
"diff": "@@ -36,7 +36,7 @@\n import static org.mockito.Mockito.mock;\n \n /**\n- * This test works against a {@link RestHighLevelClient} subclass that simulats how custom response sections returned by\n+ * This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by\n * Elasticsearch plugins can be parsed using the high level client.\n */\n public class RestHighLevelClientExtTests extends ESTestCase {",
"filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java",
"status": "modified"
}
]
} |
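As a usage note for the record above: once `Request` is public, a subclass living outside `org.elasticsearch.client` can at least construct and inspect requests. The sketch below only exercises the constructor and getters shown in the diff; the package name, endpoint and body are invented for illustration and are not from the PR.

```java
package com.example.client; // hypothetical package outside org.elasticsearch.client

import java.util.Collections;

import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.elasticsearch.client.Request;

// Sketch: builds a Request from an external package using the now-public constructor.
public class RequestFromOutsideSketch {
    public static void main(String[] args) {
        Request request = new Request(
                "POST",
                "/my-index/_custom_endpoint",                 // hypothetical endpoint
                Collections.singletonMap("pretty", "true"),
                new StringEntity("{\"query\":{\"match_all\":{}}}", ContentType.APPLICATION_JSON));

        System.out.println(request.getMethod() + " " + request.getEndpoint()
                + " params=" + request.getParameters()
                + " hasBody=" + (request.getEntity() != null));
    }
}
```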
{
"body": "This pre-built token filter is inconsistent with other pre-built token filters since it includes `filter` in its name while the other ones don't.",
"comments": [
{
"body": "Hai! Can I kindly take up this issue and submit a PR?",
"created_at": "2017-01-21T08:15:00Z"
},
{
"body": "Thanks for showing interest in contributing to Elasticsearch.\r\n\r\nHere's a guide to how to go about it: https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md",
"created_at": "2017-01-23T18:31:13Z"
},
{
"body": "Should this issue be closed after the PR ?",
"created_at": "2017-08-21T08:47:52Z"
}
],
"number": 21978,
"title": "Rename `delimiter_payload_filter` to `delimiter_payload`."
} | {
"body": "Closes #21978 . Any comments are appreciated. 😄 ",
"number": 26625,
"review_comments": [
{
"body": "I think the deprecation also needs to happen in this PR. Using the old name, either in the settings when creating an index or when using it in an already existing index should trigger a deprecation warning and an entry in the logs.",
"created_at": "2017-10-30T09:51:52Z"
},
{
"body": "I pushed 7e7c54dce07cb806477e34ff81f2b46ea87edfb3 ",
"created_at": "2017-11-02T12:48:09Z"
},
{
"body": "Maybe rename it to something like `LegacyDelimitedPayloadTokenFilterFactory` but other than that, this is indeed the approach I was thinking of.",
"created_at": "2017-11-22T07:27:10Z"
},
{
"body": "Thanks!",
"created_at": "2017-11-22T07:39:01Z"
},
{
"body": "Maybe we should make it clearer that the filter is just renamed, and its only the old name that is deprecated, something like \"The `delimited_payload_filter` is renamed to `delimited_payload`, the old name is deprecated and will be removed at some point, so it should be replaces by `delimited_payload`\" or slightly different.",
"created_at": "2017-11-22T12:09:20Z"
}
],
"title": "Replace delimited_payload_filter by delimited_payload"
} | {
"commits": [
{
"message": "Replace delimited_payload_filter by delimited_payload"
},
{
"message": "Add deprecation log for delimited_payload_filter"
},
{
"message": "Add a wrapper of token filter factory for delimited_payload_filter"
},
{
"message": "Fix naming of legacy delimited payload filter factory class"
},
{
"message": "Merge master"
},
{
"message": "Update document wording"
},
{
"message": "Fix code style"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into rebane-delimited_payload_filter"
}
],
"files": [
{
"diff": "@@ -300,6 +300,7 @@ public String toString() {\n };\n }\n \n+ @SuppressWarnings(\"unchecked\")\n private <T> Map<String, T> buildMapping(Component component, IndexSettings settings, Map<String, Settings> settingsMap,\n Map<String, ? extends AnalysisModule.AnalysisProvider<T>> providerMap,\n Map<String, ? extends AnalysisModule.AnalysisProvider<T>> defaultInstance) throws IOException {",
"filename": "core/src/main/java/org/elasticsearch/index/analysis/AnalysisRegistry.java",
"status": "modified"
},
{
"diff": "@@ -152,10 +152,10 @@ public void testRandomPayloadWithDelimitedPayloadTokenFilter() throws IOExceptio\n .field(\"analyzer\", \"payload_test\").endObject().endObject().endObject().endObject();\n Settings setting = Settings.builder()\n .put(\"index.analysis.analyzer.payload_test.tokenizer\", \"whitespace\")\n- .putList(\"index.analysis.analyzer.payload_test.filter\", \"my_delimited_payload_filter\")\n- .put(\"index.analysis.filter.my_delimited_payload_filter.delimiter\", delimiter)\n- .put(\"index.analysis.filter.my_delimited_payload_filter.encoding\", encodingString)\n- .put(\"index.analysis.filter.my_delimited_payload_filter.type\", \"mock_payload_filter\").build();\n+ .putList(\"index.analysis.analyzer.payload_test.filter\", \"my_delimited_payload\")\n+ .put(\"index.analysis.filter.my_delimited_payload.delimiter\", delimiter)\n+ .put(\"index.analysis.filter.my_delimited_payload.encoding\", encodingString)\n+ .put(\"index.analysis.filter.my_delimited_payload.type\", \"mock_payload_filter\").build();\n createIndex(\"test\", setting, \"type1\", mapping);\n \n client().prepareIndex(\"test\", \"type1\", Integer.toString(1))",
"filename": "core/src/test/java/org/elasticsearch/action/termvectors/GetTermVectorsTests.java",
"status": "modified"
},
{
"diff": "@@ -1,7 +1,7 @@\n [[analysis-delimited-payload-tokenfilter]]\n === Delimited Payload Token Filter\n \n-Named `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n+Named `delimited_payload`. Splits tokens into tokens and payload whenever a delimiter character is found.\n \n Example: \"the|1 quick|2 fox|3\" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.\n ",
"filename": "docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@ way to reindex old indices is to use the `reindex` API.\n * <<breaking_70_mappings_changes>>\n * <<breaking_70_search_changes>>\n * <<breaking_70_plugins_changes>>\n+* <<breaking_70_analysis_changes>>\n * <<breaking_70_api_changes>>\n \n ",
"filename": "docs/reference/migration/migrate_7_0.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,8 @@\n+[[breaking_70_analysis_changes]]\n+=== Analysis changes\n+\n+==== The `delimited_payload_filter` is renamed\n+\n+The `delimited_payload_filter` is renamed to `delimited_payload`, the old name is \n+deprecated and will be removed at some point, so it should be replaced by \n+`delimited_payload`.",
"filename": "docs/reference/migration/migrate_7_0/analysis.asciidoc",
"status": "added"
},
{
"diff": "@@ -103,7 +103,8 @@ public Map<String, AnalysisProvider<TokenFilterFactory>> getTokenFilters() {\n filters.put(\"czech_stem\", CzechStemTokenFilterFactory::new);\n filters.put(\"common_grams\", requriesAnalysisSettings(CommonGramsTokenFilterFactory::new));\n filters.put(\"decimal_digit\", DecimalDigitFilterFactory::new);\n- filters.put(\"delimited_payload_filter\", DelimitedPayloadTokenFilterFactory::new);\n+ filters.put(\"delimited_payload_filter\", LegacyDelimitedPayloadTokenFilterFactory::new);\n+ filters.put(\"delimited_payload\", DelimitedPayloadTokenFilterFactory::new);\n filters.put(\"dictionary_decompounder\", requriesAnalysisSettings(DictionaryCompoundWordTokenFilterFactory::new));\n filters.put(\"dutch_stem\", DutchStemTokenFilterFactory::new);\n filters.put(\"edge_ngram\", EdgeNGramTokenFilterFactory::new);\n@@ -195,6 +196,10 @@ public List<PreConfiguredTokenFilter> getPreConfiguredTokenFilters() {\n new DelimitedPayloadTokenFilter(input,\n DelimitedPayloadTokenFilterFactory.DEFAULT_DELIMITER,\n DelimitedPayloadTokenFilterFactory.DEFAULT_ENCODER)));\n+ filters.add(PreConfiguredTokenFilter.singleton(\"delimited_payload\", false, input ->\n+ new DelimitedPayloadTokenFilter(input,\n+ DelimitedPayloadTokenFilterFactory.DEFAULT_DELIMITER,\n+ DelimitedPayloadTokenFilterFactory.DEFAULT_ENCODER)));\n filters.add(PreConfiguredTokenFilter.singleton(\"dutch_stem\", false, input -> new SnowballFilter(input, new DutchStemmer())));\n filters.add(PreConfiguredTokenFilter.singleton(\"edge_ngram\", false, input ->\n new EdgeNGramTokenFilter(input, EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE, EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE)));",
"filename": "modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,39 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.analysis.common;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.index.IndexSettings;\n+\n+public class LegacyDelimitedPayloadTokenFilterFactory extends DelimitedPayloadTokenFilterFactory {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(LegacyDelimitedPayloadTokenFilterFactory.class));\n+\n+ LegacyDelimitedPayloadTokenFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) {\n+ super(indexSettings, env, name, settings);\n+ if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ DEPRECATION_LOGGER.deprecated(\"Deprecated [delimited_payload_filter] used, replaced by [delimited_payload]\");\n+ }\n+ }\n+}",
"filename": "modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/LegacyDelimitedPayloadTokenFilterFactory.java",
"status": "added"
},
{
"diff": "@@ -170,6 +170,7 @@ protected Map<String, Class<?>> getPreConfiguredTokenFilters() {\n filters.put(\"czech_stem\", null);\n filters.put(\"decimal_digit\", null);\n filters.put(\"delimited_payload_filter\", org.apache.lucene.analysis.payloads.DelimitedPayloadTokenFilterFactory.class);\n+ filters.put(\"delimited_payload\", org.apache.lucene.analysis.payloads.DelimitedPayloadTokenFilterFactory.class);\n filters.put(\"dutch_stem\", SnowballPorterFilterFactory.class);\n filters.put(\"edge_ngram\", null);\n filters.put(\"edgeNGram\", null);",
"filename": "modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/CommonAnalysisFactoryTests.java",
"status": "modified"
},
{
"diff": "@@ -1027,7 +1027,14 @@\n \n ---\n \"delimited_payload_filter\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: delimited_payload_filter deprecated in 7.0, replaced by delimited_payload\n+ features: \"warnings\"\n+\n - do:\n+ warnings:\n+ - \"Deprecated [delimited_payload_filter] used, replaced by [delimited_payload]\"\n indices.create:\n index: test\n body:\n@@ -1039,6 +1046,8 @@\n delimiter: ^\n encoding: identity\n - do:\n+ warnings:\n+ - \"Deprecated [delimited_payload_filter] used, replaced by [delimited_payload]\"\n indices.analyze:\n index: test\n body:\n@@ -1050,6 +1059,8 @@\n \n # Test pre-configured token filter too:\n - do:\n+ warnings:\n+ - \"Deprecated [delimited_payload_filter] used, replaced by [delimited_payload]\"\n indices.analyze:\n body:\n text: foo|5\n@@ -1058,6 +1069,39 @@\n - length: { tokens: 1 }\n - match: { tokens.0.token: foo }\n \n+---\n+\"delimited_payload\":\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ settings:\n+ analysis:\n+ filter:\n+ my_delimited_payload:\n+ type: delimited_payload\n+ delimiter: ^\n+ encoding: identity\n+ - do:\n+ indices.analyze:\n+ index: test\n+ body:\n+ text: foo^bar\n+ tokenizer: keyword\n+ filter: [my_delimited_payload]\n+ - length: { tokens: 1 }\n+ - match: { tokens.0.token: foo }\n+\n+ # Test pre-configured token filter too:\n+ - do:\n+ indices.analyze:\n+ body:\n+ text: foo|5\n+ tokenizer: keyword\n+ filter: [delimited_payload]\n+ - length: { tokens: 1 }\n+ - match: { tokens.0.token: foo }\n+\n ---\n \"keep_filter\":\n - do:",
"filename": "modules/analysis-common/src/test/resources/rest-api-spec/test/analysis-common/40_token_filters.yml",
"status": "modified"
}
]
} |
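For completeness, the analysis settings used by the YAML test in the diff above can also be expressed with the Java `Settings` builder, using the new filter name. This is only a sketch of the settings keys (the filter and analyzer names mirror the test); it does not create an index.

```java
import org.elasticsearch.common.settings.Settings;

// Sketch: index analysis settings using the renamed delimited_payload token filter.
public class DelimitedPayloadSettingsSketch {
    public static void main(String[] args) {
        Settings analysis = Settings.builder()
                .put("index.analysis.filter.my_delimited_payload.type", "delimited_payload")
                .put("index.analysis.filter.my_delimited_payload.delimiter", "^")
                .put("index.analysis.filter.my_delimited_payload.encoding", "identity")
                .put("index.analysis.analyzer.payload_test.tokenizer", "whitespace")
                .putList("index.analysis.analyzer.payload_test.filter", "my_delimited_payload")
                .build();

        // Prints "delimited_payload": the new, non-deprecated filter type name.
        System.out.println(analysis.get("index.analysis.filter.my_delimited_payload.type"));
    }
}
```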
{
"body": "This can be confusing since `disjoint` means the opposite of `intersects`.",
"comments": [
{
"body": "Hi @jpountz Seems like this is fixed in #26552 What need to do to fix this? Thanks.",
"created_at": "2017-09-13T02:47:03Z"
},
{
"body": "@liketic I don't think it is fixed. Documentation states that the relation can be of of `intersects`, `contains` and `within` (https://www.elastic.co/guide/en/elasticsearch/reference/current/range.html) so we need to fix `RangeQueryBuilder` to reject other values.",
"created_at": "2017-09-13T07:18:41Z"
}
],
"number": 26575,
"title": "Range queries on range fields accept `relation: disjoint` but treat it as `relation: intersects`"
} | {
"body": "<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\nCloses #26575\r\n@jpountz Please help to review. Happy to make further changes. Thanks. \r\n",
"number": 26620,
"review_comments": [
{
"body": "It feels a bit wrong to me to make ShapeRelation aware of how it is used in range queries. I'd rather put this logic in RangeQueryBuilder.",
"created_at": "2017-09-13T09:48:46Z"
},
{
"body": "Make sense! Very thanks!",
"created_at": "2017-09-13T10:05:40Z"
}
],
"title": "Filter unsupported relation for RangeQueryBuilder"
} | {
"commits": [
{
"message": "Filter unsupported relation for range query builder"
}
],
"files": [
{
"diff": "@@ -115,10 +115,20 @@ public RangeQueryBuilder(StreamInput in) throws IOException {\n String relationString = in.readOptionalString();\n if (relationString != null) {\n relation = ShapeRelation.getRelationByName(relationString);\n+ if (relation != null && !isRelationAllowed(relation)) {\n+ throw new IllegalArgumentException(\n+ \"[range] query does not support relation [\" + relationString + \"]\");\n+ }\n }\n }\n }\n \n+ private boolean isRelationAllowed(ShapeRelation relation) {\n+ return relation == ShapeRelation.INTERSECTS\n+ || relation == ShapeRelation.CONTAINS\n+ || relation == ShapeRelation.WITHIN;\n+ }\n+\n @Override\n protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeString(this.fieldName);\n@@ -317,6 +327,9 @@ public RangeQueryBuilder relation(String relation) {\n if (this.relation == null) {\n throw new IllegalArgumentException(relation + \" is not a valid relation\");\n }\n+ if (!isRelationAllowed(this.relation)) {\n+ throw new IllegalArgumentException(\"[range] query does not support relation [\" + relation + \"]\");\n+ }\n return this;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/RangeQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.apache.lucene.search.TermRangeQuery;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.ParsingException;\n+import org.elasticsearch.common.geo.ShapeRelation;\n import org.elasticsearch.common.lucene.BytesRefs;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n import org.elasticsearch.index.mapper.FieldNamesFieldMapper;\n@@ -535,4 +536,29 @@ public void testParseFailsWithMultipleFieldsWhenOneIsDate() {\n ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(json));\n assertEquals(\"[range] query doesn't support multiple fields, found [age] and [\" + DATE_FIELD_NAME + \"]\", e.getMessage());\n }\n+\n+ public void testParseRelation() {\n+ String json =\n+ \"{\\n\" +\n+ \" \\\"range\\\": {\\n\" +\n+ \" \\\"age\\\": {\\n\" +\n+ \" \\\"gte\\\": 30,\\n\" +\n+ \" \\\"lte\\\": 40,\\n\" +\n+ \" \\\"relation\\\": \\\"disjoint\\\"\\n\" +\n+ \" }\" +\n+ \" }\\n\" +\n+ \" }\";\n+ String fieldName = randomAlphaOfLengthBetween(1, 20);\n+ IllegalArgumentException e1 = expectThrows(IllegalArgumentException.class, () -> parseQuery(json));\n+ assertEquals(\"[range] query does not support relation [disjoint]\", e1.getMessage());\n+ RangeQueryBuilder builder = new RangeQueryBuilder(fieldName);\n+ IllegalArgumentException e2 = expectThrows(IllegalArgumentException.class, ()->builder.relation(\"disjoint\"));\n+ assertEquals(\"[range] query does not support relation [disjoint]\", e2.getMessage());\n+ builder.relation(\"contains\");\n+ assertEquals(ShapeRelation.CONTAINS, builder.relation());\n+ builder.relation(\"within\");\n+ assertEquals(ShapeRelation.WITHIN, builder.relation());\n+ builder.relation(\"intersects\");\n+ assertEquals(ShapeRelation.INTERSECTS, builder.relation());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryBuilderTests.java",
"status": "modified"
}
]
} |
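To illustrate the behaviour this record describes: with the validation added in the diff above, only `intersects`, `contains` and `within` are accepted on a range query against a range field, and `disjoint` is rejected up front. A small sketch, assuming the builder methods shown in the test diff:

```java
import org.elasticsearch.index.query.RangeQueryBuilder;

// Sketch: the relation setter now rejects "disjoint" instead of silently treating it as "intersects".
public class RangeRelationSketch {
    public static void main(String[] args) {
        RangeQueryBuilder range = new RangeQueryBuilder("age").gte(30).lte(40);

        range.relation("intersects"); // accepted
        range.relation("contains");   // accepted
        range.relation("within");     // accepted

        try {
            range.relation("disjoint"); // throws IllegalArgumentException after this PR
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // [range] query does not support relation [disjoint]
        }
    }
}
```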
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.5.0\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): Java 8 u131\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n\r\n## Create index in 2.x (I used 2.3.4)\r\n\r\n```\r\nDELETE test1\r\nDELETE test2\r\n\r\nPUT test1\r\n{\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"testfield\": {\r\n \"type\": \"string\", \"index\": \"not_analyzed\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT test2\r\n{\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"testfield\": {\r\n \"type\": \"string\", \"index\": \"not_analyzed\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST test1/test\r\n{\r\n \"testfield\": \"my test\"\r\n}\r\n\r\n# Don't index anything into test2\r\n\r\nPOST test*/_search\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"sort\": [\r\n {\r\n \"testfield\": {\r\n \"order\": \"asc\",\r\n \"unmapped_type\": \"string\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\n## Upgrade to 5.x (I used 5.5.0)\r\n\r\nThe query above, now results in: (Also if I change to \"unmapped_type\": \"keyword\")\r\n\r\n```\r\n{\r\n \"took\": 19,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 10,\r\n \"successful\": 5,\r\n \"failed\": 5,\r\n \"failures\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"test2\",\r\n \"node\": \"JBHJ1HPYQbKMOxJ1hSotrw\",\r\n \"reason\": {\r\n \"type\": \"unsupported_operation_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": null,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test1\",\r\n \"_type\": \"test\",\r\n \"_id\": \"AV3Qh0G3rdu9hzgAfZn2\",\r\n \"_score\": null,\r\n \"_source\": {\r\n \"testfield\": \"my test\"\r\n },\r\n \"sort\": [\r\n \"my test\"\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n```\r\nand is producing this stack trace:\r\n\r\n```\r\n[2017-08-11T11:13:16,153][DEBUG][o.e.a.s.TransportSearchAction] [JBHJ1HP] [test2][0], node[JBHJ1HPYQbKMOxJ1hSotrw], [P], s[STARTED], a[id=OCMZA_bHTA-cdS6nEHIBgA]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[test*], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"sort\" : [\r\n {\r\n \"testfield\" : {\r\n \"order\" : \"asc\",\r\n \"unmapped_type\" : \"keyword\"\r\n }\r\n }\r\n ]\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [JBHJ1HP][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.UnsupportedOperationException\r\n\tat java.util.AbstractMap.put(AbstractMap.java:209) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.index.mapper.KeywordFieldMapper$TypeParser.parse(KeywordFieldMapper.java:152) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.index.mapper.MapperService.unmappedFieldType(MapperService.java:727) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:260) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat 
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:481) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:457) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:253) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-08-11T11:13:16,155][DEBUG][o.e.a.s.TransportSearchAction] [JBHJ1HP] [test2][2], node[JBHJ1HPYQbKMOxJ1hSotrw], [P], s[STARTED], a[id=kkIHhfYIRf6vXlBlebIpgg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[test*], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"sort\" : [\r\n {\r\n \"testfield\" : {\r\n \"order\" : \"asc\",\r\n \"unmapped_type\" : \"keyword\"\r\n }\r\n }\r\n ]\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [JBHJ1HP][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.UnsupportedOperationException\r\n\tat java.util.AbstractMap.put(AbstractMap.java:209) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.index.mapper.KeywordFieldMapper$TypeParser.parse(KeywordFieldMapper.java:152) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.index.mapper.MapperService.unmappedFieldType(MapperService.java:727) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:260) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.parseSource(SearchService.java:630) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:481) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:457) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:253) 
~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-08-11T11:13:16,157][DEBUG][o.e.a.s.TransportSearchAction] [JBHJ1HP] [test2][3], node[JBHJ1HPYQbKMOxJ1hSotrw], [P], s[STARTED], a[id=GyLowxvgSeO8ZabUK13b4g]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[test*], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"sort\" : [\r\n {\r\n \"testfield\" : {\r\n \"order\" : \"asc\",\r\n \"unmapped_type\" : \"keyword\"\r\n }\r\n }\r\n ]\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [JBHJ1HP][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.UnsupportedOperationException\r\n\tat java.util.AbstractMap.put(AbstractMap.java:209) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.index.mapper.KeywordFieldMapper$TypeParser.parse(KeywordFieldMapper.java:152) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.index.mapper.MapperService.unmappedFieldType(MapperService.java:727) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:260) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.parseSource(SearchService.java:630) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:481) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:457) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:253) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) 
~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-08-11T11:13:16,161][DEBUG][o.e.a.s.TransportSearchAction] [JBHJ1HP] [test2][1], node[JBHJ1HPYQbKMOxJ1hSotrw], [P], s[STARTED], a[id=WEeR_WDhSW-Uxkys3ChfQg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[test*], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"sort\" : [\r\n {\r\n \"testfield\" : {\r\n \"order\" : \"asc\",\r\n \"unmapped_type\" : \"keyword\"\r\n }\r\n }\r\n ]\r\n}}] lastShard [true]\r\norg.elasticsearch.transport.RemoteTransportException: [JBHJ1HP][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.UnsupportedOperationException\r\n\tat java.util.AbstractMap.put(AbstractMap.java:209) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.index.mapper.KeywordFieldMapper$TypeParser.parse(KeywordFieldMapper.java:152) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.index.mapper.MapperService.unmappedFieldType(MapperService.java:727) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:260) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.parseSource(SearchService.java:630) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:481) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:457) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:253) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-08-11T11:13:16,167][DEBUG][o.e.a.s.TransportSearchAction] [JBHJ1HP] [test2][4], node[JBHJ1HPYQbKMOxJ1hSotrw], [P], s[STARTED], a[id=ka-M_4BJStSmPorrvv2_Cw]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[test*], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, source={\r\n \"query\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"sort\" : [\r\n {\r\n \"testfield\" : {\r\n \"order\" : \"asc\",\r\n \"unmapped_type\" : \"keyword\"\r\n }\r\n }\r\n ]\r\n}}]\r\norg.elasticsearch.transport.RemoteTransportException: [JBHJ1HP][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: java.lang.UnsupportedOperationException\r\n\tat java.util.AbstractMap.put(AbstractMap.java:209) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.index.mapper.KeywordFieldMapper$TypeParser.parse(KeywordFieldMapper.java:152) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.index.mapper.MapperService.unmappedFieldType(MapperService.java:727) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:260) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:156) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.parseSource(SearchService.java:630) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createContext(SearchService.java:481) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:457) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:253) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:330) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:327) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.5.0.jar:5.5.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\r\n```\r\n\r\nAs soon as I index a doc in `test2`, the error goes away and the query is working:\r\n\r\n```\r\nPOST test2/test\r\n{\r\n \"testfield\": \"my test\"\r\n}\r\n\r\nPOST test*/_search\r\n{\r\n 
\"query\": {\r\n \"match_all\": {}\r\n },\r\n \"sort\": [\r\n {\r\n \"testfield\": {\r\n \"order\": \"asc\",\r\n \"unmapped_type\": \"keyword\"\r\n }\r\n }\r\n ]\r\n}\r\n``` \r\nnow results in:\r\n\r\n```\r\n{\r\n \"took\": 8,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 10,\r\n \"successful\": 10,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": null,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test1\",\r\n \"_type\": \"test\",\r\n \"_id\": \"AV3Qh0G3rdu9hzgAfZn2\",\r\n \"_score\": null,\r\n \"_source\": {\r\n \"testfield\": \"my test\"\r\n },\r\n \"sort\": [\r\n \"my test\"\r\n ]\r\n },\r\n {\r\n \"_index\": \"test2\",\r\n \"_type\": \"test\",\r\n \"_id\": \"AV3Qku9k6j_T2EVH4QtG\",\r\n \"_score\": null,\r\n \"_source\": {\r\n \"testfield\": \"my test\"\r\n },\r\n \"sort\": [\r\n \"my test\"\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n```",
"comments": [
{
"body": "Note: The point of referencing 2.3.4 in this issue is not to show \"this used to work, now it doesn't\".\r\nIt seems the bug only manifests itself on indexes that have undergone a 2.3.4 - > 5.x upgrade step. I tried setting up these indices from fresh on 5.5 without performing the 2.3.4 upgrade and queries were OK. \r\nI suspect it's quite an edge case with the combination of default mappings, upgrades and querying empty indexes to consider - is this affecting any production systems?",
"created_at": "2017-08-11T10:19:33Z"
},
{
"body": "Thanks @markharwood \r\n\r\n> Note: The point of referencing 2.3.4 in this issue is not to show \"this used to work, now it doesn't\".\r\nIt seems the bug only manifests itself on indexes that have undergone a 2.3.4 - > 5.x upgrade step. I tried setting up these indices from fresh on 5.5 without performing the 2.3.4 upgrade and queries were OK.\r\n\r\nYes, sorry for not making this clear. It only affects upgrades. If you create an index in 5.x, it will convert the string/not_analyzed to type keyword in the background. \r\n\r\n> I suspect it's quite an edge case with the combination of default mappings, upgrades and querying empty indexes to consider - is this affecting any production systems?\r\n\r\nIt was affecting my dev env, but it could also have happened on the prod system.\r\nYes, it's quite an edge case, but I'm not sure what the root cause is and if it could have other effects as well. ",
"created_at": "2017-08-11T11:29:02Z"
},
{
"body": "Ran into the same problem. Workaround in https://github.com/elastic/kibana/issues/13950 fixed my Kibana index. Thanks again @jakommo .",
"created_at": "2017-09-12T12:11:08Z"
},
{
"body": "This will be a bug that keeps coming back, until 5.6.1 is released. Any chance of doing this ASAP?",
"created_at": "2017-09-13T06:41:07Z"
},
{
"body": "5.6.1 is out, so I guess this is a non-issue now?",
"created_at": "2017-09-19T12:43:39Z"
},
{
"body": "Closed by #26602",
"created_at": "2017-09-19T13:15:01Z"
}
],
"number": 26162,
"title": "Index created on 2.x causing \"unsupported_operation_exception\" in 5.x when used for sorting"
} | {
"body": "Setting unmapped_type to `keyword` to sort an index created in 2.x throws an UnsupportedOperationException.\r\n\r\nFixes #26162",
"number": 26602,
"review_comments": [
{
"body": "Can you duplicate this test into one that tests 2.x indices and the other one that tests 5.x indices so that we have 100% coverage with every test run?",
"created_at": "2017-09-12T13:41:04Z"
},
{
"body": "same comment here as for FieldSortIT",
"created_at": "2017-09-12T13:41:27Z"
},
{
"body": "@jimczi Can you explain how this fixes the issue? Just curious.",
"created_at": "2017-09-14T18:30:29Z"
}
],
"title": "Fix unmapped_type creation for indices created in 2.x "
} | {
"commits": [
{
"message": "Fix unmapped_type for indices created in 2.x\n\nSetting unmapped_type to `keyword` to sort an index created in 2.x throws an NPE.\n\nFixes #26162"
},
{
"message": "add test"
},
{
"message": "make sure we always test BWC with indices created in 2.x"
},
{
"message": "fix test"
}
],
"files": [
{
"diff": "@@ -724,7 +724,7 @@ public MappedFieldType unmappedFieldType(String type) {\n if (typeParser == null) {\n throw new IllegalArgumentException(\"No mapper found for type [\" + type + \"]\");\n }\n- final Mapper.Builder<?, ?> builder = typeParser.parse(\"__anonymous_\" + type, emptyMap(), parserContext);\n+ final Mapper.Builder<?, ?> builder = typeParser.parse(\"__anonymous_\" + type, new HashMap<>(), parserContext);\n final BuilderContext builderContext = new BuilderContext(indexSettings.getSettings(), new ContentPath(1));\n fieldType = ((FieldMapper)builder.build(builderContext)).fieldType();\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.mapper;\n \n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -28,12 +29,16 @@\n import org.elasticsearch.index.mapper.KeywordFieldMapper.KeywordFieldType;\n import org.elasticsearch.index.mapper.MapperService.MergeReason;\n import org.elasticsearch.index.mapper.NumberFieldMapper.NumberFieldType;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n+import org.elasticsearch.test.InternalSettingsPlugin;\n+import org.elasticsearch.test.VersionUtils;\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n import java.io.UncheckedIOException;\n import java.util.Arrays;\n+import java.util.Collection;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n@@ -48,6 +53,11 @@\n \n public class MapperServiceTests extends ESSingleNodeTestCase {\n \n+ @Override\n+ protected Collection<Class<? extends Plugin>> getPlugins() {\n+ return Collections.singletonList(InternalSettingsPlugin.class);\n+ }\n+\n public void testTypeNameStartsWithIllegalDot() {\n String index = \"test-index\";\n String type = \".test-type\";\n@@ -165,11 +175,33 @@ public void testMappingDepthExceedsLimit() throws Throwable {\n }\n \n public void testUnmappedFieldType() {\n- MapperService mapperService = createIndex(\"index\").mapperService();\n- assertThat(mapperService.unmappedFieldType(\"keyword\"), instanceOf(KeywordFieldType.class));\n- assertThat(mapperService.unmappedFieldType(\"long\"), instanceOf(NumberFieldType.class));\n- // back compat\n- assertThat(mapperService.unmappedFieldType(\"string\"), instanceOf(KeywordFieldType.class));\n+ assertUnmappedFieldType(Version.CURRENT);\n+ }\n+\n+ public void testUnmappedFieldTypeBWC() {\n+ // test BWC with indices created in 2.x\n+ Version version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_2_4_6);\n+ assertUnmappedFieldType(version);\n+ }\n+\n+ private void assertUnmappedFieldType(Version version) {\n+ MapperService mapperService =\n+ createIndex(\"index\", Settings.builder().put(\"index.version.created\", version).build()).mapperService();\n+ if (version.after(Version.V_2_4_6)) {\n+ assertThat(mapperService.unmappedFieldType(\"keyword\"), instanceOf(KeywordFieldType.class));\n+ } else {\n+ assertThat(mapperService.unmappedFieldType(\"keyword\"), instanceOf(StringFieldType.class));\n+ }\n+ if (version.after(Version.V_2_4_6)) {\n+ assertThat(mapperService.unmappedFieldType(\"long\"), instanceOf(NumberFieldType.class));\n+ } else {\n+ assertThat(mapperService.unmappedFieldType(\"long\"), instanceOf(LegacyLongFieldMapper.LongFieldType.class));\n+ }\n+ if (version.after(Version.V_2_4_6)) {\n+ assertThat(mapperService.unmappedFieldType(\"string\"), instanceOf(KeywordFieldType.class));\n+ } else {\n+ assertThat(mapperService.unmappedFieldType(\"string\"), instanceOf(StringFieldType.class));\n+ }\n assertWarnings(\"[unmapped_type:string] should be replaced with [unmapped_type:keyword]\");\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java",
"status": "modified"
},
{
"diff": "@@ -23,12 +23,14 @@\n import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.TestUtil;\n import org.apache.lucene.util.UnicodeUtil;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -39,6 +41,7 @@\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalSettingsPlugin;\n+import org.elasticsearch.test.VersionUtils;\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n@@ -857,7 +860,18 @@ public void testSortMissingStrings() throws IOException {\n }\n \n public void testIgnoreUnmapped() throws Exception {\n- createIndex(\"test\");\n+ assertIgnoreUnmapped(Version.CURRENT);\n+ }\n+\n+ public void testIgnoreUnmappedBWC() throws Exception{\n+ // test BWC with indices created in 2.x\n+ Version version = VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.V_2_4_6);\n+ assertIgnoreUnmapped(version);\n+ }\n+\n+ private void assertIgnoreUnmapped(Version version) throws IOException {\n+ prepareCreate(\"test\")\n+ .setSettings(Settings.builder().put(indexSettings()).put(\"index.version.created\", version).build()).get();\n \n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"id\", \"1\")",
"filename": "core/src/test/java/org/elasticsearch/search/sort/FieldSortIT.java",
"status": "modified"
}
]
} |
{
"body": "\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.5.2\r\n\r\n**Plugins installed**: [analysis-icu analysis-smartcn ingest-geoip x-pack\r\nanalysis-kuromoji analysis-stempel ingest-user-agent\r\n]\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\nopenjdk version \"1.8.0_141\"\r\nOpenJDK Runtime Environment (build 1.8.0_141-b16)\r\nOpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\nLinux 4.9.47-1-lts #1 SMP Sat Sep 2 09:26:00 CEST 2017 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nI am trying to migrate from elasticsearch 2.4 to 5.x. Basically, everything is working as expected, but the part-of-speech filter does not remove the default stoptags which used to work alright before.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. create an index with the kuromoji tokenizer and a part-of-speech filter\r\n\r\n```bash\r\n$ http PUT :32769/kuromoji_sample <<<'{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"kuromoji_tokenizer\",\r\n \"filter\": [\r\n \"my_posfilter\"\r\n ]\r\n }\r\n },\r\n \"filter\": {\r\n \"my_posfilter\": {\r\n \"type\": \"kuromoji_part_of_speech\",\r\n \"stoptags\": [\r\n \"助詞-格助詞-一般\",\r\n \"助詞-終助詞\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n\r\nHTTP/1.1 200 OK\r\ncontent-encoding: gzip\r\ncontent-type: application/json; charset=UTF-8\r\ntransfer-encoding: chunked\r\n\r\n{\r\n \"acknowledged\": true,\r\n \"shards_acknowledged\": true\r\n}\r\n\r\n```\r\n 2. analyze the text \"寿司がおいしいね\"\r\n\r\n```bash\r\n$ http :32769/kuromoji_sample/_analyze analyzer=my_analyzer text=\"寿司がおいしいね\"\r\n\r\nHTTP/1.1 200 OK\r\ncontent-encoding: gzip\r\ncontent-type: application/json; charset=UTF-8\r\ntransfer-encoding: chunked\r\n\r\n{\r\n \"tokens\": [\r\n {\r\n \"end_offset\": 2,\r\n \"position\": 0,\r\n \"start_offset\": 0,\r\n \"token\": \"寿司\",\r\n \"type\": \"word\"\r\n },\r\n {\r\n \"end_offset\": 7,\r\n \"position\": 2,\r\n \"start_offset\": 3,\r\n \"token\": \"おいしい\",\r\n \"type\": \"word\"\r\n }\r\n ]\r\n}\r\n```\r\nHere the \"が\" and \"ね\" characters are correctly removed.\r\n\r\n 3. create an index the same way as in step 1, but do not specify the `stoptags`:\r\n```bash\r\n$ http PUT :32769/kuromoji_sample_2 <<<'{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer\": {\r\n \"tokenizer\": \"kuromoji_tokenizer\",\r\n \"filter\": [\r\n \"my_posfilter\"\r\n ]\r\n }\r\n },\r\n \"filter\": {\r\n \"my_posfilter\": {\r\n \"type\": \"kuromoji_part_of_speech\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n\r\nHTTP/1.1 200 OK\r\ncontent-encoding: gzip\r\ncontent-type: application/json; charset=UTF-8\r\ntransfer-encoding: chunked\r\n\r\n{\r\n \"acknowledged\": true,\r\n \"shards_acknowledged\": true\r\n}\r\n```\r\n 4. 
analyze the text \"寿司がおいしいね\" again\r\n \r\n```bash\r\n$ http :32769/kuromoji_sample_2/_analyze analyzer=my_analyzer text=\"寿司がおいしいね\"\r\n\r\nHTTP/1.1 200 OK\r\ncontent-encoding: gzip\r\ncontent-type: application/json; charset=UTF-8\r\ntransfer-encoding: chunked\r\n\r\n{\r\n \"tokens\": [\r\n {\r\n \"end_offset\": 2,\r\n \"position\": 0,\r\n \"start_offset\": 0,\r\n \"token\": \"寿司\",\r\n \"type\": \"word\"\r\n },\r\n {\r\n \"end_offset\": 3,\r\n \"position\": 1,\r\n \"start_offset\": 2,\r\n \"token\": \"が\",\r\n \"type\": \"word\"\r\n },\r\n {\r\n \"end_offset\": 7,\r\n \"position\": 2,\r\n \"start_offset\": 3,\r\n \"token\": \"おいしい\",\r\n \"type\": \"word\"\r\n },\r\n {\r\n \"end_offset\": 8,\r\n \"position\": 3,\r\n \"start_offset\": 7,\r\n \"token\": \"ね\",\r\n \"type\": \"word\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThis example is taken from the documentation page here: https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-kuromoji-speech.html\r\n\r\nThat page says, that stoptags is \"An array of part-of-speech tags that should be removed. It defaults to the **stoptags.txt** file embedded in the lucene-analyzer-kuromoji.jar\"\r\n\r\nI have looked at the embedded file in that jar and could not find any difference to the version used by in the 2.4 kuromoji plugin.\r\n\r\nI also tried to define an empty array, or use a combination of latin characters, but it always returns four tokens instead of two.",
"comments": [
{
"body": "I can confirm the bug (reproduced it wit ES 5.5.2 on Windows 8.1).\r\nLooking at the [code](https://github.com/elastic/elasticsearch/blob/47ffa17efbc444c3d6c74a8020ee95eb4518c597/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java#L38) is seems that potentially the default stopwords file is not loaded (maybe due to the modules classloader which might trip of the Lucene code).",
"created_at": "2017-09-07T10:07:49Z"
},
{
"body": "> I can confirm the bug (reproduced it wit ES 5.5.2 on Windows 8.1).\r\n> Looking at the code is seems that potentially the default stopwords file is not loaded (maybe due to the modules classloader which might trip of the Lucene code).\r\n\r\nThanks for having a look at it, @costin! But I think the problem may be different.\r\n\r\nI do not know how or why this changed, but it seems the part-of-speech is only initialized with the default stoptags list if using the default \"kuromoji\" analyzer (which calls [`JapaneseAnalyzer.getDefaultStopTags()`](https://github.com/elastic/elasticsearch/blob/47ffa17efbc444c3d6c74a8020ee95eb4518c597/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiAnalyzerProvider.java#L41)). When defining a custom analyzer, the stoptags list is only ever set from the `stoptags` list in the settings [here](https://github.com/elastic/elasticsearch/blob/47ffa17efbc444c3d6c74a8020ee95eb4518c597/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java#L38).",
"created_at": "2017-09-07T15:21:43Z"
},
{
"body": "We have changed analysis registration in 5.0. \r\nWe had different registration logic between prebuild and factory.\r\n\r\nI think this bug is related removing [\"KuromojiIndicesAnalysis\" ](https://github.com/elastic/elasticsearch/blob/2.4/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/indices/analysis/KuromojiIndicesAnalysis.java) . \r\n\r\nIn KuromojiIndicesAnalysis, we called getDefaultStopTags() in [line 100](https://github.com/elastic/elasticsearch/blob/2.4/plugins/analysis-kuromoji/src/main/java/org/elasticsearch/indices/analysis/KuromojiIndicesAnalysis.java#L100)...\r\n\r\n",
"created_at": "2017-09-11T03:57:14Z"
},
{
"body": "Yes, I think you hit the nail on the head! @johtani \r\n\r\nBut, how could this be fixed? Move the initialization of the part-of-speech anlyzer into the factory and call getDefaultStopTags if the `stoptags` setting is `null`?",
"created_at": "2017-09-11T06:50:22Z"
},
{
"body": "I think if the `stoptags` setting is null or not exists, call getDefaultStopTags in KuromojiPartOfSpeechFilterFactory constructer.",
"created_at": "2017-09-11T07:07:40Z"
}
],
"number": 26519,
"title": "Kuromoji analysis part-of-speech filter not working"
} | {
"body": "Fixes #26519 \r\n",
"number": 26600,
"review_comments": [],
"title": "Fix kuromoji default stoptags"
} | {
"commits": [
{
"message": "[TEST] Fix parameter order to `assertThat` call\n\nThe order was reversed, as the expected value was given for the actual value and\nvice versa. This led to a confusing assertion error message:\n\n```\nFAILURE 0.04s J1 | KuromojiAnalysisTests.testPartOfSpeechFilter <<< FAILURES!\n > Throwable #1: java.lang.AssertionError: expected different term at index 1\n > Expected: \"が\"\n > but: was \"おいしい\"\n```\n\nwhen the string \"が\" was actually not expected."
},
{
"message": "Use default stop-tags for Kuromoji part-of-speech filter\n\n* add new test which checks that part-of-speech tokens are removed when\n using the kuromoji_part_of_speech filter\n\n* initialize the default stop-tags in `KuromojiPartOfSpeechFilterFactory` if\n the `stoptags` are not given in the config"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.analysis;\n \n import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.ja.JapaneseAnalyzer;\n import org.apache.lucene.analysis.ja.JapanesePartOfSpeechStopFilter;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n@@ -38,6 +39,8 @@ public KuromojiPartOfSpeechFilterFactory(IndexSettings indexSettings, Environmen\n List<String> wordList = Analysis.getWordList(env, settings, \"stoptags\");\n if (wordList != null) {\n stopTags.addAll(wordList);\n+ } else {\n+ stopTags.addAll(JapaneseAnalyzer.getDefaultStopTags());\n }\n }\n ",
"filename": "plugins/analysis-kuromoji/src/main/java/org/elasticsearch/index/analysis/KuromojiPartOfSpeechFilterFactory.java",
"status": "modified"
},
{
"diff": "@@ -93,6 +93,21 @@ public void testBaseFormFilterFactory() throws IOException {\n assertSimpleTSOutput(tokenFilter.create(tokenizer), expected);\n }\n \n+ public void testPartOfSpeechFilter() throws IOException {\n+ TestAnalysis analysis = createTestAnalysis();\n+ TokenFilterFactory tokenFilter = analysis.tokenFilter.get(\"kuromoji_part_of_speech\");\n+\n+ assertThat(tokenFilter, instanceOf(KuromojiPartOfSpeechFilterFactory.class));\n+\n+ String source = \"寿司がおいしいね\";\n+ String[] expected_tokens = new String[]{\"寿司\", \"おいしい\"};\n+\n+ Tokenizer tokenizer = new JapaneseTokenizer(null, true, JapaneseTokenizer.Mode.SEARCH);\n+ tokenizer.setReader(new StringReader(source));\n+\n+ assertSimpleTSOutput(tokenFilter.create(tokenizer), expected_tokens);\n+ }\n+\n public void testReadingFormFilterFactory() throws IOException {\n TestAnalysis analysis = createTestAnalysis();\n TokenFilterFactory tokenFilter = analysis.tokenFilter.get(\"kuromoji_rf\");\n@@ -208,7 +223,7 @@ public static void assertSimpleTSOutput(TokenStream stream,\n int i = 0;\n while (stream.incrementToken()) {\n assertThat(expected.length, greaterThan(i));\n- assertThat( \"expected different term at index \" + i, expected[i++], equalTo(termAttr.toString()));\n+ assertThat(\"expected different term at index \" + i, termAttr.toString(), equalTo(expected[i++]));\n }\n assertThat(\"not all tokens produced\", i, equalTo(expected.length));\n }",
"filename": "plugins/analysis-kuromoji/src/test/java/org/elasticsearch/index/analysis/KuromojiAnalysisTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version** \r\nVersion: 6.0.0-beta1-SNAPSHOT, Build: Unknown/2017-08-13T15:13:28.500Z, JVM: 1.8.0_102\r\n\r\n**JVM version**\r\njava version \"1.8.0_102\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_102-b14)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)\r\n\r\n**OS version**\r\nwin10, 64b\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nUsing a sorted index, terminated_early is set incorrectly to true\r\n\r\n**Steps to reproduce**:\r\n\r\n1. create the index, with 2 docs\r\n```\r\ncurl -XDELETE http://localhost:9200/twitter\r\ncurl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\" : {\r\n \"number_of_shards\" : 1,\r\n \"number_of_replicas\" : 0,\r\n \"index\" : {\r\n \"sort.field\" : \"date\", \r\n \"sort.order\" : \"desc\" \r\n }\r\n },\r\n \"mappings\": {\r\n \"tweet\": {\r\n \"properties\": {\r\n \"field1\": {\r\n \"type\": \"keyword\"\r\n },\r\n \"date\": {\r\n \"type\": \"date\",\r\n \"format\": \"yyyy-MM-dd\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\ncurl -XPOST 'localhost:9200/_bulk?pretty' -H 'Content-Type: application/json' -d'\r\n{ \"index\" : { \"_index\" : \"twitter\", \"_type\" : \"tweet\", \"_id\" : \"1\" } }\r\n{ \"field1\" : \"value1\", \"date\" : \"2017-08-28\" }\r\n{ \"index\" : { \"_index\" : \"twitter\", \"_type\" : \"tweet\", \"_id\" : \"2\" } }\r\n{ \"field1\" : \"value2\", \"date\" : \"2017-08-28\" }\r\n'\r\n```\r\n\r\n2. test early termination when asking for 1 doc\r\n```\r\ncurl -XGET 'localhost:9200/twitter/_search?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"size\": 1,\r\n \"sort\": [ \r\n { \"date\": \"desc\" }\r\n ],\r\n \"track_total_hits\": false\r\n}\r\n'\r\n```\r\n\r\nthis returns correctly terminated_early\" : true and \"total\" : -1:\r\n\r\n```\r\n{\r\n \"took\" : 0,\r\n \"timed_out\" : false,\r\n \"terminated_early\" : true,\r\n \"_shards\" : {\r\n \"total\" : 1,\r\n \"successful\" : 1,\r\n \"skipped\" : 0,\r\n \"failed\" : 0\r\n },\r\n \"hits\" : {\r\n \"total\" : -1,\r\n \"max_score\" : null,\r\n \"hits\" : [\r\n {\r\n \"_index\" : \"twitter\",\r\n \"_type\" : \"tweet\",\r\n \"_id\" : \"1\",\r\n \"_score\" : null,\r\n \"_source\" : {\r\n \"field1\" : \"value1\",\r\n \"date\" : \"2017-08-28\"\r\n },\r\n \"sort\" : [\r\n 1503878400000\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\n3. ask for 5 docs, no early termination\r\n\r\n```\r\ncurl -XGET 'localhost:9200/twitter/_search?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"size\": 5,\r\n \"sort\": [ \r\n { \"date\": \"desc\" }\r\n ]\r\n}\r\n'\r\n```\r\n\r\nreturns \r\n\r\n```\r\n{\r\n \"took\" : 0,\r\n \"timed_out\" : false,\r\n \"terminated_early\" : true,\r\n \"_shards\" : {\r\n \"total\" : 1,\r\n \"successful\" : 1,\r\n \"skipped\" : 0,\r\n \"failed\" : 0\r\n },\r\n \"hits\" : {\r\n \"total\" : 2,\r\n \"max_score\" : null,\r\n \"hits\" : [\r\n {\r\n \"_index\" : \"twitter\",\r\n \"_type\" : \"tweet\",\r\n \"_id\" : \"1\",\r\n \"_score\" : null,\r\n \"_source\" : {\r\n \"field1\" : \"value1\",\r\n \"date\" : \"2017-08-28\"\r\n },\r\n \"sort\" : [\r\n 1503878400000\r\n ]\r\n },\r\n {\r\n \"_index\" : \"twitter\",\r\n \"_type\" : \"tweet\",\r\n \"_id\" : \"2\",\r\n \"_score\" : null,\r\n \"_source\" : {\r\n \"field1\" : \"value2\",\r\n \"date\" : \"2017-08-28\"\r\n },\r\n \"sort\" : [\r\n 1503878400000\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nhere \"terminated_early\" : true, is incorrect, we didn't even ask for it. Hits \"total\" : 2 is ok\r\n\r\n4. 
now enable early termination for the same request\r\n\r\n```\r\ncurl -XGET 'localhost:9200/twitter/_search?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"size\": 5,\r\n \"sort\": [ \r\n { \"date\": \"desc\" }\r\n ],\r\n \"track_total_hits\": false\r\n}\r\n'\r\n```\r\n\r\nThis returns:\r\n\r\n```\r\n{ \r\n \"took\" : 0, \r\n \"timed_out\" : false, \r\n \"terminated_early\" : true, \r\n \"_shards\" : { \r\n \"total\" : 1, \r\n \"successful\" : 1, \r\n \"skipped\" : 0, \r\n \"failed\" : 0 \r\n }, \r\n \"hits\" : { \r\n \"total\" : -1, \r\n \"max_score\" : null, \r\n \"hits\" : [ \r\n { \r\n \"_index\" : \"twitter\", \r\n \"_type\" : \"tweet\", \r\n \"_id\" : \"1\", \r\n \"_score\" : null, \r\n \"_source\" : { \r\n \"field1\" : \"value1\", \r\n \"date\" : \"2017-08-28\" \r\n }, \r\n \"sort\" : [ \r\n 1503878400000 \r\n ] \r\n }, \r\n { \r\n \"_index\" : \"twitter\", \r\n \"_type\" : \"tweet\", \r\n \"_id\" : \"2\", \r\n \"_score\" : null, \r\n \"_source\" : { \r\n \"field1\" : \"value2\", \r\n \"date\" : \"2017-08-28\" \r\n }, \r\n \"sort\" : [ \r\n 1503878400000 \r\n ] \r\n } \r\n ] \r\n } \r\n} \r\n``` \r\n\r\n\"terminated_early\" : true, is incorrect. And Hits \"total\" : -1 is debatable, we asked for it so it could be ok if set, but it also can be argued that as it had the correct info it should return it?\r\n\r\n",
"comments": [
{
"body": "Thanks @jmlucjav \r\n\r\nWe always set `terminated_early` when the top hits collector was able to shortcut the collection even when `track_total_hits` is not disabled. When you set a size of 1 in your request, early termination is activated but it's also activated when you set the size to 5 because your shard contains only 2 documents. In such case the maximum size value is 2 and early termination kicks in on the last document, if you want to test a search without early termination you can send a query with size set to 3. \r\n`terminated_early` just means that the top hits collection did not check all documents to select the best N. Setting `track_total_hits` to false allows the entire query to terminate early (we don't need to count the total number of docs). Not sure if we should remove the flag when `track_total_hits` is not disabled, this is not a bug I did it conscientiously. IMO this is a good indication that the search was able to use the index sort to shortcut the collection (even if we had to count all documents). @jpountz WDYT ?\r\n",
"created_at": "2017-08-29T07:18:02Z"
},
{
"body": "> Not sure if we should remove the flag when track_total_hits is not disabled\r\n\r\nI was thinking along the same lines.\r\n\r\nI can see why the `terminated_early` flag can be confusing. Sometimes it means results are partial, sometimes it means that we were able to early terminate the collection of top docs but results are complete. It is also not clear to which part of the request it applies: when `terminate_after` is used then it applies to everything while it otherwise it only applies to top docs.\r\n\r\nI'm actually wondering about going one step further than your proposal: maybe we should only put `terminated_early` in the response as a way to indicate partial results, ie. when `terminate_after` is used.\r\n\r\n> Hits \"total\" : -1 is debatable, we asked for it so it could be ok if set, but it also can be argued that as it had the correct info it should return it?\r\n\r\nIn this special case with few matches, we could indeed know the match count. However we don't know it in the general case so I'd rather keep things the way they are today and potentially improve documentation in order to say that we will return `-1` when `track_total_hits` is `false` even if we had to visit all matches. I think this behaviour is also beneficial for testing: you would get similar results with a few documents in your index or millions of documents.",
"created_at": "2017-08-29T12:36:12Z"
},
{
"body": "Just hit upon this issue myself. I agree with @jpountz that interpreting \"terminated_early\" as indicating partial results is what most current users expect (given this only happens on \"terminate_after\" at present), so keeping that behavior intact with index sorting would be very much appreciated.\r\n\r\nAs for the special case, I imagine there might actually be many situations where users want to know if they're seeing a complete or partial set of hits.\r\n\r\nGranted, I believe this is actually the same as a client setting `hits.total = (query.size > hits.hits.length) ? hits.hits.length : -1`",
"created_at": "2017-09-11T14:44:59Z"
},
{
"body": "I just realized that my email was not public at the time I opened this ticket. Is the pioneer program still on? do I need to do something?",
"created_at": "2017-11-21T21:28:45Z"
},
{
"body": "@jmlucjav If you want to reach out to us (community@elastic.co) we will make sure the things that need to happen happen :)",
"created_at": "2017-11-22T07:14:30Z"
}
],
"number": 26408,
"title": "terminated_early is set incorrectly "
} | {
"body": "Early termination with index sorting always return the best top N in the response but set the flag `terminated_early` in the response. This can be confusing because we use the same flag for `terminate_after` which on the contrary returns partial results.\r\nThis change removes the flag when results are not partial (early termination due to index sorting) and keeps it only when `terminate_after` is used.\r\n\r\nCloses #26408",
"number": 26597,
"review_comments": [],
"title": "Early termination with index sorting should not set terminated_early in the response"
} | {
"commits": [
{
"message": "Early termination with index sorting should not set terminated_early in the response\n\nEarly termination with index sorting always return the best top N in the response but set the flag `terminated_early`\nin the response. This can be confusing because we use the same flag for `terminate_after` which on the contrary returns partial results.\nThis change removes the flag when results are not partial (early termination due to index sorting) and keeps it only when `terminate_after` is used.\n\nCloses #26408"
}
],
"files": [
{
"diff": "@@ -217,13 +217,11 @@ static QueryCollectorContext createEarlySortingTerminationCollectorContext(Index\n boolean trackTotalHits,\n boolean shouldCollect) {\n return new QueryCollectorContext(REASON_SEARCH_TERMINATE_AFTER_COUNT) {\n- private BooleanSupplier terminatedEarlySupplier;\n private IntSupplier countSupplier = null;\n \n @Override\n Collector create(Collector in) throws IOException {\n EarlyTerminatingSortingCollector sortingCollector = new EarlyTerminatingSortingCollector(in, indexSort, numHits);\n- terminatedEarlySupplier = sortingCollector::terminatedEarly;\n Collector collector = sortingCollector;\n if (trackTotalHits) {\n int count = shouldCollect ? -1 : shortcutTotalHitCount(reader, query);\n@@ -240,9 +238,6 @@ Collector create(Collector in) throws IOException {\n \n @Override\n void postProcess(QuerySearchResult result, boolean hasCollected) throws IOException {\n- if (terminatedEarlySupplier.getAsBoolean()) {\n- result.terminatedEarly(true);\n- }\n if (countSupplier != null) {\n final TopDocs topDocs = result.topDocs();\n topDocs.totalHits = countSupplier.getAsInt();",
"filename": "core/src/main/java/org/elasticsearch/search/query/QueryCollectorContext.java",
"status": "modified"
},
{
"diff": "@@ -38,7 +38,10 @@\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.FieldDoc;\n+import org.apache.lucene.search.FilterCollector;\n+import org.apache.lucene.search.FilterLeafCollector;\n import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.LeafCollector;\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n@@ -64,10 +67,8 @@\n \n import static org.hamcrest.Matchers.anyOf;\n import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.instanceOf;\n-import static org.hamcrest.Matchers.lessThan;\n \n public class QueryPhaseTests extends IndexShardTestCase {\n \n@@ -412,30 +413,19 @@ public void testIndexSortingEarlyTermination() throws Exception {\n context.setTask(new SearchTask(123L, \"\", \"\", \"\", null));\n context.sort(new SortAndFormats(sort, new DocValueFormat[] {DocValueFormat.RAW}));\n \n- final AtomicBoolean collected = new AtomicBoolean();\n final IndexReader reader = DirectoryReader.open(dir);\n- IndexSearcher contextSearcher = new IndexSearcher(reader) {\n- protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {\n- collected.set(true);\n- super.search(leaves, weight, collector);\n- }\n- };\n+ IndexSearcher contextSearcher = new IndexSearcher(reader);\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs));\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class));\n FieldDoc fieldDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[0];\n assertThat(fieldDoc.fields[0], equalTo(1));\n \n-\n {\n- collected.set(false);\n context.parsedPostFilter(new ParsedQuery(new MinDocQuery(1)));\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n+ assertNull(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().totalHits, equalTo(numDocs - 1L));\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class));\n@@ -444,10 +434,8 @@ protected void search(List<LeafReaderContext> leaves, Weight weight, Collector c\n \n final TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector();\n context.queryCollectors().put(TotalHitCountCollector.class, totalHitCountCollector);\n- collected.set(false);\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n+ assertNull(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs));\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class));\n@@ -457,27 +445,19 @@ protected void search(List<LeafReaderContext> leaves, Weight weight, 
Collector c\n }\n \n {\n- collected.set(false);\n+ contextSearcher = getAssertingEarlyTerminationSearcher(reader, 1);\n context.trackTotalHits(false);\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n- assertThat(context.queryResult().topDocs().totalHits, lessThan((long) numDocs));\n+ assertNull(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class));\n assertThat(fieldDoc.fields[0], anyOf(equalTo(1), equalTo(2)));\n \n- final TotalHitCountCollector totalHitCountCollector = new TotalHitCountCollector();\n- context.queryCollectors().put(TotalHitCountCollector.class, totalHitCountCollector);\n- collected.set(false);\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n- assertThat(context.queryResult().topDocs().totalHits, lessThan((long) numDocs));\n+ assertNull(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().scoreDocs.length, equalTo(1));\n assertThat(context.queryResult().topDocs().scoreDocs[0], instanceOf(FieldDoc.class));\n assertThat(fieldDoc.fields[0], anyOf(equalTo(1), equalTo(2)));\n- assertThat(totalHitCountCollector.getTotalHits(), equalTo(numDocs));\n }\n reader.close();\n dir.close();\n@@ -498,8 +478,9 @@ public void testIndexSortScrollOptimization() throws Exception {\n doc.add(new NumericDocValuesField(\"tiebreaker\", i));\n w.addDocument(doc);\n }\n- // Make sure that we can early terminate queries on this index\n- w.forceMerge(3);\n+ if (randomBoolean()) {\n+ w.forceMerge(randomIntBetween(1, 10));\n+ }\n w.close();\n \n TestSearchContext context = new TestSearchContext(null, indexShard);\n@@ -513,28 +494,21 @@ public void testIndexSortScrollOptimization() throws Exception {\n context.setSize(10);\n context.sort(new SortAndFormats(sort, new DocValueFormat[] {DocValueFormat.RAW, DocValueFormat.RAW}));\n \n- final AtomicBoolean collected = new AtomicBoolean();\n final IndexReader reader = DirectoryReader.open(dir);\n- IndexSearcher contextSearcher = new IndexSearcher(reader) {\n- protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {\n- collected.set(true);\n- super.search(leaves, weight, collector);\n- }\n- };\n+ IndexSearcher contextSearcher = new IndexSearcher(reader);\n \n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs));\n- assertTrue(collected.get());\n assertNull(context.queryResult().terminatedEarly());\n assertThat(context.terminateAfter(), equalTo(0));\n assertThat(context.queryResult().getTotalHits(), equalTo((long) numDocs));\n int sizeMinus1 = context.queryResult().topDocs().scoreDocs.length - 1;\n FieldDoc lastDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[sizeMinus1];\n \n+ contextSearcher = getAssertingEarlyTerminationSearcher(reader, 10);\n QueryPhase.execute(context, contextSearcher, checkCancelled -> {}, sort);\n+ assertNull(context.queryResult().terminatedEarly());\n assertThat(context.queryResult().topDocs().totalHits, equalTo((long) numDocs));\n- assertTrue(collected.get());\n- assertTrue(context.queryResult().terminatedEarly());\n assertThat(context.terminateAfter(), 
equalTo(0));\n assertThat(context.queryResult().getTotalHits(), equalTo((long) numDocs));\n FieldDoc firstDoc = (FieldDoc) context.queryResult().topDocs().scoreDocs[0];\n@@ -551,4 +525,37 @@ protected void search(List<LeafReaderContext> leaves, Weight weight, Collector c\n reader.close();\n dir.close();\n }\n+\n+ static IndexSearcher getAssertingEarlyTerminationSearcher(IndexReader reader, int size) {\n+ return new IndexSearcher(reader) {\n+ protected void search(List<LeafReaderContext> leaves, Weight weight, Collector collector) throws IOException {\n+ final Collector in = new AssertingEalyTerminationFilterCollector(collector, size);\n+ super.search(leaves, weight, in);\n+ }\n+ };\n+ }\n+\n+ private static class AssertingEalyTerminationFilterCollector extends FilterCollector {\n+ private final int size;\n+\n+ AssertingEalyTerminationFilterCollector(Collector in, int size) {\n+ super(in);\n+ this.size = size;\n+ }\n+\n+ @Override\n+ public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {\n+ final LeafCollector in = super.getLeafCollector(context);\n+ return new FilterLeafCollector(in) {\n+ int collected;\n+\n+ @Override\n+ public void collect(int doc) throws IOException {\n+ assert collected <= size : \"should not collect more than \" + size + \" doc per segment, got \" + collected;\n+ ++ collected;\n+ super.collect(doc);\n+ }\n+ };\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/query/QueryPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -315,24 +315,18 @@ public void testSimpleIndexSortEarlyTerminate() throws Exception {\n refresh();\n \n SearchResponse searchResponse;\n- boolean hasEarlyTerminated = false;\n for (int i = 1; i < max; i++) {\n searchResponse = client().prepareSearch(\"test\")\n .addDocValueField(\"rank\")\n .setTrackTotalHits(false)\n .addSort(\"rank\", SortOrder.ASC)\n .setSize(i).execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(-1L));\n- if (searchResponse.isTerminatedEarly() != null) {\n- assertTrue(searchResponse.isTerminatedEarly());\n- hasEarlyTerminated = true;\n- }\n for (int j = 0; j < i; j++) {\n assertThat(searchResponse.getHits().getAt(j).field(\"rank\").getValue(),\n equalTo((long) j));\n }\n }\n- assertTrue(hasEarlyTerminated);\n }\n \n public void testInsaneFromAndSize() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/search/simple/SimpleSearchIT.java",
"status": "modified"
},
{
"diff": "@@ -196,16 +196,13 @@ as soon as N documents have been collected per segment.\n \"hits\" : []\n },\n \"took\": 20,\n- \"terminated_early\": true, <2>\n \"timed_out\": false\n }\n --------------------------------------------------\n // TESTRESPONSE[s/\"_shards\": \\.\\.\\./\"_shards\": \"$body._shards\",/]\n // TESTRESPONSE[s/\"took\": 20,/\"took\": \"$body.took\",/]\n-// TESTRESPONSE[s/\"terminated_early\": true,//]\n \n <1> The total number of hits matching the query is unknown because of early termination.\n-<2> Indicates whether the top docs retrieval has actually terminated_early.\n \n NOTE: Aggregations will collect all documents that match the query regardless of the value of `track_total_hits`\n ",
"filename": "docs/reference/index-modules/index-sorting.asciidoc",
"status": "modified"
},
{
"diff": "@@ -98,7 +98,6 @@\n sort: [\"rank\"]\n size: 1\n \n- - is_true: terminated_early\n - match: {hits.total: 8 }\n - length: {hits.hits: 1 }\n - match: {hits.hits.0._id: \"2\" }\n@@ -113,7 +112,6 @@\n track_total_hits: false\n size: 1\n \n- - match: {terminated_early: true}\n - match: {hits.total: -1 }\n - length: {hits.hits: 1 }\n - match: {hits.hits.0._id: \"2\" }\n@@ -134,7 +132,6 @@\n body:\n sort: _doc\n \n- - is_false: terminated_early\n - match: {hits.total: 8 }\n - length: {hits.hits: 8 }\n - match: {hits.hits.0._id: \"2\" }\n@@ -156,7 +153,6 @@\n track_total_hits: false\n size: 3\n \n- - match: {terminated_early: true }\n - match: {hits.total: -1 }\n - length: {hits.hits: 3 }\n - match: {hits.hits.0._id: \"2\" }",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.sort/10_basic.yml",
"status": "modified"
}
]
} |
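The test collector added in the diff above enforces a per-segment bound on how many documents may be collected once index sorting allows early termination. Below is a plain-Java sketch of that guard with no Lucene types; the class and method names are illustrative and not taken from the codebase.

```java
/**
 * Minimal sketch (assumed names, no Lucene dependencies) of the per-segment guard
 * the asserting test collector in the diff above enforces: with early termination,
 * no segment should hand the collector more than `size` documents.
 */
public class EarlyTerminationGuard {

    private final int size;
    private int collected;

    EarlyTerminationGuard(int size) {
        this.size = size;
    }

    void collect(int doc) {
        if (collected >= size) {
            throw new AssertionError(
                "should not collect more than " + size + " docs per segment, got " + collected);
        }
        collected++;
    }

    public static void main(String[] args) {
        EarlyTerminationGuard guard = new EarlyTerminationGuard(3);
        for (int doc = 0; doc < 3; doc++) {
            guard.collect(doc); // exactly `size` documents: no assertion trips
        }
    }
}
```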
{
"body": "When updating a bogus setting, the following error is returned:\r\n```\r\ncurl -XPUT -H \"Content-Type: application/json\" localhost:9200/_cluster/settings -d '\r\n{\r\n \"transient\": {\r\n \"bogus\": true\r\n }\r\n}'\r\n```\r\n\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"transient setting [bogus], not dynamically updateable\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"transient setting [bogus], not dynamically updateable\"},\"status\":400}\r\n```\r\nThis leads one to believe that this setting is recognized, but not updateable. Not only is this non-updateable, this is unrecognized and not supported.",
"comments": [],
"number": 25607,
"title": "Updating an unrecognized setting should error out with that reason"
} | {
"body": "Closes #25607",
"number": 26569,
"review_comments": [],
"title": "Throw exception if setting isn't recognized"
} | {
"commits": [
{
"message": "Throw exception if setting isn't recognized"
}
],
"files": [
{
"diff": "@@ -495,6 +495,8 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin\n // we don't validate if there is any dynamic setting with that prefix yet we could do in the future\n toRemove.add(entry.getKey());\n // we don't set changed here it's set after we apply deletes below if something actually changed\n+ } else if (get(entry.getKey()) == null) {\n+ throw new IllegalArgumentException(type + \" setting [\" + entry.getKey() + \"], not recognized\");\n } else if (entry.getValue() != null && canUpdate.test(entry.getKey())) {\n validate(entry.getKey(), toApply);\n settingsBuilder.put(entry.getKey(), entry.getValue());",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
},
{
"diff": "@@ -25,18 +25,15 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n-import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.discovery.DiscoverySettings;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.test.ESIntegTestCase;\n-import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.junit.After;\n \n-import static org.elasticsearch.test.ESIntegTestCase.Scope.TEST;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBlocked;\n import static org.hamcrest.Matchers.containsString;\n@@ -63,7 +60,7 @@ public void testClusterNonExistingSettingsUpdate() {\n .get();\n fail(\"bogus value\");\n } catch (IllegalArgumentException ex) {\n- assertEquals(ex.getMessage(), \"transient setting [no_idea_what_you_are_talking_about], not dynamically updateable\");\n+ assertEquals(\"transient setting [no_idea_what_you_are_talking_about], not recognized\", ex.getMessage());\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/settings/ClusterSettingsIT.java",
"status": "modified"
}
]
} |
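A minimal, standalone sketch of the validation order this PR introduces: a key that is not registered at all now fails with a "not recognized" message before the check for dynamic updatability runs. This is plain Java against hypothetical sets of registered and dynamic settings, not the actual AbstractScopedSettings code.

```java
import java.util.Map;
import java.util.Set;

/**
 * Illustrative validator (class and setting names are assumptions): reject unknown
 * keys with "not recognized" instead of the misleading "not dynamically updateable".
 */
public class DynamicSettingsValidator {

    private final Set<String> registered; // every known setting key
    private final Set<String> dynamic;    // subset that may be updated at runtime

    DynamicSettingsValidator(Set<String> registered, Set<String> dynamic) {
        this.registered = registered;
        this.dynamic = dynamic;
    }

    void validateUpdate(String type, Map<String, String> update) {
        for (String key : update.keySet()) {
            if (registered.contains(key) == false) {
                // unknown key: fail with the clearer message added by the PR
                throw new IllegalArgumentException(type + " setting [" + key + "], not recognized");
            }
            if (dynamic.contains(key) == false) {
                // known key, but it cannot be changed on a running cluster
                throw new IllegalArgumentException(type + " setting [" + key + "], not dynamically updateable");
            }
        }
    }

    public static void main(String[] args) {
        DynamicSettingsValidator validator = new DynamicSettingsValidator(
            Set.of("cluster.routing.allocation.enable"),
            Set.of("cluster.routing.allocation.enable"));
        // throws: "transient setting [bogus], not recognized"
        validator.validateUpdate("transient", Map.of("bogus", "true"));
    }
}
```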
{
"body": "The error generated by [DocumentParser.java#L182]( https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java#L182) (`object field starting or ending with a [.] makes object resolution ambiguous: [\" + fullFieldPath + \"]\"`) incorrectly rejects the field path \"\", which is not ambiguous.\r\n\r\nThis kind of document (edge-case, I know) was accepted in ES v1.x. Ideally (for me) the document would be accepted, but in any case the error description is incorrect - if \"\" is a special case that is considered illegal by ES (but not JSON), it should be documented as such and reported accordingly.\r\n\r\n**Elasticsearch version**:\r\n5.2.1\r\n\r\n**Plugins installed**: []\r\nx-pack\r\n\r\n**JVM version**:\r\n```\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\n```\r\n**OS version**:\r\n```\r\nDistributor ID:\tUbuntu\r\nDescription:\tUbuntu 14.04.5 LTS\r\nRelease:\t14.04\r\nCodename:\ttrusty\r\n```\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n 1. `curl -XPUT localhost:9200/test/test/1 -d '{ \"\": \"abc\" }'`\r\n\r\n**Provide logs (if relevant)**:\r\nResponse:\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"object field starting or ending with a [.] makes object resolution ambiguous: []\"}},\"status\":400}\r\n```\r\nExpected response:\r\n```\r\n{\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":2,\"result\":\"updated\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":false}\r\n```\r\n\r\n",
"comments": [
{
"body": "@matAtWork I definitely agree the message could be clearer. This was something I introduced to fix a different mapping problem in https://github.com/elastic/elasticsearch/pull/22891\r\n\r\nI'll take a look and see if I can make the error more descriptive of the actual problem.",
"created_at": "2017-02-24T15:26:07Z"
},
{
"body": "Thanks for the quick response. Incidentally, I got around my issue by creating a mapping with \"enabled: false\" for the objects containing these keys (they're a state machine - I don't want to search them), which introduces the anomaly that this can indexed in some mappings (enabled: false), but not in others\r\n",
"created_at": "2017-02-24T15:31:51Z"
},
{
"body": "I also ran into this issue, but I needed to index the field and search on it so I could not use @matAtWork's workaround. A more accurate error message would have saved me time and confusion.",
"created_at": "2017-06-01T17:02:41Z"
},
{
"body": "**elasticsearch version:**\r\n`5.4.1`\r\n\r\n**java version:**\r\n`\r\njava version \"1.8.0_131\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_131-b11)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n`\r\n**os version:**\r\n`\r\nCentOS Release 7.3.1611\r\n`\r\n\r\n**logstash version:**\r\n`2.4.1`\r\n\r\nIf you are using Logstash to write to Elasticsearch, you might get a message like this in the logstash log file:\r\n`\"response\":{\"index\":{\"_index\":\"logstash-2017.06.11\",\"_type\":\"REDACTED\",\"_id\":\"AVyYXGBT5FKNI6XHxtkZ\",\"status\":400,\"error\":{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"object field starting or ending with a [.] makes object resolution ambiguous: []\"}}}},\"level\":\"warn\"}`\r\n\r\nHere's a look at the document that Logstash is trying to get Elasticsearch to index:\r\n` \r\n{\"doc\": {\r\n \"hdr\": {\r\n \"timing\": {\r\n \"service_name\": {\r\n \"\": 0\r\n }\r\n }\r\n }\r\n} } \r\n`",
"created_at": "2017-06-11T18:33:13Z"
}
],
"number": 23348,
"title": "Incorrect parsing of a JSON doc containing a zero-length fieldName"
} | {
"body": "When a document is parsed with a `\"\"` for a field name, we currently throw a\r\nconfusing error about `.` being present in the field. This changes the error\r\nmessage to be clearer about what's causing the problem.\r\n\r\nResolves #23348\r\n",
"number": 26543,
"review_comments": [],
"title": "Throw a better error message for empty field names"
} | {
"commits": [
{
"message": "Throw a better error message for empty field names\n\nWhen a document is parsed with a `\"\"` for a field name, we currently throw a\nconfusing error about `.` being present in the field. This changes the error\nmessage to be clearer about what's causing the problem.\n\nResolves #23348"
},
{
"message": "Fix exception message in test"
}
],
"files": [
{
"diff": "@@ -175,14 +175,21 @@ private static MapperParsingException wrapInMapperParsingException(SourceToParse\n }\n \n private static String[] splitAndValidatePath(String fullFieldPath) {\n- String[] parts = fullFieldPath.split(\"\\\\.\");\n- for (String part : parts) {\n- if (Strings.hasText(part) == false) {\n- throw new IllegalArgumentException(\n- \"object field starting or ending with a [.] makes object resolution ambiguous: [\" + fullFieldPath + \"]\");\n+ if (fullFieldPath.contains(\".\")) {\n+ String[] parts = fullFieldPath.split(\"\\\\.\");\n+ for (String part : parts) {\n+ if (Strings.hasText(part) == false) {\n+ throw new IllegalArgumentException(\n+ \"object field starting or ending with a [.] makes object resolution ambiguous: [\" + fullFieldPath + \"]\");\n+ }\n+ }\n+ return parts;\n+ } else {\n+ if (Strings.isEmpty(fullFieldPath)) {\n+ throw new IllegalArgumentException(\"field name cannot be an empty string\");\n }\n+ return new String[] {fullFieldPath};\n }\n- return parts;\n }\n \n /** Creates a Mapping containing any dynamically added fields, or returns null if there were no dynamic mappings. */",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.mapper;\n \n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -1375,4 +1376,26 @@ public void testDynamicFieldsStartingAndEndingWithDot() throws Exception {\n containsString(\"object field starting or ending with a [.] makes object resolution ambiguous: [top..foo..bar]\"));\n }\n }\n+\n+ public void testBlankFieldNames() throws Exception {\n+ final BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"\", \"foo\")\n+ .endObject().bytes();\n+\n+ MapperParsingException err = expectThrows(MapperParsingException.class, () ->\n+ client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get());\n+ assertThat(ExceptionsHelper.detailedMessage(err), containsString(\"field name cannot be an empty string\"));\n+\n+ final BytesReference bytes2 = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"foo\")\n+ .field(\"\", \"bar\")\n+ .endObject()\n+ .endObject().bytes();\n+\n+ err = expectThrows(MapperParsingException.class, () ->\n+ client().prepareIndex(\"idx\", \"type\").setSource(bytes2, XContentType.JSON).get());\n+ assertThat(ExceptionsHelper.detailedMessage(err), containsString(\"field name cannot be an empty string\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java",
"status": "modified"
},
{
"diff": "@@ -250,6 +250,6 @@ public void testDocumentWithBlankFieldName() {\n );\n assertThat(e.getMessage(), containsString(\"failed to parse\"));\n assertThat(e.getRootCause().getMessage(),\n- containsString(\"object field starting or ending with a [.] makes object resolution ambiguous: []\"));\n+ containsString(\"field name cannot be an empty string\"));\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java",
"status": "modified"
}
]
} |
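For reference, here is a self-contained sketch of the path validation the diff above changes: a lone empty field name now gets the explicit "field name cannot be an empty string" error, while dotted paths keep the existing ambiguity check. It uses only the JDK (no org.elasticsearch.common.Strings helper), and the class name is made up.

```java
/**
 * Standalone sketch of the validation logic described above; the class name and
 * main method are illustrative, only the error messages mirror the diff.
 */
public class FieldPathValidator {

    static String[] splitAndValidatePath(String fullFieldPath) {
        if (fullFieldPath.contains(".")) {
            String[] parts = fullFieldPath.split("\\.");
            for (String part : parts) {
                if (part.trim().isEmpty()) {
                    throw new IllegalArgumentException(
                        "object field starting or ending with a [.] makes object resolution ambiguous: ["
                            + fullFieldPath + "]");
                }
            }
            return parts;
        }
        if (fullFieldPath.isEmpty()) {
            // the new, clearer error for a "" field name
            throw new IllegalArgumentException("field name cannot be an empty string");
        }
        return new String[] { fullFieldPath };
    }

    public static void main(String[] args) {
        System.out.println(String.join(" / ", splitAndValidatePath("top.foo.bar"))); // top / foo / bar
        splitAndValidatePath(""); // throws "field name cannot be an empty string"
    }
}
```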
{
"body": "In the master branch, if I do a query with an expression script like:\r\n\r\n```json\r\n{\r\n \"query\": {\r\n \"function_score\": {\r\n \"query\": {\r\n \"constant_score\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"script\": {\r\n \"script\": {\r\n \"lang\": \"expression\",\r\n \"inline\": \"birth_date >= doc[\\\"birth_date\\\"].value\",\r\n \"params\": {\r\n \"birth_date\": 14\r\n }}}}]}}}}}}}\r\n```\r\n\r\nI get the following error:\r\n\r\n```\r\nCaused by: java.lang.IllegalArgumentException: painless does not know how to handle context [filter] \r\n at org.elasticsearch.script.expression.ExpressionScriptEngine.compile(ExpressionScriptEngine.java:111) ~[?:?] at org.elasticsearch.script.ScriptService.compile(ScriptService.java:296) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] at org.elasticsearch.index.query.ScriptQueryBuilder.doToQuery(ScriptQueryBuilder.java:130) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] \r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT] \r\n at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:405) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:379) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toFilter(AbstractQueryBuilder.java:119) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.ConstantScoreQueryBuilder.doToQuery(ConstantScoreQueryBuilder.java:136) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.doToQuery(FunctionScoreQueryBuilder.java:307) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:97) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.lambda$toQuery$2(QueryShardContext.java:304) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:316) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:303) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:669) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n```\r\n\r\nThe error also says \"painless\" instead of \"expressions\" from https://github.com/elastic/elasticsearch/blob/c0753235222dea250295f0caa2a2f7c332b056e7/modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java#L111\r\n",
"comments": [
{
"body": "@rjernst - It appears this bug might be causing Kibana build failures on master. Any ETA on a fix?",
"created_at": "2017-09-05T13:15:31Z"
},
{
"body": "I'm going to mark this as a blocker for 6.1 since we disabled a sizable chunk of integration tests in Kibana in order to get builds passing. I *assume* this is a regression that needs to be fixed for 6.1 anyway, but I'm really just blocking on resolution one way or another.",
"created_at": "2017-09-06T17:02:52Z"
},
{
"body": "@epixa Looking back at this again, I'm actually not sure filters make sense for expressions. Expressions only know how to read numeric values, and return numeric values. There was previously hacky \"treat 0 as false and anything else as true\" code in filter scripts, but that was removed with my refactoring to create a filter script context.\r\n\r\nWhy can't Kibana use painless for filters? The same example script @dakrone gives in the issue description would work fine in painless.",
"created_at": "2017-09-07T02:53:53Z"
},
{
"body": "@rjernst Seems like a reasonable question to me, especially since we *want* people to use painless instead of lucene expressions for this stuff since it's designed more for these specific use cases rather than relying on hacky type coercion.\r\n\r\nThat said, unless we can guarantee complete compatibility between the behaviors of expression-based filters and painless-based filters, this is going to be a breaking change for a lot of Kibana users, so I think we should preserve the existing behavior until 7.0.\r\n\r\nPeople can filter on Kibana scripted fields, which can use either expressions or painless scripts. At the very least, we'll need to make changes to Kibana to make it so only painless scripted fields can be filtered on, we'll need to start throwing deprecation notices for the existing expression filters, and we probably want to add a migration mechanism to the upgrade assistant for people to convert their existing scripted fields over.\r\n\r\nIt's worth mentioning though, that we've never had any person (to my knowledge) that encountered unexpected behaviors with expression-based scripted fields in kibana. Was the 0->false coercion problematic from a performance or maintenance standpoint? Given the impact of the change on existing users and the amount of development that'll go into providing a bridge for those users going into 7.0, is it more practical for us to simply preserve the 0->false coercion as the documented behavior of how expressions work in a filter context?",
"created_at": "2017-09-07T14:47:37Z"
},
{
"body": "Part of the reason for the context work we have been doing in scripting is performance. When you do a coercion a million times (assuming one million docs being evaluated), the total time can be non-negligible. This is part of the reason expressions are currently faster in simple cases than painless. Once we have painless performance on par with expressions, I don't think there is any reason to keep expressions around. They were an early experiment in Lucene into doing scripted scoring, and will likely stay there for a long time. But having 2 languages in elasticsearch, especially one with limited functionality, is both confusing for users (\"which one should I use?\") and a maintenance burden on developers. Expanding on the latter, expressions require manual work for every new context we add. It is not simply a matter of \"preserving coercion\". There are a few classes necessary to be created and handled for every context expressions supports.\r\n\r\nSo I think beginning the journey to remove uses of expressions is well worth the time investment. I can add in a hack for 6.1, but I would like to remove it for 7.0 (ie remove filter script support for expressions then).",
"created_at": "2017-09-07T16:22:21Z"
},
{
"body": "+1 to add a workaround for now and removing filter support for expressions in 7.0 (or even remove expressions entirely?)",
"created_at": "2017-09-08T13:31:18Z"
},
{
"body": "@Bargs What do you think?",
"created_at": "2017-09-08T13:38:10Z"
},
{
"body": "The [benchmarks](https://elasticsearch-benchmarks.elastic.co/index.html#tracks/geonames/nightly/30d) still show expressions as being faster than painless. I think it'd make sense for us to wait for painless to catch up to expressions before we talk about removing it entirely.",
"created_at": "2017-09-08T13:42:00Z"
},
{
"body": "Is it worth potential confusion due to users wondering \"which one should I use\"?",
"created_at": "2017-09-08T14:05:21Z"
},
{
"body": "We've had that confusion for a long time though. I think the issue may be moot - we'll likely work to closing that performance gap anyway.",
"created_at": "2017-09-08T14:11:16Z"
},
{
"body": "In Kibana I think we either need to support expressions everywhere or not at all. Having some scripted fields that work with filtering and some that don't will be incredibly confusing to kibana users who didn't set up the scripted fields in the first place.\r\n\r\nRemoving expression support entirely will be a pretty big breaking change. Kibana maintainers will have to rewrite all of their scripts. I'm not sure how we could migrate them automatically. That might be ok as long as our reasons are good enough, breaking changes happen. But I think we need to be absolutely sure removing expressions doesn't make anything impossible that's already possible today. If expressions still outperform painless in certain scenarios, are there use cases where expressions are viable but painless is not?\r\n\r\nAs to confusion over having two languages, I don't think it's a problem, for Kibana at least. In Kibana we default scripted fields to painless and make it clear that's the recommended choice. ",
"created_at": "2017-09-08T14:15:06Z"
},
{
"body": "> I think the issue may be moot\r\n\r\n@nik9000 Not sure what you mean by that. Given the pervasiveness of expressions in Kibana described here, I think it is a worthwhile discussion to have. We need to be thinking far ahead on how to migrate users off of expressions. It is good that painless is the default. And in most cases I think an expressions should \"just work\" as a painless script, so I'm not that worried about transitioning. \r\n\r\nMy concern over continuing to support expressions as filter scripts is the possibility for confusion by users. Because expression only return a double, we have to interpret that double, and cannot distinguish between \"this was a boolean value\" and \"this was a double value\". For example, if a user had an expression like `doc['myfield'].value`, that would previously \"work\" as an expression filter script. But what does that mean? Implementation wise it would return true for non zero, but a user might think it means \"if the field exists\".\r\n\r\n> In Kibana I think we either need to support expressions everywhere or not at all. \r\n\r\n@Bargs This is simply not possible. Expressions already don't work in some contexts. For example, update scripts, reindex scripts, or anything else that doesn't return a numeric value. The only reason they worked before for filter scripts is this very old hack that existed within filter scripts which converted 0.0 to false and everything else to true.\r\n\r\nAs I said before, I can add a hack back in just for expressions for filter scripts, but I don't want to do so unless there is agreement and a plan of action to eliminate this hack long term. Regardless of when expressions are deprecated and removed overall, I don't want expressions supporting filter scripts because of the ambiguities I have described here.",
"created_at": "2017-09-08T16:00:54Z"
},
{
"body": "> @nik9000 Not sure what you mean by that.\r\n\r\nI meant that we are likely to close the performance gap significantly during the 6.x release cycle so we might be able to remove expressions entirely in 7.0 so my point about waiting until Painless catches up might not matter because it will catch up.\r\n\r\n\r\nI agree with your concern about expressions in filters. I find the tricks that kibana plays with scripts to be a bit tricky and this sort of 0-as-false thing plays along. I'd like to avoid it if we can but you are right that the transition path is going to be fun.\r\n\r\n\r\n\r\n> Expressions already don't work in some contexts. For example, update scripts, reindex scripts, or anything else that doesn't return a numeric value\r\n\r\nKibana has a slightly different meaning for the phrase \"script context\" then we do so we can have communications issues around this. One simplistic answer to this is \"kibana doesn't care about those contexts\". That isn't strictly true and is oversimplifying it gives you a sense as to why expressions work in all of kibanas script contexts.",
"created_at": "2017-09-08T16:58:10Z"
},
{
"body": "> One simplistic answer to this is \"kibana doesn't care about those contexts\".\r\n\r\nYes, thank you @nik9000, this is what I meant. I should have been more specific and said: \"In Kibana I think we either need to fully support expressions in \"*kibana* scripted fields\" or not at all. \"Kibana scripted fields\" aren't used for updating, reindexing, etc.\r\n\r\nSo just to clarify my thoughts: if we remove the ability to filter with expressions in Elasticsearch I think we should also remove expression support from \"Kibana scripted fields\" entirely. I'm ok with that if we're sure we're not leaving any users up the creek without a paddle. ",
"created_at": "2017-09-08T22:02:07Z"
},
{
"body": "@Bargs and I talked about this a bit, and he's going to proceed with deprecating expressions in Kibana scripted fields in 6.1 and removing them entirely from master.\r\n\r\n@rjernst Can you add a hack for this in 6.x so the existing behavior starts working again? Kibana is currently pinned to a month old commit of Elasticsearch in CI, so I'd like to undo that asap.",
"created_at": "2017-09-28T20:27:34Z"
},
{
"body": "Sure, this comment from you is enough of an agreement. I'll work on a PR soon. :)",
"created_at": "2017-09-28T21:46:42Z"
},
{
"body": "Awesome, thanks",
"created_at": "2017-09-28T21:55:14Z"
}
],
"number": 26429,
"title": "Expressions scripts in filter contexts throw exception"
} | {
"body": "This was a simple copy/paste bug in an earlier refactoring.\r\n\r\nrelates #26429",
"number": 26528,
"review_comments": [],
"title": "Fix reference to painless inside expression engine"
} | {
"commits": [
{
"message": "Fix reference to painless inside expression engine\n\nThis was a simple copy/paste bug in an earlier refactoring."
}
],
"files": [
{
"diff": "@@ -108,7 +108,7 @@ protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundE\n ExecutableScript.Factory factory = (p) -> new ExpressionExecutableScript(expr, p);\n return context.factoryClazz.cast(factory);\n }\n- throw new IllegalArgumentException(\"painless does not know how to handle context [\" + context.name + \"]\");\n+ throw new IllegalArgumentException(\"expression engine does not know how to handle script context [\" + context.name + \"]\");\n }\n \n private SearchScript.LeafFactory newSearchScript(Expression expr, SearchLookup lookup, @Nullable Map<String, Object> vars) {",
"filename": "modules/lang-expression/src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngine.java",
"status": "modified"
}
]
} |
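The thread above centers on the old "0.0 is false, anything else is true" coercion that let expressions act as filter scripts even though they can only produce a double. A tiny hypothetical sketch of that coercion follows; the interface and class names are invented for illustration and are not Elasticsearch APIs.

```java
/**
 * Illustrative sketch of the double-to-boolean coercion discussed in the thread:
 * an expression only yields a number, so using it as a filter means interpreting
 * that number as a match/no-match decision, which is exactly the ambiguity raised
 * above (a 0.0 value and "field absent" become indistinguishable).
 */
public class ExpressionFilterCoercion {

    interface NumericScript {
        double runAsDouble();
    }

    static boolean runAsFilter(NumericScript script) {
        // the old hack: any non-zero result counts as a match
        return script.runAsDouble() != 0.0;
    }

    public static void main(String[] args) {
        System.out.println(runAsFilter(() -> 14.0)); // true
        System.out.println(runAsFilter(() -> 0.0));  // false
    }
}
```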
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 5.5.2, Build: b2f0c09/2017-08-14T12:33:14.154Z, JVM: 1.8.0_121\r\n\r\n**Plugins installed**:\r\n* repository-hdfs\r\n* x-pack\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_121\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_121-tdc1-b13)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux HOSTNAME 3.0.101-0.113.TDC.1.R.0-default #1 SMP Fri Dec 9 04:51:20 PST 2016 (ca32437) x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nShould not see slf4j message in the stdout logs\r\n\r\n**Steps to reproduce**:\r\n 1. Extract Elasticsearch 5.5.2\r\n 2. Install repository-hdfs plugin\r\n 3. Start Elasticsearch\r\n 4. See slf4j error in stdout\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n...\r\nSLF4J: Failed to load class \"org.slf4j.impl.StaticLoggerBinder\".\r\nSLF4J: Defaulting to no-operation (NOP) logger implementation\r\nSLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.\r\n...\r\n```",
"comments": [
{
"body": "One of the major side effects of the missing slf4j library is that logging can't be enabled for the `org.apache.hadoop` logs trying to debug any issues with the plugin.",
"created_at": "2017-09-05T20:46:21Z"
},
{
"body": "```bash\r\n# make hadoop slf4j logging go to log4j2 instead of noop\r\n## Error upon start of Elasticsearch\r\n## SLF4J: Failed to load class \"org.slf4j.impl.StaticLoggerBinder\". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.\r\nwget -O ${ELASTICSEARCH_HOME}/plugins/repository-hdfs/log4j-slf4j-impl-2.8.2.jar http://central.maven.org/maven2/org/apache/logging/log4j/log4j-slf4j-impl/2.8.2/log4j-slf4j-impl-2.8.2.jar\r\n```\r\nTested with the following\r\n```bash\r\n# Change log level as necesary for hadoop\r\ncurl -i -u USERNAME -XPUT -H 'Accept: application/json' -H 'Content-Type: application/json' \"https://$(hostname -f):9200/_cluster/settings\" -d '{\"transient\" : { \"logger.org.apache.hadoop\" : \"info\"}}'\r\n```",
"created_at": "2017-09-05T20:47:02Z"
}
],
"number": 26512,
"title": "Missing slf4j binding for :Plugin Repository HDFS "
} | {
"body": "This commit adds the Log4j to SLF4J binding JAR to the repository-hdfs plugin so that SLF4J can detect Log4j at runtime and therefore use the server Log4j implementation for logging (and the usual Elasticsearch APIs can be used for setting logging levels).\r\n\r\nCloses #26512\r\n\r\n\r\n",
"number": 26514,
"review_comments": [],
"title": "Add Log4j to SLF4J binding for repository-hdfs"
} | {
"commits": [
{
"message": "Add Log4j to SLF4J binding for repository-hdfs\n\nThis commit adds the Log4j to SLF4J binding JAR to the repository-hdfs\nplugin so that SLF4J can detect Log4j at runtime and therefore use the\nserver Log4j implementation for logging (and the usual Elasticsearch\nAPIs can be used for setting logging levels)."
},
{
"message": "Add SHA"
},
{
"message": "Add licenses"
},
{
"message": "Third party audit"
},
{
"message": "Add third party audit exclusion"
}
],
"files": [
{
"diff": "@@ -57,6 +57,7 @@ dependencies {\n compile 'commons-lang:commons-lang:2.6'\n compile 'javax.servlet:servlet-api:2.5'\n compile \"org.slf4j:slf4j-api:${versions.slf4j}\"\n+ compile \"org.apache.logging.log4j:log4j-slf4j-impl:${versions.log4j}\"\n \n hdfsFixture project(':test:fixtures:hdfs-fixture')\n }\n@@ -470,9 +471,8 @@ thirdPartyAudit.excludes = [\n // internal java api: sun.misc.SignalHandler\n 'org.apache.hadoop.util.SignalLogger$Handler',\n \n- // optional dependencies of slf4j-api\n- 'org.slf4j.impl.StaticMDCBinder',\n- 'org.slf4j.impl.StaticMarkerBinder',\n+ // we are not pulling in slf4j-ext, this is okay, Log4j will fallback gracefully\n+ 'org.slf4j.ext.EventData',\n \n 'org.apache.log4j.AppenderSkeleton',\n 'org.apache.log4j.AsyncAppender',\n@@ -493,12 +493,6 @@ thirdPartyAudit.excludes = [\n 'com.squareup.okhttp.ResponseBody'\n ]\n \n-// Gradle 2.13 bundles org.slf4j.impl.StaticLoggerBinder in its core.jar which leaks into the forbidden APIs ant task\n-// Gradle 2.14+ does not bundle this class anymore so we need to properly exclude it here.\n-if (GradleVersion.current() > GradleVersion.version(\"2.13\")) {\n- thirdPartyAudit.excludes += ['org.slf4j.impl.StaticLoggerBinder']\n-}\n-\n if (JavaVersion.current() > JavaVersion.VERSION_1_8) {\n thirdPartyAudit.excludes += ['javax.xml.bind.annotation.adapters.HexBinaryAdapter']\n }",
"filename": "plugins/repository-hdfs/build.gradle",
"status": "modified"
},
{
"diff": "@@ -0,0 +1 @@\n+1bd7f6b6ddbaf8a21d6c2b288d0cc5bc5b791cc0\n\\ No newline at end of file",
"filename": "plugins/repository-hdfs/licenses/log4j-slf4j-impl-2.9.0.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1,202 @@\n+\n+ Apache License\n+ Version 2.0, January 2004\n+ http://www.apache.org/licenses/\n+\n+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n+\n+ 1. Definitions.\n+\n+ \"License\" shall mean the terms and conditions for use, reproduction,\n+ and distribution as defined by Sections 1 through 9 of this document.\n+\n+ \"Licensor\" shall mean the copyright owner or entity authorized by\n+ the copyright owner that is granting the License.\n+\n+ \"Legal Entity\" shall mean the union of the acting entity and all\n+ other entities that control, are controlled by, or are under common\n+ control with that entity. For the purposes of this definition,\n+ \"control\" means (i) the power, direct or indirect, to cause the\n+ direction or management of such entity, whether by contract or\n+ otherwise, or (ii) ownership of fifty percent (50%) or more of the\n+ outstanding shares, or (iii) beneficial ownership of such entity.\n+\n+ \"You\" (or \"Your\") shall mean an individual or Legal Entity\n+ exercising permissions granted by this License.\n+\n+ \"Source\" form shall mean the preferred form for making modifications,\n+ including but not limited to software source code, documentation\n+ source, and configuration files.\n+\n+ \"Object\" form shall mean any form resulting from mechanical\n+ transformation or translation of a Source form, including but\n+ not limited to compiled object code, generated documentation,\n+ and conversions to other media types.\n+\n+ \"Work\" shall mean the work of authorship, whether in Source or\n+ Object form, made available under the License, as indicated by a\n+ copyright notice that is included in or attached to the work\n+ (an example is provided in the Appendix below).\n+\n+ \"Derivative Works\" shall mean any work, whether in Source or Object\n+ form, that is based on (or derived from) the Work and for which the\n+ editorial revisions, annotations, elaborations, or other modifications\n+ represent, as a whole, an original work of authorship. For the purposes\n+ of this License, Derivative Works shall not include works that remain\n+ separable from, or merely link (or bind by name) to the interfaces of,\n+ the Work and Derivative Works thereof.\n+\n+ \"Contribution\" shall mean any work of authorship, including\n+ the original version of the Work and any modifications or additions\n+ to that Work or Derivative Works thereof, that is intentionally\n+ submitted to Licensor for inclusion in the Work by the copyright owner\n+ or by an individual or Legal Entity authorized to submit on behalf of\n+ the copyright owner. For the purposes of this definition, \"submitted\"\n+ means any form of electronic, verbal, or written communication sent\n+ to the Licensor or its representatives, including but not limited to\n+ communication on electronic mailing lists, source code control systems,\n+ and issue tracking systems that are managed by, or on behalf of, the\n+ Licensor for the purpose of discussing and improving the Work, but\n+ excluding communication that is conspicuously marked or otherwise\n+ designated in writing by the copyright owner as \"Not a Contribution.\"\n+\n+ \"Contributor\" shall mean Licensor and any individual or Legal Entity\n+ on behalf of whom a Contribution has been received by Licensor and\n+ subsequently incorporated within the Work.\n+\n+ 2. Grant of Copyright License. 
Subject to the terms and conditions of\n+ this License, each Contributor hereby grants to You a perpetual,\n+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n+ copyright license to reproduce, prepare Derivative Works of,\n+ publicly display, publicly perform, sublicense, and distribute the\n+ Work and such Derivative Works in Source or Object form.\n+\n+ 3. Grant of Patent License. Subject to the terms and conditions of\n+ this License, each Contributor hereby grants to You a perpetual,\n+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n+ (except as stated in this section) patent license to make, have made,\n+ use, offer to sell, sell, import, and otherwise transfer the Work,\n+ where such license applies only to those patent claims licensable\n+ by such Contributor that are necessarily infringed by their\n+ Contribution(s) alone or by combination of their Contribution(s)\n+ with the Work to which such Contribution(s) was submitted. If You\n+ institute patent litigation against any entity (including a\n+ cross-claim or counterclaim in a lawsuit) alleging that the Work\n+ or a Contribution incorporated within the Work constitutes direct\n+ or contributory patent infringement, then any patent licenses\n+ granted to You under this License for that Work shall terminate\n+ as of the date such litigation is filed.\n+\n+ 4. Redistribution. You may reproduce and distribute copies of the\n+ Work or Derivative Works thereof in any medium, with or without\n+ modifications, and in Source or Object form, provided that You\n+ meet the following conditions:\n+\n+ (a) You must give any other recipients of the Work or\n+ Derivative Works a copy of this License; and\n+\n+ (b) You must cause any modified files to carry prominent notices\n+ stating that You changed the files; and\n+\n+ (c) You must retain, in the Source form of any Derivative Works\n+ that You distribute, all copyright, patent, trademark, and\n+ attribution notices from the Source form of the Work,\n+ excluding those notices that do not pertain to any part of\n+ the Derivative Works; and\n+\n+ (d) If the Work includes a \"NOTICE\" text file as part of its\n+ distribution, then any Derivative Works that You distribute must\n+ include a readable copy of the attribution notices contained\n+ within such NOTICE file, excluding those notices that do not\n+ pertain to any part of the Derivative Works, in at least one\n+ of the following places: within a NOTICE text file distributed\n+ as part of the Derivative Works; within the Source form or\n+ documentation, if provided along with the Derivative Works; or,\n+ within a display generated by the Derivative Works, if and\n+ wherever such third-party notices normally appear. The contents\n+ of the NOTICE file are for informational purposes only and\n+ do not modify the License. You may add Your own attribution\n+ notices within Derivative Works that You distribute, alongside\n+ or as an addendum to the NOTICE text from the Work, provided\n+ that such additional attribution notices cannot be construed\n+ as modifying the License.\n+\n+ You may add Your own copyright statement to Your modifications and\n+ may provide additional or different license terms and conditions\n+ for use, reproduction, or distribution of Your modifications, or\n+ for any such Derivative Works as a whole, provided Your use,\n+ reproduction, and distribution of the Work otherwise complies with\n+ the conditions stated in this License.\n+\n+ 5. Submission of Contributions. 
Unless You explicitly state otherwise,\n+ any Contribution intentionally submitted for inclusion in the Work\n+ by You to the Licensor shall be under the terms and conditions of\n+ this License, without any additional terms or conditions.\n+ Notwithstanding the above, nothing herein shall supersede or modify\n+ the terms of any separate license agreement you may have executed\n+ with Licensor regarding such Contributions.\n+\n+ 6. Trademarks. This License does not grant permission to use the trade\n+ names, trademarks, service marks, or product names of the Licensor,\n+ except as required for reasonable and customary use in describing the\n+ origin of the Work and reproducing the content of the NOTICE file.\n+\n+ 7. Disclaimer of Warranty. Unless required by applicable law or\n+ agreed to in writing, Licensor provides the Work (and each\n+ Contributor provides its Contributions) on an \"AS IS\" BASIS,\n+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n+ implied, including, without limitation, any warranties or conditions\n+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n+ PARTICULAR PURPOSE. You are solely responsible for determining the\n+ appropriateness of using or redistributing the Work and assume any\n+ risks associated with Your exercise of permissions under this License.\n+\n+ 8. Limitation of Liability. In no event and under no legal theory,\n+ whether in tort (including negligence), contract, or otherwise,\n+ unless required by applicable law (such as deliberate and grossly\n+ negligent acts) or agreed to in writing, shall any Contributor be\n+ liable to You for damages, including any direct, indirect, special,\n+ incidental, or consequential damages of any character arising as a\n+ result of this License or out of the use or inability to use the\n+ Work (including but not limited to damages for loss of goodwill,\n+ work stoppage, computer failure or malfunction, or any and all\n+ other commercial damages or losses), even if such Contributor\n+ has been advised of the possibility of such damages.\n+\n+ 9. Accepting Warranty or Additional Liability. While redistributing\n+ the Work or Derivative Works thereof, You may choose to offer,\n+ and charge a fee for, acceptance of support, warranty, indemnity,\n+ or other liability obligations and/or rights consistent with this\n+ License. However, in accepting such obligations, You may act only\n+ on Your own behalf and on Your sole responsibility, not on behalf\n+ of any other Contributor, and only if You agree to indemnify,\n+ defend, and hold each Contributor harmless for any liability\n+ incurred by, or claims asserted against, such Contributor by reason\n+ of your accepting any such warranty or additional liability.\n+\n+ END OF TERMS AND CONDITIONS\n+\n+ APPENDIX: How to apply the Apache License to your work.\n+\n+ To apply the Apache License to your work, attach the following\n+ boilerplate notice, with the fields enclosed by brackets \"[]\"\n+ replaced with your own identifying information. (Don't include\n+ the brackets!) The text should be enclosed in the appropriate\n+ comment syntax for the file format. 
We also recommend that a\n+ file or class name and description of purpose be included on the\n+ same \"printed page\" as the copyright notice for easier\n+ identification within third-party archives.\n+\n+ Copyright 1999-2005 The Apache Software Foundation\n+\n+ Licensed under the Apache License, Version 2.0 (the \"License\");\n+ you may not use this file except in compliance with the License.\n+ You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+ Unless required by applicable law or agreed to in writing, software\n+ distributed under the License is distributed on an \"AS IS\" BASIS,\n+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ See the License for the specific language governing permissions and\n+ limitations under the License.",
"filename": "plugins/repository-hdfs/licenses/log4j-slf4j-impl-LICENSE.txt",
"status": "added"
},
{
"diff": "@@ -0,0 +1,5 @@\n+Apache log4j\n+Copyright 2007 The Apache Software Foundation\n+\n+This product includes software developed at\n+The Apache Software Foundation (http://www.apache.org/).\n\\ No newline at end of file",
"filename": "plugins/repository-hdfs/licenses/log4j-slf4j-impl-NOTICE.txt",
"status": "added"
}
]
} |
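As a quick way to understand the symptom in the issue: SLF4J 1.x falls back to its no-op logger and prints the "Failed to load class org.slf4j.impl.StaticLoggerBinder" warning when no binding is on the classpath, which is what the log4j-slf4j-impl jar added by this PR supplies. The sketch below is illustrative diagnostic code, not part of the plugin; it only probes for the binder class by name.

```java
/**
 * Diagnostic sketch (assumed class name): report whether an SLF4J 1.x binding such
 * as log4j-slf4j-impl is visible on the classpath. Without one, SLF4J logging from
 * Hadoop cannot be routed to the server's Log4j configuration.
 */
public class Slf4jBindingCheck {

    static boolean slf4jBindingPresent() {
        try {
            Class.forName("org.slf4j.impl.StaticLoggerBinder");
            return true;   // a binding (e.g. log4j-slf4j-impl) is available
        } catch (ClassNotFoundException e) {
            return false;  // SLF4J will default to the NOP logger
        }
    }

    public static void main(String[] args) {
        System.out.println("SLF4J binding present: " + slf4jBindingPresent());
    }
}
```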
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nmaster\r\n**Plugins installed**: []\r\nN/A\r\n**JVM version** (`java -version`):\r\nN/A\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nN/A\r\n**Description of the problem including expected versus actual behavior**:\r\nI believe that there is a typo in org.elasticsearch.common.joda.Joda. As seen below, ```Time``` should not be capitalized, but rather should be ```basic_t_time```.\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/8f0369296f61d8c6ddcf821899715b4d969d4438/core/src/main/java/org/elasticsearch/common/joda/Joda.java#L82",
"comments": [
{
"body": "I want to take it as my first PR.",
"created_at": "2017-09-05T20:15:41Z"
},
{
"body": "@ashworx Thanks for your interest but there is an open PR (#26503) for this one already.",
"created_at": "2017-09-05T20:26:28Z"
},
{
"body": "I would like to contribute to this project, how do I go ahead?",
"created_at": "2017-09-25T14:13:40Z"
}
],
"number": 26500,
"title": "Typo in date format"
} | {
"body": "Fix #26500 ",
"number": 26503,
"review_comments": [
{
"body": "can you use `expectThrows` instead?",
"created_at": "2017-09-07T08:25:36Z"
},
{
"body": "Yes, thanks.",
"created_at": "2017-09-07T09:18:40Z"
}
],
"title": "Fix typo in date format"
} | {
"commits": [
{
"message": "Fix typo"
},
{
"message": "Add unit test for btt time format"
}
],
"files": [
{
"diff": "@@ -79,7 +79,7 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n formatter = ISODateTimeFormat.basicTime();\n } else if (\"basicTimeNoMillis\".equals(input) || \"basic_time_no_millis\".equals(input)) {\n formatter = ISODateTimeFormat.basicTimeNoMillis();\n- } else if (\"basicTTime\".equals(input) || \"basic_t_Time\".equals(input)) {\n+ } else if (\"basicTTime\".equals(input) || \"basic_t_time\".equals(input)) {\n formatter = ISODateTimeFormat.basicTTime();\n } else if (\"basicTTimeNoMillis\".equals(input) || \"basic_t_time_no_millis\".equals(input)) {\n formatter = ISODateTimeFormat.basicTTimeNoMillis();",
"filename": "core/src/main/java/org/elasticsearch/common/joda/Joda.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,53 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.joda;\n+\n+import org.elasticsearch.test.ESTestCase;\n+import org.joda.time.DateTime;\n+import org.joda.time.DateTimeZone;\n+import org.joda.time.format.DateTimeFormatter;\n+\n+\n+public class JodaTests extends ESTestCase {\n+\n+\n+ public void testBasicTTimePattern() {\n+ FormatDateTimeFormatter formatter1 = Joda.forPattern(\"basic_t_time\");\n+ assertEquals(formatter1.format(), \"basic_t_time\");\n+ DateTimeFormatter parser1 = formatter1.parser();\n+\n+ assertEquals(parser1.getZone(), DateTimeZone.UTC);\n+\n+ FormatDateTimeFormatter formatter2 = Joda.forPattern(\"basicTTime\");\n+ assertEquals(formatter2.format(), \"basicTTime\");\n+ DateTimeFormatter parser2 = formatter2.parser();\n+\n+ assertEquals(parser2.getZone(), DateTimeZone.UTC);\n+\n+ DateTime dt = new DateTime(2004, 6, 9, 10, 20, 30, 40, DateTimeZone.UTC);\n+ assertEquals(\"T102030.040Z\", parser1.print(dt));\n+ assertEquals(\"T102030.040Z\", parser2.print(dt));\n+\n+ expectThrows(IllegalArgumentException.class, () -> Joda.forPattern(\"basic_t_Time\"));\n+ expectThrows(IllegalArgumentException.class, () -> Joda.forPattern(\"basic_T_Time\"));\n+ expectThrows(IllegalArgumentException.class, () -> Joda.forPattern(\"basic_T_time\"));\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/common/joda/JodaTests.java",
"status": "added"
}
]
} |
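The fix above is a one-character case change, but it is easy to miss why it mattered: the format lookup is a plain, case-sensitive string comparison, so the misspelled literal `basic_t_Time` meant the documented lowercase name `basic_t_time` could never be selected. A minimal sketch of that comparison; only the two string literals come from the diff, everything else is illustrative.

```java
/**
 * Illustrative sketch (assumed class and method names) of the case-sensitive
 * pattern lookup fixed above.
 */
public class PatternLookup {

    static boolean matchesBasicTTime(String input) {
        // after the fix: camelCase alias or the all-lowercase snake_case name
        return "basicTTime".equals(input) || "basic_t_time".equals(input);
    }

    public static void main(String[] args) {
        System.out.println(matchesBasicTTime("basic_t_time")); // true, matches only after the fix
        System.out.println(matchesBasicTTime("basic_t_Time")); // false, the old misspelled literal
    }
}
```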
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\n\r\nVersion: 6.0.0-beta1, Build: 896afa4/2017-08-03T23:14:26.258Z, JVM: 1.8.0_144\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\n\r\njava version \"1.8.0_144\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_144-b01)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\nLinux 816fd3d99829 4.4.0-91-generic #114-Ubuntu SMP Tue Aug 8 11:56:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nI'm running Elasticsearch within a docker container (Docker version 17.05.0-ce, build 89658be). Have used older versions up to Elasticsearch 5.5.1 with the same setup without any problems.\r\n\r\nWhen testing to run Elasticsearch 6.0.0-beta1 I have faced an issue that I haven't seem before. The issue is that Elasticsearch crashes when receiving a request.\r\n\r\nWhen starting up the Elasticsearch container, it starts up and Elasticsearch health status seems ok by looking in the logs as it states `Cluster health status changed from [RED] to [YELLOW]` and only a single Elasticsearch instance is running.\r\n\r\nRunning a simple API towards the elasticsearch instance works fine:\r\n\r\n```\r\nuser@host:/elasticsearch# curl localhost:9200/\r\n{\r\n \"name\" : \"dvphuFi\",\r\n \"cluster_name\" : \"elasticsearch-logs\",\r\n \"cluster_uuid\" : \"k8QHY2SRTsyZn8sLo5vNRw\",\r\n \"version\" : {\r\n \"number\" : \"6.0.0-beta1\",\r\n \"build_hash\" : \"896afa4\",\r\n \"build_date\" : \"2017-08-03T23:14:26.258Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"7.0.0\",\r\n \"minimum_wire_compatibility_version\" : \"5.6.0\",\r\n \"minimum_index_compatibility_version\" : \"5.0.0\"\r\n },\r\n \"tagline\" : \"You Know, for Search\"\r\n}\r\n```\r\n\r\nHowever, running any query will cause the Elasticsearch instance to crash:\r\n\r\n```\r\nuser@host:/elasticsearch# curl localhost:9200/_search?q=test\r\ncurl: (52) Empty reply from server\r\n```\r\n\r\nThe following steps was done in order to get the system working again after the error started happening:\r\n1. Stop, remove and restart the docker image -> Error still occurred\r\n2. Restarting the Docker deamon (via `service docker restart` -> Error still occurred\r\n3. Restarting the Linux host -> Error still occurred\r\n4. 
Deleting the Elasticsearch data dir -> System works ok again, however data needs to be re-indexed\r\n\r\nMore logs will be attached as a comment to this issue.\r\n\r\nBelow is an excerpt from the logs:\r\n\r\n```\r\n[2017-08-14T11:20:27,152][INFO ][o.e.n.Node ] [] initializing ...\r\n[2017-08-14T11:20:27,257][INFO ][o.e.e.NodeEnvironment ] [dvphuFi] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [14.5gb], net total_space [31.3gb], types [ext4]\r\n[2017-08-14T11:20:27,257][INFO ][o.e.e.NodeEnvironment ] [dvphuFi] heap size [3.8gb], compressed ordinary object pointers [true]\r\n[2017-08-14T11:20:27,591][INFO ][o.e.n.Node ] node name [dvphuFi] derived from node ID [dvphuFiRT8aEJicgCiXprg]; set [node.name] to override\r\n[2017-08-14T11:20:27,591][INFO ][o.e.n.Node ] version[6.0.0-beta1], pid[14], build[896afa4/2017-08-03T23:14:26.258Z], OS[Linux/4.4.0-91-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]\r\n[2017-08-14T11:20:27,592][INFO ][o.e.n.Node ] JVM arguments [-Xms4g, -Xmx4g, -Dlog4j2.disable.jmx=true, -Des.path.home=/elasticsearch, -Des.path.conf=/elasticsearch/config]\r\n[2017-08-14T11:20:27,592][WARN ][o.e.n.Node ] version [6.0.0-beta1] is a pre-release version of Elasticsearch and is not suitable for production\r\n[2017-08-14T11:20:28,200][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [aggs-matrix-stats]\r\n[2017-08-14T11:20:28,200][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [analysis-common]\r\n[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [ingest-common]\r\n[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-expression]\r\n[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-mustache]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-painless]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [parent-join]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [percolator]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [reindex]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [repository-url]\r\n[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [transport-netty4]\r\n[2017-08-14T11:20:28,203][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [tribe]\r\n[2017-08-14T11:20:28,203][INFO ][o.e.p.PluginsService ] [dvphuFi] no plugins loaded\r\n[2017-08-14T11:20:29,526][INFO ][o.e.d.DiscoveryModule ] [dvphuFi] using discovery type [zen]\r\n[2017-08-14T11:20:30,304][INFO ][o.e.n.Node ] initialized\r\n[2017-08-14T11:20:30,305][INFO ][o.e.n.Node ] [dvphuFi] starting ...\r\n[2017-08-14T11:20:30,336][INFO ][i.n.u.i.PlatformDependent] Your platform does not provide complete low-level API for accessing direct buffers reliably. 
Unless explicitly requested, heap buffer will always be preferred to avoid potential system instability.\r\n[2017-08-14T11:20:30,431][INFO ][o.e.t.TransportService ] [dvphuFi] publish_address {172.18.0.10:9300}, bound_addresses {0.0.0.0:9300}\r\n[2017-08-14T11:20:30,440][INFO ][o.e.b.BootstrapChecks ] [dvphuFi] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks\r\n[2017-08-14T11:20:33,486][INFO ][o.e.c.s.MasterService ] [dvphuFi] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300}\r\n[2017-08-14T11:20:33,492][INFO ][o.e.c.s.ClusterApplierService] [dvphuFi] new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300}, reason: apply cluster state (from master [master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])\r\n[2017-08-14T11:20:33,526][INFO ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] publish_address {172.18.0.10:9200}, bound_addresses {0.0.0.0:9200}\r\n[2017-08-14T11:20:33,526][INFO ][o.e.n.Node ] [dvphuFi] started\r\n[2017-08-14T11:20:35,029][INFO ][o.e.g.GatewayService ] [dvphuFi] recovered [125] indices into cluster_state\r\n[2017-08-14T11:20:45,558][INFO ][o.e.c.r.a.AllocationService] [dvphuFi] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-logs-2017.08.09][2], [logstash-logs-2017.08.09][3]] ...]).\r\n[2017-08-14T11:31:18,825][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer\r\n at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)\r\n at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:81)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)\r\n at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\r\n at java.lang.Thread.run(Thread.java:748)\r\n[2017-08-14T11:31:18,839][WARN ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] caught exception while handling client http traffic, closing connection [id: 0x423a8b89, L:/127.0.0.1:9200 - R:/127.0.0.1:44298]\r\njava.lang.StackOverflowError: null\r\n at com.carrotsearch.hppc.ObjectObjectHashMap.<init>(ObjectObjectHashMap.java:123) ~[hppc-0.7.1.jar:?]\r\n...\r\n...\r\n...\r\n```\r\n\r\nFull log is attached below.\r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1. Start and run Elasticsearch within a Docker container\r\n 2. 
Run a query against the Elasticsearch instance such as `/_search?q=test`\r\n 3. Elasticsearch crashes with a stacktrace in the logs (see below)\r\n\r\n**Provide logs (if relevant)**:\r\n\r\nLogs will be added as a comment.\r\n",
"comments": [
{
"body": "Full log:\r\n[elasticsearch_log.txt](https://github.com/elastic/elasticsearch/files/1222132/elasticsearch_log.txt)",
"created_at": "2017-08-14T12:07:41Z"
},
{
"body": "Can you clarify one thing? It seems that you are using your own Docker image and not the official Docker image provided by Elastic. Is that correct?",
"created_at": "2017-08-14T12:21:03Z"
},
{
"body": "Yes, that's correct. I'm using my own Docker image (which I have used without problems up until ES 5.5.1).",
"created_at": "2017-08-14T12:31:08Z"
},
{
"body": "Can you share the Dockerfile?",
"created_at": "2017-08-14T12:34:34Z"
},
{
"body": "Of course!\r\n\r\nThis is the Dockerfile:\r\n```\r\nFROM ubuntu:16.04\r\n\r\nENV ELASTICSEARCH_VERSION 6.0.0-beta1\r\n\r\n# no tty in this container\r\nENV DEBIAN_FRONTEND noninteractive\r\n\r\n# Update index and install packages\r\nRUN apt-get update && apt-get install -y \\\r\n apt-utils \\\r\n software-properties-common \\\r\n curl \\\r\n nano \\\r\n sudo \\\r\n nmap\r\n\r\n# Install JDK 8\r\nRUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \\\r\n add-apt-repository -y ppa:webupd8team/java && \\\r\n apt-get update && \\\r\n apt-get install -y oracle-java8-installer wget unzip tar && \\\r\n rm -rf /var/lib/apt/lists/* && \\\r\n rm -rf /var/cache/oracle-jdk8-installer\r\n\r\n# Define JAVA variables\r\nENV JAVA_HOME /usr/lib/jvm/java-8-oracle\r\nENV JAVA /usr/bin/java\r\n\r\n# Download and install Elasticsearch\r\nRUN \\\r\n cd / && \\\r\n curl https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ELASTICSEARCH_VERSION.tar.gz | \\\r\n tar xzf - && \\\r\n mv elasticsearch-$ELASTICSEARCH_VERSION /elasticsearch\r\n\r\n# Define mountable directories.\r\nVOLUME [\"/data\"]\r\n\r\n# Add elasticsearch config files\r\nADD docker/elasticsearch/config/elasticsearch.yml /elasticsearch/config/\r\nADD docker/elasticsearch/config/jvm.options /elasticsearch/config/\r\n\r\n# Setup limits to enable mlock\r\nADD docker/elasticsearch/config/limits.conf /etc/security/\r\n\r\n# Define working directory.\r\nWORKDIR /elasticsearch\r\n\r\n# Create user elasticsearch\r\nRUN groupadd -g 1000 elasticsearch && useradd elasticsearch -u 1000 -g 1000\r\n\r\nRUN set -ex && for path in data logs plugins config config/scripts; do \\\r\n mkdir -p \"$path\"; \\\r\n chown -R elasticsearch:elasticsearch \"$path\"; \\\r\n done\r\n\r\n# Add elasticsearch to path\r\nENV PATH=$PATH:/elasticsearch/bin\r\n\r\n# Add start script\r\nADD docker/elasticsearch/config/start.sh /\r\n\r\n# Define default command.\r\nCMD [\"/start.sh\"]\r\n\r\n# Expose ports.\r\n# - 9200: HTTP\r\n# - 9300: transport\r\nEXPOSE 9200\r\nEXPOSE 9300\r\n```\r\n\r\nelasticsearch.yml:\r\n```\r\ncluster.name: \"elasticsearch-logs\"\r\nnetwork.host: 0.0.0.0\r\npath.data: /data/elasticsearch\r\npath.logs: /logs/elasticsearch\r\nbootstrap.memory_lock: true\r\ndiscovery.zen.minimum_master_nodes: 1\r\nnode.max_local_storage_nodes: 1\r\n```\r\n\r\njvm.options:\r\n```\r\n-Xms4g\r\n-Xmx4g\r\n```\r\n\r\nlimits.conf:\r\n```\r\nelasticsearch soft nproc 65535555\r\nelasticsearch hard nproc 65553555\r\nelasticsearch soft nofile 655355\r\nelasticsearch hard nofile 655355\r\nelasticsearch soft memlock unlimited\r\nelasticsearch hard memlock unlimited\r\n```\r\n\r\nstart.sh:\r\n```\r\n#!/bin/sh\r\n\r\n# make sure that data and log dir exists\r\nmkdir -p /data/elasticsearch\r\nmkdir -p /logs/elasticsearch\r\n\r\n# ...and that elasticsearch user can write to them\r\nchown -R elasticsearch:elasticsearch /data/elasticsearch\r\nchown -R elasticsearch:elasticsearch /logs/elasticsearch\r\n\r\n# provision elasticsearch user\r\nadduser elasticsearch sudo\r\nchown -R elasticsearch /elasticsearch /data\r\necho '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers\r\n\r\n# allow for memlock\r\nulimit -l unlimited\r\n\r\n# setting up java heap size\r\necho -Xms${JVM_HEAP_GB}g > /elasticsearch/config/jvm.options\r\necho -Xmx${JVM_HEAP_GB}g >> /elasticsearch/config/jvm.options\r\n\r\n# Disabling JVM security manager exceptions\r\necho -Dlog4j2.disable.jmx=true >> /elasticsearch/config/jvm.options\r\n\r\n# run\r\nsudo -E -u 
elasticsearch /elasticsearch/bin/elasticsearch\r\n```",
"created_at": "2017-08-14T12:49:31Z"
},
{
"body": "This does not reproduce for me. What additional steps can you provide that reliably reproduces this? Additionally, are there any additional lines to the stack trace available?",
"created_at": "2017-08-14T13:30:33Z"
},
{
"body": "I think I know what the problem is. Here's the key: the constructor for `ObjectObjectHashMap` is not recursive, the stack overflow is not occurring on a recursive call, it is just overflowing on what is otherwise a shallow stack. I think that you're running on a system with limited resources, I think that the default thread stack sizes are too small. Please add: `-Xss1m` to your `jvm.options` by adding the line:\r\n\r\n`echo -Xss1m > /elasticsearch/config/jvm.options`\r\n\r\nto your `start.sh`.\r\n\r\nI think that this is not an Elasticsearch bug and I am going to close this issue. Please let me know either way if this does not resolve the problem that you are encountering. If there is an Elasticsearch bug here, I will reopen the issue.\r\n\r\nBy the way, I think that you should try to use the entire `jvm.options` file that we ship with, aside from the heap settings.",
"created_at": "2017-08-14T13:34:31Z"
},
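For reference, a minimal sketch of what the reporter's `jvm.options` could contain with the suggestion above applied; the 4 GB heap is taken from the logs earlier in the thread, and the exact layout of the file is an assumption; only the `-Xss1m` line is the actual recommendation:

```
# heap settings already used by the reporter
-Xms4g
-Xmx4g
# thread stack size suggested in the comment above
-Xss1m
```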
{
"body": "Thanks for taking the time to look at this and for the suggestions on improving the Docker setup! I plan to start using the official images but haven't gotten there yet,\r\n\r\nI haven't been able to reproduce this error from scratch again but if I re-start the image with the same data directory as when the bug was reported, the issue happens every time.\r\n\r\nI've tried adding the `-Xss1m` option to the JVM options file but the problem remains the same. Also tried changing the image so that the same `jvm.options` file is used as the one shipped with elasticsearch is used but the problem remains the same.\r\n\r\nWhen examining the logs I could see that during the first startup after changing to use the `jvm.options` shipped with ES the log file the were errors stating `failed to list shard for shard_started on node` that seems to be caused by a missing file (`/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st`).\r\n\r\nCould it be that the ES data directory has become corrupt for some reason (i.e. limited resourced/out of disk/sudden restart) and that the node won't start up correctly after that?\r\n\r\nAnyway, loosing existing data and having to re-read everything to a fresh data directory is a solution that works for me in case this would happen again. However, if the problem is caused by the data dir becoming corrupt (for whatever reason) I think it would be better if ES either didn't start up at all or tried to recover the data (if possible). The situation here seems to be that ES starts up but fails when the first request comes in.\r\n\r\nHere are logs for the scenario described above:\r\n```\r\n[2017-08-14T14:01:34,028][INFO ][o.e.n.Node ] [] initializing ...\r\n[2017-08-14T14:01:34,134][INFO ][o.e.e.NodeEnvironment ] [dvphuFi] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [14.4gb], net total_space [31.3gb], types [ext4]\r\n[2017-08-14T14:01:34,135][INFO ][o.e.e.NodeEnvironment ] [dvphuFi] heap size [3.9gb], compressed ordinary object pointers [true]\r\n[2017-08-14T14:01:34,457][INFO ][o.e.n.Node ] node name [dvphuFi] derived from node ID [dvphuFiRT8aEJicgCiXprg]; set [node.name] to override\r\n[2017-08-14T14:01:34,458][INFO ][o.e.n.Node ] version[6.0.0-beta1], pid[18], build[896afa4/2017-08-03T23:14:26.258Z], OS[Linux/4.4.0-91-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]\r\n[2017-08-14T14:01:34,458][INFO ][o.e.n.Node ] JVM arguments [-Xms4g, -Xmx4g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Dlog4j2.disable.jmx=true, -Des.path.home=/elasticsearch, -Des.path.conf=/elasticsearch/config]\r\n[2017-08-14T14:01:34,458][WARN ][o.e.n.Node ] version [6.0.0-beta1] is a pre-release version of Elasticsearch and is not suitable for production\r\n[2017-08-14T14:01:35,147][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [aggs-matrix-stats]\r\n[2017-08-14T14:01:35,147][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [analysis-common]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module 
[ingest-common]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-expression]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-mustache]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [lang-painless]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [parent-join]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [percolator]\r\n[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [reindex]\r\n[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [repository-url]\r\n[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [transport-netty4]\r\n[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService ] [dvphuFi] loaded module [tribe]\r\n[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService ] [dvphuFi] no plugins loaded\r\n[2017-08-14T14:01:36,291][INFO ][o.e.d.DiscoveryModule ] [dvphuFi] using discovery type [zen]\r\n[2017-08-14T14:01:37,208][INFO ][o.e.n.Node ] initialized\r\n[2017-08-14T14:01:37,208][INFO ][o.e.n.Node ] [dvphuFi] starting ...\r\n[2017-08-14T14:01:37,340][INFO ][o.e.t.TransportService ] [dvphuFi] publish_address {172.18.0.12:9300}, bound_addresses {0.0.0.0:9300}\r\n[2017-08-14T14:01:37,348][INFO ][o.e.b.BootstrapChecks ] [dvphuFi] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks\r\n[2017-08-14T14:01:40,418][INFO ][o.e.c.s.MasterService ] [dvphuFi] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300}\r\n[2017-08-14T14:01:40,423][INFO ][o.e.c.s.ClusterApplierService] [dvphuFi] new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300}, reason: apply cluster state (from master [master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])\r\n[2017-08-14T14:01:40,458][INFO ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] publish_address {172.18.0.12:9200}, bound_addresses {0.0.0.0:9200}\r\n[2017-08-14T14:01:40,459][INFO ][o.e.n.Node ] [dvphuFi] started\r\n[2017-08-14T14:01:41,760][WARN ][o.e.g.GatewayAllocator$InternalPrimaryShardAllocator] [dvphuFi] [logstash-logs-2017.08.03][4]: failed to list shard for shard_started on node [dvphuFiRT8aEJicgCiXprg]\r\norg.elasticsearch.action.FailedNodeException: Failed node [dvphuFiRT8aEJicgCiXprg]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:239) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$200(TransportNodesAction.java:153) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:211) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1060) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1164) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat 
org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1142) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$7.onFailure(TransportService.java:661) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:623) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: org.elasticsearch.transport.RemoteTransportException: [dvphuFi][172.18.0.12:9300][internal:gateway/local/started_shards[n]]\r\nCaused by: org.elasticsearch.ElasticsearchException: failed to load started shards\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:171) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\t... 
3 more\r\nCaused by: org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:165, legacy:false, file:/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st]\r\n\tat org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:150) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:334) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\t... 
3 more\r\nCaused by: java.io.IOException: failed to read [id:165, legacy:false, file:/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:327) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\t... 
3 more\r\nCaused by: java.nio.file.NoSuchFileException: /data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st\r\n\tat sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:?]\r\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]\r\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]\r\n\tat sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]\r\n\tat java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_144]\r\n\tat java.nio.file.Files.newByteChannel(Files.java:407) ~[?:1.8.0_144]\r\n\tat org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77) ~[lucene-core-7.0.0-snapshot-00142c9.jar:7.0.0-snapshot-00142c9 00142c921322a92de5007be2a114893aaa072498 - jpountz - 2017-07-11 09:24:13]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:187) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:322) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\t... 
3 more\r\n[2017-08-14T14:01:42,027][INFO ][o.e.g.GatewayService ] [dvphuFi] recovered [125] indices into cluster_state\r\n[2017-08-14T14:01:53,844][INFO ][o.e.c.r.a.AllocationService] [dvphuFi] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-logs-2017.08.09][0], [logstash-logs-2017.08.09][3], [.kibana][0], [logstash-logs-2017.08.09][1]] ...]).\r\n[2017-08-14T14:02:07,612][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer\r\n\tat org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:81)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n[2017-08-14T14:02:07,628][WARN ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] caught exception while handling client http traffic, closing connection [id: 0x17bb1be3, L:/127.0.0.1:9200 - R:/127.0.0.1:56592]\r\njava.lang.StackOverflowError: null\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRefCounted.incRef(AbstractRefCounted.java:41) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.store.Store.incRef(Store.java:366) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.engine.Engine.acquireSearcher(Engine.java:504) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1111) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:572) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.search.SearchService.canMatch(SearchService.java:905) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:431) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:428) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:644) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.access$000(TransportService.java:74) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:137) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat 
org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:592) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:512) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:552) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService.sendCanMatch(SearchTransportService.java:114) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.executePhaseOnShard(CanMatchPreFilterSearchPhase.java:68) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.performPhaseOnShard(InitialSearchPhase.java:160) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:149) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:207) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.onShardResult(InitialSearchPhase.java:190) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase$1.innerOnResponse(InitialSearchPhase.java:164) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchActionListener.onResponse(SearchActionListener.java:45) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchActionListener.onResponse(SearchActionListener.java:29) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1053) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1127) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1117) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1106) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:60) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:108) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:432) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:428) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) 
~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:644) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.access$000(TransportService.java:74) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:137) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:592) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:512) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:552) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.SearchTransportService.sendCanMatch(SearchTransportService.java:114) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.executePhaseOnShard(CanMatchPreFilterSearchPhase.java:68) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.performPhaseOnShard(InitialSearchPhase.java:160) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:149) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:207) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.onShardResult(InitialSearchPhase.java:190) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase$1.innerOnResponse(InitialSearchPhase.java:164) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n```\r\n\r\n\r\n",
"created_at": "2017-08-14T14:53:06Z"
},
{
"body": "Can you send me the data directory? (We can arrange a method to share privately if necessary).",
"created_at": "2017-08-14T15:04:04Z"
},
{
"body": "Also, can you share the output: `cat /proc/1/limits | grep \"Max stack size\"` or `ulimit -s` (from the container)?",
"created_at": "2017-08-14T15:06:08Z"
},
{
"body": "Thanks, I am unfortunately not able to give you access to the data directory as it contains logs files with sensible data. I will try to reproduce this with other data and if I succeed I will definitely share that dir with you.\r\n\r\nHere is the output from `cat /proc/1/limits | grep \"Max stack size\"`:\r\n\r\n```\r\nroot@79b8a65639cc:/elasticsearch# cat /proc/1/limits | grep \"Max stack size\"\r\nMax stack size 8388608 unlimited bytes\r\n```\r\n\r\n`ulimit -s`:\r\n\r\n```\r\nroot@79b8a65639cc:/elasticsearch# hal@hal-vm:~/development/EGMSlurper$ ulimit -s\r\n8192\r\n```\r\n",
"created_at": "2017-08-15T08:11:34Z"
},
{
"body": "Okay, those look reasonable.\r\n\r\nI understand about the data, so a reproduction will certainly help to get to the bottom of this one. I'm going to reopen this issue.\r\n\r\nMeanwhile, I've marked you as eligible for the [Pioneer Program](https://www.elastic.co/blog/elastic-pioneer-program-6-0).",
"created_at": "2017-08-15T16:41:13Z"
},
{
"body": "Thanks for the report @algestam. I have found the problem and opened #26484.",
"created_at": "2017-09-03T20:19:28Z"
},
{
"body": "Good news :) Thanks @jasontedor!",
"created_at": "2017-09-03T20:45:47Z"
}
],
"number": 26198,
"title": "Dockerized Elasticsearch instance crashes when receiving request"
} | {
"body": "If the query coordinating node is also a data node that holds all the shards for a search request, we can end up recursing through the can match phase (because we send a local request and on response in the listener move to the next shard and do this again, without ever having returned from previous shards). This recursion can lead to stack overflow for even a reasonable number of indices (daily indices over a sixty days with five shards per day is enough to trigger the stack overflow). Moreover, all this execution would be happening on a network thread (the thread that initially received the query). With this commit, we allow search phases to override max concurrent requests. This allows the can match phase to avoid recursing through the shards towards a stack overflow.\r\n\r\nCloses #26198\r\n",
"number": 26484,
"review_comments": [
{
"body": "why do we need to start a node? can't we use an existing node and use it's _name for allocation filtering?",
"created_at": "2017-09-04T09:46:04Z"
},
{
"body": "nit - this isn't really a can match test but more an \"can we run with lot's of shards on nodes\" tests. I think it's good to have but maybe we can fold it into one of the generic search IT suites? maybe `SimpleSearchIT` (although nothing is simple in life ;)",
"created_at": "2017-09-04T09:49:24Z"
},
{
"body": "this is worrisome... how long does this test run normally? should we fallback to mocking?",
"created_at": "2017-09-04T09:54:47Z"
},
{
"body": "This test is removed, I am now using mocking.",
"created_at": "2017-09-13T02:30:39Z"
},
{
"body": "can we maybe not warp on every parameter please?",
"created_at": "2017-09-13T07:46:19Z"
},
{
"body": "can we leave a comment why we do this here?",
"created_at": "2017-09-13T07:47:10Z"
}
],
"title": "Let search phases override max concurrent requests"
} | {
"commits": [
{
"message": "Fork can match requests to the search thread pool\n\nIf the query coordinating node is also a data node that holds all the\nshards for a search request, we can end up recursing through the can\nmatch phase (because we send a local request and on response in the\nlistener move to the next shard and do this again, without ever having\nreturned from previous shards). This recursion can lead to stack\noverflow for even a reasonable number of indices (daily indices over a\nsixty days with five shards per day is enough to trigger the stack\noverflow). Moreover, all this execution would be happening on a network\nthread (the thread that initially received the query). With this commit,\nwe fork can match requests to the search thread pool to prevent this."
},
{
"message": "Increase ensure green timeout"
},
{
"message": "Increase timeout"
},
{
"message": "Merge remote-tracking branch 'origin/master' into can-match-stack-overflow\n\n* origin/master: (59 commits)\n Fix Lucene version of 5.6.1.\n Remove azure deprecated settings (#26099)\n Handle the 5.6.0 release\n Allow plugins to validate cluster-state on join (#26595)\n Remove index mapper dynamic settings (#25734)\n update AWS SDK for ECS Task IAM support in discovery-ec2 (#26479)\n Azure repository: Accelerate the listing of files (used in delete snapshot) (#25710)\n Build: Remove norelease from forbidden patterns (#26592)\n Fix reference to painless inside expression engine (#26528)\n Build: Move javadoc linking to root build.gradle (#26529)\n Test: Remove leftover static bwc test case (#26584)\n Docs: Remove remaining references to file and native scripts (#26580)\n Snapshot fallback should consider build.snapshot\n #26496: Set the correct bwc version after backport to 6.x\n Fix the MapperFieldType.rangeQuery API. (#26552)\n Deduplicate `_field_names`. (#26550)\n [Docs] Update method setSource(byte[] source) (#26561)\n [Docs] Fix typo in javadocs (#26556)\n Allow multiple digits in Vagrant 2.x minor versions\n Support Vagrant 2.x\n ..."
},
{
"message": "Iteration"
},
{
"message": "Cleanup"
},
{
"message": "Iteration"
}
],
"files": [
{
"diff": "@@ -76,8 +76,8 @@ protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportS\n Executor executor, SearchRequest request,\n ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,\n TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,\n- SearchTask task, SearchPhaseResults<Result> resultConsumer) {\n- super(name, request, shardsIts, logger);\n+ SearchTask task, SearchPhaseResults<Result> resultConsumer, int maxConcurrentShardRequests) {\n+ super(name, request, shardsIts, logger, maxConcurrentShardRequests);\n this.timeProvider = timeProvider;\n this.logger = logger;\n this.searchTransportService = searchTransportService;",
"filename": "core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java",
"status": "modified"
},
{
"diff": "@@ -26,10 +26,6 @@\n import org.elasticsearch.search.internal.AliasFilter;\n import org.elasticsearch.transport.Transport;\n \n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.Iterator;\n-import java.util.List;\n import java.util.Map;\n import java.util.concurrent.Executor;\n import java.util.function.BiFunction;\n@@ -55,9 +51,12 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction<Searc\n ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,\n TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,\n SearchTask task, Function<GroupShardsIterator<SearchShardIterator>, SearchPhase> phaseFactory) {\n+ /*\n+ * We set max concurrent shard requests to the number of shards to otherwise avoid deep recursing that would occur if the local node\n+ * is the coordinating node for the query, holds all the shards for the request, and there are a lot of shards.\n+ */\n super(\"can_match\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request,\n- listener,\n- shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size()));\n+ listener, shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size());\n this.phaseFactory = phaseFactory;\n this.shardsIts = shardsIts;\n }",
"filename": "core/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java",
"status": "modified"
},
{
"diff": "@@ -52,7 +52,8 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\n private final AtomicInteger shardExecutionIndex = new AtomicInteger(0);\n private final int maxConcurrentShardRequests;\n \n- InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator<SearchShardIterator> shardsIts, Logger logger) {\n+ InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator<SearchShardIterator> shardsIts, Logger logger,\n+ int maxConcurrentShardRequests) {\n super(name);\n this.request = request;\n this.shardsIts = shardsIts;\n@@ -62,7 +63,7 @@ abstract class InitialSearchPhase<FirstResult extends SearchPhaseResult> extends\n // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result\n // we process hence we add one for the non active partition here.\n this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty();\n- maxConcurrentShardRequests = Math.min(request.getMaxConcurrentShardRequests(), shardsIts.size());\n+ this.maxConcurrentShardRequests = Math.min(maxConcurrentShardRequests, shardsIts.size());\n }\n \n private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId,",
"filename": "core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,8 @@ final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction\n final GroupShardsIterator<SearchShardIterator> shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider,\n final long clusterStateVersion, final SearchTask task) {\n super(\"dfs\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener,\n- shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size()));\n+ shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size()),\n+ request.getMaxConcurrentShardRequests());\n this.searchPhaseController = searchPhaseController;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,8 @@ final class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<Se\n final GroupShardsIterator<SearchShardIterator> shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider,\n long clusterStateVersion, SearchTask task) {\n super(\"query\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener,\n- shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size()));\n+ shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size()),\n+ request.getMaxConcurrentShardRequests());\n this.searchPhaseController = searchPhaseController;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java",
"status": "modified"
},
{
"diff": "@@ -47,9 +47,9 @@\n import org.elasticsearch.tasks.Task;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.RemoteClusterService;\n+import org.elasticsearch.transport.TaskAwareTransportRequestHandler;\n import org.elasticsearch.transport.Transport;\n import org.elasticsearch.transport.TransportActionProxy;\n-import org.elasticsearch.transport.TaskAwareTransportRequestHandler;\n import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportRequest;\n@@ -59,7 +59,6 @@\n \n import java.io.IOException;\n import java.io.UncheckedIOException;\n-import java.util.Collections;\n import java.util.HashMap;\n import java.util.Map;\n import java.util.function.BiFunction;\n@@ -447,7 +446,7 @@ public void messageReceived(ShardFetchSearchRequest request, TransportChannel ch\n });\n TransportActionProxy.registerProxyAction(transportService, FETCH_ID_ACTION_NAME, FetchSearchResult::new);\n \n- // this is super cheap and should not hit thread-pool rejections\n+ // this is cheap, it does not fetch during the rewrite phase, so we can let it quickly execute on a networking thread\n transportService.registerRequestHandler(QUERY_CAN_MATCH_NAME, ThreadPool.Names.SAME, ShardSearchTransportRequest::new,\n new TaskAwareTransportRequestHandler<ShardSearchTransportRequest>() {\n @Override",
"filename": "core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java",
"status": "modified"
},
{
"diff": "@@ -60,11 +60,12 @@ private AbstractSearchAsyncAction<SearchPhaseResult> createAction(\n System::nanoTime);\n }\n \n+ final SearchRequest request = new SearchRequest();\n return new AbstractSearchAsyncAction<SearchPhaseResult>(\"test\", null, null, null,\n Collections.singletonMap(\"foo\", new AliasFilter(new MatchAllQueryBuilder())), Collections.singletonMap(\"foo\", 2.0f), null,\n- new SearchRequest(), null, new GroupShardsIterator<>(Collections.singletonList(\n+ request, null, new GroupShardsIterator<>(Collections.singletonList(\n new SearchShardIterator(null, null, Collections.emptyList(), null))), timeProvider, 0, null,\n- new InitialSearchPhase.ArraySearchPhaseResults<>(10)) {\n+ new InitialSearchPhase.ArraySearchPhaseResults<>(10), request.getMaxConcurrentShardRequests()) {\n @Override\n protected SearchPhase getNextPhase(final SearchPhaseResults<SearchPhaseResult> results, final SearchPhaseContext context) {\n return null;",
"filename": "core/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java",
"status": "modified"
},
{
"diff": "@@ -170,4 +170,61 @@ public void run() throws IOException {\n assertEquals(shard1, !result.get().get(0).skip());\n assertFalse(result.get().get(1).skip()); // never skip the failure\n }\n+\n+ /*\n+ * In cases that a query coordinating node held all the shards for a query, the can match phase would recurse and end in stack overflow\n+ * when subjected to max concurrent search requests. This test is a test for that situation.\n+ */\n+ public void testLotsOfShards() throws InterruptedException {\n+ final TransportSearchAction.SearchTimeProvider timeProvider =\n+ new TransportSearchAction.SearchTimeProvider(0, System.nanoTime(), System::nanoTime);\n+\n+ final Map<String, Transport.Connection> lookup = new ConcurrentHashMap<>();\n+ final DiscoveryNode primaryNode = new DiscoveryNode(\"node_1\", buildNewFakeTransportAddress(), Version.CURRENT);\n+ final DiscoveryNode replicaNode = new DiscoveryNode(\"node_2\", buildNewFakeTransportAddress(), Version.CURRENT);\n+ lookup.put(\"node1\", new SearchAsyncActionTests.MockConnection(primaryNode));\n+ lookup.put(\"node2\", new SearchAsyncActionTests.MockConnection(replicaNode));\n+\n+ final SearchTransportService searchTransportService =\n+ new SearchTransportService(Settings.builder().put(\"search.remote.connect\", false).build(), null, null) {\n+ @Override\n+ public void sendCanMatch(\n+ Transport.Connection connection,\n+ ShardSearchTransportRequest request,\n+ SearchTask task,\n+ ActionListener<CanMatchResponse> listener) {\n+ listener.onResponse(new CanMatchResponse(randomBoolean()));\n+ }\n+ };\n+\n+ final AtomicReference<GroupShardsIterator<SearchShardIterator>> result = new AtomicReference<>();\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ final OriginalIndices originalIndices = new OriginalIndices(new String[]{\"idx\"}, IndicesOptions.strictExpandOpenAndForbidClosed());\n+ final GroupShardsIterator<SearchShardIterator> shardsIter =\n+ SearchAsyncActionTests.getShardsIter(\"idx\", originalIndices, 2048, randomBoolean(), primaryNode, replicaNode);\n+ final CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase(\n+ logger,\n+ searchTransportService,\n+ (clusterAlias, node) -> lookup.get(node),\n+ Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY)),\n+ Collections.emptyMap(),\n+ EsExecutors.newDirectExecutorService(),\n+ new SearchRequest(),\n+ null,\n+ shardsIter,\n+ timeProvider,\n+ 0,\n+ null,\n+ (iter) -> new SearchPhase(\"test\") {\n+ @Override\n+ public void run() throws IOException {\n+ result.set(iter);\n+ latch.countDown();\n+ }});\n+\n+ canMatchPhase.start();\n+ latch.await();\n+\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -110,7 +110,8 @@ public void onFailure(Exception e) {\n new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0),\n 0,\n null,\n- new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) {\n+ new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()),\n+ request.getMaxConcurrentShardRequests()) {\n \n @Override\n protected void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard,\n@@ -199,7 +200,8 @@ public void onFailure(Exception e) {\n new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0),\n 0,\n null,\n- new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) {\n+ new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()),\n+ request.getMaxConcurrentShardRequests()) {\n \n @Override\n protected void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard,\n@@ -300,7 +302,8 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori\n new TransportSearchAction.SearchTimeProvider(0, 0, () -> 0),\n 0,\n null,\n- new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size())) {\n+ new InitialSearchPhase.ArraySearchPhaseResults<>(shardsIter.size()),\n+ request.getMaxConcurrentShardRequests()) {\n TestSearchResponse response = new TestSearchResponse();\n \n @Override",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java",
"status": "modified"
}
]
} |
{
"body": "I was just looking at a heap dump from a user where some threads we holding strong references to very large string builders. The problem seems to come from Log4j's ParameterizedMessage which uses a static `ThreadLocal<StringBuilder>` in order to reuse memory when building messages. Unfortunately, the string builders never decrease in size, so this thing can only grow over time given that we use fixed thread pools.",
"comments": [
{
"body": "I wonder if we should add `ThreadLocal` to the forbidden list in third-party-audit? This won't stop us from using libraries with it, but it would make us aware, and we could then document in the excludes list why it is ok for each dependency.",
"created_at": "2017-03-29T20:00:54Z"
},
{
"body": "+1 to auditing dependencies for threadlocals\r\nI also opened an issue against log4j, suggesting that this threadlocal be removed: https://issues.apache.org/jira/browse/LOG4J2-1858.",
"created_at": "2017-03-30T09:51:17Z"
},
{
"body": "i would love to solve this issue.",
"created_at": "2017-04-02T13:42:23Z"
},
{
"body": "@kumar1005 This is an upstream issue, so it would have to be addressed in Log4j2 directly.",
"created_at": "2017-04-03T09:40:29Z"
},
{
"body": "This has been fixed upstream and should be fixed in log4j 2.9.",
"created_at": "2017-04-18T13:52:18Z"
},
{
"body": "According to them, 2.9 is not going to happen anytime soon. Forking the fixed class does not seem easy either as it relies on several pkg-private classes, so I'm leaning towards waiting for 2.9 to be released since I don't have the cycles to do any better.",
"created_at": "2017-04-19T08:43:21Z"
}
],
"number": 23798,
"title": "Log4j's ParameterizedMessage has a static ThreadLocal<StringBuilder>"
} | {
"body": "This commit upgrades the Log4j dependency from version 2.8.2 to version 2.9.0.\r\n\r\nCloses #23798\r\n",
"number": 26450,
"review_comments": [],
"title": "Upgrade to Log4j 2.9.0"
} | {
"commits": [
{
"message": "Upgrade to Log4j 2.9.0\n\nThis commit upgrades the Log4j dependency from version 2.8.2 to version\n2.9.0."
},
{
"message": "Fix third party audit"
},
{
"message": "Fix Log4j :("
},
{
"message": "Merge branch 'master' into log4j-2.9.0\n\n* master:\n Allow abort of bulk items before processing (#26434)\n [Tests] Improve testing of FieldSortBuilder (#26437)\n Upgrade to lucene-7.0.0-snapshot-d94a5f0. (#26441)\n Implement adaptive replica selection (#26128)\n Build: Quiet bwc build output (#26430)\n Migrate Search requests to use Writeable reading strategies (#26428)\n Changed version from 7.0.0-alpha1 to 6.1.0 in the nested sorting serialization check.\n Remove dead path conf BWC code in build"
},
{
"message": "Clarify comment"
}
],
"files": [
{
"diff": "@@ -8,7 +8,7 @@ jts = 1.13\n jackson = 2.8.6\n snakeyaml = 1.15\n # When updating log4j, please update also docs/java-api/index.asciidoc\n-log4j = 2.8.2\n+log4j = 2.9.0\n slf4j = 1.6.2\n \n # when updating the JNA version, also update the version in buildSrc/build.gradle",
"filename": "buildSrc/version.properties",
"status": "modified"
},
{
"diff": "@@ -157,12 +157,11 @@ thirdPartyAudit.excludes = [\n 'com.fasterxml.jackson.databind.ObjectMapper',\n \n // from log4j\n- 'com.beust.jcommander.IStringConverter',\n- 'com.beust.jcommander.JCommander',\n 'com.conversantmedia.util.concurrent.DisruptorBlockingQueue',\n 'com.conversantmedia.util.concurrent.SpinPolicy',\n 'com.fasterxml.jackson.annotation.JsonInclude$Include',\n 'com.fasterxml.jackson.databind.DeserializationContext',\n+ 'com.fasterxml.jackson.databind.DeserializationFeature',\n 'com.fasterxml.jackson.databind.JsonMappingException',\n 'com.fasterxml.jackson.databind.JsonNode',\n 'com.fasterxml.jackson.databind.Module$SetupContext',\n@@ -203,11 +202,11 @@ thirdPartyAudit.excludes = [\n 'javax.jms.Connection',\n 'javax.jms.ConnectionFactory',\n 'javax.jms.Destination',\n+ 'javax.jms.JMSException',\n+ 'javax.jms.MapMessage',\n 'javax.jms.Message',\n 'javax.jms.MessageConsumer',\n- 'javax.jms.MessageListener',\n 'javax.jms.MessageProducer',\n- 'javax.jms.ObjectMessage',\n 'javax.jms.Session',\n 'javax.mail.Authenticator',\n 'javax.mail.Message$RecipientType',\n@@ -247,6 +246,7 @@ thirdPartyAudit.excludes = [\n 'org.osgi.framework.BundleEvent',\n 'org.osgi.framework.BundleReference',\n 'org.osgi.framework.FrameworkUtil',\n+ 'org.osgi.framework.ServiceRegistration',\n 'org.osgi.framework.SynchronousBundleListener',\n 'org.osgi.framework.wiring.BundleWire',\n 'org.osgi.framework.wiring.BundleWiring',",
"filename": "core/build.gradle",
"status": "modified"
},
{
"diff": "@@ -0,0 +1 @@\n+7e2f1637394eecdc3c8cd067b3f2cf4801b1bcf6\n\\ No newline at end of file",
"filename": "core/licenses/log4j-1.2-api-2.9.0.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+e0dcd508dfc4864a2f5a1963d6ffad170d970375\n\\ No newline at end of file",
"filename": "core/licenses/log4j-api-2.9.0.jar.sha1",
"status": "added"
},
{
"diff": "@@ -0,0 +1 @@\n+052f6548ae1688e126c29b5dc400929dc0128615\n\\ No newline at end of file",
"filename": "core/licenses/log4j-core-2.9.0.jar.sha1",
"status": "added"
},
{
"diff": "@@ -46,7 +46,12 @@ public static Logger getLogger(String prefix, String name) {\n }\n \n public static Logger getLogger(String prefix, Class<?> clazz) {\n- return getLogger(prefix, LogManager.getLogger(clazz));\n+ /*\n+ * Do not use LogManager#getLogger(Class) as this now uses Class#getCanonicalName under the hood; as this returns null for local and\n+ * anonymous classes, any place we create, for example, an abstract component defined as an anonymous class (e.g., in tests) will\n+ * result in a logger with a null name which will blow up in a lookup inside of Log4j.\n+ */\n+ return getLogger(prefix, LogManager.getLogger(clazz.getName()));\n }\n \n public static Logger getLogger(String prefix, Logger logger) {",
"filename": "core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java",
"status": "modified"
},
{
"diff": "@@ -83,7 +83,7 @@ You need to also include Log4j 2 dependencies:\n <dependency>\n <groupId>org.apache.logging.log4j</groupId>\n <artifactId>log4j-core</artifactId>\n- <version>2.8.2</version>\n+ <version>2.9.0</version>\n </dependency>\n --------------------------------------------------\n \n@@ -111,7 +111,7 @@ If you want to use another logger than Log4j 2, you can use http://www.slf4j.org\n <dependency>\n <groupId>org.apache.logging.log4j</groupId>\n <artifactId>log4j-to-slf4j</artifactId>\n- <version>2.8.2</version>\n+ <version>2.9.0</version>\n </dependency>\n <dependency>\n <groupId>org.slf4j</groupId>",
"filename": "docs/java-api/index.asciidoc",
"status": "modified"
}
]
} |
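
The issue above concerns Log4j's ParameterizedMessage caching a `StringBuilder` per thread that never shrinks, and the PR resolves it by upgrading to Log4j 2.9.0, where the upstream fix landed. The sketch below is a hypothetical, self-contained illustration of the pattern and of one way to bound it by trimming oversized builders after use; it is not Log4j source, and the `MAX_RETAINED_CHARS` cap is an assumed value.

```java
/**
 * Hypothetical sketch (not Log4j source) of the pattern described above: a static
 * ThreadLocal<StringBuilder> reused per thread. With fixed thread pools the builder
 * is never released and keeps the capacity of the largest message ever formatted.
 * The bounded variant replaces oversized builders after each use.
 */
public class ThreadLocalBuilderSketch {

    private static final int MAX_RETAINED_CHARS = 4 * 1024; // assumed cap, tune as needed

    private static final ThreadLocal<StringBuilder> BUILDER =
            ThreadLocal.withInitial(() -> new StringBuilder(256));

    static String format(String template, Object arg) {
        StringBuilder sb = BUILDER.get();
        sb.setLength(0);                       // reuses the buffer, but capacity never shrinks...
        sb.append(template).append(' ').append(arg);
        String result = sb.toString();
        if (sb.capacity() > MAX_RETAINED_CHARS) {
            // ...unless we explicitly drop the builder after an unusually large message
            BUILDER.set(new StringBuilder(256));
        }
        return result;
    }

    public static void main(String[] args) {
        String hugePayload = new String(new char[100_000]).replace('\0', 'x');
        System.out.println(format("user logged in:", "alice"));
        format("huge payload:", hugePayload); // would otherwise pin ~100k chars per pooled thread
        System.out.println("formatted a huge message without retaining its buffer");
    }
}
```
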
{
"body": "It currently depends on the default locale, which makes it trappy: behaviour cannot be reproduced on machines that have a different locale.",
"comments": [
{
"body": "@andy-elastic I assigned this one to you as I thought you would be interested. For the record, there is no particular urgency.",
"created_at": "2017-07-07T08:11:44Z"
},
{
"body": "I did wonder where else this would appear after seeing (or rather, being told by the build) we disallow String.format with the default locale\r\n\r\nThis is where the default locale gets used, right?https://github.com/elastic/elasticsearch/blob/b24326271e6778d5d595005e7e1e4258e7e7ee24/plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java#L389",
"created_at": "2017-07-07T16:26:53Z"
},
{
"body": "@andy-elastic Correct.",
"created_at": "2017-07-10T09:00:56Z"
},
{
"body": "Merged in #26413 ",
"created_at": "2017-08-29T17:48:55Z"
}
],
"number": 25587,
"title": "ICUCollationKeywordFieldMapper should use Locale.ROOT as a default locale"
} | {
"body": "Calls to Collator.getInstance without arguments returns a\r\ncollator that uses the system's default locale, which we don't\r\nwant because it makes behavior harder to reproduce. Change it\r\nto always use the root locale instead.\r\n\r\nFor #25587\r\n\r\nThe original issue just mentioned ICUCollationKeywordFieldMapper, I went ahead and fixed the usage in IcuCollationTokenFilterFactory as well. I believe that's all the usages of this API.\r\n\r\nI considered adding this method to the forbidden APIs, but after looking at what's already listed I wasn't sure if this was important/common enough to make the cut. Will gladly add that in if that's what we want.",
"number": 26413,
"review_comments": [],
"title": "ICU plugin: use root locale by default for collators"
} | {
"commits": [
{
"message": "ICU plugin: use root locale by default for collators\n\nCalls to Collator.getInstance without arguments returns a\ncollator that uses the system's default locale, which we don't\nwant because it makes behavior harder to reproduce. Change it\nto always use the root locale instead.\n\nFor #25587"
}
],
"files": [
{
"diff": "@@ -84,7 +84,7 @@ public IcuCollationTokenFilterFactory(IndexSettings indexSettings, Environment e\n }\n collator = Collator.getInstance(locale);\n } else {\n- collator = Collator.getInstance();\n+ collator = Collator.getInstance(ULocale.ROOT);\n }\n }\n ",
"filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuCollationTokenFilterFactory.java",
"status": "modified"
},
{
"diff": "@@ -389,7 +389,7 @@ public Collator buildCollator() {\n }\n collator = Collator.getInstance(locale);\n } else {\n- collator = Collator.getInstance();\n+ collator = Collator.getInstance(ULocale.ROOT);\n }\n }\n ",
"filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,19 @@\n \n // Tests borrowed from Solr's Icu collation key filter factory test.\n public class SimpleIcuCollationTokenFilterTests extends ESTestCase {\n+ /*\n+ * Tests usage where we do not provide a language or locale\n+ */\n+ public void testDefaultUsage() throws Exception {\n+ Settings settings = Settings.builder()\n+ .put(\"index.analysis.filter.myCollator.type\", \"icu_collation\")\n+ .put(\"index.analysis.filter.myCollator.strength\", \"primary\")\n+ .build();\n+ TestAnalysis analysis = createTestAnalysis(new Index(\"test\", \"_na_\"), settings, new AnalysisICUPlugin());\n+\n+ TokenFilterFactory filterFactory = analysis.tokenFilter.get(\"myCollator\");\n+ assertCollatesToSame(filterFactory, \"FOO\", \"foo\");\n+ }\n /*\n * Turkish has some funny casing.\n * This test shows how you can solve this kind of thing easily with collation.",
"filename": "plugins/analysis-icu/src/test/java/org/elasticsearch/index/analysis/SimpleIcuCollationTokenFilterTests.java",
"status": "modified"
},
{
"diff": "@@ -78,7 +78,7 @@ public void testTermsQuery() {\n ft.setName(\"field\");\n ft.setIndexOptions(IndexOptions.DOCS);\n \n- Collator collator = Collator.getInstance().freeze();\n+ Collator collator = Collator.getInstance(ULocale.ROOT).freeze();\n ((CollationFieldType) ft).setCollator(collator);\n \n RawCollationKey fooKey = collator.getRawCollationKey(\"foo\", null);\n@@ -126,7 +126,7 @@ public void testRangeQuery() {\n ft.setName(\"field\");\n ft.setIndexOptions(IndexOptions.DOCS);\n \n- Collator collator = Collator.getInstance().freeze();\n+ Collator collator = Collator.getInstance(ULocale.ROOT).freeze();\n ((CollationFieldType) ft).setCollator(collator);\n \n RawCollationKey aKey = collator.getRawCollationKey(\"a\", null);",
"filename": "plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/CollationFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -82,7 +82,7 @@ public void testDefaults() throws Exception {\n IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n assertEquals(2, fields.length);\n \n- Collator collator = Collator.getInstance();\n+ Collator collator = Collator.getInstance(ULocale.ROOT);\n RawCollationKey key = collator.getRawCollationKey(\"1234\", null);\n BytesRef expected = new BytesRef(key.bytes, 0, key.size);\n \n@@ -126,7 +126,7 @@ public void testBackCompat() throws Exception {\n IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n assertEquals(2, fields.length);\n \n- Collator collator = Collator.getInstance();\n+ Collator collator = Collator.getInstance(ULocale.ROOT);\n RawCollationKey key = collator.getRawCollationKey(\"1234\", null);\n BytesRef expected = new BytesRef(key.bytes, 0, key.size);\n \n@@ -189,7 +189,7 @@ public void testNullValue() throws IOException {\n .bytes(),\n XContentType.JSON));\n \n- Collator collator = Collator.getInstance();\n+ Collator collator = Collator.getInstance(ULocale.ROOT);\n RawCollationKey key = collator.getRawCollationKey(\"1234\", null);\n BytesRef expected = new BytesRef(key.bytes, 0, key.size);\n \n@@ -284,7 +284,7 @@ public void testMultipleValues() throws IOException {\n IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n assertEquals(4, fields.length);\n \n- Collator collator = Collator.getInstance();\n+ Collator collator = Collator.getInstance(ULocale.ROOT);\n RawCollationKey key = collator.getRawCollationKey(\"1234\", null);\n BytesRef expected = new BytesRef(key.bytes, 0, key.size);\n \n@@ -305,7 +305,7 @@ public void testMultipleValues() throws IOException {\n assertThat(fieldType.indexOptions(), equalTo(IndexOptions.NONE));\n assertEquals(DocValuesType.SORTED_SET, fieldType.docValuesType());\n \n- collator = Collator.getInstance();\n+ collator = Collator.getInstance(ULocale.ROOT);\n key = collator.getRawCollationKey(\"5678\", null);\n expected = new BytesRef(key.bytes, 0, key.size);\n ",
"filename": "plugins/analysis-icu/src/test/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapperTests.java",
"status": "modified"
}
]
} |
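
The diffs above swap `Collator.getInstance()` for `Collator.getInstance(ULocale.ROOT)`. As a standalone illustration of why that matters, the sketch below (assuming ICU4J is on the classpath; the strings are arbitrary examples) compares the same pair of strings with a default-locale collator and a root-locale collator; only the latter is guaranteed to behave identically on every machine regardless of the JVM's default locale.

```java
import com.ibm.icu.text.Collator;
import com.ibm.icu.util.ULocale;

/**
 * Minimal sketch of the reproducibility problem fixed above. Collator.getInstance()
 * picks up the JVM's default locale, so sort order (and therefore the collation keys
 * the mapper indexes) can differ between machines. Passing ULocale.ROOT pins the
 * behaviour to the locale-neutral root rules.
 */
public class RootCollatorSketch {
    public static void main(String[] args) {
        Collator defaultLocaleCollator = Collator.getInstance();       // depends on -Duser.language/-Duser.country
        Collator rootCollator = Collator.getInstance(ULocale.ROOT);    // reproducible everywhere

        // "å" sorts near "a" under the root rules, but after "z" under e.g. Danish rules,
        // so the first comparison depends on which machine runs this code.
        System.out.println("default locale: " + defaultLocaleCollator.compare("å", "z"));
        System.out.println("root locale:    " + rootCollator.compare("å", "z"));
        // Only the root-locale result is guaranteed to be the same on every machine.
    }
}
```
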
{
"body": "Related to; https://github.com/elastic/elasticsearch/pull/26329\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): Version: 6.0.0-beta2-SNAPSHOT, Build: d6a7e25/2017-08-28T13:34:58.542Z, JVM: 1.8.0_141\r\n\r\n**Plugins installed**: [x-pack]\r\n\r\n**JVM version** (`java -version`): openjdk version \"1.8.0_141\"\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux packer-virtualbox-iso-1501424719 4.4.0-87-generic #110~14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nAfter upgrading from 5.6.0-SNAPSHOT (with x-pack) to 6.0.0-beta2-SNAPSHOT (both from this morning's unified release builds) elasticsearch won't start because it can't read the /etc/elasticsearch/elasticsearch.keystore file\r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1. Install 5.6.0 elasticsearch and x-pack (links below)\r\n 1. I'm running in production mode (network.host: 0.0.0.0) and have changed the default password\r\n 1. `service elasticsearch stop`\r\n 1. `/usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack`\r\n 1. I backup my elasticsearch.yml before upgrading, taking the new config, then merge my changes back afterwards\r\n 1. `dpkg -i --force-confnew ./elasticsearch-6.0.0-beta2-SNAPSHOT.deb`\r\n 1. `/usr/share/elasticsearch/bin/elasticsearch-plugin install -b file:///vagrant/qa/x-pack-6.0.0-beta2-SNAPSHOT.zip`\r\n 1. `service elasticsearch start` see log below\r\n 1. `-rw------- 1 root root 416 Aug 28 19:09 elasticsearch.keystore`\r\n\r\nI \"fixed\" it with;\r\n`chown root:elasticsearch /etc/elasticsearch/elasticsearch.keystore`\r\n`chmod 660 /etc/elasticsearch/elasticsearch.keystore`\r\nNow Elasticsearch starts\r\n\r\nhttps://snapshots.elastic.co/downloads/elasticsearch/elasticsearch-5.6.0-SNAPSHOT.deb\r\nhttps://snapshots.elastic.co/downloads/packs/x-pack/x-pack-5.6.0-SNAPSHOT.zip\r\nhttps://snapshots.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0-beta2-SNAPSHOT.deb\r\nhttps://snapshots.elastic.co/downloads/packs/x-pack/x-pack-6.0.0-beta2-SNAPSHOT.zip\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n+ service elasticsearch start\r\n * Starting Elasticsearch Server\r\nException in thread \"main\" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.AccessDeniedException: /etc/elasticsearch/elasticsearch.keystore\r\nLikely root cause: java.nio.file.AccessDeniedException: /etc/elasticsearch/elasticsearch.keystore\r\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\r\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\r\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\r\n at java.nio.file.Files.newByteChannel(Files.java:361)\r\n at java.nio.file.Files.newByteChannel(Files.java:407)\r\n at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)\r\n at org.elasticsearch.common.settings.KeyStoreWrapper.load(KeyStoreWrapper.java:199)\r\n at org.elasticsearch.bootstrap.Bootstrap.loadSecureSettings(Bootstrap.java:225)\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:287)\r\n at 
org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:130)\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:121)\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:69)\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)\r\n at org.elasticsearch.cli.Command.main(Command.java:90)\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85)\r\nRefer to the log for complete error details.\r\n ...fail!\r\n```\r\n\r\n",
"comments": [],
"number": 26410,
"title": "elasticsearch can't read keystore file (.deb package install)"
} | {
"body": "When creating the keystore explicitly (from executing elasticsearch-keystore create) or implicitly (for plugins that require the keystore to be created on install) on an Elasticsearch package installation, we are running as the root user. This leaves /etc/elasticsearch/elasticsearch.keystore having the wrong ownership (root:root) so that the elasticsearch user can not read the keystore on startup. This commit adds setgid to /etc/elasticsearch on package installation so that when executing this directory (as we would when creating the keystore), we will end up with the correct ownership (root:elasticsearch). Additionally, we set the permissions on the keystore to be 660 so that the elasticsearch user via its group can read this file on startup.\r\n\r\nCloses #26410\r\n",
"number": 26412,
"review_comments": [],
"title": "setgid on /etc/elasticearch on package install"
} | {
"commits": [
{
"message": "setgid on /etc/elasticearch on package install\n\nWhen creating the keystore explicitly (from executing\nelasticsearch-keystore create) or implicitly (for plugins that require\nthe keystore to be created on install) on an Elasticsearch package\ninstallation, we are running as the root user. This leaves\n/etc/elasticsearch/elasticsearch.keystore having the wrong ownership\n(root:root) so that the elasticsearch user can not read the keystore on\nstartup. This commit adds setgid to /etc/elasticsearch on package\ninstallation so that when executing this directory (as we would when\ncreating the keystore), we will end up with the correct ownership\n(root:elasticsearch). Additionally, we set the permissions on the\nkeystore to be 660 so that the elasticsearch user via its group can read\nthis file on startup."
}
],
"files": [
{
"diff": "@@ -330,7 +330,7 @@ public void save(Path configDir) throws Exception {\n PosixFileAttributeView attrs = Files.getFileAttributeView(keystoreFile, PosixFileAttributeView.class);\n if (attrs != null) {\n // don't rely on umask: ensure the keystore has minimal permissions\n- attrs.setPermissions(PosixFilePermissions.fromString(\"rw-------\"));\n+ attrs.setPermissions(PosixFilePermissions.fromString(\"rw-rw----\"));\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/settings/KeyStoreWrapper.java",
"status": "modified"
},
{
"diff": "@@ -100,6 +100,7 @@ fi\n chown -R elasticsearch:elasticsearch /var/lib/elasticsearch\n chown -R elasticsearch:elasticsearch /var/log/elasticsearch\n chown -R root:elasticsearch /etc/elasticsearch\n+chmod g+s /etc/elasticsearch\n chmod 0750 /etc/elasticsearch\n \n if [ -f /etc/default/elasticsearch ]; then",
"filename": "distribution/src/main/packaging/scripts/postinst",
"status": "modified"
},
{
"diff": "@@ -94,7 +94,7 @@ verify_package_installation() {\n assert_file \"$ESHOME/bin/elasticsearch-plugin\" f root root 755\n assert_file \"$ESHOME/bin/elasticsearch-translog\" f root root 755\n assert_file \"$ESHOME/lib\" d root root 755\n- assert_file \"$ESCONFIG\" d root elasticsearch 750\n+ assert_file \"$ESCONFIG\" d root elasticsearch 2750\n assert_file \"$ESCONFIG/elasticsearch.yml\" f root elasticsearch 660\n assert_file \"$ESCONFIG/jvm.options\" f root elasticsearch 660\n assert_file \"$ESCONFIG/log4j2.properties\" f root elasticsearch 660",
"filename": "qa/vagrant/src/test/resources/packaging/utils/packages.bash",
"status": "modified"
}
]
} |
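
The KeyStoreWrapper diff above relaxes the keystore permissions from `rw-------` to `rw-rw----` so that members of the owning group (the `elasticsearch` group after the packaging change) can read a root-owned file. The snippet below is a small standalone sketch of that NIO call against a throwaway file; the `/tmp/example.keystore` path is purely illustrative, and the group-ownership side (the setgid bit on the config directory) is handled by the packaging scripts, not by this code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFilePermissions;

/**
 * Sketch of tightening a sensitive file to owner+group read/write ("rw-rw----",
 * i.e. 660) so a service account in the owning group can still read it, mirroring
 * the permission string used in the diff above.
 */
public class GroupReadableFileSketch {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("/tmp/example.keystore");   // hypothetical scratch file, not a real keystore
        if (!Files.exists(file)) {
            Files.createFile(file);
        }
        PosixFileAttributeView attrs = Files.getFileAttributeView(file, PosixFileAttributeView.class);
        if (attrs != null) {                               // null on non-POSIX filesystems (e.g. Windows)
            // don't rely on umask: set the permissions explicitly
            attrs.setPermissions(PosixFilePermissions.fromString("rw-rw----"));
            System.out.println("permissions: " + Files.getPosixFilePermissions(file));
        }
    }
}
```
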
{
"body": "With the following code it is possible to reproduce our issue of highlighting_query with nested query and wildcards in Elasticsearch 5.5.0. In Elasticsearch 2.4.4 we can’t reproduce this issue. The result returns only highlight: “snippet” and not \"du.content.content4b.contenttext\"\r\n```\r\nPUT testcase\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"number_of_replicas\": 0\r\n }\r\n },\r\n \"mappings\": {\r\n \"searchdoc\": {\r\n \"dynamic_templates\": [\r\n {\r\n \"nested_du_content\": {\r\n \"match\": \"content4*\",\r\n \"mapping\": {\r\n \"type\": \"object\",\r\n \"doc_values\": false,\r\n \"properties\": {\r\n \"contenttext\": {\r\n \"type\": \"string\",\r\n \"copy_to\": [\r\n \"du.content.contenttext\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n ],\r\n \"properties\": {\r\n \"snippet\": {\r\n \"type\": \"string\",\r\n \"doc_values\": false\r\n },\r\n \"du\": {\r\n \"dynamic\": \"strict\",\r\n \"type\": \"nested\",\r\n \"include_in_root\": \"true\",\r\n \"properties\": {\r\n \"content\": {\r\n \"type\": \"nested\",\r\n \"include_in_parent\": \"true\",\r\n \"dynamic\": \"true\",\r\n \"properties\": {\r\n \"contenttext\": {\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST testcase/searchdoc\r\n{\r\n \"snippet\": [\r\n \"Bensheim - Auerbach. Das Fürstenlager, ein Landschaftsgarten.\",\r\n \"Schwanenteich, am Ufer schwarzer Schwan, im Teich badet Hund Neufundländer\",\r\n \"Blumen Engelstrompeten\",\r\n \"Gebäude und Anlage Fürstenlager\",\r\n \"Quellen, Park Herrenwiese, Hügellandschaft, Tempel, einzelne große alte Bäume, Wein, Eremitage\"\r\n ],\r\n \"du\": {\r\n \"content\": [\r\n {\r\n \"content4t\": {\r\n \"contenttext\": \"Bensheim - Auerbach. Das Fürstenlager, ein Landschaftsgarten.\"\r\n }\r\n },\r\n {\r\n \"content4b\": {\r\n \"contenttext\": \"Schwanenteich, am Ufer schwarzer Schwan, im Teich badet Hund Neufundländer\"\r\n }\r\n },\r\n {\r\n \"content4b\": {\r\n \"contenttext\": \"Blumen Engelstrompeten\"\r\n }\r\n },\r\n {\r\n \"content4b\": {\r\n \"contenttext\": \"Gebäude und Anlage Fürstenlager\"\r\n }\r\n },\r\n {\r\n \"content4b\": {\r\n \"contenttext\": \"Quellen, Park Herrenwiese, Hügellandschaft, Tempel, einzelne große alte Bäume, Wein, Eremitage\"\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n\r\nGET testcase/_search\r\n{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"match_all\": {}\r\n }\r\n ]\r\n }\r\n },\r\n \"_source\": false,\r\n \"highlight\": {\r\n \"highlight_query\": {\r\n \"bool\": {\r\n \"should\": [\r\n {\r\n \"query_string\": {\r\n \"query\": \"hund*\",\r\n \"default_field\": \"snippet\"\r\n }\r\n },\r\n {\r\n \"nested\": {\r\n \"query\": {\r\n \"query_string\": {\r\n \"query\": \"hund*\",\r\n \"default_field\": \"du.content.content4b.contenttext\"\r\n }\r\n },\r\n \"path\": \"du.content\"\r\n }\r\n }\r\n ]\r\n }\r\n },\r\n \"fields\": {\r\n \"maintitle\": {},\r\n \"subtitle\": {},\r\n \"du.content.*.contenttext\": {\r\n \"fragment_size\": 50,\r\n \"number_of_fragments\": 3\r\n },\r\n \"snippet\": {\r\n \"fragment_size\": 50,\r\n \"number_of_fragments\": 3\r\n }\r\n }\r\n }\r\n}\r\n```\r\nResult with Elasticsearch 2.4.4\r\n```\r\n\"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"hr-sd\",\r\n \"_type\": \"searchdoc\",\r\n \"_id\": \"100010549@2@0@0#0\",\r\n \"_score\": 1,\r\n \"highlight\": {\r\n \"snippet\": [\r\n \"Schwanenteich, am Ufer schwarzer Schwan, im Teich badet <em>Hund</em> Neufundländer\"\r\n ],\r\n \"du.content.content4b.contenttext\": [\r\n 
\"Schwanenteich, am Ufer schwarzer Schwan, im Teich badet <em>Hund</em> Neufundländer\"\r\n ]\r\n }\r\n }\r\n ]\r\n```\r\nResult with Elasticsearch 5.5.0\r\n```\r\n\"hits\": [\r\n {\r\n \"_index\": \"hr-sd\",\r\n \"_type\": \"searchdoc\",\r\n \"_id\": \"100010549@2@0@0#0\",\r\n \"_score\": 1,\r\n \"highlight\": {\r\n \"snippet\": [\r\n \"Schwanenteich, am Ufer schwarzer Schwan, im Teich badet <em>Hund</em> Neufundländer\"\r\n ]\r\n }\r\n }\r\n ]\r\n```",
"comments": [
{
"body": "I've run some tests, and ES 5.4.0 starts diverging in behavior from prior versions. ES 5.3.3 still shows the same behavior as ES 2.x. Could it be related to https://github.com/elastic/elasticsearch/pull/24214 @jpountz ?",
"created_at": "2017-08-16T10:39:47Z"
}
],
"number": 26230,
"title": "highlight_query doesn't work with nested query and wildcards"
} | {
"body": "This commit extracts the inner query in the `ESToParentBlockJoinQuery` for highlighting.\r\nThis query has been added in 5.4 and breaks plain highlighting on nested queries.\r\nHighlighters that use postings or term vectors are not affected because they can't highlight nested documents correctly.\r\n\r\nFixes #26230",
"number": 26305,
"review_comments": [],
"title": "Fix nested query highlighting"
} | {
"commits": [
{
"message": "Fix nested query highlighting\n\nThis commit extracts the inner query in the ESToParentBlockJoinQuery for highlighting.\nThis query has been added in 5.4 and breaks plain highlighting on nested queries.\nHighlighters that use postings or term vectors are not affected because they can't highlight nested documents correctly.\n\nFixes #26230"
}
],
"files": [
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.common.lucene.all.AllTermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n+import org.elasticsearch.index.search.ESToParentBlockJoinQuery;\n \n import java.io.IOException;\n import java.text.BreakIterator;\n@@ -210,6 +211,8 @@ private Collection<Query> rewriteCustomQuery(Query query) {\n return Collections.singletonList(new TermQuery(atq.getTerm()));\n } else if (query instanceof FunctionScoreQuery) {\n return Collections.singletonList(((FunctionScoreQuery) query).getSubQuery());\n+ } else if (query instanceof ESToParentBlockJoinQuery) {\n+ return Collections.singletonList(((ESToParentBlockJoinQuery) query).getChildQuery());\n } else {\n return null;\n }",
"filename": "core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.SynonymQuery;\n import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.apache.lucene.search.spans.SpanTermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n@@ -90,6 +91,11 @@ void flatten(Query sourceQuery, IndexReader reader, Collection<Query> flatQuerie\n for (Term term : synQuery.getTerms()) {\n flatten(new TermQuery(term), reader, flatQueries, boost);\n }\n+ } else if (sourceQuery instanceof ESToParentBlockJoinQuery) {\n+ Query childQuery = ((ESToParentBlockJoinQuery) sourceQuery).getChildQuery();\n+ if (childQuery != null) {\n+ flatten(childQuery, reader, flatQueries, boost);\n+ }\n } else {\n super.flatten(sourceQuery, reader, flatQueries, boost);\n }",
"filename": "core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.search.highlight.WeightedSpanTerm;\n import org.apache.lucene.search.highlight.WeightedSpanTermExtractor;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n+import org.elasticsearch.index.search.ESToParentBlockJoinQuery;\n \n import java.io.IOException;\n import java.util.Map;\n@@ -86,6 +87,8 @@ protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> t\n return;\n } else if (query instanceof FunctionScoreQuery) {\n super.extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms);\n+ } else if (query instanceof ESToParentBlockJoinQuery) {\n+ super.extract(((ESToParentBlockJoinQuery) query).getChildQuery(), boost, terms);\n } else {\n super.extract(query, boost, terms);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.apache.lucene.search.join.ScoreMode;\n-import org.elasticsearch.Version;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -2841,4 +2840,80 @@ public void testHighlightQueryRewriteDatesWithNow() throws Exception {\n equalTo(\"<x>hello</x> world\"));\n }\n }\n+\n+ public void testWithNestedQuery() throws Exception {\n+ String mapping = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"text\")\n+ .field(\"index_options\", \"offsets\")\n+ .field(\"term_vector\", \"with_positions_offsets\")\n+ .endObject()\n+ .startObject(\"foo\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"text\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject().string();\n+ prepareCreate(\"test\").addMapping(\"type\", mapping, XContentType.JSON).get();\n+\n+ client().prepareIndex(\"test\", \"type\", \"1\").setSource(jsonBuilder().startObject()\n+ .startArray(\"foo\")\n+ .startObject().field(\"text\", \"brown\").endObject()\n+ .startObject().field(\"text\", \"cow\").endObject()\n+ .endArray()\n+ .field(\"text\", \"brown\")\n+ .endObject()).setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)\n+ .get();\n+\n+ for (String type : new String[] {\"unified\", \"plain\"}) {\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(nestedQuery(\"foo\", matchQuery(\"foo.text\", \"brown cow\"), ScoreMode.None))\n+ .highlighter(new HighlightBuilder()\n+ .field(new Field(\"foo.text\").highlighterType(type)))\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ HighlightField field = searchResponse.getHits().getAt(0).getHighlightFields().get(\"foo.text\");\n+ assertThat(field.getFragments().length, equalTo(2));\n+ assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n+ assertThat(field.getFragments()[1].string(), equalTo(\"<em>cow</em>\"));\n+\n+ searchResponse = client().prepareSearch()\n+ .setQuery(nestedQuery(\"foo\", prefixQuery(\"foo.text\", \"bro\"), ScoreMode.None))\n+ .highlighter(new HighlightBuilder()\n+ .field(new Field(\"foo.text\").highlighterType(type)))\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ field = searchResponse.getHits().getAt(0).getHighlightFields().get(\"foo.text\");\n+ assertThat(field.getFragments().length, equalTo(1));\n+ assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n+\n+ searchResponse = client().prepareSearch()\n+ .setQuery(nestedQuery(\"foo\", prefixQuery(\"foo.text\", \"bro\"), ScoreMode.None))\n+ .highlighter(new HighlightBuilder()\n+ .field(new Field(\"foo.text\").highlighterType(\"plain\")))\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ field = searchResponse.getHits().getAt(0).getHighlightFields().get(\"foo.text\");\n+ assertThat(field.getFragments().length, equalTo(1));\n+ assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n+ }\n+\n+ // For unified and fvh highlighters we just check that the nested query is correctly extracted\n+ // but we highlight the root text field since nested documents cannot be highlighted with postings nor term vectors\n+ // directly.\n+ for (String type : ALL_TYPES) {\n+ SearchResponse searchResponse = client().prepareSearch()\n+ 
.setQuery(nestedQuery(\"foo\", prefixQuery(\"foo.text\", \"bro\"), ScoreMode.None))\n+ .highlighter(new HighlightBuilder()\n+ .field(new Field(\"text\").highlighterType(type).requireFieldMatch(false)))\n+ .get();\n+ assertHitCount(searchResponse, 1);\n+ HighlightField field = searchResponse.getHits().getAt(0).getHighlightFields().get(\"text\");\n+ assertThat(field.getFragments().length, equalTo(1));\n+ assertThat(field.getFragments()[0].string(), equalTo(\"<em>brown</em>\"));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/highlight/HighlighterSearchIT.java",
"status": "modified"
}
]
} |
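
The fix above teaches three highlighters to unwrap `ESToParentBlockJoinQuery` to its child query when flattening the query for term extraction. The toy model below uses made-up query classes (not Lucene's or Elasticsearch's) to show why that unwrap step matters: without the wrapper branch, the walker never reaches the terms inside the nested query, so nothing gets highlighted.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Toy model of the pattern the fix above applies: when walking a query tree to
 * collect terms to highlight, a wrapper query such as a block-join must be
 * unwrapped to its child query, otherwise its terms are silently skipped.
 */
public class QueryUnwrapSketch {

    interface Query {}
    static final class TermQuery implements Query {
        final String term;
        TermQuery(String term) { this.term = term; }
    }
    static final class BooleanQuery implements Query {
        final List<Query> clauses;
        BooleanQuery(List<Query> clauses) { this.clauses = clauses; }
    }
    /** Stands in for ESToParentBlockJoinQuery: wraps the query that runs on nested docs. */
    static final class BlockJoinQuery implements Query {
        final Query childQuery;
        BlockJoinQuery(Query childQuery) { this.childQuery = childQuery; }
    }

    static void collectTerms(Query query, List<String> out) {
        if (query instanceof TermQuery) {
            out.add(((TermQuery) query).term);
        } else if (query instanceof BooleanQuery) {
            for (Query clause : ((BooleanQuery) query).clauses) {
                collectTerms(clause, out);
            }
        } else if (query instanceof BlockJoinQuery) {
            collectTerms(((BlockJoinQuery) query).childQuery, out);  // the unwrap step added by the fix
        }
        // unknown query types contribute no highlight terms
    }

    public static void main(String[] args) {
        Query query = new BlockJoinQuery(
                new BooleanQuery(Arrays.asList(new TermQuery("hund"), new TermQuery("brown"))));
        List<String> terms = new ArrayList<>();
        collectTerms(query, terms);
        System.out.println(terms);  // [hund, brown] — without the BlockJoinQuery branch this would be empty
    }
}
```
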
{
"body": "Recreation:\r\n```\r\nPUT index\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"client_ip\": {\r\n \"type\": \"ip\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET index/_validate/query?rewrite=true\r\n{\r\n \"query\": {\r\n \"query_string\": {\r\n \"query\": \"client_ip:\\\"::ffff:0:0/96\\\"\"\r\n }\r\n }\r\n}\r\n```\r\nwhich gives\r\n\r\n```\r\n{\r\n \"valid\": true,\r\n \"_shards\": {\r\n \"total\": 1,\r\n \"successful\": 1,\r\n \"failed\": 0\r\n },\r\n \"explanations\": [\r\n {\r\n \"index\": \"index\",\r\n \"valid\": true,\r\n \"explanation\": \"\"\"MatchNoDocsQuery(\"failed [client_ip] query, caused by illegal_argument_exception:[illegal prefixLength '96'. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges]\")\"\"\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThe issue is that the ip address is automatically translated to an ipv4 address, so we then interpret the prefix length as if the ip was an ipv4 address.",
"comments": [
{
"body": "It is not easy to fix solely in Elasticsearch so I opened a Lucene issue for discussion. https://issues.apache.org/jira/browse/LUCENE-7920",
"created_at": "2017-08-07T09:32:42Z"
},
{
"body": "I am passing along some additional comment by the customer.\r\nHopefully, it gives you more clarity to the problem.\r\n\r\n```\r\nonly in my use case,\r\nIf IPv4 and CIDR >= 96, there is no problem when\r\nthe value like(specified CIDR -96) is used internally.\r\n \r\n \r\nnot work: client_ip:\"::ffff:0:0/96\"\r\n-> work: client_ip:\"::ffff:0:0/0\"\r\n \r\nwork: client_ip:\"::fffe:0:0/95\"\r\n \r\nnot work: client_ip:\"::ffff:0:0/95\"\r\n-> I do not plan to use this query\r\n```",
"created_at": "2017-08-17T00:43:47Z"
},
{
"body": "We couldn't reach agreement as to what to do in that case so we will just disallow prefix queries on ipv6-mapped ipv4 addresses since they introduce ambiguity as to how the prefix length should be interpreted.",
"created_at": "2017-08-17T13:00:20Z"
}
],
"number": 26078,
"title": "`ip` field does not allow prefix queries with ipv6-mapped ipv4 addresses"
} | {
"body": "It introduces ambiguity as to whether the prefix length should be interpreted as\r\na v4 prefix length or a v6 prefix length.\r\n\r\nSee https://issues.apache.org/jira/browse/LUCENE-7920.\r\n\r\nCloses #26078",
"number": 26254,
"review_comments": [
{
"body": "Can you test the edges for each, so 129, 128 and 32, 33?",
"created_at": "2017-08-17T15:52:28Z"
}
],
"title": "Reject IPv6-mapped IPv4 addresses when using the CIDR notation."
} | {
"commits": [
{
"message": "Reject IPv6-mapped IPv4 addresses when using the CIDR notation.\n\nIt introduces ambiguity as to whether the prefix length should be interpreted as\na v4 prefix length or a v6 prefix length.\n\nSee https://issues.apache.org/jira/browse/LUCENE-7920.\n\nCloses #26078"
},
{
"message": "iter"
}
],
"files": [
{
"diff": "@@ -16,6 +16,8 @@\n \n package org.elasticsearch.common.network;\n \n+import org.elasticsearch.common.collect.Tuple;\n+\n import java.net.Inet4Address;\n import java.net.Inet6Address;\n import java.net.InetAddress;\n@@ -354,4 +356,32 @@ private static InetAddress bytesToInetAddress(byte[] addr) {\n throw new AssertionError(e);\n }\n }\n+\n+ /**\n+ * Parse an IP address and its prefix length using the CIDR notation.\n+ * @throws IllegalArgumentException if the string is not formatted as {@code ip_address/prefix_length}\n+ * @throws IllegalArgumentException if the IP address is an IPv6-mapped ipv4 address\n+ * @throws IllegalArgumentException if the prefix length is not in 0-32 for IPv4 addresses and 0-128 for IPv6 addresses\n+ * @throws NumberFormatException if the prefix length is not an integer\n+ */\n+ public static Tuple<InetAddress, Integer> parseCidr(String maskedAddress) {\n+ String[] fields = maskedAddress.split(\"/\");\n+ if (fields.length == 2) {\n+ final String addressString = fields[0];\n+ final InetAddress address = forString(addressString);\n+ if (addressString.contains(\":\") && address.getAddress().length == 4) {\n+ throw new IllegalArgumentException(\"CIDR notation is not allowed with IPv6-mapped IPv4 address [\" + addressString +\n+ \" as it introduces ambiguity as to whether the prefix length should be interpreted as a v4 prefix length or a\" +\n+ \" v6 prefix length\");\n+ }\n+ final int prefixLength = Integer.parseInt(fields[1]);\n+ if (prefixLength < 0 || prefixLength > 8 * address.getAddress().length) {\n+ throw new IllegalArgumentException(\"Illegal prefix length [\" + prefixLength + \"] in [\" + maskedAddress +\n+ \"]. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges\");\n+ }\n+ return new Tuple<>(address, prefixLength);\n+ } else {\n+ throw new IllegalArgumentException(\"Expected [ip/prefix] but was [\" + maskedAddress + \"]\");\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/network/InetAddresses.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -163,14 +164,8 @@ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n }\n String term = value.toString();\n if (term.contains(\"/\")) {\n- String[] fields = term.split(\"/\");\n- if (fields.length == 2) {\n- InetAddress address = InetAddresses.forString(fields[0]);\n- int prefixLength = Integer.parseInt(fields[1]);\n- return InetAddressPoint.newPrefixQuery(name(), address, prefixLength);\n- } else {\n- throw new IllegalArgumentException(\"Expected [ip/prefix] but was [\" + term + \"]\");\n- }\n+ final Tuple<InetAddress, Integer> cidr = InetAddresses.parseCidr(term);\n+ return InetAddressPoint.newPrefixQuery(name(), cidr.v1(), cidr.v2());\n }\n InetAddress address = InetAddresses.forString(term);\n return InetAddressPoint.newExactQuery(name(), address);",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.network.InetAddresses;\n@@ -127,21 +128,12 @@ public static class Range implements ToXContentObject {\n }\n \n Range(String key, String mask) {\n- String[] splits = mask.split(\"/\");\n- if (splits.length != 2) {\n- throw new IllegalArgumentException(\"Expected [ip/prefix_length] but got [\" + mask\n- + \"], which contains zero or more than one [/]\");\n- }\n- InetAddress value = InetAddresses.forString(splits[0]);\n- int prefixLength = Integer.parseInt(splits[1]);\n- // copied from InetAddressPoint.newPrefixQuery\n- if (prefixLength < 0 || prefixLength > 8 * value.getAddress().length) {\n- throw new IllegalArgumentException(\"illegal prefixLength [\" + prefixLength\n- + \"] in [\" + mask + \"]. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges\");\n- }\n+ final Tuple<InetAddress, Integer> cidr = InetAddresses.parseCidr(mask);\n+ final InetAddress address = cidr.v1();\n+ final int prefixLength = cidr.v2();\n // create the lower value by zeroing out the host portion, upper value by filling it with all ones.\n- byte lower[] = value.getAddress();\n- byte upper[] = value.getAddress();\n+ byte lower[] = address.getAddress();\n+ byte upper[] = address.getAddress();\n for (int i = prefixLength; i < 8 * lower.length; i++) {\n int m = 1 << (7 - (i & 7));\n lower[i >> 3] &= ~m;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/IpRangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -16,7 +16,9 @@\n \n package org.elasticsearch.common.network;\n \n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.test.ESTestCase;\n+import org.hamcrest.Matchers;\n \n import java.net.InetAddress;\n import java.net.UnknownHostException;\n@@ -214,4 +216,34 @@ public void testToUriStringIPv6() {\n InetAddress ip = InetAddresses.forString(ipStr);\n assertEquals(\"[3ffe::1]\", InetAddresses.toUriString(ip));\n }\n+\n+ public void testParseCidr() {\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr(\"\"));\n+ assertThat(e.getMessage(), Matchers.containsString(\"Expected [ip/prefix] but was []\"));\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr(\"192.168.1.42/33\"));\n+ assertThat(e.getMessage(), Matchers.containsString(\"Illegal prefix length\"));\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr(\"::1/129\"));\n+ assertThat(e.getMessage(), Matchers.containsString(\"Illegal prefix length\"));\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> InetAddresses.parseCidr(\"::ffff:0:0/96\"));\n+ assertThat(e.getMessage(), Matchers.containsString(\"CIDR notation is not allowed with IPv6-mapped IPv4 address\"));\n+\n+ Tuple<InetAddress, Integer> cidr = InetAddresses.parseCidr(\"192.168.0.0/24\");\n+ assertEquals(InetAddresses.forString(\"192.168.0.0\"), cidr.v1());\n+ assertEquals(Integer.valueOf(24), cidr.v2());\n+\n+ cidr = InetAddresses.parseCidr(\"::fffe:0:0/95\");\n+ assertEquals(InetAddresses.forString(\"::fffe:0:0\"), cidr.v1());\n+ assertEquals(Integer.valueOf(95), cidr.v2());\n+\n+ cidr = InetAddresses.parseCidr(\"192.168.0.0/32\");\n+ assertEquals(InetAddresses.forString(\"192.168.0.0\"), cidr.v1());\n+ assertEquals(Integer.valueOf(32), cidr.v2());\n+\n+ cidr = InetAddresses.parseCidr(\"::fffe:0:0/128\");\n+ assertEquals(InetAddresses.forString(\"::fffe:0:0\"), cidr.v1());\n+ assertEquals(Integer.valueOf(128), cidr.v2());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/common/network/InetAddressesTests.java",
"status": "modified"
}
]
} |
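
The new `InetAddresses.parseCidr` helper above validates the `ip/prefix` notation and rejects IPv6-mapped IPv4 literals because their prefix length is ambiguous. The sketch below reimplements that check with only `java.net` classes for illustration; unlike the Elasticsearch helper, `InetAddress.getByName` would fall back to DNS resolution for non-literal input, so it is only appropriate here because the inputs are IP literals.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Simplified sketch of the CIDR validation introduced above, using plain java.net.
 * It rejects IPv6-mapped IPv4 literals because a prefix length such as 96 is
 * ambiguous (v4 prefix? v6 prefix?) for them.
 */
public class CidrSketch {

    static void parseCidr(String maskedAddress) throws UnknownHostException {
        String[] fields = maskedAddress.split("/");
        if (fields.length != 2) {
            throw new IllegalArgumentException("Expected [ip/prefix] but was [" + maskedAddress + "]");
        }
        InetAddress address = InetAddress.getByName(fields[0]);
        // The JDK (like the parser used in the diff) collapses ::ffff:a.b.c.d to a 4-byte address,
        // so a ':' in the input plus a 4-byte result means an IPv6-mapped IPv4 literal.
        if (fields[0].contains(":") && address.getAddress().length == 4) {
            throw new IllegalArgumentException("CIDR notation is not allowed with IPv6-mapped IPv4 address [" + fields[0] + "]");
        }
        int prefixLength = Integer.parseInt(fields[1]);
        if (prefixLength < 0 || prefixLength > 8 * address.getAddress().length) {
            throw new IllegalArgumentException("Illegal prefix length [" + prefixLength + "] in [" + maskedAddress + "]");
        }
        System.out.println(maskedAddress + " -> " + address.getHostAddress() + " / " + prefixLength);
    }

    public static void main(String[] args) throws UnknownHostException {
        parseCidr("192.168.0.0/24");    // ok: IPv4, 0-32 prefix
        parseCidr("::fffe:0:0/95");     // ok: genuine IPv6, 0-128 prefix
        try {
            parseCidr("::ffff:0:0/96"); // rejected: IPv6-mapped IPv4, prefix length is ambiguous
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```
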
{
"body": "This instruction tells systemd to create a directory /var/run/elasticsearch before starting Elasticsearch.\r\n\r\nWithout this change, the default PID_DIR (/var/run/elasticsearch) may not exist, and without it, Elasticsearch will fail to start.",
"comments": [
{
"body": "/cc @jpcarey who described the problem to me.\r\n\r\nI was able to reproduce[*] the problem he described that /var/run/elasticsearch may not exist, and systemd was not creating it, and the elasticsearch service would fail to start without the directory.\r\n\r\n[*] My reproduction was not with elasticsearch's systemd service, but with a much simpler one to test systemd's behavior with respect to /var/run. Without `RuntimeDirectory=foo` the /var/run/foo directory is not created by systemd.",
"created_at": "2017-03-09T21:23:30Z"
},
{
"body": "This directory is created during installation, so I'm not sure I understand how this situation could have arisen?\r\n\r\n```\r\n$ rpm -qlp elasticsearch-5.2.2.rpm | grep /var/run/elasticsearch\r\nwarning: elasticsearch-5.2.2.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY\r\n/var/run/elasticsearch\r\n```\r\n\r\n```\r\n$ dpkg -c elasticsearch-5.2.2.deb | grep /var/run/elasticsearch\r\ndrwxr-xr-x elasticsearch/elasticsearch 0 2017-02-24 17:29 ./var/run/elasticsearch/\r\n```",
"created_at": "2017-03-09T22:08:03Z"
},
{
"body": "On fedora and possibly other red hat systems, /var/run is a symlink to\n/run, and /run is a tmpfs that is destroyed on system shutdown, so this\ndirectory is new every time the system boots.\n\n\nOn Thu, Mar 9, 2017 at 2:08 PM Jason Tedor <notifications@github.com> wrote:\n\n> This directory is created during installation, so I'm not sure I\n> understand how this situation could have arisen?\n>\n> $ rpm -qlp elasticsearch-5.2.2.rpm | grep /var/run/elasticsearch\n> warning: elasticsearch-5.2.2.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY\n> /var/run/elasticsearch\n>\n> $ dpkg -c elasticsearch-5.2.2.deb | grep /var/run/elasticsearch\n> drwxr-xr-x elasticsearch/elasticsearch 0 2017-02-24 17:29 ./var/run/elasticsearch/\n>\n> —\n> You are receiving this because you authored the thread.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/pull/23526#issuecomment-285499231>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AAIC6vyOWNyr046pPxxrnL9iLtmwTCFrks5rkHh5gaJpZM4MYpkX>\n> .\n>\n",
"created_at": "2017-03-09T22:53:07Z"
},
{
"body": "We provide `/usr/lib/tmpfiles.d/elasticsearch.conf` that creates this file on system boot exactly for this reason. I still do not see how this could have arisen? 😦",
"created_at": "2017-03-10T00:15:30Z"
},
{
"body": "@jordansissel Given the existence of the tmpfiles configuration @jasontedor mentioned, can this PR be closed?",
"created_at": "2017-06-09T07:43:58Z"
},
{
"body": "I heard, but cannot confirm, that those encountering this did not have the `systemd-tmpfiles-setup.service` running, which likely meant it was not started at boot for whatever reason.\r\n\r\nI had the `systemd-tmpfiles-setup.service` running, and deleted `/var/run/elasticsearch`. Elasticsearch would no longer start, as expected (journalctl: `Likely root cause: java.nio.file.AccessDeniedException: /var/run/elasticsearch`). \r\n\r\nIf you manually invoke `sudo systemd-tmpfiles --create`, the `/var/run/elasticsearch` folder is created.\r\n\r\nRerunning the same test, but adding `RuntimeDirectory=elasticsearch` to an override file, allowed elasticsearch to start properly when the `/var/run/elasticsearch` directory was missing.\r\n\r\nI would be in favor of adding the `RuntimeDirectory`, since it seems to avoid needing to rely on an external service's unknown working status / schedule.\r\n\r\nhttps://www.freedesktop.org/software/systemd/man/tmpfiles.d.html\r\n> System daemons frequently require private runtime directories below /run to place communication sockets and similar in. For these, consider declaring them in their unit files using RuntimeDirectory= (see systemd.exec(5) for details), if this is feasible.",
"created_at": "2017-06-09T14:52:24Z"
},
{
"body": "Given than it's possible `systemd-tmpfiles-setup` may be a disabled service, I think it is beneficial to have the `RuntimeDirectory` regardless in the ES service file. Since no one has +1 or -1 this PR I am going to approve and merge it. If anyone disagrees we can definitely revert and re-open discussion, but it has been 2 months so I don't want this to stagnate either way.",
"created_at": "2017-08-15T20:20:17Z"
},
{
"body": "I'm good with this having been integrated after yesterday seeing another user run into a problem with their `systemd-tmpfiles-setup` not running correctly. See: https://discuss.elastic.co/t/elasticsearch-service-doesnt-start-upon-system-reboot/96994",
"created_at": "2017-08-16T01:48:43Z"
},
{
"body": "I cherry-picked this to 6.0 and 6.x too.",
"created_at": "2017-08-16T01:50:13Z"
},
{
"body": "Thanks Jason!",
"created_at": "2017-08-16T01:53:00Z"
},
{
"body": "I opened #26229 to add a test too.",
"created_at": "2017-08-16T02:25:48Z"
},
{
"body": "❤️ ",
"created_at": "2017-08-16T04:16:16Z"
}
],
"number": 23526,
"title": "Set RuntimeDirectory in systemd service"
} | {
"body": "We previously added a RuntimeDirectory directive to the systemd service file for Elasticsearch. This commit adds a packaging test for the situation that this directive was intended to address.\r\n\r\nRelates #23526\r\n",
"number": 26229,
"review_comments": [
{
"body": "We could also check that the directory has been correctly created",
"created_at": "2017-08-16T07:33:57Z"
},
{
"body": "Thanks for the suggestion; I pushed a check in 7c83a2f3f1e72ea5448a36bbd46d6ce49585c68e.",
"created_at": "2017-08-16T08:38:08Z"
}
],
"title": "Add packaging test for systemd runtime directive"
} | {
"commits": [
{
"message": "Add packaging test for systemd runtime directive\n\nWe previously added a RuntimeDirectory directive to the systemd service\nfile for Elasticsearch. This commit adds a packaging test for the\nsituation that this directive was intended to address."
},
{
"message": "Add directory existence check"
}
],
"files": [
{
"diff": "@@ -236,3 +236,13 @@ setup() {\n [ \"$max_address_space\" == \"unlimited\" ]\n systemctl stop elasticsearch.service\n }\n+\n+@test \"[SYSTEMD] test runtime directory\" {\n+ clean_before_test\n+ install_package\n+ sudo rm -rf /var/run/elasticsearch\n+ systemctl start elasticsearch.service\n+ wait_for_elasticsearch_status\n+ [ -d /var/run/elasticsearch ]\n+ systemctl stop elasticsearch.service\n+}",
"filename": "qa/vagrant/src/test/resources/packaging/tests/60_systemd.bats",
"status": "modified"
}
]
} |
{
"body": "The deprecation logger in AbstractXContentParser is static. This is done for performance reasons, to avoid constructing a deprecation logger for every parser of which there can be many (e.g., one for every document when scripting). This is fine, but the static here is a problem because it means we touch loggers before logging is initialized (when constructing a list setting in Environment which is a precursor to initializing logging). Therefore, to maintain the previous change (not constructing a parser for every instance) but avoiding the problems with static, we have to lazy initialize here. This is not perfect, there is a volatile read behind the scenes. This could be avoided (e.g., by not using set once) but I prefer the safety that set once provides. I think this should be the approach unless it otherwise proves problematic.\r\n\r\nRelates #25879\r\n\r\n",
"comments": [
{
"body": "Note that this is similar to the approach in `Setting` introduced in #25474 for exactly the same reason as here.",
"created_at": "2017-08-14T21:53:55Z"
},
{
"body": "@prog8 Would you be able to test this implementation against the case where you were previously encountering performance issues with the deprecation loggers from content parsers?",
"created_at": "2017-08-14T21:56:03Z"
}
],
"number": 26210,
"title": "Lazy initialize deprecation logger in parser"
} | {
"body": "In a few places we need to lazy initialize static deprecation loggers. This is needed to avoid touching logging before logging is configured, but deprecation loggers that are used in foundational classes like settings and parsers would be initialized before logging is configured. Previously we used a lazy set once pattern which is fine, but there's a simpler approach: the holder pattern.\r\n\r\nRelates #26210\r\n",
"number": 26218,
"review_comments": [],
"title": "Use holder pattern for lazy deprecation loggers"
} | {
"commits": [
{
"message": "Use holder pattern for lazy deprecation loggers\n\nIn a few places we need to lazy initialize static deprecation\nloggers. This is needed to avoid touching logging before logging is\nconfigured, but deprecation loggers that are used in foundational\nclasses like settings and parsers would be initialized before logging is\nconfigured. Previously we used a lazy set once pattern which is fine,\nbut there's a simpler approach: the holder pattern."
},
{
"message": "Mark field as private"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.common.settings;\n \n import org.apache.logging.log4j.Logger;\n-import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.support.ToXContentToBytes;\n@@ -355,23 +354,13 @@ public String getRaw(Settings settings) {\n return settings.get(getKey(), defaultValue.apply(settings));\n }\n \n- private static SetOnce<DeprecationLogger> deprecationLogger = new SetOnce<>();\n-\n- // we have to initialize lazily otherwise a logger would be constructed before logging is initialized\n- private static synchronized DeprecationLogger getDeprecationLogger() {\n- if (deprecationLogger.get() == null) {\n- deprecationLogger.set(new DeprecationLogger(Loggers.getLogger(Settings.class)));\n- }\n- return deprecationLogger.get();\n- }\n-\n /** Logs a deprecation warning if the setting is deprecated and used. */\n void checkDeprecation(Settings settings) {\n // They're using the setting, so we need to tell them to stop\n if (this.isDeprecated() && this.exists(settings)) {\n // It would be convenient to show its replacement key, but replacement is often not so simple\n final String key = getKey();\n- getDeprecationLogger().deprecatedAndMaybeLog(\n+ Settings.DeprecationLoggerHolder.deprecationLogger.deprecatedAndMaybeLog(\n key,\n \"[{}] setting was deprecated in Elasticsearch and will be removed in a future release! \"\n + \"See the breaking changes documentation for the next major version.\",",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Setting.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.LogConfigurator;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.loader.SettingsLoader;\n import org.elasticsearch.common.settings.loader.SettingsLoaderFactory;\n@@ -63,7 +64,6 @@\n import java.util.Objects;\n import java.util.Set;\n import java.util.TreeMap;\n-import java.util.concurrent.ConcurrentHashMap;\n import java.util.concurrent.TimeUnit;\n import java.util.function.Function;\n import java.util.function.Predicate;\n@@ -320,6 +320,15 @@ public Long getAsLong(String setting, Long defaultValue) {\n }\n }\n \n+ /**\n+ * We have to lazy initialize the deprecation logger as otherwise a static logger here would be constructed before logging is configured\n+ * leading to a runtime failure (see {@link LogConfigurator#checkErrorListener()} ). The premature construction would come from any\n+ * {@link Setting} object constructed in, for example, {@link org.elasticsearch.env.Environment}.\n+ */\n+ static class DeprecationLoggerHolder {\n+ static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Settings.class));\n+ }\n+\n /**\n * Returns the setting value (as boolean) associated with the setting key. If it does not exists,\n * returns the default value provided.\n@@ -328,7 +337,7 @@ public Boolean getAsBoolean(String setting, Boolean defaultValue) {\n String rawValue = get(setting);\n Boolean booleanValue = Booleans.parseBooleanExact(rawValue, defaultValue);\n if (rawValue != null && Booleans.isStrictlyBoolean(rawValue) == false) {\n- DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(Settings.class));\n+ final DeprecationLogger deprecationLogger = DeprecationLoggerHolder.deprecationLogger;\n deprecationLogger.deprecated(\"Expected a boolean [true/false] for setting [{}] but got [{}]\", setting, rawValue);\n }\n return booleanValue;",
"filename": "core/src/main/java/org/elasticsearch/common/settings/Settings.java",
"status": "modified"
},
{
"diff": "@@ -20,11 +20,12 @@\n package org.elasticsearch.common.xcontent.support;\n \n import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.LogConfigurator;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.XContentParser;\n \n@@ -55,32 +56,16 @@ private static void checkCoerceString(boolean coerce, Class<? extends Number> cl\n }\n }\n \n- // do not use this field directly, use AbstractXContentParser#getDeprecationLogger\n- private static final SetOnce<DeprecationLogger> deprecationLogger = new SetOnce<>();\n-\n- private static DeprecationLogger getDeprecationLogger() {\n- /*\n- * This implementation is intentionally verbose to make the minimum number of volatile reads. In the case that the set once is\n- * already initialized, this implementation makes exactly one volatile read. In the case that the set once is not initialized we\n- * make exactly two volatile reads.\n- */\n- final DeprecationLogger logger = deprecationLogger.get();\n- if (logger == null) {\n- synchronized (AbstractXContentParser.class) {\n- final DeprecationLogger innerLogger = deprecationLogger.get();\n- if (innerLogger == null) {\n- final DeprecationLogger newLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\n- deprecationLogger.set(newLogger);\n- return newLogger;\n- } else {\n- return innerLogger;\n- }\n- }\n- } else {\n- return logger;\n- }\n+ /**\n+ * We have to lazy initialize the deprecation logger as otherwise a static logger here would be constructed before logging is configured\n+ * leading to a runtime failure (see {@link LogConfigurator#checkErrorListener()} ). The premature construction would come from any\n+ * {@link Setting} object constructed in, for example, {@link org.elasticsearch.env.Environment}.\n+ */\n+ private static class DeprecationLoggerHolder {\n+ static DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\n }\n \n+\n private final NamedXContentRegistry xContentRegistry;\n \n public AbstractXContentParser(NamedXContentRegistry xContentRegistry) {\n@@ -137,7 +122,8 @@ public boolean booleanValue() throws IOException {\n booleanValue = doBooleanValue();\n }\n if (interpretedAsLenient) {\n- getDeprecationLogger().deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\n+ final DeprecationLogger deprecationLogger = DeprecationLoggerHolder.deprecationLogger;\n+ deprecationLogger.deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\n }\n return booleanValue;\n ",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
}
]
} |
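The lazy-initialization change in the diff above relies on the Java initialization-on-demand holder idiom to defer constructing the deprecation logger until first use. A minimal sketch of that pattern follows; the names and logger type are illustrative stand-ins, not the actual Elasticsearch classes.

```java
// Sketch of the initialization-on-demand holder idiom: the nested holder
// class is loaded only on first access, so the (stand-in) logger is not
// constructed before logging has been configured. Names are illustrative.
public final class LazyDeprecationLoggerSketch {

    private LazyDeprecationLoggerSketch() {}

    // The JVM guarantees this class initializer runs lazily and exactly once,
    // on the first reference to DeprecationLoggerHolder.LOGGER.
    private static final class DeprecationLoggerHolder {
        static final java.util.logging.Logger LOGGER =
                java.util.logging.Logger.getLogger("deprecation");
    }

    public static void deprecated(String message) {
        // The first call triggers holder initialization; later calls reuse it.
        DeprecationLoggerHolder.LOGGER.warning(message);
    }

    public static void main(String[] args) {
        deprecated("Expected a boolean [true/false] but got [yes]");
    }
}
```

The same idea appears in both files touched by the patch: construction is deferred until a deprecated-value warning is actually emitted, after logging configuration has run.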
{
"body": "ES version **2.4.1**:\r\n\r\nConsider the following mapping:\r\n\r\n```json\r\n{\r\n \"timestamp_source\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"my_timestamp_1\": {\r\n \"type\": \"date\"\r\n },\r\n \"my_timestamp_2\": {\r\n \"type\": \"date\"\r\n }\r\n }\r\n },\r\n \"quantity\": {\r\n \"type\": \"float\"\r\n }\r\n}\r\n```\r\n\r\nand the following document:\r\n\r\n```json\r\n{\r\n \"timestamp_source\": [\r\n {\r\n \"my_timestamp_1\": \"2014-05-11T23:52:38+0000\",\r\n \"my_timestamp_2\": \"2015-05-11T23:52:38+0000\"\r\n },\r\n {\r\n \"my_timestamp_1\": \"2016-05-11T23:52:38+0000\",\r\n \"my_timestamp_2\": \"2017-05-11T23:52:38+0000\"\r\n }\r\n ],\r\n \"quantity\": 42\r\n}\r\n```\r\n\r\nI'm after bucketing the timestamps, summing the quantities and calculating the `avg_bucket`. However the following query fails w/ a cryptic error msg:\r\n\r\n```bash\r\ncurl -XGET 'localhost:9200/my_index/_search?pretty&size=0' -d '\r\n{\r\n \"query\": {\r\n \"match_all\": {}\r\n },\r\n \"aggs\": {\r\n \"by_frequency\": {\r\n \"nested\": {\r\n \"path\": \"timestamp_source\"\r\n },\r\n \"aggs\": {\r\n \"by_frequency\": {\r\n \"date_histogram\": {\r\n \"field\": \"timestamp_source.my_timestamp_1\",\r\n \"interval\": \"month\",\r\n \"min_doc_count\": 0\r\n },\r\n \"aggs\": {\r\n \"total\": {\r\n \"reverse_nested\": {},\r\n \"aggs\": {\r\n \"total\": {\r\n \"sum\": {\r\n \"field\": \"quantity\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"monthly_average\": {\r\n \"avg_bucket\": {\r\n \"buckets_path\": \"by_frequency>by_frequency>total>total\",\r\n \"gap_policy\": \"insert_zeros\"\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [ ],\r\n \"type\" : \"reduce_search_phase_exception\",\r\n \"reason\" : \"[reduce] \",\r\n \"phase\" : \"fetch\",\r\n \"grouped\" : true,\r\n \"failed_shards\" : [ ],\r\n \"caused_by\" : {\r\n \"type\" : \"class_cast_exception\",\r\n \"reason\" : null\r\n }\r\n },\r\n \"status\" : 503\r\n}\r\n```\r\n\r\n**Actual behaviour**\r\n\r\nThe error message isn't descriptive enough.\r\n\r\n**Expected behaviour**\r\n\r\nThe 'reason' field explains in more detail the reasons behind failure.\r\n\r\nAlternatively (if this query is valid), the query succeeds.\r\n\r\n",
"comments": [
{
"body": "I agree that the error message isn't very helpful here, my guess is that you hit a bug with the validation of the buckets path. There will be a stack trace in the server logs relating to that ClassCastException, could you paste it here so we can fix the bug in the validation?\r\n\r\nAlthough there is a bug in the validation your request should fail because the `*_bucket` aggregations need to be direct-siblings to the multi-bucket aggregations they are working on, if you move your `monthly_average` aggregation so its a sub-aggregation to your `by_frequency` `nested` aggregations an a sibling to the `by_frequency` `date_histogram` aggregation, and change the `buckets+path` to `by_frequency>total>total` I think it might work (although depending on the cause of the `ClassCastException` you may still hit that).",
"created_at": "2017-07-18T15:02:49Z"
},
{
"body": "The stack trace is:\r\n```\r\n[2017-07-18 14:33:48,542][WARN ][rest.suppressed ] path: /some_crazy_schema_v4/_search, params: {pretty=, size=0, index=some_crazy_schema_v4}\r\nFailed to execute phase [fetch], [reduce]\r\nat org.elasticsearch.action.search.SearchQueryThenFetchAsyncAction$2.onFailure(SearchQueryThenFetchAsyncAction.java:146)\r\nat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39)\r\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\nat java.lang.Thread.run(Thread.java:745)\r\nCaused by: java.lang.ClassCastException\r\n[2017-07-18 14:38:25,821][DEBUG][action.search ] [In-Betweener] failed to reduce search\r\nFailed to execute phase [fetch], [reduce]\r\nat org.elasticsearch.action.search.SearchQueryThenFetchAsyncAction$2.onFailure(SearchQueryThenFetchAsyncAction.java:146)\r\nat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39)\r\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\nat java.lang.Thread.run(Thread.java:745)\r\nCaused by: java.lang.ClassCastException\r\n```",
"created_at": "2017-07-18T15:30:17Z"
},
{
"body": "Also thanks a lot for your suggestion, it worked!",
"created_at": "2017-07-18T15:33:54Z"
},
{
"body": "So having dug into this a bit, the error was exactly because of the problem I outlined with your request but it isn't particularly helpful to the end user. I am working on a fix to make this validation happen earlier and give back a more descriptive message for the next person who makes the same kind of mistake\r\n\r\nthanks @kujon for raising this",
"created_at": "2017-07-21T13:33:27Z"
}
],
"number": 25775,
"title": "Cryptic error message for pipeline aggregations based on combination of nested and reverse_nested aggs"
} | {
"body": "This adds a validation step to the BucketMetricsPipelineAggregationBuilder which ensure that the first aggregation in the `buckets_path` is a multi-bucket aggregation. It does this using a new `MultiBucketAggregationBuilder` marker interface.\r\n\r\nThe change also moves the validate of pipeline aggregations to the `AggregatorFactories.build()` method so the validate can inspect sibling `AggregatorBuilder` objects rather than `AggregatorFactory` objects. Further it removes the validate from `AggregatorFactory` since this was never implemented and since aggregators only depend on their own internal state and not on other aggregators they should be validated ideally at setter time but in rare case where this is not possible the validation should be done in the `AggregationBuilder.build()` step.\r\n\r\nCloses #25775",
"number": 26215,
"review_comments": [
{
"body": "let's split on both chars at once? `split(\"[>\\\\.]\")`",
"created_at": "2017-08-23T17:32:42Z"
}
],
"title": "Check bucket metric ages point to a multi bucket agg"
} | {
"commits": [
{
"message": "Check bucket metric ages point to a multi bucket agg\n\nThis adds a validation step to the BucketMetricsPipelineAggregationBuilder which ensure that the first aggregation in the `buckets_path` is a multi-bucket aggregation. It does this using a new `MultiBucketAggregationBuilder` marker interface.\n\nThe change also moves the validate of pipeline aggregations to the `AggregatorFactories.build()` method so the validate can inspect sibling `AggregatorBuilder` objects rather than `AggregatorFactory` objects. Further it removes the validate from `AggregatorFactory` since this was never implemented and since aggregators only depend on their own internal state and not on other aggregators they should be validated ideally at setter time but in rare case where this is not possible the validation should be done in the `AggregationBuilder.build()` step.\n\nCloses #25775\n\nMove validate stage to happen during AggregatorFactories.Builder.build\n\nAlso removes validate method from normal aggs since it was never used."
},
{
"message": "review comment fix"
}
],
"files": [
{
"diff": "@@ -101,7 +101,6 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.Objects;\n import java.util.Optional;\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.atomic.AtomicLong;\n@@ -710,7 +709,6 @@ private void parseSource(DefaultSearchContext context, SearchSourceBuilder sourc\n if (source.aggregations() != null) {\n try {\n AggregatorFactories factories = source.aggregations().build(context, null);\n- factories.validate();\n context.aggregations(new SearchContextAggregations(factories));\n } catch (IOException e) {\n throw new AggregationInitializationException(\"Failed to create aggregators\", e);",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -238,15 +238,6 @@ public int countPipelineAggregators() {\n return pipelineAggregatorFactories.size();\n }\n \n- public void validate() {\n- for (AggregatorFactory<?> factory : factories) {\n- factory.validate();\n- }\n- for (PipelineAggregationBuilder factory : pipelineAggregatorFactories) {\n- factory.validate(parent, factories, pipelineAggregatorFactories);\n- }\n- }\n-\n public static class Builder implements Writeable, ToXContentObject {\n private final Set<String> names = new HashSet<>();\n private final List<AggregationBuilder> aggregationBuilders = new ArrayList<>();\n@@ -330,7 +321,8 @@ public AggregatorFactories build(SearchContext context, AggregatorFactory<?> par\n if (skipResolveOrder) {\n orderedpipelineAggregators = new ArrayList<>(pipelineAggregatorBuilders);\n } else {\n- orderedpipelineAggregators = resolvePipelineAggregatorOrder(this.pipelineAggregatorBuilders, this.aggregationBuilders);\n+ orderedpipelineAggregators = resolvePipelineAggregatorOrder(this.pipelineAggregatorBuilders, this.aggregationBuilders,\n+ parent);\n }\n AggregatorFactory<?>[] aggFactories = new AggregatorFactory<?>[aggregationBuilders.size()];\n for (int i = 0; i < aggregationBuilders.size(); i++) {\n@@ -340,7 +332,8 @@ public AggregatorFactories build(SearchContext context, AggregatorFactory<?> par\n }\n \n private List<PipelineAggregationBuilder> resolvePipelineAggregatorOrder(\n- List<PipelineAggregationBuilder> pipelineAggregatorBuilders, List<AggregationBuilder> aggBuilders) {\n+ List<PipelineAggregationBuilder> pipelineAggregatorBuilders, List<AggregationBuilder> aggBuilders,\n+ AggregatorFactory<?> parent) {\n Map<String, PipelineAggregationBuilder> pipelineAggregatorBuildersMap = new HashMap<>();\n for (PipelineAggregationBuilder builder : pipelineAggregatorBuilders) {\n pipelineAggregatorBuildersMap.put(builder.getName(), builder);\n@@ -354,6 +347,7 @@ private List<PipelineAggregationBuilder> resolvePipelineAggregatorOrder(\n Set<PipelineAggregationBuilder> temporarilyMarked = new HashSet<>();\n while (!unmarkedBuilders.isEmpty()) {\n PipelineAggregationBuilder builder = unmarkedBuilders.get(0);\n+ builder.validate(parent, aggBuilders, pipelineAggregatorBuilders);\n resolvePipelineAggregatorOrder(aggBuildersMap, pipelineAggregatorBuildersMap, orderedPipelineAggregatorrs, unmarkedBuilders,\n temporarilyMarked, builder);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
},
{
"diff": "@@ -188,15 +188,6 @@ public String name() {\n return name;\n }\n \n- /**\n- * Validates the state of this factory (makes sure the factory is properly\n- * configured)\n- */\n- public final void validate() {\n- doValidate();\n- factories.validate();\n- }\n-\n public void doValidate() {\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactory.java",
"status": "modified"
},
{
"diff": "@@ -68,7 +68,7 @@ public final String[] getBucketsPaths() {\n * Internal: Validates the state of this factory (makes sure the factory is properly\n * configured)\n */\n- protected abstract void validate(AggregatorFactory<?> parent, AggregatorFactory<?>[] factories,\n+ protected abstract void validate(AggregatorFactory<?> parent, List<AggregationBuilder> factories,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories);\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/PipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,30 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket;\n+\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+\n+/**\n+ * Marker interface to indicate that the {@link AggregationBuilder} is for a\n+ * multi-bucket aggregation.\n+ */\n+public interface MultiBucketAggregationBuilder {\n+\n+}",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketAggregationBuilder.java",
"status": "added"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregator.KeyedFilter;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.query.QueryPhaseExecutionException;\n@@ -46,7 +47,8 @@\n import java.util.Map.Entry;\n import java.util.Objects;\n \n-public class AdjacencyMatrixAggregationBuilder extends AbstractAggregationBuilder<AdjacencyMatrixAggregationBuilder> {\n+public class AdjacencyMatrixAggregationBuilder extends AbstractAggregationBuilder<AdjacencyMatrixAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"adjacency_matrix\";\n \n private static final String DEFAULT_SEPARATOR = \"&\";",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/adjacency/AdjacencyMatrixAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,8 +31,9 @@\n import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n-import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregator.KeyedFilter;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n@@ -44,7 +45,8 @@\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n \n-public class FiltersAggregationBuilder extends AbstractAggregationBuilder<FiltersAggregationBuilder> {\n+public class FiltersAggregationBuilder extends AbstractAggregationBuilder<FiltersAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"filters\";\n \n private static final ParseField FILTERS_FIELD = new ParseField(\"filters\");",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FiltersAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.bucket.BucketUtils;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;\n@@ -50,7 +51,8 @@\n import java.io.IOException;\n import java.util.Objects;\n \n-public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.GeoPoint, GeoGridAggregationBuilder> {\n+public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.GeoPoint, GeoGridAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"geohash_grid\";\n public static final int DEFAULT_PRECISION = 5;\n public static final int DEFAULT_MAX_NUM_CELLS = 10000;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalOrder;\n import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;\n@@ -53,8 +54,8 @@\n /**\n * A builder for histograms on date fields.\n */\n-public class DateHistogramAggregationBuilder\n- extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, DateHistogramAggregationBuilder> {\n+public class DateHistogramAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, DateHistogramAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"date_histogram\";\n \n public static final Map<String, DateTimeUnit> DATE_FIELD_UNITS;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalOrder;\n import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;\n@@ -47,8 +48,8 @@\n /**\n * A builder for histograms on numeric fields.\n */\n-public class HistogramAggregationBuilder\n- extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, HistogramAggregationBuilder> {\n+public class HistogramAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, HistogramAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"histogram\";\n \n private static final ObjectParser<double[], Void> EXTENDED_BOUNDS_PARSER = new ObjectParser<>(",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;\n@@ -35,7 +36,7 @@\n import java.util.function.Function;\n \n public abstract class AbstractRangeBuilder<AB extends AbstractRangeBuilder<AB, R>, R extends Range>\n- extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, AB> {\n+ extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, AB> implements MultiBucketAggregationBuilder {\n \n protected final InternalRange.Factory<?, ?> rangeFactory;\n protected List<R> ranges = new ArrayList<>();",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.JLHScore;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;\n import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicParser;\n@@ -51,7 +52,8 @@\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n \n-public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, SignificantTermsAggregationBuilder> {\n+public class SignificantTermsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, SignificantTermsAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"significant_terms\";\n \n static final ParseField BACKGROUND_FILTER = new ParseField(\"background_filter\");",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalOrder;\n import org.elasticsearch.search.aggregations.InternalOrder.CompoundOrder;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregator.BucketCountThresholds;\n import org.elasticsearch.search.aggregations.support.ValueType;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n@@ -45,7 +46,8 @@\n import java.util.List;\n import java.util.Objects;\n \n-public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, TermsAggregationBuilder> {\n+public class TermsAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource, TermsAggregationBuilder>\n+ implements MultiBucketAggregationBuilder {\n public static final String NAME = \"terms\";\n \n public static final ParseField EXECUTION_HINT_FIELD_NAME = new ParseField(\"execution_hint\");",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/TermsAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n \n@@ -80,7 +81,7 @@ public String type() {\n * configured)\n */\n @Override\n- public final void validate(AggregatorFactory<?> parent, AggregatorFactory<?>[] factories,\n+ public final void validate(AggregatorFactory<?> parent, List<AggregationBuilder> factories,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n doValidate(parent, factories, pipelineAggregatorFactories);\n }\n@@ -98,7 +99,7 @@ public final PipelineAggregator create() throws IOException {\n return aggregator;\n }\n \n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] factories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> factories,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/AbstractPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -23,16 +23,19 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n-import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n+import org.elasticsearch.search.aggregations.bucket.MultiBucketAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.AbstractPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.BucketHelpers.GapPolicy;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n \n import java.io.IOException;\n import java.util.List;\n import java.util.Map;\n import java.util.Objects;\n+import java.util.Optional;\n \n public abstract class BucketMetricsPipelineAggregationBuilder<AF extends BucketMetricsPipelineAggregationBuilder<AF>>\n extends AbstractPipelineAggregationBuilder<AF> {\n@@ -106,12 +109,29 @@ public GapPolicy gapPolicy() {\n protected abstract PipelineAggregator createInternal(Map<String, Object> metaData) throws IOException;\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggBuilders,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n if (bucketsPaths.length != 1) {\n throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n + \" must contain a single entry for aggregation [\" + name + \"]\");\n }\n+ // Need to find the first agg name in the buckets path to check its a\n+ // multi bucket agg: aggs are split with '>' and can optionally have a\n+ // metric name after them by using '.' so need to split on both to get\n+ // just the agg name\n+ final String firstAgg = bucketsPaths[0].split(\"[>\\\\.]\")[0];\n+ Optional<AggregationBuilder> aggBuilder = aggBuilders.stream().filter((builder) -> builder.getName().equals(firstAgg))\n+ .findAny();\n+ if (aggBuilder.isPresent()) {\n+ if ((aggBuilder.get() instanceof MultiBucketAggregationBuilder) == false) {\n+ throw new IllegalArgumentException(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [\" + name + \"] found :\"\n+ + aggBuilder.get().getClass().getName() + \" for buckets path: \" + bucketsPaths[0]);\n+ }\n+ } else {\n+ throw new IllegalArgumentException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [\" + name + \"]: \" + bucketsPaths[0]);\n+ }\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n-import java.util.List;\n import java.util.Map;\n \n public class AvgBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder<AvgBucketPipelineAggregationBuilder> {\n@@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n return new AvgBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData);\n }\n \n- @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n- List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n- }\n-\n @Override\n protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n-import java.util.List;\n import java.util.Map;\n \n public class MaxBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder<MaxBucketPipelineAggregationBuilder> {\n@@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n return new MaxBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData);\n }\n \n- @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n- List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n- }\n-\n @Override\n protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/max/MaxBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n-import java.util.List;\n import java.util.Map;\n \n public class MinBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder<MinBucketPipelineAggregationBuilder> {\n@@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n return new MinBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData);\n }\n \n- @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n- List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n- }\n-\n @Override\n protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/min/MinBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n@@ -42,7 +43,7 @@ public class PercentilesBucketPipelineAggregationBuilder\n extends BucketMetricsPipelineAggregationBuilder<PercentilesBucketPipelineAggregationBuilder> {\n public static final String NAME = \"percentiles_bucket\";\n \n- private static final ParseField PERCENTS_FIELD = new ParseField(\"percents\");\n+ public static final ParseField PERCENTS_FIELD = new ParseField(\"percents\");\n \n private double[] percents = new double[] { 1.0, 5.0, 25.0, 50.0, 75.0, 95.0, 99.0 };\n \n@@ -94,12 +95,9 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggFactories,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n+ super.doValidate(parent, aggFactories, pipelineAggregatorFactories);\n \n for (Double p : percents) {\n if (p == null || p < 0.0 || p > 100.0) {",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/percentile/PercentilesBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,15 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n-import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator.Parser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n-import java.util.List;\n import java.util.Map;\n \n public class StatsBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder<StatsBucketPipelineAggregationBuilder> {\n@@ -58,15 +54,6 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n return new StatsBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData);\n }\n \n- @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n- List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n- }\n-\n @Override\n protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/StatsBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,10 +22,10 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n-import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator.Parser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n@@ -82,12 +82,9 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggBuilders,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n+ super.doValidate(parent, aggBuilders, pipelineAggregatorFactories);\n \n if (sigma < 0.0 ) {\n throw new IllegalStateException(ExtendedStatsBucketParser.SIGMA.getPreferredName()",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/stats/extended/ExtendedStatsBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,14 +22,11 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.search.aggregations.AggregatorFactory;\n-import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsParser;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregationBuilder;\n \n import java.io.IOException;\n-import java.util.List;\n import java.util.Map;\n \n public class SumBucketPipelineAggregationBuilder extends BucketMetricsPipelineAggregationBuilder<SumBucketPipelineAggregationBuilder> {\n@@ -56,15 +53,6 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n return new SumBucketPipelineAggregator(name, bucketsPaths, gapPolicy(), formatter(), metaData);\n }\n \n- @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n- List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n- if (bucketsPaths.length != 1) {\n- throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n- + \" must contain a single entry for aggregation [\" + name + \"]\");\n- }\n- }\n-\n @Override\n protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n return builder;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/sum/SumBucketPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory;\n@@ -96,7 +97,7 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggFactories,\n List<PipelineAggregationBuilder> pipelineAggregatorFactories) {\n if (bucketsPaths.length != 1) {\n throw new IllegalStateException(BUCKETS_PATH.getPreferredName()",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n@@ -155,7 +156,7 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggFactories,\n List<PipelineAggregationBuilder> pipelineAggregatoractories) {\n if (bucketsPaths.length != 1) {\n throw new IllegalStateException(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/derivative/DerivativePipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.PipelineAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory;\n@@ -255,7 +256,7 @@ protected PipelineAggregator createInternal(Map<String, Object> metaData) throws\n }\n \n @Override\n- public void doValidate(AggregatorFactory<?> parent, AggregatorFactory<?>[] aggFactories,\n+ public void doValidate(AggregatorFactory<?> parent, List<AggregationBuilder> aggFactories,\n List<PipelineAggregationBuilder> pipelineAggregatoractories) {\n if (minimize != null && minimize && !model.canBeMinimized()) {\n // If the user asks to minimize, but this model doesn't support",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/pipeline/movavg/MovAvgPipelineAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,16 @@\n \n package org.elasticsearch.search.aggregations.pipeline.bucketmetrics;\n \n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.avg.AvgBucketPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n public class AvgBucketTests extends AbstractBucketMetricsTestCase<AvgBucketPipelineAggregationBuilder> {\n \n@@ -28,5 +37,31 @@ protected AvgBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri\n return new AvgBucketPipelineAggregationBuilder(name, bucketsPath);\n }\n \n+ public void testValidate() {\n+ AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder(\"global\");\n+ AggregationBuilder multiBucketAgg = new TermsAggregationBuilder(\"terms\", ValueType.STRING);\n+ final List<AggregationBuilder> aggBuilders = new ArrayList<>();\n+ aggBuilders.add(singleBucketAgg);\n+ aggBuilders.add(multiBucketAgg);\n+\n+ // First try to point to a non-existent agg\n+ final AvgBucketPipelineAggregationBuilder builder = new AvgBucketPipelineAggregationBuilder(\"name\", \"invalid_agg>metric\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> builder.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [name]: invalid_agg>metric\", ex.getMessage());\n+ \n+ // Now try to point to a single bucket agg\n+ AvgBucketPipelineAggregationBuilder builder2 = new AvgBucketPipelineAggregationBuilder(\"name\", \"global>metric\");\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [name] found :\" + GlobalAggregationBuilder.class.getName()\n+ + \" for buckets path: global>metric\", ex.getMessage());\n+ \n+ // Now try to point to a valid multi-bucket agg (no exception should be thrown)\n+ AvgBucketPipelineAggregationBuilder builder3 = new AvgBucketPipelineAggregationBuilder(\"name\", \"terms>metric\");\n+ builder3.validate(null, aggBuilders, Collections.emptyList());\n+ \n+ }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/AvgBucketTests.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,16 @@\n \n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucketPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n import static org.hamcrest.Matchers.equalTo;\n \n@@ -52,4 +61,32 @@ public void testSigmaFromInt() throws Exception {\n \n assertThat(builder.sigma(), equalTo(5.0));\n }\n+\n+ public void testValidate() {\n+ AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder(\"global\");\n+ AggregationBuilder multiBucketAgg = new TermsAggregationBuilder(\"terms\", ValueType.STRING);\n+ final List<AggregationBuilder> aggBuilders = new ArrayList<>();\n+ aggBuilders.add(singleBucketAgg);\n+ aggBuilders.add(multiBucketAgg);\n+\n+ // First try to point to a non-existent agg\n+ final ExtendedStatsBucketPipelineAggregationBuilder builder = new ExtendedStatsBucketPipelineAggregationBuilder(\"name\",\n+ \"invalid_agg>metric\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> builder.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [name]: invalid_agg>metric\", ex.getMessage());\n+\n+ // Now try to point to a single bucket agg\n+ ExtendedStatsBucketPipelineAggregationBuilder builder2 = new ExtendedStatsBucketPipelineAggregationBuilder(\"name\", \"global>metric\");\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [name] found :\" + GlobalAggregationBuilder.class.getName()\n+ + \" for buckets path: global>metric\", ex.getMessage());\n+\n+ // Now try to point to a valid multi-bucket agg (no exception should be\n+ // thrown)\n+ ExtendedStatsBucketPipelineAggregationBuilder builder3 = new ExtendedStatsBucketPipelineAggregationBuilder(\"name\", \"terms>metric\");\n+ builder3.validate(null, aggBuilders, Collections.emptyList());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/ExtendedStatsBucketTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,16 @@\n \n package org.elasticsearch.search.aggregations.pipeline.bucketmetrics;\n \n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.max.MaxBucketPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n public class MaxBucketTests extends AbstractBucketMetricsTestCase<MaxBucketPipelineAggregationBuilder> {\n \n@@ -28,5 +37,31 @@ protected MaxBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri\n return new MaxBucketPipelineAggregationBuilder(name, bucketsPath);\n }\n \n+ public void testValidate() {\n+ AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder(\"global\");\n+ AggregationBuilder multiBucketAgg = new TermsAggregationBuilder(\"terms\", ValueType.STRING);\n+ final List<AggregationBuilder> aggBuilders = new ArrayList<>();\n+ aggBuilders.add(singleBucketAgg);\n+ aggBuilders.add(multiBucketAgg);\n+\n+ // First try to point to a non-existent agg\n+ final MaxBucketPipelineAggregationBuilder builder = new MaxBucketPipelineAggregationBuilder(\"name\", \"invalid_agg>metric\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> builder.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [name]: invalid_agg>metric\", ex.getMessage());\n+\n+ // Now try to point to a single bucket agg\n+ MaxBucketPipelineAggregationBuilder builder2 = new MaxBucketPipelineAggregationBuilder(\"name\", \"global>metric\");\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [name] found :\" + GlobalAggregationBuilder.class.getName()\n+ + \" for buckets path: global>metric\", ex.getMessage());\n+\n+ // Now try to point to a valid multi-bucket agg (no exception should be\n+ // thrown)\n+ MaxBucketPipelineAggregationBuilder builder3 = new MaxBucketPipelineAggregationBuilder(\"name\", \"terms>metric\");\n+ builder3.validate(null, aggBuilders, Collections.emptyList());\n+ }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MaxBucketTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,16 @@\n \n package org.elasticsearch.search.aggregations.pipeline.bucketmetrics;\n \n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.min.MinBucketPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n public class MinBucketTests extends AbstractBucketMetricsTestCase<MinBucketPipelineAggregationBuilder> {\n \n@@ -28,5 +37,31 @@ protected MinBucketPipelineAggregationBuilder doCreateTestAggregatorFactory(Stri\n return new MinBucketPipelineAggregationBuilder(name, bucketsPath);\n }\n \n+ public void testValidate() {\n+ AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder(\"global\");\n+ AggregationBuilder multiBucketAgg = new TermsAggregationBuilder(\"terms\", ValueType.STRING);\n+ final List<AggregationBuilder> aggBuilders = new ArrayList<>();\n+ aggBuilders.add(singleBucketAgg);\n+ aggBuilders.add(multiBucketAgg);\n+\n+ // First try to point to a non-existent agg\n+ final MinBucketPipelineAggregationBuilder builder = new MinBucketPipelineAggregationBuilder(\"name\", \"invalid_agg>metric\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> builder.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [name]: invalid_agg>metric\", ex.getMessage());\n+\n+ // Now try to point to a single bucket agg\n+ MinBucketPipelineAggregationBuilder builder2 = new MinBucketPipelineAggregationBuilder(\"name\", \"global>metric\");\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [name] found :\" + GlobalAggregationBuilder.class.getName()\n+ + \" for buckets path: global>metric\", ex.getMessage());\n+\n+ // Now try to point to a valid multi-bucket agg (no exception should be\n+ // thrown)\n+ MinBucketPipelineAggregationBuilder builder3 = new MinBucketPipelineAggregationBuilder(\"name\", \"terms>metric\");\n+ builder3.validate(null, aggBuilders, Collections.emptyList());\n+ }\n \n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/MinBucketTests.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,16 @@\n \n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.PercentilesBucketPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.support.ValueType;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n \n import static org.hamcrest.Matchers.equalTo;\n \n@@ -56,4 +65,32 @@ public void testPercentsFromMixedArray() throws Exception {\n \n assertThat(builder.percents(), equalTo(new double[]{0.0, 20.0, 50.0, 75.99}));\n }\n+\n+ public void testValidate() {\n+ AggregationBuilder singleBucketAgg = new GlobalAggregationBuilder(\"global\");\n+ AggregationBuilder multiBucketAgg = new TermsAggregationBuilder(\"terms\", ValueType.STRING);\n+ final List<AggregationBuilder> aggBuilders = new ArrayList<>();\n+ aggBuilders.add(singleBucketAgg);\n+ aggBuilders.add(multiBucketAgg);\n+\n+ // First try to point to a non-existent agg\n+ final PercentilesBucketPipelineAggregationBuilder builder = new PercentilesBucketPipelineAggregationBuilder(\"name\",\n+ \"invalid_agg>metric\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n+ () -> builder.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" aggregation does not exist for aggregation [name]: invalid_agg>metric\", ex.getMessage());\n+\n+ // Now try to point to a single bucket agg\n+ PercentilesBucketPipelineAggregationBuilder builder2 = new PercentilesBucketPipelineAggregationBuilder(\"name\", \"global>metric\");\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder2.validate(null, aggBuilders, Collections.emptyList()));\n+ assertEquals(\"The first aggregation in \" + PipelineAggregator.Parser.BUCKETS_PATH.getPreferredName()\n+ + \" must be a multi-bucket aggregation for aggregation [name] found :\" + GlobalAggregationBuilder.class.getName()\n+ + \" for buckets path: global>metric\", ex.getMessage());\n+\n+ // Now try to point to a valid multi-bucket agg (no exception should be\n+ // thrown)\n+ PercentilesBucketPipelineAggregationBuilder builder3 = new PercentilesBucketPipelineAggregationBuilder(\"name\", \"terms>metric\");\n+ builder3.validate(null, aggBuilders, Collections.emptyList());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/PercentilesBucketTests.java",
"status": "modified"
}
]
} |
{
"body": "Due to the weird way of structuring the serialization code in `AcknowledgedRequest`, many request types forgot to properly serialize the request timeout, for example \"index deletion\", \"index rollover\", \"index shrink\", \"putting pipeline\", and other requests. This means that if those requests were not directly sent to the master node, the acknowledgement timeout information would be lost (and the default used instead).\r\nSome requests also don't properly expose the timeout mechanism in the REST layer, such as put / delete stored script. This PR fixes all that.\r\n\r\n5.6 backport is here: #26213",
"comments": [
{
"body": "Thanks @nik9000 @s1monw ",
"created_at": "2017-08-15T23:44:33Z"
}
],
"number": 26189,
"title": "Serialize and expose timeout of acknowledged requests in REST layer"
} | {
"body": "Due to the weird way of structuring the serialization code in AcknowledgedRequest, many request types forgot to properly serialize the request timeout, for example \"index deletion\", \"index rollover\", \"index shrink\", \"putting pipeline\", and other requests. This means that if those requests were not directly sent to the master node, the acknowledgement timeout information would be lost (and the default used instead).\r\nSome requests also don't properly expose the timeout mechanism in the REST layer, such as put / delete stored script. This PR fixes all that.\r\n\r\nThis is the 5.6 backport of #26189",
"number": 26213,
"review_comments": [],
"title": "Serialize and expose timeout of acknowledged requests in REST layer (ES 5.6)"
} | {
"commits": [
{
"message": "Serialize timeout of AcknowledgedRequests\n\nDue to the weird way of structuring the serialization code, many request types forgot to properly\nserialize the request timeout, for example index deletion or index rollover / shrink request, putting\npipeline request, ..."
}
],
"files": [
{
"diff": "@@ -20,37 +20,18 @@\n package org.elasticsearch.action.admin.indices.delete;\n \n import org.elasticsearch.action.support.IndicesOptions;\n-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;\n+import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n-import org.elasticsearch.common.unit.TimeValue;\n \n /**\n *\n */\n-public class DeleteIndexRequestBuilder extends MasterNodeOperationRequestBuilder<DeleteIndexRequest, DeleteIndexResponse, DeleteIndexRequestBuilder> {\n+public class DeleteIndexRequestBuilder extends AcknowledgedRequestBuilder<DeleteIndexRequest, DeleteIndexResponse, DeleteIndexRequestBuilder> {\n \n public DeleteIndexRequestBuilder(ElasticsearchClient client, DeleteIndexAction action, String... indices) {\n super(client, action, new DeleteIndexRequest(indices));\n }\n \n- /**\n- * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults\n- * to <tt>60s</tt>.\n- */\n- public DeleteIndexRequestBuilder setTimeout(TimeValue timeout) {\n- request.timeout(timeout);\n- return this;\n- }\n-\n- /**\n- * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults\n- * to <tt>10s</tt>.\n- */\n- public DeleteIndexRequestBuilder setTimeout(String timeout) {\n- request.timeout(timeout);\n- return this;\n- }\n-\n /**\n * Specifies what type of requested indices to ignore and wildcard indices expressions.\n * <p>",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.action.support.master;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ack.AckedRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -73,19 +74,45 @@ public final TimeValue timeout() {\n /**\n * Reads the timeout value\n */\n+ @Deprecated\n protected void readTimeout(StreamInput in) throws IOException {\n- timeout = new TimeValue(in);\n+ // in older ES versions, we would explicitly call this method in subclasses\n+ // now we properly serialize the timeout value as part of the readFrom method\n+ if (in.getVersion().before(Version.V_5_6_0_UNRELEASED)) {\n+ timeout = new TimeValue(in);\n+ }\n }\n \n /**\n * writes the timeout value\n */\n+ @Deprecated\n protected void writeTimeout(StreamOutput out) throws IOException {\n- timeout.writeTo(out);\n+ // in older ES versions, we would explicitly call this method in subclasses\n+ // now we properly serialize the timeout value as part of the writeTo method\n+ if (out.getVersion().before(Version.V_5_6_0_UNRELEASED)) {\n+ timeout.writeTo(out);\n+ }\n }\n \n @Override\n public TimeValue ackTimeout() {\n return timeout;\n }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ if (in.getVersion().onOrAfter(Version.V_5_6_0_UNRELEASED)) {\n+ timeout = new TimeValue(in);\n+ }\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_5_6_0_UNRELEASED)) {\n+ timeout.writeTo(out);\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java",
"status": "modified"
},
{
"diff": "@@ -60,6 +60,9 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client\n }\n \n DeleteStoredScriptRequest deleteStoredScriptRequest = new DeleteStoredScriptRequest(id, lang);\n+ deleteStoredScriptRequest.timeout(request.paramAsTime(\"timeout\", deleteStoredScriptRequest.timeout()));\n+ deleteStoredScriptRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", deleteStoredScriptRequest.masterNodeTimeout()));\n+\n return channel -> client.admin().cluster().deleteStoredScript(deleteStoredScriptRequest, new AcknowledgedRestListener<>(channel));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java",
"status": "modified"
},
{
"diff": "@@ -66,6 +66,8 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client\n }\n \n PutStoredScriptRequest putRequest = new PutStoredScriptRequest(id, lang, content, request.getXContentType());\n+ putRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", putRequest.masterNodeTimeout()));\n+ putRequest.timeout(request.paramAsTime(\"timeout\", putRequest.timeout()));\n return channel -> client.admin().cluster().putStoredScript(putRequest, new AcknowledgedRestListener<>(channel));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,14 @@\n }\n },\n \"params\" : {\n+ \"timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Explicit operation timeout\"\n+ },\n+ \"master_timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Specify timeout for connection to master\"\n+ }\n }\n },\n \"body\": null",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,14 @@\n }\n },\n \"params\" : {\n+ \"timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Explicit operation timeout\"\n+ },\n+ \"master_timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Specify timeout for connection to master\"\n+ }\n }\n },\n \"body\": {",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json",
"status": "modified"
}
]
} |
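The mechanism both the master PR (#26189) and this 5.6 backport (#26213) rely on is easier to see in isolation. Below is a minimal, self-contained sketch of the idea, using plain `java.io` streams and made-up class names rather than the actual Elasticsearch request classes, and omitting the 5.6 wire-compatibility version guards: once the base request class owns reading and writing the shared timeout, a subclass can no longer forget it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Simplified model of the pattern applied in the diff above: the base class owns
// (de)serialization of the shared acknowledgement timeout. All names are illustrative.
abstract class AckedRequestSketch {
    long timeoutMillis = 30_000L; // shared acknowledgement timeout, default 30s

    void readFrom(DataInput in) throws IOException {
        timeoutMillis = in.readLong(); // always read by the base class
    }

    void writeTo(DataOutput out) throws IOException {
        out.writeLong(timeoutMillis); // always written by the base class
    }
}

class DeleteThingRequestSketch extends AckedRequestSketch {
    String name;

    @Override
    void readFrom(DataInput in) throws IOException {
        super.readFrom(in);  // timeout handled once, up front
        name = in.readUTF(); // the subclass only deals with its own fields
    }

    @Override
    void writeTo(DataOutput out) throws IOException {
        super.writeTo(out);
        out.writeUTF(name);
    }
}

public class AckedTimeoutDemo {
    public static void main(String[] args) throws IOException {
        DeleteThingRequestSketch original = new DeleteThingRequestSketch();
        original.timeoutMillis = 5_000L;
        original.name = "my-index";

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));

        DeleteThingRequestSketch copy = new DeleteThingRequestSketch();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        // Before the fix, a subclass that skipped the timeout serialization would
        // silently fall back to the default timeout on the receiving node.
        System.out.println(copy.timeoutMillis + "ms " + copy.name); // 5000ms my-index
    }
}
```

The pre-fix failure mode is the opposite of this sketch: each subclass had to remember to call `readTimeout`/`writeTimeout` itself, several did not, and the timeout quietly reverted to its default on the receiving node.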
{
"body": "Elasticsearch 5.5.0\r\nI realized that logger in AbstractXContentParser is not static. This causes a lot of new logger instantiations\r\n\r\n```\r\n private final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass()));\r\n```\r\n\r\nThis is unfortunately very expensive when I have a script which looks into _source to extract value from it. It seems this logger is instantiated for each document which has very significant impact on performance. See below. Many threads are hanging on \"sun.reflect.Reflection.getCallerClass(Native Method)\"\r\n\r\n``` java.lang.Thread.State: RUNNABLE\r\n at sun.reflect.Reflection.getCallerClass(Native Method)\r\n at java.lang.Class.newInstance(Class.java:397)\r\n at org.apache.logging.log4j.spi.AbstractLogger.createDefaultMessageFactory(AbstractLogger.java:212)\r\n at org.apache.logging.log4j.spi.AbstractLogger.<init>(AbstractLogger.java:128)\r\n at org.apache.logging.log4j.spi.ExtendedLoggerWrapper.<init>(ExtendedLoggerWrapper.java:44)\r\n at org.elasticsearch.common.logging.PrefixLogger.<init>(PrefixLogger.java:46)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:53)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:49)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:57)\r\n at org.elasticsearch.common.logging.Loggers.getLogger(Loggers.java:101)\r\n```\r\n\r\nCan this logger be as static one?\r\n\r\n",
"comments": [
{
"body": "I think the reason this is not static is that we want to use the class name of the concrete class for the logger name instead of creating a logger for the `AbstractXContentParser` itself. I see that @danielmitterdorfer added this deprecation logging so maybe he has an opinion on whether we need to have the class name of the concrete class here or if creating a logger for `AbstractXContentParser` would be ok so the logger can be static?",
"created_at": "2017-07-25T12:01:27Z"
},
{
"body": "Alternatively, it is possible to instantiate depreciation logger in the same way it is done in some part of `org.elasticsearch.common.settings.Setting`. This is done conditionally only if depreciation has been detected.\r\n\r\n```\r\n protected void checkDeprecation(Settings settings) {\r\n // They're using the setting, so we need to tell them to stop\r\n if (this.isDeprecated() && this.exists(settings) && settings.addDeprecatedSetting(this)) {\r\n // It would be convenient to show its replacement key, but replacement is often not so simple\r\n final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass()));\r\n deprecationLogger.deprecated(\"[{}] setting was deprecated in Elasticsearch and will be removed in a future release! \" +\r\n \"See the breaking changes documentation for the next major version.\", getKey());\r\n }\r\n }\r\n```\r\n\r\nNotice that in `AbstractXContentParser` `deprecationLogger` is used only in one place and in most cases the condition will not be matched so instantiating `deprecationLogger` on each `AbstractXContentParser` instantiation seems to be redundant and can hurt performance significantly.\r\n\r\n``` \r\n if (interpretedAsLenient) {\r\n deprecationLogger.deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\r\n }\r\n```",
"created_at": "2017-07-25T12:09:44Z"
},
{
"body": "@colings86 Yes, the original motivation was to have the concrete class name to narrow the focus. But given the performance impact + the fact that there are only four implementations (JSON, CBOR, Smile and Yaml) I think the pragmatic choice is to just use a static instance here?",
"created_at": "2017-07-25T12:15:05Z"
},
{
"body": "Seems it did not auto-close. Closed by #25881.",
"created_at": "2017-07-25T13:58:14Z"
},
{
"body": "We really need to be more careful when we make something static, especially for a class that is so fundamental. The issue here is that merely constructing a list setting (e.g., the setting object for `path.data`) causes a JSON content parser to be initialized which causes this static initializer to run which touches logging. This will happen *before* logging is even configured and that's a no-no.",
"created_at": "2017-08-14T21:34:08Z"
},
{
"body": "For this I opened #26210.",
"created_at": "2017-08-14T21:55:22Z"
}
],
"number": 25879,
"title": "AbstractXContentParser - logger is not static"
} | {
"body": "The deprecation logger in AbstractXContentParser is static. This is done for performance reasons, to avoid constructing a deprecation logger for every parser of which there can be many (e.g., one for every document when scripting). This is fine, but the static here is a problem because it means we touch loggers before logging is initialized (when constructing a list setting in Environment which is a precursor to initializing logging). Therefore, to maintain the previous change (not constructing a parser for every instance) but avoiding the problems with static, we have to lazy initialize here. This is not perfect, there is a volatile read behind the scenes. This could be avoided (e.g., by not using set once) but I prefer the safety that set once provides. I think this should be the approach unless it otherwise proves problematic.\r\n\r\nRelates #25879\r\n\r\n",
"number": 26210,
"review_comments": [
{
"body": "any reason we didn't use the [holder](https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom) idiom here? \r\n\r\nie:\r\n```Java\r\n\r\nprivate static class Holder {\r\n static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\r\n}\r\n\r\nprivate static DeprecationLogger getDeprecationLogger() {\r\n return Holder.deprecationLogger;\r\n}\r\n```",
"created_at": "2017-08-15T11:52:35Z"
},
{
"body": "We can do that too, it's better, I'll open a PR later.",
"created_at": "2017-08-15T11:56:17Z"
},
{
"body": "++ thanks",
"created_at": "2017-08-15T13:17:36Z"
},
{
"body": "I opened #26218.",
"created_at": "2017-08-15T13:30:34Z"
}
],
"title": "Lazy initialize deprecation logger in parser"
} | {
"commits": [
{
"message": "Lazy initialize deprecation logger in parser\n\nThe deprecation logger in AbstractXContentParser is static. This is done\nfor performance reasons, to avoid constructing a deprecation logger for\nevery parser of which there can be many (e.g., one for every document\nwhen scripting). This is fine, but the static here is a problem because\nit means we touch loggers before logging is initialized (when\nconstructing a list setting in Environment which is a precursor to\ninitializing logging). Therefore, to maintain the previous change (not\nconstructing a parser for every instance) but avoiding the problems with\nstatic, we have to lazy initialize here. This is not perfect, there is a\nvolatile read behind the scenes. This could be avoided (e.g., by not\nusing set once) but I prefer the safety that set once provides. I think\nthis should be the approach unless it otherwise proves problematic."
},
{
"message": "Merge branch '5.6' into static-deprecation-logger\n\n* 5.6:\n Allow not configure logging without config\n Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS (#26127)\n Update reference from DateHistogram to Histogram (#26169)"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.xcontent.support;\n \n import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.SetOnce;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.logging.DeprecationLogger;\n@@ -54,7 +55,31 @@ private static void checkCoerceString(boolean coerce, Class<? extends Number> cl\n }\n }\n \n- private static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\n+ // do not use this field directly, use AbstractXContentParser#getDeprecationLogger\n+ private static final SetOnce<DeprecationLogger> deprecationLogger = new SetOnce<>();\n+\n+ private static DeprecationLogger getDeprecationLogger() {\n+ /*\n+ * This implementation is intentionally verbose to make the minimum number of volatile reads. In the case that the set once is\n+ * already initialized, this implementation makes exactly one volatile read. In the case that the set once is not initialized we\n+ * make exactly two volatile reads.\n+ */\n+ final DeprecationLogger logger = deprecationLogger.get();\n+ if (logger == null) {\n+ synchronized (AbstractXContentParser.class) {\n+ final DeprecationLogger innerLogger = deprecationLogger.get();\n+ if (innerLogger == null) {\n+ final DeprecationLogger newLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\n+ deprecationLogger.set(newLogger);\n+ return newLogger;\n+ } else {\n+ return innerLogger;\n+ }\n+ }\n+ } else {\n+ return logger;\n+ }\n+ }\n \n private final NamedXContentRegistry xContentRegistry;\n \n@@ -112,7 +137,7 @@ public boolean booleanValue() throws IOException {\n booleanValue = doBooleanValue();\n }\n if (interpretedAsLenient) {\n- deprecationLogger.deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\n+ getDeprecationLogger().deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\n }\n return booleanValue;\n ",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
}
]
} |
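The holder idiom raised in the review comments above (followed up in #26218) avoids the volatile read that the `SetOnce` approach incurs. The sketch below is illustrative only, with invented class names standing in for the real parser and logging classes; it shows how the nested holder class defers logger construction, and any touch of the logging framework, until the deprecation path is actually hit.

```java
// Self-contained sketch of the initialization-on-demand holder idiom. Names are
// illustrative, not the real Elasticsearch classes.
class DeprecationLoggerSketch {
    private final String name;

    DeprecationLoggerSketch(String name) {
        this.name = name;
        System.out.println("constructed deprecation logger for " + name); // happens once
    }

    void deprecated(String message) {
        System.out.println("[deprecation][" + name + "] " + message);
    }
}

class LenientBooleanParserSketch {
    // The Holder class is not initialized until getDeprecationLogger() is first
    // called, so merely loading or instantiating the parser never touches logging.
    // JVM class-initialization locking gives safe publication without a volatile read.
    private static final class Holder {
        static final DeprecationLoggerSketch LOGGER =
                new DeprecationLoggerSketch(LenientBooleanParserSketch.class.getName());
    }

    private static DeprecationLoggerSketch getDeprecationLogger() {
        return Holder.LOGGER;
    }

    boolean parse(String rawValue) {
        boolean strict = "true".equals(rawValue) || "false".equals(rawValue);
        if (!strict) {
            getDeprecationLogger().deprecated("Expected a boolean [true/false] but got [" + rawValue + "]");
        }
        return "true".equals(rawValue) || "on".equals(rawValue) || "yes".equals(rawValue) || "1".equals(rawValue);
    }
}

public class HolderIdiomDemo {
    public static void main(String[] args) {
        LenientBooleanParserSketch parser = new LenientBooleanParserSketch(); // no logger yet
        System.out.println(parser.parse("true")); // strict value: still no logger
        System.out.println(parser.parse("yes"));  // lenient value: logger built here, once
        System.out.println(parser.parse("on"));   // reused, no second construction
    }
}
```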
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 6.0.0-beta1-SNAPSHOT, Build: 684ccc6/2017-08-07T10:13:11.429Z, JVM: 1.8.0_141\r\nand\r\nVersion: 6.1.0-SNAPSHOT, Build: cd79600/2017-08-04T17:19:57.932Z, JVM: 1.8.0_141\r\n\r\n**Plugins installed**: [X-Pack, repository-s3]\r\n\r\n**JVM version** (`java -version`):\r\n\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux 4.4.35-33.55.amzn1.x86_64 #1 SMP Tue Dec 6 20:30:04 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux\r\nand\r\nDarwin 16.7.0 Darwin Kernel Version 16.7.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nRunning `elasticsearch -d` and specifying a custom pid file causes a spurious error message. \r\n\r\n**Steps to reproduce**:\r\n\r\nPlease include a *minimal* but *complete* recreation of the problem, including\r\n(e.g.) index creation, mappings, settings, query etc. The easier you make for\r\nus to reproduce it, the more likely that somebody will take the time to look at it.\r\n\r\n 1. Run `bin/elasticsearch -p /data/test.pid -d` \r\n\r\nElasticsearch runs, the pid file is created, however output to console is as follows:\r\n`bin/elasticsearch: line 30: [: /data/test.pid: binary operator expected`\r\n \r\n**Provide logs (if relevant)**:\r\nn/a\r\n",
"comments": [],
"number": 26080,
"title": "Spurious error message when running elasticsearch daemon using a custom pid file"
} | {
"body": "In bin/elasticsearch, we grep the command line looking for various flags that indicate the process should be daemonized. To do this, we simply test command status from the grep. Sadly, this is utterly broken (unreleased) as instead we are testing the output of the command, not the command status. This commit fixes this issue.\r\n\r\nCloses #26080\r\n\r\n",
"number": 26196,
"review_comments": [],
"title": "Fix daemonization command status test"
} | {
"commits": [
{
"message": "Fix daemonization command status test\n\nIn bin/elasticsearch, we grep the command line looking for various flags\nthat indicate the process should be daemonized. To do this, we simply\ntest command status from the grep. Sadly, this is utterly broken\n(unreleased) as instead we are testing the output of the command, not\nthe command status. This commit fixes this issue."
}
],
"files": [
{
"diff": "@@ -27,7 +27,7 @@ ES_JVM_OPTIONS=\"$CONF_DIR\"/jvm.options\n ES_JAVA_OPTS=\"`parse_jvm_options \"$ES_JVM_OPTIONS\"` $ES_JAVA_OPTS\"\n \n # manual parsing to find out, if process should be detached\n-if [ ! `echo $* | grep -E '(^-d |-d$| -d |--daemonize$|--daemonize )'` ]; then\n+if ! echo $* | grep -E '(^-d |-d$| -d |--daemonize$|--daemonize )' > /dev/null; then\n exec \\\n \"$JAVA\" \\\n $ES_JAVA_OPTS \\",
"filename": "distribution/src/main/resources/bin/elasticsearch",
"status": "modified"
}
]
} |
{
"body": "The configuration setting for Index Tombstones \"cluster.indices.tombstones.size\" described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster.html#cluster-max-tombstones\r\n\r\nis unable to be set and a startup exception is thrown suggesting the configuration is an unknown setting.\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\n- tested on\r\n - 5.4.3\r\n - 5.5.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`):\r\n```\r\njava version \"1.8.0_77\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_77-b03)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n- Tested on:\r\n - OSX\r\n - `Darwin Petes-MBP 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64`\r\n - Linux\r\n - `Linux inspiron 4.11.11-300.fc26.x86_64 #1 SMP Mon Jul 17 16:32:11 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\n**Steps to reproduce**:\r\n\r\n\r\n 1. Run up Elasticsearch and use this setting in the configuration:\r\n - `cluster.indices.tombstones.size: 1`\r\n 2. Start Elasticsearch and check logs, which should contain a startup exception:\r\n - `org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting.....`\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-08-14T18:07:21,539][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]\r\norg.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [cluster.indices.tombstones.size] please check that any required plugins are installed, or check the breaking changes documentation for removed settings\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.5.1.jar:5.5.1]\r\nCaused by: java.lang.IllegalArgumentException: unknown setting [cluster.indices.tombstones.size] please check that any required plugins are installed, or check the breaking changes documentation for removed settings\r\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:293) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:256) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:139) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.node.Node.<init>(Node.java:343) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.node.Node.<init>(Node.java:244) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:232) 
~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n ... 6 more\r\n```\r\n\r\n",
"comments": [
{
"body": "I've opened #26193",
"created_at": "2017-08-14T08:46:44Z"
}
],
"number": 26191,
"title": "Index tombstones size configuration is unable to be set as it's unrecognised"
} | {
"body": "Closes #26191",
"number": 26193,
"review_comments": [],
"title": "Register setting `cluster.indices.tombstones.size`"
} | {
"commits": [
{
"message": "Register setting cluster.indices.tombstones.size"
}
],
"files": [
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.cluster.InternalClusterInfoService;\n import org.elasticsearch.cluster.NodeConnectionsService;\n import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings;\n import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;\n@@ -402,6 +403,7 @@ public void apply(Settings value, Settings current, Settings previous) {\n SearchModule.INDICES_MAX_CLAUSE_COUNT_SETTING,\n ThreadPool.ESTIMATED_TIME_INTERVAL_SETTING,\n FastVectorHighlighter.SETTING_TV_HIGHLIGHT_MULTI_VALUE,\n- Node.BREAKER_TYPE_KEY\n+ Node.BREAKER_TYPE_KEY,\n+ IndexGraveyard.SETTING_MAX_TOMBSTONES\n )));\n }",
"filename": "core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexGraveyard;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -331,7 +332,8 @@ public void testIndexDeletionWhenNodeRejoins() throws Exception {\n \n final List<String> nodes;\n logger.info(\"--> starting a cluster with \" + numNodes + \" nodes\");\n- nodes = internalCluster().startNodes(numNodes);\n+ nodes = internalCluster().startNodes(numNodes,\n+ Settings.builder().put(IndexGraveyard.SETTING_MAX_TOMBSTONES.getKey(), randomIntBetween(10, 100)).build());\n logger.info(\"--> create an index\");\n createIndex(indexName);\n ",
"filename": "core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java",
"status": "modified"
}
]
} |
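The failure mode in the issue above comes down to settings validation rejecting any key that was never registered. The toy model below illustrates that check with invented code; the actual fix is simply the one-line registration of `IndexGraveyard.SETTING_MAX_TOMBSTONES` in `ClusterSettings` shown in the diff.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of why the node refused to start: validation only accepts keys that
// have been registered. This is illustrative code, not Elasticsearch's validator.
public class SettingRegistrationDemo {
    static final Set<String> REGISTERED = new HashSet<>(Arrays.asList(
        "cluster.routing.allocation.enable" // example of an already-registered key
    ));

    static void validate(Map<String, String> settings) {
        for (String key : settings.keySet()) {
            if (!REGISTERED.contains(key)) {
                throw new IllegalArgumentException("unknown setting [" + key + "]");
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("cluster.indices.tombstones.size", "100");

        try {
            validate(config); // fails: the key was never registered
        } catch (IllegalArgumentException e) {
            System.out.println("startup fails: " + e.getMessage());
        }

        REGISTERED.add("cluster.indices.tombstones.size"); // the effect of the fix
        validate(config); // now passes
        System.out.println("startup succeeds once the setting is registered");
    }
}
```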
{
"body": "Due to the weird way of structuring the serialization code in AcknowledgedRequest, many request types forgot to properly serialize the request timeout, for example \"index deletion\", \"index rollover\", \"index shrink\", \"putting pipeline\", and other requests. This means that if those requests were not directly sent to the master node, the acknowledgement timeout information would be lost (and the default used instead).\r\nSome requests also don't properly expose the timeout mechanism in the REST layer, such as put / delete stored script. This PR fixes all that.\r\n\r\nThis is the 5.6 backport of #26189",
"comments": [],
"number": 26213,
"title": "Serialize and expose timeout of acknowledged requests in REST layer (ES 5.6)"
} | {
"body": "Due to the weird way of structuring the serialization code in `AcknowledgedRequest`, many request types forgot to properly serialize the request timeout, for example \"index deletion\", \"index rollover\", \"index shrink\", \"putting pipeline\", and other requests. This means that if those requests were not directly sent to the master node, the acknowledgement timeout information would be lost (and the default used instead).\r\nSome requests also don't properly expose the timeout mechanism in the REST layer, such as put / delete stored script. This PR fixes all that.\r\n\r\n5.6 backport is here: #26213",
"number": 26189,
"review_comments": [],
"title": "Serialize and expose timeout of acknowledged requests in REST layer"
} | {
"commits": [
{
"message": "Serialize timeout of AcknowledgedRequests\n\nDue to the weird way of structuring the serialization code, many request types forgot to properly\nserialize the request timeout, for example index deletion or index rollover / shrink request, putting\npipeline request, ..."
},
{
"message": "Expose timeout and master_timeout for all acked request types"
},
{
"message": "update rest-api-spec"
},
{
"message": "Simplify due to 5.6 bwc layer"
},
{
"message": "remove superfluous bwc tests"
}
],
"files": [
{
"diff": "@@ -81,13 +81,11 @@ public String name() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n name = in.readString();\n- readTimeout(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeString(name);\n- writeTimeout(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java",
"status": "modified"
},
{
"diff": "@@ -220,7 +220,6 @@ public void readFrom(StreamInput in) throws IOException {\n name = in.readString();\n type = in.readString();\n settings = readSettingsFromStream(in);\n- readTimeout(in);\n verify = in.readBoolean();\n }\n \n@@ -230,7 +229,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(name);\n out.writeString(type);\n writeSettingsToStream(settings, out);\n- writeTimeout(out);\n out.writeBoolean(verify);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java",
"status": "modified"
},
{
"diff": "@@ -81,13 +81,11 @@ public String name() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n name = in.readString();\n- readTimeout(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeString(name);\n- writeTimeout(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java",
"status": "modified"
},
{
"diff": "@@ -128,7 +128,6 @@ public void readFrom(StreamInput in) throws IOException {\n dryRun = in.readBoolean();\n explain = in.readBoolean();\n retryFailed = in.readBoolean();\n- readTimeout(in);\n }\n \n @Override\n@@ -138,7 +137,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(dryRun);\n out.writeBoolean(explain);\n out.writeBoolean(retryFailed);\n- writeTimeout(out);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteRequest.java",
"status": "modified"
},
{
"diff": "@@ -148,14 +148,12 @@ public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n transientSettings = readSettingsFromStream(in);\n persistentSettings = readSettingsFromStream(in);\n- readTimeout(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n writeSettingsToStream(transientSettings, out);\n writeSettingsToStream(persistentSettings, out);\n- writeTimeout(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java",
"status": "modified"
},
{
"diff": "@@ -467,14 +467,12 @@ public ActionRequestValidationException validate() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n allAliasActions = in.readList(AliasActions::new);\n- readTimeout(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeList(allAliasActions);\n- writeTimeout(out);\n }\n \n public IndicesOptions indicesOptions() {",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java",
"status": "modified"
},
{
"diff": "@@ -105,15 +105,13 @@ public CloseIndexRequest indicesOptions(IndicesOptions indicesOptions) {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n indices = in.readStringArray();\n- readTimeout(in);\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeStringArray(indices);\n- writeTimeout(out);\n indicesOptions.writeIndicesOptions(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -487,7 +487,6 @@ public void readFrom(StreamInput in) throws IOException {\n cause = in.readString();\n index = in.readString();\n settings = readSettingsFromStream(in);\n- readTimeout(in);\n int size = in.readVInt();\n for (int i = 0; i < size; i++) {\n final String type = in.readString();\n@@ -518,7 +517,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(cause);\n out.writeString(index);\n writeSettingsToStream(settings, out);\n- writeTimeout(out);\n out.writeVInt(mappings.size());\n for (Map.Entry<String, String> entry : mappings.entrySet()) {\n out.writeString(entry.getKey());",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -20,34 +20,15 @@\n package org.elasticsearch.action.admin.indices.delete;\n \n import org.elasticsearch.action.support.IndicesOptions;\n-import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder;\n+import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n-import org.elasticsearch.common.unit.TimeValue;\n \n-public class DeleteIndexRequestBuilder extends MasterNodeOperationRequestBuilder<DeleteIndexRequest, DeleteIndexResponse, DeleteIndexRequestBuilder> {\n+public class DeleteIndexRequestBuilder extends AcknowledgedRequestBuilder<DeleteIndexRequest, DeleteIndexResponse, DeleteIndexRequestBuilder> {\n \n public DeleteIndexRequestBuilder(ElasticsearchClient client, DeleteIndexAction action, String... indices) {\n super(client, action, new DeleteIndexRequest(indices));\n }\n \n- /**\n- * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults\n- * to <tt>60s</tt>.\n- */\n- public DeleteIndexRequestBuilder setTimeout(TimeValue timeout) {\n- request.timeout(timeout);\n- return this;\n- }\n-\n- /**\n- * Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults\n- * to <tt>10s</tt>.\n- */\n- public DeleteIndexRequestBuilder setTimeout(String timeout) {\n- request.timeout(timeout);\n- return this;\n- }\n-\n /**\n * Specifies what type of requested indices to ignore and wildcard indices expressions.\n * <p>",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -313,7 +313,6 @@ public void readFrom(StreamInput in) throws IOException {\n source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source));\n }\n updateAllTypes = in.readBoolean();\n- readTimeout(in);\n concreteIndex = in.readOptionalWriteable(Index::new);\n }\n \n@@ -325,7 +324,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeOptionalString(type);\n out.writeString(source);\n out.writeBoolean(updateAllTypes);\n- writeTimeout(out);\n out.writeOptionalWriteable(concreteIndex);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequest.java",
"status": "modified"
},
{
"diff": "@@ -105,15 +105,13 @@ public OpenIndexRequest indicesOptions(IndicesOptions indicesOptions) {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n indices = in.readStringArray();\n- readTimeout(in);\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeStringArray(indices);\n- writeTimeout(out);\n indicesOptions.writeIndicesOptions(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -166,7 +166,6 @@ public void readFrom(StreamInput in) throws IOException {\n indices = in.readStringArray();\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n settings = readSettingsFromStream(in);\n- readTimeout(in);\n preserveExisting = in.readBoolean();\n }\n \n@@ -176,7 +175,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeStringArrayNullable(indices);\n indicesOptions.writeIndicesOptions(out);\n writeSettingsToStream(settings, out);\n- writeTimeout(out);\n out.writeBoolean(preserveExisting);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,6 @@ public void readFrom(StreamInput in) throws IOException {\n String oldestLuceneSegment = in.readString();\n versions.put(index, new Tuple<>(upgradeVersion, oldestLuceneSegment));\n }\n- readTimeout(in);\n }\n \n @Override\n@@ -98,6 +97,5 @@ public void writeTo(StreamOutput out) throws IOException {\n Version.writeVersion(entry.getValue().v1(), out);\n out.writeString(entry.getValue().v2());\n }\n- writeTimeout(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/admin/indices/upgrade/post/UpgradeSettingsRequest.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.action.support.master;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ack.AckedRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -70,22 +71,20 @@ public final TimeValue timeout() {\n return timeout;\n }\n \n- /**\n- * Reads the timeout value\n- */\n- protected void readTimeout(StreamInput in) throws IOException {\n- timeout = new TimeValue(in);\n+ @Override\n+ public TimeValue ackTimeout() {\n+ return timeout;\n }\n \n- /**\n- * writes the timeout value\n- */\n- protected void writeTimeout(StreamOutput out) throws IOException {\n- timeout.writeTo(out);\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ timeout = new TimeValue(in);\n }\n \n @Override\n- public TimeValue ackTimeout() {\n- return timeout;\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ timeout.writeTo(out);\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/action/support/master/AcknowledgedRequest.java",
"status": "modified"
},
{
"diff": "@@ -47,6 +47,9 @@ public String getName() {\n public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {\n String id = request.param(\"id\");\n DeleteStoredScriptRequest deleteStoredScriptRequest = new DeleteStoredScriptRequest(id);\n+ deleteStoredScriptRequest.timeout(request.paramAsTime(\"timeout\", deleteStoredScriptRequest.timeout()));\n+ deleteStoredScriptRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", deleteStoredScriptRequest.masterNodeTimeout()));\n+\n return channel -> client.admin().cluster().deleteStoredScript(deleteStoredScriptRequest, new AcknowledgedRestListener<>(channel));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestDeleteStoredScriptAction.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,8 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client\n StoredScriptSource source = StoredScriptSource.parse(content, xContentType);\n \n PutStoredScriptRequest putRequest = new PutStoredScriptRequest(id, context, content, request.getXContentType(), source);\n+ putRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", putRequest.masterNodeTimeout()));\n+ putRequest.timeout(request.paramAsTime(\"timeout\", putRequest.timeout()));\n return channel -> client.admin().cluster().putStoredScript(putRequest, new AcknowledgedRestListener<>(channel));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestPutStoredScriptAction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.action.admin.cluster.storedscripts;\n \n-import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -28,7 +27,6 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n-import java.util.Base64;\n import java.util.Collections;\n \n public class PutStoredScriptRequestTests extends ESTestCase {\n@@ -50,25 +48,4 @@ public void testSerialization() throws IOException {\n }\n }\n }\n-\n- public void testSerializationBwc() throws IOException {\n- final byte[] rawStreamBytes = Base64.getDecoder().decode(\"ADwDCG11c3RhY2hlAQZzY3JpcHQCe30A\");\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,\n- Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);\n- try (StreamInput in = StreamInput.wrap(rawStreamBytes)) {\n- in.setVersion(version);\n- PutStoredScriptRequest serialized = new PutStoredScriptRequest();\n- serialized.readFrom(in);\n- assertEquals(XContentType.JSON, serialized.xContentType());\n- assertEquals(\"script\", serialized.id());\n- assertEquals(new BytesArray(\"{}\"), serialized.content());\n-\n- try (BytesStreamOutput out = new BytesStreamOutput()) {\n- out.setVersion(version);\n- serialized.writeTo(out);\n- out.flush();\n- assertArrayEquals(rawStreamBytes, out.bytes().toBytesRef().bytes);\n- }\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -19,15 +19,12 @@\n \n package org.elasticsearch.action.admin.indices.analyze;\n \n-import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.test.ESTestCase;\n-import org.elasticsearch.test.VersionUtils;\n \n import java.io.IOException;\n-import java.util.Base64;\n \n \n public class AnalyzeRequestTests extends ESTestCase {\n@@ -92,20 +89,4 @@ public void testSerialization() throws IOException {\n }\n }\n }\n-\n- public void testSerializationBwc() throws IOException {\n- // AnalyzeRequest serializedRequest = new AnalyzeRequest(\"foo\");\n- // serializedRequest.text(\"text\");\n- // serializedRequest.normalizer(\"normalizer\");\n- // Using Version.V_6_0_0_beta1\n- final byte[] data = Base64.getDecoder().decode(\"AAABA2ZvbwEEdGV4dAAAAAAAAAABCm5vcm1hbGl6ZXI=\");\n- final Version version = VersionUtils.randomVersionBetween(random(), Version.V_5_0_0, Version.V_5_4_0);\n- try (StreamInput in = StreamInput.wrap(data)) {\n- in.setVersion(version);\n- AnalyzeRequest request = new AnalyzeRequest();\n- request.readFrom(in);\n- assertEquals(\"foo\", request.index());\n- assertNull(\"normalizer support after 6.0.0\", request.normalizer());\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -19,16 +19,13 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n-import org.elasticsearch.Version;\n-import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n-import java.util.Base64;\n \n public class CreateIndexRequestTests extends ESTestCase {\n \n@@ -48,25 +45,4 @@ public void testSerialization() throws IOException {\n }\n }\n }\n-\n- public void testSerializationBwc() throws IOException {\n- final byte[] data = Base64.getDecoder().decode(\"ADwDAANmb28APAMBB215X3R5cGULeyJ0eXBlIjp7fX0AAAD////+AA==\");\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2, Version.V_5_1_1, Version.V_5_1_2,\n- Version.V_5_2_0);\n- try (StreamInput in = StreamInput.wrap(data)) {\n- in.setVersion(version);\n- CreateIndexRequest serialized = new CreateIndexRequest();\n- serialized.readFrom(in);\n- assertEquals(\"foo\", serialized.index());\n- BytesReference bytesReference = JsonXContent.contentBuilder().startObject().startObject(\"type\").endObject().endObject().bytes();\n- assertEquals(bytesReference.utf8ToString(), serialized.mappings().get(\"my_type\"));\n-\n- try (BytesStreamOutput out = new BytesStreamOutput()) {\n- out.setVersion(version);\n- serialized.writeTo(out);\n- out.flush();\n- assertArrayEquals(data, out.bytes().toBytesRef().bytes);\n- }\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n-import java.util.Base64;\n \n public class PutMappingRequestTests extends ESTestCase {\n \n@@ -95,17 +94,4 @@ public void testPutMappingRequestSerialization() throws IOException {\n }\n }\n }\n-\n- public void testSerializationBwc() throws IOException {\n- final byte[] data = Base64.getDecoder().decode(\"ADwDAQNmb28MAA8tLS0KZm9vOiAiYmFyIgoAPAMAAAA=\");\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,\n- Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);\n- try (StreamInput in = StreamInput.wrap(data)) {\n- in.setVersion(version);\n- PutMappingRequest request = new PutMappingRequest();\n- request.readFrom(in);\n- String mapping = YamlXContent.contentBuilder().startObject().field(\"foo\", \"bar\").endObject().string();\n- assertEquals(XContentHelper.convertToJson(new BytesArray(mapping), false, XContentType.YAML), request.source());\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.action.ingest;\n \n-import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -28,7 +27,6 @@\n \n import java.io.IOException;\n import java.nio.charset.StandardCharsets;\n-import java.util.Base64;\n \n public class PutPipelineRequestTests extends ESTestCase {\n \n@@ -45,23 +43,4 @@ public void testSerializationWithXContent() throws IOException {\n assertEquals(XContentType.JSON, serialized.getXContentType());\n assertEquals(\"{}\", serialized.getSource().utf8ToString());\n }\n-\n- public void testSerializationBwc() throws IOException {\n- final byte[] data = Base64.getDecoder().decode(\"ADwDATECe30=\");\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,\n- Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);\n- try (StreamInput in = StreamInput.wrap(data)) {\n- in.setVersion(version);\n- PutPipelineRequest request = new PutPipelineRequest();\n- request.readFrom(in);\n- assertEquals(XContentType.JSON, request.getXContentType());\n- assertEquals(\"{}\", request.getSource().utf8ToString());\n-\n- try (BytesStreamOutput out = new BytesStreamOutput()) {\n- out.setVersion(version);\n- request.writeTo(out);\n- assertArrayEquals(data, out.bytes().toBytesRef().bytes);\n- }\n- }\n- }\n }",
"filename": "core/src/test/java/org/elasticsearch/action/ingest/PutPipelineRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.ingest;\n \n-import org.elasticsearch.Version;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -35,7 +34,6 @@\n \n import java.io.IOException;\n import java.nio.charset.StandardCharsets;\n-import java.util.Base64;\n \n public class PipelineConfigurationTests extends ESTestCase {\n \n@@ -52,24 +50,6 @@ public void testSerialization() throws IOException {\n assertEquals(\"{}\", serialized.getConfig().utf8ToString());\n }\n \n- public void testSerializationBwc() throws IOException {\n- final byte[] data = Base64.getDecoder().decode(\"ATECe30AAAA=\");\n- try (StreamInput in = StreamInput.wrap(data)) {\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,\n- Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);\n- in.setVersion(version);\n- PipelineConfiguration configuration = PipelineConfiguration.readFrom(in);\n- assertEquals(XContentType.JSON, configuration.getXContentType());\n- assertEquals(\"{}\", configuration.getConfig().utf8ToString());\n-\n- try (BytesStreamOutput out = new BytesStreamOutput()) {\n- out.setVersion(version);\n- configuration.writeTo(out);\n- assertArrayEquals(data, out.bytes().toBytesRef().bytes);\n- }\n- }\n- }\n-\n public void testParser() throws IOException {\n ContextParser<Void, PipelineConfiguration> parser = PipelineConfiguration.getParser();\n XContentType xContentType = randomFrom(XContentType.values());",
"filename": "core/src/test/java/org/elasticsearch/ingest/PipelineConfigurationTests.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n-import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -54,7 +53,6 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Base64;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.List;\n@@ -250,26 +248,6 @@ public void testCreateMultiDocumentSearcher() throws Exception {\n assertThat(result.clauses().get(1).getOccur(), equalTo(BooleanClause.Occur.MUST_NOT));\n }\n \n- public void testSerializationBwc() throws IOException {\n- final byte[] data = Base64.getDecoder().decode(\"P4AAAAAFZmllbGQEdHlwZQAAAAAAAA57ImZvbyI6ImJhciJ9AAAAAA==\");\n- final Version version = randomFrom(Version.V_5_0_0, Version.V_5_0_1, Version.V_5_0_2,\n- Version.V_5_1_1, Version.V_5_1_2, Version.V_5_2_0);\n- try (StreamInput in = StreamInput.wrap(data)) {\n- in.setVersion(version);\n- PercolateQueryBuilder queryBuilder = new PercolateQueryBuilder(in);\n- assertEquals(\"type\", queryBuilder.getDocumentType());\n- assertEquals(\"field\", queryBuilder.getField());\n- assertEquals(\"{\\\"foo\\\":\\\"bar\\\"}\", queryBuilder.getDocument().utf8ToString());\n- assertEquals(XContentType.JSON, queryBuilder.getXContentType());\n-\n- try (BytesStreamOutput out = new BytesStreamOutput()) {\n- out.setVersion(version);\n- queryBuilder.writeTo(out);\n- assertArrayEquals(data, out.bytes().toBytesRef().bytes);\n- }\n- }\n- }\n-\n private static BytesReference randomSource() {\n try {\n XContentBuilder xContent = XContentFactory.jsonBuilder();",
"filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/PercolateQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,14 @@\n }\n },\n \"params\" : {\n+ \"timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Explicit operation timeout\"\n+ },\n+ \"master_timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Specify timeout for connection to master\"\n+ }\n }\n },\n \"body\": null",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/delete_script.json",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,14 @@\n }\n },\n \"params\" : {\n+ \"timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Explicit operation timeout\"\n+ },\n+ \"master_timeout\": {\n+ \"type\" : \"time\",\n+ \"description\" : \"Specify timeout for connection to master\"\n+ },\n \"context\": {\n \"type\" : \"string\",\n \"description\" : \"Context name to compile script against\"",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/api/put_script.json",
"status": "modified"
}
]
} |
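The REST-layer half of the change above follows one rule: an optional `timeout` or `master_timeout` query parameter overrides the request's default only when present. The sketch below illustrates that behavior with naive millisecond parsing and invented names; it is not Elasticsearch's `TimeValue` handling.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of "use the query parameter if given, otherwise keep the
// request's default", the semantics the REST handlers gain via paramAsTime above.
public class RestTimeoutParamsDemo {
    static long paramAsMillis(Map<String, String> params, String name, long defaultMillis) {
        String raw = params.get(name);
        return raw == null ? defaultMillis : Long.parseLong(raw); // naive parsing
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("timeout", "5000"); // ?timeout=5000 was supplied
        // no master_timeout given, so its default survives

        long ackTimeout = paramAsMillis(params, "timeout", 30_000L);
        long masterTimeout = paramAsMillis(params, "master_timeout", 30_000L);

        System.out.println("ack timeout: " + ackTimeout + "ms");       // 5000ms
        System.out.println("master timeout: " + masterTimeout + "ms"); // 30000ms
    }
}
```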
{
"body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 5.5.1, Build: 19c13d0/2017-07-18T20:44:24.823Z, JVM: 1.8.0_141\r\n\r\n**Plugins installed**:\r\ndiscovery-ec2 ingest-geoip ingest-user-agent x-pack\r\n\r\n**JVM version** (`java -version`): \r\nopenjdk version \"1.8.0_141\"\r\nOpenJDK Runtime Environment (build 1.8.0_141-b16)\r\nOpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux cd24f177cfe3 4.9.38-16.33.amzn1.x86_64 #1 SMP Thu Jul 20 01:31:29 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nThe docs at https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html and https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-filtering.html specify that these settings support wildcards (in our case, we're concerned specifically with `index.routing.allocation.include._ip`).\r\n\r\nThis is not currently true. Attempting to use the example from the docs results in:\r\n```json\r\n{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"invalid IP address [192.168.2.*] for [_ip]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"invalid IP address [192.168.2.*] for [_ip]\"},\"status\":400}\r\n```\r\n\r\nThis is preventing us from upgrading a 2.x cluster to 5.5.1, because the value is currently set to \"*\". 
In 2.x, I don't believe it's possible to delete these settings.\r\n\r\nI'm sure we can think of some nasty workaround that will allow us to upgrade our old cluster, but suggestions are welcome\r\n\r\n**Steps to reproduce**:\r\n\r\n```bash\r\ncurl -s -XPUT localhost:9200/_cluster/settings --data-binary @- <<\"END\" | jq .\r\n{\r\n \"transient\": {\r\n \"cluster.routing.allocation.include._ip\" : \"*\"\r\n }\r\n}\r\nEND\r\n```\r\n\r\n```json\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"invalid IP address [*] for [_ip]\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"invalid IP address [*] for [_ip]\"\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\n**Provide logs**:\r\n\r\nLogs from an attempt to start a node using ES 5.5.1, with an index from ES 2.4.1:\r\n\r\n```\r\n[2017-08-14T00:43:25,522][ERROR][org.elasticsearch.gateway.GatewayMetaState] [cd24f177cfe3] failed to read local state, exiting...\r\norg.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:5, legacy:false, file:/var/lib/elasticsearch/data/nodes/0/indices/.kibana/_state/state-5.st]\r\n\tat org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:150) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:334) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.util.IndexFolderUpgrader.upgrade(IndexFolderUpgrader.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.util.IndexFolderUpgrader.upgradeIndicesIfNeeded(IndexFolderUpgrader.java:128) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:91) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]\r\n\tat sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:1.8.0_141]\r\n\tat sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:1.8.0_141]\r\n\tat java.lang.reflect.Constructor.newInstance(Constructor.java:423) [?:1.8.0_141]\r\n\tat org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:49) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:825) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:50) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat 
org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:825) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:50) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:191) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:183) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:818) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:183) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:173) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:161) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:96) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.Guice.createInjector(Guice.java:96) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:42) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.node.Node.<init>(Node.java:497) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.node.Node.<init>(Node.java:244) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:232) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:232) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) 
[elasticsearch-5.5.1.jar:5.5.1]\r\nCaused by: java.io.IOException: failed to read [id:5, legacy:false, file:/var/lib/elasticsearch/data/nodes/0/indices/.kibana/_state/state-5.st]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:327) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\t... 46 more\r\nCaused by: java.lang.IllegalArgumentException: invalid IP address [*] for [_ip]\r\n\tat org.elasticsearch.cluster.node.DiscoveryNodeFilters.lambda$static$0(DiscoveryNodeFilters.java:58) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.settings.Setting$3.get(Setting.java:908) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.common.settings.Setting$3.get(Setting.java:885) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData$Builder.build(IndexMetaData.java:1026) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData$Builder.fromXContent(IndexMetaData.java:1240) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData$1.fromXContent(IndexMetaData.java:1302) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.cluster.metadata.IndexMetaData$1.fromXContent(IndexMetaData.java:1293) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:202) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\tat org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:322) ~[elasticsearch-5.5.1.jar:5.5.1]\r\n\t... 46 more\r\n```\r\n\r\n",
"comments": [
{
"body": "Thank you for the bug report. This is caused by the changes in #22591 and affects ES versions >= 5.3.0.",
"created_at": "2017-08-14T03:37:25Z"
},
{
"body": "I've opened #26187 that will fix this.",
"created_at": "2017-08-14T04:18:13Z"
},
{
"body": "Thanks @ywelsch !",
"created_at": "2017-08-14T04:21:54Z"
},
{
"body": "FYI, setting `routing.allocation.include._ip` to `\"\"` instead of `\"*\"` appears to be a valid workaround.",
"created_at": "2017-08-14T06:30:04Z"
}
],
"number": 26184,
"title": "Shard IP filtering does not support wildcards (which breaks 2.x to 5.5.1 upgrade)"
} | {
"body": "PR #22591 broke usage of wildcards for IP-based allocation filtering, which is documented at https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html\r\n\r\nCloses #26184",
"number": 26187,
"review_comments": [],
"title": "Allow wildcards for shard IP filtering"
} | {
"commits": [
{
"message": "Allow wildcards for shard IP filtering"
}
],
"files": [
{
"diff": "@@ -41,7 +41,7 @@ public enum OpType {\n /**\n * Validates the IP addresses in a group of {@link Settings} by looking for the keys\n * \"_ip\", \"_host_ip\", and \"_publish_ip\" and ensuring each of their comma separated values\n- * is a valid IP address.\n+ * that has no wildcards is a valid IP address.\n */\n public static final Consumer<Settings> IP_VALIDATOR = (settings) -> {\n Map<String, String> settingsMap = settings.getAsMap();\n@@ -52,7 +52,7 @@ public enum OpType {\n }\n if (\"_ip\".equals(propertyKey) || \"_host_ip\".equals(propertyKey) || \"_publish_ip\".equals(propertyKey)) {\n for (String value : Strings.tokenizeToStringArray(entry.getValue(), \",\")) {\n- if (InetAddresses.isInetAddress(value) == false) {\n+ if (Regex.isSimpleMatchPattern(value) == false && InetAddresses.isInetAddress(value) == false) {\n throw new IllegalArgumentException(\"invalid IP address [\" + value + \"] for [\" + propertyKey + \"]\");\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNodeFilters.java",
"status": "modified"
},
{
"diff": "@@ -245,6 +245,17 @@ public void testIpPublishFilteringNotMatchingOr() {\n assertThat(filters.match(node), equalTo(true));\n }\n \n+ public void testIpPublishFilteringMatchingWildcard() {\n+ boolean matches = randomBoolean();\n+ Settings settings = shuffleSettings(Settings.builder()\n+ .put(\"xxx._publish_ip\", matches ? \"192.1.*\" : \"192.2.*\")\n+ .build());\n+ DiscoveryNodeFilters filters = DiscoveryNodeFilters.buildFromSettings(OR, \"xxx.\", settings);\n+\n+ DiscoveryNode node = new DiscoveryNode(\"\", \"\", \"\", \"\", \"192.1.1.54\", localAddress, emptyMap(), emptySet(), null);\n+ assertThat(filters.match(node), equalTo(matches));\n+ }\n+\n public void testCommaSeparatedValuesTrimmed() {\n DiscoveryNode node = new DiscoveryNode(\"\", \"\", \"\", \"\", \"192.1.1.54\", localAddress, singletonMap(\"tag\", \"B\"), emptySet(), null);\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/node/DiscoveryNodeFiltersTests.java",
"status": "modified"
},
{
"diff": "@@ -185,11 +185,22 @@ public void testInvalidIPFilter() {\n String ipKey = randomFrom(\"_ip\", \"_host_ip\", \"_publish_ip\");\n Setting<Settings> filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,\n IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING);\n+ String invalidIP = randomFrom(\"192..168.1.1\", \"192.300.1.1\");\n IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> {\n IndexScopedSettings indexScopedSettings = new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS);\n- indexScopedSettings.updateDynamicSettings(Settings.builder().put(filterSetting.getKey() + ipKey, \"192..168.1.1\").build(),\n+ indexScopedSettings.updateDynamicSettings(Settings.builder().put(filterSetting.getKey() + ipKey, invalidIP).build(),\n Settings.builder().put(Settings.EMPTY), Settings.builder(), \"test ip validation\");\n });\n- assertEquals(\"invalid IP address [192..168.1.1] for [\" + ipKey + \"]\", e.getMessage());\n+ assertEquals(\"invalid IP address [\" + invalidIP + \"] for [\" + ipKey + \"]\", e.getMessage());\n+ }\n+\n+ public void testWildcardIPFilter() {\n+ String ipKey = randomFrom(\"_ip\", \"_host_ip\", \"_publish_ip\");\n+ Setting<Settings> filterSetting = randomFrom(IndexMetaData.INDEX_ROUTING_REQUIRE_GROUP_SETTING,\n+ IndexMetaData.INDEX_ROUTING_INCLUDE_GROUP_SETTING, IndexMetaData.INDEX_ROUTING_EXCLUDE_GROUP_SETTING);\n+ String wildcardIP = randomFrom(\"192.168.*\", \"192.*.1.1\");\n+ IndexScopedSettings indexScopedSettings = new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS);\n+ indexScopedSettings.updateDynamicSettings(Settings.builder().put(filterSetting.getKey() + ipKey, wildcardIP).build(),\n+ Settings.builder().put(Settings.EMPTY), Settings.builder(), \"test ip validation\");\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDeciderTests.java",
"status": "modified"
}
]
} |
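
The fix in the row above boils down to one rule: a value that is a simple match pattern (it contains `*`) skips IP validation, and only literal values must parse as an address. Below is a minimal, self-contained sketch of that rule, assuming a simplified IPv4 regex as a stand-in for Elasticsearch's `InetAddresses.isInetAddress` (which also handles IPv6) and a plain `contains("*")` check as a stand-in for `Regex.isSimpleMatchPattern`; it is an illustration, not the actual Elasticsearch code.

```java
import java.util.regex.Pattern;

/**
 * Minimal sketch of the wildcard-aware validation from the diff above: values
 * that look like simple match patterns are accepted as-is, and only literal
 * values must parse as an IP address.
 */
public final class IpFilterValidationSketch {

    // Simplified stand-in for InetAddresses.isInetAddress (IPv4 only).
    private static final Pattern IPV4 = Pattern.compile(
        "((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)");

    static void validate(String propertyKey, String commaSeparatedValues) {
        for (String value : commaSeparatedValues.split(",")) {
            value = value.trim();
            boolean isPattern = value.contains("*");       // stand-in for Regex.isSimpleMatchPattern
            boolean isIp = IPV4.matcher(value).matches();  // stand-in for InetAddresses.isInetAddress
            if (isPattern == false && isIp == false) {
                throw new IllegalArgumentException(
                    "invalid IP address [" + value + "] for [" + propertyKey + "]");
            }
        }
    }

    public static void main(String[] args) {
        validate("_ip", "192.168.2.*");   // accepted after the fix (wildcard pattern)
        validate("_ip", "192.168.2.1");   // accepted (literal IP)
        try {
            validate("_ip", "192..168.1.1");  // neither a pattern nor a valid IP
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // invalid IP address [192..168.1.1] for [_ip]
        }
    }
}
```

With a check of this shape, `192.168.2.*` and `*` are accepted for `_ip`, `_host_ip`, and `_publish_ip`, while a malformed literal such as `192..168.1.1` is still rejected.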
{
"body": "**Elasticsearch version** (`bin/elasticsearch --version`): git revision 82fa531ab4 (HEAD revision as of now) but this behavior can be reproduced also with Elasticsearch 5.x.\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nThe percentiles aggregation allows to use the `t-digest` or `HDR` histogram approaches. Although they should be mutually exclusive, it is possible for users to define options for both approaches which is confusing because only the second approach is applied. \r\n\r\n**Steps to reproduce**:\r\n\r\n1. Insert a document into an empty index:\r\n\r\n```\r\nPOST /my_index/doc\r\n{\r\n \"load_time\": 20\r\n}\r\n```\r\n\r\n2. Run the following aggregation from the docs but specify `tdigest` and `hdr` options:\r\n\r\n```\r\nGET /_search\r\n{\r\n \"aggs\": {\r\n \"load_time_outlier\": {\r\n \"percentiles\": {\r\n \"field\": \"load_time\",\r\n \"percents\": [99],\r\n \"tdigest\": {\r\n \"compression\": 200\r\n },\r\n \"hdr\": {\r\n \"number_of_significant_value_digits\": 3\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n* **Actual outcome**: It produces results.\r\n* **Expected outcome**: Specifying options for both `tdigest` and `hdr` should be considered invalid and Elasticseach should throw an Exception.",
"comments": [],
"number": 26095,
"title": "Percentiles aggregation allows multiple methods"
} | {
"body": "Currently the `percentiles` aggregation allows specifying both possible methods\r\nin the query DSL, but only the later one is used. This changes it to rejecting\r\nsuch requests with an error. Setting the method multiple times via the java API\r\nstill works (and the last one wins).\r\n\r\nCloses #26095",
"number": 26163,
"review_comments": [
{
"body": "In a first wip of ths PR I was doing the checks if the setter was called multiple times in the original builder, but that felt akward since it would also affect the java api and would mean the PercentilesAggregationBuilder would somehow need that call counter somewhere. Attempts to use ConstructingObjectParser to check that methods was specified only once also were not promosing, so I went with this internal temporary object for the parser output for now as it felt least invasive. ",
"created_at": "2017-08-11T09:49:58Z"
},
{
"body": "Should we also check for empty arrays too? Or maybe down in the `percentiles()` method?",
"created_at": "2017-08-14T13:50:26Z"
},
{
"body": "Sure, I will add something along those lines.",
"created_at": "2017-08-14T18:26:18Z"
}
],
"title": "Reject multiple methods in `percentiles` aggregation"
} | {
"commits": [
{
"message": "Reject multiple methods in `percentiles` aggregation\n\nCurrently the `percentiles` aggregation allows specifying both possible methods\nin the query DSL, but only the later one is used. This changes it to rejecting\nsuch requests with an error. Setting the method multiple times via the java API\nstill works (and the last one wins).\n\nCloses #26095"
},
{
"message": "Add check for empty percentiles array"
}
],
"files": [
{
"diff": "@@ -43,6 +43,7 @@\n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Objects;\n+import java.util.function.Consumer;\n \n public class PercentilesAggregationBuilder extends LeafOnly<ValuesSource.Numeric, PercentilesAggregationBuilder> {\n public static final String NAME = Percentiles.TYPE_NAME;\n@@ -76,7 +77,7 @@ private static class HDROptions {\n NUMBER_SIGNIFICANT_DIGITS_FIELD);\n }\n \n- private static final ObjectParser<PercentilesAggregationBuilder, Void> PARSER;\n+ private static final ObjectParser<InternalBuilder, Void> PARSER;\n static {\n PARSER = new ObjectParser<>(PercentilesAggregationBuilder.NAME);\n ValuesSourceParserHelper.declareNumericFields(PARSER, true, true, false);\n@@ -103,7 +104,26 @@ private static class HDROptions {\n }\n \n public static AggregationBuilder parse(String aggregationName, XContentParser parser) throws IOException {\n- return PARSER.parse(parser, new PercentilesAggregationBuilder(aggregationName), null);\n+ InternalBuilder internal = PARSER.parse(parser, new InternalBuilder(aggregationName), null);\n+ // we need to return a PercentilesAggregationBuilder for equality checks to work\n+ PercentilesAggregationBuilder returnedAgg = new PercentilesAggregationBuilder(internal.name);\n+ setIfNotNull(returnedAgg::valueType, internal.valueType());\n+ setIfNotNull(returnedAgg::format, internal.format());\n+ setIfNotNull(returnedAgg::missing, internal.missing());\n+ setIfNotNull(returnedAgg::field, internal.field());\n+ setIfNotNull(returnedAgg::script, internal.script());\n+ setIfNotNull(returnedAgg::method, internal.method());\n+ setIfNotNull(returnedAgg::percentiles, internal.percentiles());\n+ returnedAgg.keyed(internal.keyed());\n+ returnedAgg.compression(internal.compression());\n+ returnedAgg.numberOfSignificantValueDigits(internal.numberOfSignificantValueDigits());\n+ return returnedAgg;\n+ }\n+\n+ private static <T> void setIfNotNull(Consumer<T> consumer, T value) {\n+ if (value != null) {\n+ consumer.accept(value);\n+ }\n }\n \n private double[] percents = DEFAULT_PERCENTS;\n@@ -144,6 +164,9 @@ public PercentilesAggregationBuilder percentiles(double... percents) {\n if (percents == null) {\n throw new IllegalArgumentException(\"[percents] must not be null: [\" + name + \"]\");\n }\n+ if (percents.length == 0) {\n+ throw new IllegalArgumentException(\"[percents] must not be empty: [\" + name + \"]\");\n+ }\n double[] sortedPercents = Arrays.copyOf(percents, percents.length);\n Arrays.sort(sortedPercents);\n this.percents = sortedPercents;\n@@ -293,4 +316,29 @@ protected int innerHashCode() {\n public String getType() {\n return NAME;\n }\n+\n+ /**\n+ * Private specialization of this builder that should only be used by the parser, this enables us to\n+ * overwrite {@link #method()} to check that it is not defined twice in xContent and throw\n+ * an error, while the Java API should allow to overwrite the method\n+ */\n+ private static class InternalBuilder extends PercentilesAggregationBuilder {\n+\n+ private boolean setOnce = false;\n+\n+ private InternalBuilder(String name) {\n+ super(name);\n+ }\n+\n+ @Override\n+ public InternalBuilder method(PercentilesMethod method) {\n+ if (setOnce == false) {\n+ super.method(method);\n+ setOnce = true;\n+ return this;\n+ } else {\n+ throw new IllegalStateException(\"Only one percentiles method should be declared.\");\n+ }\n+ }\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/PercentilesAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,9 +19,14 @@\n \n package org.elasticsearch.search.aggregations.metrics;\n \n+import org.elasticsearch.common.ParsingException;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.search.aggregations.BaseAggregationTestCase;\n import org.elasticsearch.search.aggregations.metrics.percentiles.PercentilesAggregationBuilder;\n \n+import java.io.IOException;\n+\n public class PercentilesTests extends BaseAggregationTestCase<PercentilesAggregationBuilder> {\n \n @Override\n@@ -55,4 +60,37 @@ protected PercentilesAggregationBuilder createTestAggregatorBuilder() {\n return factory;\n }\n \n+ public void testNullOrEmptyPercentilesThrows() throws IOException {\n+ PercentilesAggregationBuilder builder = new PercentilesAggregationBuilder(\"testAgg\");\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> builder.percentiles(null));\n+ assertEquals(\"[percents] must not be null: [testAgg]\", ex.getMessage());\n+\n+ ex = expectThrows(IllegalArgumentException.class, () -> builder.percentiles(new double[0]));\n+ assertEquals(\"[percents] must not be empty: [testAgg]\", ex.getMessage());\n+ }\n+\n+ public void testExceptionMultipleMethods() throws IOException {\n+ final String illegalAgg = \"{\\n\" +\n+ \" \\\"percentiles\\\": {\\n\" +\n+ \" \\\"field\\\": \\\"load_time\\\",\\n\" +\n+ \" \\\"percents\\\": [99],\\n\" +\n+ \" \\\"tdigest\\\": {\\n\" +\n+ \" \\\"compression\\\": 200\\n\" +\n+ \" },\\n\" +\n+ \" \\\"hdr\\\": {\\n\" +\n+ \" \\\"number_of_significant_value_digits\\\": 3\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ XContentParser parser = createParser(JsonXContent.jsonXContent, illegalAgg);\n+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken());\n+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken());\n+ ParsingException e = expectThrows(ParsingException.class,\n+ () -> PercentilesAggregationBuilder.parse(\"myPercentiles\", parser));\n+ assertEquals(\n+ \"ParsingException[[percentiles] failed to parse field [hdr]]; \"\n+ + \"nested: IllegalStateException[Only one percentiles method should be declared.];; \"\n+ + \"java.lang.IllegalStateException: Only one percentiles method should be declared.\",\n+ e.getDetailedMessage());\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/PercentilesTests.java",
"status": "modified"
}
]
} |
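
The parser-only subclass in the diff above is essentially a "set once" guard: the public builder keeps an overridable setter so the Java API can still overwrite the method (last call wins), while the subclass used during query-DSL parsing throws if `tdigest` and `hdr` are both declared. Here is a hedged sketch of that pattern with illustrative names; `MethodBuilderSketch` and its nested `ParserOnlyBuilder` stand in for `PercentilesAggregationBuilder` and its internal builder and are not the actual Elasticsearch types.

```java
/**
 * Sketch of the "set once during parsing" guard: the base builder allows
 * overwriting the method, the parser-only subclass rejects a second call.
 */
class MethodBuilderSketch {
    enum Method { TDIGEST, HDR }

    private Method method = Method.TDIGEST;

    MethodBuilderSketch method(Method method) {   // Java API: last call wins
        this.method = method;
        return this;
    }

    Method method() {
        return method;
    }

    /** Used only while parsing the query DSL. */
    static class ParserOnlyBuilder extends MethodBuilderSketch {
        private boolean setOnce = false;

        @Override
        MethodBuilderSketch method(Method method) {
            if (setOnce) {
                throw new IllegalStateException("Only one percentiles method should be declared.");
            }
            setOnce = true;
            return super.method(method);
        }
    }

    public static void main(String[] args) {
        MethodBuilderSketch javaApi = new MethodBuilderSketch();
        javaApi.method(Method.TDIGEST).method(Method.HDR);   // allowed: last one wins

        MethodBuilderSketch fromParser = new ParserOnlyBuilder();
        fromParser.method(Method.TDIGEST);
        try {
            fromParser.method(Method.HDR);                    // rejected during parsing
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Keeping the validation out of the public setter means existing Java-API callers are unaffected, and only requests coming through the parser are rejected.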
{
"body": "This commit changes the way we handle field expansion in `match`, `multi_match` and `query_string` query.\r\n The main changes are:\r\n\r\n- For exact field name, the new behavior is to rewrite to a matchnodocs query when the field name is not found in the mapping.\r\n\r\n- For partial field names (with `*` suffix), the expansion is done only on `keyword`, `text`, `date`, `ip` and `number` field types. Other field types are simply ignored.\r\n\r\n- For all fields (`*`), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.\r\n\r\n- The `*` notation can also be used to set `default_field` option on`query_string` query. This should replace the needs for the extra option `use_all_fields` which is deprecated in this change.\r\n\r\nThis commit also rewrites simple `*` query to matchalldocs query when all fields are requested (Fixes #25556). \r\n\r\nThe same change should be done on `simple_query_string` for completeness.\r\n\r\n`use_all_fields` option in `query_string` is also deprecated in this change, `default_field` should be set to `*` instead.\r\n\r\nRelates #25551\r\n ",
"comments": [
{
"body": "Thanks @dakrone , good catch.\r\nI pushed some changes to fix the bug.",
"created_at": "2017-07-14T19:19:37Z"
},
{
"body": "This PR has been updated to refactor the expansion of field names for the main es queries (the simple query string should be done in a follow up). The new description reflects the latest status of the PR (and explains the main changes). @dakrone I need to document the new behavior and work a bit on tests but the code is ready for a review I think ;).",
"created_at": "2017-07-17T20:57:17Z"
},
{
"body": "> For partial field names (with * suffix), the expansion is done only on keyword, text, date and number field types. Other field types are simply ignored.\r\n\r\nDoes this include the `ip` field? (I can't remember if `ip` counts as a `number`)",
"created_at": "2017-07-18T16:53:08Z"
},
{
"body": "> Does this include the ip field? (I can't remember if ip counts as a number)\r\n\r\nIt's a separate type but you are right this include the `ip` type as well. I've updated the description of the PR.",
"created_at": "2017-07-18T16:58:00Z"
},
{
"body": "Hey @jimczi, I'm seeing an odd difference between `default_field` and `fields`. `fields: [\"*\"]` sometimes throws an error when `default_field: \"*\"` does not. For example:\r\n\r\nI'm using a weblogs data set with IP fields. If I send the following query, I get an error:\r\n\r\n```\r\n {\r\n \"query_string\": {\r\n \"query\": \"200\",\r\n \"analyze_wildcard\": true,\r\n \"fields\": [\r\n \"*\"\r\n ]\r\n }\r\n }\r\n```\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"query_shard_exception\",\r\n \"reason\": \"failed to create query: {\\n \\\"bool\\\" : {\\n \\\"must\\\" : [\\n {\\n \\\"query_string\\\" : {\\n \\\"query\\\" : \\\"200\\\",\\n \\\"fields\\\" : [\\n \\\"*^1.0\\\"\\n ],\\n \\\"type\\\" : \\\"best_fields\\\",\\n \\\"default_operator\\\" : \\\"or\\\",\\n \\\"max_determinized_states\\\" : 10000,\\n \\\"enable_position_increments\\\" : true,\\n \\\"fuzziness\\\" : \\\"AUTO\\\",\\n \\\"fuzzy_prefix_length\\\" : 0,\\n \\\"fuzzy_max_expansions\\\" : 50,\\n \\\"phrase_slop\\\" : 0,\\n \\\"analyze_wildcard\\\" : true,\\n \\\"escape\\\" : false,\\n \\\"boost\\\" : 1.0\\n }\\n },\\n {\\n \\\"range\\\" : {\\n \\\"@timestamp\\\" : {\\n \\\"from\\\" : 1501607670716,\\n \\\"to\\\" : 1501608570716,\\n \\\"include_lower\\\" : true,\\n \\\"include_upper\\\" : true,\\n \\\"format\\\" : \\\"epoch_millis\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n }\\n ],\\n \\\"adjust_pure_negative\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n}\",\r\n \"index_uuid\": \"3FMTvZhWQyimxtxw3pJvvg\",\r\n \"index\": \"logstash-0\"\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"logstash-0\",\r\n \"node\": \"38QhIDc9TTGCRXYNWAmFJA\",\r\n \"reason\": {\r\n \"type\": \"query_shard_exception\",\r\n \"reason\": \"failed to create query: {\\n \\\"bool\\\" : {\\n \\\"must\\\" : [\\n {\\n \\\"query_string\\\" : {\\n \\\"query\\\" : \\\"200\\\",\\n \\\"fields\\\" : [\\n \\\"*^1.0\\\"\\n ],\\n \\\"type\\\" : \\\"best_fields\\\",\\n \\\"default_operator\\\" : \\\"or\\\",\\n \\\"max_determinized_states\\\" : 10000,\\n \\\"enable_position_increments\\\" : true,\\n \\\"fuzziness\\\" : \\\"AUTO\\\",\\n \\\"fuzzy_prefix_length\\\" : 0,\\n \\\"fuzzy_max_expansions\\\" : 50,\\n \\\"phrase_slop\\\" : 0,\\n \\\"analyze_wildcard\\\" : true,\\n \\\"escape\\\" : false,\\n \\\"boost\\\" : 1.0\\n }\\n },\\n {\\n \\\"range\\\" : {\\n \\\"@timestamp\\\" : {\\n \\\"from\\\" : 1501607670716,\\n \\\"to\\\" : 1501608570716,\\n \\\"include_lower\\\" : true,\\n \\\"include_upper\\\" : true,\\n \\\"format\\\" : \\\"epoch_millis\\\",\\n \\\"boost\\\" : 1.0\\n }\\n }\\n }\\n ],\\n \\\"adjust_pure_negative\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n}\",\r\n \"index_uuid\": \"3FMTvZhWQyimxtxw3pJvvg\",\r\n \"index\": \"logstash-0\",\r\n \"caused_by\": {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"'200' is not an IP string literal.\"\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"status\": 400\r\n }\r\n```\r\n\r\nA multi_match with `fields: [\"*\"]` fails with the same error:\r\n\r\n```\r\n{\r\n \"multi_match\": {\r\n \"query\": 200,\r\n \"fields\": [\r\n \"*\"\r\n ],\r\n \"type\": \"phrase\"\r\n }\r\n}\r\n```\r\n\r\nThe following query_string query using default_field, however, works fine:\r\n\r\n```\r\n{\r\n \"query_string\": {\r\n \"query\": \"200\",\r\n \"analyze_wildcard\": true,\r\n \"default_field\": \"*\"\r\n }\r\n}\r\n```\r\n\r\nFor me, it 
would be ideal if `fields: [\"*\"]` worked like `default_field: \"*\"`. If `fields: [\"*\"]` throws errors depending on the query value, I can't really send user provided values and I'm back to abusing the query_string query in order to do searches across all fields.\r\n",
"created_at": "2017-08-01T17:44:29Z"
},
{
"body": "This is just a leftover of the `all_fields` mode. By default the `query_string` and `multi_match` query are not lenient, they throw an error when an invalid content is used for a field type. When the `all_fields` was used (by default or when it was explicitly set) the leniency was forced to true. I kept this behavior for `query_string` query using `default_field:*` but it's just to make sure that the default (when default_field is not set) doesn't change. If you use `fields:*` or the `multi_match` query you'll have to set the leniency to true manually.",
"created_at": "2017-08-01T18:49:27Z"
},
{
"body": "Ah, I had no idea that `lenient` parameter existed. Thanks @jimczi, that's exactly what I needed!",
"created_at": "2017-08-01T22:10:39Z"
}
],
"number": 25726,
"title": "Refactor field expansion for match, multi_match and query_string query"
} | {
"body": "This change is a continuation of #25726 that aligns field expansions and text analysis for the `simple_query_string` with the `query_string` and `multi_match query`.\r\nThe main changes are:\r\n\r\n * For exact field name, the new behavior is to rewrite to a matchnodocs query when the field name is not found in the mapping.\r\n\r\n * For partial field names (with * suffix), the expansion is done only on keyword, text, date, ip and number field types. Other field types are simply ignored.\r\n\r\n * For all fields (*), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.\r\n\r\nThe `use_all_fields option` is deprecated in this change and can be replaced by setting `*` in the fields parameter.\r\nThis commit also changes how text fields are analyzed. Previously the default search analyzer (or the provided analyzer) was used to analyze every text part\r\n, ignoring the analyzer set on the field in the mapping. With this change, the field analyzer is used instead unless an analyzer has been forced in the parameter of the query.\r\n\r\nFinally now that all full text queries can handle the special \"*\" expansion (`all_fields` mode), the `index.query.default_field` is now set to `*` for indices created in 6.",
"number": 26145,
"review_comments": [
{
"body": "this is technically breaking, because prior to this change the following would work:\r\n\r\n```java\r\nbuilder.field(\"foo\", 1.5);\r\nbuilder.useAllFields(true);\r\nbuilder.useAllFields(false);\r\n```\r\n\r\nAnd since setting it to `true` permanently changes the `fieldsAndWeights`. While it seems contrived, I could see user code doing this passing the builder off to independent functions, the builder should be idempotent in this case",
"created_at": "2017-08-10T16:12:33Z"
},
{
"body": "I believe this does need to calculate whether all fields mode is going to be used, otherwise a mixed 5.6 - 6.0 cluster will have the behavior correct on 6.0 and incorrect on 5.6, since 5.6 calculates the decision for whether to use `all_fields` using different criteria than 6.0 will?",
"created_at": "2017-08-10T16:18:43Z"
},
{
"body": "Can you add javadocs for this class please?",
"created_at": "2017-08-10T16:20:07Z"
},
{
"body": "And also javadocs for all methods in this class too please",
"created_at": "2017-08-10T16:21:23Z"
},
{
"body": "I think this no longer needs to be fully qualified since the class name is not the same any more",
"created_at": "2017-08-10T16:38:49Z"
},
{
"body": "It also breaks what I would expect to happen with:\r\n\r\n```java\r\nbuilder.useAllFields(true);\r\nbuilder.field(\"foo\", 1.0);\r\n```\r\n\r\nWhere I would expect it to use `all_fields` mode, but instead it does not work since the order matters.",
"created_at": "2017-08-10T17:03:00Z"
},
{
"body": "In the latter case it would fail the query early because useAllFields freeze the fieldsAndWeights map. I think it's better than the current behavior which accepts this query but fails when the query is actually built (fields and useAllFields are mutually exclusive) ?\r\n\r\nFor the first example I don't know if it's really a problem. Maybe we should just disallow `useAllFields(false)` ? I don't see a good reason to unset it especially now that the index default field defaults to `*` ? ",
"created_at": "2017-08-10T18:42:49Z"
},
{
"body": "I agree. I pushed https://github.com/elastic/elasticsearch/pull/26145/commits/456ceac1eac240e119291bd7277f356a1b8be6a5 to fix this.",
"created_at": "2017-08-10T18:44:49Z"
},
{
"body": "> In the latter case it would fail the query early because useAllFields freeze the fieldsAndWeights map. I think it's better than the current behavior which accepts this query but fails when the query is actually built (fields and useAllFields are mutually exclusive) ?\r\n\r\nSure, that sounds fine, as long as we're okay with this failing (esoteric) use-case.\r\n\r\n> For the first example I don't know if it's really a problem. Maybe we should just disallow useAllFields(false) ? I don't see a good reason to unset it especially now that the index default field defaults to * ?\r\n\r\nYeah, I don't either, the only argument I can think of keeping `useAllFields(boolean)` is that then it's apparent it's a \"setter\" and not a \"getter\"",
"created_at": "2017-08-11T18:12:15Z"
}
],
"title": "Refactor simple_query_string to handle text part like multi_match and query_string"
} | {
"commits": [
{
"message": "Refactor simple_query_string to handle text part like multi_match and query_string\n\nThis change is a continuation of #25726 that aligns field expansions for the simple_query_string with the query_string and multi_match query.\nThe main changes are:\n\n * For exact field name, the new behavior is to rewrite to a matchnodocs query when the field name is not found in the mapping.\n\n * For partial field names (with * suffix), the expansion is done only on keyword, text, date, ip and number field types. Other field types are simply ignored.\n\n * For all fields (*), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.\n\nThe use_all_fields option is deprecated in this change and can be replaced by setting `*` in the fields parameter.\nThis commit also changes how text fields are analyzed. Previously the default search analyzer (or the provided analyzer) was used to analyze every text part\n, ignoring the analyzer set on the field in the mapping. With this change, the field analyzer is used instead unless an analyzer has been forced in the parameter of the query.\n\nFinally now that all full text queries can handle the special \"*\" expansion (`all_fields` mode), the `index.query.default_field` is now set to `*` for indices created in 6."
},
{
"message": "After review"
},
{
"message": "handle null booleans"
}
],
"files": [
{
"diff": "@@ -116,15 +116,6 @@ public static Query fixNegativeQueryIfNeeded(Query q) {\n return q;\n }\n \n- public static boolean isConstantMatchAllQuery(Query query) {\n- if (query instanceof ConstantScoreQuery) {\n- return isConstantMatchAllQuery(((ConstantScoreQuery) query).getQuery());\n- } else if (query instanceof MatchAllDocsQuery) {\n- return true;\n- }\n- return false;\n- }\n-\n public static Query applyMinimumShouldMatch(BooleanQuery query, @Nullable String minimumShouldMatch) {\n if (minimumShouldMatch == null) {\n return query;",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/Queries.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.lucene.all.AllField;\n import org.elasticsearch.common.settings.IndexScopedSettings;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n@@ -48,9 +49,21 @@\n * be called for each settings update.\n */\n public final class IndexSettings {\n-\n- public static final Setting<String> DEFAULT_FIELD_SETTING =\n- new Setting<>(\"index.query.default_field\", AllFieldMapper.NAME, Function.identity(), Property.IndexScope);\n+ public static final String DEFAULT_FIELD_SETTING_KEY = \"index.query.default_field\";\n+ public static final Setting<String> DEFAULT_FIELD_SETTING;\n+ static {\n+ Function<Settings, String> defValue = settings -> {\n+ final String defaultField;\n+ if (settings.getAsVersion(IndexMetaData.SETTING_VERSION_CREATED, null) != null &&\n+ Version.indexCreated(settings).before(Version.V_6_0_0_alpha1)) {\n+ defaultField = AllFieldMapper.NAME;\n+ } else {\n+ defaultField = \"*\";\n+ }\n+ return defaultField;\n+ };\n+ DEFAULT_FIELD_SETTING = new Setting<>(DEFAULT_FIELD_SETTING_KEY, defValue, Function.identity(), Property.IndexScope, Property.Dynamic);\n+ }\n public static final Setting<Boolean> QUERY_STRING_LENIENT_SETTING =\n Setting.boolSetting(\"index.query_string.lenient\", false, Property.IndexScope);\n public static final Setting<Boolean> QUERY_STRING_ANALYZE_WILDCARD =",
"filename": "core/src/main/java/org/elasticsearch/index/IndexSettings.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,7 @@\n import org.elasticsearch.index.query.support.QueryParsers;\n import org.elasticsearch.index.search.MatchQuery;\n import org.elasticsearch.index.search.MultiMatchQuery;\n-import org.elasticsearch.index.search.QueryStringQueryParser;\n+import org.elasticsearch.index.search.QueryParserHelper;\n \n import java.io.IOException;\n import java.util.HashMap;\n@@ -767,7 +767,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n }\n \n- Map<String, Float> newFieldsBoosts = QueryStringQueryParser.resolveMappingFields(context, fieldsBoosts);\n+ Map<String, Float> newFieldsBoosts = QueryParserHelper.resolveMappingFields(context, fieldsBoosts);\n return multiMatchQuery.parse(type, newFieldsBoosts, value, minimumShouldMatch);\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -34,7 +34,9 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.mapper.AllFieldMapper;\n import org.elasticsearch.index.query.support.QueryParsers;\n+import org.elasticsearch.index.search.QueryParserHelper;\n import org.elasticsearch.index.search.QueryStringQueryParser;\n import org.joda.time.DateTimeZone;\n \n@@ -304,7 +306,7 @@ public String defaultField() {\n */\n @Deprecated\n public QueryStringQueryBuilder useAllFields(Boolean useAllFields) {\n- if (useAllFields) {\n+ if (useAllFields != null && useAllFields) {\n this.defaultField = \"*\";\n }\n return this;\n@@ -938,20 +940,19 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n queryParser = new QueryStringQueryParser(context, defaultField, isLenient);\n }\n } else if (fieldsAndWeights.size() > 0) {\n- final Map<String, Float> resolvedFields = QueryStringQueryParser.resolveMappingFields(context, fieldsAndWeights);\n+ final Map<String, Float> resolvedFields = QueryParserHelper.resolveMappingFields(context, fieldsAndWeights);\n queryParser = new QueryStringQueryParser(context, resolvedFields, isLenient);\n } else {\n- // Expand to all fields if:\n- // - The index default search field is \"*\"\n- // - The index default search field is \"_all\" and _all is disabled\n- // TODO the index default search field should be \"*\" for new indices.\n- if (Regex.isMatchAllPattern(context.defaultField()) ||\n- (context.getMapperService().allEnabled() == false && \"_all\".equals(context.defaultField()))) {\n- // Automatically determine the fields from the index mapping.\n- // Automatically set leniency to \"true\" if unset so mismatched fields don't cause exceptions;\n+ String defaultField = context.defaultField();\n+ if (context.getMapperService().allEnabled() == false &&\n+ AllFieldMapper.NAME.equals(defaultField)) {\n+ // For indices created before 6.0 with _all disabled\n+ defaultField = \"*\";\n+ }\n+ if (Regex.isMatchAllPattern(defaultField)) {\n queryParser = new QueryStringQueryParser(context, lenient == null ? true : lenient);\n } else {\n- queryParser = new QueryStringQueryParser(context, context.defaultField(), isLenient);\n+ queryParser = new QueryStringQueryParser(context, defaultField, isLenient);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -31,16 +31,17 @@\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.query.SimpleQueryParser.Settings;\n-import org.elasticsearch.index.search.QueryStringQueryParser;\n+import org.elasticsearch.index.mapper.AllFieldMapper;\n+import org.elasticsearch.index.search.QueryParserHelper;\n+import org.elasticsearch.index.search.SimpleQueryStringQueryParser;\n+import org.elasticsearch.index.search.SimpleQueryStringQueryParser.Settings;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Objects;\n-import java.util.TreeMap;\n \n /**\n * SimpleQuery is a query parser that acts similar to a query_string query, but\n@@ -57,7 +58,7 @@\n * <li>'{@code ~}N' at the end of phrases specifies near/slop query: <tt>\"term1 term2\"~5</tt>\n * </ul>\n * <p>\n- * See: {@link SimpleQueryParser} for more information.\n+ * See: {@link SimpleQueryStringQueryParser} for more information.\n * <p>\n * This query supports these options:\n * <p>\n@@ -104,7 +105,8 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder<SimpleQuerySt\n private static final ParseField QUERY_FIELD = new ParseField(\"query\");\n private static final ParseField FIELDS_FIELD = new ParseField(\"fields\");\n private static final ParseField QUOTE_FIELD_SUFFIX_FIELD = new ParseField(\"quote_field_suffix\");\n- private static final ParseField ALL_FIELDS_FIELD = new ParseField(\"all_fields\");\n+ private static final ParseField ALL_FIELDS_FIELD = new ParseField(\"all_fields\")\n+ .withAllDeprecated(\"Set [fields] to `*` instead\");\n private static final ParseField GENERATE_SYNONYMS_PHRASE_QUERY = new ParseField(\"auto_generate_synonyms_phrase_query\");\n \n /** Query text to parse. */\n@@ -114,10 +116,8 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder<SimpleQuerySt\n * currently _ALL. Uses a TreeMap to hold the fields so boolean clauses are\n * always sorted in same order for generated Lucene query for easier\n * testing.\n- *\n- * Can be changed back to HashMap once https://issues.apache.org/jira/browse/LUCENE-6305 is fixed.\n */\n- private final Map<String, Float> fieldsAndWeights = new TreeMap<>();\n+ private Map<String, Float> fieldsAndWeights = new HashMap<>();\n /** If specified, analyzer to use to parse the query text, defaults to registered default in toQuery. */\n private String analyzer;\n /** Default operator to use for linking boolean clauses. Defaults to OR according to docs. */\n@@ -126,8 +126,6 @@ public class SimpleQueryStringBuilder extends AbstractQueryBuilder<SimpleQuerySt\n private String minimumShouldMatch;\n /** Any search flags to be used, ALL by default. 
*/\n private int flags = DEFAULT_FLAGS;\n- /** Flag specifying whether query should be forced to expand to all searchable fields */\n- private Boolean useAllFields;\n /** Whether or not the lenient flag has been set or not */\n private boolean lenientSet = false;\n \n@@ -173,7 +171,12 @@ public SimpleQueryStringBuilder(StreamInput in) throws IOException {\n minimumShouldMatch = in.readOptionalString();\n if (in.getVersion().onOrAfter(Version.V_5_1_1)) {\n settings.quoteFieldSuffix(in.readOptionalString());\n- useAllFields = in.readOptionalBoolean();\n+ if (in.getVersion().before(Version.V_6_0_0_beta2)) {\n+ Boolean useAllFields = in.readOptionalBoolean();\n+ if (useAllFields != null && useAllFields) {\n+ useAllFields(true);\n+ }\n+ }\n }\n if (in.getVersion().onOrAfter(Version.V_6_1_0)) {\n settings.autoGenerateSynonymsPhraseQuery(in.readBoolean());\n@@ -205,7 +208,13 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeOptionalString(minimumShouldMatch);\n if (out.getVersion().onOrAfter(Version.V_5_1_1)) {\n out.writeOptionalString(settings.quoteFieldSuffix());\n- out.writeOptionalBoolean(useAllFields);\n+ if (out.getVersion().before(Version.V_6_0_0_beta2)) {\n+ if (useAllFields()) {\n+ out.writeOptionalBoolean(true);\n+ } else {\n+ out.writeOptionalBoolean(null);\n+ }\n+ }\n }\n if (out.getVersion().onOrAfter(Version.V_6_1_0)) {\n out.writeBoolean(settings.autoGenerateSynonymsPhraseQuery());\n@@ -258,12 +267,19 @@ public String analyzer() {\n return this.analyzer;\n }\n \n+ @Deprecated\n public Boolean useAllFields() {\n- return useAllFields;\n+ return fieldsAndWeights.size() == 1 && fieldsAndWeights.keySet().stream().anyMatch(Regex::isMatchAllPattern);\n }\n \n+ /**\n+ * This setting is deprecated, set {@link #field(String)} to \"*\" instead.\n+ */\n+ @Deprecated\n public SimpleQueryStringBuilder useAllFields(Boolean useAllFields) {\n- this.useAllFields = useAllFields;\n+ if (useAllFields != null && useAllFields) {\n+ this.fieldsAndWeights = Collections.singletonMap(\"*\", 1.0f);\n+ }\n return this;\n }\n \n@@ -381,71 +397,41 @@ public boolean autoGenerateSynonymsPhraseQuery() {\n \n @Override\n protected Query doToQuery(QueryShardContext context) throws IOException {\n- // field names in builder can have wildcards etc, need to resolve them here\n- Map<String, Float> resolvedFieldsAndWeights = new TreeMap<>();\n-\n- if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) {\n- throw addValidationError(\"cannot use [all_fields] parameter in conjunction with [fields]\", null);\n- }\n-\n- // If explicitly required to use all fields, use all fields, OR:\n- // Automatically determine the fields (to replace the _all field) if all of the following are true:\n- // - The _all field is disabled,\n- // - and the default_field has not been changed in the settings\n- // - and no fields are specified in the request\n Settings newSettings = new Settings(settings);\n- if ((this.useAllFields != null && this.useAllFields) ||\n- (context.getMapperService().allEnabled() == false &&\n- \"_all\".equals(context.defaultField()) &&\n- this.fieldsAndWeights.isEmpty())) {\n- resolvedFieldsAndWeights = QueryStringQueryParser.resolveMappingField(context, \"*\", 1.0f,\n- false, false);\n- // Need to use lenient mode when using \"all-mode\" so exceptions aren't thrown due to mismatched types\n- newSettings.lenient(lenientSet ? 
settings.lenient() : true);\n+ final Map<String, Float> resolvedFieldsAndWeights;\n+ if (fieldsAndWeights.isEmpty() == false) {\n+ resolvedFieldsAndWeights = QueryParserHelper.resolveMappingFields(context, fieldsAndWeights);\n } else {\n- // Use the default field if no fields specified\n- if (fieldsAndWeights.isEmpty()) {\n- resolvedFieldsAndWeights.put(resolveIndexName(context.defaultField(), context), AbstractQueryBuilder.DEFAULT_BOOST);\n- } else {\n- for (Map.Entry<String, Float> fieldEntry : fieldsAndWeights.entrySet()) {\n- if (Regex.isSimpleMatchPattern(fieldEntry.getKey())) {\n- for (String fieldName : context.getMapperService().simpleMatchToIndexNames(fieldEntry.getKey())) {\n- resolvedFieldsAndWeights.put(fieldName, fieldEntry.getValue());\n- }\n- } else {\n- resolvedFieldsAndWeights.put(resolveIndexName(fieldEntry.getKey(), context), fieldEntry.getValue());\n- }\n- }\n+ String defaultField = context.defaultField();\n+ if (context.getMapperService().allEnabled() == false &&\n+ AllFieldMapper.NAME.equals(defaultField)) {\n+ // For indices created before 6.0 with _all disabled\n+ defaultField = \"*\";\n+ }\n+ boolean isAllField = Regex.isMatchAllPattern(defaultField);\n+ if (isAllField) {\n+ newSettings.lenient(lenientSet ? settings.lenient() : true);\n }\n+ resolvedFieldsAndWeights = QueryParserHelper.resolveMappingField(context, defaultField, 1.0f,\n+ false, !isAllField);\n }\n \n- // Use standard analyzer by default if none specified\n- Analyzer luceneAnalyzer;\n+ final SimpleQueryStringQueryParser sqp;\n if (analyzer == null) {\n- luceneAnalyzer = context.getMapperService().searchAnalyzer();\n+ sqp = new SimpleQueryStringQueryParser(resolvedFieldsAndWeights, flags, newSettings, context);\n } else {\n- luceneAnalyzer = context.getIndexAnalyzers().get(analyzer);\n+ Analyzer luceneAnalyzer = context.getIndexAnalyzers().get(analyzer);\n if (luceneAnalyzer == null) {\n throw new QueryShardException(context, \"[\" + SimpleQueryStringBuilder.NAME + \"] analyzer [\" + analyzer\n + \"] not found\");\n }\n-\n+ sqp = new SimpleQueryStringQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, newSettings, context);\n }\n-\n- SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, newSettings, context);\n sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur());\n Query query = sqp.parse(queryText);\n return Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch);\n }\n \n- private static String resolveIndexName(String fieldName, QueryShardContext context) {\n- MappedFieldType fieldType = context.fieldMapper(fieldName);\n- if (fieldType != null) {\n- return fieldType.name();\n- }\n- return fieldName;\n- }\n-\n @Override\n protected void doXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(NAME);\n@@ -477,9 +463,6 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep\n if (minimumShouldMatch != null) {\n builder.field(MINIMUM_SHOULD_MATCH_FIELD.getPreferredName(), minimumShouldMatch);\n }\n- if (useAllFields != null) {\n- builder.field(ALL_FIELDS_FIELD.getPreferredName(), useAllFields);\n- }\n builder.field(GENERATE_SYNONYMS_PHRASE_QUERY.getPreferredName(), settings.autoGenerateSynonymsPhraseQuery());\n printBoostAndQueryName(builder);\n builder.endObject();\n@@ -498,7 +481,6 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw\n Boolean lenient = null;\n boolean analyzeWildcard = SimpleQueryStringBuilder.DEFAULT_ANALYZE_WILDCARD;\n 
String quoteFieldSuffix = null;\n- Boolean useAllFields = null;\n boolean autoGenerateSynonymsPhraseQuery = true;\n \n XContentParser.Token token;\n@@ -564,7 +546,7 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw\n } else if (QUOTE_FIELD_SUFFIX_FIELD.match(currentFieldName)) {\n quoteFieldSuffix = parser.textOrNull();\n } else if (ALL_FIELDS_FIELD.match(currentFieldName)) {\n- useAllFields = parser.booleanValue();\n+ // Ignore deprecated option\n } else if (GENERATE_SYNONYMS_PHRASE_QUERY.match(currentFieldName)) {\n autoGenerateSynonymsPhraseQuery = parser.booleanValue();\n } else {\n@@ -582,19 +564,13 @@ public static SimpleQueryStringBuilder fromXContent(XContentParser parser) throw\n throw new ParsingException(parser.getTokenLocation(), \"[\" + SimpleQueryStringBuilder.NAME + \"] query text missing\");\n }\n \n- if ((useAllFields != null && useAllFields) && (fieldsAndWeights.size() != 0)) {\n- throw new ParsingException(parser.getTokenLocation(),\n- \"cannot use [all_fields] parameter in conjunction with [fields]\");\n- }\n-\n SimpleQueryStringBuilder qb = new SimpleQueryStringBuilder(queryBody);\n qb.boost(boost).fields(fieldsAndWeights).analyzer(analyzerName).queryName(queryName).minimumShouldMatch(minimumShouldMatch);\n qb.flags(flags).defaultOperator(defaultOperator);\n if (lenient != null) {\n qb.lenient(lenient);\n }\n qb.analyzeWildcard(analyzeWildcard).boost(boost).quoteFieldSuffix(quoteFieldSuffix);\n- qb.useAllFields(useAllFields);\n qb.autoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery);\n return qb;\n }\n@@ -606,7 +582,7 @@ public String getWriteableName() {\n \n @Override\n protected int doHashCode() {\n- return Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags, useAllFields);\n+ return Objects.hash(fieldsAndWeights, analyzer, defaultOperator, queryText, minimumShouldMatch, settings, flags);\n }\n \n @Override\n@@ -615,8 +591,6 @@ protected boolean doEquals(SimpleQueryStringBuilder other) {\n && Objects.equals(defaultOperator, other.defaultOperator) && Objects.equals(queryText, other.queryText)\n && Objects.equals(minimumShouldMatch, other.minimumShouldMatch)\n && Objects.equals(settings, other.settings)\n- && (flags == other.flags)\n- && (useAllFields == other.useAllFields);\n+ && (flags == other.flags);\n }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.query;\n \n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.index.search.SimpleQueryStringQueryParser;\n \n import java.util.Locale;\n \n@@ -28,18 +29,18 @@\n public enum SimpleQueryStringFlag {\n ALL(-1),\n NONE(0),\n- AND(SimpleQueryParser.AND_OPERATOR),\n- NOT(SimpleQueryParser.NOT_OPERATOR),\n- OR(SimpleQueryParser.OR_OPERATOR),\n- PREFIX(SimpleQueryParser.PREFIX_OPERATOR),\n- PHRASE(SimpleQueryParser.PHRASE_OPERATOR),\n- PRECEDENCE(SimpleQueryParser.PRECEDENCE_OPERATORS),\n- ESCAPE(SimpleQueryParser.ESCAPE_OPERATOR),\n- WHITESPACE(SimpleQueryParser.WHITESPACE_OPERATOR),\n- FUZZY(SimpleQueryParser.FUZZY_OPERATOR),\n+ AND(SimpleQueryStringQueryParser.AND_OPERATOR),\n+ NOT(SimpleQueryStringQueryParser.NOT_OPERATOR),\n+ OR(SimpleQueryStringQueryParser.OR_OPERATOR),\n+ PREFIX(SimpleQueryStringQueryParser.PREFIX_OPERATOR),\n+ PHRASE(SimpleQueryStringQueryParser.PHRASE_OPERATOR),\n+ PRECEDENCE(SimpleQueryStringQueryParser.PRECEDENCE_OPERATORS),\n+ ESCAPE(SimpleQueryStringQueryParser.ESCAPE_OPERATOR),\n+ WHITESPACE(SimpleQueryStringQueryParser.WHITESPACE_OPERATOR),\n+ FUZZY(SimpleQueryStringQueryParser.FUZZY_OPERATOR),\n // NEAR and SLOP are synonymous, since \"slop\" is a more familiar term than \"near\"\n- NEAR(SimpleQueryParser.NEAR_OPERATOR),\n- SLOP(SimpleQueryParser.NEAR_OPERATOR);\n+ NEAR(SimpleQueryStringQueryParser.NEAR_OPERATOR),\n+ SLOP(SimpleQueryStringQueryParser.NEAR_OPERATOR);\n \n final int value;\n ",
"filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringFlag.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,6 @@\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.FuzzyQuery;\n-import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.MultiPhraseQuery;\n import org.apache.lucene.search.MultiTermQuery;\n import org.apache.lucene.search.PhraseQuery;",
"filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -29,7 +29,6 @@\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.AbstractQueryBuilder;\n@@ -59,7 +58,7 @@ public MultiMatchQuery(QueryShardContext context) {\n private Query parseAndApply(Type type, String fieldName, Object value, String minimumShouldMatch, Float boostValue) throws IOException {\n Query query = parse(type, fieldName, value);\n query = Queries.maybeApplyMinimumShouldMatch(query, minimumShouldMatch);\n- if (query != null && boostValue != null && boostValue != AbstractQueryBuilder.DEFAULT_BOOST) {\n+ if (query != null && boostValue != null && boostValue != AbstractQueryBuilder.DEFAULT_BOOST && query instanceof MatchNoDocsQuery == false) {\n query = new BoostQuery(query, boostValue);\n }\n return query;\n@@ -268,7 +267,7 @@ static Query blendTerms(QueryShardContext context, BytesRef[] values, Float comm\n blendedBoost[i] = boost;\n i++;\n } else {\n- if (boost != 1f) {\n+ if (boost != 1f && query instanceof MatchNoDocsQuery == false) {\n query = new BoostQuery(query, boost);\n }\n queries.add(query);",
"filename": "core/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,165 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.search;\n+\n+import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.IpFieldMapper;\n+import org.elasticsearch.index.mapper.KeywordFieldMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MetadataFieldMapper;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.index.mapper.ScaledFloatFieldMapper;\n+import org.elasticsearch.index.mapper.TextFieldMapper;\n+import org.elasticsearch.index.query.QueryShardContext;\n+\n+import java.util.Collection;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+import java.util.Map;\n+import java.util.Set;\n+\n+/**\n+ * Helpers to extract and expand field names from a mapping\n+ */\n+public final class QueryParserHelper {\n+ // Mapping types the \"all-ish\" query can be executed against\n+ private static final Set<String> ALLOWED_QUERY_MAPPER_TYPES;\n+\n+ static {\n+ ALLOWED_QUERY_MAPPER_TYPES = new HashSet<>();\n+ ALLOWED_QUERY_MAPPER_TYPES.add(DateFieldMapper.CONTENT_TYPE);\n+ ALLOWED_QUERY_MAPPER_TYPES.add(IpFieldMapper.CONTENT_TYPE);\n+ ALLOWED_QUERY_MAPPER_TYPES.add(KeywordFieldMapper.CONTENT_TYPE);\n+ for (NumberFieldMapper.NumberType nt : NumberFieldMapper.NumberType.values()) {\n+ ALLOWED_QUERY_MAPPER_TYPES.add(nt.typeName());\n+ }\n+ ALLOWED_QUERY_MAPPER_TYPES.add(ScaledFloatFieldMapper.CONTENT_TYPE);\n+ ALLOWED_QUERY_MAPPER_TYPES.add(TextFieldMapper.CONTENT_TYPE);\n+ }\n+\n+ private QueryParserHelper() {}\n+\n+ /**\n+ * Get a {@link FieldMapper} associated with a field name or null.\n+ * @param mapperService The mapper service where to find the mapping.\n+ * @param field The field name to search.\n+ */\n+ public static FieldMapper getFieldMapper(MapperService mapperService, String field) {\n+ for (DocumentMapper mapper : mapperService.docMappers(true)) {\n+ FieldMapper fieldMapper = mapper.mappers().smartNameFieldMapper(field);\n+ if (fieldMapper != null) {\n+ return fieldMapper;\n+ }\n+ }\n+ return null;\n+ }\n+\n+ public static Map<String, Float> resolveMappingFields(QueryShardContext context,\n+ Map<String, Float> fieldsAndWeights) {\n+ return resolveMappingFields(context, fieldsAndWeights, null);\n+ }\n+\n+ /**\n+ * Resolve all the field names and patterns present in the provided map with the\n+ * {@link QueryShardContext} and returns a new map containing all the expanded fields with their original boost.\n+ * @param context The context of the query.\n+ * @param fieldsAndWeights The map of 
fields and weights to expand.\n+ * @param fieldSuffix The suffix name to add to the expanded field names if a mapping exists for that name.\n+ * The original name of the field is kept if adding the suffix to the field name does not point to a valid field\n+ * in the mapping.\n+ */\n+ public static Map<String, Float> resolveMappingFields(QueryShardContext context,\n+ Map<String, Float> fieldsAndWeights,\n+ String fieldSuffix) {\n+ Map<String, Float> resolvedFields = new HashMap<>();\n+ for (Map.Entry<String, Float> fieldEntry : fieldsAndWeights.entrySet()) {\n+ boolean allField = Regex.isMatchAllPattern(fieldEntry.getKey());\n+ boolean multiField = Regex.isSimpleMatchPattern(fieldEntry.getKey());\n+ float weight = fieldEntry.getValue() == null ? 1.0f : fieldEntry.getValue();\n+ Map<String, Float> fieldMap = resolveMappingField(context, fieldEntry.getKey(), weight,\n+ !multiField, !allField, fieldSuffix);\n+ resolvedFields.putAll(fieldMap);\n+ }\n+ return resolvedFields;\n+ }\n+\n+ /**\n+ * Resolves the provided pattern or field name from the {@link QueryShardContext} and return a map of\n+ * the expanded fields with their original boost.\n+ * @param context The context of the query\n+ * @param fieldOrPattern The field name or the pattern to resolve\n+ * @param weight The weight for the field\n+ * @param acceptAllTypes Whether all field type should be added when a pattern is expanded.\n+ * If false, only {@link #ALLOWED_QUERY_MAPPER_TYPES} are accepted and other field types\n+ * are discarded from the query.\n+ * @param acceptMetadataField Whether metadata fields should be added when a pattern is expanded.\n+ */\n+ public static Map<String, Float> resolveMappingField(QueryShardContext context, String fieldOrPattern, float weight,\n+ boolean acceptAllTypes, boolean acceptMetadataField) {\n+ return resolveMappingField(context, fieldOrPattern, weight, acceptAllTypes, acceptMetadataField, null);\n+ }\n+\n+ /**\n+ * Resolves the provided pattern or field name from the {@link QueryShardContext} and return a map of\n+ * the expanded fields with their original boost.\n+ * @param context The context of the query\n+ * @param fieldOrPattern The field name or the pattern to resolve\n+ * @param weight The weight for the field\n+ * @param acceptAllTypes Whether all field type should be added when a pattern is expanded.\n+ * If false, only {@link #ALLOWED_QUERY_MAPPER_TYPES} are accepted and other field types\n+ * are discarded from the query.\n+ * @param acceptMetadataField Whether metadata fields should be added when a pattern is expanded.\n+ * @param fieldSuffix The suffix name to add to the expanded field names if a mapping exists for that name.\n+ * The original name of the field is kept if adding the suffix to the field name does not point to a valid field\n+ * in the mapping.\n+ */\n+ public static Map<String, Float> resolveMappingField(QueryShardContext context, String fieldOrPattern, float weight,\n+ boolean acceptAllTypes, boolean acceptMetadataField, String fieldSuffix) {\n+ Collection<String> allFields = context.simpleMatchToIndexNames(fieldOrPattern);\n+ Map<String, Float> fields = new HashMap<>();\n+ for (String fieldName : allFields) {\n+ if (fieldSuffix != null && context.fieldMapper(fieldName + fieldSuffix) != null) {\n+ fieldName = fieldName + fieldSuffix;\n+ }\n+ FieldMapper mapper = getFieldMapper(context.getMapperService(), fieldName);\n+ if (mapper == null) {\n+ // Unmapped fields are not ignored\n+ fields.put(fieldOrPattern, weight);\n+ continue;\n+ }\n+ if (acceptMetadataField == 
false && mapper instanceof MetadataFieldMapper) {\n+ // Ignore metadata fields\n+ continue;\n+ }\n+ // Ignore fields that are not in the allowed mapper types. Some\n+ // types do not support term queries, and thus we cannot generate\n+ // a special query for them.\n+ String mappingType = mapper.fieldType().typeName();\n+ if (acceptAllTypes == false && ALLOWED_QUERY_MAPPER_TYPES.contains(mappingType) == false) {\n+ continue;\n+ }\n+ fields.put(fieldName, weight);\n+ }\n+ return fields;\n+ }\n+}",
"filename": "core/src/main/java/org/elasticsearch/index/search/QueryParserHelper.java",
"status": "added"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.TokenStream;\n-import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n import org.apache.lucene.index.Term;\n@@ -49,18 +48,10 @@\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.index.mapper.AllFieldMapper;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.FieldNamesFieldMapper;\n-import org.elasticsearch.index.mapper.IpFieldMapper;\n-import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.mapper.MetadataFieldMapper;\n-import org.elasticsearch.index.mapper.NumberFieldMapper;\n-import org.elasticsearch.index.mapper.ScaledFloatFieldMapper;\n import org.elasticsearch.index.mapper.StringFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.index.query.ExistsQueryBuilder;\n import org.elasticsearch.index.query.MultiMatchQueryBuilder;\n import org.elasticsearch.index.query.QueryShardContext;\n@@ -69,17 +60,14 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Collection;\n import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n-import java.util.Set;\n \n import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded;\n import static org.elasticsearch.common.lucene.search.Queries.newLenientFieldQuery;\n import static org.elasticsearch.common.lucene.search.Queries.newUnmappedFieldQuery;\n+import static org.elasticsearch.index.search.QueryParserHelper.resolveMappingField;\n \n /**\n * A {@link XQueryParser} that uses the {@link MapperService} in order to build smarter\n@@ -88,22 +76,8 @@\n * to assemble the result logically.\n */\n public class QueryStringQueryParser extends XQueryParser {\n- // Mapping types the \"all-ish\" query can be executed against\n- private static final Set<String> ALLOWED_QUERY_MAPPER_TYPES;\n private static final String EXISTS_FIELD = \"_exists_\";\n \n- static {\n- ALLOWED_QUERY_MAPPER_TYPES = new HashSet<>();\n- ALLOWED_QUERY_MAPPER_TYPES.add(DateFieldMapper.CONTENT_TYPE);\n- ALLOWED_QUERY_MAPPER_TYPES.add(IpFieldMapper.CONTENT_TYPE);\n- ALLOWED_QUERY_MAPPER_TYPES.add(KeywordFieldMapper.CONTENT_TYPE);\n- for (NumberFieldMapper.NumberType nt : NumberFieldMapper.NumberType.values()) {\n- ALLOWED_QUERY_MAPPER_TYPES.add(nt.typeName());\n- }\n- ALLOWED_QUERY_MAPPER_TYPES.add(ScaledFloatFieldMapper.CONTENT_TYPE);\n- ALLOWED_QUERY_MAPPER_TYPES.add(TextFieldMapper.CONTENT_TYPE);\n- }\n-\n private final QueryShardContext context;\n private final Map<String, Float> fieldsAndWeights;\n private final boolean lenient;\n@@ -162,8 +136,9 @@ public QueryStringQueryParser(QueryShardContext context, Map<String, Float> fiel\n * @param lenient If set to `true` will cause format based failures (like providing text to a numeric field) to be ignored.\n */\n public QueryStringQueryParser(QueryShardContext context, boolean lenient) {\n- this(context, \"*\", resolveMappingField(context, \"*\", 1.0f, false, false),\n- 
lenient, context.getMapperService().searchAnalyzer());\n+ this(context, \"*\",\n+ resolveMappingField(context, \"*\", 1.0f, false, false),\n+ lenient, context.getMapperService().searchAnalyzer());\n }\n \n private QueryStringQueryParser(QueryShardContext context, String defaultField,\n@@ -177,69 +152,6 @@ private QueryStringQueryParser(QueryShardContext context, String defaultField,\n this.lenient = lenient;\n }\n \n-\n- private static FieldMapper getFieldMapper(MapperService mapperService, String field) {\n- for (DocumentMapper mapper : mapperService.docMappers(true)) {\n- FieldMapper fieldMapper = mapper.mappers().smartNameFieldMapper(field);\n- if (fieldMapper != null) {\n- return fieldMapper;\n- }\n- }\n- return null;\n- }\n-\n- public static Map<String, Float> resolveMappingFields(QueryShardContext context, Map<String, Float> fieldsAndWeights) {\n- Map<String, Float> resolvedFields = new HashMap<>();\n- for (Map.Entry<String, Float> fieldEntry : fieldsAndWeights.entrySet()) {\n- boolean allField = Regex.isMatchAllPattern(fieldEntry.getKey());\n- boolean multiField = Regex.isSimpleMatchPattern(fieldEntry.getKey());\n- float weight = fieldEntry.getValue() == null ? 1.0f : fieldEntry.getValue();\n- Map<String, Float> fieldMap = resolveMappingField(context, fieldEntry.getKey(), weight, !multiField, !allField);\n- resolvedFields.putAll(fieldMap);\n- }\n- return resolvedFields;\n- }\n-\n- public static Map<String, Float> resolveMappingField(QueryShardContext context, String field, float weight,\n- boolean acceptMetadataField, boolean acceptAllTypes) {\n- return resolveMappingField(context, field, weight, acceptMetadataField, acceptAllTypes, false, null);\n- }\n-\n- /**\n- * Given a shard context, return a map of all fields in the mappings that\n- * can be queried. The map will be field name to a float of 1.0f.\n- */\n- private static Map<String, Float> resolveMappingField(QueryShardContext context, String field, float weight,\n- boolean acceptAllTypes, boolean acceptMetadataField,\n- boolean quoted, String quoteFieldSuffix) {\n- Collection<String> allFields = context.simpleMatchToIndexNames(field);\n- Map<String, Float> fields = new HashMap<>();\n- for (String fieldName : allFields) {\n- if (quoted && quoteFieldSuffix != null && context.fieldMapper(fieldName + quoteFieldSuffix) != null) {\n- fieldName = fieldName + quoteFieldSuffix;\n- }\n- FieldMapper mapper = getFieldMapper(context.getMapperService(), fieldName);\n- if (mapper == null) {\n- // Unmapped fields are not ignored\n- fields.put(field, weight);\n- continue;\n- }\n- if (acceptMetadataField == false && mapper instanceof MetadataFieldMapper) {\n- // Ignore metadata fields\n- continue;\n- }\n- // Ignore fields that are not in the allowed mapper types. 
Some\n- // types do not support term queries, and thus we cannot generate\n- // a special query for them.\n- String mappingType = mapper.fieldType().typeName();\n- if (acceptAllTypes == false && ALLOWED_QUERY_MAPPER_TYPES.contains(mappingType) == false) {\n- continue;\n- }\n- fields.put(fieldName, weight);\n- }\n- return fields;\n- }\n-\n @Override\n public void setDefaultOperator(Operator op) {\n super.setDefaultOperator(op);\n@@ -343,7 +255,7 @@ private Map<String, Float> extractMultiFields(String field, boolean quoted) {\n boolean multiFields = Regex.isSimpleMatchPattern(field);\n // Filters unsupported fields if a pattern is requested\n // Filters metadata fields if all fields are requested\n- return resolveMappingField(context, field, 1.0f, !allFields, !multiFields, quoted, quoteFieldSuffix);\n+ return resolveMappingField(context, field, 1.0f, !allFields, !multiFields, quoted ? quoteFieldSuffix : null);\n } else {\n return fieldsAndWeights;\n }\n@@ -577,22 +489,20 @@ protected Query getPrefixQuery(String field, String termStr) throws ParseExcepti\n }\n \n private Query getPrefixQuerySingle(String field, String termStr) throws ParseException {\n- currentFieldType = null;\n Analyzer oldAnalyzer = getAnalyzer();\n try {\n currentFieldType = context.fieldMapper(field);\n- if (currentFieldType != null) {\n- setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer);\n- Query query = null;\n- if (currentFieldType instanceof StringFieldType == false) {\n- query = currentFieldType.prefixQuery(termStr, getMultiTermRewriteMethod(), context);\n- }\n- if (query == null) {\n- query = getPossiblyAnalyzedPrefixQuery(currentFieldType.name(), termStr);\n- }\n- return query;\n+ if (currentFieldType == null) {\n+ return newUnmappedFieldQuery(field);\n+ }\n+ setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer);\n+ Query query = null;\n+ if (currentFieldType instanceof StringFieldType == false) {\n+ query = currentFieldType.prefixQuery(termStr, getMultiTermRewriteMethod(), context);\n+ } else {\n+ query = getPossiblyAnalyzedPrefixQuery(currentFieldType.name(), termStr);\n }\n- return getPossiblyAnalyzedPrefixQuery(field, termStr);\n+ return query;\n } catch (RuntimeException e) {\n if (lenient) {\n return newLenientFieldQuery(field, e);\n@@ -784,12 +694,12 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc\n Analyzer oldAnalyzer = getAnalyzer();\n try {\n currentFieldType = queryBuilder.context.fieldMapper(field);\n- if (currentFieldType != null) {\n- setAnalyzer(forceAnalyzer == null ? queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer);\n- Query query = super.getRegexpQuery(field, termStr);\n- return query;\n+ if (currentFieldType == null) {\n+ return newUnmappedFieldQuery(field);\n }\n- return super.getRegexpQuery(field, termStr);\n+ setAnalyzer(forceAnalyzer == null ? 
queryBuilder.context.getSearchAnalyzer(currentFieldType) : forceAnalyzer);\n+ Query query = super.getRegexpQuery(field, termStr);\n+ return query;\n } catch (RuntimeException e) {\n if (lenient) {\n return newLenientFieldQuery(field, e);\n@@ -863,30 +773,4 @@ public Query parse(String query) throws ParseException {\n }\n return super.parse(query);\n }\n-\n- /**\n- * Checks if graph analysis should be enabled for the field depending\n- * on the provided {@link Analyzer}\n- */\n- protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field,\n- String queryText, boolean quoted, int phraseSlop) {\n- assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST;\n-\n- // Use the analyzer to get all the tokens, and then build an appropriate\n- // query based on the analysis chain.\n- try (TokenStream source = analyzer.tokenStream(field, queryText)) {\n- if (source.hasAttribute(DisableGraphAttribute.class)) {\n- /**\n- * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid\n- * paths explosion. See {@link ShingleTokenFilterFactory} for details.\n- */\n- setEnableGraphQueries(false);\n- }\n- Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop);\n- setEnableGraphQueries(true);\n- return query;\n- } catch (IOException e) {\n- throw new RuntimeException(\"Error analyzing query text\", e);\n- }\n- }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.index.query;\n \n+import org.apache.lucene.analysis.MockSynonymAnalyzer;\n+import org.apache.lucene.analysis.standard.StandardAnalyzer;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n@@ -29,27 +31,29 @@\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.PrefixQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.SynonymQuery;\n import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.search.spans.SpanNearQuery;\n+import org.apache.lucene.search.spans.SpanOrQuery;\n+import org.apache.lucene.search.spans.SpanQuery;\n+import org.apache.lucene.search.spans.SpanTermQuery;\n import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.Version;\n-import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.common.ParsingException;\n-import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.search.SimpleQueryStringQueryParser;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.AbstractQueryTestCase;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.HashSet;\n-import java.util.Iterator;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n \n import static org.hamcrest.Matchers.anyOf;\n-import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.either;\n import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -85,13 +89,13 @@ protected SimpleQueryStringBuilder doCreateTestQueryBuilder() {\n }\n }\n \n- int fieldCount = randomIntBetween(0, 10);\n+ int fieldCount = randomIntBetween(0, 2);\n Map<String, Float> fields = new HashMap<>();\n for (int i = 0; i < fieldCount; i++) {\n if (randomBoolean()) {\n- fields.put(randomAlphaOfLengthBetween(1, 10), AbstractQueryBuilder.DEFAULT_BOOST);\n+ fields.put(STRING_FIELD_NAME, AbstractQueryBuilder.DEFAULT_BOOST);\n } else {\n- fields.put(randomBoolean() ? 
STRING_FIELD_NAME : randomAlphaOfLengthBetween(1, 10), 2.0f / randomIntBetween(1, 20));\n+ fields.put(STRING_FIELD_NAME_2, 2.0f / randomIntBetween(1, 20));\n }\n }\n result.fields(fields);\n@@ -234,52 +238,35 @@ public void testDefaultFieldParsing() throws IOException {\n protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query query, SearchContext context) throws IOException {\n assertThat(query, notNullValue());\n \n- if (\"\".equals(queryBuilder.value())) {\n+ if (queryBuilder.value().isEmpty()) {\n assertThat(query, instanceOf(MatchNoDocsQuery.class));\n } else if (queryBuilder.fields().size() > 1) {\n- assertThat(query, anyOf(instanceOf(BooleanQuery.class), instanceOf(DisjunctionMaxQuery.class)));\n- if (query instanceof BooleanQuery) {\n- BooleanQuery boolQuery = (BooleanQuery) query;\n- for (BooleanClause clause : boolQuery.clauses()) {\n- if (clause.getQuery() instanceof TermQuery) {\n- TermQuery inner = (TermQuery) clause.getQuery();\n- assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT)));\n- }\n- }\n- assertThat(boolQuery.clauses().size(), equalTo(queryBuilder.fields().size()));\n- Iterator<Map.Entry<String, Float>> fieldsIterator = queryBuilder.fields().entrySet().iterator();\n- for (BooleanClause booleanClause : boolQuery) {\n- Map.Entry<String, Float> field = fieldsIterator.next();\n- assertTermOrBoostQuery(booleanClause.getQuery(), field.getKey(), queryBuilder.value(), field.getValue());\n- }\n- if (queryBuilder.minimumShouldMatch() != null) {\n- assertThat(boolQuery.getMinimumNumberShouldMatch(), greaterThan(0));\n- }\n- } else if (query instanceof DisjunctionMaxQuery) {\n- DisjunctionMaxQuery maxQuery = (DisjunctionMaxQuery) query;\n- for (Query disjunct : maxQuery.getDisjuncts()) {\n- if (disjunct instanceof TermQuery) {\n- TermQuery inner = (TermQuery) disjunct;\n- assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT)));\n- }\n+ assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n+ DisjunctionMaxQuery maxQuery = (DisjunctionMaxQuery) query;\n+ for (Query disjunct : maxQuery.getDisjuncts()) {\n+ assertThat(disjunct, either(instanceOf(TermQuery.class))\n+ .or(instanceOf(BoostQuery.class))\n+ .or(instanceOf(MatchNoDocsQuery.class)));\n+ Query termQuery = disjunct;\n+ if (disjunct instanceof BoostQuery) {\n+ termQuery = ((BoostQuery) disjunct).getQuery();\n }\n- assertThat(maxQuery.getDisjuncts().size(), equalTo(queryBuilder.fields().size()));\n- Iterator<Map.Entry<String, Float>> fieldsIterator = queryBuilder.fields().entrySet().iterator();\n- for (Query disjunct : maxQuery) {\n- Map.Entry<String, Float> field = fieldsIterator.next();\n- assertTermOrBoostQuery(disjunct, field.getKey(), queryBuilder.value(), field.getValue());\n+ if (termQuery instanceof TermQuery) {\n+ TermQuery inner = (TermQuery) termQuery;\n+ assertThat(inner.getTerm().bytes().toString(), is(inner.getTerm().bytes().toString().toLowerCase(Locale.ROOT)));\n+ } else {\n+ assertThat(termQuery, instanceOf(MatchNoDocsQuery.class));\n }\n }\n } else if (queryBuilder.fields().size() == 1) {\n Map.Entry<String, Float> field = queryBuilder.fields().entrySet().iterator().next();\n assertTermOrBoostQuery(query, field.getKey(), queryBuilder.value(), field.getValue());\n } else if (queryBuilder.fields().size() == 0) {\n- MapperService ms = context.mapperService();\n- if (ms.allEnabled()) {\n- assertTermQuery(query, MetaData.ALL, queryBuilder.value());\n- } else {\n- 
assertThat(query.getClass(),\n- anyOf(equalTo(BooleanQuery.class), equalTo(DisjunctionMaxQuery.class), equalTo(MatchNoDocsQuery.class)));\n+ assertThat(query, either(instanceOf(DisjunctionMaxQuery.class)).or(instanceOf(MatchNoDocsQuery.class)));\n+ if (query instanceof DisjunctionMaxQuery) {\n+ for (Query disjunct : (DisjunctionMaxQuery) query) {\n+ assertThat(disjunct, either(instanceOf(TermQuery.class)).or(instanceOf(MatchNoDocsQuery.class)));\n+ }\n }\n } else {\n fail(\"Encountered lucene query type we do not have a validation implementation for in our \"\n@@ -335,7 +322,7 @@ public void testFromJson() throws IOException {\n \"{\\n\" +\n \" \\\"simple_query_string\\\" : {\\n\" +\n \" \\\"query\\\" : \\\"\\\\\\\"fried eggs\\\\\\\" +(eggplant | potato) -frittata\\\",\\n\" +\n- \" \\\"fields\\\" : [ \\\"_all^1.0\\\", \\\"body^5.0\\\" ],\\n\" +\n+ \" \\\"fields\\\" : [ \\\"body^5.0\\\" ],\\n\" +\n \" \\\"analyzer\\\" : \\\"snowball\\\",\\n\" +\n \" \\\"flags\\\" : -1,\\n\" +\n \" \\\"default_operator\\\" : \\\"and\\\",\\n\" +\n@@ -351,12 +338,13 @@ public void testFromJson() throws IOException {\n checkGeneratedJson(json, parsed);\n \n assertEquals(json, \"\\\"fried eggs\\\" +(eggplant | potato) -frittata\", parsed.value());\n- assertEquals(json, 2, parsed.fields().size());\n+ assertEquals(json, 1, parsed.fields().size());\n assertEquals(json, \"snowball\", parsed.analyzer());\n assertEquals(json, \".quote\", parsed.quoteFieldSuffix());\n }\n \n public void testMinimumShouldMatch() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n QueryShardContext shardContext = createShardContext();\n int numberOfTerms = randomIntBetween(1, 4);\n StringBuilder queryString = new StringBuilder();\n@@ -369,7 +357,7 @@ public void testMinimumShouldMatch() throws IOException {\n }\n int numberOfFields = randomIntBetween(1, 4);\n for (int i = 0; i < numberOfFields; i++) {\n- simpleQueryStringBuilder.field(\"f\" + i);\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n }\n int percent = randomIntBetween(1, 100);\n simpleQueryStringBuilder.minimumShouldMatch(percent + \"%\");\n@@ -379,7 +367,7 @@ public void testMinimumShouldMatch() throws IOException {\n if (numberOfFields * numberOfTerms == 1) {\n assertThat(query, instanceOf(TermQuery.class));\n } else if (numberOfTerms == 1) {\n- assertThat(query, instanceOf(DisjunctionMaxQuery.class));\n+ assertThat(query, either(instanceOf(DisjunctionMaxQuery.class)).or(instanceOf(TermQuery.class)));\n } else {\n assertThat(query, instanceOf(BooleanQuery.class));\n BooleanQuery boolQuery = (BooleanQuery) query;\n@@ -403,6 +391,7 @@ public void testIndexMetaField() throws IOException {\n }\n \n public void testExpandedTerms() throws Exception {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n // Prefix\n Query query = new SimpleQueryStringBuilder(\"aBc*\")\n .field(STRING_FIELD_NAME)\n@@ -430,18 +419,122 @@ public void testExpandedTerms() throws Exception {\n assertEquals(expected, query);\n }\n \n- public void testAllFieldsWithFields() throws IOException {\n- String json =\n- \"{\\n\" +\n- \" \\\"simple_query_string\\\" : {\\n\" +\n- \" \\\"query\\\" : \\\"this that thus\\\",\\n\" +\n- \" \\\"fields\\\" : [\\\"foo\\\"],\\n\" +\n- \" \\\"all_fields\\\" : true\\n\" +\n- \" }\\n\" +\n- \"}\";\n+ public void testAnalyzeWildcard() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 
0);\n+ SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings();\n+ settings.analyzeWildcard(true);\n+ SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new StandardAnalyzer(),\n+ Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext());\n+ for (Operator op : Operator.values()) {\n+ BooleanClause.Occur defaultOp = op.toBooleanClauseOccur();\n+ parser.setDefaultOperator(defaultOp);\n+ Query query = parser.parse(\"first foo-bar-foobar* last\");\n+ Query expectedQuery =\n+ new BooleanQuery.Builder()\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"first\")), defaultOp))\n+ .add(new BooleanQuery.Builder()\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"foo\")), defaultOp))\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")), defaultOp))\n+ .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, \"foobar\")), defaultOp))\n+ .build(), defaultOp)\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"last\")), defaultOp))\n+ .build();\n+ assertThat(query, equalTo(expectedQuery));\n+ }\n+ }\n \n- ParsingException e = expectThrows(ParsingException.class, () -> parseQuery(json));\n- assertThat(e.getMessage(),\n- containsString(\"cannot use [all_fields] parameter in conjunction with [fields]\"));\n+ public void testAnalyzerWildcardWithSynonyms() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings();\n+ settings.analyzeWildcard(true);\n+ SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockRepeatAnalyzer(),\n+ Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext());\n+ for (Operator op : Operator.values()) {\n+ BooleanClause.Occur defaultOp = op.toBooleanClauseOccur();\n+ parser.setDefaultOperator(defaultOp);\n+ Query query = parser.parse(\"first foo-bar-foobar* last\");\n+ Query expectedQuery = new BooleanQuery.Builder()\n+ .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, \"first\"),\n+ new Term(STRING_FIELD_NAME, \"first\")), defaultOp))\n+ .add(new BooleanQuery.Builder()\n+ .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, \"foo\"),\n+ new Term(STRING_FIELD_NAME, \"foo\")), defaultOp))\n+ .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, \"bar\"),\n+ new Term(STRING_FIELD_NAME, \"bar\")), defaultOp))\n+ .add(new BooleanQuery.Builder()\n+ .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, \"foobar\")),\n+ BooleanClause.Occur.SHOULD))\n+ .add(new BooleanClause(new PrefixQuery(new Term(STRING_FIELD_NAME, \"foobar\")),\n+ BooleanClause.Occur.SHOULD))\n+ .build(), defaultOp)\n+ .build(), defaultOp)\n+ .add(new BooleanClause(new SynonymQuery(new Term(STRING_FIELD_NAME, \"last\"),\n+ new Term(STRING_FIELD_NAME, \"last\")), defaultOp))\n+ .build();\n+ assertThat(query, equalTo(expectedQuery));\n+ }\n+ }\n+\n+ public void testAnalyzerWithGraph() {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings();\n+ settings.analyzeWildcard(true);\n+ SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(),\n+ Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, 
createShardContext());\n+ for (Operator op : Operator.values()) {\n+ BooleanClause.Occur defaultOp = op.toBooleanClauseOccur();\n+ parser.setDefaultOperator(defaultOp);\n+ // non-phrase won't detect multi-word synonym because of whitespace splitting\n+ Query query = parser.parse(\"guinea pig\");\n+\n+ Query expectedQuery = new BooleanQuery.Builder()\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"guinea\")), defaultOp))\n+ .add(new BooleanClause(new TermQuery(new Term(STRING_FIELD_NAME, \"pig\")), defaultOp))\n+ .build();\n+ assertThat(query, equalTo(expectedQuery));\n+\n+ // phrase will pick it up\n+ query = parser.parse(\"\\\"guinea pig\\\"\");\n+ SpanTermQuery span1 = new SpanTermQuery(new Term(STRING_FIELD_NAME, \"guinea\"));\n+ SpanTermQuery span2 = new SpanTermQuery(new Term(STRING_FIELD_NAME, \"pig\"));\n+ expectedQuery = new SpanOrQuery(\n+ new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true),\n+ new SpanTermQuery(new Term(STRING_FIELD_NAME, \"cavy\")));\n+\n+ assertThat(query, equalTo(expectedQuery));\n+\n+ // phrase with slop\n+ query = parser.parse(\"big \\\"tiny guinea pig\\\"~2\");\n+\n+ expectedQuery = new BooleanQuery.Builder()\n+ .add(new TermQuery(new Term(STRING_FIELD_NAME, \"big\")), defaultOp)\n+ .add(new SpanNearQuery(new SpanQuery[] {\n+ new SpanTermQuery(new Term(STRING_FIELD_NAME, \"tiny\")),\n+ new SpanOrQuery(\n+ new SpanNearQuery(new SpanQuery[] { span1, span2 }, 0, true),\n+ new SpanTermQuery(new Term(STRING_FIELD_NAME, \"cavy\"))\n+ )\n+ }, 2, true), defaultOp)\n+ .build();\n+ assertThat(query, equalTo(expectedQuery));\n+ }\n+ }\n+\n+ public void testQuoteFieldSuffix() {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ SimpleQueryStringQueryParser.Settings settings = new SimpleQueryStringQueryParser.Settings();\n+ settings.analyzeWildcard(true);\n+ settings.quoteFieldSuffix(\"_2\");\n+ SimpleQueryStringQueryParser parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(),\n+ Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext());\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")), parser.parse(\"bar\"));\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME_2, \"bar\")), parser.parse(\"\\\"bar\\\"\"));\n+\n+ // Now check what happens if the quote field does not exist\n+ settings.quoteFieldSuffix(\".quote\");\n+ parser = new SimpleQueryStringQueryParser(new MockSynonymAnalyzer(),\n+ Collections.singletonMap(STRING_FIELD_NAME, 1.0f), -1, settings, createShardContext());\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")), parser.parse(\"bar\"));\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")), parser.parse(\"\\\"bar\\\"\"));\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -548,12 +548,6 @@ public void testExplicitAllFieldsRequested() throws Exception {\n simpleQueryStringQuery(\"foo eggplant\").defaultOperator(Operator.AND).useAllFields(true)).get();\n assertHits(resp.getHits(), \"1\");\n assertHitCount(resp, 1L);\n-\n- Exception e = expectThrows(Exception.class, () ->\n- client().prepareSearch(\"test\").setQuery(\n- simpleQueryStringQuery(\"blah\").field(\"f1\").useAllFields(true)).get());\n- assertThat(ExceptionsHelper.detailedMessage(e),\n- containsString(\"cannot use [all_fields] parameter in conjunction with [fields]\"));\n }\n \n public void testAllFieldsWithSpecifiedLeniency() throws IOException {",
"filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,7 @@ GET /_search\n // CONSOLE\n \n Note, `message` is the name of a field, you can substitute the name of\n-any field (including `_all`) instead.\n+any field instead.\n \n [[query-dsl-match-query-boolean]]\n ==== match",
"filename": "docs/reference/query-dsl/match-query.asciidoc",
"status": "modified"
},
{
"diff": "@@ -13,8 +13,7 @@ GET /_search\n \"query\": {\n \"simple_query_string\" : {\n \"query\": \"\\\"fried eggs\\\" +(eggplant | potato) -frittata\",\n- \"analyzer\": \"snowball\",\n- \"fields\": [\"body^5\",\"_all\"],\n+ \"fields\": [\"title^5\", \"body\"],\n \"default_operator\": \"and\"\n }\n }\n@@ -30,15 +29,17 @@ The `simple_query_string` top level parameters include:\n |`query` |The actual query to be parsed. See below for syntax.\n \n |`fields` |The fields to perform the parsed query against. Defaults to the\n-`index.query.default_field` index settings, which in turn defaults to `_all`.\n+`index.query.default_field` index settings, which in turn defaults to `*`.\n+`*` extracts all fields in the mapping that are eligible to term queries\n+and filters the metadata fields.\n \n |`default_operator` |The default operator used if no explicit operator\n is specified. For example, with a default operator of `OR`, the query\n `capital of Hungary` is translated to `capital OR of OR Hungary`, and\n with default operator of `AND`, the same query is translated to\n `capital AND of AND Hungary`. The default value is `OR`.\n \n-|`analyzer` |The analyzer used to analyze each term of the query when\n+|`analyzer` |Force the analyzer to use to analyze each term of the query when\n creating composite queries.\n \n |`flags` |Flags specifying which features of the `simple_query_string` to\n@@ -65,7 +66,8 @@ comprehensive example.\n |`auto_generate_synonyms_phrase_query` |Whether phrase queries should be automatically generated for multi terms synonyms.\n Defaults to `true`.\n \n-|`all_fields` | Perform the query on all fields detected in the mapping that can\n+|`all_fields` | deprecated[6.0.0, set `fields` to `*` instead]\n+Perform the query on all fields detected in the mapping that can\n be queried. Will be used by default when the `_all` field is disabled and no\n `default_field` is specified index settings, and no `fields` are specified.\n |=======================================================================\n@@ -114,12 +116,9 @@ documents that contain \"baz\".\n ==== Default Field\n When not explicitly specifying the field to search on in the query\n string syntax, the `index.query.default_field` will be used to derive\n-which field to search on. It defaults to `_all` field.\n-\n-If the `_all` field is disabled and no `fields` are specified in the request`,\n-the `simple_query_string` query will automatically attempt to determine the\n-existing fields in the index's mapping that are queryable, and perform the\n-search on those fields.\n+which field to search on. It defaults to `*` and the query will automatically\n+attempt to determine the existing fields in the index's mapping that are queryable,\n+and perform the search on those fields.\n \n [float]\n ==== Multi Field",
"filename": "docs/reference/query-dsl/simple-query-string-query.asciidoc",
"status": "modified"
}
]
} |
{
"body": "On elasticsearch 6.0.0-beta1, I'm running the following code:\r\n\r\n```\r\nDELETE my_index\r\nPUT my_index\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"default_search\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"whitespace\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"notes\": {\r\n \"properties\": {\r\n \"foo\": {\r\n \"type\": \"text\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis gives in logs:\r\n\r\n```\r\n[2017-08-10T11:06:03,613][WARN ][o.e.c.a.s.ShardStateAction] [cEGsSDR] [my_index][4] received shard failed for shard id [[my_index][4]], allocation id [PoY2wM19RBiQCE6D4qfm-w], primary term [0], message [failed to update mapping for index], failure [MapperParsingException[Failed to parse mapping [notes]: [_all] is disabled in 6.0. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.]; nested: IllegalArgumentException[[_all] is disabled in 6.0. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.]; ]\r\norg.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping [notes]: [_all] is disabled in 6.0. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.\r\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:339) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:292) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.mapper.MapperService.updateMapping(MapperService.java:222) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.IndexService.updateMapping(IndexService.java:502) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.indices.cluster.IndicesClusterStateService.createIndices(IndicesClusterStateService.java:446) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:220) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:492) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_121]\r\n\tat org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:489) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:476) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:426) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:155) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]\r\nCaused by: java.lang.IllegalArgumentException: [_all] is disabled in 6.0. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.\r\n\tat org.elasticsearch.index.mapper.AllFieldMapper$TypeParser.parse(AllFieldMapper.java:108) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:126) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:91) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:336) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]\r\n\t... 17 more\r\n```",
"comments": [
{
"body": "It's related to the fact I have here defined `default_search` analyzer BTW. Removing it does not cause any issue.",
"created_at": "2017-08-10T09:09:01Z"
},
{
"body": "I could reproduce. Sneaky...",
"created_at": "2017-08-10T13:31:19Z"
}
],
"number": 26136,
"title": "Incorrect `_all` is disabled message"
} | {
"body": "By default we only serialize analyzers if the index analyzer is not the\r\n`default` analyzer or if the `search_analyzer` is different from the index\r\n`analyzer`. This raises issues with the `_all` field when the\r\n`index.analysis.analyzer.default_search` is set, since it automatically makes\r\nthe `search_analyzer` different from the index `analyzer`. Then there are\r\nexceptions since we expect the `_all` configuration to be empty on 6.0 indices.\r\n\r\nCloses #26136",
"number": 26143,
"review_comments": [],
"title": "Fix serialization of the `_all` field."
} | {
"commits": [
{
"message": "Fix serialization of the `_all` field.\n\nBy default we only serialize analyzers if the index analyzer is not the\n`default` analyzer or if the `search_analyzer` is different from the index\n`analyzer`. This raises issues with the `_all` field when the\n`index.analysis.analyzer.default_search` is set, since it automatically makes\nthe `search_analyzer` different from the index `analyzer`. Then there are\nexceptions since we expect the `_all` configuration to be empty on 6.0 indices.\n\nCloses #26136"
},
{
"message": "iter"
}
],
"files": [
{
"diff": "@@ -273,6 +273,9 @@ private void innerToXContent(XContentBuilder builder, boolean includeDefaults) t\n if (includeDefaults || enabledState != Defaults.ENABLED) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n+ if (enabled() == false) {\n+ return;\n+ }\n if (includeDefaults || fieldType().stored() != Defaults.FIELD_TYPE.stored()) {\n builder.field(\"store\", fieldType().stored());\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,40 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.MapperService.MergeReason;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n+public class AllFieldMapperTests extends ESSingleNodeTestCase {\n+\n+ public void testUpdateDefaultSearchAnalyzer() throws Exception {\n+ IndexService indexService = createIndex(\"test\", Settings.builder()\n+ .put(\"index.analysis.analyzer.default_search.type\", \"custom\")\n+ .put(\"index.analysis.analyzer.default_search.tokenizer\", \"standard\").build());\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"doc\").endObject().endObject().string();\n+ indexService.mapperService().merge(\"doc\", new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n+ assertEquals(mapping, indexService.mapperService().documentMapper(\"doc\").mapping().toString());\n+ }\n+\n+}",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/AllFieldMapperTests.java",
"status": "added"
}
]
} |
{
"body": "```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"comments\": {\r\n \"type\": \"nested\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT test/doc/1?refresh\r\n{\r\n \"title\": \"Test title\",\r\n \"comments\": [\r\n {\r\n \"author\": \"kimchy\",\r\n \"text\": \"comment text\"\r\n },\r\n {\r\n \"author\": \"nik9000\",\r\n \"text\": \"words words words\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThis query works fine and returns inner_hits in both ES 1.7 and 5.5:\r\n```\r\nPOST test/_search\r\n{\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"comments\",\r\n \"query\": {\r\n \"match\": {\"comments.text\" : \"words\"}\r\n },\r\n \"inner_hits\": {} \r\n }\r\n }\r\n}\r\n```\r\n\r\nThis query works differently:\r\n```\r\nPOST test/_search\r\n{\r\n \"query\": {\r\n \"indices\": {\r\n \"indices\": [\r\n \"test\"\r\n ],\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"comments\",\r\n \"query\": {\r\n \"match\": {\r\n \"comments.text\": \"words\"\r\n }\r\n },\r\n \"inner_hits\": {}\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nES 1.7 result:\r\n```\r\n\"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1.2171685,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"1\",\r\n \"_score\": 1.2171685,\r\n \"_source\": {\r\n \"title\": \"Test title\",\r\n \"comments\": [\r\n {\r\n \"author\": \"kimchy\",\r\n \"text\": \"comment text\"\r\n },\r\n {\r\n \"author\": \"nik9000\",\r\n \"text\": \"words words words\"\r\n }\r\n ]\r\n },\r\n \"inner_hits\": {\r\n \"comments\": {\r\n \"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 1.2171685,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"1\",\r\n \"_nested\": {\r\n \"field\": \"comments\",\r\n \"offset\": 1\r\n },\r\n \"_score\": 1.2171685,\r\n \"_source\": {\r\n \"author\": \"nik9000\",\r\n \"text\": \"words words words\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n```\r\n\r\nES 5.5 result:\r\n```\r\n\"hits\": {\r\n \"total\": 1,\r\n \"max_score\": 0.9651416,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.9651416,\r\n \"_source\": {\r\n \"title\": \"Test title\",\r\n \"comments\": [\r\n {\r\n \"author\": \"kimchy\",\r\n \"text\": \"comment text\"\r\n },\r\n {\r\n \"author\": \"nik9000\",\r\n \"text\": \"words words words\"\r\n }\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n```\r\n\r\nAs you see ES 1.7 results contains inner_hits, but ES 5.5 doesn't\r\n\r\nTested ES versions:\r\n```\r\n\"version\": {\r\n \"number\": \"5.5.0\",\r\n \"build_hash\": \"260387d\",\r\n \"build_date\": \"2017-06-30T23:16:05.735Z\",\r\n \"build_snapshot\": false,\r\n \"lucene_version\": \"6.6.0\"\r\n }\r\n```\r\n```\r\n\"version\": {\r\n \"number\": \"1.7.3\",\r\n \"build_hash\": \"05d4530971ef0ea46d0f4fa6ee64dbc8df659682\",\r\n \"build_timestamp\": \"2015-10-15T09:14:17Z\",\r\n \"build_snapshot\": false,\r\n \"lucene_version\": \"4.10.4\"\r\n }\r\n```",
"comments": [
{
"body": "I've been able to reproduce the issue (thanks for the nice instructions) and it looks like a bug related to index queries.\r\nI think that, index queries being [deprecated](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-indices-query.html) (in 5.x and removed in 6), they somehow exclude the nested documents altogether from the results, hence why inner hits are empty (basically it doesn't matter if they are specified or not).\r\n\r\nI'll leave @martijnvg comment more on this; in the meantime the workaround (as indicated in the original issue) and my recommendation is to specify the indices in the URL (which looks like it already does) instead of using index queries. \r\nEven if this bug is fixed, upgrading from 5.x (whenever that will be) will require the queries to be rewritten again...\r\n\r\nCheers,\r\n",
"created_at": "2017-08-10T09:15:05Z"
},
{
"body": "@unknownlighter Thanks for reporting this bug. I'll merge a fix soon for this. Please note like @costin said, that you will have migrate away from the `indices` query at some point, because this query will no longer exist in 6.0 and up.",
"created_at": "2017-08-10T11:33:22Z"
},
{
"body": "Fixed via #26138",
"created_at": "2017-08-10T12:09:10Z"
},
{
"body": "Is this issue related to my topic https://discuss.elastic.co/t/highlight-query-nested-query-with-wildcard-search/96610/2?u=alexander_ott",
"created_at": "2017-08-11T09:28:38Z"
}
],
"number": 26133,
"title": "inner_hits doesn't work inside indices query"
} | {
"body": "PR for #26133",
"number": 26138,
"review_comments": [],
"title": "Fix inner hits to work with queries wrapped in an indices query"
} | {
"commits": [
{
"message": "Fix inner hits to work with queries wrapped in an `indices` query.\n\nCloses #26133"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collection;\n+import java.util.Map;\n import java.util.Objects;\n import java.util.Optional;\n \n@@ -232,6 +233,12 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n return noMatchQuery.toQuery(context);\n }\n \n+ @Override\n+ protected void extractInnerHitBuilders(Map<String, InnerHitContextBuilder> innerHits) {\n+ InnerHitContextBuilder.extractInnerHits(innerQuery, innerHits);\n+ InnerHitContextBuilder.extractInnerHits(noMatchQuery, innerHits);\n+ }\n+\n @Override\n public int doHashCode() {\n return Objects.hash(innerQuery, noMatchQuery, Arrays.hashCode(indices));",
"filename": "core/src/main/java/org/elasticsearch/index/query/IndicesQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -54,6 +54,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.common.xcontent.support.XContentMapValues.extractValue;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.indicesQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n@@ -641,4 +642,28 @@ public void testInnerHitsWithIgnoreUnmapped() throws Exception {\n assertHitCount(response, 2);\n assertSearchHits(response, \"1\", \"3\");\n }\n+\n+ public void testInnerHitsInsideIndicesQuery() throws Exception {\n+ assertAcked(prepareCreate(\"index1\").addMapping(\"message\", \"comments\", \"type=nested\"));\n+ client().prepareIndex(\"index1\", \"message\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"message\", \"quick brown fox\")\n+ .startArray(\"comments\")\n+ .startObject().field(\"message\", \"fox eat quick\").endObject()\n+ .startObject().field(\"message\", \"fox ate rabbit x y z\").endObject()\n+ .startObject().field(\"message\", \"rabbit got away\").endObject()\n+ .endArray()\n+ .endObject()).get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch()\n+ .setQuery(indicesQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.None)\n+ .innerHit(new InnerHitBuilder()), \"index1\"))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(2L));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getNestedIdentity().getOffset(), equalTo(1));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java",
"status": "modified"
}
]
} |
{
"body": "The failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commits ensures that the snapshot shard failure reason is preserved properly and adds workaround for reading old snapshot files where this information might not have been preserved.\r\n\r\nCloses #25878\r\n\r\n",
"comments": [
{
"body": "@ywelsch thanks a lot for review and help! I will let it \"cook\" in CI on the master branch over the weekend and then I will backport it to 5.6 and update the version in writeTo and readFrom if everything goes well. ",
"created_at": "2017-07-28T16:32:54Z"
}
],
"number": 25941,
"title": "Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS"
} | {
"body": "The failure reasons for snapshot shard failures might not be propagated properly if the master node changes after errors were reported by other data nodes, which causes them to be stored as null in snapshot files. This commits adds a workaround for reading such snapshot files where this information might not have been preserved and makes sure that the reason is not null if it gets cluster state from another master. This is a partial backport of #25941 to 5.6.\r\n\r\nCloses #25878\r\n",
"number": 26127,
"review_comments": [],
"title": "Snapshot/Restore: fix NPE while handling null failure reasons"
} | {
"commits": [
{
"message": "Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS (#25941)\n\nThe failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commits adds a workaround for reading old snapshot files where this information might not have been preserved and makes sure that the reason is \"\" if it gets cluster state from another master.\n\nCloses #25878"
}
],
"files": [
{
"diff": "@@ -241,6 +241,8 @@ public ShardSnapshotStatus(String nodeId, State state, String reason) {\n this.nodeId = nodeId;\n this.state = state;\n this.reason = reason;\n+ // If the state is failed we have to have a reason for this failure\n+ assert state.failed() == false || reason != null;\n }\n \n public ShardSnapshotStatus(StreamInput in) throws IOException {\n@@ -403,7 +405,9 @@ public SnapshotsInProgress(StreamInput in) throws IOException {\n ShardId shardId = ShardId.readShardId(in);\n String nodeId = in.readOptionalString();\n State shardState = State.fromValue(in.readByte());\n- builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ String reason = shardState.failed() ? \"\" : null;\n+ builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState, reason));\n }\n long repositoryStateId = UNDEFINED_REPOSITORY_STATE_ID;\n if (in.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java",
"status": "modified"
},
{
"diff": "@@ -62,6 +62,7 @@ public SnapshotShardFailure(@Nullable String nodeId, ShardId shardId, String rea\n this.nodeId = nodeId;\n this.shardId = shardId;\n this.reason = reason;\n+ assert reason != null;\n status = RestStatus.INTERNAL_SERVER_ERROR;\n }\n \n@@ -192,7 +193,9 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n } else if (\"node_id\".equals(currentFieldName)) {\n snapshotShardFailure.nodeId = parser.text();\n } else if (\"reason\".equals(currentFieldName)) {\n- snapshotShardFailure.reason = parser.text();\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ snapshotShardFailure.reason = parser.textOrNull();\n } else if (\"shard_id\".equals(currentFieldName)) {\n shardId = parser.intValue();\n } else if (\"status\".equals(currentFieldName)) {\n@@ -215,6 +218,11 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n throw new ElasticsearchParseException(\"index shard was not set\");\n }\n snapshotShardFailure.shardId = new ShardId(index, index_uuid, shardId);\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ if (snapshotShardFailure.reason == null) {\n+ snapshotShardFailure.reason = \"\";\n+ }\n return snapshotShardFailure;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotShardFailure.java",
"status": "modified"
},
{
"diff": "@@ -1128,7 +1128,8 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : snapshotEntry.shards()) {\n ShardSnapshotStatus status = shardEntry.value;\n if (!status.state().completed()) {\n- shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED));\n+ shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED,\n+ \"aborted by snapshot deletion\"));\n } else {\n shardsBuilder.put(shardEntry.key, status);\n }",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -57,12 +57,12 @@ public void testWaitingIndices() {\n // test more than one waiting shard in an index\n shards.put(new ShardId(idx1Name, idx1UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n shards.put(new ShardId(idx1Name, idx1UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test exactly one waiting shard in an index\n shards.put(new ShardId(idx2Name, idx2UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test no waiting shards in an index\n- shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n Entry entry = new Entry(snapshot, randomBoolean(), randomBoolean(), State.INIT,\n indices, System.currentTimeMillis(), randomLong(), shards.build());\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/SnapshotsInProgressTests.java",
"status": "modified"
},
{
"diff": "@@ -141,13 +141,20 @@ public SnapshotInfo waitForCompletion(String repository, String snapshotName, Ti\n return null;\n }\n \n- public static String blockMasterFromFinalizingSnapshot(final String repositoryName) {\n+ public static String blockMasterFromFinalizingSnapshotOnIndexFile(final String repositoryName) {\n final String masterName = internalCluster().getMasterName();\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n .repository(repositoryName)).setBlockOnWriteIndexFile(true);\n return masterName;\n }\n \n+ public static String blockMasterFromFinalizingSnapshotOnSnapFile(final String repositoryName) {\n+ final String masterName = internalCluster().getMasterName();\n+ ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n+ .repository(repositoryName)).setBlockAndFailOnWriteSnapFiles(true);\n+ return masterName;\n+ }\n+\n public static String blockNodeWithIndex(final String repositoryName, final String indexName) {\n for(String node : internalCluster().nodesInclude(indexName)) {\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, node).repository(repositoryName))",
"filename": "core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java",
"status": "modified"
},
{
"diff": "@@ -798,6 +798,67 @@ public void testMasterShutdownDuringSnapshot() throws Exception {\n assertEquals(0, snapshotInfo.failedShards());\n }\n \n+\n+ public void testMasterAndDataShutdownDuringSnapshot() throws Exception {\n+ logger.info(\"--> starting three master nodes and two data nodes\");\n+ internalCluster().startMasterOnlyNodes(3);\n+ internalCluster().startDataOnlyNodes(2);\n+\n+ final Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(Settings.builder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ assertAcked(prepareCreate(\"test-idx\", 0, Settings.builder().put(\"number_of_shards\", between(1, 20))\n+ .put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test-idx\", \"type1\", Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+\n+ final int numberOfShards = getNumShards(\"test-idx\").numPrimaries;\n+ logger.info(\"number of shards: {}\", numberOfShards);\n+\n+ final String masterNode = blockMasterFromFinalizingSnapshotOnSnapFile(\"test-repo\");\n+ final String dataNode = blockNodeWithIndex(\"test-repo\", \"test-idx\");\n+\n+ dataNodeClient().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ logger.info(\"--> stopping data node {}\", dataNode);\n+ stopNode(dataNode);\n+ logger.info(\"--> stopping master node {} \", masterNode);\n+ internalCluster().stopCurrentMasterNode();\n+\n+ logger.info(\"--> wait until the snapshot is done\");\n+\n+ assertBusy(() -> {\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertTrue(snapshotInfo.state().completed());\n+ }, 1, TimeUnit.MINUTES);\n+\n+ logger.info(\"--> verify that snapshot was partial\");\n+\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(SnapshotState.PARTIAL, snapshotInfo.state());\n+ assertNotEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ assertThat(snapshotInfo.failedShards(), greaterThan(0));\n+ for (SnapshotShardFailure failure : snapshotInfo.shardFailures()) {\n+ assertNotNull(failure.reason());\n+ }\n+ }\n+\n @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/25281\")\n public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n logger.info(\"--> starting two master nodes and two data nodes\");\n@@ -831,7 +892,7 @@ public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n assertEquals(ClusterHealthStatus.RED, client().admin().cluster().prepareHealth().get().getStatus()),\n 30, TimeUnit.SECONDS);\n \n- final String masterNode = blockMasterFromFinalizingSnapshot(\"test-repo\");\n+ final String masterNode = 
blockMasterFromFinalizingSnapshotOnIndexFile(\"test-repo\");\n \n logger.info(\"--> snapshot\");\n client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")",
"filename": "core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -2293,9 +2293,9 @@ public void testDeleteOrphanSnapshot() throws Exception {\n public ClusterState execute(ClusterState currentState) {\n // Simulate orphan snapshot\n ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableOpenMap.builder();\n- shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n List<Entry> entries = new ArrayList<>();\n entries.add(new Entry(new Snapshot(repositoryName,\n createSnapshotResponse.getSnapshotInfo().snapshotId()),",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -104,6 +104,9 @@ public long getFailureCount() {\n * finalization of a snapshot, while permitting other IO operations to proceed unblocked. */\n private volatile boolean blockOnWriteIndexFile;\n \n+ /** Allows blocking on writing the snapshot file at the end of snapshot creation to simulate a died master node */\n+ private volatile boolean blockAndFailOnWriteSnapFile;\n+\n private volatile boolean atomicMove;\n \n private volatile boolean blocked = false;\n@@ -118,6 +121,7 @@ public MockRepository(RepositoryMetaData metadata, Environment environment,\n blockOnControlFiles = metadata.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = metadata.settings().getAsBoolean(\"block_on_data\", false);\n blockOnInitialization = metadata.settings().getAsBoolean(\"block_on_init\", false);\n+ blockAndFailOnWriteSnapFile = metadata.settings().getAsBoolean(\"block_on_snap\", false);\n randomPrefix = metadata.settings().get(\"random\", \"default\");\n waitAfterUnblock = metadata.settings().getAsLong(\"wait_after_unblock\", 0L);\n atomicMove = metadata.settings().getAsBoolean(\"atomic_move\", true);\n@@ -168,13 +172,18 @@ public synchronized void unblock() {\n blockOnControlFiles = false;\n blockOnInitialization = false;\n blockOnWriteIndexFile = false;\n+ blockAndFailOnWriteSnapFile = false;\n this.notifyAll();\n }\n \n public void blockOnDataFiles(boolean blocked) {\n blockOnDataFiles = blocked;\n }\n \n+ public void setBlockAndFailOnWriteSnapFiles(boolean blocked) {\n+ blockAndFailOnWriteSnapFile = blocked;\n+ }\n+\n public void setBlockOnWriteIndexFile(boolean blocked) {\n blockOnWriteIndexFile = blocked;\n }\n@@ -187,7 +196,8 @@ private synchronized boolean blockExecution() {\n logger.debug(\"Blocking execution\");\n boolean wasBlocked = false;\n try {\n- while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile) {\n+ while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile ||\n+ blockAndFailOnWriteSnapFile) {\n blocked = true;\n this.wait();\n wasBlocked = true;\n@@ -266,6 +276,8 @@ private void maybeIOExceptionOrBlock(String blobName) throws IOException {\n throw new IOException(\"Random IOException\");\n } else if (blockOnControlFiles) {\n blockExecutionAndMaybeWait(blobName);\n+ } else if (blobName.startsWith(\"snap-\") && blockAndFailOnWriteSnapFile) {\n+ blockExecutionAndFail(blobName);\n }\n }\n }\n@@ -283,6 +295,15 @@ private void blockExecutionAndMaybeWait(final String blobName) {\n }\n }\n \n+ /**\n+ * Blocks an I/O operation on the blob fails and throws an exception when unblocked\n+ */\n+ private void blockExecutionAndFail(final String blobName) throws IOException {\n+ logger.info(\"blocking I/O operation for file [{}] at path [{}]\", blobName, path());\n+ blockExecution();\n+ throw new IOException(\"exception after block\");\n+ }\n+\n MockBlobContainer(BlobContainer delegate) {\n super(delegate);\n }",
"filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java",
"status": "modified"
}
]
} |
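The file diffs above all revolve around one invariant: a `SnapshotShardFailure` reason should never be null once the failure object exists, while snapshots written by older versions (see the issue record below) may still carry a null reason on disk. A minimal standalone sketch of that read-side workaround, assuming a hypothetical `ReasonField` stand-in for the parser access, might look like:

```java
// Minimal sketch (not the actual Elasticsearch classes) of the null-tolerant
// pattern in the SnapshotShardFailure diff above: a shard-failure "reason"
// written by older versions may be null, so parsing accepts null and the
// result is normalized to an empty string before anything else relies on it.
final class ShardFailureReasonSketch {

    // Hypothetical stand-in for the XContentParser field access used in the diff.
    interface ReasonField {
        String textOrNull();
    }

    static String readReason(ReasonField field) {
        // Accept null instead of insisting on a value, which is what switching
        // from parser.text() to parser.textOrNull() achieves in the diff.
        String reason = field.textOrNull();
        // Normalize so later code (serialization, assertions) can assume non-null.
        return reason == null ? "" : reason;
    }

    public static void main(String[] args) {
        System.out.println("[" + readReason(() -> null) + "]");  // prints []
        System.out.println("[" + readReason(() -> "IndexNotFoundException[no such index]") + "]");
    }
}
```

The write side of the same fix is to always pass an explicit reason (for example "aborted by snapshot deletion") when constructing a failed `ShardSnapshotStatus`, which is what the `SnapshotsService` hunk above does.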
{
"body": "**Elasticsearch version**: 5.5.0 (issue first appeared while on 5.4.1)\r\n\r\n**Plugins installed**: [x-pack, repository-s3]\r\n\r\n**JVM version** (`java -version`): openjdk version \"1.8.0_131\"\r\nOpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-1~bpo8+1-b11)\r\nOpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux ip-10-127-1-159 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nTrying to get the list of available snapshots on an S3 backed repository fails with NullPointerException.\r\n\r\n```\r\ncurl elasticsearch:9200/_snapshot/long_term/_all\r\n{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\"}],\"type\":\"null_pointer_exception\",\"reason\":null},\"status\":500}\r\n```\r\n\r\nElasticsearch logs:\r\n\r\n```\r\n[2017-07-25T12:01:47,038][WARN ][r.suppressed ] path: /_snapshot/long_term/_all, params: {repository=long_term, snapshot=_all}\r\norg.elasticsearch.transport.RemoteTransportException: [SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\r\nCaused by: java.lang.NullPointerException\r\n```\r\n\r\nI use curator to take the backups and after grabbing backups successfully it fails when it tries to delete old snapshots because that's when it requires a list too:\r\n\r\n```\r\n2017-07-25 11:53:02,191 ERROR Failed to complete action: delete_snapshots. <class 'curator.exceptions.FailedExecution'>: Unable to get snapshot information from repository: long_term. Error: TransportError(500, 'null_pointer_exception', '[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]')\r\n```\r\n\r\nI have a feeling this is due to some kind of timeout. I turned on debug logging and while I couldn't find a more specific reason this fails I noticed it made ~ 2K requests to S3 until it failed and it stopped at 90 seconds. Is this a configurable timeout?\r\n\r\nIn the past getting a list of snapshots took increasingly long but it eventually responded. Now it breaks earlier than that.\r\n\r\nAlso posted on the forums: https://discuss.elastic.co/t/nullpointerexception-when-getting-list-of-snapshots-on-s3/94458",
"comments": [
{
"body": "Could you paste the full stack trace from the Elasticsearch server logs?",
"created_at": "2017-07-25T10:30:58Z"
},
{
"body": "There's no more logs for the null pointer entry. There's a ton of logs for the headers and each of the 2K requests do you want me to post those? All of those responded with 200 OK though.",
"created_at": "2017-07-25T11:09:18Z"
},
{
"body": "These should be the logs from the last request before the null pointer. I tried to sensor out any possibly sensitive info. Maybe the returned payload was what triggered the issue?\r\n\r\n```\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestAuthCache] Auth cache not set in the context\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestProxyAuthentication] Proxy auth state: UNCHALLENGED\r\n[2017-07-25T12:27:45,437][DEBUG][c.a.h.i.c.SdkHttpClient ] Attempt 1 to execute request\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.i.c.DefaultClientConnection] Sending request: GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Host: elastic-stack-backupsbucket-*****************.s3-eu-west-1.amazonaws.com[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"x-amz-content-sha256: *********************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Authorization: AWS4-HMAC-SHA256 Credential=****************/20170725/eu-west-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-d\r\nate;x-amz-security-token, Signature=***************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"X-Amz-Date: 20170725T092745Z[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"User-Agent: aws-sdk-java/1.10.69 Linux/3.16.0-4-amd64 OpenJDK_64-Bit_Server_VM/25.131-b11/1.8.0_131[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"X-Amz-Security-Token: **********************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"amz-sdk-invocation-id: 23f8b7a2-93bb-46f4-a492-cf692051dc43[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"amz-sdk-retry: 0/0/[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Content-Type: application/octet-stream[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Connection: Keep-Alive[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Host: elastic-stack-backupsbucket-*****************.s3-eu-west-1.amazonaws.com\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Authorization: AWS4-HMAC-SHA256 Credential=****************/20170725/eu-west-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-dat\r\ne;x-amz-security-token, Signature=***************************\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> X-Amz-Date: 20170725T092745Z\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> User-Agent: aws-sdk-java/1.10.69 Linux/3.16.0-4-amd64 OpenJDK_64-Bit_Server_VM/25.131-b11/1.8.0_131\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> X-Amz-Security-Token: **********************************\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> amz-sdk-invocation-id: 23f8b7a2-93bb-46f4-a492-cf692051dc43\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> amz-sdk-retry: 
0/0/\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Content-Type: application/octet-stream\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Connection: Keep-Alive\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"HTTP/1.1 200 OK[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"x-amz-id-2: ************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"x-amz-request-id: 3E117E943CA08991[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Date: Tue, 25 Jul 2017 09:27:46 GMT[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Last-Modified: Wed, 19 Jul 2017 01:07:25 GMT[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"ETag: \"8e87c087b7474433ba26057f74233e5a\"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Accept-Ranges: bytes[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Content-Type: application/octet-stream[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Content-Length: 302[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Server: AmazonS3[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << HTTP/1.1 200 OK\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << x-amz-id-2: *************************************\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << x-amz-request-id: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Date: Tue, 25 Jul 2017 09:27:46 GMT\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Last-Modified: Wed, 19 Jul 2017 01:07:25 GMT\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << ETag: \"8e87c087b7474433ba26057f74233e5a\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Accept-Ranges: bytes\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Content-Type: application/octet-stream\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Content-Length: 302\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Server: AmazonS3\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.h.i.c.SdkHttpClient ] Connection can be kept alive for 60000 MILLISECONDS\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] x-amzn-RequestId: not available\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.request ] Received successful response: 200, AWS Request ID: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] AWS Request ID: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"?[0xd7]l[0x17][0x8]snapshot[0x0][0x0][0x0][0x1]DFL[0x0]l[0x92][0xcd]J[0x3]1[0x14][0x85][0xe9][0xc2][0x85][0xe2]SH]t[0xd1]![0xc9]Lg[0xda][0xee]tP[0x17]B[0x17][0xa6][0xed]B[0x90]!4[0x19][0x9a][0xd2]fln[0xd2][0x95]++[0xe]E[0x90]y[0xdc][0xfe]l[0x1c]DQ[0xe8][0x85]lr[0xf8][0xce][0xb9][0xf7][0x90][0xf4][g'[0xfb][0x12][0x8c]x[0x86]i[0xe1][0xf6]k#[0x16][0xea]IHY[0x18]h[0xff][0xaa]mFhB[0x12][0xda]#[0x94][0xc4][0x9d]h[0xed][0xbd][0x96][0xa3][0xbb][0x7];[0xec][0xa6][0xf7]3[0x9e],[0xe5]2b[0x83][0xc7]<[0x1c][0xb2][0xab][0xcd]JY[0xd0][0x85][0xc9][0xb4]l[0x9e][0xe][0xae]?[0xdf][0xb5][0x91]z[0xa2]`[0xcb]^?2W[0xf4];- 
[0xf5][0x89][0x10][0x91]v02[0xc1]H[0x8a][0x91]=[0x8c]D[0xed]1f?[0x9e][0x1e][0x7][0xec]83[0xe]B[0x82][0xd9][0x19]&v[0xb1][0x95][0xd0][0xee][0x18]I[0xb0][0x9a][0x14][0x9b]NCl:Z[0x13]#)[0xdb][0xbd][0x81][0x13]N[0xdd][0xf2]Q[0x9a][0xde]p[0xbe][0xa9]o[0xd6]eN/[0xd4]e#[0x18][0x9f]_[0xbc][0xbc][0x96][0xca][0xc8]?[0xa1]e[0xaa][0xf]W81[0xcf]`*[0xac][0x84]f[0xa3][0xaa][0xc0]O[0xea][0xe7][0x86][0xdc][0xff][0x13][0xcb]\\[0xe8][0xb9][0xb7][0xf5]/[0xd8][0x1d][0xe]_[0x0][0x0][0x0][0xff][0xff][0x3][0x0][0xc0]([0x93][0xe8][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0xf4][0x1f]J[0xbe]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-*************.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection released: [id: 4949][route: {s}->https://elastic-stack-backupsbucket-******************.s3-eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]\r\n\r\n```",
"created_at": "2017-07-25T11:21:47Z"
},
{
"body": "FYI we use a coordinating node and 3 data nodes. I do the snapshot requests to the coordinating node, and all the S3 requests seem to originate from the data node that's currently the master (10.127.1.203).\r\n\r\nSome more logs:\r\n\r\nI see ~ 1k of these logs 15 sec after start of the request and ~ 500 at the end:\r\n\r\n [2017-07-25T12:27:46,968][DEBUG][o.e.s.SearchService ] [SVVyQPF] freeing search context [1977515], time [225057509], lastAccessTime [224978476], keepAlive [30000]\r\n\r\nThese pop up between requests:\r\n\r\n [2017-07-25T12:27:45,374][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection released: [id: 4949][route: {s}->https://elastic-stack-backupsbucket-**********.s3-eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]\r\n [2017-07-25T12:27:45,374][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-**********.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n\r\nThese are the things logged on the master node around the time the coordinating node logged the exception (excluding the freeing search context logs mentioned above):\r\n\r\n [2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK\r\n [2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-***********.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n [2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] x-amzn-RequestId: not available\r\n [2017-07-25T12:27:45,541][DEBUG][o.e.m.j.JvmGcMonitorService] [SVVyQPF] [gc][221514] overhead, spent [106ms] collecting in the last [1s]\r\n [2017-07-25T12:27:47,497][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [SVVyQPF] Get stats for datafeed '_all'\r\n [2017-07-25T12:27:47,652][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start\r\n [2017-07-25T12:27:48,542][DEBUG][o.e.m.j.JvmGcMonitorService] [SVVyQPF] [gc][221517] overhead, spent [111ms] collecting in the last [1s]",
"created_at": "2017-07-25T14:06:20Z"
},
{
"body": "Hmm, I don't see any smoking gun here. I am not really sure how to move forward with this without knowing where this NPE occurs or being able to reproduce this issue locally.",
"created_at": "2017-07-26T12:39:20Z"
},
{
"body": "Ok as I understand it there should have been a stack trace after the \"caused by\" line right? Maybe we can look into why that's not present and then we'll have more info for the specific issue? Also there's that `r.suppressed` thing. That would at least point the to class in which the NPE occurred but that's not available either. Can I configure something to make that visible?",
"created_at": "2017-07-26T12:44:44Z"
},
{
"body": "@eirc you said that\r\n\r\n> These should be the logs from the last request before the null pointer\r\n\r\nbut the timestamp from these logs are `12:27` whereas the NPE has a timestamp of `12:01`.\r\nCan you provide the full logs from both the master node and the coordinating node? (You can share them in private with us if you don't want to post them publicly)",
"created_at": "2017-07-26T12:58:09Z"
},
{
"body": "@eirc, @ywelsch and I discussed this more and we have a couple of other things we would like you to try:\r\n\r\n1) could you execute `curl elasticsearch:9200/_snapshot/long_term/_all?error_trace=true` and see if the stack trace shows up there\r\n\r\n2) could you execute `curl localhost:9200/_snapshot/long_term/_all` on the current master node. And if it works, but still fails when you execute it against a coordinating node we would really appreciate this output as well.\r\n\r\n",
"created_at": "2017-07-26T13:06:03Z"
},
{
"body": "Regarding the time discrepancies, the NPE happens every time I request a listing. At 12:27 I had debug logging on so that's why most of the logs are from that time. At 12:01 was probably one of the first tests. The same NPE log appeared at 12:27 and every time I did a listing request.",
"created_at": "2017-07-26T13:43:49Z"
},
{
"body": "Ok now there's some light at the end of the tunnel!\r\n\r\nFirst if I get the listing from the master node it actually works! By requesting on the coordinating (or any other) node it fails with that same behaviour. Adding error_trace=true to the request yields some useful info finally:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [{\r\n \"type\": \"remote_transport_exception\",\r\n \"reason\": \"[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\",\r\n \"stack_trace\": \"[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\\n\\tat org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:618)\\n\\tat org.elasticsearch.ElasticsearchException.generateFailureXContent(ElasticsearchException.java:563)\\n\\tat org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:138)\\n\\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\\n\\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91)\\n\\tat org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58)\\n\\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94)\\n\\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:185)\\n\\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1067)\\n\\tat org.elasticsearch.transport.TcpTransport.lambda$handleException$16(TcpTransport.java:1467)\\n\\tat org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:110)\\n\\tat org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1465)\\n\\tat org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1457)\\n\\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1401)\\n\\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\\n\\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\\n\\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\\n\\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\\n\\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\\n\\tat java.lang.Thread.run(Thread.java:748)\\nCaused by: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\\nCaused by: java.lang.NullPointerException\\n\"\r\n }],\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null,\r\n \"stack_trace\": \"java.lang.NullPointerException\\n\"\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\nHere's the formatted stack trace for your convenience:\r\n\r\n```\r\n[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\r\n at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:618)\r\n at org.elasticsearch.ElasticsearchException.generateFailureXContent(ElasticsearchException.java:563)\r\n at org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:138)\r\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\r\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91)\r\n at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58)\r\n at org.elasticsearch.action.support.TransportAction.onFailure(TransportAction.java:94)\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.handleException(TransportMasterNodeAction.java:185)\r\n at org.elasticsearch.transport.TransportService.handleException(TransportService.java:1067)\r\n at org.elasticsearch.transport.TcpTransport.lambda(TcpTransport.java:1467)\r\n at org.elasticsearch.common.util.concurrent.EsExecutors.execute(EsExecutors.java:110)\r\n at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1465)\r\n at org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1457)\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1401)\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)\r\n at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.channel.DefaultChannelPipeline.channelRead(DefaultChannelPipeline.java:1334)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\r\n at io.netty.channel.nio.AbstractNioByteChannel.read(AbstractNioByteChannel.java:134)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\r\n at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:858)\r\n at java.lang.Thread.run(Thread.java:748)\r\nCaused by: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\r\nCaused by: java.lang.NullPointerException\r\n```",
"created_at": "2017-07-26T13:50:13Z"
},
{
"body": "@eirc any chance you can email me the output that you get from master? My mail is igor at elastic.co. If not, could you try getting one snapshot at a time on the coordinating node and checking what's different between the snapshots that can be retrieved and the snapshots that cause this NPE? \r\n\r\nBy the way, does the coordinating node have a different es version?",
"created_at": "2017-07-26T14:07:20Z"
},
{
"body": "Just confirmed all elasticsearches are on 5.5.0. Can I check the version of plugins someway? When I upgraded the stack I remember I had to remove and reinstall plugins to be of proper versions.\r\n\r\nI'll make a script to pull each snapshot individually and see which one(s) are breaking now.",
"created_at": "2017-07-26T14:13:11Z"
},
{
"body": "In 5.5.0 all plugins should be 5.5.0. Otherwise, elasticsearch wouldn't work. In any case, based on what we know so far, I don't think it's a plugin-related issue. Our current theory is that snapshot info serialization code breaks on one or more snapshots that you have in your repository. However, we just reviewed this code and couldn't find any obvious issues. That's why we would like to figure out which snapshot information master is trying to send to the coordinating node in order to reproduce and fix the problem. ",
"created_at": "2017-07-26T14:32:44Z"
},
{
"body": "I emailed you the full snapshot list. My script ~managed to successfully grab each snapshot individually from the coordinating node~ (where grabbing them all failed). I noticed some of the snapshots have some shard failures but that shouldn't be an issue right? Maybe it's the size of the response that's the issue here? I got ~2k snapshots and the response is 1.2 MB.",
"created_at": "2017-07-26T14:59:14Z"
},
{
"body": "No scratch that, there *is* a single snapshot which produces the NPE when I get it on it's own.",
"created_at": "2017-07-26T15:04:41Z"
},
{
"body": "\r\nHere is the JSON I can get from the master but not from other nodes:\r\n\r\n```\r\n{\r\n \"snapshots\": [\r\n {\r\n \"snapshot\": \"wsj-snapshot-20170720085856\",\r\n \"uuid\": \"yIbELYjgQN-_BgjRd4Vb0A\",\r\n \"version_id\": 5040199,\r\n \"version\": \"5.4.1\",\r\n \"indices\": [\r\n \"wsj-2017.07.19\",\r\n \"wsj-iis-2017.07.11\",\r\n \"wsj-2017.07.08\",\r\n \"wsj-2017.07.15\",\r\n \"wsj-2017.07.11\",\r\n \"wsj-2017.07.12\",\r\n \"wsj-2017.07.02\",\r\n \"wsj-2017.07.10\",\r\n \"wsj-2017.07.06\",\r\n \"wsj-2017.06.30\",\r\n \"wsj-2017.07.05\",\r\n \"wsj-2017.07.14\",\r\n \"wsj-2017.07.03\",\r\n \"wsj-2017.07.16\",\r\n \"wsj-2017.07.17\",\r\n \"wsj-2017.07.07\",\r\n \"wsj-2017.07.01\",\r\n \"wsj-2017.07.09\",\r\n \"wsj-2017.07.04\",\r\n \"wsj-2017.07.18\",\r\n \"wsj-2017.07.13\"\r\n ],\r\n \"state\": \"PARTIAL\",\r\n \"start_time\": \"2017-07-20T08:58:57.243Z\",\r\n \"start_time_in_millis\": 1500541137243,\r\n \"end_time\": \"2017-07-20T11:52:37.938Z\",\r\n \"end_time_in_millis\": 1500551557938,\r\n \"duration_in_millis\": 10420695,\r\n \"failures\": [\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 1,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.06\",\r\n \"index_uuid\": \"wsj-2017.07.06\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such 
index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.01\",\r\n \"index_uuid\": \"wsj-2017.07.01\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.05\",\r\n \"index_uuid\": \"wsj-2017.07.05\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.05\",\r\n \"index_uuid\": \"wsj-2017.07.05\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.01\",\r\n \"index_uuid\": \"wsj-2017.07.01\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 3,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 1,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 3,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": 
\"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n }\r\n ],\r\n \"shards\": {\r\n \"total\": 27,\r\n \"failed\": 27,\r\n \"successful\": 0\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nInterestingly this snapshot includes that `wsj-iis-2017.07.11` index which we then deleted (since due to the naming it would get mixed up a lot with the `wsj-*` indices) and recreated with another name. Those `IndexNotFoundException[no such index]` messages look weird though because the mentioned indices do exist, are still on the cluster and I can query them.",
"created_at": "2017-07-26T15:13:34Z"
},
{
"body": "🏆 deleted the offending snapshot and the listing now works! 🥇 \r\n\r\nIf you need any more info on the \"bug\" itself I'll be happy to provide. Also my issue is solved but I'll leave this for you to close in case you want to follow the thread deeper.",
"created_at": "2017-07-26T15:28:41Z"
},
{
"body": "Thanks @eirc. We have found the line that is causing this NPE. We are just doing some root cause analysis at the moment to see if there is more to it. It's definitely a bug. Thanks a lot for very detailed information and your willingness to work with us on it!",
"created_at": "2017-07-26T15:33:30Z"
},
{
"body": "@eirc I spent some time trying to reproduce the issue, but no matter what I try I cannot get my snapshot into the state where it produces `null`s in shard failures. It looks like the snapshot in question took place a week ago. Do you remember, by any chance, what was going on with the cluster during this time? Do you still have log files from that day?",
"created_at": "2017-07-26T22:55:15Z"
},
{
"body": "My current best guess is that that index I mentioned we deleted (wsj-iis) was deleted during the backup process and maybe that mucked up things somehow. I can check the logs at the time for more concrete info but that has to until tomorrow when i get back to work :)",
"created_at": "2017-07-26T23:06:33Z"
},
{
"body": "Yes, deletion of indices during a snapshot is the first thing I tried. It is producing a slightly different snapshot info that doesn't contain any nulls. It seems that I am missing some key ingredient here. I am done for today as well, but it would be awesome if you could check the logs tomorrow. ",
"created_at": "2017-07-26T23:12:51Z"
},
{
"body": "The issue I see is that the code incorrectly assumes that `reason` is non-null in case where there is a `SnapshotShardFailure`. The failure is constructed from a `ShardSnapshotStatus` object that is in a \"failed\" state (one of FAILED, ABORTED, MISSING). I see two places where we can possibly have a `ShardSnapshotStatus` object with \"failed\" state and where the \"reason\" can be null: \r\n- cluster state serialization (to be precise: SnapshotsInProgress), because we don't serialize the \"reason\". This means that on master failover it can become null. This scenario can be verified by adding the assertion `reason != null` to the `SnapshotShardFailure` constructor and running the (currently disabled) test `testMasterShutdownDuringFailedSnapshot` a few times.\r\n- the call `shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED))` when aborting a snapshot. Here it's more difficult to come up with a scenario. But unless we can rule that one out, I would still consider it an issue.\r\n\r\nI think the easiest fix for now would be to assume that reason is Nullable and adapt the serialization code accordingly. WDYT @imotov ?",
"created_at": "2017-07-27T07:22:36Z"
},
{
"body": "Seems like that index was actually deleted a few days later after all so that was probably a red herring.\r\n\r\nOk there's a huge spike of logs during that snapshot's creation time, I'll try to aggregate what I see as most important:\r\n\r\n## Related to the snapshot itself (ie searching for \"20170720085856\")\r\n\r\n29 occurrences of\r\n\r\n```\r\n[2017-07-20T14:44:49,461][WARN ][o.e.s.SnapshotShardsService] [Ht8LDxX] [[wsj-iis-2017.07.11][2]] [long_term:wsj-snapshot-20170720085856/yIbELYjgQN-_BgjRd4Vb0A] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Failed to snapshot\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:397) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.apache.lucene.store.AlreadyClosedException: engine is closed\r\n\tat org.elasticsearch.index.shard.IndexShard.getEngine(IndexShard.java:1446) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.acquireIndexCommit(IndexShard.java:836) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:380) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 
7 more\r\n```\r\n\r\nand 2 of\r\n\r\n```\r\n[2017-07-20T14:44:49,459][WARN ][o.e.s.SnapshotShardsService] [Ht8LDxX] [[wsj-2017.07.19][2]] [long_term:wsj-snapshot-20170720085856/yIbELYjgQN-_BgjRd4Vb0A] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Aborted\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext$AbortableInputStream.checkAborted(BlobStoreRepository.java:1501) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext$AbortableInputStream.read(BlobStoreRepository.java:1494) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:76) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:57) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:100) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshotFile(BlobStoreRepository.java:1428) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1370) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:967) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n```\r\n\r\n## no index state found\r\n\r\n1702 occurrences of the following from one data node:\r\n\r\n```\r\n[2017-07-20T14:51:22,103][WARN ][o.e.c.u.IndexFolderUpgrader] [/mnt/elasticsearch-data-02/nodes/0/indices/8oH-hwzeQAmJR7TZkUxf1w] no index state found - ignoring\r\n```\r\n\r\nand one similar from another host\r\n\r\n## unexpected error while indexing monitoring document\r\n\r\na spike of ~ 2.5k of those at the start of the snapshot:\r\n\r\n```\r\n[2017-07-20T14:44:48,526][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: NodeClosedException[node closed {Ht8LDxX}{Ht8LDxXGQAGEna893aC57w}{vq-tK9uISPexLeENQ82FRw}{10.127.1.207}{10.127.1.207:9300}{ml.enabled=true}]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:131) ~[?:?]\r\n\tat java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) 
~[?:1.8.0_131]\r\n\tat java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:132) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:810) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onClusterServiceClose(ClusterStateObserver.java:304) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:224) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.addTimeoutListener(ClusterService.java:385) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:166) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:103) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:802) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:781) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:876) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.node.NodeClosedException: node closed {Ht8LDxX}{Ht8LDxXGQAGEna893aC57w}{vq-tK9uISPexLeENQ82FRw}{10.127.1.207}{10.127.1.207:9300}{ml.enabled=true}\r\n\t... 15 more\r\n```\r\n\r\nand a similar number of those at the end of the snapshot:\r\n\r\n```\r\n[2017-07-20T14:51:05,408][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: TransportException[transport stopped, action: indices:data/write/bulk[s][p]]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:131) ~[?:?]\r\n\tat java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_131]\r\n\tat java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:132) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:783) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:247) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.transport.TransportException: transport stopped, action: indices:data/write/bulk[s][p]\r\n\tat org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:246) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 5 more\r\n```\r\n\r\n## node not connected\r\n\r\ngot 9 of those with at least one for each node\r\n\r\n```\r\n[2017-07-20T14:44:47,437][WARN ][o.e.a.a.c.n.i.TransportNodesInfoAction] [zYawxs4] not accumulating exceptions, excluding exception from response\r\norg.elasticsearch.action.FailedNodeException: Failed node [Ht8LDxXGQAGEna893aC57w]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:246) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$200(TransportNodesAction.java:160) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:218) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:493) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:204) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:89) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:52) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) 
~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:730) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesInfo(AbstractClient.java:811) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.action.admin.cluster.RestNodesInfoAction.lambda$prepareRequest$0(RestNodesInfoAction.java:109) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:80) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:260) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:199) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:505) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:72) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) 
[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.11.Final.jar:4.1.11.Final]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.transport.NodeNotConnectedException: [Ht8LDxX][10.127.1.207:9300] Node not connected\r\n\tat org.elasticsearch.transport.TcpTransport.getConnection(TcpTransport.java:630) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TcpTransport.getConnection(TcpTransport.java:116) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.getConnection(TransportService.java:513) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:489) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 58 more\r\n```\r\n\r\n## Exception when closing export bulk\r\n\r\n3 of those\r\n\r\n```\r\n[2017-07-20T14:44:48,536][WARN ][o.e.x.m.MonitoringService] [Ht8LDxX] monitoring execution failed\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: Exception when closing export bulk\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1$1.<init>(ExportBulk.java:106) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1.onFailure(ExportBulk.java:104) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:217) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:211) ~[?:?]\r\n\tat org.elasticsearch.xpack.common.IteratingActionListener.onResponse(IteratingActionListener.java:108) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$null$0(ExportBulk.java:175) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:67) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:138) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:810) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onClusterServiceClose(ClusterStateObserver.java:304) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:224) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.addTimeoutListener(ClusterService.java:385) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:166) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:103) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:802) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:781) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:876) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$null$0(ExportBulk.java:167) ~[?:?]\r\n\t... 27 more\r\nCaused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:127) ~[?:?]\r\n\t... 25 more\r\n```\r\n\r\nI'm sure there's more stuff in there but I don't know if this actually helps you because I can't make sense of it due to the log volume being that high so I may be missing the important ones. Is there something more specific I could search for that would help? Maybe I should just export all elasticsearch logs for that day and mail them to you?",
"created_at": "2017-07-27T09:21:53Z"
},
{
"body": "> I think the easiest fix for now would be to assume that reason is Nullable and adapt the serialization code accordingly. WDYT @imotov ?\r\n\r\n@ywelsch Yes fixing it like this would be easy, I just didn't want to assume anything, I wanted to have a test that creates this problem so we can fix it for sure. So, that's why I spent some time trying to reproduce it. You are right about it being null in SnapshotsInProgress, and I tried to reproduce it this way but it looks like it's a completely different path that doesn't get resolved into shard failure object, so this seems to be a dead end. So, I think ABORTED path is more promising and after thinking about for a while, I think the scenario is snapshot gets stuck on a master, gets aborted, then another master takes over, and somehow generates these nulls. The problem with this scenario is that if a snapshot is aborted, it should be deleted afterwards. So, based on the information that @eirc provided, it looks like it might be a combination of stuck snapshot combined with some sort of node failure that prevented the aborted snapshot from being cleaned up, which might be quite difficult to reproduce.\r\n\r\n> Maybe I should just export all elasticsearch logs for that day and mail them to you?\r\n\r\n@eirc that would be very helpful. Thanks!",
"created_at": "2017-07-27T13:23:17Z"
},
{
"body": "Just a quick update. @ywelsch and I discussed the issue and came up with a plan how to modify `testMasterShutdownDuringFailedSnapshot` to potentially reproduce the issue. I will try implementing it. ",
"created_at": "2017-07-27T14:05:51Z"
}
],
"number": 25878,
"title": "NullPointerException when getting list of snapshots on S3"
} | {
"body": "The failure reasons for snapshot shard failures might not be propagated properly if the master node changes after errors were reported by other data nodes, which causes them to be stored as null in snapshot files. This commits adds a workaround for reading such snapshot files where this information might not have been preserved and makes sure that the reason is not null if it gets cluster state from another master. This is a partial backport of #25941 to 5.6.\r\n\r\nCloses #25878\r\n",
"number": 26127,
"review_comments": [],
"title": "Snapshot/Restore: fix NPE while handling null failure reasons"
} | {
"commits": [
{
"message": "Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS (#25941)\n\nThe failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commits adds a workaround for reading old snapshot files where this information might not have been preserved and makes sure that the reason is \"\" if it gets cluster state from another master.\n\nCloses #25878"
}
],
"files": [
{
"diff": "@@ -241,6 +241,8 @@ public ShardSnapshotStatus(String nodeId, State state, String reason) {\n this.nodeId = nodeId;\n this.state = state;\n this.reason = reason;\n+ // If the state is failed we have to have a reason for this failure\n+ assert state.failed() == false || reason != null;\n }\n \n public ShardSnapshotStatus(StreamInput in) throws IOException {\n@@ -403,7 +405,9 @@ public SnapshotsInProgress(StreamInput in) throws IOException {\n ShardId shardId = ShardId.readShardId(in);\n String nodeId = in.readOptionalString();\n State shardState = State.fromValue(in.readByte());\n- builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ String reason = shardState.failed() ? \"\" : null;\n+ builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState, reason));\n }\n long repositoryStateId = UNDEFINED_REPOSITORY_STATE_ID;\n if (in.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java",
"status": "modified"
},
{
"diff": "@@ -62,6 +62,7 @@ public SnapshotShardFailure(@Nullable String nodeId, ShardId shardId, String rea\n this.nodeId = nodeId;\n this.shardId = shardId;\n this.reason = reason;\n+ assert reason != null;\n status = RestStatus.INTERNAL_SERVER_ERROR;\n }\n \n@@ -192,7 +193,9 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n } else if (\"node_id\".equals(currentFieldName)) {\n snapshotShardFailure.nodeId = parser.text();\n } else if (\"reason\".equals(currentFieldName)) {\n- snapshotShardFailure.reason = parser.text();\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ snapshotShardFailure.reason = parser.textOrNull();\n } else if (\"shard_id\".equals(currentFieldName)) {\n shardId = parser.intValue();\n } else if (\"status\".equals(currentFieldName)) {\n@@ -215,6 +218,11 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n throw new ElasticsearchParseException(\"index shard was not set\");\n }\n snapshotShardFailure.shardId = new ShardId(index, index_uuid, shardId);\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ if (snapshotShardFailure.reason == null) {\n+ snapshotShardFailure.reason = \"\";\n+ }\n return snapshotShardFailure;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotShardFailure.java",
"status": "modified"
},
{
"diff": "@@ -1128,7 +1128,8 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : snapshotEntry.shards()) {\n ShardSnapshotStatus status = shardEntry.value;\n if (!status.state().completed()) {\n- shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED));\n+ shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED,\n+ \"aborted by snapshot deletion\"));\n } else {\n shardsBuilder.put(shardEntry.key, status);\n }",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -57,12 +57,12 @@ public void testWaitingIndices() {\n // test more than one waiting shard in an index\n shards.put(new ShardId(idx1Name, idx1UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n shards.put(new ShardId(idx1Name, idx1UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test exactly one waiting shard in an index\n shards.put(new ShardId(idx2Name, idx2UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test no waiting shards in an index\n- shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n Entry entry = new Entry(snapshot, randomBoolean(), randomBoolean(), State.INIT,\n indices, System.currentTimeMillis(), randomLong(), shards.build());\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/SnapshotsInProgressTests.java",
"status": "modified"
},
{
"diff": "@@ -141,13 +141,20 @@ public SnapshotInfo waitForCompletion(String repository, String snapshotName, Ti\n return null;\n }\n \n- public static String blockMasterFromFinalizingSnapshot(final String repositoryName) {\n+ public static String blockMasterFromFinalizingSnapshotOnIndexFile(final String repositoryName) {\n final String masterName = internalCluster().getMasterName();\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n .repository(repositoryName)).setBlockOnWriteIndexFile(true);\n return masterName;\n }\n \n+ public static String blockMasterFromFinalizingSnapshotOnSnapFile(final String repositoryName) {\n+ final String masterName = internalCluster().getMasterName();\n+ ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n+ .repository(repositoryName)).setBlockAndFailOnWriteSnapFiles(true);\n+ return masterName;\n+ }\n+\n public static String blockNodeWithIndex(final String repositoryName, final String indexName) {\n for(String node : internalCluster().nodesInclude(indexName)) {\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, node).repository(repositoryName))",
"filename": "core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java",
"status": "modified"
},
{
"diff": "@@ -798,6 +798,67 @@ public void testMasterShutdownDuringSnapshot() throws Exception {\n assertEquals(0, snapshotInfo.failedShards());\n }\n \n+\n+ public void testMasterAndDataShutdownDuringSnapshot() throws Exception {\n+ logger.info(\"--> starting three master nodes and two data nodes\");\n+ internalCluster().startMasterOnlyNodes(3);\n+ internalCluster().startDataOnlyNodes(2);\n+\n+ final Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(Settings.builder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ assertAcked(prepareCreate(\"test-idx\", 0, Settings.builder().put(\"number_of_shards\", between(1, 20))\n+ .put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test-idx\", \"type1\", Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+\n+ final int numberOfShards = getNumShards(\"test-idx\").numPrimaries;\n+ logger.info(\"number of shards: {}\", numberOfShards);\n+\n+ final String masterNode = blockMasterFromFinalizingSnapshotOnSnapFile(\"test-repo\");\n+ final String dataNode = blockNodeWithIndex(\"test-repo\", \"test-idx\");\n+\n+ dataNodeClient().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ logger.info(\"--> stopping data node {}\", dataNode);\n+ stopNode(dataNode);\n+ logger.info(\"--> stopping master node {} \", masterNode);\n+ internalCluster().stopCurrentMasterNode();\n+\n+ logger.info(\"--> wait until the snapshot is done\");\n+\n+ assertBusy(() -> {\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertTrue(snapshotInfo.state().completed());\n+ }, 1, TimeUnit.MINUTES);\n+\n+ logger.info(\"--> verify that snapshot was partial\");\n+\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(SnapshotState.PARTIAL, snapshotInfo.state());\n+ assertNotEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ assertThat(snapshotInfo.failedShards(), greaterThan(0));\n+ for (SnapshotShardFailure failure : snapshotInfo.shardFailures()) {\n+ assertNotNull(failure.reason());\n+ }\n+ }\n+\n @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/25281\")\n public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n logger.info(\"--> starting two master nodes and two data nodes\");\n@@ -831,7 +892,7 @@ public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n assertEquals(ClusterHealthStatus.RED, client().admin().cluster().prepareHealth().get().getStatus()),\n 30, TimeUnit.SECONDS);\n \n- final String masterNode = blockMasterFromFinalizingSnapshot(\"test-repo\");\n+ final String masterNode = 
blockMasterFromFinalizingSnapshotOnIndexFile(\"test-repo\");\n \n logger.info(\"--> snapshot\");\n client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")",
"filename": "core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -2293,9 +2293,9 @@ public void testDeleteOrphanSnapshot() throws Exception {\n public ClusterState execute(ClusterState currentState) {\n // Simulate orphan snapshot\n ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableOpenMap.builder();\n- shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n List<Entry> entries = new ArrayList<>();\n entries.add(new Entry(new Snapshot(repositoryName,\n createSnapshotResponse.getSnapshotInfo().snapshotId()),",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -104,6 +104,9 @@ public long getFailureCount() {\n * finalization of a snapshot, while permitting other IO operations to proceed unblocked. */\n private volatile boolean blockOnWriteIndexFile;\n \n+ /** Allows blocking on writing the snapshot file at the end of snapshot creation to simulate a died master node */\n+ private volatile boolean blockAndFailOnWriteSnapFile;\n+\n private volatile boolean atomicMove;\n \n private volatile boolean blocked = false;\n@@ -118,6 +121,7 @@ public MockRepository(RepositoryMetaData metadata, Environment environment,\n blockOnControlFiles = metadata.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = metadata.settings().getAsBoolean(\"block_on_data\", false);\n blockOnInitialization = metadata.settings().getAsBoolean(\"block_on_init\", false);\n+ blockAndFailOnWriteSnapFile = metadata.settings().getAsBoolean(\"block_on_snap\", false);\n randomPrefix = metadata.settings().get(\"random\", \"default\");\n waitAfterUnblock = metadata.settings().getAsLong(\"wait_after_unblock\", 0L);\n atomicMove = metadata.settings().getAsBoolean(\"atomic_move\", true);\n@@ -168,13 +172,18 @@ public synchronized void unblock() {\n blockOnControlFiles = false;\n blockOnInitialization = false;\n blockOnWriteIndexFile = false;\n+ blockAndFailOnWriteSnapFile = false;\n this.notifyAll();\n }\n \n public void blockOnDataFiles(boolean blocked) {\n blockOnDataFiles = blocked;\n }\n \n+ public void setBlockAndFailOnWriteSnapFiles(boolean blocked) {\n+ blockAndFailOnWriteSnapFile = blocked;\n+ }\n+\n public void setBlockOnWriteIndexFile(boolean blocked) {\n blockOnWriteIndexFile = blocked;\n }\n@@ -187,7 +196,8 @@ private synchronized boolean blockExecution() {\n logger.debug(\"Blocking execution\");\n boolean wasBlocked = false;\n try {\n- while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile) {\n+ while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile ||\n+ blockAndFailOnWriteSnapFile) {\n blocked = true;\n this.wait();\n wasBlocked = true;\n@@ -266,6 +276,8 @@ private void maybeIOExceptionOrBlock(String blobName) throws IOException {\n throw new IOException(\"Random IOException\");\n } else if (blockOnControlFiles) {\n blockExecutionAndMaybeWait(blobName);\n+ } else if (blobName.startsWith(\"snap-\") && blockAndFailOnWriteSnapFile) {\n+ blockExecutionAndFail(blobName);\n }\n }\n }\n@@ -283,6 +295,15 @@ private void blockExecutionAndMaybeWait(final String blobName) {\n }\n }\n \n+ /**\n+ * Blocks an I/O operation on the blob fails and throws an exception when unblocked\n+ */\n+ private void blockExecutionAndFail(final String blobName) throws IOException {\n+ logger.info(\"blocking I/O operation for file [{}] at path [{}]\", blobName, path());\n+ blockExecution();\n+ throw new IOException(\"exception after block\");\n+ }\n+\n MockBlobContainer(BlobContainer delegate) {\n super(delegate);\n }",
"filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java",
"status": "modified"
}
]
} |
{
"body": "When using nested inner hits feature and source filtering on the inner hit the included fields are not returned. This is specifying the include path relative to the nested object (which is how it is returned in results without filtering). If I specify the include using the absolute path, I get a result but the source is no longer returned relative to the nested object.\r\n\r\nIf I have this document:\r\n\r\n```json\r\n{\"nested\":[{\"nested_field\": \"some value\"}]}\r\n```\r\n\r\nWithout filtering, a nested inner hit response would return:\r\n\r\n```json\r\n...\"_source\": {\"nested_field\": \"some value\"}\r\n```\r\n\r\nIf I set the inner hit source filtering to include `nested_field` I get:\r\n\r\n```json\r\n...\"_source\": {}\r\n```\r\n\r\nAnd if I set the inner hit source filtering to include `nested.nested_field` I get:\r\n\r\n```json\r\n...\"_source\": {\"nested\": {\"nested_field\": \"some value\"}}\r\n```\r\n\r\nThis is on elasticsearch 5.2.0.\r\n\r\n\r\n/cc @martijnvg @tlrx ",
"comments": [
{
"body": "Bugger... the behaviour of source filtering was changed in https://github.com/elastic/elasticsearch/pull/18567 but without the corresponding change to unfiltered source. Fixing this now would be a breaking change, either way. Perhaps we should aim to fix this in 6.0 instead.",
"created_at": "2017-02-10T08:24:47Z"
},
{
"body": "Waiting for 6.0 is fine with me, its not a pressing issue. It actually only popped up in some tests I have for a clients internal project. Thanks.",
"created_at": "2017-02-10T19:05:23Z"
},
{
"body": "Just hit this bug again and reminded to check on its status. @martijnvg @clintongormley has anything been done on this for the 6x release?",
"created_at": "2017-08-07T15:22:10Z"
},
{
"body": "@mattweber I think this bug fell through the cracks. I'll look into it right away.",
"created_at": "2017-08-08T15:38:04Z"
},
{
"body": "Is the fix only available in 6.x release or anything has been done for 5.x versions?\r\n",
"created_at": "2017-11-24T06:43:12Z"
},
{
"body": "@sand33p-23 The fix is available from 6.0. it has not been backported to a 5.x version.",
"created_at": "2017-11-24T07:58:31Z"
}
],
"number": 23090,
"title": "Inner hits source filtering not working"
} | {
"body": "As part of #18567 relative paths were no longer used to make nested hits more consistent with normal hits, but the _source of nested document was forgotten. Only if the nested _source was filtered the full field names / paths were used.\r\n\r\nCloses #23090\r\n\r\nI wonder if we should backport this change to 6.0 branch? It is a breaking change.",
"number": 26102,
"review_comments": [],
"title": "Unfiltered nested source should keep its full path"
} | {
"commits": [
{
"message": "inner hits: Unfiltered nested source should keep its full path\n\nlike filtered nested source.\n\nCloses #23090"
}
],
"files": [
{
"diff": "@@ -284,7 +284,7 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI\n }\n context.lookup().source().setSource(nestedSourceAsMap);\n XContentType contentType = tuple.v1();\n- BytesReference nestedSource = contentBuilder(contentType).map(sourceAsMap).bytes();\n+ BytesReference nestedSource = contentBuilder(contentType).map(nestedSourceAsMap).bytes();\n context.lookup().source().setSource(nestedSource);\n context.lookup().source().setSourceContentType(contentType);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.plugins.Plugin;\n@@ -67,6 +66,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.smileBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.yamlBuilder;\n+import static org.elasticsearch.common.xcontent.support.XContentMapValues.extractValue;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.index.query.QueryBuilders.nestedQuery;\n@@ -728,7 +728,7 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(1L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat((Integer) searchHits.getAt(0).getSourceAsMap().get(\"date\"), equalTo(1));\n+ assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(1));\n \n bucket = terms.getBucketByKey(\"b\");\n assertThat(bucket.getDocCount(), equalTo(2L));\n@@ -737,10 +737,10 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(2L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n- assertThat((Integer) searchHits.getAt(0).getSourceAsMap().get(\"date\"), equalTo(2));\n+ assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(2));\n assertThat(searchHits.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(1).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat((Integer) searchHits.getAt(1).getSourceAsMap().get(\"date\"), equalTo(3));\n+ assertThat(extractValue(\"comments.date\", searchHits.getAt(1).getSourceAsMap()), equalTo(3));\n \n bucket = terms.getBucketByKey(\"c\");\n assertThat(bucket.getDocCount(), equalTo(1L));\n@@ -749,7 +749,7 @@ public void testTopHitsInNestedSimple() throws Exception {\n assertThat(searchHits.getTotalHits(), equalTo(1L));\n assertThat(searchHits.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(0).getNestedIdentity().getOffset(), equalTo(1));\n- assertThat((Integer) searchHits.getAt(0).getSourceAsMap().get(\"date\"), equalTo(4));\n+ assertThat(extractValue(\"comments.date\", searchHits.getAt(0).getSourceAsMap()), equalTo(4));\n }\n \n public void testTopHitsInSecondLayerNested() throws Exception {\n@@ -802,49 +802,49 @@ public void testTopHitsInSecondLayerNested() throws Exception {\n assertThat(topReviewers.getHits().getHits().length, equalTo(7));\n \n assertThat(topReviewers.getHits().getAt(0).getId(), equalTo(\"1\"));\n- assertThat((String) topReviewers.getHits().getAt(0).getSourceAsMap().get(\"name\"), equalTo(\"user a\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(0).getSourceAsMap()), 
equalTo(\"user a\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getOffset(), equalTo(0));\n \n assertThat(topReviewers.getHits().getAt(1).getId(), equalTo(\"1\"));\n- assertThat((String) topReviewers.getHits().getAt(1).getSourceAsMap().get(\"name\"), equalTo(\"user b\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(1).getSourceAsMap()), equalTo(\"user b\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(1).getNestedIdentity().getChild().getOffset(), equalTo(1));\n \n assertThat(topReviewers.getHits().getAt(2).getId(), equalTo(\"1\"));\n- assertThat((String) topReviewers.getHits().getAt(2).getSourceAsMap().get(\"name\"), equalTo(\"user c\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(2).getSourceAsMap()), equalTo(\"user c\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(2).getNestedIdentity().getChild().getOffset(), equalTo(2));\n \n assertThat(topReviewers.getHits().getAt(3).getId(), equalTo(\"1\"));\n- assertThat((String) topReviewers.getHits().getAt(3).getSourceAsMap().get(\"name\"), equalTo(\"user c\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(3).getSourceAsMap()), equalTo(\"user c\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(3).getNestedIdentity().getChild().getOffset(), equalTo(0));\n \n assertThat(topReviewers.getHits().getAt(4).getId(), equalTo(\"1\"));\n- assertThat((String) topReviewers.getHits().getAt(4).getSourceAsMap().get(\"name\"), equalTo(\"user d\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(4).getSourceAsMap()), equalTo(\"user d\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(4).getNestedIdentity().getChild().getOffset(), equalTo(1));\n \n assertThat(topReviewers.getHits().getAt(5).getId(), equalTo(\"1\"));\n- assertThat((String) 
topReviewers.getHits().getAt(5).getSourceAsMap().get(\"name\"), equalTo(\"user e\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(5).getSourceAsMap()), equalTo(\"user e\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getOffset(), equalTo(1));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n assertThat(topReviewers.getHits().getAt(5).getNestedIdentity().getChild().getOffset(), equalTo(2));\n \n assertThat(topReviewers.getHits().getAt(6).getId(), equalTo(\"2\"));\n- assertThat((String) topReviewers.getHits().getAt(6).getSourceAsMap().get(\"name\"), equalTo(\"user f\"));\n+ assertThat(extractValue(\"comments.reviewers.name\", topReviewers.getHits().getAt(6).getSourceAsMap()), equalTo(\"user f\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(topReviewers.getHits().getAt(0).getNestedIdentity().getChild().getField().string(), equalTo(\"reviewers\"));\n@@ -900,7 +900,7 @@ public void testNestedFetchFeatures() {\n assertThat(field.getValue().toString(), equalTo(\"5\"));\n \n assertThat(searchHit.getSourceAsMap().size(), equalTo(1));\n- assertThat(XContentMapValues.extractValue(\"comments.message\", searchHit.getSourceAsMap()), equalTo(\"some comment\"));\n+ assertThat(extractValue(\"comments.message\", searchHit.getSourceAsMap()), equalTo(\"some comment\"));\n }\n \n public void testTopHitsInNested() throws Exception {\n@@ -933,7 +933,7 @@ public void testTopHitsInNested() throws Exception {\n for (int j = 0; j < 3; j++) {\n assertThat(searchHits.getAt(j).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(searchHits.getAt(j).getNestedIdentity().getOffset(), equalTo(0));\n- assertThat((Integer) searchHits.getAt(j).getSourceAsMap().get(\"id\"), equalTo(0));\n+ assertThat(extractValue(\"comments.id\", searchHits.getAt(j).getSourceAsMap()), equalTo(0));\n \n HighlightField highlightField = searchHits.getAt(j).getHighlightFields().get(\"comments.message\");\n assertThat(highlightField.getFragments().length, equalTo(1));",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/TopHitsIT.java",
"status": "modified"
},
{
"diff": "@@ -563,7 +563,7 @@ public void testMatchesQueriesNestedInnerHits() throws Exception {\n }\n }\n \n- public void testNestedSourceFiltering() throws Exception {\n+ public void testNestedSource() throws Exception {\n assertAcked(prepareCreate(\"index1\").addMapping(\"message\", \"comments\", \"type=nested\"));\n client().prepareIndex(\"index1\", \"message\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"message\", \"quick brown fox\")\n@@ -585,6 +585,19 @@ public void testNestedSourceFiltering() throws Exception {\n assertNoFailures(response);\n assertHitCount(response, 1);\n \n+ assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(2L));\n+ assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap()),\n+ equalTo(\"fox eat quick\"));\n+ assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(1).getSourceAsMap()),\n+ equalTo(\"fox ate rabbit x y z\"));\n+\n+ response = client().prepareSearch()\n+ .setQuery(nestedQuery(\"comments\", matchQuery(\"comments.message\", \"fox\"), ScoreMode.None)\n+ .innerHit(new InnerHitBuilder()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getTotalHits(), equalTo(2L));\n assertThat(extractValue(\"comments.message\", response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getSourceAsMap()),\n equalTo(\"fox eat quick\"));",
"filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java",
"status": "modified"
},
{
"diff": "@@ -135,4 +135,9 @@ The `unified` highlighter outputs the same highlighting when `index_options` is\n \n ==== `fielddata_fields`\n \n-The deprecated `fielddata_fields` have now been removed. `docvalue_fields` should be used instead.\n\\ No newline at end of file\n+The deprecated `fielddata_fields` have now been removed. `docvalue_fields` should be used instead.\n+\n+==== Inner hits\n+\n+The source inside a hit of inner hits keeps its full path with respect to the entire source.\n+In prior versions the source field names were relative to the inner hit.",
"filename": "docs/reference/migration/migrate_6_0/search.asciidoc",
"status": "modified"
},
{
"diff": "@@ -158,8 +158,10 @@ An example of a response snippet that could be generated from the above search r\n },\n \"_score\": 1.0,\n \"_source\": {\n- \"author\": \"nik9000\",\n- \"number\": 2\n+ \"comments\" : {\n+ \"author\": \"nik9000\",\n+ \"number\": 2\n+ }\n }\n }\n ]\n@@ -404,8 +406,12 @@ Which would look like:\n },\n \"_score\": 0.6931472,\n \"_source\": {\n- \"value\": 1,\n- \"voter\": \"kimchy\"\n+ \"comments\": {\n+ \"votes\": {\n+ \"value\": 1,\n+ \"voter\": \"kimchy\"\n+ }\n+ }\n }\n }\n ]",
"filename": "docs/reference/search/request/inner-hits.asciidoc",
"status": "modified"
}
]
} |
{
"body": "Today when we aggregate on the `_index` field the cross cluster search\r\nalias is not taken into account. Neither is it respected when we search\r\non the field. This change adds support for cluster alias when the cluster\r\nalias is present on the `_index` field.\r\n\r\nCloses #25606\r\n",
"comments": [
{
"body": "> It is a bit of a hack but it looks sustainable. I'm wondering that we might be able to simplify things a bit by making the MappedFieldType responsible for producing an IndexFieldData instance directly instead of having an intermediate builder, and passing a QueryShardContext to this method like we do for factory methods of queries.\r\n\r\nthis might be a good way to make this simpler. I will look into it. thanks!",
"created_at": "2017-07-25T15:15:53Z"
},
{
"body": "> It is a bit of a hack but it looks sustainable. I'm wondering that we might be able to simplify things a bit by making the MappedFieldType responsible for producing an IndexFieldData instance directly instead of having an intermediate builder, and passing a QueryShardContext to this method like we do for factory methods of queries.\r\n\r\n@jpountz @jimczi are you ok if we do this as a followup? I think the the place we have hack in is pretty contained and we can move on with this as is?",
"created_at": "2017-07-25T18:45:10Z"
},
{
"body": "Sure, let's do this as a follow-up.",
"created_at": "2017-07-26T06:11:06Z"
}
],
"number": 25885,
"title": "Respect cluster alias in `_index` aggs and queries"
} | {
"body": "We introduced a hack in #25885 to respect the cluster alias if available on the `_index` field.\r\nThis is important if aggregations or other field data related operations are executed. Yet, we added\r\na small hack that duplicated an implementation detail from the `_index` field data builder to make this work. This change adds a necessary but simple API change that allows us to remove the hack and only have a single\r\nimplementation.",
"number": 26082,
"review_comments": [
{
"body": "nit: when the method is complex (there are 5 different arguments here), I find that explicitly implementing the interface is easier to read than lambdas",
"created_at": "2017-08-07T14:40:37Z"
},
{
"body": "fair enough. I will roll this back :)",
"created_at": "2017-08-07T14:52:24Z"
}
],
"title": "Remove `_index` fielddata hack if cluster alias is present"
} | {
"commits": [
{
"message": "Remove `_index` fielddata hack if cluster alias is present\n\nWe introduced a hack in #25885 to respect the cluster alias if available on the `_index` field.\nThis is important if aggregations or other field data related operations are executed. Yet, we added\na small hack that duplicated an implementation detail from the `_index` field data builder to make this work.\nThis change adds a necessary but simple API change that allows us to remove the hack and only have a single\nimplementation."
},
{
"message": "be more verbose"
}
],
"files": [
{
"diff": "@@ -105,10 +105,14 @@ public synchronized void clearField(final String fieldName) {\n ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptions);\n }\n \n- @SuppressWarnings(\"unchecked\")\n public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n+ return getForField(fieldType, index().getName());\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType, String fullyQualifiedIndexName) {\n final String fieldName = fieldType.name();\n- IndexFieldData.Builder builder = fieldType.fielddataBuilder();\n+ IndexFieldData.Builder builder = fieldType.fielddataBuilder(fullyQualifiedIndexName);\n \n IndexFieldDataCache cache;\n synchronized (this) {",
"filename": "core/src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataService.java",
"status": "modified"
},
{
"diff": "@@ -121,7 +121,7 @@ public BytesReference valueForDisplay(Object value) {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new BytesBinaryDVIndexFieldData.Builder();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/BinaryFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -182,7 +182,7 @@ public Boolean valueForDisplay(Object value) {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(NumericType.BOOLEAN);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/BooleanFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -356,7 +356,7 @@ public Relation isFieldWithinQuery(IndexReader reader,\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(NumericType.DATE);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -175,7 +175,7 @@ public MappedFieldType clone() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new AbstractLatLonPointDVIndexFieldData.Builder();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -153,7 +153,7 @@ public Query termsQuery(List<?> values, QueryShardContext context) {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n if (indexOptions() == IndexOptions.NONE) {\n throw new IllegalArgumentException(\"Fielddata access on the _uid field is disallowed\");\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -157,8 +157,8 @@ private boolean isSameIndex(Object value, String indexName) {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n- return new ConstantIndexFieldData.Builder(mapperService -> mapperService.index().getName());\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n+ return new ConstantIndexFieldData.Builder(mapperService -> fullyQualifiedIndexName);\n }\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -281,7 +281,7 @@ public int size() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().scriptFunction(IpScriptDocValues::new);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IpFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -219,7 +219,7 @@ public Query nullValueQuery() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/KeywordFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -98,8 +98,10 @@ public MappedFieldType() {\n * @throws IllegalArgumentException if the fielddata is not supported on this type.\n * An IllegalArgumentException is needed in order to return an http error 400\n * when this error occurs in a request. see: {@link org.elasticsearch.ExceptionsHelper#status}\n- **/\n- public IndexFieldData.Builder fielddataBuilder() {\n+ *\n+ * @param fullyQualifiedIndexName the name of the index this field-data is build for\n+ * */\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n throw new IllegalArgumentException(\"Fielddata is not supported on field [\" + name() + \"] of type [\" + typeName() + \"]\");\n }\n \n@@ -322,7 +324,7 @@ public boolean isSearchable() {\n */\n public boolean isAggregatable() {\n try {\n- fielddataBuilder();\n+ fielddataBuilder(\"\");\n return true;\n } catch (IllegalArgumentException e) {\n return false;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java",
"status": "modified"
},
{
"diff": "@@ -855,7 +855,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(type.numericType());\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -41,7 +41,6 @@\n import org.elasticsearch.index.query.QueryShardContext;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n import java.util.Collections;\n import java.util.Iterator;\n import java.util.List;\n@@ -198,7 +197,7 @@ public Query termsQuery(List values, @Nullable QueryShardContext context) {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n return new DocValuesIndexFieldData.Builder();\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -259,7 +259,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new IndexFieldData.Builder() {\n @Override",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -204,7 +204,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -283,7 +283,7 @@ public Query nullValueQuery() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n if (fielddata == false) {\n throw new IllegalArgumentException(\"Fielddata is disabled on text fields by default. Set fielddata=true on [\" + name()\n + \"] in order to load fielddata in memory by uninverting the inverted index. Note that this can however \"",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/TextFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -109,7 +109,7 @@ public String typeName() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n if (hasDocValues()) {\n return new DocValuesIndexFieldData.Builder();\n } else {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -110,15 +110,15 @@ public String typeName() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n if (indexOptions() == IndexOptions.NONE) {\n DEPRECATION_LOGGER.deprecated(\"Fielddata access on the _uid field is deprecated, use _id instead\");\n return new IndexFieldData.Builder() {\n @Override\n public IndexFieldData<?> build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache,\n CircuitBreakerService breakerService, MapperService mapperService) {\n MappedFieldType idFieldType = mapperService.fullName(IdFieldMapper.NAME);\n- IndexFieldData<?> idFieldData = idFieldType.fielddataBuilder()\n+ IndexFieldData<?> idFieldData = idFieldType.fielddataBuilder(fullyQualifiedIndexName)\n .build(indexSettings, idFieldType, cache, breakerService, mapperService);\n final String type = mapperService.types().iterator().next();\n return new UidIndexFieldData(indexSettings.getIndex(), type, idFieldData);",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -62,6 +62,7 @@\n import java.util.HashMap;\n import java.util.Map;\n import java.util.function.BiConsumer;\n+import java.util.function.BiFunction;\n import java.util.function.Function;\n import java.util.function.LongSupplier;\n \n@@ -77,7 +78,7 @@ public class QueryShardContext extends QueryRewriteContext {\n private final MapperService mapperService;\n private final SimilarityService similarityService;\n private final BitsetFilterCache bitsetFilterCache;\n- private final Function<MappedFieldType, IndexFieldData<?>> indexFieldDataService;\n+ private final BiFunction<MappedFieldType, String, IndexFieldData<?>> indexFieldDataService;\n private final int shardId;\n private final IndexReader reader;\n private final String clusterAlias;\n@@ -101,10 +102,10 @@ public String[] getTypes() {\n private boolean isFilter;\n \n public QueryShardContext(int shardId, IndexSettings indexSettings, BitsetFilterCache bitsetFilterCache,\n- Function<MappedFieldType, IndexFieldData<?>> indexFieldDataLookup, MapperService mapperService,\n- SimilarityService similarityService, ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n- NamedWriteableRegistry namedWriteableRegistry,Client client, IndexReader reader, LongSupplier nowInMillis,\n- String clusterAlias) {\n+ BiFunction<MappedFieldType, String, IndexFieldData<?>> indexFieldDataLookup, MapperService mapperService,\n+ SimilarityService similarityService, ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n+ NamedWriteableRegistry namedWriteableRegistry, Client client, IndexReader reader, LongSupplier nowInMillis,\n+ String clusterAlias) {\n super(xContentRegistry, namedWriteableRegistry,client, nowInMillis);\n this.shardId = shardId;\n this.similarityService = similarityService;\n@@ -164,13 +165,7 @@ public BitSetProducer bitsetFilter(Query filter) {\n }\n \n public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n- if (clusterAlias != null && IndexFieldMapper.NAME.equals(fieldType.name())) {\n- // this is a \"hack\" to make the _index field data aware of cross cluster search cluster aliases.\n- ConstantIndexFieldData ifd = (ConstantIndexFieldData) indexFieldDataService.apply(fieldType);\n- return (IFD) new ConstantIndexFieldData.Builder(m -> fullyQualifiedIndexName)\n- .build(indexSettings, fieldType, null, null, mapperService);\n- }\n- return (IFD) indexFieldDataService.apply(fieldType);\n+ return (IFD) indexFieldDataService.apply(fieldType, fullyQualifiedIndexName);\n }\n \n public void addNamedQuery(String name, Query query) {\n@@ -283,7 +278,8 @@ public Collection<String> queryTypes() {\n \n public SearchLookup lookup() {\n if (lookup == null) {\n- lookup = new SearchLookup(getMapperService(), indexFieldDataService, types);\n+ lookup = new SearchLookup(getMapperService(),\n+ mappedFieldType -> indexFieldDataService.apply(mappedFieldType, fullyQualifiedIndexName), types);\n }\n return lookup;\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java",
"status": "modified"
},
{
"diff": "@@ -152,7 +152,8 @@ public void testFieldData() throws IOException {\n \n // single-valued\n ft.setName(\"scaled_float1\");\n- IndexNumericFieldData fielddata = (IndexNumericFieldData) ft.fielddataBuilder().build(indexSettings, ft, null, null, null);\n+ IndexNumericFieldData fielddata = (IndexNumericFieldData) ft.fielddataBuilder(\"index\")\n+ .build(indexSettings, ft, null, null, null);\n assertEquals(fielddata.getNumericType(), IndexNumericFieldData.NumericType.DOUBLE);\n AtomicNumericFieldData leafFieldData = fielddata.load(reader.leaves().get(0));\n SortedNumericDoubleValues values = leafFieldData.getDoubleValues();\n@@ -162,7 +163,7 @@ public void testFieldData() throws IOException {\n \n // multi-valued\n ft.setName(\"scaled_float2\");\n- fielddata = (IndexNumericFieldData) ft.fielddataBuilder().build(indexSettings, ft, null, null, null);\n+ fielddata = (IndexNumericFieldData) ft.fielddataBuilder(\"index\").build(indexSettings, ft, null, null, null);\n leafFieldData = fielddata.load(reader.leaves().get(0));\n values = leafFieldData.getDoubleValues();\n assertTrue(values.advanceExact(0));",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -477,7 +477,7 @@ public void testFielddata() throws IOException {\n DocumentMapper disabledMapper = parser.parse(\"type\", new CompressedXContent(mapping));\n assertEquals(mapping, disabledMapper.mappingSource().toString());\n IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n- () -> disabledMapper.mappers().getMapper(\"field\").fieldType().fielddataBuilder());\n+ () -> disabledMapper.mappers().getMapper(\"field\").fieldType().fielddataBuilder(\"test\"));\n assertThat(e.getMessage(), containsString(\"Fielddata is disabled\"));\n \n mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n@@ -490,7 +490,7 @@ public void testFielddata() throws IOException {\n DocumentMapper enabledMapper = parser.parse(\"type\", new CompressedXContent(mapping));\n \n assertEquals(mapping, enabledMapper.mappingSource().toString());\n- enabledMapper.mappers().getMapper(\"field\").fieldType().fielddataBuilder(); // no exception this time\n+ enabledMapper.mappers().getMapper(\"field\").fieldType().fielddataBuilder(\"test\"); // no exception this time\n \n String illegalMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"properties\").startObject(\"field\")",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -76,7 +76,7 @@ public void testDocValues(boolean singleType) throws IOException {\n w.close();\n \n MappedFieldType ft = mapperService.fullName(TypeFieldMapper.NAME);\n- IndexOrdinalsFieldData fd = (IndexOrdinalsFieldData) ft.fielddataBuilder().build(mapperService.getIndexSettings(),\n+ IndexOrdinalsFieldData fd = (IndexOrdinalsFieldData) ft.fielddataBuilder(\"test\").build(mapperService.getIndexSettings(),\n ft, new IndexFieldDataCache.None(), new NoneCircuitBreakerService(), mapperService);\n AtomicOrdinalsFieldData afd = fd.load(r.leaves().get(0));\n SortedSetDocValues values = afd.getOrdinalsValues();",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/TypeFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n \n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -35,7 +34,6 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.test.ESTestCase;\n-import org.hamcrest.Matcher;\n import org.hamcrest.Matchers;\n \n import java.io.IOException;\n@@ -64,8 +62,8 @@ public void testFailIfFieldMappingNotFound() {\n final long nowInMillis = randomNonNegativeLong();\n \n QueryShardContext context = new QueryShardContext(\n- 0, indexSettings, null, mappedFieldType ->\n- mappedFieldType.fielddataBuilder().build(indexSettings, mappedFieldType, null, null, null)\n+ 0, indexSettings, null, (mappedFieldType, idxName) ->\n+ mappedFieldType.fielddataBuilder(idxName).build(indexSettings, mappedFieldType, null, null, null)\n , mapperService, null, null, xContentRegistry(), writableRegistry(), null, null,\n () -> nowInMillis, null);\n \n@@ -109,8 +107,8 @@ public void testClusterAlias() throws IOException {\n IndexFieldMapper mapper = new IndexFieldMapper.Builder(null).build(ctx);\n final String clusterAlias = randomBoolean() ? null : \"remote_cluster\";\n QueryShardContext context = new QueryShardContext(\n- 0, indexSettings, null, mappedFieldType ->\n- mappedFieldType.fielddataBuilder().build(indexSettings, mappedFieldType, null, null, mapperService)\n+ 0, indexSettings, null, (mappedFieldType, indexname) ->\n+ mappedFieldType.fielddataBuilder(indexname).build(indexSettings, mappedFieldType, null, null, mapperService)\n , mapperService, null, null, xContentRegistry(), writableRegistry(), null, null,\n () -> nowInMillis, clusterAlias);\n ",
"filename": "core/src/test/java/org/elasticsearch/index/query/QueryShardContextTests.java",
"status": "modified"
},
{
"diff": "@@ -88,7 +88,7 @@ public String typeName() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/mapper/MetaJoinFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -111,7 +111,7 @@ public String typeName() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentIdFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -223,7 +223,7 @@ public String typeName() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/mapper/ParentJoinFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -659,7 +659,7 @@ public BitSetProducer bitsetFilter(Query query) {\n @Override\n @SuppressWarnings(\"unchecked\")\n public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n- IndexFieldData.Builder builder = fieldType.fielddataBuilder();\n+ IndexFieldData.Builder builder = fieldType.fielddataBuilder(shardContext.getFullyQualifiedIndexName());\n IndexFieldDataCache cache = new IndexFieldDataCache.None();\n CircuitBreakerService circuitBreaker = new NoneCircuitBreakerService();\n return (IFD) builder.build(shardContext.getIndexSettings(), fieldType, cache, circuitBreaker,",
"filename": "modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -128,7 +128,7 @@ public Query nullValueQuery() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder();\n }",
"filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/mapper/ICUCollationKeywordFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -122,7 +122,7 @@ public Murmur3FieldType clone() {\n }\n \n @Override\n- public IndexFieldData.Builder fielddataBuilder() {\n+ public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n failIfNoDocValues();\n return new DocValuesIndexFieldData.Builder().numericType(NumericType.LONG);\n }",
"filename": "plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,6 @@\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache.Listener;\n import org.elasticsearch.index.cache.query.DisabledQueryCache;\n import org.elasticsearch.index.engine.Engine;\n-import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataCache;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.ContentPath;\n@@ -202,9 +201,9 @@ protected QueryShardContext queryShardContextMock(MapperService mapperService, M\n when(queryShardContext.getMapperService()).thenReturn(mapperService);\n for (MappedFieldType fieldType : fieldTypes) {\n when(queryShardContext.fieldMapper(fieldType.name())).thenReturn(fieldType);\n- when(queryShardContext.getForField(fieldType)).then(invocation -> fieldType.fielddataBuilder().build(\n- mapperService.getIndexSettings(), fieldType, new IndexFieldDataCache.None(), circuitBreakerService,\n- mapperService));\n+ when(queryShardContext.getForField(fieldType)).then(invocation -> fieldType.fielddataBuilder(mapperService.getIndexSettings()\n+ .getIndex().getName())\n+ .build(mapperService.getIndexSettings(), fieldType, new IndexFieldDataCache.None(), circuitBreakerService, mapperService));\n }\n NestedScope nestedScope = new NestedScope();\n when(queryShardContext.isFilter()).thenCallRealMethod();",
"filename": "test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java",
"status": "modified"
}
]
} |
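The record above threads a `fullyQualifiedIndexName` argument through `fielddataBuilder()` so that `_index` field data can reflect cross-cluster search aliases without the special case that was removed from `QueryShardContext`. Below is a minimal, self-contained Java sketch of that idea only; the class and member names are illustrative, not the real Elasticsearch types.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Standalone sketch (not Elasticsearch code): the field-data lookup takes the
// fully qualified index name as a second argument, so the `_index` pseudo-field
// no longer needs query-time patching for cross-cluster aliases.
public class FieldDataLookupSketch {

    // Before: only the field name is available, so a caller that knows the
    // cluster alias (e.g. "remote_cluster:index") has to fix up the result.
    static final Function<String, String> LEGACY = fieldName -> "index";

    // After: the fully qualified index name is threaded through the lookup.
    static final BiFunction<String, String, String> WITH_INDEX_NAME =
            (fieldName, fullyQualifiedIndexName) ->
                    "_index".equals(fieldName) ? fullyQualifiedIndexName : fieldName;

    public static void main(String[] args) {
        System.out.println(LEGACY.apply("_index"));                                   // index
        System.out.println(WITH_INDEX_NAME.apply("_index", "remote_cluster:index"));  // remote_cluster:index
    }
}
```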
{
"body": "When trying to remove a cluster from my cross-cluster search list, I kept failing to remove the cluster by using the documented approach:\r\n\r\n```http\r\nPUT /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search.remote.local_cluster.seeds\": null\r\n }\r\n}\r\n```\r\n\r\neven though it claims to successfully use the settings:\r\n\r\n```json\r\n{\r\n \"acknowledged\": true,\r\n \"persistent\": {},\r\n \"transient\": {}\r\n}\r\n```\r\n\r\nHowever, to actually remove the seeds (and therefore the remote cluster), you need to wrap the `null` in `[]` brackets.\r\n\r\n```http\r\nPUT /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search\": {\r\n \"remote\": {\r\n \"my_cluster\": {\r\n \"seeds\": [\"127.0.0.1:9300\"] \r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search.remote.my_cluster.seeds\": [ null ]\r\n }\r\n}\r\n```\r\n\r\nFor what it's worth, simply passing `[]` does not work.",
"comments": [
{
"body": "Sadly this is a case of array settings biting us again because they get converted to keys `key.0`, `key.1`, etc.",
"created_at": "2017-08-04T11:28:50Z"
},
{
"body": "there is an open [community PR](26043) for it, @jasontedor would you mind having a look at it? ",
"created_at": "2017-08-04T17:14:22Z"
},
{
"body": "@javanna It looks like @rjernst has taken it up and we will need a different approach (namely addressing the underlying issue with how we handle multi-valued settings). ",
"created_at": "2017-08-19T06:23:25Z"
},
{
"body": "Another report of this issue was received today. Wrapping the `null` in brackets does not work if there are multiple seeds. A subsequent request is required to make it go away:\r\n\r\n```http\r\nGET /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search\": {\r\n \"remote\": {\r\n \"cluster_one\": {\r\n \"seeds\": [\r\n \"127.0.0.1:9301\",\r\n \"127.0.0.1:9302\"\r\n ]\r\n }\r\n }\r\n }\r\n },\r\n \"transient\": {}\r\n}\r\n\r\nPUT /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search.remote.cluster_one.seeds\": [null]\r\n }\r\n}\r\n\r\nGET /_cluster/settings\r\n{\r\n \"persistent\" : {\r\n \"search\" : {\r\n \"remote\" : {\r\n \"cluster_one\" : {\r\n \"seeds\" : {\r\n \"1\" : \"127.0.0.1:9302\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"transient\" : { }\r\n}\r\n\r\nPUT /_cluster/settings\r\n{\r\n \"persistent\": {\r\n \"search.remote.cluster_one.seeds.1\": null\r\n }\r\n}\r\n\r\nGET /_cluster/settings\r\n{\r\n \"persistent\" : { },\r\n \"transient\" : { }\r\n}\r\n```",
"created_at": "2017-08-28T20:08:50Z"
},
{
"body": "@javanna you mentioned this was solved in master already, but it's still open. Is it this PR https://github.com/elastic/elasticsearch/pull/26878 ?\r\n",
"created_at": "2017-11-10T14:03:16Z"
},
{
"body": "I just did some testing, this is fixed in 6.x and master, still happens in 6.0 and 5.6. I do think that #26878 fixed it, but I am not 100% sure. Closing.",
"created_at": "2017-11-10T19:49:30Z"
}
],
"number": 25953,
"title": "Removing Remote Cluster from Cluster Settings requires array format"
} | {
"body": "Hello guys, \r\nI' m having a look on this problem. \r\nCan I ask which one should be the allowed behaviour?\r\nif the value sent is:\r\n`\"search.remote.my_cluster.seeds\": null`\r\nseems that there is a missed match in `AbstractScopedSettings:512`\r\n`if (Regex.simpleMatch(entry, key) && canRemove.test(key)) {`\r\nentry value: `search.remote.my_cluster.seeds`\r\nkey value: `search.remote.my_cluster.seeds.0`\r\nso, the simpleMatch fails and the key is not added as value to be removed.\r\n\r\nWhich behaviour should be accepted? Error or accepting the straight null value?\r\nI wrote a simple solution that makes the match successful.\r\n I don't know if it can be considered correct, please let me know your thought!\r\nCloses #25953\r\n",
"number": 26043,
"review_comments": [],
"title": "match .seeds with seeds.0"
} | {
"commits": [
{
"message": "match .seeds with seeds.0"
}
],
"files": [
{
"diff": "@@ -509,7 +509,7 @@ private static boolean applyDeletes(Set<String> deletes, Settings.Builder builde\n Set<String> keysToRemove = new HashSet<>();\n Set<String> keySet = builder.internalMap().keySet();\n for (String key : keySet) {\n- if (Regex.simpleMatch(entry, key) && canRemove.test(key)) {\n+ if ( (Regex.simpleMatch(entry, key) || Regex.simpleMatch(entry + \".*\", key)) && canRemove.test(key)) {\n // we have to re-check with canRemove here since we might have a wildcard expression foo.* that matches\n // dynamic as well as static settings if that is the case we might remove static settings since we resolve the\n // wildcards late",
"filename": "core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java",
"status": "modified"
}
]
} |
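As the comments in the record above note, array settings are flattened into `key.0`, `key.1`, ... entries, which is why a delete request for the unsuffixed key never matches anything; the one-line change in the diff additionally tries the pattern with a trailing `.*`. Here is a simplified, self-contained sketch of that matching behaviour. The `simpleMatch` helper below is a stand-in written for this example, not the real `Regex.simpleMatch`.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Simplified sketch (not the real AbstractScopedSettings): array settings are
// stored as "key.0", "key.1", ..., so a delete of "key" only works if the
// pattern also matches the expanded keys.
public class SettingsDeleteSketch {

    // Stand-in glob matcher: '*' is the only wildcard.
    static boolean simpleMatch(String pattern, String value) {
        return Pattern.matches(Pattern.quote(pattern).replace("*", "\\E.*\\Q"), value);
    }

    public static void main(String[] args) {
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put("search.remote.my_cluster.seeds.0", "127.0.0.1:9301");
        settings.put("search.remote.my_cluster.seeds.1", "127.0.0.1:9302");

        String delete = "search.remote.my_cluster.seeds";
        for (String key : settings.keySet()) {
            boolean before = simpleMatch(delete, key);                  // false: exact match only
            boolean after = before || simpleMatch(delete + ".*", key);  // true: matches seeds.0, seeds.1
            System.out.println(key + " matched before=" + before + " after=" + after);
        }
    }
}
```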
{
"body": "If a terms aggregation was ordered by a metric nested in a single bucket aggregator which did not collect any documents (e.g. a filters aggregation which did not match in that term bucket) an ArrayOutOfBoundsException would be thrown when the ordering code tried to retrieve the value for the metric. This fix fixes all numeric metric aggregators so they return their default value when a bucket ordinal is requested which was not collected.\n\nCloses #17225\n",
"comments": [
{
"body": "LGTM\n",
"created_at": "2016-03-29T12:24:35Z"
}
],
"number": 17379,
"title": "Prevents exception being raised when ordering by an aggregation which wasn't collected"
} | {
"body": "https://github.com/elastic/elasticsearch/pull/17379 fixed many metric aggs so that if the parent aggregation does not collect any documents an empty bucket value is returned instead of an ArrayOutOfBoundsException being thrown. Unfortunately the value count aggregation was mised from this fix.\r\n\r\nThis change applies this fix from #17379 for the value count aggregation.",
"number": 26038,
"review_comments": [],
"title": "Fixes array out of bounds for value count agg"
} | {
"commits": [
{
"message": "Fixes array out of bounds for value count agg\n\nhttps://github.com/elastic/elasticsearch/pull/17379 fixed many metric aggs so that if the parent aggregation does not collect any documents an empty bucket value is returned instead of an ArrayOutOfBoundsException being thrown. Unfortunately the value count aggregation was mised from this fix.\n\nThis change applies this fix from #17379 for the value count aggregation."
}
],
"files": [
{
"diff": "@@ -83,7 +83,7 @@ public void collect(int doc, long bucket) throws IOException {\n \n @Override\n public double metric(long owningBucketOrd) {\n- return valuesSource == null ? 0 : counts.get(owningBucketOrd);\n+ return (valuesSource == null || owningBucketOrd >= counts.size()) ? 0 : counts.get(owningBucketOrd);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/ValueCountAggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,24 +18,31 @@\n */\n package org.elasticsearch.search.aggregations.metrics;\n \n-import java.util.Collection;\n-import java.util.Collections;\n-import java.util.Map;\n-\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptType;\n+import org.elasticsearch.search.aggregations.BucketOrder;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.global.Global;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount;\n import org.elasticsearch.test.ESIntegTestCase;\n \n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Map;\n+\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.count;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.global;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.search.aggregations.metrics.MetricAggScriptPlugin.METRIC_SCRIPT_ENGINE;\n import static org.elasticsearch.search.aggregations.metrics.MetricAggScriptPlugin.SUM_FIELD_PARAMS_SCRIPT;\n import static org.elasticsearch.search.aggregations.metrics.MetricAggScriptPlugin.SUM_VALUES_FIELD_SCRIPT;\n@@ -243,4 +250,33 @@ public void testDontCacheScripts() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n .getMissCount(), equalTo(1L));\n }\n+\n+ public void testOrderByEmptyAggregation() throws Exception {\n+ SearchResponse searchResponse = client().prepareSearch(\"idx\").setQuery(matchAllQuery())\n+ .addAggregation(terms(\"terms\").field(\"value\").order(BucketOrder.compound(BucketOrder.aggregation(\"filter>count\", true)))\n+ .subAggregation(filter(\"filter\", termQuery(\"value\", 100)).subAggregation(count(\"count\").field(\"value\"))))\n+ .get();\n+\n+ assertHitCount(searchResponse, 10);\n+\n+ Terms terms = searchResponse.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ List<? extends Terms.Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ assertThat(buckets.size(), equalTo(10));\n+\n+ for (int i = 0; i < 10; i++) {\n+ Terms.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber(), equalTo((long) i + 1));\n+ assertThat(bucket.getDocCount(), equalTo(1L));\n+ Filter filter = bucket.getAggregations().get(\"filter\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo(0L));\n+ ValueCount count = filter.getAggregations().get(\"count\");\n+ assertThat(count, notNullValue());\n+ assertThat(count.value(), equalTo(0.0));\n+\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/ValueCountIT.java",
"status": "modified"
}
]
} |
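The guard added in the diff above makes the value count aggregator treat a bucket ordinal it never collected as zero instead of reading past the end of its backing array. A small, self-contained sketch of that pattern follows; a plain Java array stands in for the real `BigArrays`-backed counts, and the names are illustrative.

```java
import java.util.Arrays;

// Minimal sketch (not the real ValueCountAggregator): a metric backed by a
// growable array must treat ordinals it never collected as "empty" instead of
// throwing an ArrayIndexOutOfBoundsException.
public class BucketMetricSketch {

    private long[] counts = new long[0];

    void collect(int bucketOrd) {
        if (bucketOrd >= counts.length) {
            counts = Arrays.copyOf(counts, bucketOrd + 1); // grow lazily
        }
        counts[bucketOrd]++;
    }

    double metric(long owningBucketOrd) {
        // Ordering by a sub-aggregation may ask for a bucket that collected no
        // documents (e.g. a filter that matched nothing), so fall back to 0.
        return owningBucketOrd >= counts.length ? 0 : counts[(int) owningBucketOrd];
    }

    public static void main(String[] args) {
        BucketMetricSketch agg = new BucketMetricSketch();
        agg.collect(0);
        System.out.println(agg.metric(0)); // 1.0
        System.out.println(agg.metric(5)); // 0.0 -- ordinal never collected
    }
}
```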
{
"body": "ES 5.5\r\n\r\nWhen an index has a default field defined in the index settings\r\n\r\n```\r\n\"settings\": {\r\n \"index\": {\r\n \"query\": {\r\n \"default_field\": \"message\"\r\n },\r\n \"number_of_replicas\": \"1\"\r\n }\r\n }\r\n```\r\n\r\nbut the actual field does not exist, a * query will cause a NPE:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"query_shard_exception\",\r\n \"reason\": \"failed to create query: {\\n \\\"bool\\\" : {\\n \\\"must\\\" : [\\n {\\n \\\"query_string\\\" : {\\n \\\"query\\\" : \\\"*\\\",\\n \\\"fields\\\" : [ ],\\n \\\"use_dis_max\\\" : true,\\n \\\"tie_breaker\\\" : 0.0,\\n \\\"default_operator\\\" : \\\"or\\\",\\n \\\"auto_generate_phrase_queries\\\" : false,\\n \\\"max_determinized_states\\\" : 10000,\\n \\\"enable_position_increments\\\" : true,\\n \\\"fuzziness\\\" : \\\"AUTO\\\",\\n \\\"fuzzy_prefix_length\\\" : 0,\\n \\\"fuzzy_max_expansions\\\" : 50,\\n \\\"phrase_slop\\\" : 0,\\n \\\"escape\\\" : false,\\n \\\"split_on_whitespace\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n }\\n ],\\n \\\"disable_coord\\\" : false,\\n \\\"adjust_pure_negative\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n}\",\r\n \"index_uuid\": \"SRYVdIE4R5mF0kHJgHBZVw\",\r\n \"index\": \"logstash-test\"\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"logstash-test\",\r\n \"node\": \"68R5f1xiTwyxWmchtjr1Ww\",\r\n \"reason\": {\r\n \"type\": \"query_shard_exception\",\r\n \"reason\": \"failed to create query: {\\n \\\"bool\\\" : {\\n \\\"must\\\" : [\\n {\\n \\\"query_string\\\" : {\\n \\\"query\\\" : \\\"*\\\",\\n \\\"fields\\\" : [ ],\\n \\\"use_dis_max\\\" : true,\\n \\\"tie_breaker\\\" : 0.0,\\n \\\"default_operator\\\" : \\\"or\\\",\\n \\\"auto_generate_phrase_queries\\\" : false,\\n \\\"max_determinized_states\\\" : 10000,\\n \\\"enable_position_increments\\\" : true,\\n \\\"fuzziness\\\" : \\\"AUTO\\\",\\n \\\"fuzzy_prefix_length\\\" : 0,\\n \\\"fuzzy_max_expansions\\\" : 50,\\n \\\"phrase_slop\\\" : 0,\\n \\\"escape\\\" : false,\\n \\\"split_on_whitespace\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n }\\n ],\\n \\\"disable_coord\\\" : false,\\n \\\"adjust_pure_negative\\\" : true,\\n \\\"boost\\\" : 1.0\\n }\\n}\",\r\n \"index_uuid\": \"SRYVdIE4R5mF0kHJgHBZVw\",\r\n \"index\": \"logstash-test\",\r\n \"caused_by\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n }\r\n }\r\n ]\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nSee repro and log snippet [here](https://gist.github.com/ppf2/d58a318ee58a06f7e2b58473758fb25c).\r\n",
"comments": [
{
"body": "Good catch, this would happen whenever one of the queried indices does not have any mappings yet.",
"created_at": "2017-07-31T14:31:20Z"
}
],
"number": 25956,
"title": "NPE when searching * against an index with default field set (but field does not exist)"
} | {
"body": "It currently fails if there are no mappings yet.\r\n\r\nCloses #25956",
"number": 25993,
"review_comments": [
{
"body": "nit: leftover",
"created_at": "2017-08-01T14:42:55Z"
}
],
"title": "Fix `_exists_` in query_string on empty indices."
} | {
"commits": [
{
"message": "Fix `_exists_` in query_string on empty indices.\n\nIt currently fails if there are no mappings yet.\n\nCloses #25956"
},
{
"message": "iter"
}
],
"files": [
{
"diff": "@@ -47,7 +47,6 @@\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.unit.Fuzziness;\n-import org.elasticsearch.index.analysis.ShingleTokenFilterFactory;\n import org.elasticsearch.index.mapper.AllFieldMapper;\n import org.elasticsearch.index.mapper.DateFieldMapper;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -683,6 +682,9 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n private Query existsQuery(String fieldName) {\n final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType =\n (FieldNamesFieldMapper.FieldNamesFieldType) context.getMapperService().fullName(FieldNamesFieldMapper.NAME);\n+ if (fieldNamesFieldType == null) {\n+ return new MatchNoDocsQuery(\"No mappings yet\");\n+ }\n if (fieldNamesFieldType.isEnabled() == false) {\n // The field_names_field is disabled so we switch to a wildcard query that matches all terms\n return new WildcardQuery(new Term(fieldName, \"*\"));",
"filename": "core/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -779,12 +779,15 @@ public void testToQueryTextParsing() throws IOException {\n }\n \n public void testExistsFieldQuery() throws Exception {\n- assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n-\n QueryShardContext context = createShardContext();\n QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder(\"foo:*\");\n Query query = queryBuilder.toQuery(context);\n- Query expected = new ConstantScoreQuery(new TermQuery(new Term(\"_field_names\", \"foo\")));\n+ Query expected;\n+ if (getCurrentTypes().length > 0) {\n+ expected = new ConstantScoreQuery(new TermQuery(new Term(\"_field_names\", \"foo\")));\n+ } else {\n+ expected = new MatchNoDocsQuery();\n+ }\n assertThat(query, equalTo(expected));\n \n queryBuilder = new QueryStringQueryBuilder(\"_all:*\");\n@@ -804,23 +807,28 @@ public void testExistsFieldQuery() throws Exception {\n }\n \n public void testDisabledFieldNamesField() throws Exception {\n+ assumeTrue(\"No types\", getCurrentTypes().length > 0);\n QueryShardContext context = createShardContext();\n context.getMapperService().merge(\"doc\",\n new CompressedXContent(\n PutMappingRequest.buildFromSimplifiedDef(\"doc\",\n \"foo\", \"type=text\",\n \"_field_names\", \"enabled=false\").string()),\n MapperService.MergeReason.MAPPING_UPDATE, true);\n- QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder(\"foo:*\");\n- Query query = queryBuilder.toQuery(context);\n- Query expected = new WildcardQuery(new Term(\"foo\", \"*\"));\n- assertThat(query, equalTo(expected));\n- context.getMapperService().merge(\"doc\",\n- new CompressedXContent(\n- PutMappingRequest.buildFromSimplifiedDef(\"doc\",\n- \"foo\", \"type=text\",\n- \"_field_names\", \"enabled=true\").string()),\n- MapperService.MergeReason.MAPPING_UPDATE, true);\n+ try {\n+ QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder(\"foo:*\");\n+ Query query = queryBuilder.toQuery(context);\n+ Query expected = new WildcardQuery(new Term(\"foo\", \"*\"));\n+ assertThat(query, equalTo(expected));\n+ } finally {\n+ // restore mappings as they were before\n+ context.getMapperService().merge(\"doc\",\n+ new CompressedXContent(\n+ PutMappingRequest.buildFromSimplifiedDef(\"doc\",\n+ \"foo\", \"type=text\",\n+ \"_field_names\", \"enabled=true\").string()),\n+ MapperService.MergeReason.MAPPING_UPDATE, true);\n+ }\n }\n \n ",
"filename": "core/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -166,7 +166,7 @@ protected static Index getIndex() {\n }\n \n protected static String[] getCurrentTypes() {\n- return currentTypes == null ? Strings.EMPTY_ARRAY : currentTypes;\n+ return currentTypes;\n }\n \n protected Collection<Class<? extends Plugin>> getPlugins() {\n@@ -186,7 +186,14 @@ public static void beforeClass() {\n index = new Index(randomAlphaOfLengthBetween(1, 10), \"_na_\");\n \n // Set a single type in the index\n- currentTypes = new String[] { \"doc\" };\n+ switch (random().nextInt(3)) {\n+ case 0:\n+ currentTypes = new String[0]; // no types\n+ break;\n+ default:\n+ currentTypes = new String[] { \"doc\" };\n+ break;\n+ }\n randomTypes = getRandomTypes();\n }\n ",
"filename": "test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java",
"status": "modified"
}
]
} |
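The fix above returns a match-no-docs query when the queried index has no mappings yet, because in that case the `_field_names` field type is null and the old code dereferenced it. A self-contained sketch of the three branches involved is shown below; the enum and the returned strings are hypothetical stand-ins for the real Lucene queries.

```java
// Sketch (not the real QueryStringQueryParser) of the exists-query branches:
// no mappings -> match nothing; _field_names disabled -> wildcard; otherwise
// a term query against _field_names.
public class ExistsQuerySketch {

    enum FieldNamesState { MISSING, DISABLED, ENABLED }

    static String existsQuery(String fieldName, FieldNamesState fieldNames) {
        switch (fieldNames) {
            case MISSING:  return "MatchNoDocsQuery(\"No mappings yet\")";
            case DISABLED: return "WildcardQuery(" + fieldName + ":*)";
            default:       return "TermQuery(_field_names:" + fieldName + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(existsQuery("foo", FieldNamesState.MISSING)); // empty index, previously an NPE
        System.out.println(existsQuery("foo", FieldNamesState.ENABLED)); // normal case
    }
}
```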
{
"body": "**Elasticsearch version**:\r\n5.4.1\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nmapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting a \"NaN\" value.\r\nI would expect that no exception is returned for a field with ignore_malformed=true\r\n\r\n**Steps to reproduce**:\r\nFirst create an index with a mapping with a numeric field with ignore_malformed = true:\r\n```\r\nPUT nan_test\r\n{\r\n \"mappings\": {\r\n \"test_type\": {\r\n \"properties\": {\r\n \"number_one\": {\r\n \"type\": \"scaled_float\",\r\n \"scaling_factor\": 1,\r\n \"ignore_malformed\": true\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n{\"acknowledged\":true,\"shards_acknowledged\":true}\r\n```\r\n\r\nThen insert a malformed number, works as expected:\r\n```\r\nPUT nan_test/test_type/1?pretty\r\n{\r\n \"number_one\": \"not a number\"\r\n}\r\n\r\n{\r\n \"_index\" : \"nan_test\",\r\n \"_type\" : \"test_type\",\r\n \"_id\" : \"1\",\r\n \"_version\" : 1,\r\n \"result\" : \"created\",\r\n \"_shards\" : {\r\n \"total\" : 2,\r\n \"successful\" : 2,\r\n \"failed\" : 0\r\n },\r\n \"created\" : true\r\n}\r\n```\r\nNow, let's insert a number with the \"NaN\" string value:\r\n```\r\nPUT nan_test/test_type/2?pretty\r\n{\r\n \"number_one\": \"NaN\"\r\n}\r\n\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse [number_one]\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse [number_one]\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"[scaled_float] only supports finite values, but got [NaN]\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n\r\n```\r\n\r\nShouldn't ignore_malformed = true ignore all mapper_parsing_exception? \r\nIf not, how can I ignore the exception raised by \"NaN\" values?\r\nNote that NaN has no special meaning in json, so it seems strange that it is considered a special input value.\r\n",
"comments": [
{
"body": "Related to https://github.com/elastic/elasticsearch/issues/12366",
"created_at": "2017-06-20T11:51:11Z"
},
{
"body": "Same problem with the \"Infinity\" and \"-Infinity\" values.\r\n\r\n`\"reason\" : \"[scaled_float] only supports finite values, but got [-Infinity]\"`\r\n\r\n",
"created_at": "2017-06-27T08:53:03Z"
},
{
"body": "This is a bug, ignore_malformed should mean the error is suppressed here.",
"created_at": "2017-06-30T13:55:52Z"
}
],
"number": 25289,
"title": "mapper_parsing_exception is thrown for \"NaN\" values for fields with ignore_malformed=true"
} | {
"body": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\", \"Infinity\" or \"-Infinity\" values\r\n\r\nCloses #25289",
"number": 25967,
"review_comments": [],
"title": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\""
} | {
"commits": [
{
"message": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\", \"Infinity\" or \"-Infinity\" values"
}
],
"files": [
{
"diff": "@@ -399,8 +399,12 @@ protected void parseCreateField(ParseContext context, List<IndexableField> field\n \n double doubleValue = numericValue.doubleValue();\n if (Double.isFinite(doubleValue) == false) {\n- // since we encode to a long, we have no way to carry NaNs and infinities\n- throw new IllegalArgumentException(\"[scaled_float] only supports finite values, but got [\" + doubleValue + \"]\");\n+ if (ignoreMalformed.value()) {\n+ return;\n+ } else {\n+ // since we encode to a long, we have no way to carry NaNs and infinities\n+ throw new IllegalArgumentException(\"[scaled_float] only supports finite values, but got [\" + doubleValue + \"]\");\n+ }\n }\n long scaledValue = Math.round(doubleValue * fieldType().getScalingFactor());\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,9 @@\n import org.junit.Before;\n \n import java.io.IOException;\n+import java.util.Arrays;\n import java.util.Collection;\n+import java.util.List;\n \n import static org.hamcrest.Matchers.containsString;\n \n@@ -223,37 +225,46 @@ public void testCoerce() throws Exception {\n }\n \n public void testIgnoreMalformed() throws Exception {\n+ doTestIgnoreMalformed(\"a\", \"For input string: \\\"a\\\"\");\n+\n+ List<String> values = Arrays.asList(\"NaN\", \"Infinity\", \"-Infinity\");\n+ for (String value : values) {\n+ doTestIgnoreMalformed(value, \"[scaled_float] only supports finite values, but got [\" + value + \"]\");\n+ }\n+ }\n+\n+ private void doTestIgnoreMalformed(String value, String exceptionMessageContains) throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n- .field(\"scaling_factor\", 10.0).endObject().endObject()\n- .endObject().endObject().string();\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n+ .field(\"scaling_factor\", 10.0).endObject().endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n \n assertEquals(mapping, mapper.mappingSource().toString());\n \n ThrowingRunnable runnable = () -> mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n- .field(\"field\", \"a\")\n+ .field(\"field\", value)\n .endObject()\n .bytes(),\n- XContentType.JSON));\n+ XContentType.JSON));\n MapperParsingException e = expectThrows(MapperParsingException.class, runnable);\n- assertThat(e.getCause().getMessage(), containsString(\"For input string: \\\"a\\\"\"));\n+ assertThat(e.getCause().getMessage(), containsString(exceptionMessageContains));\n \n mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n- .field(\"scaling_factor\", 10.0).field(\"ignore_malformed\", true).endObject().endObject()\n- .endObject().endObject().string();\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n+ .field(\"scaling_factor\", 10.0).field(\"ignore_malformed\", true).endObject().endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper mapper2 = parser.parse(\"type\", new CompressedXContent(mapping));\n \n ParsedDocument doc = mapper2.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n- .field(\"field\", \"a\")\n+ .field(\"field\", value)\n .endObject()\n .bytes(),\n- XContentType.JSON));\n+ XContentType.JSON));\n \n IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n assertEquals(0, fields.length);",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java",
"status": "modified"
}
]
} |
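The change above makes `scaled_float` honour `ignore_malformed` for non-finite values, which cannot be encoded as a scaled long. A minimal, self-contained sketch of that guard follows; the method and class names are illustrative, not the real mapper.

```java
// Sketch (not the real ScaledFloatFieldMapper): non-finite values cannot be
// scaled and rounded to a long, so with ignore_malformed=true they are skipped
// for the document instead of rejecting the whole indexing request.
public class ScaledFloatParseSketch {

    static Long parse(double value, double scalingFactor, boolean ignoreMalformed) {
        if (!Double.isFinite(value)) {
            if (ignoreMalformed) {
                return null; // drop the field value for this document
            }
            throw new IllegalArgumentException(
                "[scaled_float] only supports finite values, but got [" + value + "]");
        }
        return Math.round(value * scalingFactor);
    }

    public static void main(String[] args) {
        System.out.println(parse(1.25, 10.0, false));       // 13 (rounded scaled value)
        System.out.println(parse(Double.NaN, 10.0, true));  // null (ignored)
        // parse(Double.NaN, 10.0, false) would throw IllegalArgumentException
    }
}
```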
{
"body": "**Elasticsearch version**:\r\n5.4.1\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nmapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting a \"NaN\" value.\r\nI would expect that no exception is returned for a field with ignore_malformed=true\r\n\r\n**Steps to reproduce**:\r\nFirst create an index with a mapping with a numeric field with ignore_malformed = true:\r\n```\r\nPUT nan_test\r\n{\r\n \"mappings\": {\r\n \"test_type\": {\r\n \"properties\": {\r\n \"number_one\": {\r\n \"type\": \"scaled_float\",\r\n \"scaling_factor\": 1,\r\n \"ignore_malformed\": true\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n{\"acknowledged\":true,\"shards_acknowledged\":true}\r\n```\r\n\r\nThen insert a malformed number, works as expected:\r\n```\r\nPUT nan_test/test_type/1?pretty\r\n{\r\n \"number_one\": \"not a number\"\r\n}\r\n\r\n{\r\n \"_index\" : \"nan_test\",\r\n \"_type\" : \"test_type\",\r\n \"_id\" : \"1\",\r\n \"_version\" : 1,\r\n \"result\" : \"created\",\r\n \"_shards\" : {\r\n \"total\" : 2,\r\n \"successful\" : 2,\r\n \"failed\" : 0\r\n },\r\n \"created\" : true\r\n}\r\n```\r\nNow, let's insert a number with the \"NaN\" string value:\r\n```\r\nPUT nan_test/test_type/2?pretty\r\n{\r\n \"number_one\": \"NaN\"\r\n}\r\n\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse [number_one]\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse [number_one]\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"[scaled_float] only supports finite values, but got [NaN]\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n\r\n```\r\n\r\nShouldn't ignore_malformed = true ignore all mapper_parsing_exception? \r\nIf not, how can I ignore the exception raised by \"NaN\" values?\r\nNote that NaN has no special meaning in json, so it seems strange that it is considered a special input value.\r\n",
"comments": [
{
"body": "Related to https://github.com/elastic/elasticsearch/issues/12366",
"created_at": "2017-06-20T11:51:11Z"
},
{
"body": "Same problem with the \"Infinity\" and \"-Infinity\" values.\r\n\r\n`\"reason\" : \"[scaled_float] only supports finite values, but got [-Infinity]\"`\r\n\r\n",
"created_at": "2017-06-27T08:53:03Z"
},
{
"body": "This is a bug, ignore_malformed should mean the error is suppressed here.",
"created_at": "2017-06-30T13:55:52Z"
}
],
"number": 25289,
"title": "mapper_parsing_exception is thrown for \"NaN\" values for fields with ignore_malformed=true"
} | {
"body": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\", \"Infinity\" or \"-Infinity\" values\r\n\r\nCloses #25289",
"number": 25966,
"review_comments": [],
"title": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\", \"Infinity\" or \"-Infinity\" values"
} | {
"commits": [
{
"message": "Fixed bug that mapper_parsing_exception is thrown for numeric field with ignore_malformed=true when inserting \"NaN\", \"Infinity\" or \"-Infinity\" values"
}
],
"files": [
{
"diff": "@@ -399,8 +399,12 @@ protected void parseCreateField(ParseContext context, List<IndexableField> field\n \n double doubleValue = numericValue.doubleValue();\n if (Double.isFinite(doubleValue) == false) {\n- // since we encode to a long, we have no way to carry NaNs and infinities\n- throw new IllegalArgumentException(\"[scaled_float] only supports finite values, but got [\" + doubleValue + \"]\");\n+ if (ignoreMalformed.value()) {\n+ return;\n+ } else {\n+ // since we encode to a long, we have no way to carry NaNs and infinities\n+ throw new IllegalArgumentException(\"[scaled_float] only supports finite values, but got [\" + doubleValue + \"]\");\n+ }\n }\n long scaledValue = Math.round(doubleValue * fieldType().getScalingFactor());\n ",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,9 @@\n import org.junit.Before;\n \n import java.io.IOException;\n+import java.util.Arrays;\n import java.util.Collection;\n+import java.util.List;\n \n import static org.hamcrest.Matchers.containsString;\n \n@@ -223,37 +225,46 @@ public void testCoerce() throws Exception {\n }\n \n public void testIgnoreMalformed() throws Exception {\n+ doTestIgnoreMalformed(\"a\", \"For input string: \\\"a\\\"\");\n+\n+ List<String> values = Arrays.asList(\"NaN\", \"Infinity\", \"-Infinity\");\n+ for (String value : values) {\n+ doTestIgnoreMalformed(value, \"[scaled_float] only supports finite values, but got [\" + value + \"]\");\n+ }\n+ }\n+\n+ private void doTestIgnoreMalformed(String value, String exceptionMessageContains) throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n- .field(\"scaling_factor\", 10.0).endObject().endObject()\n- .endObject().endObject().string();\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n+ .field(\"scaling_factor\", 10.0).endObject().endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n \n assertEquals(mapping, mapper.mappingSource().toString());\n \n ThrowingRunnable runnable = () -> mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n- .field(\"field\", \"a\")\n+ .field(\"field\", value)\n .endObject()\n .bytes(),\n- XContentType.JSON));\n+ XContentType.JSON));\n MapperParsingException e = expectThrows(MapperParsingException.class, runnable);\n- assertThat(e.getCause().getMessage(), containsString(\"For input string: \\\"a\\\"\"));\n+ assertThat(e.getCause().getMessage(), containsString(exceptionMessageContains));\n \n mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n- .field(\"scaling_factor\", 10.0).field(\"ignore_malformed\", true).endObject().endObject()\n- .endObject().endObject().string();\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"scaled_float\")\n+ .field(\"scaling_factor\", 10.0).field(\"ignore_malformed\", true).endObject().endObject()\n+ .endObject().endObject().string();\n \n DocumentMapper mapper2 = parser.parse(\"type\", new CompressedXContent(mapping));\n \n ParsedDocument doc = mapper2.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()\n- .field(\"field\", \"a\")\n+ .field(\"field\", value)\n .endObject()\n .bytes(),\n- XContentType.JSON));\n+ XContentType.JSON));\n \n IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n assertEquals(0, fields.length);",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/ScaledFloatFieldMapperTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version**: 5.5.0 (issue first appeared while on 5.4.1)\r\n\r\n**Plugins installed**: [x-pack, repository-s3]\r\n\r\n**JVM version** (`java -version`): openjdk version \"1.8.0_131\"\r\nOpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-1~bpo8+1-b11)\r\nOpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux ip-10-127-1-159 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nTrying to get the list of available snapshots on an S3 backed repository fails with NullPointerException.\r\n\r\n```\r\ncurl elasticsearch:9200/_snapshot/long_term/_all\r\n{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\"}],\"type\":\"null_pointer_exception\",\"reason\":null},\"status\":500}\r\n```\r\n\r\nElasticsearch logs:\r\n\r\n```\r\n[2017-07-25T12:01:47,038][WARN ][r.suppressed ] path: /_snapshot/long_term/_all, params: {repository=long_term, snapshot=_all}\r\norg.elasticsearch.transport.RemoteTransportException: [SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\r\nCaused by: java.lang.NullPointerException\r\n```\r\n\r\nI use curator to take the backups and after grabbing backups successfully it fails when it tries to delete old snapshots because that's when it requires a list too:\r\n\r\n```\r\n2017-07-25 11:53:02,191 ERROR Failed to complete action: delete_snapshots. <class 'curator.exceptions.FailedExecution'>: Unable to get snapshot information from repository: long_term. Error: TransportError(500, 'null_pointer_exception', '[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]')\r\n```\r\n\r\nI have a feeling this is due to some kind of timeout. I turned on debug logging and while I couldn't find a more specific reason this fails I noticed it made ~ 2K requests to S3 until it failed and it stopped at 90 seconds. Is this a configurable timeout?\r\n\r\nIn the past getting a list of snapshots took increasingly long but it eventually responded. Now it breaks earlier than that.\r\n\r\nAlso posted on the forums: https://discuss.elastic.co/t/nullpointerexception-when-getting-list-of-snapshots-on-s3/94458",
"comments": [
{
"body": "Could you paste the full stack trace from the Elasticsearch server logs?",
"created_at": "2017-07-25T10:30:58Z"
},
{
"body": "There's no more logs for the null pointer entry. There's a ton of logs for the headers and each of the 2K requests do you want me to post those? All of those responded with 200 OK though.",
"created_at": "2017-07-25T11:09:18Z"
},
{
"body": "These should be the logs from the last request before the null pointer. I tried to sensor out any possibly sensitive info. Maybe the returned payload was what triggered the issue?\r\n\r\n```\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestAddCookies] CookieSpec selected: default\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestAuthCache] Auth cache not set in the context\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.c.p.RequestProxyAuthentication] Proxy auth state: UNCHALLENGED\r\n[2017-07-25T12:27:45,437][DEBUG][c.a.h.i.c.SdkHttpClient ] Attempt 1 to execute request\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.i.c.DefaultClientConnection] Sending request: GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Host: elastic-stack-backupsbucket-*****************.s3-eu-west-1.amazonaws.com[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"x-amz-content-sha256: *********************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Authorization: AWS4-HMAC-SHA256 Credential=****************/20170725/eu-west-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-d\r\nate;x-amz-security-token, Signature=***************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"X-Amz-Date: 20170725T092745Z[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"User-Agent: aws-sdk-java/1.10.69 Linux/3.16.0-4-amd64 OpenJDK_64-Bit_Server_VM/25.131-b11/1.8.0_131[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"X-Amz-Security-Token: **********************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"amz-sdk-invocation-id: 23f8b7a2-93bb-46f4-a492-cf692051dc43[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"amz-sdk-retry: 0/0/[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Content-Type: application/octet-stream[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"Connection: Keep-Alive[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.wire ] >> \"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> GET /long_term/snap-GRrT8CKjS7qdq42NZf3T2A.dat HTTP/1.1\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Host: elastic-stack-backupsbucket-*****************.s3-eu-west-1.amazonaws.com\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Authorization: AWS4-HMAC-SHA256 Credential=****************/20170725/eu-west-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-dat\r\ne;x-amz-security-token, Signature=***************************\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> X-Amz-Date: 20170725T092745Z\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> User-Agent: aws-sdk-java/1.10.69 Linux/3.16.0-4-amd64 OpenJDK_64-Bit_Server_VM/25.131-b11/1.8.0_131\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> X-Amz-Security-Token: **********************************\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> amz-sdk-invocation-id: 23f8b7a2-93bb-46f4-a492-cf692051dc43\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> amz-sdk-retry: 
0/0/\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Content-Type: application/octet-stream\r\n[2017-07-25T12:27:45,437][DEBUG][o.a.h.headers ] >> Connection: Keep-Alive\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"HTTP/1.1 200 OK[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"x-amz-id-2: ************************[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"x-amz-request-id: 3E117E943CA08991[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Date: Tue, 25 Jul 2017 09:27:46 GMT[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Last-Modified: Wed, 19 Jul 2017 01:07:25 GMT[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"ETag: \"8e87c087b7474433ba26057f74233e5a\"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Accept-Ranges: bytes[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Content-Type: application/octet-stream[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Content-Length: 302[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"Server: AmazonS3[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"[\\r][\\n]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << HTTP/1.1 200 OK\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << x-amz-id-2: *************************************\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << x-amz-request-id: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Date: Tue, 25 Jul 2017 09:27:46 GMT\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Last-Modified: Wed, 19 Jul 2017 01:07:25 GMT\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << ETag: \"8e87c087b7474433ba26057f74233e5a\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Accept-Ranges: bytes\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Content-Type: application/octet-stream\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Content-Length: 302\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.headers ] << Server: AmazonS3\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.h.i.c.SdkHttpClient ] Connection can be kept alive for 60000 MILLISECONDS\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] x-amzn-RequestId: not available\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.request ] Received successful response: 200, AWS Request ID: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] AWS Request ID: 3E117E943CA08991\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.wire ] << \"?[0xd7]l[0x17][0x8]snapshot[0x0][0x0][0x0][0x1]DFL[0x0]l[0x92][0xcd]J[0x3]1[0x14][0x85][0xe9][0xc2][0x85][0xe2]SH]t[0xd1]![0xc9]Lg[0xda][0xee]tP[0x17]B[0x17][0xa6][0xed]B[0x90]!4[0x19][0x9a][0xd2]fln[0xd2][0x95]++[0xe]E[0x90]y[0xdc][0xfe]l[0x1c]DQ[0xe8][0x85]lr[0xf8][0xce][0xb9][0xf7][0x90][0xf4][g'[0xfb][0x12][0x8c]x[0x86]i[0xe1][0xf6]k#[0x16][0xea]IHY[0x18]h[0xff][0xaa]mFhB[0x12][0xda]#[0x94][0xc4][0x9d]h[0xed][0xbd][0x96][0xa3][0xbb][0x7];[0xec][0xa6][0xf7]3[0x9e],[0xe5]2b[0x83][0xc7]<[0x1c][0xb2][0xab][0xcd]JY[0xd0][0x85][0xc9][0xb4]l[0x9e][0xe][0xae]?[0xdf][0xb5][0x91]z[0xa2]`[0xcb]^?2W[0xf4];- 
[0xf5][0x89][0x10][0x91]v02[0xc1]H[0x8a][0x91]=[0x8c]D[0xed]1f?[0x9e][0x1e][0x7][0xec]83[0xe]B[0x82][0xd9][0x19]&v[0xb1][0x95][0xd0][0xee][0x18]I[0xb0][0x9a][0x14][0x9b]NCl:Z[0x13]#)[0xdb][0xbd][0x81][0x13]N[0xdd][0xf2]Q[0x9a][0xde]p[0xbe][0xa9]o[0xd6]eN/[0xd4]e#[0x18][0x9f]_[0xbc][0xbc][0x96][0xca][0xc8]?[0xa1]e[0xaa][0xf]W81[0xcf]`*[0xac][0x84]f[0xa3][0xaa][0xc0]O[0xea][0xe7][0x86][0xdc][0xff][0x13][0xcb]\\[0xe8][0xb9][0xb7][0xf5]/[0xd8][0x1d][0xe]_[0x0][0x0][0x0][0xff][0xff][0x3][0x0][0xc0]([0x93][0xe8][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0xf4][0x1f]J[0xbe]\"\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-*************.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n[2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection released: [id: 4949][route: {s}->https://elastic-stack-backupsbucket-******************.s3-eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]\r\n\r\n```",
"created_at": "2017-07-25T11:21:47Z"
},
{
"body": "FYI we use a coordinating node and 3 data nodes. I do the snapshot requests to the coordinating node, and all the S3 requests seem to originate from the data node that's currently the master (10.127.1.203).\r\n\r\nSome more logs:\r\n\r\nI see ~ 1k of these logs 15 sec after start of the request and ~ 500 at the end:\r\n\r\n [2017-07-25T12:27:46,968][DEBUG][o.e.s.SearchService ] [SVVyQPF] freeing search context [1977515], time [225057509], lastAccessTime [224978476], keepAlive [30000]\r\n\r\nThese pop up between requests:\r\n\r\n [2017-07-25T12:27:45,374][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection released: [id: 4949][route: {s}->https://elastic-stack-backupsbucket-**********.s3-eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]\r\n [2017-07-25T12:27:45,374][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-**********.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n\r\nThese are the things logged on the master node around the time the coordinating node logged the exception (excluding the freeing search context logs mentioned above):\r\n\r\n [2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK\r\n [2017-07-25T12:27:45,509][DEBUG][o.a.h.i.c.PoolingClientConnectionManager] Connection [id: 4949][route: {s}->https://elastic-stack-backupsbucket-***********.s3-eu-west-1.amazonaws.com:443] can be kept alive for 60000 MILLISECONDS\r\n [2017-07-25T12:27:45,509][DEBUG][c.a.requestId ] x-amzn-RequestId: not available\r\n [2017-07-25T12:27:45,541][DEBUG][o.e.m.j.JvmGcMonitorService] [SVVyQPF] [gc][221514] overhead, spent [106ms] collecting in the last [1s]\r\n [2017-07-25T12:27:47,497][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [SVVyQPF] Get stats for datafeed '_all'\r\n [2017-07-25T12:27:47,652][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start\r\n [2017-07-25T12:27:48,542][DEBUG][o.e.m.j.JvmGcMonitorService] [SVVyQPF] [gc][221517] overhead, spent [111ms] collecting in the last [1s]",
"created_at": "2017-07-25T14:06:20Z"
},
{
"body": "Hmm, I don't see any smoking gun here. I am not really sure how to move forward with this without knowing where this NPE occurs or being able to reproduce this issue locally.",
"created_at": "2017-07-26T12:39:20Z"
},
{
"body": "Ok as I understand it there should have been a stack trace after the \"caused by\" line right? Maybe we can look into why that's not present and then we'll have more info for the specific issue? Also there's that `r.suppressed` thing. That would at least point the to class in which the NPE occurred but that's not available either. Can I configure something to make that visible?",
"created_at": "2017-07-26T12:44:44Z"
},
{
"body": "@eirc you said that\r\n\r\n> These should be the logs from the last request before the null pointer\r\n\r\nbut the timestamp from these logs are `12:27` whereas the NPE has a timestamp of `12:01`.\r\nCan you provide the full logs from both the master node and the coordinating node? (You can share them in private with us if you don't want to post them publicly)",
"created_at": "2017-07-26T12:58:09Z"
},
{
"body": "@eirc, @ywelsch and I discussed this more and we have a couple of other things we would like you to try:\r\n\r\n1) could you execute `curl elasticsearch:9200/_snapshot/long_term/_all?error_trace=true` and see if the stack trace shows up there\r\n\r\n2) could you execute `curl localhost:9200/_snapshot/long_term/_all` on the current master node. And if it works, but still fails when you execute it against a coordinating node we would really appreciate this output as well.\r\n\r\n",
"created_at": "2017-07-26T13:06:03Z"
},
{
"body": "Regarding the time discrepancies, the NPE happens every time I request a listing. At 12:27 I had debug logging on so that's why most of the logs are from that time. At 12:01 was probably one of the first tests. The same NPE log appeared at 12:27 and every time I did a listing request.",
"created_at": "2017-07-26T13:43:49Z"
},
{
"body": "Ok now there's some light at the end of the tunnel!\r\n\r\nFirst if I get the listing from the master node it actually works! By requesting on the coordinating (or any other) node it fails with that same behaviour. Adding error_trace=true to the request yields some useful info finally:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [{\r\n \"type\": \"remote_transport_exception\",\r\n \"reason\": \"[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]\",\r\n \"stack_trace\": \"[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\\n\\tat org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:618)\\n\\tat org.elasticsearch.ElasticsearchException.generateFailureXContent(ElasticsearchException.java:563)\\n\\tat org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:138)\\n\\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\\n\\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91)\\n\\tat org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58)\\n\\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94)\\n\\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.handleException(TransportMasterNodeAction.java:185)\\n\\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1067)\\n\\tat org.elasticsearch.transport.TcpTransport.lambda$handleException$16(TcpTransport.java:1467)\\n\\tat org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:110)\\n\\tat org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1465)\\n\\tat org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1457)\\n\\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1401)\\n\\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)\\n\\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\\n\\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)\\n\\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\\n\\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\\n\\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\\n\\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\\n\\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\\n\\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\\n\\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\\n\\tat java.lang.Thread.run(Thread.java:748)\\nCaused by: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\\nCaused by: java.lang.NullPointerException\\n\"\r\n }],\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null,\r\n \"stack_trace\": \"java.lang.NullPointerException\\n\"\r\n },\r\n \"status\": 500\r\n}\r\n```\r\n\r\nHere's the formatted stack trace for your convenience:\r\n\r\n```\r\n[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\r\n at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:618)\r\n at org.elasticsearch.ElasticsearchException.generateFailureXContent(ElasticsearchException.java:563)\r\n at org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:138)\r\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\r\n at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91)\r\n at org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58)\r\n at org.elasticsearch.action.support.TransportAction.onFailure(TransportAction.java:94)\r\n at org.elasticsearch.action.support.master.TransportMasterNodeAction.handleException(TransportMasterNodeAction.java:185)\r\n at org.elasticsearch.transport.TransportService.handleException(TransportService.java:1067)\r\n at org.elasticsearch.transport.TcpTransport.lambda(TcpTransport.java:1467)\r\n at org.elasticsearch.common.util.concurrent.EsExecutors.execute(EsExecutors.java:110)\r\n at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1465)\r\n at org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1457)\r\n at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1401)\r\n at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)\r\n at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n at io.netty.channel.DefaultChannelPipeline.channelRead(DefaultChannelPipeline.java:1334)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)\r\n at io.netty.channel.nio.AbstractNioByteChannel.read(AbstractNioByteChannel.java:134)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)\r\n at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:858)\r\n at java.lang.Thread.run(Thread.java:748)\r\nCaused by: RemoteTransportException[[SVVyQPF][10.127.1.203:9300][cluster:admin/snapshot/get]]; nested: NullPointerException;\r\nCaused by: java.lang.NullPointerException\r\n```",
"created_at": "2017-07-26T13:50:13Z"
},
{
"body": "@eirc any chance you can email me the output that you get from master? My mail is igor at elastic.co. If not, could you try getting one snapshot at a time on the coordinating node and checking what's different between the snapshots that can be retrieved and the snapshots that cause this NPE? \r\n\r\nBy the way, does the coordinating node have a different es version?",
"created_at": "2017-07-26T14:07:20Z"
},
{
"body": "Just confirmed all elasticsearches are on 5.5.0. Can I check the version of plugins someway? When I upgraded the stack I remember I had to remove and reinstall plugins to be of proper versions.\r\n\r\nI'll make a script to pull each snapshot individually and see which one(s) are breaking now.",
"created_at": "2017-07-26T14:13:11Z"
},
{
"body": "In 5.5.0 all plugins should be 5.5.0. Otherwise, elasticsearch wouldn't work. In any case, based on what we know so far, I don't think it's a plugin-related issue. Our current theory is that snapshot info serialization code breaks on one or more snapshots that you have in your repository. However, we just reviewed this code and couldn't find any obvious issues. That's why we would like to figure out which snapshot information master is trying to send to the coordinating node in order to reproduce and fix the problem. ",
"created_at": "2017-07-26T14:32:44Z"
},
{
"body": "I emailed you the full snapshot list. My script ~managed to successfully grab each snapshot individually from the coordinating node~ (where grabbing them all failed). I noticed some of the snapshots have some shard failures but that shouldn't be an issue right? Maybe it's the size of the response that's the issue here? I got ~2k snapshots and the response is 1.2 MB.",
"created_at": "2017-07-26T14:59:14Z"
},
{
"body": "No scratch that, there *is* a single snapshot which produces the NPE when I get it on it's own.",
"created_at": "2017-07-26T15:04:41Z"
},
{
"body": "\r\nHere is the JSON I can get from the master but not from other nodes:\r\n\r\n```\r\n{\r\n \"snapshots\": [\r\n {\r\n \"snapshot\": \"wsj-snapshot-20170720085856\",\r\n \"uuid\": \"yIbELYjgQN-_BgjRd4Vb0A\",\r\n \"version_id\": 5040199,\r\n \"version\": \"5.4.1\",\r\n \"indices\": [\r\n \"wsj-2017.07.19\",\r\n \"wsj-iis-2017.07.11\",\r\n \"wsj-2017.07.08\",\r\n \"wsj-2017.07.15\",\r\n \"wsj-2017.07.11\",\r\n \"wsj-2017.07.12\",\r\n \"wsj-2017.07.02\",\r\n \"wsj-2017.07.10\",\r\n \"wsj-2017.07.06\",\r\n \"wsj-2017.06.30\",\r\n \"wsj-2017.07.05\",\r\n \"wsj-2017.07.14\",\r\n \"wsj-2017.07.03\",\r\n \"wsj-2017.07.16\",\r\n \"wsj-2017.07.17\",\r\n \"wsj-2017.07.07\",\r\n \"wsj-2017.07.01\",\r\n \"wsj-2017.07.09\",\r\n \"wsj-2017.07.04\",\r\n \"wsj-2017.07.18\",\r\n \"wsj-2017.07.13\"\r\n ],\r\n \"state\": \"PARTIAL\",\r\n \"start_time\": \"2017-07-20T08:58:57.243Z\",\r\n \"start_time_in_millis\": 1500541137243,\r\n \"end_time\": \"2017-07-20T11:52:37.938Z\",\r\n \"end_time_in_millis\": 1500551557938,\r\n \"duration_in_millis\": 10420695,\r\n \"failures\": [\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 1,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.06\",\r\n \"index_uuid\": \"wsj-2017.07.06\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 0,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such 
index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.01\",\r\n \"index_uuid\": \"wsj-2017.07.01\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.05\",\r\n \"index_uuid\": \"wsj-2017.07.05\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.05\",\r\n \"index_uuid\": \"wsj-2017.07.05\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.06.30\",\r\n \"index_uuid\": \"wsj-2017.06.30\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.18\",\r\n \"index_uuid\": \"wsj-2017.07.18\",\r\n \"shard_id\": 3,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.01\",\r\n \"index_uuid\": \"wsj-2017.07.01\",\r\n \"shard_id\": 4,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 3,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-iis-2017.07.11\",\r\n \"index_uuid\": \"wsj-iis-2017.07.11\",\r\n \"shard_id\": 1,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 0,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 3,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": \"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.19\",\r\n \"index_uuid\": \"wsj-2017.07.19\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.16\",\r\n \"index_uuid\": \"wsj-2017.07.16\",\r\n \"shard_id\": 4,\r\n \"reason\": \"IndexNotFoundException[no such index]\",\r\n \"node_id\": 
\"GhOdYtKNTIOYMFVRHQHn_Q\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n },\r\n {\r\n \"index\": \"wsj-2017.07.03\",\r\n \"index_uuid\": \"wsj-2017.07.03\",\r\n \"shard_id\": 1,\r\n \"reason\": null,\r\n \"node_id\": \"eIcWA_QQTByXWDrUlUOFAA\",\r\n \"status\": \"INTERNAL_SERVER_ERROR\"\r\n }\r\n ],\r\n \"shards\": {\r\n \"total\": 27,\r\n \"failed\": 27,\r\n \"successful\": 0\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nInterestingly this snapshot includes that `wsj-iis-2017.07.11` index which we then deleted (since due to the naming it would get mixed up a lot with the `wsj-*` indices) and recreated with another name. Those `IndexNotFoundException[no such index]` messages look weird though because the mentioned indices do exist, are still on the cluster and I can query them.",
"created_at": "2017-07-26T15:13:34Z"
},
{
"body": "🏆 deleted the offending snapshot and the listing now works! 🥇 \r\n\r\nIf you need any more info on the \"bug\" itself I'll be happy to provide. Also my issue is solved but I'll leave this for you to close in case you want to follow the thread deeper.",
"created_at": "2017-07-26T15:28:41Z"
},
{
"body": "Thanks @eirc. We have found the line that is causing this NPE. We are just doing some root cause analysis at the moment to see if there is more to it. It's definitely a bug. Thanks a lot for very detailed information and your willingness to work with us on it!",
"created_at": "2017-07-26T15:33:30Z"
},
{
"body": "@eirc I spent some time trying to reproduce the issue, but no matter what I try I cannot get my snapshot into the state where it produces `null`s in shard failures. It looks like the snapshot in question took place a week ago. Do you remember, by any chance, what was going on with the cluster during this time? Do you still have log files from that day?",
"created_at": "2017-07-26T22:55:15Z"
},
{
"body": "My current best guess is that that index I mentioned we deleted (wsj-iis) was deleted during the backup process and maybe that mucked up things somehow. I can check the logs at the time for more concrete info but that has to until tomorrow when i get back to work :)",
"created_at": "2017-07-26T23:06:33Z"
},
{
"body": "Yes, deletion of indices during a snapshot is the first thing I tried. It is producing a slightly different snapshot info that doesn't contain any nulls. It seems that I am missing some key ingredient here. I am done for today as well, but it would be awesome if you could check the logs tomorrow. ",
"created_at": "2017-07-26T23:12:51Z"
},
{
"body": "The issue I see is that the code incorrectly assumes that `reason` is non-null in case where there is a `SnapshotShardFailure`. The failure is constructed from a `ShardSnapshotStatus` object that is in a \"failed\" state (one of FAILED, ABORTED, MISSING). I see two places where we can possibly have a `ShardSnapshotStatus` object with \"failed\" state and where the \"reason\" can be null: \r\n- cluster state serialization (to be precise: SnapshotsInProgress), because we don't serialize the \"reason\". This means that on master failover it can become null. This scenario can be verified by adding the assertion `reason != null` to the `SnapshotShardFailure` constructor and running the (currently disabled) test `testMasterShutdownDuringFailedSnapshot` a few times.\r\n- the call `shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED))` when aborting a snapshot. Here it's more difficult to come up with a scenario. But unless we can rule that one out, I would still consider it an issue.\r\n\r\nI think the easiest fix for now would be to assume that reason is Nullable and adapt the serialization code accordingly. WDYT @imotov ?",
"created_at": "2017-07-27T07:22:36Z"
},
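The serialization gap described in the preceding comment can be illustrated with a small, self-contained sketch. The `ShardFailureEntry` class below is hypothetical plain Java, not Elasticsearch's `SnapshotShardFailure` or its `StreamInput`/`StreamOutput` code; it only shows the general "treat `reason` as nullable and write a presence flag" direction the comment proposes, so a null reason survives a wire round trip instead of surfacing later as an NPE.

```java
// Illustrative-only sketch of null-tolerant serialization for a shard-failure-like record.
// Class and field names are hypothetical; this is not Elasticsearch's actual code.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

final class ShardFailureEntry {
    final String index;
    final int shardId;
    final String reason; // may legitimately be null (e.g. status lost on master failover)

    ShardFailureEntry(String index, int shardId, String reason) {
        this.index = index;
        this.shardId = shardId;
        this.reason = reason;
    }

    // Write a presence flag before the optional field so a null value round-trips safely.
    void writeTo(DataOutputStream out) throws IOException {
        out.writeUTF(index);
        out.writeInt(shardId);
        out.writeBoolean(reason != null);
        if (reason != null) {
            out.writeUTF(reason);
        }
    }

    static ShardFailureEntry readFrom(DataInputStream in) throws IOException {
        String index = in.readUTF();
        int shardId = in.readInt();
        String reason = in.readBoolean() ? in.readUTF() : null;
        return new ShardFailureEntry(index, shardId, reason);
    }

    public static void main(String[] args) throws IOException {
        ShardFailureEntry original = new ShardFailureEntry("wsj-2017.07.18", 1, null);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));
        ShardFailureEntry copy = ShardFailureEntry.readFrom(
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        // A null reason survives the round trip instead of triggering an NPE later.
        System.out.println("reason after round trip: " + copy.reason);
    }
}
```

Elasticsearch's own stream classes expose a similar pattern through optional-value write/read helpers, but the exact change made in the PR may differ from this sketch.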
{
"body": "Seems like that index was actually deleted a few days later after all so that was probably a red herring.\r\n\r\nOk there's a huge spike of logs during that snapshot's creation time, I'll try to aggregate what I see as most important:\r\n\r\n## Related to the snapshot itself (ie searching for \"20170720085856\")\r\n\r\n29 occurrences of\r\n\r\n```\r\n[2017-07-20T14:44:49,461][WARN ][o.e.s.SnapshotShardsService] [Ht8LDxX] [[wsj-iis-2017.07.11][2]] [long_term:wsj-snapshot-20170720085856/yIbELYjgQN-_BgjRd4Vb0A] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Failed to snapshot\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:397) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.apache.lucene.store.AlreadyClosedException: engine is closed\r\n\tat org.elasticsearch.index.shard.IndexShard.getEngine(IndexShard.java:1446) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.index.shard.IndexShard.acquireIndexCommit(IndexShard.java:836) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:380) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 
7 more\r\n```\r\n\r\nand 2 of\r\n\r\n```\r\n[2017-07-20T14:44:49,459][WARN ][o.e.s.SnapshotShardsService] [Ht8LDxX] [[wsj-2017.07.19][2]] [long_term:wsj-snapshot-20170720085856/yIbELYjgQN-_BgjRd4Vb0A] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Aborted\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext$AbortableInputStream.checkAborted(BlobStoreRepository.java:1501) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext$AbortableInputStream.read(BlobStoreRepository.java:1494) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:76) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:57) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:100) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshotFile(BlobStoreRepository.java:1428) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1370) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:967) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n```\r\n\r\n## no index state found\r\n\r\n1702 occurrences of the following from one data node:\r\n\r\n```\r\n[2017-07-20T14:51:22,103][WARN ][o.e.c.u.IndexFolderUpgrader] [/mnt/elasticsearch-data-02/nodes/0/indices/8oH-hwzeQAmJR7TZkUxf1w] no index state found - ignoring\r\n```\r\n\r\nand one similar from another host\r\n\r\n## unexpected error while indexing monitoring document\r\n\r\na spike of ~ 2.5k of those at the start of the snapshot:\r\n\r\n```\r\n[2017-07-20T14:44:48,526][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: NodeClosedException[node closed {Ht8LDxX}{Ht8LDxXGQAGEna893aC57w}{vq-tK9uISPexLeENQ82FRw}{10.127.1.207}{10.127.1.207:9300}{ml.enabled=true}]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:131) ~[?:?]\r\n\tat java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) 
~[?:1.8.0_131]\r\n\tat java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:132) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:810) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onClusterServiceClose(ClusterStateObserver.java:304) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:224) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.addTimeoutListener(ClusterService.java:385) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:166) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:103) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:802) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:781) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:876) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.node.NodeClosedException: node closed {Ht8LDxX}{Ht8LDxXGQAGEna893aC57w}{vq-tK9uISPexLeENQ82FRw}{10.127.1.207}{10.127.1.207:9300}{ml.enabled=true}\r\n\t... 15 more\r\n```\r\n\r\nand a similar number of those at the end of the snapshot:\r\n\r\n```\r\n[2017-07-20T14:51:05,408][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: TransportException[transport stopped, action: indices:data/write/bulk[s][p]]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:131) ~[?:?]\r\n\tat java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_131]\r\n\tat java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_131]\r\n\tat java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_131]\r\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_131]\r\n\tat java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:132) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:783) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:247) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.transport.TransportException: transport stopped, action: indices:data/write/bulk[s][p]\r\n\tat org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:246) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 5 more\r\n```\r\n\r\n## node not connected\r\n\r\ngot 9 of those with at least one for each node\r\n\r\n```\r\n[2017-07-20T14:44:47,437][WARN ][o.e.a.a.c.n.i.TransportNodesInfoAction] [zYawxs4] not accumulating exceptions, excluding exception from response\r\norg.elasticsearch.action.FailedNodeException: Failed node [Ht8LDxXGQAGEna893aC57w]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:246) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$200(TransportNodesAction.java:160) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:218) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:493) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:204) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:89) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:52) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) 
~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:730) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesInfo(AbstractClient.java:811) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.action.admin.cluster.RestNodesInfoAction.lambda$prepareRequest$0(RestNodesInfoAction.java:109) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:80) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:260) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:199) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:505) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:72) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63) ~[transport-netty4-5.4.1.jar:5.4.1]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) 
[netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.11.Final.jar:4.1.11.Final]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.11.Final.jar:4.1.11.Final]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.transport.NodeNotConnectedException: [Ht8LDxX][10.127.1.207:9300] Node not connected\r\n\tat org.elasticsearch.transport.TcpTransport.getConnection(TcpTransport.java:630) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TcpTransport.getConnection(TcpTransport.java:116) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.getConnection(TransportService.java:513) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:489) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\t... 58 more\r\n```\r\n\r\n## Exception when closing export bulk\r\n\r\n3 of those\r\n\r\n```\r\n[2017-07-20T14:44:48,536][WARN ][o.e.x.m.MonitoringService] [Ht8LDxX] monitoring execution failed\r\norg.elasticsearch.xpack.monitoring.exporter.ExportException: Exception when closing export bulk\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1$1.<init>(ExportBulk.java:106) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1.onFailure(ExportBulk.java:104) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:217) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:211) ~[?:?]\r\n\tat org.elasticsearch.xpack.common.IteratingActionListener.onResponse(IteratingActionListener.java:108) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$null$0(ExportBulk.java:175) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:67) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:138) ~[?:?]\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:115) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:88) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:84) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:583) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:59) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:389) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:384) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:94) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:827) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:810) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onClusterServiceClose(ClusterStateObserver.java:304) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:224) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.service.ClusterService.addTimeoutListener(ClusterService.java:385) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:166) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:111) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:103) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:802) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:781) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.transport.TransportService$Adapter.lambda$onConnectionClosed$6(TransportService.java:876) ~[elasticsearch-5.4.1.jar:5.4.1]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.1.jar:5.4.1]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks\r\n\tat org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$null$0(ExportBulk.java:167) ~[?:?]\r\n\t... 27 more\r\nCaused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents\r\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:127) ~[?:?]\r\n\t... 25 more\r\n```\r\n\r\nI'm sure there's more stuff in there but I don't know if this actually helps you because I can't make sense of it due to the log volume being that high so I may be missing the important ones. Is there something more specific I could search for that would help? Maybe I should just export all elasticsearch logs for that day and mail them to you?",
"created_at": "2017-07-27T09:21:53Z"
},
{
"body": "> I think the easiest fix for now would be to assume that reason is Nullable and adapt the serialization code accordingly. WDYT @imotov ?\r\n\r\n@ywelsch Yes fixing it like this would be easy, I just didn't want to assume anything, I wanted to have a test that creates this problem so we can fix it for sure. So, that's why I spent some time trying to reproduce it. You are right about it being null in SnapshotsInProgress, and I tried to reproduce it this way but it looks like it's a completely different path that doesn't get resolved into shard failure object, so this seems to be a dead end. So, I think ABORTED path is more promising and after thinking about for a while, I think the scenario is snapshot gets stuck on a master, gets aborted, then another master takes over, and somehow generates these nulls. The problem with this scenario is that if a snapshot is aborted, it should be deleted afterwards. So, based on the information that @eirc provided, it looks like it might be a combination of stuck snapshot combined with some sort of node failure that prevented the aborted snapshot from being cleaned up, which might be quite difficult to reproduce.\r\n\r\n> Maybe I should just export all elasticsearch logs for that day and mail them to you?\r\n\r\n@eirc that would be very helpful. Thanks!",
"created_at": "2017-07-27T13:23:17Z"
},
{
"body": "Just a quick update. @ywelsch and I discussed the issue and came up with a plan how to modify `testMasterShutdownDuringFailedSnapshot` to potentially reproduce the issue. I will try implementing it. ",
"created_at": "2017-07-27T14:05:51Z"
}
],
"number": 25878,
"title": "NullPointerException when getting list of snapshots on S3"
} | {
"body": "The failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commits ensures that the snapshot shard failure reason is preserved properly and adds workaround for reading old snapshot files where this information might not have been preserved.\r\n\r\nCloses #25878\r\n\r\n",
"number": 25941,
"review_comments": [
{
"body": "can you add a comment here saying why we set `reason` to `\"\"`?",
"created_at": "2017-07-28T07:00:41Z"
},
{
"body": "What I don't quite understand: Why will it happily parse the `reason` field if it is null? Currently we parse it using `text()`, shouldn't that fail and should we use `textOrNull()` instead?",
"created_at": "2017-07-28T07:06:55Z"
},
{
"body": "maybe extend the message to `\"aborted by snapshot deletion\"`",
"created_at": "2017-07-28T07:08:26Z"
},
{
"body": "The method name made me think that it would prevent the master from creating a snapshot at all. Maybe we can call it something along the lines of \"blockMasterFromFinalizingSnapshot\"?",
"created_at": "2017-07-28T07:13:13Z"
},
{
"body": "You are right, it should be `textOrNull()`. ",
"created_at": "2017-07-28T14:02:10Z"
}
],
"title": "Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS"
} | {
"commits": [
{
"message": "Snapshot/Restore: Ensure that shard failure reasons are correctly stored in CS\n\nThe failure reason for snapshot shard failures might not be propagated properly if the master node changes after the errors were reported by other data nodes. This commits ensures that the snapshot shard failure reason is preserved properly and adds workaround for reading old snapshot files where this information might not have been preserved.\n\nCloses #25878"
},
{
"message": "Address @ywelsch's comments"
}
],
"files": [
{
"diff": "@@ -253,6 +253,8 @@ public ShardSnapshotStatus(String nodeId, State state, String reason) {\n this.nodeId = nodeId;\n this.state = state;\n this.reason = reason;\n+ // If the state is failed we have to have a reason for this failure\n+ assert state.failed() == false || reason != null;\n }\n \n public ShardSnapshotStatus(StreamInput in) throws IOException {\n@@ -413,9 +415,17 @@ public SnapshotsInProgress(StreamInput in) throws IOException {\n int shards = in.readVInt();\n for (int j = 0; j < shards; j++) {\n ShardId shardId = ShardId.readShardId(in);\n- String nodeId = in.readOptionalString();\n- State shardState = State.fromValue(in.readByte());\n- builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));\n+ // TODO: Change this to an appropriate version when it's backported\n+ if (in.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ builder.put(shardId, new ShardSnapshotStatus(in));\n+ } else {\n+ String nodeId = in.readOptionalString();\n+ State shardState = State.fromValue(in.readByte());\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ String reason = shardState.failed() ? \"\" : null;\n+ builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState, reason));\n+ }\n }\n long repositoryStateId = UNDEFINED_REPOSITORY_STATE_ID;\n if (in.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) {\n@@ -449,8 +459,13 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeVInt(entry.shards().size());\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : entry.shards()) {\n shardEntry.key.writeTo(out);\n- out.writeOptionalString(shardEntry.value.nodeId());\n- out.writeByte(shardEntry.value.state().value());\n+ // TODO: Change this to an appropriate version when it's backported\n+ if (out.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ shardEntry.value.writeTo(out);\n+ } else {\n+ out.writeOptionalString(shardEntry.value.nodeId());\n+ out.writeByte(shardEntry.value.state().value());\n+ }\n }\n if (out.getVersion().onOrAfter(REPOSITORY_ID_INTRODUCED_VERSION)) {\n out.writeLong(entry.repositoryStateId);",
"filename": "core/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java",
"status": "modified"
},
{
"diff": "@@ -62,6 +62,7 @@ public SnapshotShardFailure(@Nullable String nodeId, ShardId shardId, String rea\n this.nodeId = nodeId;\n this.shardId = shardId;\n this.reason = reason;\n+ assert reason != null;\n status = RestStatus.INTERNAL_SERVER_ERROR;\n }\n \n@@ -192,7 +193,9 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n } else if (\"node_id\".equals(currentFieldName)) {\n snapshotShardFailure.nodeId = parser.text();\n } else if (\"reason\".equals(currentFieldName)) {\n- snapshotShardFailure.reason = parser.text();\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ snapshotShardFailure.reason = parser.textOrNull();\n } else if (\"shard_id\".equals(currentFieldName)) {\n shardId = parser.intValue();\n } else if (\"status\".equals(currentFieldName)) {\n@@ -215,6 +218,11 @@ public static SnapshotShardFailure fromXContent(XContentParser parser) throws IO\n throw new ElasticsearchParseException(\"index shard was not set\");\n }\n snapshotShardFailure.shardId = new ShardId(index, index_uuid, shardId);\n+ // Workaround for https://github.com/elastic/elasticsearch/issues/25878\n+ // Some old snapshot might still have null in shard failure reasons\n+ if (snapshotShardFailure.reason == null) {\n+ snapshotShardFailure.reason = \"\";\n+ }\n return snapshotShardFailure;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotShardFailure.java",
"status": "modified"
},
{
"diff": "@@ -1128,7 +1128,8 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n for (ObjectObjectCursor<ShardId, ShardSnapshotStatus> shardEntry : snapshotEntry.shards()) {\n ShardSnapshotStatus status = shardEntry.value;\n if (!status.state().completed()) {\n- shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED));\n+ shardsBuilder.put(shardEntry.key, new ShardSnapshotStatus(status.nodeId(), State.ABORTED,\n+ \"aborted by snapshot deletion\"));\n } else {\n shardsBuilder.put(shardEntry.key, status);\n }",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -57,12 +57,12 @@ public void testWaitingIndices() {\n // test more than one waiting shard in an index\n shards.put(new ShardId(idx1Name, idx1UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n shards.put(new ShardId(idx1Name, idx1UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx1Name, idx1UUID, 2), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test exactly one waiting shard in an index\n shards.put(new ShardId(idx2Name, idx2UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), State.WAITING));\n- shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx2Name, idx2UUID, 1), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n // test no waiting shards in an index\n- shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState()));\n+ shards.put(new ShardId(idx3Name, idx3UUID, 0), new ShardSnapshotStatus(randomAlphaOfLength(2), randomNonWaitingState(), \"\"));\n Entry entry = new Entry(snapshot, randomBoolean(), randomBoolean(), State.INIT,\n indices, System.currentTimeMillis(), randomLong(), shards.build());\n ",
"filename": "core/src/test/java/org/elasticsearch/cluster/SnapshotsInProgressTests.java",
"status": "modified"
},
{
"diff": "@@ -128,13 +128,20 @@ public SnapshotInfo waitForCompletion(String repository, String snapshotName, Ti\n return null;\n }\n \n- public static String blockMasterFromFinalizingSnapshot(final String repositoryName) {\n+ public static String blockMasterFromFinalizingSnapshotOnIndexFile(final String repositoryName) {\n final String masterName = internalCluster().getMasterName();\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n .repository(repositoryName)).setBlockOnWriteIndexFile(true);\n return masterName;\n }\n \n+ public static String blockMasterFromFinalizingSnapshotOnSnapFile(final String repositoryName) {\n+ final String masterName = internalCluster().getMasterName();\n+ ((MockRepository)internalCluster().getInstance(RepositoriesService.class, masterName)\n+ .repository(repositoryName)).setBlockAndFailOnWriteSnapFiles(true);\n+ return masterName;\n+ }\n+\n public static String blockNodeWithIndex(final String repositoryName, final String indexName) {\n for(String node : internalCluster().nodesInclude(indexName)) {\n ((MockRepository)internalCluster().getInstance(RepositoriesService.class, node).repository(repositoryName))",
"filename": "core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java",
"status": "modified"
},
{
"diff": "@@ -767,6 +767,67 @@ public void testMasterShutdownDuringSnapshot() throws Exception {\n assertEquals(0, snapshotInfo.failedShards());\n }\n \n+\n+ public void testMasterAndDataShutdownDuringSnapshot() throws Exception {\n+ logger.info(\"--> starting three master nodes and two data nodes\");\n+ internalCluster().startMasterOnlyNodes(3);\n+ internalCluster().startDataOnlyNodes(2);\n+\n+ final Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"mock\").setSettings(Settings.builder()\n+ .put(\"location\", randomRepoPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ assertAcked(prepareCreate(\"test-idx\", 0, Settings.builder().put(\"number_of_shards\", between(1, 20))\n+ .put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test-idx\", \"type1\", Integer.toString(i)).setSource(\"field1\", \"bar \" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+\n+ final int numberOfShards = getNumShards(\"test-idx\").numPrimaries;\n+ logger.info(\"number of shards: {}\", numberOfShards);\n+\n+ final String masterNode = blockMasterFromFinalizingSnapshotOnSnapFile(\"test-repo\");\n+ final String dataNode = blockNodeWithIndex(\"test-repo\", \"test-idx\");\n+\n+ dataNodeClient().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ logger.info(\"--> stopping data node {}\", dataNode);\n+ stopNode(dataNode);\n+ logger.info(\"--> stopping master node {} \", masterNode);\n+ internalCluster().stopCurrentMasterNode();\n+\n+ logger.info(\"--> wait until the snapshot is done\");\n+\n+ assertBusy(() -> {\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertTrue(snapshotInfo.state().completed());\n+ }, 1, TimeUnit.MINUTES);\n+\n+ logger.info(\"--> verify that snapshot was partial\");\n+\n+ GetSnapshotsResponse snapshotsStatusResponse = client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get();\n+ SnapshotInfo snapshotInfo = snapshotsStatusResponse.getSnapshots().get(0);\n+ assertEquals(SnapshotState.PARTIAL, snapshotInfo.state());\n+ assertNotEquals(snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ assertThat(snapshotInfo.failedShards(), greaterThan(0));\n+ for (SnapshotShardFailure failure : snapshotInfo.shardFailures()) {\n+ assertNotNull(failure.reason());\n+ }\n+ }\n+\n @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/25281\")\n public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n logger.info(\"--> starting two master nodes and two data nodes\");\n@@ -800,7 +861,7 @@ public void testMasterShutdownDuringFailedSnapshot() throws Exception {\n assertEquals(ClusterHealthStatus.RED, client().admin().cluster().prepareHealth().get().getStatus()),\n 30, TimeUnit.SECONDS);\n \n- final String masterNode = blockMasterFromFinalizingSnapshot(\"test-repo\");\n+ final String masterNode = 
blockMasterFromFinalizingSnapshotOnIndexFile(\"test-repo\");\n \n logger.info(\"--> snapshot\");\n client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")",
"filename": "core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -2252,9 +2252,9 @@ public void testDeleteOrphanSnapshot() throws Exception {\n public ClusterState execute(ClusterState currentState) {\n // Simulate orphan snapshot\n ImmutableOpenMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableOpenMap.builder();\n- shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n- shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED));\n+ shards.put(new ShardId(idxName, \"_na_\", 0), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 1), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n+ shards.put(new ShardId(idxName, \"_na_\", 2), new ShardSnapshotStatus(\"unknown-node\", State.ABORTED, \"aborted\"));\n List<Entry> entries = new ArrayList<>();\n entries.add(new Entry(new Snapshot(repositoryName,\n createSnapshotResponse.getSnapshotInfo().snapshotId()),",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java",
"status": "modified"
},
{
"diff": "@@ -66,7 +66,8 @@ private Entry randomSnapshot() {\n ShardId shardId = new ShardId(new Index(randomAlphaOfLength(10), randomAlphaOfLength(10)), randomIntBetween(0, 10));\n String nodeId = randomAlphaOfLength(10);\n State shardState = randomFrom(State.values());\n- builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(nodeId, shardState));\n+ builder.put(shardId, new SnapshotsInProgress.ShardSnapshotStatus(nodeId, shardState,\n+ shardState.failed() ? randomAlphaOfLength(10) : null));\n }\n ImmutableOpenMap<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shards = builder.build();\n return new Entry(snapshot, includeGlobalState, partial, state, indices, startTime, repositoryStateId, shards);",
"filename": "core/src/test/java/org/elasticsearch/snapshots/SnapshotsInProgressSerializationTests.java",
"status": "modified"
},
{
"diff": "@@ -104,6 +104,9 @@ public long getFailureCount() {\n * finalization of a snapshot, while permitting other IO operations to proceed unblocked. */\n private volatile boolean blockOnWriteIndexFile;\n \n+ /** Allows blocking on writing the snapshot file at the end of snapshot creation to simulate a died master node */\n+ private volatile boolean blockAndFailOnWriteSnapFile;\n+\n private volatile boolean atomicMove;\n \n private volatile boolean blocked = false;\n@@ -118,6 +121,7 @@ public MockRepository(RepositoryMetaData metadata, Environment environment,\n blockOnControlFiles = metadata.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = metadata.settings().getAsBoolean(\"block_on_data\", false);\n blockOnInitialization = metadata.settings().getAsBoolean(\"block_on_init\", false);\n+ blockAndFailOnWriteSnapFile = metadata.settings().getAsBoolean(\"block_on_snap\", false);\n randomPrefix = metadata.settings().get(\"random\", \"default\");\n waitAfterUnblock = metadata.settings().getAsLong(\"wait_after_unblock\", 0L);\n atomicMove = metadata.settings().getAsBoolean(\"atomic_move\", true);\n@@ -168,13 +172,18 @@ public synchronized void unblock() {\n blockOnControlFiles = false;\n blockOnInitialization = false;\n blockOnWriteIndexFile = false;\n+ blockAndFailOnWriteSnapFile = false;\n this.notifyAll();\n }\n \n public void blockOnDataFiles(boolean blocked) {\n blockOnDataFiles = blocked;\n }\n \n+ public void setBlockAndFailOnWriteSnapFiles(boolean blocked) {\n+ blockAndFailOnWriteSnapFile = blocked;\n+ }\n+\n public void setBlockOnWriteIndexFile(boolean blocked) {\n blockOnWriteIndexFile = blocked;\n }\n@@ -187,7 +196,8 @@ private synchronized boolean blockExecution() {\n logger.debug(\"Blocking execution\");\n boolean wasBlocked = false;\n try {\n- while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile) {\n+ while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization || blockOnWriteIndexFile ||\n+ blockAndFailOnWriteSnapFile) {\n blocked = true;\n this.wait();\n wasBlocked = true;\n@@ -266,6 +276,8 @@ private void maybeIOExceptionOrBlock(String blobName) throws IOException {\n throw new IOException(\"Random IOException\");\n } else if (blockOnControlFiles) {\n blockExecutionAndMaybeWait(blobName);\n+ } else if (blobName.startsWith(\"snap-\") && blockAndFailOnWriteSnapFile) {\n+ blockExecutionAndFail(blobName);\n }\n }\n }\n@@ -283,6 +295,15 @@ private void blockExecutionAndMaybeWait(final String blobName) {\n }\n }\n \n+ /**\n+ * Blocks an I/O operation on the blob fails and throws an exception when unblocked\n+ */\n+ private void blockExecutionAndFail(final String blobName) throws IOException {\n+ logger.info(\"blocking I/O operation for file [{}] at path [{}]\", blobName, path());\n+ blockExecution();\n+ throw new IOException(\"exception after block\");\n+ }\n+\n MockBlobContainer(BlobContainer delegate) {\n super(delegate);\n }",
"filename": "core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java",
"status": "modified"
}
]
} |
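The fix discussed in the row above boils down to tolerating a null failure reason when reading metadata written by older versions. As a minimal, hypothetical sketch (the class and method names below are illustrative and not taken from the Elasticsearch code base), the pattern looks like this:

```java
// Hypothetical sketch of the null-reason workaround from PR #25941; names are
// illustrative only and do not appear in the Elasticsearch code base.
final class ShardFailureReasons {

    private ShardFailureReasons() {}

    /**
     * Older cluster states may carry a failed shard snapshot entry whose
     * reason was never serialized. Substituting an empty string keeps the
     * invariant "failed implies non-null reason" intact for newer code.
     */
    static String normalize(boolean failed, String reason) {
        if (failed && reason == null) {
            return ""; // see https://github.com/elastic/elasticsearch/issues/25878
        }
        return reason;
    }
}
```

On the parsing side the same idea applies in the actual diff: the `reason` field is read with `textOrNull()` instead of `text()`, and a null value is replaced with `""` before the failure object is built.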
{
"body": "**Elasticsearch version**: 5.2.2\r\n\r\n**Plugins installed**:\r\n- EC2 Discovery\r\n- SearchGuard 5.2.2-11\r\n\r\n**JVM version**: openjdk 64bit 1.8.0_91\r\n\r\n**OS version**: Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nIn ~1% of large queries which span multiple types, including nested ones, we experience a 500 caused by `org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed` caused by `java.lang.ArrayIndexOutOfBoundsException`.\r\n\r\nWe are running a small ES cluster of 5 nodes (no dedicated master nodes) on AWS spread over two availability zones (2 instances in one, 3 in the other). Each instance is running ES in a docker container with external volumes.\r\n\r\nNodes are configured to use the EC2 AZ aware shard allocation. (`cluster.routing.allocation.awareness.attributes: aws_availability_zone`)\r\n\r\nFirst we though it might be a corrupted index. However, building a new index (not using the reindex API but basic bulk indexing) did not help. At the same time we switched to using custom routing with the new index. The error has been occurring more often since.\r\n\r\nSo far the error frequency clearly seems to be traffic dependent. It occurs more often during peak hours. It also tends to happen in batches. Sometimes there are no errors for hours and then suddenly a couple within a short timespan:\r\n<img width=\"874\" alt=\"screen shot 2017-03-17 at 10 49 21\" src=\"https://cloud.githubusercontent.com/assets/3025911/24038425/9587fb10-0b01-11e7-912f-ce7847fb5c14.png\">\r\n\r\nAny ideas of what's going on within these minutes?\r\n\r\n**Example trace**:\r\n\r\n```\r\n[2017-03-17T08:21:10,125][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch5-petrel-0ff7463123456e6e7] [<%INDEXNAME%>][4], node[Tci4YB12345dEhZebFINRQ], [R], s[STARTED], a[id=30tD94F3S_mq7uwk12341w]: Failed to execute [SearchRequest{searchType=QUERY_AND_FETCH, indices=[<%ALIAS INDEXNAME%>], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[<%6 DIFFERENT TYPES%>], routing='<OUR CUSTOM ROUTING ID>', preference='null', requestCache=null, scroll=null, source={\r\n \"from\" : 0,\r\n \"size\" : 50,\r\n \"query\" : {\r\n <%VERY LARGE QUERY INCLUDING CONDITIONS ON NESTED TYPES%>\r\n },\r\n \"stored_fields\" : [\r\n \"_id\",\r\n \"_type\"\r\n ]\r\n}}]\r\norg.elasticsearch.transport.RemoteTransportException: [elasticsearch5-penguin-046876f8756512345][123.12.0.2:9300][indices:data/read/search[phase/query+fetch]]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException\r\n[2017-03-17T08:21:10,127][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch5-petrel-0ff7463123456e6e7] All shards failed for phase: [query_fetch]\r\norg.elasticsearch.transport.RemoteTransportException: [elasticsearch5-penguin-046876f8756512345][123.12.0.2:9300][indices:data/read/search[phase/query+fetch]]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException\r\norg.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:208) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.access$100(AbstractSearchAsyncAction.java:52) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat 
org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:143) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:51) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat com.floragunn.searchguard.transport.SearchGuardInterceptor$RestoringTransportResponseHandler.handleException(SearchGuardInterceptor.java:153) ~[?:?]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1024) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport.lambda$handleException$17(TcpTransport.java:1411) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:109) [elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:1409) [elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport.handlerResponseError(TcpTransport.java:1401) [elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1345) [elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-client-5.2.2.jar:5.2.2]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1139) [netty-handler-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.ssl.SslHandler.decode(SslHandler.java:950) [netty-handler-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) 
[netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]\r\nCaused by: org.elasticsearch.transport.RemoteTransportException: [elasticsearch5-penguin-046876f8756512345][123.12.0.2:9300][indices:data/read/search[phase/query+fetch]]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException\r\n```\r\n",
"comments": [
{
"body": "We need the stack trace for this ArrayIndexOutOfBoundsException. It is missing here as the JVM [optimizes throwing repeatedly the same exception](http://www.oracle.com/technetwork/java/javase/relnotes-139183.html):\r\n> \"The compiler in the server VM now provides correct stack backtraces for all \"cold\" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace. To disable completely the use of preallocated exceptions, use this new flag: -XX:-OmitStackTraceInFastThrow.\"\r\n\r\nCan you grep in your logs for the first occurrence of this exception?",
"created_at": "2017-03-17T11:24:14Z"
},
{
"body": "```\r\n[2017-03-18T15:23:24,661][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch5-flamingo-0640e123456614532] All shards failed for phase: [query_fetch]\r\norg.elasticsearch.transport.RemoteTransportException: [elasticsearch5-gull-065b812345909563f][123.12.0.2:9300][indices:data/read/search[phase/query+fetch]]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 33439509\r\n\tat org.apache.lucene.util.FixedBitSet.get(FixedBitSet.java:186) ~[lucene-core-6.4.1.jar:6.4.1 72f75b2503fa0aa4f0aff76d439874feb923bb0e - jpountz - 2017-02-01 14:43:32]\r\n\tat org.elasticsearch.search.fetch.FetchPhase.findRootDocumentIfNested(FetchPhase.java:177) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:370) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:322) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:319) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceivedDecorate(SearchGuardSSLRequestHandler.java:184) ~[?:?]\r\n\tat com.floragunn.searchguard.transport.SearchGuardRequestHandler.messageReceivedDecorate(SearchGuardRequestHandler.java:171) ~[?:?]\r\n\tat com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceived(SearchGuardSSLRequestHandler.java:139) ~[?:?]\r\n\tat com.floragunn.searchguard.SearchGuardPlugin$2$1.messageReceived(SearchGuardPlugin.java:284) ~[?:?]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]\r\n```",
"created_at": "2017-03-18T16:57:17Z"
},
{
"body": "@jpountz @jimczi thoughts?",
"created_at": "2017-03-18T17:05:36Z"
},
{
"body": "The docId that is checked in the parent bitset should always be smaller than `maxDoc` though in your issue it seems that there is a discrepancy between the reader that is used for search and the one used to fetch the documents. Since this query executes the query and the fetch phase in a single pass this should never happen unless a third party plugin messes with the reader.\r\nI checked a bit what SearchGuard is doing and it seems that the searcher/reader is wrapped in this plugin in order to add a security layer on top of ES search. This is an expert feature and I suspect that the wrapper breaks some assumption in Lucene/ES. \r\nConsidering that the SearchGuard plugin adds a search layer on top of ES, I'd advise to first see if the problem is related to SearchGuard (which I suspect) and open an issue there or to try to reproduce the problem without SearchGuard.\r\nEither case I'll close this issue for now since there is no proof that ES is responsible for this. Please reopen if you find evidence that SearchGuard is not the cause of this.",
"created_at": "2017-03-20T14:50:31Z"
},
{
"body": "Sorry guys, I just restarted our cluster without the SearchGuard plugin and the problem is still occurring:\r\n\r\n```\r\n[2017-03-27T03:52:56,546][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch5-petrel-0ff7463c013123456] All shards failed for phase: [query_fetch]\r\norg.elasticsearch.transport.RemoteTransportException: [elasticsearch5-flamingo-0640eca9d37123456][123.12.0.2:9300][indices:data/read/search[phase/query+fetch]]\r\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 33423460\r\n\tat org.apache.lucene.util.FixedBitSet.get(FixedBitSet.java:186) ~[lucene-core-6.4.1.jar:6.4.1 72f75b2503fa0aa4f0aff76d439874feb923bb0e - jpountz - 2017-02-01 14:43:32]\r\n\tat org.elasticsearch.search.fetch.FetchPhase.findRootDocumentIfNested(FetchPhase.java:177) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:370) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:322) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:319) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.2.2.jar:5.2.2]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]\r\n```",
"created_at": "2017-03-27T04:28:26Z"
},
{
"body": "Fair enough @ojundt , reopen it is.\r\n\r\nAre you able to access the problematic queries ? It would help if you can share a reproducible case with the complete query that throws this exception.\r\nOtherwise can you share the mapping of your index and a complete example of your query:\r\n\r\n> \"query\" : {\r\n <%VERY LARGE QUERY INCLUDING CONDITIONS ON NESTED TYPES%>\r\n }, \r\n\r\n\r\n",
"created_at": "2017-03-27T10:52:29Z"
},
{
"body": "@jimczi, thanks for reopening. I'm able to access the failing queries but I'm afraid I'm not allowed to share them here. Mappings maybe. Is there something specific you are looking for or that I can test for you?\r\n\r\nAs far as I can see now the same query succeeds after a few minutes so it's hard to reproduce. I will have a closer look at the end of the week.",
"created_at": "2017-03-28T15:50:33Z"
},
{
"body": "> Is there something specific you are looking for or that I can test for you?\r\n\r\nIt can be anything so we'll need to narrow the scope. Can you share the query tree of the failing query, just the query types without the content:\r\n\r\n`````\r\n\"query\": {\r\n \"bool\": {\r\n \"must\": {\r\n \"term\": {\r\n // private infos\r\n }\r\n }\r\n}\r\n`````\r\n\r\nI suspect that one of your inner query returns a `docID` greater than `maxDoc`. This should never happen so the query tree should filter the list of candidates to lookup.",
"created_at": "2017-03-29T19:01:34Z"
},
{
"body": "Alright, after some confusion and further drilling down I was able to consistently reproduce the error with the following setup:\r\n\r\nQuery (in query.json):\r\n```\r\n{\r\n \"query\": {\r\n \"function_score\": {\r\n \"functions\": [\r\n {\r\n \"script_score\": {\r\n \"script\": {\r\n \"lang\": \"painless\",\r\n \"inline\": \"-1 / doc['value'].value\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nSetup:\r\n```\r\ncurl -XPUT 'localhost:9205/twitter/tweet/1?pretty' -H 'Content-Type: application/json' -d'{ \"value\" : 0.0 }'\r\ncurl -XPOST 'localhost:9205/twitter/_search?pretty' -H 'Content-Type: application/json' -d @query.json\r\n```\r\n\r\nOutput:\r\n```\r\n{\r\n \"took\" : 6,\r\n \"timed_out\" : false,\r\n \"_shards\" : {\r\n \"total\" : 5,\r\n \"successful\" : 4,\r\n \"failed\" : 1,\r\n \"failures\" : [\r\n {\r\n \"shard\" : 3,\r\n \"index\" : \"twitter\",\r\n \"node\" : \"9D1hkKznR867YomO_6WtgA\",\r\n \"reason\" : {\r\n \"type\" : \"index_out_of_bounds_exception\",\r\n \"reason\" : \"docID must be >= 0 and < maxDoc=1 (got docID=2147483647)\"\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\" : {\r\n \"total\" : 1,\r\n \"max_score\" : null,\r\n \"hits\" : [ ]\r\n }\r\n}\r\n```\r\n\r\nNow you may argue that division by zero is stupid anyway. Agree, but I'd expect a much different error message. Also, if you change `-1 / doc['value'].value` by `1 / doc['value'].value` it suddenly works:\r\n\r\n```\r\n{\r\n \"took\" : 10,\r\n \"timed_out\" : false,\r\n \"_shards\" : {\r\n \"total\" : 5,\r\n \"successful\" : 5,\r\n \"failed\" : 0\r\n },\r\n \"hits\" : {\r\n \"total\" : 1,\r\n \"max_score\" : 3.4028235E38,\r\n \"hits\" : [\r\n {\r\n \"_index\" : \"twitter\",\r\n \"_type\" : \"invoice\",\r\n \"_id\" : \"1\",\r\n \"_score\" : 3.4028235E38,\r\n \"_source\" : {\r\n \"value\" : 0.0\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nHope this gives you guys a clear test case :)",
"created_at": "2017-04-05T21:08:21Z"
}
],
"number": 23628,
"title": "Rare ArrayIndexOutOfBoundsException on all shards with large queries"
} | {
"body": "This change merges the functionality of the FiltersFunctionScoreQuery in the FunctionScoreQuery.\r\nIt also ensures that an exception is thrown when the computed score is equals to Float.NaN or Float.NEGATIVE_INFINITY.\r\nThese scores are invalid for TopDocsCollectors that relies on score comparison.\r\n\r\nFixes #15709\r\nFixes #23628",
"number": 25889,
"review_comments": [],
"title": "Merge FunctionScoreQuery and FiltersFunctionScoreQuery"
} | {
"commits": [
{
"message": "Merge FunctionScoreQuery and FiltersFunctionScoreQuery\n\nThis change merges the functionality of the FiltersFunctionScoreQuery in the FunctionScoreQuery.\nIt also ensures that an exception is thrown when the computed score is equals to Float.NaN or Float.NEGATIVE_INFINITY.\nThese scores are invalid for TopDocsCollectors that relies on score comparison.\n\nFixes #15709\nFixes #23628"
},
{
"message": "fix ut"
},
{
"message": "fix another ut"
}
],
"files": [
{
"diff": "@@ -20,8 +20,6 @@\n package org.apache.lucene.search.uhighlight;\n \n import org.apache.lucene.analysis.Analyzer;\n-import org.apache.lucene.index.FieldInfo;\n-import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.CommonTermsQuery;\n import org.apache.lucene.search.DocIdSetIterator;\n@@ -39,7 +37,6 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lucene.all.AllTermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n \n import java.io.IOException;\n@@ -213,8 +210,6 @@ private Collection<Query> rewriteCustomQuery(Query query) {\n return Collections.singletonList(new TermQuery(atq.getTerm()));\n } else if (query instanceof FunctionScoreQuery) {\n return Collections.singletonList(((FunctionScoreQuery) query).getSubQuery());\n- } else if (query instanceof FiltersFunctionScoreQuery) {\n- return Collections.singletonList(((FiltersFunctionScoreQuery) query).getSubQuery());\n } else {\n return null;\n }",
"filename": "core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.spans.SpanTermQuery;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.index.search.ESToParentBlockJoinQuery;\n \n@@ -69,8 +68,6 @@ void flatten(Query sourceQuery, IndexReader reader, Collection<Query> flatQuerie\n flatten(((FunctionScoreQuery) sourceQuery).getSubQuery(), reader, flatQueries, boost);\n } else if (sourceQuery instanceof MultiPhrasePrefixQuery) {\n flatten(sourceQuery.rewrite(reader), reader, flatQueries, boost);\n- } else if (sourceQuery instanceof FiltersFunctionScoreQuery) {\n- flatten(((FiltersFunctionScoreQuery) sourceQuery).getSubQuery(), reader, flatQueries, boost);\n } else if (sourceQuery instanceof MultiPhraseQuery) {\n MultiPhraseQuery q = ((MultiPhraseQuery) sourceQuery);\n convertMultiPhraseQuery(0, new int[q.getTermArrays().length], q, q.getTermArrays(), q.getPositions(), reader, flatQueries);",
"filename": "core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java",
"status": "modified"
},
{
"diff": "@@ -83,10 +83,6 @@ public double score(int docId, float subQueryScore) throws IOException {\n }\n double val = value * boostFactor;\n double result = modifier.apply(val);\n- if (Double.isNaN(result) || Double.isInfinite(result)) {\n- throw new ElasticsearchException(\"Result of field modification [\" + modifier.toString() + \"(\" + val\n- + \")] must be a number\");\n- }\n return result;\n }\n ",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -27,50 +27,167 @@\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.ScorerSupplier;\n import org.apache.lucene.search.Weight;\n+import org.apache.lucene.util.Bits;\n+import org.apache.lucene.search.TopDocsCollector;\n+import org.apache.lucene.search.TopScoreDocCollector;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.lucene.Lucene;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Locale;\n import java.util.Objects;\n import java.util.Set;\n \n /**\n- * A query that allows for a pluggable boost function to be applied to it.\n+ * A query that allows for a pluggable boost function / filter. If it matches\n+ * the filter, it will be boosted by the formula.\n */\n public class FunctionScoreQuery extends Query {\n-\n public static final float DEFAULT_MAX_BOOST = Float.MAX_VALUE;\n \n+ public static class FilterScoreFunction extends ScoreFunction {\n+ public final Query filter;\n+ public final ScoreFunction function;\n+\n+ public FilterScoreFunction(Query filter, ScoreFunction function) {\n+ super(function.getDefaultScoreCombiner());\n+ this.filter = filter;\n+ this.function = function;\n+ }\n+\n+ @Override\n+ public LeafScoreFunction getLeafScoreFunction(LeafReaderContext ctx) throws IOException {\n+ return function.getLeafScoreFunction(ctx);\n+ }\n+\n+ @Override\n+ public boolean needsScores() {\n+ return function.needsScores();\n+ }\n+\n+ @Override\n+ protected boolean doEquals(ScoreFunction other) {\n+ if (getClass() != other.getClass()) {\n+ return false;\n+ }\n+ FilterScoreFunction that = (FilterScoreFunction) other;\n+ return Objects.equals(this.filter, that.filter) && Objects.equals(this.function, that.function);\n+ }\n+\n+ @Override\n+ protected int doHashCode() {\n+ return Objects.hash(filter, function);\n+ }\n+\n+ @Override\n+ protected ScoreFunction rewrite(IndexReader reader) throws IOException {\n+ Query newFilter = filter.rewrite(reader);\n+ if (newFilter == filter) {\n+ return this;\n+ }\n+ return new FilterScoreFunction(newFilter, function);\n+ }\n+\n+ @Override\n+ public float getWeight() {\n+ return function.getWeight();\n+ }\n+ }\n+\n+ public enum ScoreMode implements Writeable {\n+ FIRST, AVG, MAX, SUM, MIN, MULTIPLY;\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ out.writeEnum(this);\n+ }\n+\n+ public static ScoreMode readFromStream(StreamInput in) throws IOException {\n+ return in.readEnum(ScoreMode.class);\n+ }\n+\n+ public static ScoreMode fromString(String scoreMode) {\n+ return valueOf(scoreMode.toUpperCase(Locale.ROOT));\n+ }\n+ }\n+\n final Query subQuery;\n- final ScoreFunction function;\n+ final ScoreFunction[] functions;\n+ final ScoreMode scoreMode;\n final float maxBoost;\n- final CombineFunction combineFunction;\n- private Float minScore;\n+ private final Float minScore;\n \n- public FunctionScoreQuery(Query subQuery, ScoreFunction function, Float minScore, CombineFunction combineFunction, float maxBoost) {\n- this.subQuery = subQuery;\n- this.function = function;\n- this.combineFunction = combineFunction;\n- this.minScore = minScore;\n- this.maxBoost = maxBoost;\n+ protected final 
CombineFunction combineFunction;\n+\n+ /**\n+ * Creates a FunctionScoreQuery without function.\n+ * @param subQuery The query to match.\n+ * @param minScore The minimum score to consider a document.\n+ * @param maxBoost The maximum applicable boost.\n+ */\n+ public FunctionScoreQuery(Query subQuery, Float minScore, float maxBoost) {\n+ this(subQuery, ScoreMode.FIRST, new ScoreFunction[0], CombineFunction.MULTIPLY, minScore, maxBoost);\n }\n \n+ /**\n+ * Creates a FunctionScoreQuery with a single {@link ScoreFunction}\n+ * @param subQuery The query to match.\n+ * @param function The {@link ScoreFunction} to apply.\n+ */\n public FunctionScoreQuery(Query subQuery, ScoreFunction function) {\n- this.subQuery = subQuery;\n- this.function = function;\n- this.combineFunction = function.getDefaultScoreCombiner();\n- this.maxBoost = DEFAULT_MAX_BOOST;\n+ this(subQuery, function, CombineFunction.MULTIPLY, null, DEFAULT_MAX_BOOST);\n }\n \n- public float getMaxBoost() {\n- return this.maxBoost;\n+\n+ /**\n+ * Creates a FunctionScoreQuery with a single function\n+ * @param subQuery The query to match.\n+ * @param function The {@link ScoreFunction} to apply.\n+ * @param combineFunction Defines how the query and function score should be applied.\n+ * @param minScore The minimum score to consider a document.\n+ * @param maxBoost The maximum applicable boost.\n+ */\n+ public FunctionScoreQuery(Query subQuery, ScoreFunction function, CombineFunction combineFunction, Float minScore, float maxBoost) {\n+ this(subQuery, ScoreMode.FIRST, new ScoreFunction[] { function }, combineFunction, minScore, maxBoost);\n+ }\n+\n+ /**\n+ * Creates a FunctionScoreQuery with multiple score functions\n+ * @param subQuery The query to match.\n+ * @param scoreMode Defines how the different score functions should be combined.\n+ * @param functions The {@link ScoreFunction}s to apply.\n+ * @param combineFunction Defines how the query and function score should be applied.\n+ * @param minScore The minimum score to consider a document.\n+ * @param maxBoost The maximum applicable boost.\n+ */\n+ public FunctionScoreQuery(Query subQuery, ScoreMode scoreMode, ScoreFunction[] functions,\n+ CombineFunction combineFunction, Float minScore, float maxBoost) {\n+ if (Arrays.stream(functions).anyMatch(func -> func == null)) {\n+ throw new IllegalArgumentException(\"Score function should not be null\");\n+ }\n+ this.subQuery = subQuery;\n+ this.scoreMode = scoreMode;\n+ this.functions = functions;\n+ this.maxBoost = maxBoost;\n+ this.combineFunction = combineFunction;\n+ this.minScore = minScore;\n }\n \n public Query getSubQuery() {\n return subQuery;\n }\n \n- public ScoreFunction getFunction() {\n- return function;\n+ public ScoreFunction[] getFunctions() {\n+ return functions;\n }\n \n public Float getMinScore() {\n@@ -84,10 +201,16 @@ public Query rewrite(IndexReader reader) throws IOException {\n return rewritten;\n }\n Query newQ = subQuery.rewrite(reader);\n- if (newQ == subQuery) {\n- return this;\n+ ScoreFunction[] newFunctions = new ScoreFunction[functions.length];\n+ boolean needsRewrite = (newQ != subQuery);\n+ for (int i = 0; i < functions.length; i++) {\n+ newFunctions[i] = functions[i].rewrite(reader);\n+ needsRewrite |= (newFunctions[i] != functions[i]);\n }\n- return new FunctionScoreQuery(newQ, function, minScore, combineFunction, maxBoost);\n+ if (needsRewrite) {\n+ return new FunctionScoreQuery(newQ, scoreMode, newFunctions, combineFunction, minScore, maxBoost);\n+ }\n+ return this;\n }\n \n @Override\n@@ -96,22 +219,29 
@@ public Weight createWeight(IndexSearcher searcher, boolean needsScores, float bo\n return subQuery.createWeight(searcher, needsScores, boost);\n }\n \n- boolean subQueryNeedsScores =\n- combineFunction != CombineFunction.REPLACE // if we don't replace we need the original score\n- || function == null // when the function is null, we just multiply the score, so we need it\n- || function.needsScores(); // some scripts can replace with a script that returns eg. 1/_score\n+ boolean subQueryNeedsScores = combineFunction != CombineFunction.REPLACE;\n+ Weight[] filterWeights = new Weight[functions.length];\n+ for (int i = 0; i < functions.length; ++i) {\n+ subQueryNeedsScores |= functions[i].needsScores();\n+ if (functions[i] instanceof FilterScoreFunction) {\n+ Query filter = ((FilterScoreFunction) functions[i]).filter;\n+ filterWeights[i] = searcher.createNormalizedWeight(filter, false);\n+ }\n+ }\n Weight subQueryWeight = subQuery.createWeight(searcher, subQueryNeedsScores, boost);\n- return new CustomBoostFactorWeight(this, subQueryWeight, subQueryNeedsScores);\n+ return new CustomBoostFactorWeight(this, subQueryWeight, filterWeights, subQueryNeedsScores);\n }\n \n class CustomBoostFactorWeight extends Weight {\n \n final Weight subQueryWeight;\n+ final Weight[] filterWeights;\n final boolean needsScores;\n \n- CustomBoostFactorWeight(Query parent, Weight subQueryWeight, boolean needsScores) throws IOException {\n+ CustomBoostFactorWeight(Query parent, Weight subQueryWeight, Weight[] filterWeights, boolean needsScores) throws IOException {\n super(parent);\n this.subQueryWeight = subQueryWeight;\n+ this.filterWeights = filterWeights;\n this.needsScores = needsScores;\n }\n \n@@ -125,11 +255,20 @@ private FunctionFactorScorer functionScorer(LeafReaderContext context) throws IO\n if (subQueryScorer == null) {\n return null;\n }\n- LeafScoreFunction leafFunction = null;\n- if (function != null) {\n- leafFunction = function.getLeafScoreFunction(context);\n+ final LeafScoreFunction[] leafFunctions = new LeafScoreFunction[functions.length];\n+ final Bits[] docSets = new Bits[functions.length];\n+ for (int i = 0; i < functions.length; i++) {\n+ ScoreFunction function = functions[i];\n+ leafFunctions[i] = function.getLeafScoreFunction(context);\n+ if (filterWeights[i] != null) {\n+ ScorerSupplier filterScorerSupplier = filterWeights[i].scorerSupplier(context);\n+ docSets[i] = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterScorerSupplier);\n+ } else {\n+ docSets[i] = new Bits.MatchAllBits(context.reader().maxDoc());\n+ }\n }\n- return new FunctionFactorScorer(this, subQueryScorer, leafFunction, maxBoost, combineFunction, needsScores);\n+ return new FunctionFactorScorer(this, subQueryScorer, scoreMode, functions, maxBoost, leafFunctions,\n+ docSets, combineFunction, needsScores);\n }\n \n @Override\n@@ -143,16 +282,51 @@ public Scorer scorer(LeafReaderContext context) throws IOException {\n \n @Override\n public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n- Explanation subQueryExpl = subQueryWeight.explain(context, doc);\n- if (!subQueryExpl.isMatch()) {\n- return subQueryExpl;\n+\n+ Explanation expl = subQueryWeight.explain(context, doc);\n+ if (!expl.isMatch()) {\n+ return expl;\n }\n- Explanation expl;\n- if (function != null) {\n- Explanation functionExplanation = function.getLeafScoreFunction(context).explainScore(doc, subQueryExpl);\n- expl = combineFunction.explain(subQueryExpl, functionExplanation, maxBoost);\n- } else {\n- expl = 
subQueryExpl;\n+ boolean singleFunction = functions.length == 1 && functions[0] instanceof FilterScoreFunction == false;\n+ if (functions.length > 0) {\n+ // First: Gather explanations for all functions/filters\n+ List<Explanation> functionsExplanations = new ArrayList<>();\n+ for (int i = 0; i < functions.length; ++i) {\n+ if (filterWeights[i] != null) {\n+ final Bits docSet = Lucene.asSequentialAccessBits(context.reader().maxDoc(), filterWeights[i].scorerSupplier(context));\n+ if (docSet.get(doc) == false) {\n+ continue;\n+ }\n+ }\n+ ScoreFunction function = functions[i];\n+ Explanation functionExplanation = function.getLeafScoreFunction(context).explainScore(doc, expl);\n+ if (function instanceof FilterScoreFunction) {\n+ double factor = functionExplanation.getValue();\n+ float sc = (float) factor;\n+ Query filterQuery = ((FilterScoreFunction) function).filter;\n+ Explanation filterExplanation = Explanation.match(sc, \"function score, product of:\",\n+ Explanation.match(1.0f, \"match filter: \" + filterQuery.toString()), functionExplanation);\n+ functionsExplanations.add(filterExplanation);\n+ } else {\n+ functionsExplanations.add(functionExplanation);\n+ }\n+ }\n+ final Explanation factorExplanation;\n+ if (functionsExplanations.size() == 0) {\n+ // it is a little weird to add a match although no function matches but that is the way function_score behaves right now\n+ factorExplanation = Explanation.match(1.0f, \"No function matched\", Collections.emptyList());\n+ } else if (singleFunction && functionsExplanations.size() == 1) {\n+ factorExplanation = functionsExplanations.get(0);\n+ } else {\n+ FunctionFactorScorer scorer = functionScorer(context);\n+ int actualDoc = scorer.iterator().advance(doc);\n+ assert (actualDoc == doc);\n+ double score = scorer.computeScore(doc, expl.getValue());\n+ factorExplanation = Explanation.match(\n+ (float) score,\n+ \"function score, score mode [\" + scoreMode.toString().toLowerCase(Locale.ROOT) + \"]\", functionsExplanations);\n+ }\n+ expl = combineFunction.explain(expl, factorExplanation, maxBoost);\n }\n if (minScore != null && minScore > expl.getValue()) {\n expl = Explanation.noMatch(\"Score value is too low, expected at least \" + minScore + \" but got \" + expl.getValue(), expl);\n@@ -162,40 +336,117 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n }\n \n static class FunctionFactorScorer extends FilterScorer {\n-\n- private final LeafScoreFunction function;\n- private final boolean needsScores;\n+ private final ScoreFunction[] functions;\n+ private final ScoreMode scoreMode;\n+ private final LeafScoreFunction[] leafFunctions;\n+ private final Bits[] docSets;\n private final CombineFunction scoreCombiner;\n private final float maxBoost;\n+ private final boolean needsScores;\n \n- private FunctionFactorScorer(CustomBoostFactorWeight w, Scorer scorer, LeafScoreFunction function, float maxBoost, CombineFunction scoreCombiner, boolean needsScores)\n- throws IOException {\n+ private FunctionFactorScorer(CustomBoostFactorWeight w, Scorer scorer, ScoreMode scoreMode, ScoreFunction[] functions,\n+ float maxBoost, LeafScoreFunction[] leafFunctions, Bits[] docSets, CombineFunction scoreCombiner, boolean needsScores) throws IOException {\n super(scorer, w);\n- this.function = function;\n+ this.scoreMode = scoreMode;\n+ this.functions = functions;\n+ this.leafFunctions = leafFunctions;\n+ this.docSets = docSets;\n this.scoreCombiner = scoreCombiner;\n this.maxBoost = maxBoost;\n this.needsScores = needsScores;\n }\n \n 
@Override\n public float score() throws IOException {\n+ int docId = docID();\n // Even if the weight is created with needsScores=false, it might\n // be costly to call score(), so we explicitly check if scores\n // are needed\n- float score = needsScores ? super.score() : 0f;\n- if (function == null) {\n- return score;\n- } else {\n- return scoreCombiner.combine(score,\n- function.score(docID(), score), maxBoost);\n+ float subQueryScore = needsScores ? super.score() : 0f;\n+ if (leafFunctions.length == 0) {\n+ return subQueryScore;\n }\n+ double factor = computeScore(docId, subQueryScore);\n+ float finalScore = scoreCombiner.combine(subQueryScore, factor, maxBoost);\n+ if (finalScore == Float.NEGATIVE_INFINITY || Float.isNaN(finalScore)) {\n+ /**\n+ * These scores are invalid for score based {@link TopDocsCollector}s.\n+ * See {@link TopScoreDocCollector} for details.\n+ */\n+ throw new ElasticsearchException(\"function score query returned an invalid score: \" + finalScore + \" for doc: \" + docId);\n+ }\n+ return finalScore;\n+ }\n+\n+ protected double computeScore(int docId, float subQueryScore) throws IOException {\n+ double factor = 1d;\n+ switch(scoreMode) {\n+ case FIRST:\n+ for (int i = 0; i < leafFunctions.length; i++) {\n+ if (docSets[i].get(docId)) {\n+ factor = leafFunctions[i].score(docId, subQueryScore);\n+ break;\n+ }\n+ }\n+ break;\n+ case MAX:\n+ double maxFactor = Double.NEGATIVE_INFINITY;\n+ for (int i = 0; i < leafFunctions.length; i++) {\n+ if (docSets[i].get(docId)) {\n+ maxFactor = Math.max(leafFunctions[i].score(docId, subQueryScore), maxFactor);\n+ }\n+ }\n+ if (maxFactor != Float.NEGATIVE_INFINITY) {\n+ factor = maxFactor;\n+ }\n+ break;\n+ case MIN:\n+ double minFactor = Double.POSITIVE_INFINITY;\n+ for (int i = 0; i < leafFunctions.length; i++) {\n+ if (docSets[i].get(docId)) {\n+ minFactor = Math.min(leafFunctions[i].score(docId, subQueryScore), minFactor);\n+ }\n+ }\n+ if (minFactor != Float.POSITIVE_INFINITY) {\n+ factor = minFactor;\n+ }\n+ break;\n+ case MULTIPLY:\n+ for (int i = 0; i < leafFunctions.length; i++) {\n+ if (docSets[i].get(docId)) {\n+ factor *= leafFunctions[i].score(docId, subQueryScore);\n+ }\n+ }\n+ break;\n+ default: // Avg / Total\n+ double totalFactor = 0.0f;\n+ double weightSum = 0;\n+ for (int i = 0; i < leafFunctions.length; i++) {\n+ if (docSets[i].get(docId)) {\n+ totalFactor += leafFunctions[i].score(docId, subQueryScore);\n+ weightSum += functions[i].getWeight();\n+ }\n+ }\n+ if (weightSum != 0) {\n+ factor = totalFactor;\n+ if (scoreMode == ScoreMode.AVG) {\n+ factor /= weightSum;\n+ }\n+ }\n+ break;\n+ }\n+ return factor;\n }\n }\n \n @Override\n public String toString(String field) {\n StringBuilder sb = new StringBuilder();\n- sb.append(\"function score (\").append(subQuery.toString(field)).append(\",function=\").append(function).append(')');\n+ sb.append(\"function score (\").append(subQuery.toString(field)).append(\", functions: [\");\n+ for (ScoreFunction function : functions) {\n+ sb.append(\"{\" + (function == null ? 
\"\" : function.toString()) + \"}\");\n+ }\n+ sb.append(\"])\");\n return sb.toString();\n }\n \n@@ -208,13 +459,14 @@ public boolean equals(Object o) {\n return false;\n }\n FunctionScoreQuery other = (FunctionScoreQuery) o;\n- return Objects.equals(this.subQuery, other.subQuery) && Objects.equals(this.function, other.function)\n- && Objects.equals(this.combineFunction, other.combineFunction)\n- && Objects.equals(this.minScore, other.minScore) && this.maxBoost == other.maxBoost;\n+ return Objects.equals(this.subQuery, other.subQuery) && this.maxBoost == other.maxBoost &&\n+ Objects.equals(this.combineFunction, other.combineFunction) && Objects.equals(this.minScore, other.minScore) &&\n+ Objects.equals(this.scoreMode, other.scoreMode) &&\n+ Arrays.equals(this.functions, other.functions);\n }\n \n @Override\n public int hashCode() {\n- return Objects.hash(classHash(), subQuery.hashCode(), function, combineFunction, minScore, maxBoost);\n+ return Objects.hash(classHash(), subQuery, maxBoost, combineFunction, minScore, scoreMode, Arrays.hashCode(functions));\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/FunctionScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.lucene.search.function;\n \n+import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.LeafReaderContext;\n \n import java.io.IOException;\n@@ -40,7 +41,7 @@ public CombineFunction getDefaultScoreCombiner() {\n \n /**\n * Indicates if document scores are needed by this function.\n- * \n+ *\n * @return {@code true} if scores are needed.\n */\n public abstract boolean needsScores();\n@@ -59,6 +60,10 @@ public final boolean equals(Object obj) {\n doEquals(other);\n }\n \n+ public float getWeight() {\n+ return 1.0f;\n+ }\n+\n /**\n * Indicates whether some other {@link ScoreFunction} object of the same type is \"equal to\" this one.\n */\n@@ -74,4 +79,8 @@ public final int hashCode() {\n }\n \n protected abstract int doHashCode();\n+\n+ protected ScoreFunction rewrite(IndexReader reader) throws IOException {\n+ return this;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,6 @@\n import org.apache.lucene.search.Scorer;\n import org.elasticsearch.script.ExplainableSearchScript;\n import org.elasticsearch.script.Script;\n-import org.elasticsearch.script.GeneralScriptException;\n import org.elasticsearch.script.SearchScript;\n \n import java.io.IOException;\n@@ -80,14 +79,11 @@ public LeafScoreFunction getLeafScoreFunction(LeafReaderContext ctx) throws IOEx\n leafScript.setScorer(scorer);\n return new LeafScoreFunction() {\n @Override\n- public double score(int docId, float subQueryScore) {\n+ public double score(int docId, float subQueryScore) throws IOException {\n leafScript.setDocument(docId);\n scorer.docid = docId;\n scorer.score = subQueryScore;\n double result = leafScript.runAsDouble();\n- if (Double.isNaN(result)) {\n- throw new GeneralScriptException(\"script_score returned NaN\");\n- }\n return result;\n }\n \n@@ -137,4 +133,4 @@ protected boolean doEquals(ScoreFunction other) {\n protected int doHashCode() {\n return Objects.hash(sScript);\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -75,6 +75,7 @@ public Explanation explainWeight() {\n return Explanation.match(getWeight(), \"weight\");\n }\n \n+ @Override\n public float getWeight() {\n return weight;\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -27,8 +27,6 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.FilterFunction;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.ScoreFunction;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -70,13 +68,13 @@ public class FunctionScoreQueryBuilder extends AbstractQueryBuilder<FunctionScor\n public static final ParseField MIN_SCORE_FIELD = new ParseField(\"min_score\");\n \n public static final CombineFunction DEFAULT_BOOST_MODE = CombineFunction.MULTIPLY;\n- public static final FiltersFunctionScoreQuery.ScoreMode DEFAULT_SCORE_MODE = FiltersFunctionScoreQuery.ScoreMode.MULTIPLY;\n+ public static final FunctionScoreQuery.ScoreMode DEFAULT_SCORE_MODE = FunctionScoreQuery.ScoreMode.MULTIPLY;\n \n private final QueryBuilder query;\n \n private float maxBoost = FunctionScoreQuery.DEFAULT_MAX_BOOST;\n \n- private FiltersFunctionScoreQuery.ScoreMode scoreMode = DEFAULT_SCORE_MODE;\n+ private FunctionScoreQuery.ScoreMode scoreMode = DEFAULT_SCORE_MODE;\n \n private CombineFunction boostMode;\n \n@@ -153,7 +151,7 @@ public FunctionScoreQueryBuilder(StreamInput in) throws IOException {\n maxBoost = in.readFloat();\n minScore = in.readOptionalFloat();\n boostMode = in.readOptionalWriteable(CombineFunction::readFromStream);\n- scoreMode = FiltersFunctionScoreQuery.ScoreMode.readFromStream(in);\n+ scoreMode = FunctionScoreQuery.ScoreMode.readFromStream(in);\n }\n \n @Override\n@@ -182,9 +180,9 @@ public FilterFunctionBuilder[] filterFunctionBuilders() {\n \n /**\n * Score mode defines how results of individual score functions will be aggregated.\n- * @see org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.ScoreMode\n+ * @see FunctionScoreQuery.ScoreMode\n */\n- public FunctionScoreQueryBuilder scoreMode(FiltersFunctionScoreQuery.ScoreMode scoreMode) {\n+ public FunctionScoreQueryBuilder scoreMode(FunctionScoreQuery.ScoreMode scoreMode) {\n if (scoreMode == null) {\n throw new IllegalArgumentException(\"[\" + NAME + \"] requires 'score_mode' field\");\n }\n@@ -194,9 +192,9 @@ public FunctionScoreQueryBuilder scoreMode(FiltersFunctionScoreQuery.ScoreMode s\n \n /**\n * Returns the score mode, meaning how results of individual score functions will be aggregated.\n- * @see org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.ScoreMode\n+ * @see FunctionScoreQuery.ScoreMode\n */\n- public FiltersFunctionScoreQuery.ScoreMode scoreMode() {\n+ public FunctionScoreQuery.ScoreMode scoreMode() {\n return this.scoreMode;\n }\n \n@@ -294,12 +292,16 @@ protected int doHashCode() {\n \n @Override\n protected Query doToQuery(QueryShardContext context) throws IOException {\n- FilterFunction[] filterFunctions = new FilterFunction[filterFunctionBuilders.length];\n+ ScoreFunction[] filterFunctions = new ScoreFunction[filterFunctionBuilders.length];\n int i = 0;\n for (FilterFunctionBuilder filterFunctionBuilder : filterFunctionBuilders) {\n- Query filter = filterFunctionBuilder.getFilter().toQuery(context);\n ScoreFunction scoreFunction = filterFunctionBuilder.getScoreFunction().toFunction(context);\n- filterFunctions[i++] = new FilterFunction(filter, scoreFunction);\n+ if 
(filterFunctionBuilder.getFilter().getName().equals(MatchAllQueryBuilder.NAME)) {\n+ filterFunctions[i++] = scoreFunction;\n+ } else {\n+ Query filter = filterFunctionBuilder.getFilter().toQuery(context);\n+ filterFunctions[i++] = new FunctionScoreQuery.FilterScoreFunction(filter, scoreFunction);\n+ }\n }\n \n Query query = this.query.toQuery(context);\n@@ -308,22 +310,18 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n }\n \n // handle cases where only one score function and no filter was provided. In this case we create a FunctionScoreQuery.\n- if (filterFunctions.length == 0 || filterFunctions.length == 1\n- && (this.filterFunctionBuilders[0].getFilter().getName().equals(MatchAllQueryBuilder.NAME))) {\n- ScoreFunction function = filterFunctions.length == 0 ? null : filterFunctions[0].function;\n+ if (filterFunctions.length == 0) {\n+ return new FunctionScoreQuery(query, minScore, maxBoost);\n+ } else if (filterFunctions.length == 1 && filterFunctions[0] instanceof FunctionScoreQuery.FilterScoreFunction == false) {\n CombineFunction combineFunction = this.boostMode;\n if (combineFunction == null) {\n- if (function != null) {\n- combineFunction = function.getDefaultScoreCombiner();\n- } else {\n- combineFunction = DEFAULT_BOOST_MODE;\n- }\n+ combineFunction = filterFunctions[0].getDefaultScoreCombiner();\n }\n- return new FunctionScoreQuery(query, function, minScore, combineFunction, maxBoost);\n+ return new FunctionScoreQuery(query, filterFunctions[0], combineFunction, minScore, maxBoost);\n }\n- // in all other cases we create a FiltersFunctionScoreQuery\n+ // in all other cases we create a FunctionScoreQuery with filters\n CombineFunction boostMode = this.boostMode == null ? DEFAULT_BOOST_MODE : this.boostMode;\n- return new FiltersFunctionScoreQuery(query, scoreMode, filterFunctions, maxBoost, minScore, boostMode);\n+ return new FunctionScoreQuery(query, scoreMode, filterFunctions, boostMode, minScore, maxBoost);\n }\n \n /**\n@@ -439,7 +437,7 @@ public static FunctionScoreQueryBuilder fromXContent(XContentParser parser) thro\n float boost = AbstractQueryBuilder.DEFAULT_BOOST;\n String queryName = null;\n \n- FiltersFunctionScoreQuery.ScoreMode scoreMode = FunctionScoreQueryBuilder.DEFAULT_SCORE_MODE;\n+ FunctionScoreQuery.ScoreMode scoreMode = FunctionScoreQueryBuilder.DEFAULT_SCORE_MODE;\n float maxBoost = FunctionScoreQuery.DEFAULT_MAX_BOOST;\n Float minScore = null;\n \n@@ -495,7 +493,7 @@ public static FunctionScoreQueryBuilder fromXContent(XContentParser parser) thro\n \n } else if (token.isValue()) {\n if (SCORE_MODE_FIELD.match(currentFieldName)) {\n- scoreMode = FiltersFunctionScoreQuery.ScoreMode.fromString(parser.text());\n+ scoreMode = FunctionScoreQuery.ScoreMode.fromString(parser.text());\n } else if (BOOST_MODE_FIELD.match(currentFieldName)) {\n combineFunction = CombineFunction.fromString(parser.text());\n } else if (MAX_BOOST_FIELD.match(currentFieldName)) {",
"filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -24,7 +24,6 @@\n import org.apache.lucene.search.highlight.QueryScorer;\n import org.apache.lucene.search.highlight.WeightedSpanTerm;\n import org.apache.lucene.search.highlight.WeightedSpanTermExtractor;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n \n import java.io.IOException;\n@@ -87,8 +86,6 @@ protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> t\n return;\n } else if (query instanceof FunctionScoreQuery) {\n super.extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms);\n- } else if (query instanceof FiltersFunctionScoreQuery) {\n- super.extract(((FiltersFunctionScoreQuery) query).getSubQuery(), boost, terms);\n } else {\n super.extract(query, boost, terms);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/highlight/CustomQueryScorer.java",
"status": "modified"
},
{
"diff": "@@ -21,59 +21,59 @@\n \n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.test.ESTestCase;\n \n import static org.hamcrest.Matchers.equalTo;\n \n public class ScoreModeTests extends ESTestCase {\n \n public void testValidOrdinals() {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.FIRST.ordinal(), equalTo(0));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.AVG.ordinal(), equalTo(1));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.MAX.ordinal(), equalTo(2));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.SUM.ordinal(), equalTo(3));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.MIN.ordinal(), equalTo(4));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY.ordinal(), equalTo(5));\n+ assertThat(FunctionScoreQuery.ScoreMode.FIRST.ordinal(), equalTo(0));\n+ assertThat(FunctionScoreQuery.ScoreMode.AVG.ordinal(), equalTo(1));\n+ assertThat(FunctionScoreQuery.ScoreMode.MAX.ordinal(), equalTo(2));\n+ assertThat(FunctionScoreQuery.ScoreMode.SUM.ordinal(), equalTo(3));\n+ assertThat(FunctionScoreQuery.ScoreMode.MIN.ordinal(), equalTo(4));\n+ assertThat(FunctionScoreQuery.ScoreMode.MULTIPLY.ordinal(), equalTo(5));\n }\n \n public void testWriteTo() throws Exception {\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.FIRST.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.FIRST.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(0));\n }\n }\n \n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.AVG.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.AVG.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(1));\n }\n }\n \n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.MAX.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.MAX.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(2));\n }\n }\n \n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.SUM.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.SUM.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(3));\n }\n }\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.MIN.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.MIN.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(4));\n }\n }\n \n try (BytesStreamOutput out = new BytesStreamOutput()) {\n- FiltersFunctionScoreQuery.ScoreMode.MULTIPLY.writeTo(out);\n+ FunctionScoreQuery.ScoreMode.MULTIPLY.writeTo(out);\n try (StreamInput in = out.bytes().streamInput()) {\n assertThat(in.readVInt(), equalTo(5));\n }\n@@ -84,47 +84,47 @@ public void testReadFrom() throws Exception {\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n out.writeVInt(0);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.FIRST));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.FIRST));\n }\n }\n try (BytesStreamOutput out = 
new BytesStreamOutput()) {\n out.writeVInt(1);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.AVG));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.AVG));\n }\n }\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n out.writeVInt(2);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.MAX));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.MAX));\n }\n }\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n out.writeVInt(3);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.SUM));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.SUM));\n }\n }\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n out.writeVInt(4);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.MIN));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.MIN));\n }\n }\n try (BytesStreamOutput out = new BytesStreamOutput()) {\n out.writeVInt(5);\n try (StreamInput in = out.bytes().streamInput()) {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY));\n+ assertThat(FunctionScoreQuery.ScoreMode.readFromStream(in), equalTo(FunctionScoreQuery.ScoreMode.MULTIPLY));\n }\n }\n }\n \n public void testFromString() {\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"first\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.FIRST));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"avg\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.AVG));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"max\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.MAX));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"sum\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.SUM));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"min\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.MIN));\n- assertThat(FiltersFunctionScoreQuery.ScoreMode.fromString(\"multiply\"), equalTo(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"first\"), equalTo(FunctionScoreQuery.ScoreMode.FIRST));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"avg\"), equalTo(FunctionScoreQuery.ScoreMode.AVG));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"max\"), equalTo(FunctionScoreQuery.ScoreMode.MAX));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"sum\"), equalTo(FunctionScoreQuery.ScoreMode.SUM));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"min\"), equalTo(FunctionScoreQuery.ScoreMode.MIN));\n+ assertThat(FunctionScoreQuery.ScoreMode.fromString(\"multiply\"), equalTo(FunctionScoreQuery.ScoreMode.MULTIPLY));\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/ScoreModeTests.java",
"status": "modified"
},
{
"diff": "@@ -25,10 +25,6 @@\n import org.apache.lucene.search.SearchEquivalenceTestBase;\n import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.bootstrap.BootstrapForTesting;\n-import org.elasticsearch.common.lucene.search.function.CombineFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.FilterFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.ScoreMode;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n \n public class FunctionScoreEquivalenceTests extends SearchEquivalenceTestBase {\n@@ -45,41 +41,32 @@ public void testMinScoreAllIncluded() throws Exception {\n Term term = randomTerm();\n Query query = new TermQuery(term);\n \n- FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, 0f, null, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, Float.POSITIVE_INFINITY);\n assertSameScores(query, fsq);\n \n- FiltersFunctionScoreQuery ffsq = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0], Float.POSITIVE_INFINITY,\n- 0f, CombineFunction.MULTIPLY);\n+ FunctionScoreQuery ffsq = new FunctionScoreQuery(query, 0f, Float.POSITIVE_INFINITY);\n assertSameScores(query, ffsq);\n }\n \n public void testMinScoreAllExcluded() throws Exception {\n Term term = randomTerm();\n Query query = new TermQuery(term);\n \n- FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, Float.POSITIVE_INFINITY, null, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq = new FunctionScoreQuery(query, Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY);\n assertSameScores(new MatchNoDocsQuery(), fsq);\n-\n- FiltersFunctionScoreQuery ffsq = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0], Float.POSITIVE_INFINITY,\n- Float.POSITIVE_INFINITY, CombineFunction.MULTIPLY);\n- assertSameScores(new MatchNoDocsQuery(), ffsq);\n }\n \n public void testTwoPhaseMinScore() throws Exception {\n Term term = randomTerm();\n Query query = new TermQuery(term);\n Float minScore = random().nextFloat();\n \n- FunctionScoreQuery fsq1 = new FunctionScoreQuery(query, null, minScore, null, Float.POSITIVE_INFINITY);\n- FunctionScoreQuery fsq2 = new FunctionScoreQuery(new RandomApproximationQuery(query, random()), null, minScore, null,\n- Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq1 = new FunctionScoreQuery(query, minScore, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq2 = new FunctionScoreQuery(new RandomApproximationQuery(query, random()), minScore, Float.POSITIVE_INFINITY);\n assertSameScores(fsq1, fsq2);\n \n- FiltersFunctionScoreQuery ffsq1 = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0],\n- Float.POSITIVE_INFINITY, minScore, CombineFunction.MULTIPLY);\n- FiltersFunctionScoreQuery ffsq2 = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0],\n- Float.POSITIVE_INFINITY, minScore, CombineFunction.MULTIPLY);\n+ FunctionScoreQuery ffsq1 = new FunctionScoreQuery(query, minScore, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery ffsq2 = new FunctionScoreQuery(query, minScore, Float.POSITIVE_INFINITY);\n assertSameScores(ffsq1, ffsq2);\n }\n-\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreEquivalenceTests.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.lucene.search.function.FieldValueFactorFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.WeightFactorFunction;\n import org.elasticsearch.common.unit.DistanceUnit;\n@@ -99,7 +98,7 @@ protected FunctionScoreQueryBuilder doCreateTestQueryBuilder() {\n functionScoreQueryBuilder.boostMode(randomFrom(CombineFunction.values()));\n }\n if (randomBoolean()) {\n- functionScoreQueryBuilder.scoreMode(randomFrom(FiltersFunctionScoreQuery.ScoreMode.values()));\n+ functionScoreQueryBuilder.scoreMode(randomFrom(FunctionScoreQuery.ScoreMode.values()));\n }\n if (randomBoolean()) {\n functionScoreQueryBuilder.maxBoost(randomFloat());\n@@ -254,7 +253,7 @@ private static DecayFunctionBuilder<?> createRandomDecayFunction() {\n \n @Override\n protected void doAssertLuceneQuery(FunctionScoreQueryBuilder queryBuilder, Query query, SearchContext context) throws IOException {\n- assertThat(query, either(instanceOf(FunctionScoreQuery.class)).or(instanceOf(FiltersFunctionScoreQuery.class)));\n+ assertThat(query, either(instanceOf(FunctionScoreQuery.class)).or(instanceOf(FunctionScoreQuery.class)));\n }\n \n /**\n@@ -367,7 +366,7 @@ public void testParseFunctionsArray() throws IOException {\n .filterFunctionBuilders()[2].getScoreFunction();\n assertThat(gaussDecayFunctionBuilder.getFieldName(), equalTo(\"field_name\"));\n assertThat(functionScoreQueryBuilder.boost(), equalTo(3f));\n- assertThat(functionScoreQueryBuilder.scoreMode(), equalTo(FiltersFunctionScoreQuery.ScoreMode.AVG));\n+ assertThat(functionScoreQueryBuilder.scoreMode(), equalTo(FunctionScoreQuery.ScoreMode.AVG));\n assertThat(functionScoreQueryBuilder.boostMode(), equalTo(CombineFunction.REPLACE));\n assertThat(functionScoreQueryBuilder.maxBoost(), equalTo(10f));\n \n@@ -422,7 +421,7 @@ public void testParseSingleFunction() throws IOException {\n assertThat(gaussDecayFunctionBuilder.getFieldName(), equalTo(\"field_name\"));\n assertThat(gaussDecayFunctionBuilder.getWeight(), nullValue());\n assertThat(functionScoreQueryBuilder.boost(), equalTo(3f));\n- assertThat(functionScoreQueryBuilder.scoreMode(), equalTo(FiltersFunctionScoreQuery.ScoreMode.AVG));\n+ assertThat(functionScoreQueryBuilder.scoreMode(), equalTo(FunctionScoreQuery.ScoreMode.AVG));\n assertThat(functionScoreQueryBuilder.boostMode(), equalTo(CombineFunction.REPLACE));\n assertThat(functionScoreQueryBuilder.maxBoost(), equalTo(10f));\n \n@@ -523,8 +522,9 @@ public void testWeight1fStillProducesWeightFunction() throws IOException {\n Query luceneQuery = query.toQuery(createShardContext());\n assertThat(luceneQuery, instanceOf(FunctionScoreQuery.class));\n FunctionScoreQuery functionScoreQuery = (FunctionScoreQuery) luceneQuery;\n- assertThat(functionScoreQuery.getFunction(), instanceOf(WeightFactorFunction.class));\n- WeightFactorFunction weightFactorFunction = (WeightFactorFunction) functionScoreQuery.getFunction();\n+ assertThat(functionScoreQuery.getFunctions().length, equalTo(1));\n+ assertThat(functionScoreQuery.getFunctions()[0], instanceOf(WeightFactorFunction.class));\n+ WeightFactorFunction weightFactorFunction = (WeightFactorFunction) functionScoreQuery.getFunctions()[0];\n assertThat(weightFactorFunction.getWeight(), 
equalTo(1.0f));\n assertThat(weightFactorFunction.getScoreFunction(), instanceOf(FieldValueFactorFunction.class));\n }\n@@ -573,15 +573,15 @@ public void testCustomWeightFactorQueryBuilderWithFunctionScore() throws IOExcep\n assertThat(parsedQuery, instanceOf(FunctionScoreQuery.class));\n FunctionScoreQuery functionScoreQuery = (FunctionScoreQuery) parsedQuery;\n assertThat(((TermQuery) functionScoreQuery.getSubQuery()).getTerm(), equalTo(new Term(\"name.last\", \"banon\")));\n- assertThat((double) ((WeightFactorFunction) functionScoreQuery.getFunction()).getWeight(), closeTo(1.3, 0.001));\n+ assertThat((double) (functionScoreQuery.getFunctions()[0]).getWeight(), closeTo(1.3, 0.001));\n }\n \n public void testCustomWeightFactorQueryBuilderWithFunctionScoreWithoutQueryGiven() throws IOException {\n Query parsedQuery = parseQuery(functionScoreQuery(weightFactorFunction(1.3f))).toQuery(createShardContext());\n assertThat(parsedQuery, instanceOf(FunctionScoreQuery.class));\n FunctionScoreQuery functionScoreQuery = (FunctionScoreQuery) parsedQuery;\n assertThat(functionScoreQuery.getSubQuery() instanceof MatchAllDocsQuery, equalTo(true));\n- assertThat((double) ((WeightFactorFunction) functionScoreQuery.getFunction()).getWeight(), closeTo(1.3, 0.001));\n+ assertThat((double) (functionScoreQuery.getFunctions()[0]).getWeight(), closeTo(1.3, 0.001));\n }\n \n public void testFieldValueFactorFactorArray() throws IOException {\n@@ -666,14 +666,14 @@ public void testRewrite() throws IOException {\n FunctionScoreQueryBuilder functionScoreQueryBuilder =\n new FunctionScoreQueryBuilder(new WrapperQueryBuilder(new TermQueryBuilder(\"foo\", \"bar\").toString()))\n .boostMode(CombineFunction.REPLACE)\n- .scoreMode(FiltersFunctionScoreQuery.ScoreMode.SUM)\n+ .scoreMode(FunctionScoreQuery.ScoreMode.SUM)\n .setMinScore(1)\n .maxBoost(100);\n FunctionScoreQueryBuilder rewrite = (FunctionScoreQueryBuilder) functionScoreQueryBuilder.rewrite(createShardContext());\n assertNotSame(functionScoreQueryBuilder, rewrite);\n assertEquals(rewrite.query(), new TermQueryBuilder(\"foo\", \"bar\"));\n assertEquals(rewrite.boostMode(), CombineFunction.REPLACE);\n- assertEquals(rewrite.scoreMode(), FiltersFunctionScoreQuery.ScoreMode.SUM);\n+ assertEquals(rewrite.scoreMode(), FunctionScoreQuery.ScoreMode.SUM);\n assertEquals(rewrite.getMinScore(), 1f, 0.0001);\n assertEquals(rewrite.maxBoost(), 100f, 0.0001);\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -41,13 +41,12 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.util.Accountable;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.lucene.search.function.FieldValueFactorFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.FilterFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.ScoreMode;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery.ScoreMode;\n import org.elasticsearch.common.lucene.search.function.LeafScoreFunction;\n import org.elasticsearch.common.lucene.search.function.RandomScoreFunction;\n import org.elasticsearch.common.lucene.search.function.ScoreFunction;\n@@ -69,8 +68,10 @@\n import java.util.Collection;\n import java.util.concurrent.ExecutionException;\n \n+import static org.hamcrest.CoreMatchers.containsString;\n import static org.hamcrest.core.Is.is;\n import static org.hamcrest.core.IsEqual.equalTo;\n+import static org.elasticsearch.common.lucene.search.function.FunctionScoreQuery.FilterScoreFunction;\n \n public class FunctionScoreTests extends ESTestCase {\n \n@@ -318,15 +319,15 @@ public void testExplainFunctionScoreQuery() throws IOException {\n }\n \n public Explanation getFunctionScoreExplanation(IndexSearcher searcher, ScoreFunction scoreFunction) throws IOException {\n- FunctionScoreQuery functionScoreQuery = new FunctionScoreQuery(new TermQuery(TERM), scoreFunction, 0.0f, CombineFunction.AVG, 100);\n+ FunctionScoreQuery functionScoreQuery = new FunctionScoreQuery(new TermQuery(TERM), scoreFunction, CombineFunction.AVG,0.0f, 100);\n Weight weight = searcher.createNormalizedWeight(functionScoreQuery, true);\n Explanation explanation = weight.explain(searcher.getIndexReader().leaves().get(0), 0);\n return explanation.getDetails()[1];\n }\n \n public void checkFunctionScoreExplanation(Explanation randomExplanation, String functionExpl) {\n assertThat(randomExplanation.getDescription(), equalTo(\"min of:\"));\n- assertThat(randomExplanation.getDetails()[0].getDescription(), equalTo(functionExpl));\n+ assertThat(randomExplanation.getDetails()[0].getDescription(), containsString(functionExpl));\n }\n \n public void testExplainFiltersFunctionScoreQuery() throws IOException {\n@@ -390,25 +391,24 @@ public void testExplainFiltersFunctionScoreQuery() throws IOException {\n }\n \n public Explanation getFiltersFunctionScoreExplanation(IndexSearcher searcher, ScoreFunction... 
scoreFunctions) throws IOException {\n- FiltersFunctionScoreQuery filtersFunctionScoreQuery = getFiltersFunctionScoreQuery(FiltersFunctionScoreQuery.ScoreMode.AVG,\n+ FunctionScoreQuery functionScoreQuery = getFiltersFunctionScoreQuery(FunctionScoreQuery.ScoreMode.AVG,\n CombineFunction.AVG, scoreFunctions);\n- return getExplanation(searcher, filtersFunctionScoreQuery).getDetails()[1];\n+ return getExplanation(searcher, functionScoreQuery).getDetails()[1];\n }\n \n- protected Explanation getExplanation(IndexSearcher searcher, FiltersFunctionScoreQuery filtersFunctionScoreQuery) throws IOException {\n- Weight weight = searcher.createNormalizedWeight(filtersFunctionScoreQuery, true);\n+ protected Explanation getExplanation(IndexSearcher searcher, FunctionScoreQuery functionScoreQuery) throws IOException {\n+ Weight weight = searcher.createNormalizedWeight(functionScoreQuery, true);\n return weight.explain(searcher.getIndexReader().leaves().get(0), 0);\n }\n \n- public FiltersFunctionScoreQuery getFiltersFunctionScoreQuery(FiltersFunctionScoreQuery.ScoreMode scoreMode,\n- CombineFunction combineFunction, ScoreFunction... scoreFunctions) {\n- FilterFunction[] filterFunctions = new FilterFunction[scoreFunctions.length];\n+ public FunctionScoreQuery getFiltersFunctionScoreQuery(FunctionScoreQuery.ScoreMode scoreMode,\n+ CombineFunction combineFunction, ScoreFunction... scoreFunctions) {\n+ ScoreFunction[] filterFunctions = new ScoreFunction[scoreFunctions.length];\n for (int i = 0; i < scoreFunctions.length; i++) {\n- filterFunctions[i] = new FiltersFunctionScoreQuery.FilterFunction(\n+ filterFunctions[i] = new FunctionScoreQuery.FilterScoreFunction(\n new TermQuery(TERM), scoreFunctions[i]);\n }\n- return new FiltersFunctionScoreQuery(new TermQuery(TERM), scoreMode, filterFunctions, Float.MAX_VALUE, Float.MAX_VALUE * -1,\n- combineFunction);\n+ return new FunctionScoreQuery(new TermQuery(TERM), scoreMode, filterFunctions, combineFunction,Float.MAX_VALUE * -1, Float.MAX_VALUE);\n }\n \n public void checkFiltersFunctionScoreExplanation(Explanation randomExplanation, String functionExpl, int whichFunction) {\n@@ -489,45 +489,45 @@ public void testSimpleWeightedFunction() throws IOException, ExecutionException,\n weightFunctionStubs[i] = new WeightFactorFunction(weights[i], scoreFunctionStubs[i]);\n }\n \n- FiltersFunctionScoreQuery filtersFunctionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n- FiltersFunctionScoreQuery.ScoreMode.MULTIPLY\n+ FunctionScoreQuery functionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n+ FunctionScoreQuery.ScoreMode.MULTIPLY\n , CombineFunction.REPLACE\n , weightFunctionStubs\n );\n \n- TopDocs topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n+ TopDocs topDocsWithWeights = searcher.search(functionScoreQueryWithWeights, 1);\n float scoreWithWeight = topDocsWithWeights.scoreDocs[0].score;\n double score = 1;\n for (int i = 0; i < weights.length; i++) {\n score *= weights[i] * scores[i];\n }\n assertThat(scoreWithWeight / (float) score, is(1f));\n- float explainedScore = getExplanation(searcher, filtersFunctionScoreQueryWithWeights).getValue();\n+ float explainedScore = getExplanation(searcher, functionScoreQueryWithWeights).getValue();\n assertThat(explainedScore / scoreWithWeight, is(1f));\n \n- filtersFunctionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n- FiltersFunctionScoreQuery.ScoreMode.SUM\n+ functionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n+ FunctionScoreQuery.ScoreMode.SUM\n , 
CombineFunction.REPLACE\n , weightFunctionStubs\n );\n \n- topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n+ topDocsWithWeights = searcher.search(functionScoreQueryWithWeights, 1);\n scoreWithWeight = topDocsWithWeights.scoreDocs[0].score;\n double sum = 0;\n for (int i = 0; i < weights.length; i++) {\n sum += weights[i] * scores[i];\n }\n assertThat(scoreWithWeight / (float) sum, is(1f));\n- explainedScore = getExplanation(searcher, filtersFunctionScoreQueryWithWeights).getValue();\n+ explainedScore = getExplanation(searcher, functionScoreQueryWithWeights).getValue();\n assertThat(explainedScore / scoreWithWeight, is(1f));\n \n- filtersFunctionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n- FiltersFunctionScoreQuery.ScoreMode.AVG\n+ functionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n+ FunctionScoreQuery.ScoreMode.AVG\n , CombineFunction.REPLACE\n , weightFunctionStubs\n );\n \n- topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n+ topDocsWithWeights = searcher.search(functionScoreQueryWithWeights, 1);\n scoreWithWeight = topDocsWithWeights.scoreDocs[0].score;\n double norm = 0;\n sum = 0;\n@@ -536,45 +536,45 @@ public void testSimpleWeightedFunction() throws IOException, ExecutionException,\n sum += weights[i] * scores[i];\n }\n assertThat(scoreWithWeight / (float) (sum / norm), is(1f));\n- explainedScore = getExplanation(searcher, filtersFunctionScoreQueryWithWeights).getValue();\n+ explainedScore = getExplanation(searcher, functionScoreQueryWithWeights).getValue();\n assertThat(explainedScore / scoreWithWeight, is(1f));\n \n- filtersFunctionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n- FiltersFunctionScoreQuery.ScoreMode.MIN\n+ functionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n+ FunctionScoreQuery.ScoreMode.MIN\n , CombineFunction.REPLACE\n , weightFunctionStubs\n );\n \n- topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n+ topDocsWithWeights = searcher.search(functionScoreQueryWithWeights, 1);\n scoreWithWeight = topDocsWithWeights.scoreDocs[0].score;\n double min = Double.POSITIVE_INFINITY;\n for (int i = 0; i < weights.length; i++) {\n min = Math.min(min, weights[i] * scores[i]);\n }\n assertThat(scoreWithWeight / (float) min, is(1f));\n- explainedScore = getExplanation(searcher, filtersFunctionScoreQueryWithWeights).getValue();\n+ explainedScore = getExplanation(searcher, functionScoreQueryWithWeights).getValue();\n assertThat(explainedScore / scoreWithWeight, is(1f));\n \n- filtersFunctionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n- FiltersFunctionScoreQuery.ScoreMode.MAX\n+ functionScoreQueryWithWeights = getFiltersFunctionScoreQuery(\n+ FunctionScoreQuery.ScoreMode.MAX\n , CombineFunction.REPLACE\n , weightFunctionStubs\n );\n \n- topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n+ topDocsWithWeights = searcher.search(functionScoreQueryWithWeights, 1);\n scoreWithWeight = topDocsWithWeights.scoreDocs[0].score;\n double max = Double.NEGATIVE_INFINITY;\n for (int i = 0; i < weights.length; i++) {\n max = Math.max(max, weights[i] * scores[i]);\n }\n assertThat(scoreWithWeight / (float) max, is(1f));\n- explainedScore = getExplanation(searcher, filtersFunctionScoreQueryWithWeights).getValue();\n+ explainedScore = getExplanation(searcher, functionScoreQueryWithWeights).getValue();\n assertThat(explainedScore / scoreWithWeight, is(1f));\n }\n \n public void testWeightOnlyCreatesBoostFunction() throws 
IOException {\n FunctionScoreQuery filtersFunctionScoreQueryWithWeights = new FunctionScoreQuery(new MatchAllDocsQuery(),\n- new WeightFactorFunction(2), 0.0f, CombineFunction.MULTIPLY, 100);\n+ new WeightFactorFunction(2), CombineFunction.MULTIPLY,0.0f, 100);\n TopDocs topDocsWithWeights = searcher.search(filtersFunctionScoreQueryWithWeights, 1);\n float score = topDocsWithWeights.scoreDocs[0].score;\n assertThat(score, equalTo(2.0f));\n@@ -584,26 +584,24 @@ public void testMinScoreExplain() throws IOException {\n Query query = new MatchAllDocsQuery();\n Explanation queryExpl = searcher.explain(query, 0);\n \n- FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, 0f, null, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq = new FunctionScoreQuery(query,0f, Float.POSITIVE_INFINITY);\n Explanation fsqExpl = searcher.explain(fsq, 0);\n assertTrue(fsqExpl.isMatch());\n assertEquals(queryExpl.getValue(), fsqExpl.getValue(), 0f);\n assertEquals(queryExpl.getDescription(), fsqExpl.getDescription());\n \n- fsq = new FunctionScoreQuery(query, null, 10f, null, Float.POSITIVE_INFINITY);\n+ fsq = new FunctionScoreQuery(query, 10f, Float.POSITIVE_INFINITY);\n fsqExpl = searcher.explain(fsq, 0);\n assertFalse(fsqExpl.isMatch());\n assertEquals(\"Score value is too low, expected at least 10.0 but got 1.0\", fsqExpl.getDescription());\n \n- FiltersFunctionScoreQuery ffsq = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0], Float.POSITIVE_INFINITY,\n- 0f, CombineFunction.MULTIPLY);\n+ FunctionScoreQuery ffsq = new FunctionScoreQuery(query, 0f, Float.POSITIVE_INFINITY);\n Explanation ffsqExpl = searcher.explain(ffsq, 0);\n assertTrue(ffsqExpl.isMatch());\n assertEquals(queryExpl.getValue(), ffsqExpl.getValue(), 0f);\n- assertEquals(queryExpl.getDescription(), ffsqExpl.getDetails()[0].getDescription());\n+ assertEquals(queryExpl.getDescription(), ffsqExpl.getDescription());\n \n- ffsq = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0], Float.POSITIVE_INFINITY, 10f,\n- CombineFunction.MULTIPLY);\n+ ffsq = new FunctionScoreQuery(query, 10f, Float.POSITIVE_INFINITY);\n ffsqExpl = searcher.explain(ffsq, 0);\n assertFalse(ffsqExpl.isMatch());\n assertEquals(\"Score value is too low, expected at least 10.0 but got 1.0\", ffsqExpl.getDescription());\n@@ -614,44 +612,33 @@ public void testPropagatesApproximations() throws IOException {\n IndexSearcher searcher = newSearcher(reader);\n searcher.setQueryCache(null); // otherwise we could get a cached entry that does not have approximations\n \n- FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, null, null, Float.POSITIVE_INFINITY);\n+ FunctionScoreQuery fsq = new FunctionScoreQuery(query, null, Float.POSITIVE_INFINITY);\n for (boolean needsScores : new boolean[] {true, false}) {\n Weight weight = searcher.createWeight(fsq, needsScores, 1f);\n Scorer scorer = weight.scorer(reader.leaves().get(0));\n assertNotNull(scorer.twoPhaseIterator());\n }\n-\n- FiltersFunctionScoreQuery ffsq = new FiltersFunctionScoreQuery(query, ScoreMode.SUM, new FilterFunction[0], Float.POSITIVE_INFINITY,\n- null, CombineFunction.MULTIPLY);\n- for (boolean needsScores : new boolean[] {true, false}) {\n- Weight weight = searcher.createWeight(ffsq, needsScores, 1f);\n- Scorer scorer = weight.scorer(reader.leaves().get(0));\n- assertNotNull(scorer.twoPhaseIterator());\n- }\n }\n \n public void testFunctionScoreHashCodeAndEquals() {\n Float minScore = randomBoolean() ? 
null : 1.0f;\n CombineFunction combineFunction = randomFrom(CombineFunction.values());\n float maxBoost = randomBoolean() ? Float.POSITIVE_INFINITY : randomFloat();\n- ScoreFunction function = randomBoolean() ? null : new DummyScoreFunction(combineFunction);\n+ ScoreFunction function = new DummyScoreFunction(combineFunction);\n \n- FunctionScoreQuery q = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, minScore, combineFunction, maxBoost);\n- FunctionScoreQuery q1 = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, minScore, combineFunction,\n- maxBoost);\n+ FunctionScoreQuery q = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, combineFunction, minScore, maxBoost);\n+ FunctionScoreQuery q1 = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, combineFunction, minScore, maxBoost);\n assertEquals(q, q);\n assertEquals(q.hashCode(), q.hashCode());\n assertEquals(q, q1);\n assertEquals(q.hashCode(), q1.hashCode());\n \n- FunctionScoreQuery diffQuery = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"baz\")), function, minScore, combineFunction,\n- maxBoost);\n- FunctionScoreQuery diffMinScore = new FunctionScoreQuery(q.getSubQuery(), function, minScore == null ? 1.0f : null, combineFunction,\n- maxBoost);\n- ScoreFunction otherFunciton = function == null ? new DummyScoreFunction(combineFunction) : null;\n- FunctionScoreQuery diffFunction = new FunctionScoreQuery(q.getSubQuery(), otherFunciton, minScore, combineFunction, maxBoost);\n- FunctionScoreQuery diffMaxBoost = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, minScore, combineFunction,\n- maxBoost == 1.0f ? 0.9f : 1.0f);\n+ FunctionScoreQuery diffQuery = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"baz\")), function, combineFunction, minScore, maxBoost);\n+ FunctionScoreQuery diffMinScore = new FunctionScoreQuery(q.getSubQuery(), function, combineFunction, minScore == null ? 1.0f : null, maxBoost);\n+ ScoreFunction otherFunction = new DummyScoreFunction(combineFunction);\n+ FunctionScoreQuery diffFunction = new FunctionScoreQuery(q.getSubQuery(), otherFunction, combineFunction, minScore, maxBoost);\n+ FunctionScoreQuery diffMaxBoost = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")),\n+ function, combineFunction, minScore, maxBoost == 1.0f ? 0.9f : 1.0f);\n FunctionScoreQuery[] queries = new FunctionScoreQuery[] { diffFunction,\n diffMinScore,\n diffQuery,\n@@ -673,51 +660,43 @@ public void testFunctionScoreHashCodeAndEquals() {\n }\n \n public void testFilterFunctionScoreHashCodeAndEquals() {\n- ScoreMode mode = randomFrom(ScoreMode.values());\n CombineFunction combineFunction = randomFrom(CombineFunction.values());\n ScoreFunction scoreFunction = new DummyScoreFunction(combineFunction);\n Float minScore = randomBoolean() ? null : 1.0f;\n Float maxBoost = randomBoolean() ? 
Float.POSITIVE_INFINITY : randomFloat();\n \n- FilterFunction function = new FilterFunction(new TermQuery(new Term(\"filter\", \"query\")), scoreFunction);\n- FiltersFunctionScoreQuery q = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- new FilterFunction[] { function }, maxBoost, minScore, combineFunction);\n- FiltersFunctionScoreQuery q1 = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- new FilterFunction[] { function }, maxBoost, minScore, combineFunction);\n+ FilterScoreFunction function = new FilterScoreFunction(new TermQuery(new Term(\"filter\", \"query\")), scoreFunction);\n+ FunctionScoreQuery q = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")),\n+ function, combineFunction, minScore, maxBoost);\n+ FunctionScoreQuery q1 = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function, combineFunction, minScore, maxBoost);\n assertEquals(q, q);\n assertEquals(q.hashCode(), q.hashCode());\n assertEquals(q, q1);\n assertEquals(q.hashCode(), q1.hashCode());\n- FiltersFunctionScoreQuery diffCombineFunc = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- new FilterFunction[] { function }, maxBoost, minScore,\n- combineFunction == CombineFunction.AVG ? CombineFunction.MAX : CombineFunction.AVG);\n- FiltersFunctionScoreQuery diffQuery = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"baz\")), mode,\n- new FilterFunction[] { function }, maxBoost, minScore, combineFunction);\n- FiltersFunctionScoreQuery diffMode = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")),\n- mode == ScoreMode.AVG ? ScoreMode.FIRST : ScoreMode.AVG, new FilterFunction[] { function }, maxBoost, minScore,\n- combineFunction);\n- FiltersFunctionScoreQuery diffMaxBoost = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- new FilterFunction[] { function }, maxBoost == 1.0f ? 0.9f : 1.0f, minScore, combineFunction);\n- FiltersFunctionScoreQuery diffMinScore = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- new FilterFunction[] { function }, maxBoost, minScore == null ? 0.9f : null, combineFunction);\n- FilterFunction otherFunc = new FilterFunction(new TermQuery(new Term(\"filter\", \"other_query\")), scoreFunction);\n- FiltersFunctionScoreQuery diffFunc = new FiltersFunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), mode,\n- randomBoolean() ? new FilterFunction[] { function, otherFunc } : new FilterFunction[] { otherFunc }, maxBoost, minScore,\n- combineFunction);\n-\n- FiltersFunctionScoreQuery[] queries = new FiltersFunctionScoreQuery[] {\n+ FunctionScoreQuery diffCombineFunc = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), function,\n+ combineFunction == CombineFunction.AVG ? CombineFunction.MAX : CombineFunction.AVG, minScore, maxBoost);\n+ FunctionScoreQuery diffQuery = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"baz\")),\n+ function, combineFunction, minScore, maxBoost);\n+ FunctionScoreQuery diffMaxBoost = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")),\n+ function, combineFunction, minScore, maxBoost == 1.0f ? 0.9f : 1.0f);\n+ FunctionScoreQuery diffMinScore = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")),\n+ function, combineFunction, minScore == null ? 
0.9f : null, maxBoost);\n+ FilterScoreFunction otherFunc = new FilterScoreFunction(new TermQuery(new Term(\"filter\", \"other_query\")), scoreFunction);\n+ FunctionScoreQuery diffFunc = new FunctionScoreQuery(new TermQuery(new Term(\"foo\", \"bar\")), randomFrom(ScoreMode.values()),\n+ randomBoolean() ? new ScoreFunction[] { function, otherFunc } : new ScoreFunction[] { otherFunc }, combineFunction, minScore, maxBoost);\n+\n+ FunctionScoreQuery[] queries = new FunctionScoreQuery[] {\n diffQuery,\n diffMaxBoost,\n diffMinScore,\n- diffMode,\n diffFunc,\n q,\n diffCombineFunc\n };\n final int numIters = randomIntBetween(20, 100);\n for (int i = 0; i < numIters; i++) {\n- FiltersFunctionScoreQuery left = randomFrom(queries);\n- FiltersFunctionScoreQuery right = randomFrom(queries);\n+ FunctionScoreQuery left = randomFrom(queries);\n+ FunctionScoreQuery right = randomFrom(queries);\n if (left == right) {\n assertEquals(left, right);\n assertEquals(left.hashCode(), right.hashCode());\n@@ -736,17 +715,17 @@ public void testExplanationAndScoreEqualsEvenIfNoFunctionMatches() throws IOExce\n CombineFunction.MULTIPLY, CombineFunction.REPLACE});\n \n // check for document that has no macthing function\n- FiltersFunctionScoreQuery query = new FiltersFunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")), scoreMode,\n- new FilterFunction[]{new FilterFunction(new TermQuery(new Term(\"_uid\", \"2\")), new WeightFactorFunction(10))},\n- Float.MAX_VALUE, Float.NEGATIVE_INFINITY, combineFunction);\n+ FunctionScoreQuery query = new FunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")),\n+ new FilterScoreFunction(new TermQuery(new Term(\"_uid\", \"2\")), new WeightFactorFunction(10)),\n+ combineFunction, Float.NEGATIVE_INFINITY, Float.MAX_VALUE);\n TopDocs searchResult = localSearcher.search(query, 1);\n Explanation explanation = localSearcher.explain(query, searchResult.scoreDocs[0].doc);\n assertThat(searchResult.scoreDocs[0].score, equalTo(explanation.getValue()));\n \n // check for document that has a matching function\n- query = new FiltersFunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")), scoreMode,\n- new FilterFunction[]{new FilterFunction(new TermQuery(new Term(\"_uid\", \"1\")), new WeightFactorFunction(10))},\n- Float.MAX_VALUE, Float.NEGATIVE_INFINITY, combineFunction);\n+ query = new FunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")),\n+ new FilterScoreFunction(new TermQuery(new Term(\"_uid\", \"1\")), new WeightFactorFunction(10)),\n+ combineFunction, Float.NEGATIVE_INFINITY, Float.MAX_VALUE);\n searchResult = localSearcher.search(query, 1);\n explanation = localSearcher.explain(query, searchResult.scoreDocs[0].doc);\n assertThat(searchResult.scoreDocs[0].score, equalTo(explanation.getValue()));\n@@ -779,6 +758,58 @@ protected int doHashCode() {\n }\n }\n \n+\n+ private static class ConstantScoreFunction extends ScoreFunction {\n+ final double value;\n+\n+ protected ConstantScoreFunction(double value) {\n+ super(CombineFunction.REPLACE);\n+ this.value = value;\n+ }\n+\n+ @Override\n+ public LeafScoreFunction getLeafScoreFunction(LeafReaderContext ctx) throws IOException {\n+ return new LeafScoreFunction() {\n+ @Override\n+ public double score(int docId, float subQueryScore) throws IOException {\n+ return value;\n+ }\n+\n+ @Override\n+ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOException {\n+ return null;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public boolean needsScores() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected boolean 
doEquals(ScoreFunction other) {\n+ return false;\n+ }\n+\n+ @Override\n+ protected int doHashCode() {\n+ return 0;\n+ }\n+ }\n+\n+ public void testWithInvalidScores() {\n+ IndexSearcher localSearcher = newSearcher(reader);\n+ FunctionScoreQuery query1 = new FunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")),\n+ new ConstantScoreFunction(Float.NaN), CombineFunction.REPLACE, null, Float.POSITIVE_INFINITY);\n+ ElasticsearchException exc = expectThrows(ElasticsearchException.class, () -> localSearcher.search(query1, 1));\n+ assertThat(exc.getMessage(), containsString(\"function score query returned an invalid score: \" + Float.NaN));\n+ FunctionScoreQuery query2 = new FunctionScoreQuery(new TermQuery(new Term(FIELD, \"out\")),\n+ new ConstantScoreFunction(Float.NEGATIVE_INFINITY), CombineFunction.REPLACE, null, Float.POSITIVE_INFINITY);\n+ exc = expectThrows(ElasticsearchException.class, () -> localSearcher.search(query2, 1));\n+ assertThat(exc.getMessage(), containsString(\"function score query returned an invalid score: \" + Float.NEGATIVE_INFINITY));\n+ }\n+\n private static class DummyScoreFunction extends ScoreFunction {\n protected DummyScoreFunction(CombineFunction scoreCombiner) {\n super(scoreCombiner);",
"filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreTests.java",
"status": "modified"
},
{
"diff": "@@ -28,8 +28,8 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery.ScoreMode;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery.ScoreMode;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n@@ -71,7 +71,6 @@\n import static org.hamcrest.Matchers.closeTo;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.isOneOf;\n import static org.hamcrest.Matchers.lessThan;\n \n public class DecayFunctionScoreIT extends ESIntegTestCase {\n@@ -546,7 +545,7 @@ public void testValueMissingLin() throws Exception {\n functionScoreQuery(baseQuery, new FilterFunctionBuilder[]{\n new FilterFunctionBuilder(linearDecayFunction(\"num1\", \"2013-05-28\", \"+3d\")),\n new FilterFunctionBuilder(linearDecayFunction(\"num2\", \"0.0\", \"1\"))\n- }).scoreMode(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY))));\n+ }).scoreMode(FunctionScoreQuery.ScoreMode.MULTIPLY))));\n \n SearchResponse sr = response.actionGet();\n \n@@ -598,7 +597,7 @@ public void testDateWithoutOrigin() throws Exception {\n new FilterFunctionBuilder(linearDecayFunction(\"num1\", null, \"7000d\")),\n new FilterFunctionBuilder(gaussDecayFunction(\"num1\", null, \"1d\")),\n new FilterFunctionBuilder(exponentialDecayFunction(\"num1\", null, \"7000d\"))\n- }).scoreMode(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY))));\n+ }).scoreMode(FunctionScoreQuery.ScoreMode.MULTIPLY))));\n \n SearchResponse sr = response.actionGet();\n assertNoFailures(sr);\n@@ -686,7 +685,7 @@ public void testParsingExceptionIfFieldDoesNotExist() throws Exception {\n searchSource()\n .size(numDocs)\n .query(functionScoreQuery(termQuery(\"test\", \"value\"), linearDecayFunction(\"type.geo\", lonlat, \"1000km\"))\n- .scoreMode(FiltersFunctionScoreQuery.ScoreMode.MULTIPLY))));\n+ .scoreMode(FunctionScoreQuery.ScoreMode.MULTIPLY))));\n try {\n response.actionGet();\n fail(\"Expected SearchPhaseExecutionException\");\n@@ -730,7 +729,7 @@ public void testNoQueryGiven() throws Exception {\n searchRequest().searchType(SearchType.QUERY_THEN_FETCH).source(\n searchSource().query(\n functionScoreQuery(linearDecayFunction(\"num\", 1, 0.5)).scoreMode(\n- FiltersFunctionScoreQuery.ScoreMode.MULTIPLY))));\n+ FunctionScoreQuery.ScoreMode.MULTIPLY))));\n response.actionGet();\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java",
"status": "modified"
},
{
"diff": "@@ -113,7 +113,10 @@ public void testFieldValueFactor() throws IOException {\n assertEquals(response.getHits().getAt(0).getScore(), response.getHits().getAt(2).getScore(), 0);\n \n \n- // n divided by 0 is infinity, which should provoke an exception.\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"test\", -1, \"body\", \"foo\").get();\n+ refresh();\n+\n+ // -1 divided by 0 is infinity, which should provoke an exception.\n try {\n response = client().prepareSearch(\"test\")\n .setExplain(randomBoolean())",
"filename": "core/src/test/java/org/elasticsearch/search/functionscore/FunctionScoreFieldValueIT.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,7 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder;\n@@ -160,7 +160,7 @@ public void testMinScoreFunctionScoreBasic() throws IOException {\n searchRequest().source(searchSource().query(functionScoreQuery(new MatchAllQueryBuilder(), new FilterFunctionBuilder[] {\n new FilterFunctionBuilder(scriptFunction(script)),\n new FilterFunctionBuilder(scriptFunction(script))\n- }).scoreMode(FiltersFunctionScoreQuery.ScoreMode.AVG).setMinScore(minScore)))\n+ }).scoreMode(FunctionScoreQuery.ScoreMode.AVG).setMinScore(minScore)))\n ).actionGet();\n if (score < minScore) {\n assertThat(searchResponse.getHits().getTotalHits(), is(0L));\n@@ -196,7 +196,7 @@ public void testMinScoreFunctionScoreManyDocsAndRandomMinScore() throws IOExcept\n searchRequest().source(searchSource().query(functionScoreQuery(new MatchAllQueryBuilder(), new FilterFunctionBuilder[] {\n new FilterFunctionBuilder(scriptFunction(script)),\n new FilterFunctionBuilder(scriptFunction(script))\n- }).scoreMode(FiltersFunctionScoreQuery.ScoreMode.AVG).setMinScore(minScore)).size(numDocs))).actionGet();\n+ }).scoreMode(FunctionScoreQuery.ScoreMode.AVG).setMinScore(minScore)).size(numDocs))).actionGet();\n assertMinScoreSearchResponses(numDocs, searchResponse, numMatchingDocs);\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/functionscore/FunctionScoreIT.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.action.support.WriteRequest.RefreshPolicy;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n-import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n+import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n@@ -1676,7 +1676,7 @@ private SearchResponse minMaxQuery(ScoreMode scoreMode, int minChildren, Integer\n weightFactorFunction(1)),\n new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.termQuery(\"foo\", \"four\"),\n weightFactorFunction(1))\n- }).boostMode(CombineFunction.REPLACE).scoreMode(FiltersFunctionScoreQuery.ScoreMode.SUM), scoreMode)\n+ }).boostMode(CombineFunction.REPLACE).scoreMode(FunctionScoreQuery.ScoreMode.SUM), scoreMode)\n .minMaxChildren(minChildren, maxChildren != null ? maxChildren : HasChildQueryBuilder.DEFAULT_MAX_CHILDREN);\n \n return client()",
"filename": "modules/parent-join/src/test/java/org/elasticsearch/join/query/ChildQuerySearchIT.java",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@\n import org.apache.lucene.search.spans.SpanOrQuery;\n import org.apache.lucene.search.spans.SpanTermQuery;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.RandomScoreFunction;\n import org.elasticsearch.percolator.QueryAnalyzer.Result;\n@@ -523,7 +524,8 @@ public void testFunctionScoreQuery() {\n assertThat(result.verified, is(true));\n assertTermsEqual(result.terms, new Term(\"_field\", \"_value\"));\n \n- functionScoreQuery = new FunctionScoreQuery(termQuery, new RandomScoreFunction(0, 0, null), 1f, null, 10f);\n+ functionScoreQuery = new FunctionScoreQuery(termQuery, new RandomScoreFunction(0, 0, null),\n+ CombineFunction.MULTIPLY, 1f, 10f);\n result = analyze(functionScoreQuery);\n assertThat(result.verified, is(false));\n assertTermsEqual(result.terms, new Term(\"_field\", \"_value\"));",
"filename": "modules/percolator/src/test/java/org/elasticsearch/percolator/QueryAnalyzerTests.java",
"status": "modified"
}
]
} |
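The `testWithInvalidScores` test in the `FunctionScoreTests` diff above asserts that a score function producing `NaN` or `Float.NEGATIVE_INFINITY` makes the search fail with "function score query returned an invalid score: ...". As a rough, self-contained sketch only — this is not the Elasticsearch implementation, and the class and exception type below are made up for illustration (the real check lives inside `FunctionScoreQuery`'s scorer and throws an `ElasticsearchException`) — the guard being exercised amounts to rejecting non-finite floats:

```java
// Hypothetical standalone illustration of the invalid-score guard the test exercises.
public final class ScoreGuard {

    private ScoreGuard() {}

    /** Returns the score if it is a finite float, otherwise fails like the test expects. */
    static float checkValid(float score) {
        if (Float.isFinite(score) == false) { // rejects NaN, +Infinity and -Infinity
            throw new IllegalArgumentException("function score query returned an invalid score: " + score);
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(checkValid(1.5f)); // prints 1.5
        try {
            checkValid(Float.NEGATIVE_INFINITY);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "...invalid score: -Infinity"
        }
    }
}
```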
{
"body": "**Elasticsearch version**: master\r\n**Plugins installed**: none\r\n**JVM version**: 1.8.0_121\r\n**OS version**: macOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen running a search on a remote cluster, remote documents include the cluster alias in the `_index` field so that the document can be properly addressed. This behavior does not apply to aggregations. When doing a `terms` agg on `_index` for instance, the terms do not include the cluster alias. The `top_hits` agg also doesn't enhance `_index` with the cluster alias.\r\n\r\n**Steps to reproduce**:\r\n 1. Start a node that remotes to itself\r\n\r\n ```sh\r\n ./bin/elasticsearch -E search.remote.local.seeds=localhost:9300\r\n ```\r\n\r\n 2. Index a document\r\n\r\n ```sh\r\n curl -XPOST \"http://localhost:9200/index/doc/id\" -H 'Content-Type: application/json' -d'\r\n {\r\n \"foo\": \"bar\"\r\n }'\r\n ```\r\n\r\n 3. Execute a search for the document via local and remote index names, which results in two unique documents (correct behavior) but one index term in the aggs and what appears to be two identical `top_hits`\r\n\r\n ```sh\r\n curl -XPOST \"http://localhost:9200/index,local:index/_search\" -H 'Content-Type: application/json' -d'\r\n {\r\n \"aggs\": {\r\n \"indices\": {\r\n \"terms\": {\r\n \"field\": \"_index\"\r\n },\r\n \"aggs\": {\r\n \"hits\": {\r\n \"top_hits\": {\r\n \"size\": 10\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }'\r\n ```\r\n\r\n results:\r\n ```json\r\n {\r\n \"took\": 7,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 10,\r\n \"successful\": 10,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n },\r\n {\r\n \"_index\": \"local:index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n }\r\n ]\r\n },\r\n \"aggregations\": {\r\n \"indices\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"index\",\r\n \"doc_count\": 2,\r\n \"hits\": {\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n },\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n ```\r\n",
"comments": [],
"number": 25606,
"title": "Cross cluster aggregations don't include cluster alias in _index"
} | {
"body": "Today when we aggregate on the `_index` field the cross cluster search\r\nalias is not taken into account. Neither is it respected when we search\r\non the field. This change adds support for cluster alias when the cluster\r\nalias is present on the `_index` field.\r\n\r\nCloses #25606\r\n",
"number": 25885,
"review_comments": [
{
"body": "indentation was correct before",
"created_at": "2017-07-25T14:49:28Z"
}
],
"title": "Respect cluster alias in `_index` aggs and queries"
} | {
"commits": [
{
"message": "Respect cluster alias in `_index` aggs and queries\n\nToday when we aggregate on the `_index` field the cross cluster search\nalias is not taken into account. Neither is it respected when we search\non the field. This change adds support for cluster alias when the cluster\nalias is present on the `_index` field.\n\nCloses #25606"
},
{
"message": "Merge branch 'master' into fix_index_field_data_with_ccs"
},
{
"message": "apply feedback"
},
{
"message": "fix compile error"
}
],
"files": [
{
"diff": "@@ -415,7 +415,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n \n // the context is only used for validation so it's fine to pass fake values for the shard id and the current\n // timestamp\n- final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L);\n+ final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L, null);\n \n for (Alias alias : request.aliases()) {\n if (Strings.hasLength(alias.filter())) {",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -150,7 +150,7 @@ ClusterState innerExecute(ClusterState currentState, Iterable<AliasAction> actio\n }\n // the context is only used for validation so it's fine to pass fake values for the shard id and the current\n // timestamp\n- aliasValidator.validateAliasFilter(alias, filter, indexService.newQueryShardContext(0, null, () -> 0L),\n+ aliasValidator.validateAliasFilter(alias, filter, indexService.newQueryShardContext(0, null, () -> 0L, null),\n xContentRegistry);\n }\n };",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java",
"status": "modified"
},
{
"diff": "@@ -150,7 +150,7 @@ public IndexService(\n this.mapperService = new MapperService(indexSettings, registry.build(indexSettings), xContentRegistry, similarityService,\n mapperRegistry,\n // we parse all percolator queries as they would be parsed on shard 0\n- () -> newQueryShardContext(0, null, System::currentTimeMillis));\n+ () -> newQueryShardContext(0, null, System::currentTimeMillis, null));\n this.indexFieldData = new IndexFieldDataService(indexSettings, indicesFieldDataCache, circuitBreakerService, mapperService);\n if (indexSettings.getIndexSortConfig().hasIndexSort()) {\n // we delay the actual creation of the sort order for this index because the mapping has not been merged yet.\n@@ -467,12 +467,9 @@ public IndexSettings getIndexSettings() {\n * Passing a {@code null} {@link IndexReader} will return a valid context, however it won't be able to make\n * {@link IndexReader}-specific optimizations, such as rewriting containing range queries.\n */\n- public QueryShardContext newQueryShardContext(int shardId, IndexReader indexReader, LongSupplier nowInMillis) {\n- return new QueryShardContext(\n- shardId, indexSettings, indexCache.bitsetFilterCache(), indexFieldData, mapperService(),\n- similarityService(), scriptService, xContentRegistry,\n- client, indexReader,\n- nowInMillis);\n+ public QueryShardContext newQueryShardContext(int shardId, IndexReader indexReader, LongSupplier nowInMillis, String clusterAlias) {\n+ return new QueryShardContext(shardId, indexSettings, indexCache.bitsetFilterCache(), indexFieldData::getForField, mapperService(),\n+ similarityService(), scriptService, xContentRegistry, client, indexReader, nowInMillis, clusterAlias);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/index/IndexService.java",
"status": "modified"
},
{
"diff": "@@ -73,6 +73,7 @@ private static class ConstantAtomicFieldData extends AbstractAtomicOrdinalsField\n this.value = value;\n }\n \n+\n @Override\n public long ramBytesUsed() {\n return 0;\n@@ -125,7 +126,7 @@ public void close() {\n \n }\n \n- private final AtomicOrdinalsFieldData atomicFieldData;\n+ private final ConstantAtomicFieldData atomicFieldData;\n \n private ConstantIndexFieldData(IndexSettings indexSettings, String name, String value) {\n super(indexSettings, name, null, null,\n@@ -167,4 +168,8 @@ public IndexOrdinalsFieldData localGlobalDirect(DirectoryReader indexReader) thr\n return loadGlobal(indexReader);\n }\n \n+ public String getValue() {\n+ return atomicFieldData.value;\n+ }\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/index/fielddata/plain/ConstantIndexFieldData.java",
"status": "modified"
},
{
"diff": "@@ -123,7 +123,7 @@ public boolean isSearchable() {\n */\n @Override\n public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- if (isSameIndex(value, context.index().getName())) {\n+ if (isSameIndex(value, context.getFullyQualifiedIndexName())) {\n return Queries.newMatchAllQuery();\n } else {\n return Queries.newMatchNoDocsQuery(\"Index didn't match. Index queried: \" + context.index().getName() + \" vs. \" + value);\n@@ -136,14 +136,15 @@ public Query termsQuery(List values, QueryShardContext context) {\n return super.termsQuery(values, context);\n }\n for (Object value : values) {\n- if (isSameIndex(value, context.index().getName())) {\n+ if (isSameIndex(value, context.getFullyQualifiedIndexName())) {\n // No need to OR these clauses - we can only logically be\n // running in the context of just one of these index names.\n return Queries.newMatchAllQuery();\n }\n }\n // None of the listed index names are this one\n- return Queries.newMatchNoDocsQuery(\"Index didn't match. Index queried: \" + context.index().getName() + \" vs. \" + values);\n+ return Queries.newMatchNoDocsQuery(\"Index didn't match. Index queried: \" + context.getFullyQualifiedIndexName()\n+ + \" vs. \" + values);\n }\n \n private boolean isSameIndex(Object value, String indexName) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -39,27 +39,29 @@\n import org.elasticsearch.index.analysis.IndexAnalyzers;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n-import org.elasticsearch.index.fielddata.IndexFieldDataService;\n+import org.elasticsearch.index.fielddata.plain.ConstantIndexFieldData;\n import org.elasticsearch.index.mapper.ContentPath;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.IndexFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.MetadataFieldMapper;\n import org.elasticsearch.index.mapper.ObjectMapper;\n import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.index.query.support.NestedScope;\n import org.elasticsearch.index.similarity.SimilarityService;\n-import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService;\n-import org.elasticsearch.script.TemplateScript;\n import org.elasticsearch.search.lookup.SearchLookup;\n+import org.elasticsearch.transport.RemoteClusterAware;\n \n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Collection;\n import java.util.HashMap;\n import java.util.Map;\n import java.util.function.BiConsumer;\n+import java.util.function.Function;\n import java.util.function.LongSupplier;\n \n import static java.util.Collections.unmodifiableMap;\n@@ -74,12 +76,14 @@ public class QueryShardContext extends QueryRewriteContext {\n private final MapperService mapperService;\n private final SimilarityService similarityService;\n private final BitsetFilterCache bitsetFilterCache;\n- private final IndexFieldDataService indexFieldDataService;\n+ private final Function<MappedFieldType, IndexFieldData<?>> indexFieldDataService;\n private final int shardId;\n private final IndexReader reader;\n+ private final String clusterAlias;\n private String[] types = Strings.EMPTY_ARRAY;\n private boolean cachable = true;\n private final SetOnce<Boolean> frozen = new SetOnce<>();\n+ private final String fullyQualifiedIndexName;\n \n public void setTypes(String... 
types) {\n this.types = types;\n@@ -96,27 +100,28 @@ public String[] getTypes() {\n private boolean isFilter;\n \n public QueryShardContext(int shardId, IndexSettings indexSettings, BitsetFilterCache bitsetFilterCache,\n- IndexFieldDataService indexFieldDataService, MapperService mapperService, SimilarityService similarityService,\n- ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n- Client client, IndexReader reader, LongSupplier nowInMillis) {\n+ Function<MappedFieldType, IndexFieldData<?>> indexFieldDataLookup, MapperService mapperService,\n+ SimilarityService similarityService, ScriptService scriptService, NamedXContentRegistry xContentRegistry,\n+ Client client, IndexReader reader, LongSupplier nowInMillis, String clusterAlias) {\n super(xContentRegistry, client, nowInMillis);\n this.shardId = shardId;\n this.similarityService = similarityService;\n this.mapperService = mapperService;\n this.bitsetFilterCache = bitsetFilterCache;\n- this.indexFieldDataService = indexFieldDataService;\n+ this.indexFieldDataService = indexFieldDataLookup;\n this.allowUnmappedFields = indexSettings.isDefaultAllowUnmappedFields();\n this.nestedScope = new NestedScope();\n this.scriptService = scriptService;\n this.indexSettings = indexSettings;\n this.reader = reader;\n-\n+ this.clusterAlias = clusterAlias;\n+ this.fullyQualifiedIndexName = RemoteClusterAware.buildRemoteIndexName(clusterAlias, indexSettings.getIndex().getName());\n }\n \n public QueryShardContext(QueryShardContext source) {\n this(source.shardId, source.indexSettings, source.bitsetFilterCache, source.indexFieldDataService, source.mapperService,\n source.similarityService, source.scriptService, source.getXContentRegistry(), source.client,\n- source.reader, source.nowInMillis);\n+ source.reader, source.nowInMillis, source.clusterAlias);\n this.types = source.getTypes();\n }\n \n@@ -156,8 +161,14 @@ public BitSetProducer bitsetFilter(Query filter) {\n return bitsetFilterCache.getBitSetProducer(filter);\n }\n \n- public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType mapper) {\n- return indexFieldDataService.getForField(mapper);\n+ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n+ if (clusterAlias != null && IndexFieldMapper.NAME.equals(fieldType.name())) {\n+ // this is a \"hack\" to make the _index field data aware of cross cluster search cluster aliases.\n+ ConstantIndexFieldData ifd = (ConstantIndexFieldData) indexFieldDataService.apply(fieldType);\n+ return (IFD) new ConstantIndexFieldData.Builder(m -> fullyQualifiedIndexName)\n+ .build(indexSettings, fieldType, null, null, mapperService);\n+ }\n+ return (IFD) indexFieldDataService.apply(fieldType);\n }\n \n public void addNamedQuery(String name, Query query) {\n@@ -420,4 +431,10 @@ public IndexReader getIndexReader() {\n return reader;\n }\n \n+ /**\n+ * Returns the fully qualified index name including a remote cluster alias if applicable\n+ */\n+ public String getFullyQualifiedIndexName() {\n+ return fullyQualifiedIndexName;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java",
"status": "modified"
},
{
"diff": "@@ -39,6 +39,7 @@\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -95,7 +96,6 @@ final class DefaultSearchContext extends SearchContext {\n private final BigArrays bigArrays;\n private final IndexShard indexShard;\n private final IndexService indexService;\n- private final ResponseCollectorService responseCollectorService;\n private final ContextIndexSearcher searcher;\n private final DfsSearchResult dfsResult;\n private final QuerySearchResult queryResult;\n@@ -150,7 +150,6 @@ final class DefaultSearchContext extends SearchContext {\n private final long originNanoTime = System.nanoTime();\n private volatile long lastAccessTime = -1;\n private Profilers profilers;\n- private ExecutorService searchExecutor;\n \n private final Map<String, SearchExtBuilder> searchExtBuilders = new HashMap<>();\n private final Map<Class<?>, Collector> queryCollectors = new HashMap<>();\n@@ -159,7 +158,7 @@ final class DefaultSearchContext extends SearchContext {\n \n DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget, Engine.Searcher engineSearcher,\n IndexService indexService, IndexShard indexShard, BigArrays bigArrays, Counter timeEstimateCounter,\n- TimeValue timeout, FetchPhase fetchPhase, ResponseCollectorService responseCollectorService) {\n+ TimeValue timeout, FetchPhase fetchPhase, String clusterAlias) {\n this.id = id;\n this.request = request;\n this.fetchPhase = fetchPhase;\n@@ -173,11 +172,11 @@ final class DefaultSearchContext extends SearchContext {\n this.fetchResult = new FetchSearchResult(id, shardTarget);\n this.indexShard = indexShard;\n this.indexService = indexService;\n- this.responseCollectorService = responseCollectorService;\n this.searcher = new ContextIndexSearcher(engineSearcher, indexService.cache().query(), indexShard.getQueryCachingPolicy());\n this.timeEstimateCounter = timeEstimateCounter;\n this.timeout = timeout;\n- queryShardContext = indexService.newQueryShardContext(request.shardId().id(), searcher.getIndexReader(), request::nowInMillis);\n+ queryShardContext = indexService.newQueryShardContext(request.shardId().id(), searcher.getIndexReader(), request::nowInMillis,\n+ clusterAlias);\n queryShardContext.setTypes(request.types());\n queryBoost = request.indexBoost();\n }\n@@ -496,9 +495,10 @@ public BitsetFilterCache bitsetFilterCache() {\n return indexService.cache().bitsetFilterCache();\n }\n \n+\n @Override\n- public IndexFieldDataService fieldData() {\n- return indexService.fieldData();\n+ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n+ return queryShardContext.getForField(fieldType);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/search/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -573,7 +573,7 @@ private DefaultSearchContext createSearchContext(ShardSearchRequest request, Tim\n \n final DefaultSearchContext searchContext = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget,\n engineSearcher, indexService, indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), timeout, fetchPhase,\n- responseCollectorService);\n+ request.getClusterAlias());\n boolean success = false;\n try {\n // we clone the query shard context here just for rewriting otherwise we",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -72,7 +72,7 @@ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOExcept\n if (subReaderContext == null || hit.docId() >= subReaderContext.docBase + subReaderContext.reader().maxDoc()) {\n int readerIndex = ReaderUtil.subIndex(hit.docId(), context.searcher().getIndexReader().leaves());\n subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex);\n- data = context.fieldData().getForField(fieldType).load(subReaderContext);\n+ data = context.getForField(fieldType).load(subReaderContext);\n values = data.getScriptValues();\n }\n int subDocId = hit.docId() - subReaderContext.docBase;",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -262,8 +263,8 @@ public BitsetFilterCache bitsetFilterCache() {\n }\n \n @Override\n- public IndexFieldDataService fieldData() {\n- return in.fieldData();\n+ public <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType) {\n+ return in.getForField(fieldType);\n }\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.util.concurrent.RefCounted;\n import org.elasticsearch.common.util.iterable.Iterables;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -210,7 +211,7 @@ public InnerHitsContext innerHits() {\n \n public abstract BitsetFilterCache bitsetFilterCache();\n \n- public abstract IndexFieldDataService fieldData();\n+ public abstract <IFD extends IndexFieldData<?>> IFD getForField(MappedFieldType fieldType);\n \n public abstract TimeValue timeout();\n ",
"filename": "core/src/main/java/org/elasticsearch/search/internal/SearchContext.java",
"status": "modified"
},
{
"diff": "@@ -20,33 +20,37 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n \n+import java.util.function.Function;\n+\n public class DocLookup {\n \n private final MapperService mapperService;\n- private final IndexFieldDataService fieldDataService;\n+ private final Function<MappedFieldType, IndexFieldData<?>> fieldDataLookup;\n \n @Nullable\n private final String[] types;\n \n- DocLookup(MapperService mapperService, IndexFieldDataService fieldDataService, @Nullable String[] types) {\n+ DocLookup(MapperService mapperService, Function<MappedFieldType, IndexFieldData<?>> fieldDataLookup, @Nullable String[] types) {\n this.mapperService = mapperService;\n- this.fieldDataService = fieldDataService;\n+ this.fieldDataLookup = fieldDataLookup;\n this.types = types;\n }\n \n public MapperService mapperService() {\n return this.mapperService;\n }\n \n- public IndexFieldDataService fieldDataService() {\n- return this.fieldDataService;\n+ public IndexFieldData<?> getForField(MappedFieldType fieldType) {\n+ return fieldDataLookup.apply(fieldType);\n }\n \n public LeafDocLookup getLeafDocLookup(LeafReaderContext context) {\n- return new LeafDocLookup(mapperService, fieldDataService, types, context);\n+ return new LeafDocLookup(mapperService, fieldDataLookup, types, context);\n }\n \n public String[] getTypes() {",
"filename": "core/src/main/java/org/elasticsearch/search/lookup/DocLookup.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -34,13 +35,14 @@\n import java.util.HashMap;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.Function;\n \n public class LeafDocLookup implements Map<String, ScriptDocValues<?>> {\n \n private final Map<String, ScriptDocValues<?>> localCacheFieldData = new HashMap<>(4);\n \n private final MapperService mapperService;\n- private final IndexFieldDataService fieldDataService;\n+ private final Function<MappedFieldType, IndexFieldData<?>> fieldDataLookup;\n \n @Nullable\n private final String[] types;\n@@ -49,9 +51,10 @@ public class LeafDocLookup implements Map<String, ScriptDocValues<?>> {\n \n private int docId = -1;\n \n- LeafDocLookup(MapperService mapperService, IndexFieldDataService fieldDataService, @Nullable String[] types, LeafReaderContext reader) {\n+ LeafDocLookup(MapperService mapperService, Function<MappedFieldType, IndexFieldData<?>> fieldDataLookup, @Nullable String[] types,\n+ LeafReaderContext reader) {\n this.mapperService = mapperService;\n- this.fieldDataService = fieldDataService;\n+ this.fieldDataLookup = fieldDataLookup;\n this.types = types;\n this.reader = reader;\n }\n@@ -60,8 +63,8 @@ public MapperService mapperService() {\n return this.mapperService;\n }\n \n- public IndexFieldDataService fieldDataService() {\n- return this.fieldDataService;\n+ public IndexFieldData<?> getForField(MappedFieldType fieldType) {\n+ return fieldDataLookup.apply(fieldType);\n }\n \n public void setDocument(int docId) {\n@@ -83,7 +86,7 @@ public ScriptDocValues<?> get(Object key) {\n scriptValues = AccessController.doPrivileged(new PrivilegedAction<ScriptDocValues<?>>() {\n @Override\n public ScriptDocValues<?> run() {\n- return fieldDataService.getForField(fieldType).load(reader).getScriptValues();\n+ return fieldDataLookup.apply(fieldType).load(reader).getScriptValues();\n }\n });\n localCacheFieldData.put(fieldName, scriptValues);",
"filename": "core/src/main/java/org/elasticsearch/search/lookup/LeafDocLookup.java",
"status": "modified"
},
{
"diff": "@@ -21,9 +21,13 @@\n \n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n \n+import java.util.function.Function;\n+\n public class SearchLookup {\n \n final DocLookup docMap;\n@@ -32,8 +36,9 @@ public class SearchLookup {\n \n final FieldsLookup fieldsLookup;\n \n- public SearchLookup(MapperService mapperService, IndexFieldDataService fieldDataService, @Nullable String[] types) {\n- docMap = new DocLookup(mapperService, fieldDataService, types);\n+ public SearchLookup(MapperService mapperService, Function<MappedFieldType, IndexFieldData<?>> fieldDataLookup,\n+ @Nullable String[] types) {\n+ docMap = new DocLookup(mapperService, fieldDataLookup, types);\n sourceLookup = new SourceLookup();\n fieldsLookup = new FieldsLookup(mapperService, types);\n }",
"filename": "core/src/main/java/org/elasticsearch/search/lookup/SearchLookup.java",
"status": "modified"
},
{
"diff": "@@ -105,7 +105,7 @@ public void execute(SearchContext searchContext) throws QueryPhaseExecutionExcep\n // here to make sure it happens during the QUERY phase\n aggregationPhase.preProcess(searchContext);\n Sort indexSort = searchContext.mapperService().getIndexSettings().getIndexSortConfig()\n- .buildIndexSort(searchContext.mapperService()::fullName, searchContext.fieldData()::getForField);\n+ .buildIndexSort(searchContext.mapperService()::fullName, searchContext::getForField);\n final ContextIndexSearcher searcher = searchContext.searcher();\n boolean rescore = execute(searchContext, searchContext.searcher(), searcher::setCheckCancelled, indexSort);\n ",
"filename": "core/src/main/java/org/elasticsearch/search/query/QueryPhase.java",
"status": "modified"
},
{
"diff": "@@ -169,7 +169,7 @@ public void testTermQuery() {\n QueryShardContext context = new QueryShardContext(0,\n new IndexSettings(IndexMetaData.builder(\"foo\").settings(indexSettings).build(),\n indexSettings),\n- null, null, null, null, null, xContentRegistry(), null, null, () -> nowInMillis);\n+ null, null, null, null, null, xContentRegistry(), null, null, () -> nowInMillis, null);\n MappedFieldType ft = createDefaultFieldType();\n ft.setName(\"field\");\n String date = \"2015-10-12T14:10:55\";\n@@ -191,7 +191,7 @@ public void testRangeQuery() throws IOException {\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).build();\n QueryShardContext context = new QueryShardContext(0,\n new IndexSettings(IndexMetaData.builder(\"foo\").settings(indexSettings).build(), indexSettings),\n- null, null, null, null, null, xContentRegistry(), null, null, () -> nowInMillis);\n+ null, null, null, null, null, xContentRegistry(), null, null, () -> nowInMillis, null);\n MappedFieldType ft = createDefaultFieldType();\n ft.setName(\"field\");\n String date1 = \"2015-10-12T14:10:55\";",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -46,7 +46,7 @@ public void testDoubleIndexingSameDoc() throws Exception {\n IndexService index = createIndex(\"test\");\n client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping, XContentType.JSON).get();\n DocumentMapper mapper = index.mapperService().documentMapper(\"type\");\n- QueryShardContext context = index.newQueryShardContext(0, null, () -> 0L);\n+ QueryShardContext context = index.newQueryShardContext(0, null, () -> 0L, null);\n \n ParsedDocument doc = mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n .startObject()",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DoubleIndexingDocTests.java",
"status": "modified"
},
{
"diff": "@@ -63,7 +63,7 @@ public void testExternalValues() throws Exception {\n Collections.singletonMap(ExternalMetadataMapper.CONTENT_TYPE, new ExternalMetadataMapper.TypeParser()));\n \n Supplier<QueryShardContext> queryShardContext = () -> {\n- return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); });\n+ return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); }, null);\n };\n DocumentMapperParser parser = new DocumentMapperParser(indexService.getIndexSettings(), indexService.mapperService(),\n indexService.getIndexAnalyzers(), indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry,\n@@ -114,7 +114,7 @@ public void testExternalValuesWithMultifield() throws Exception {\n MapperRegistry mapperRegistry = new MapperRegistry(mapperParsers, Collections.emptyMap());\n \n Supplier<QueryShardContext> queryShardContext = () -> {\n- return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); });\n+ return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); }, null);\n };\n DocumentMapperParser parser = new DocumentMapperParser(indexService.getIndexSettings(), indexService.mapperService(),\n indexService.getIndexAnalyzers(), indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry,\n@@ -180,7 +180,7 @@ public void testExternalValuesWithMultifieldTwoLevels() throws Exception {\n MapperRegistry mapperRegistry = new MapperRegistry(mapperParsers, Collections.emptyMap());\n \n Supplier<QueryShardContext> queryShardContext = () -> {\n- return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); });\n+ return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); }, null);\n };\n DocumentMapperParser parser = new DocumentMapperParser(indexService.getIndexSettings(), indexService.mapperService(),\n indexService.getIndexAnalyzers(), indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry,",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/ExternalFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -239,7 +239,7 @@ public void testSeesFieldsFromPlugins() throws IOException {\n );\n final MapperRegistry mapperRegistry = indicesModule.getMapperRegistry();\n Supplier<QueryShardContext> queryShardContext = () -> {\n- return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); });\n+ return indexService.newQueryShardContext(0, null, () -> { throw new UnsupportedOperationException(); }, null);\n };\n MapperService mapperService = new MapperService(indexService.getIndexSettings(), indexService.getIndexAnalyzers(),\n indexService.xContentRegistry(), indexService.similarityService(), mapperRegistry, queryShardContext);",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -82,7 +82,7 @@ public void testRangeQuery() throws Exception {\n .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n QueryShardContext context = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n- null, null, () -> nowInMillis);\n+ null, null, () -> nowInMillis, null);\n RangeFieldMapper.RangeFieldType ft = new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n ft.setName(FIELDNAME);\n ft.setIndexOptions(IndexOptions.DOCS);",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,6 @@\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.PhraseQuery;\n-import org.apache.lucene.search.DisjunctionMaxQuery;\n import org.apache.lucene.search.MultiPhraseQuery;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n@@ -38,7 +37,6 @@\n import java.io.IOException;\n \n import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.instanceOf;\n \n /**\n * Makes sure that graph analysis is disabled with shingle filters of different size\n@@ -71,7 +69,7 @@ public void setup() {\n indexService = createIndex(\"test\", settings, \"t\",\n \"text_shingle\", \"type=text,analyzer=text_shingle\",\n \"text_shingle_unigram\", \"type=text,analyzer=text_shingle_unigram\");\n- shardContext = indexService.newQueryShardContext(0, null, () -> 0L);\n+ shardContext = indexService.newQueryShardContext(0, null, () -> 0L, null);\n \n // parsed queries for \"text_shingle_unigram:(foo bar baz)\" with query parsers\n // that ignores position length attribute",
"filename": "core/src/test/java/org/elasticsearch/index/query/DisableGraphQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -18,14 +18,27 @@\n */\n package org.elasticsearch.index.query;\n \n+import org.apache.lucene.search.MatchNoDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n+import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData;\n+import org.elasticsearch.index.mapper.ContentPath;\n+import org.elasticsearch.index.mapper.IndexFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.test.ESTestCase;\n+import org.hamcrest.Matcher;\n+import org.hamcrest.Matchers;\n+\n+import java.io.IOException;\n \n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -38,18 +51,23 @@\n public class QueryShardContextTests extends ESTestCase {\n \n public void testFailIfFieldMappingNotFound() {\n- IndexMetaData.Builder indexMetadata = new IndexMetaData.Builder(\"index\");\n- indexMetadata.settings(Settings.builder().put(\"index.version.created\", Version.CURRENT)\n+ IndexMetaData.Builder indexMetadataBuilder = new IndexMetaData.Builder(\"index\");\n+ indexMetadataBuilder.settings(Settings.builder().put(\"index.version.created\", Version.CURRENT)\n .put(\"index.number_of_shards\", 1)\n .put(\"index.number_of_replicas\", 1)\n );\n- IndexSettings indexSettings = new IndexSettings(indexMetadata.build(), Settings.EMPTY);\n+ IndexMetaData indexMetaData = indexMetadataBuilder.build();\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n MapperService mapperService = mock(MapperService.class);\n when(mapperService.getIndexSettings()).thenReturn(indexSettings);\n+ when(mapperService.index()).thenReturn(indexMetaData.getIndex());\n final long nowInMillis = randomNonNegativeLong();\n+\n QueryShardContext context = new QueryShardContext(\n- 0, indexSettings, null, null, mapperService, null, null, xContentRegistry(), null, null,\n- () -> nowInMillis);\n+ 0, indexSettings, null, mappedFieldType ->\n+ mappedFieldType.fielddataBuilder().build(indexSettings, mappedFieldType, null, null, null)\n+ , mapperService, null, null, xContentRegistry(), null, null,\n+ () -> nowInMillis, null);\n \n context.setAllowUnmappedFields(false);\n MappedFieldType fieldType = new TextFieldMapper.TextFieldType();\n@@ -74,4 +92,47 @@ public void testFailIfFieldMappingNotFound() {\n assertThat(result.name(), equalTo(\"name\"));\n }\n \n+ public void testClusterAlias() throws IOException {\n+ IndexMetaData.Builder indexMetadataBuilder = new IndexMetaData.Builder(\"index\");\n+ indexMetadataBuilder.settings(Settings.builder().put(\"index.version.created\", Version.CURRENT)\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"index.number_of_replicas\", 1)\n+ );\n+ IndexMetaData indexMetaData = indexMetadataBuilder.build();\n+ IndexSettings indexSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n+ MapperService mapperService = mock(MapperService.class);\n+ when(mapperService.getIndexSettings()).thenReturn(indexSettings);\n+ when(mapperService.index()).thenReturn(indexMetaData.getIndex());\n+ final 
long nowInMillis = randomNonNegativeLong();\n+\n+ Mapper.BuilderContext ctx = new Mapper.BuilderContext(indexSettings.getSettings(), new ContentPath());\n+ IndexFieldMapper mapper = new IndexFieldMapper.Builder(null).build(ctx);\n+ final String clusterAlias = randomBoolean() ? null : \"remote_cluster\";\n+ QueryShardContext context = new QueryShardContext(\n+ 0, indexSettings, null, mappedFieldType ->\n+ mappedFieldType.fielddataBuilder().build(indexSettings, mappedFieldType, null, null, mapperService)\n+ , mapperService, null, null, xContentRegistry(), null, null,\n+ () -> nowInMillis, clusterAlias);\n+\n+ IndexFieldData<?> forField = context.getForField(mapper.fieldType());\n+ String expected = clusterAlias == null ? indexMetaData.getIndex().getName()\n+ : clusterAlias + \":\" + indexMetaData.getIndex().getName();\n+ assertEquals(expected, ((AbstractAtomicOrdinalsFieldData)forField.load(null)).getOrdinalsValues().lookupOrd(0).utf8ToString());\n+ Query query = mapper.fieldType().termQuery(\"index\", context);\n+ if (clusterAlias == null) {\n+ assertEquals(Queries.newMatchAllQuery(), query);\n+ } else {\n+ assertThat(query, Matchers.instanceOf(MatchNoDocsQuery.class));\n+ }\n+ query = mapper.fieldType().termQuery(\"remote_cluster:index\", context);\n+ if (clusterAlias != null) {\n+ assertEquals(Queries.newMatchAllQuery(), query);\n+ } else {\n+ assertThat(query, Matchers.instanceOf(MatchNoDocsQuery.class));\n+ }\n+\n+ query = mapper.fieldType().termQuery(\"something:else\", context);\n+ assertThat(query, Matchers.instanceOf(MatchNoDocsQuery.class));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/index/query/QueryShardContextTests.java",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,7 @@ public void testRewriteMissingField() throws Exception {\n IndexService indexService = createIndex(\"test\");\n IndexReader reader = new MultiReader();\n QueryRewriteContext context = new QueryShardContext(0, indexService.getIndexSettings(), null, null, indexService.mapperService(),\n- null, null, xContentRegistry(), null, reader, null);\n+ null, null, xContentRegistry(), null, reader, null, null);\n RangeQueryBuilder range = new RangeQueryBuilder(\"foo\");\n assertEquals(Relation.DISJOINT, range.getRelation(context));\n }\n@@ -54,7 +54,7 @@ public void testRewriteMissingReader() throws Exception {\n indexService.mapperService().merge(\"type\",\n new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n QueryRewriteContext context = new QueryShardContext(0, indexService.getIndexSettings(), null, null, indexService.mapperService(),\n- null, null, xContentRegistry(), null, null, null);\n+ null, null, xContentRegistry(), null, null, null, null);\n RangeQueryBuilder range = new RangeQueryBuilder(\"foo\");\n // can't make assumptions on a missing reader, so it must return INTERSECT\n assertEquals(Relation.INTERSECTS, range.getRelation(context));\n@@ -73,7 +73,7 @@ public void testRewriteEmptyReader() throws Exception {\n new CompressedXContent(mapping), MergeReason.MAPPING_UPDATE, false);\n IndexReader reader = new MultiReader();\n QueryRewriteContext context = new QueryShardContext(0, indexService.getIndexSettings(), null, null, indexService.mapperService(),\n- null, null, xContentRegistry(), null, reader, null);\n+ null, null, xContentRegistry(), null, reader, null, null);\n RangeQueryBuilder range = new RangeQueryBuilder(\"foo\");\n // no values -> DISJOINT\n assertEquals(Relation.DISJOINT, range.getRelation(context));",
"filename": "core/src/test/java/org/elasticsearch/index/query/RangeQueryRewriteTests.java",
"status": "modified"
},
{
"diff": "@@ -177,7 +177,7 @@ public void testQuoteFieldSuffix() {\n IndexMetaData indexState = IndexMetaData.builder(\"index\").settings(indexSettings).build();\n IndexSettings settings = new IndexSettings(indexState, Settings.EMPTY);\n QueryShardContext mockShardContext = new QueryShardContext(0, settings, null, null, null, null, null, xContentRegistry(),\n- null, null, System::currentTimeMillis) {\n+ null, null, System::currentTimeMillis, null) {\n @Override\n public MappedFieldType fieldMapper(String name) {\n return new MockFieldMapper.FakeFieldType();\n@@ -191,7 +191,7 @@ public MappedFieldType fieldMapper(String name) {\n \n // Now check what happens if foo.quote does not exist\n mockShardContext = new QueryShardContext(0, settings, null, null, null, null, null, xContentRegistry(),\n- null, null, System::currentTimeMillis) {\n+ null, null, System::currentTimeMillis, null) {\n @Override\n public MappedFieldType fieldMapper(String name) {\n if (name.equals(\"foo.quote\")) {",
"filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryParserTests.java",
"status": "modified"
},
{
"diff": "@@ -75,7 +75,7 @@ public void testCustomDummyQueryWithinBooleanQuery() {\n private static QueryShardContext queryShardContext() {\n IndicesService indicesService = internalCluster().getDataNodeInstance(IndicesService.class);\n return indicesService.indexServiceSafe(resolveIndex(\"index\")).newQueryShardContext(\n- randomInt(20), null, () -> { throw new UnsupportedOperationException(); });\n+ randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null);\n }\n \n //see #11120",
"filename": "core/src/test/java/org/elasticsearch/index/query/plugin/CustomQueryParserIT.java",
"status": "modified"
},
{
"diff": "@@ -21,9 +21,6 @@\n \n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.BlendedTermQuery;\n-import org.apache.lucene.search.BooleanClause;\n-import org.apache.lucene.search.BooleanClause.Occur;\n-import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.DisjunctionMaxQuery;\n import org.apache.lucene.search.MatchAllDocsQuery;\n@@ -89,7 +86,7 @@ public void setup() throws IOException {\n \n public void testCrossFieldMultiMatchQuery() throws IOException {\n QueryShardContext queryShardContext = indexService.newQueryShardContext(\n- randomInt(20), null, () -> { throw new UnsupportedOperationException(); });\n+ randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null);\n queryShardContext.setAllowUnmappedFields(true);\n Query parsedQuery = multiMatchQuery(\"banon\").field(\"name.first\", 2).field(\"name.last\", 3).field(\"foobar\").type(MultiMatchQueryBuilder.Type.CROSS_FIELDS).toQuery(queryShardContext);\n try (Engine.Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n@@ -114,7 +111,7 @@ public void testBlendTerms() {\n float[] boosts = new float[] {2, 3};\n Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n- indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }),\n+ indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n@@ -130,7 +127,7 @@ public void testBlendTermsWithFieldBoosts() {\n float[] boosts = new float[] {200, 30};\n Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n- indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }),\n+ indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n@@ -149,7 +146,7 @@ public Query termQuery(Object value, QueryShardContext context) {\n float[] boosts = new float[] {2};\n Query expected = BlendedTermQuery.dismaxBlendedQuery(terms, boosts, 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n- indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }),\n+ indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n@@ -174,14 +171,14 @@ public Query termQuery(Object value, QueryShardContext context) {\n expectedDisjunct1\n ), 1.0f);\n Query actual = MultiMatchQuery.blendTerm(\n- indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }),\n+ indexService.newQueryShardContext(randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null),\n new BytesRef(\"baz\"), null, 1f, new FieldAndFieldType(ft1, 2), new FieldAndFieldType(ft2, 3));\n assertEquals(expected, actual);\n }\n \n public void testMultiMatchPrefixWithAllField() throws IOException {\n QueryShardContext 
queryShardContext = indexService.newQueryShardContext(\n- randomInt(20), null, () -> { throw new UnsupportedOperationException(); });\n+ randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null);\n queryShardContext.setAllowUnmappedFields(true);\n Query parsedQuery =\n multiMatchQuery(\"foo\").field(\"_all\").type(MultiMatchQueryBuilder.Type.PHRASE_PREFIX).toQuery(queryShardContext);\n@@ -191,7 +188,7 @@ public void testMultiMatchPrefixWithAllField() throws IOException {\n \n public void testMultiMatchCrossFieldsWithSynonyms() throws IOException {\n QueryShardContext queryShardContext = indexService.newQueryShardContext(\n- randomInt(20), null, () -> { throw new UnsupportedOperationException(); });\n+ randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null);\n \n // check that synonym query is used for a single field\n Query parsedQuery =",
"filename": "core/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.NestedQueryBuilder;\n@@ -297,7 +296,7 @@ public void testConjunction() {\n }\n \n public void testNested() throws IOException {\n- QueryShardContext context = indexService.newQueryShardContext(0, new MultiReader(), () -> 0);\n+ QueryShardContext context = indexService.newQueryShardContext(0, new MultiReader(), () -> 0, null);\n NestedQueryBuilder queryBuilder = new NestedQueryBuilder(\"nested1\", new MatchAllQueryBuilder(), ScoreMode.Avg);\n ESToParentBlockJoinQuery query = (ESToParentBlockJoinQuery) queryBuilder.toQuery(context);\n ",
"filename": "core/src/test/java/org/elasticsearch/index/search/NestedHelperTests.java",
"status": "modified"
},
{
"diff": "@@ -99,7 +99,7 @@ public void testParseAndValidate() {\n SearchContext context = mock(SearchContext.class);\n QueryShardContext qsc = new QueryShardContext(0,\n new IndexSettings(IndexMetaData.builder(\"foo\").settings(indexSettings).build(), indexSettings), null, null, null, null,\n- null, xContentRegistry(), null, null, () -> now);\n+ null, xContentRegistry(), null, null, () -> now, null);\n when(context.getQueryShardContext()).thenReturn(qsc);\n FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime\");\n DocValueFormat format = new DocValueFormat.DateTime(formatter, DateTimeZone.UTC);",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/histogram/ExtendedBoundsTests.java",
"status": "modified"
},
{
"diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.script.MockScriptEngine;\n import org.elasticsearch.script.ScoreAccessor;\n import org.elasticsearch.script.Script;\n-import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptEngine;\n import org.elasticsearch.script.ScriptModule;\n import org.elasticsearch.script.ScriptService;\n@@ -200,6 +199,6 @@ protected QueryShardContext queryShardContextMock(MapperService mapperService, f\n Map<String, ScriptEngine> engines = Collections.singletonMap(scriptEngine.getType(), scriptEngine);\n ScriptService scriptService = new ScriptService(Settings.EMPTY, engines, ScriptModule.CORE_CONTEXTS);\n return new QueryShardContext(0, mapperService.getIndexSettings(), null, null, mapperService, null, scriptService,\n- xContentRegistry(), null, null, System::currentTimeMillis);\n+ xContentRegistry(), null, null, System::currentTimeMillis, null);\n }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorTests.java",
"status": "modified"
},
{
"diff": "@@ -41,7 +41,7 @@ public void testKeyword() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Bytes> config = ValuesSourceConfig.resolve(\n context, null, \"bytes\", null, null, null, null);\n@@ -63,7 +63,7 @@ public void testEmptyKeyword() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Bytes> config = ValuesSourceConfig.resolve(\n context, null, \"bytes\", null, null, null, null);\n@@ -90,7 +90,7 @@ public void testUnmappedKeyword() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n ValuesSourceConfig<ValuesSource.Bytes> config = ValuesSourceConfig.resolve(\n context, ValueType.STRING, \"bytes\", null, null, null, null);\n ValuesSource.Bytes valuesSource = config.toValuesSource(context);\n@@ -116,7 +116,7 @@ public void testLong() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n context, null, \"long\", null, null, null, null);\n@@ -138,7 +138,7 @@ public void testEmptyLong() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n context, null, \"long\", null, null, null, null);\n@@ -165,7 +165,7 @@ public void testUnmappedLong() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n context, ValueType.NUMBER, \"long\", null, null, null, null);\n@@ -192,7 +192,7 @@ public void testBoolean() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n 
context, null, \"bool\", null, null, null, null);\n@@ -214,7 +214,7 @@ public void testEmptyBoolean() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n context, null, \"bool\", null, null, null, null);\n@@ -241,7 +241,7 @@ public void testUnmappedBoolean() throws Exception {\n .get();\n \n try (Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L);\n+ QueryShardContext context = indexService.newQueryShardContext(0, searcher.reader(), () -> 42L, null);\n \n ValuesSourceConfig<ValuesSource.Numeric> config = ValuesSourceConfig.resolve(\n context, ValueType.BOOLEAN, \"bool\", null, null, null, null);",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/support/ValuesSourceConfigTests.java",
"status": "modified"
}
]
} |
{
"body": "Elasticsearch 5.5.0\r\nI realized that logger in AbstractXContentParser is not static. This causes a lot of new logger instantiations\r\n\r\n```\r\n private final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass()));\r\n```\r\n\r\nThis is unfortunately very expensive when I have a script which looks into _source to extract value from it. It seems this logger is instantiated for each document which has very significant impact on performance. See below. Many threads are hanging on \"sun.reflect.Reflection.getCallerClass(Native Method)\"\r\n\r\n``` java.lang.Thread.State: RUNNABLE\r\n at sun.reflect.Reflection.getCallerClass(Native Method)\r\n at java.lang.Class.newInstance(Class.java:397)\r\n at org.apache.logging.log4j.spi.AbstractLogger.createDefaultMessageFactory(AbstractLogger.java:212)\r\n at org.apache.logging.log4j.spi.AbstractLogger.<init>(AbstractLogger.java:128)\r\n at org.apache.logging.log4j.spi.ExtendedLoggerWrapper.<init>(ExtendedLoggerWrapper.java:44)\r\n at org.elasticsearch.common.logging.PrefixLogger.<init>(PrefixLogger.java:46)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:53)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:49)\r\n at org.elasticsearch.common.logging.ESLoggerFactory.getLogger(ESLoggerFactory.java:57)\r\n at org.elasticsearch.common.logging.Loggers.getLogger(Loggers.java:101)\r\n```\r\n\r\nCan this logger be as static one?\r\n\r\n",
"comments": [
{
"body": "I think the reason this is not static is that we want to use the class name of the concrete class for the logger name instead of creating a logger for the `AbstractXContentParser` itself. I see that @danielmitterdorfer added this deprecation logging so maybe he has an opinion on whether we need to have the class name of the concrete class here or if creating a logger for `AbstractXContentParser` would be ok so the logger can be static?",
"created_at": "2017-07-25T12:01:27Z"
},
{
"body": "Alternatively, it is possible to instantiate depreciation logger in the same way it is done in some part of `org.elasticsearch.common.settings.Setting`. This is done conditionally only if depreciation has been detected.\r\n\r\n```\r\n protected void checkDeprecation(Settings settings) {\r\n // They're using the setting, so we need to tell them to stop\r\n if (this.isDeprecated() && this.exists(settings) && settings.addDeprecatedSetting(this)) {\r\n // It would be convenient to show its replacement key, but replacement is often not so simple\r\n final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass()));\r\n deprecationLogger.deprecated(\"[{}] setting was deprecated in Elasticsearch and will be removed in a future release! \" +\r\n \"See the breaking changes documentation for the next major version.\", getKey());\r\n }\r\n }\r\n```\r\n\r\nNotice that in `AbstractXContentParser` `deprecationLogger` is used only in one place and in most cases the condition will not be matched so instantiating `deprecationLogger` on each `AbstractXContentParser` instantiation seems to be redundant and can hurt performance significantly.\r\n\r\n``` \r\n if (interpretedAsLenient) {\r\n deprecationLogger.deprecated(\"Expected a boolean [true/false] for property [{}] but got [{}]\", currentName(), rawValue);\r\n }\r\n```",
"created_at": "2017-07-25T12:09:44Z"
},
{
"body": "@colings86 Yes, the original motivation was to have the concrete class name to narrow the focus. But given the performance impact + the fact that there are only four implementations (JSON, CBOR, Smile and Yaml) I think the pragmatic choice is to just use a static instance here?",
"created_at": "2017-07-25T12:15:05Z"
},
{
"body": "Seems it did not auto-close. Closed by #25881.",
"created_at": "2017-07-25T13:58:14Z"
},
{
"body": "We really need to be more careful when we make something static, especially for a class that is so fundamental. The issue here is that merely constructing a list setting (e.g., the setting object for `path.data`) causes a JSON content parser to be initialized which causes this static initializer to run which touches logging. This will happen *before* logging is even configured and that's a no-no.",
"created_at": "2017-08-14T21:34:08Z"
},
{
"body": "For this I opened #26210.",
"created_at": "2017-08-14T21:55:22Z"
}
],
"number": 25879,
"title": "AbstractXContentParser - logger is not static"
} | {
"body": "With this commit we declare the deprecation logger in\r\nAbstractXContentParser as static. It was not static previously in order\r\nto let users see which concrete subclass issued a deprecation log\r\nstatement. As there is a performance overhead when allocating a lot of\r\nparser instances and there are only four subclasses (out of which JSON\r\nis the most likely one) we opt to declare it static instead.\r\n\r\nCloses #25879",
"number": 25881,
"review_comments": [],
"title": "Declare XContent deprecation logger as static"
} | {
"commits": [
{
"message": "Declar XContent deprecation logger as static\n\nWith this commit we declare the deprecation logger in\nAbstractXContentParser as static. It was not static previously in order\nto let users see which concrete subclass issued a deprecation log\nstatement. As there is a performance overhead when allocating a lot of\nparser instances and there are only four subclasses (out of which JSON\nis the most likely one) we opt to declare it static instead.\n\nCloses #25879"
}
],
"files": [
{
"diff": "@@ -54,7 +54,7 @@ private static void checkCoerceString(boolean coerce, Class<? extends Number> cl\n }\n }\n \n- private final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(getClass()));\n+ private static final DeprecationLogger deprecationLogger = new DeprecationLogger(Loggers.getLogger(AbstractXContentParser.class));\n \n private final NamedXContentRegistry xContentRegistry;\n ",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
}
]
} |
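Editor's note on the record above: the change boils down to replacing a per-instance logger with a single shared one. The sketch below restates that pattern outside the Elasticsearch source, assuming the Elasticsearch 5.x core jar on the classpath; the class name `ExampleParser` and the method `warnOnLenientBoolean` are invented for illustration, while the `DeprecationLogger` and `Loggers.getLogger(...)` calls are the ones shown in the quoted diff and comments.

```java
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;

// Hypothetical class used only to illustrate the pattern from the PR; not part of the actual change.
public class ExampleParser {

    // One shared logger for the class instead of one DeprecationLogger (and PrefixLogger underneath)
    // per parser instance, which is what showed up under sun.reflect.Reflection.getCallerClass
    // in the hot threads quoted in the issue.
    private static final DeprecationLogger DEPRECATION_LOGGER =
            new DeprecationLogger(Loggers.getLogger(ExampleParser.class));

    // Same call-site shape as the lenient-boolean warning quoted in the issue comments.
    void warnOnLenientBoolean(String fieldName, String rawValue) {
        DEPRECATION_LOGGER.deprecated(
                "Expected a boolean [true/false] for property [{}] but got [{}]", fieldName, rawValue);
    }
}
```

The trade-off called out in the comments still applies: the static logger is named after one class rather than the concrete subclass (JSON, CBOR, Smile or YAML), but it avoids constructing a logger per parsed document.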
{
"body": "**Edit:**\r\n\r\nOn 5.3.1 the context is:\r\n```` \r\n\"contexts\": {\r\n \"theContext\": [\r\n \"null\"\r\n ]\r\n}\r\n````\r\non 5.4.2 the context is:\r\n````\r\n\"contexts\": {\r\n \"theContext\": [\r\n \"bar\",\r\n \"null\"\r\n ]\r\n}\r\n````\r\nThe context should be:\r\n```` \r\n\"contexts\": {\r\n \"theContext\": [\r\n \"bar\"\r\n ]\r\n}\r\n````\r\n\r\n<hr />\r\n\r\nPerhaps this isn't a bug, but I can't find any documentation that suggests this shouldn't work.\r\n\r\nWhen setting a document field `index` to `not_analyzed`, a suggester context using `path` for that document field is null.\r\n\r\nI have a user id that's a UUID which I use a term query search on, but I also use the same user id for context in a suggest. I need to set the index to not_analyzed on the UUID so that I can use the term query, but when setting the index to not_analyzed the context the path is null in the suggest and never matches.\r\n\r\nThe work around is to set the context separately when uploading the document. This causes an extra field in the request (ohmy) and presumably an increase in storage space.\r\n\r\nHere's an example showing that the context path works when not specifying an index, but when setting the index to not_analyzed, the context is null.\r\n\r\n-----\r\nMapping without specifying index for `theContext`;\r\n````\r\ncurl -XPUT localhost:9200/analyzed -d '{\r\n \"mappings\": {\r\n \"indexed\" : {\r\n \"properties\": {\r\n \"suggest\": {\r\n \"type\": \"completion\",\r\n \"contexts\": [\r\n {\r\n \"name\": \"theContext\",\r\n \"type\": \"category\",\r\n \"path\": \"theContext\"\r\n }\r\n ]\r\n },\r\n \"theContext\": {\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n }\r\n }'\r\n````\r\nMapping for `\"index\": \"not_analyzed\"` for `theContext`:\r\n````\r\ncurl -XPUT localhost:9200/not_analyzed -d '{\r\n \"mappings\": {\r\n \"indexed\" : {\r\n \"properties\": {\r\n \"suggest\": {\r\n \"type\": \"completion\",\r\n \"contexts\": [\r\n {\r\n \"name\": \"theContext\",\r\n \"type\": \"category\",\r\n \"path\": \"theContext\"\r\n }\r\n ]\r\n },\r\n \"theContext\": {\r\n \"type\": \"string\",\r\n \"index\": \"not_analyzed\"\r\n }\r\n }\r\n }\r\n }\r\n }'\r\n````\r\nAdding the document to both:\r\n````\r\ncurl -XPOST localhost:9200/analyzed/indexed/foo -d '{\r\n\t\"suggest\": {\r\n\t\t\"input\": [\"foo\"]\r\n\t},\r\n\t\"theContext\": \"bar\"\r\n}'\r\ncurl -XPOST localhost:9200/not_analyzed/indexed/foo -d '{\r\n\t\"suggest\": {\r\n\t\t\"input\": [\"foo\"]\r\n\t},\r\n\t\"theContext\": \"bar\"\r\n}'\r\n````\r\nSearching `analyzed` shows `contexts.theContext = bar`:\r\n````\r\ncurl -XPOST localhost:9200/analyzed/indexed/_search -d '{\r\n \"suggest\" : {\r\n \"foo\" : {\r\n \"prefix\" : \"foo\",\r\n \"completion\" : {\r\n \"field\" : \"suggest\"\r\n }\r\n }\r\n }\r\n}'\r\n````\r\n````\r\n{\r\n \"took\": 1,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"suggest\": {\r\n \"foo\": [\r\n {\r\n \"text\": \"foo\",\r\n \"offset\": 0,\r\n \"length\": 3,\r\n \"options\": [\r\n {\r\n \"text\": \"foo\",\r\n \"_index\": \"analyzed\",\r\n \"_type\": \"indexed\",\r\n \"_id\": \"foo\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"suggest\": {\r\n \"input\": [\r\n \"foo\"\r\n ]\r\n },\r\n \"theContext\": \"bar\"\r\n },\r\n \"contexts\": {\r\n \"theContext\": [\r\n \"bar\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n````\r\n\r\nThe same search on `not_analyzed` 
shows `contexts.theContext = null`:\r\n````\r\ncurl -XPOST localhost:9200/not_analyzed/indexed/_search -d '{\r\n \"suggest\" : {\r\n \"foo\" : {\r\n \"prefix\" : \"foo\",\r\n \"completion\" : {\r\n \"field\" : \"suggest\"\r\n }\r\n }\r\n }\r\n}'\r\n````\r\n````\r\n{\r\n \"took\": 2,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"suggest\": {\r\n \"foo\": [\r\n {\r\n \"text\": \"foo\",\r\n \"offset\": 0,\r\n \"length\": 3,\r\n \"options\": [\r\n {\r\n \"text\": \"foo\",\r\n \"_index\": \"not_analyzed\",\r\n \"_type\": \"indexed\",\r\n \"_id\": \"foo\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"suggest\": {\r\n \"input\": [\r\n \"foo\"\r\n ]\r\n },\r\n \"theContext\": \"bar\"\r\n },\r\n \"contexts\": {\r\n \"theContext\": [\r\n \"null\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n````",
"comments": [
{
"body": "@jimczi could you look at this when you are back?",
"created_at": "2017-06-28T08:25:52Z"
}
],
"number": 25404,
"title": "Suggester context is null for path with index not_analyzed"
} | {
"body": "The context suggester extracts the context field values from the document but it does not filter doc values field coming from Keyword field.\r\nThis change filters doc values field when building the context values.\r\n\r\nFixes #25404",
"number": 25858,
"review_comments": [
{
"body": "Can we make it a real exception? As well as fail if none of the fields provided a value so that we fail on numeric fields, or unindexed keyword fields?",
"created_at": "2017-07-24T12:08:25Z"
}
],
"title": "Context suggester should filter doc values field"
} | {
"commits": [
{
"message": "Context suggester should filter doc values field\n\nThe context suggester extracts the context field values from the document but it does not filter doc values field coming from Keyword field.\nThis change filters doc values field when building the context values.\n\nFixes #25404"
},
{
"message": "apply review comment"
}
],
"files": [
{
"diff": "@@ -19,7 +19,11 @@\n \n package org.elasticsearch.search.suggest.completion.context;\n \n+import org.apache.lucene.document.SortedDocValuesField;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.search.SortedSetSortField;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -28,6 +32,7 @@\n import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n+import org.elasticsearch.index.mapper.StringFieldType;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -136,10 +141,16 @@ public Set<CharSequence> parseContext(Document document) {\n IndexableField[] fields = document.getFields(fieldName);\n values = new HashSet<>(fields.length);\n for (IndexableField field : fields) {\n- if (field.fieldType() instanceof KeywordFieldMapper.KeywordFieldType) {\n+ if (field instanceof SortedDocValuesField ||\n+ field instanceof SortedSetDocValuesField ||\n+ field instanceof StoredField) {\n+ // Ignore doc values and stored fields\n+ } else if (field.fieldType() instanceof KeywordFieldMapper.KeywordFieldType) {\n values.add(field.binaryValue().utf8ToString());\n- } else {\n+ } else if (field.fieldType() instanceof StringFieldType) {\n values.add(field.stringValue());\n+ } else {\n+ throw new IllegalArgumentException(\"Failed to parse context field [\" + fieldName + \"], only keyword and text fields are accepted\");\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/suggest/completion/context/CategoryContextMapping.java",
"status": "modified"
},
{
"diff": "@@ -20,9 +20,14 @@\n package org.elasticsearch.search.suggest.completion;\n \n import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.IntPoint;\n+import org.apache.lucene.document.SortedDocValuesField;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n+import org.apache.lucene.document.StoredField;\n import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.search.suggest.document.ContextSuggestField;\n+import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -32,11 +37,15 @@\n import org.elasticsearch.index.mapper.CompletionFieldMapper.CompletionFieldType;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n+import org.elasticsearch.index.mapper.StringFieldType;\n+import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.search.suggest.completion.context.CategoryContextMapping;\n import org.elasticsearch.search.suggest.completion.context.ContextBuilder;\n import org.elasticsearch.search.suggest.completion.context.ContextMapping;\n@@ -46,6 +55,7 @@\n import java.util.Set;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n \n public class CategoryContextMappingTests extends ESSingleNodeTestCase {\n@@ -699,10 +709,41 @@ public void testUnknownQueryContextParsing() throws Exception {\n public void testParsingContextFromDocument() throws Exception {\n CategoryContextMapping mapping = ContextBuilder.category(\"cat\").field(\"category\").build();\n ParseContext.Document document = new ParseContext.Document();\n- document.add(new StringField(\"category\", \"category1\", Field.Store.NO));\n+\n+ KeywordFieldMapper.KeywordFieldType keyword = new KeywordFieldMapper.KeywordFieldType();\n+ keyword.setName(\"category\");\n+ document.add(new Field(keyword.name(), new BytesRef(\"category1\"), keyword));\n+ // Ignore doc values\n+ document.add(new SortedSetDocValuesField(keyword.name(), new BytesRef(\"category1\")));\n Set<CharSequence> context = mapping.parseContext(document);\n assertThat(context.size(), equalTo(1));\n assertTrue(context.contains(\"category1\"));\n+\n+\n+ document = new ParseContext.Document();\n+ TextFieldMapper.TextFieldType text = new TextFieldMapper.TextFieldType();\n+ text.setName(\"category\");\n+ document.add(new Field(text.name(), \"category1\", text));\n+ // Ignore stored field\n+ document.add(new StoredField(text.name(), \"category1\", text));\n+ context = mapping.parseContext(document);\n+ assertThat(context.size(), equalTo(1));\n+ assertTrue(context.contains(\"category1\"));\n+\n+ document = new ParseContext.Document();\n+ document.add(new SortedSetDocValuesField(\"category\", new BytesRef(\"category\")));\n+ context = mapping.parseContext(document);\n+ assertThat(context.size(), equalTo(0));\n+\n+ 
document = new ParseContext.Document();\n+ document.add(new SortedDocValuesField(\"category\", new BytesRef(\"category\")));\n+ context = mapping.parseContext(document);\n+ assertThat(context.size(), equalTo(0));\n+\n+ final ParseContext.Document doc = new ParseContext.Document();\n+ doc.add(new IntPoint(\"category\", 36));\n+ IllegalArgumentException exc = expectThrows(IllegalArgumentException.class, () -> mapping.parseContext(doc));\n+ assertThat(exc.getMessage(), containsString(\"Failed to parse context field [category]\"));\n }\n \n static void assertContextSuggestFields(IndexableField[] fields, int expected) {",
"filename": "core/src/test/java/org/elasticsearch/search/suggest/completion/CategoryContextMappingTests.java",
"status": "modified"
}
]
} |
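The essence of the fix recorded above is a filtering step over a document's indexed fields before their values are used as suggester contexts. Below is the same logic lifted into a standalone helper as a sketch only: the class and method names are invented, but the `instanceof` checks and the exception message mirror the diff, and Lucene plus the Elasticsearch 5.x mapper classes are assumed on the classpath.

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.IndexableField;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.StringFieldType;

// Standalone restatement of the filtering added to CategoryContextMapping.parseContext.
final class ContextValueExtractor {

    static Set<CharSequence> extract(String fieldName, IndexableField[] fields) {
        Set<CharSequence> values = new HashSet<>(fields.length);
        for (IndexableField field : fields) {
            if (field instanceof SortedDocValuesField
                    || field instanceof SortedSetDocValuesField
                    || field instanceof StoredField) {
                // doc values and stored fields duplicate the indexed value; skip them
            } else if (field.fieldType() instanceof KeywordFieldMapper.KeywordFieldType) {
                values.add(field.binaryValue().utf8ToString());
            } else if (field.fieldType() instanceof StringFieldType) {
                values.add(field.stringValue());
            } else {
                throw new IllegalArgumentException(
                        "Failed to parse context field [" + fieldName + "], only keyword and text fields are accepted");
            }
        }
        return values;
    }
}
```

Without the first branch, a `keyword`-mapped context path contributes both its indexed value and its doc-values twin, which is how the spurious "null" context in the issue arose.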
{
"body": "Im trying to use elastic to extract brands from a texts. For this I created a index containing brands a synonyms where the synonyms are analysed as keywords. I then search for brand with shingles (scripts to reproduce is below). This works great in elastic version 2.4.4. When I try this in elastic 5.4.3 it doesn't work at all, I looked at the data with the analyze endpoint and it looks like it should work.\r\n\r\nHad a disussion about how shingles work here: https://discuss.elastic.co/t/extracting-brands-in-documents-using-keyword-and-shingles/91873.\r\n\r\nWe agreed that this is a regression for elastic 5.4.3.\r\n\r\nScripts to reproduce:\r\n\r\n```\r\nPUT brand\r\n{\r\n \"settings\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"my_analyzer_keyword\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"keyword\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\"\r\n ]\r\n },\r\n \"my_analyzer_shingle\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\",\r\n \"shingle\"\r\n ]\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"brand\": {\r\n \"properties\": {\r\n \"keyword\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer_keyword\",\r\n \"search_analyzer\": \"my_analyzer_shingle\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nSome documents:\r\n\r\n```\r\nPOST /brand/brand/1\r\n{\r\n \"id\": 1,\r\n \"keyword\": \"nike\"\r\n}\r\nPOST /brand/brand/2\r\n{\r\n \"id\": 2,\r\n \"keyword\": \"adidas originals\"\r\n}\r\n```\r\nI then search like this:\r\n```\r\nPOST /brand/brand/_search\r\n{\r\n \"query\": {\r\n \"match\": {\r\n \"keyword\": \"I like nike shoes and adidas originals\"\r\n }\r\n }\r\n}\r\n```\r\nI expect to get **nike** and **adidas originals** as the result but I don't get anything back.\r\n\r\nLooking at how this is analysed we see: \r\nQuery:\r\n```\r\nGET _analyze\r\n{\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\",\r\n \"shingle\"\r\n ],\r\n \"char_filter\": [\r\n \"html_strip\"\r\n ],\r\n \"text\": [\r\n \"I like nike shoes and adidas originals\"\r\n ]\r\n}\r\n```\r\nResult (shortened for brevity):\r\n```\r\n{\r\n \"tokens\": [\r\n ...\r\n {\r\n \"token\": \"nike\",\r\n \"start_offset\": 7,\r\n \"end_offset\": 11,\r\n \"type\": \"<ALPHANUM>\",\r\n \"position\": 2\r\n },\r\n {\r\n \"token\": \"nike shoes\",\r\n \"start_offset\": 7,\r\n \"end_offset\": 17,\r\n \"type\": \"shingle\",\r\n \"position\": 2,\r\n \"positionLength\": 2\r\n },\r\n ...\r\n {\r\n \"token\": \"adidas\",\r\n \"start_offset\": 22,\r\n \"end_offset\": 28,\r\n \"type\": \"<ALPHANUM>\",\r\n \"position\": 5\r\n },\r\n {\r\n \"token\": \"adidas originals\",\r\n \"start_offset\": 22,\r\n \"end_offset\": 38,\r\n \"type\": \"shingle\",\r\n \"position\": 5,\r\n \"positionLength\": 2\r\n },\r\n {\r\n \"token\": \"originals\",\r\n \"start_offset\": 29,\r\n \"end_offset\": 38,\r\n \"type\": \"<ALPHANUM>\",\r\n \"position\": 6\r\n }\r\n ]\r\n}\r\n```\r\nIndex:\r\n```\r\nGET _analyze\r\n{\r\n \"tokenizer\": \"keyword\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\"\r\n ],\r\n \"char_filter\": [\r\n \"html_strip\"\r\n ],\r\n \"text\": [\r\n \"adidas originals\"\r\n ]\r\n}\r\n\r\n```\r\nResult:\r\n```\r\n{\r\n \"tokens\": [\r\n {\r\n \"token\": \"adidas originals\",\r\n \"start_offset\": 0,\r\n \"end_offset\": 16,\r\n \"type\": \"word\",\r\n \"position\": 0\r\n }\r\n ]\r\n}\r\n```\r\nTried it on elastic version 2.4.4 and there it worked just fine.\r\n\r\nI got the expected 
result \r\n```\r\n{\r\n \"took\": 73,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 0.003867892,\r\n \"hits\": [\r\n {\r\n \"_index\": \"brand\",\r\n \"_type\": \"brand\",\r\n \"_id\": \"2\",\r\n \"_score\": 0.003867892,\r\n \"_source\": {\r\n \"id\": 2,\r\n \"keyword\": \"adidas originals\"\r\n }\r\n },\r\n {\r\n \"_index\": \"brand\",\r\n \"_type\": \"brand\",\r\n \"_id\": \"1\",\r\n \"_score\": 0.003867892,\r\n \"_source\": {\r\n \"id\": 1,\r\n \"keyword\": \"nike\"\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```",
"comments": [
{
"body": "## It's a bug\r\n\r\nThanks, the bug affects only the default `shingle` filter so the workaround is to force a custom `shingle` filter in the mapping and use it instead.\r\nThe following restores the 2.x behavior:\r\n\r\n````\r\nPUT brand\r\n{\r\n \"settings\": {\r\n \"analysis\": {\r\n \"filter\": {\r\n \t\"my_shingle\": {\r\n \t\"type\": \"shingle\"\r\n \t}\r\n },\r\n \"analyzer\": {\r\n \"my_analyzer_keyword\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"keyword\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\"\r\n ]\r\n },\r\n \"my_analyzer_shingle\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"asciifolding\",\r\n \"lowercase\",\r\n \"my_shingle\"\r\n ]\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"brand\": {\r\n \"properties\": {\r\n \"keyword\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"my_analyzer_keyword\",\r\n \"search_analyzer\": \"my_analyzer_shingle\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n`````\r\n\r\n## The default setting for the `shingle` filter are not sane\r\n\r\nThough this issue hides a bigger problem, `shingles` of different size should not be mixed in the same field.\r\nIn 5.3 we introduced graph analysis at query time and this reveals this kind of problem where users are mixing unigrams and bi-grams on the same field (totally not your fault though since this is the default ;) ). \r\nThen in 5.3.1 we restored the old behavior automatically for analyzer that defines a graph `shingle` filter but this fix has not been applied to the already registered default `shingle` filter.\r\nSo I think we should also remove the ability to create such fields in 6, which means changing the default to not output unigrams and replace `min_gram`, `max_gram` with `gram_size`.\r\nWe discussed this is FixItFriday some times ago but I did not had time to work on it.\r\n\r\nI'll work on a fix for 5.x (we should also disable graph analysis for the already registered default shingle filter) and I'll open a new issue if needed for 6.\r\nIn the mean time you can use the work around described or try another approach with two queries, one that search for bi-grams and one for unigram. In your case you don't even have to reindex since the goal is to find exact matches in the document from partial queries.",
"created_at": "2017-07-06T04:45:07Z"
}
],
"number": 25555,
"title": "Shingles not working as expected"
} | {
"body": "This change disables the graph analysis on default `shingle` filter.\r\nThe pre-configured shingle filter produces shingles of different size.\r\nGraph analysis on such token stream is useless and dangerous as it may create too many paths.\r\n\r\nFixes #25555",
"number": 25853,
"review_comments": [],
"title": "Pre-configured shingle filter should disable graph analysis"
} | {
"commits": [
{
"message": "Pre-configured shingle filter should disable graph analysis\n\nThis change disables the graph analysis on default `shingle` filter.\nThe pre-configured shingle filter produces shingles of different size.\nGraph analysis on such token stream is useless and dangerous as it may create too many paths.\n\nFixes #25555"
}
],
"files": [
{
"diff": "@@ -46,6 +46,7 @@\n import org.apache.lucene.analysis.hi.HindiNormalizationFilter;\n import org.apache.lucene.analysis.in.IndicNormalizationFilter;\n import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n import org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter;\n import org.apache.lucene.analysis.miscellaneous.LengthFilter;\n import org.apache.lucene.analysis.miscellaneous.LimitTokenCountFilter;\n@@ -207,7 +208,17 @@ public List<PreConfiguredTokenFilter> getPreConfiguredTokenFilters() {\n filters.add(PreConfiguredTokenFilter.singleton(\"russian_stem\", false, input -> new SnowballFilter(input, \"Russian\")));\n filters.add(PreConfiguredTokenFilter.singleton(\"scandinavian_folding\", true, ScandinavianFoldingFilter::new));\n filters.add(PreConfiguredTokenFilter.singleton(\"scandinavian_normalization\", true, ScandinavianNormalizationFilter::new));\n- filters.add(PreConfiguredTokenFilter.singleton(\"shingle\", false, ShingleFilter::new));\n+ filters.add(PreConfiguredTokenFilter.singleton(\"shingle\", false, input -> {\n+ TokenStream ts = new ShingleFilter(input);\n+ /**\n+ * We disable the graph analysis on this token stream\n+ * because it produces shingles of different size.\n+ * Graph analysis on such token stream is useless and dangerous as it may create too many paths\n+ * since shingles of different size are not aligned in terms of positions.\n+ */\n+ ts.addAttribute(DisableGraphAttribute.class);\n+ return ts;\n+ }));\n filters.add(PreConfiguredTokenFilter.singleton(\"snowball\", false, input -> new SnowballFilter(input, \"English\")));\n filters.add(PreConfiguredTokenFilter.singleton(\"sorani_normalization\", true, SoraniNormalizationFilter::new));\n filters.add(PreConfiguredTokenFilter.singleton(\"stemmer\", false, PorterStemFilter::new));",
"filename": "modules/analysis-common/src/main/java/org/elasticsearch/analysis/common/CommonAnalysisPlugin.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,66 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.analysis.common;\n+\n+import org.apache.lucene.analysis.TokenStream;\n+import org.apache.lucene.analysis.Tokenizer;\n+import org.apache.lucene.analysis.core.WhitespaceTokenizer;\n+import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.analysis.AnalysisTestsHelper;\n+import org.elasticsearch.index.analysis.IndexAnalyzers;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.analysis.TokenFilterFactory;\n+import org.elasticsearch.index.query.Operator;\n+import org.elasticsearch.plugins.Plugin;\n+import org.elasticsearch.test.ESIntegTestCase;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.ESTokenStreamTestCase;\n+import org.elasticsearch.test.IndexSettingsModule;\n+\n+import java.io.StringReader;\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n+import static org.elasticsearch.test.ESTestCase.createTestAnalysis;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+\n+public class ShingleTokenFilterTests extends ESTokenStreamTestCase {\n+ public void testPreConfiguredShingleFilterDisableGraphAttribute() throws Exception {\n+ ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromSettings(\n+ Settings.builder()\n+ .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString())\n+ .put(\"index.analysis.filter.my_ascii_folding.type\", \"asciifolding\")\n+ .build(),\n+ new CommonAnalysisPlugin());\n+ TokenFilterFactory tokenFilter = analysis.tokenFilter.get(\"shingle\");\n+ Tokenizer tokenizer = new WhitespaceTokenizer();\n+ tokenizer.setReader(new StringReader(\"this is a test\"));\n+ TokenStream tokenStream = tokenFilter.create(tokenizer);\n+ assertTrue(tokenStream.hasAttribute(DisableGraphAttribute.class));\n+ }\n+}",
"filename": "modules/analysis-common/src/test/java/org/elasticsearch/analysis/common/ShingleTokenFilterTests.java",
"status": "added"
}
]
} |
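For anyone wanting to poke at this behaviour locally, the following sketch reproduces what the pre-configured filter does after the fix: wrap a `ShingleFilter` and add `DisableGraphAttribute` so query-time graph analysis is skipped. It is an illustration only; the class name and sample text are arbitrary, the imports are taken from the PR's own test, and Lucene 6.x plus the Elasticsearch analysis classes are assumed on the classpath.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;
import org.apache.lucene.analysis.shingle.ShingleFilter;

public class ShingleNoGraphSketch {
    public static void main(String[] args) throws Exception {
        Tokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader("i like nike shoes and adidas originals"));

        // Same wrapping as the pre-configured "shingle" filter in the PR: the filter emits
        // shingles of mixed sizes, so graph analysis at query time is switched off.
        TokenStream ts = new ShingleFilter(tokenizer);
        ts.addAttribute(DisableGraphAttribute.class);

        System.out.println(ts.hasAttribute(DisableGraphAttribute.class)); // prints: true
    }
}
```

The workaround quoted in the issue comment (declaring a custom `shingle` filter in the index settings) remains valid for clusters that cannot pick up the fix, because custom filters already went through this code path.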
{
"body": "**Elasticsearch version**: 5.5.0\r\n\r\n**Plugins installed**: [ x-pack ]\r\n\r\n**JVM version** (`java -version`): 1.8.0_92\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Windows 10\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n_parent fields on upgraded indices seem to cause, IMHO undesirable and unexpected, IllegalStateException warnings in the logs on startup even though the migration checker and documentation suggests these fields require no action.\r\n\r\nIf we ignore the warning, is it a concern that the upgraded _parent field is not a doc value? For performance perhaps?\r\n\r\n**Steps to reproduce**:\r\n 1. Create an index on 2.4.1\r\n````\r\nPUT test-index\r\n{\r\n \"mappings\": {\r\n \"my-child\": {\r\n \"properties\": {\r\n \"child-1\": {\r\n \"type\": \"long\"\r\n }\r\n },\r\n \"_parent\": {\r\n \"type\": \"my-parent\"\r\n }\r\n },\r\n \"my-parent\": {}\r\n }\r\n}\r\n````\r\n 2. Add parent and child documents\r\n````\r\nPUT test-index/my-parent/1\r\n{\r\n \"field-1\": \"testing\"\r\n}\r\n\r\nPUT test-index/my-child/1?parent=1\r\n{\r\n \"child-1\": 20\r\n}\r\n````\r\n 3. Upgrade to 5.5.0 (or copy 2.4.1 data folder to 5.5.0 data folder) and start\r\n\r\n**Provide logs (if relevant)**:\r\nLogs showing the index getting upgraded without incident:\r\n````\r\n[2017-07-24T11:21:29,740][INFO ][o.e.c.u.IndexFolderUpgrader] [test-index/6_nfai1tT0C9oZpnj6ztyw] upgrading [D:\\work\\Elasticsearch\\elasticsearch-5.5.0 - Upgraded\\data\\batman\\nodes\\0\\indices\\test-index] to new naming convention\r\n[2017-07-24T11:21:29,742][INFO ][o.e.c.u.IndexFolderUpgrader] [test-index/6_nfai1tT0C9oZpnj6ztyw] moved from [D:\\work\\Elasticsearch\\elasticsearch-5.5.0 - Upgraded\\data\\batman\\nodes\\0\\indices\\test-index] to [D:\\work\\Elasticsearch\\elasticsearch-5.5.0 - Upgraded\\data\\batman\\nodes\\0\\indices\\6_nfai1tT0C9oZpnj6ztyw]\r\n````\r\nThe exception that comes shortly thereafter:\r\n````\r\n[2017-07-24T11:21:34,221][WARN ][o.e.i.w.ShardIndexWarmerService] [Z6tXLYo] [test-index][3] failed to warm-up global ordinals for [_parent]\r\norg.elasticsearch.ElasticsearchException: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: unexpected docvalues type NONE for field '_parent' (expected one of [SORTED, SORTED_SET]). Re-index with correct docvalues type.\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.loadGlobal(SortedSetDVOrdinalsIndexFieldData.java:120) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.loadGlobal(SortedSetDVOrdinalsIndexFieldData.java:45) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.IndexWarmer$FieldDataWarmer.lambda$warmReader$1(IndexWarmer.java:141) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.0.jar:5.5.0]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_92]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_92]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_92]\r\nCaused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: unexpected docvalues type NONE for field '_parent' (expected one of [SORTED, SORTED_SET]). 
Re-index with correct docvalues type.\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:404) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:154) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.loadGlobal(SortedSetDVOrdinalsIndexFieldData.java:115) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n ... 6 more\r\nCaused by: java.lang.IllegalStateException: unexpected docvalues type NONE for field '_parent' (expected one of [SORTED, SORTED_SET]). Re-index with correct docvalues type.\r\n at org.apache.lucene.index.DocValues.checkField(DocValues.java:212) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]\r\n at org.apache.lucene.index.DocValues.getSortedSet(DocValues.java:306) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVBytesAtomicFieldData.getOrdinalsValues(SortedSetDVBytesAtomicFieldData.java:53) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder.build(GlobalOrdinalsBuilder.java:63) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.localGlobalDirect(SortedSetDVOrdinalsIndexFieldData.java:127) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.localGlobalDirect(SortedSetDVOrdinalsIndexFieldData.java:45) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$1(IndicesFieldDataCache.java:157) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:401) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:154) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n at org.elasticsearch.index.fielddata.plain.SortedSetDVOrdinalsIndexFieldData.loadGlobal(SortedSetDVOrdinalsIndexFieldData.java:115) ~[elasticsearch-5.5.0.jar:5.5.0]\r\n ... 6 more\r\n````",
"comments": [
{
"body": "This issue prevents global ordinals to be eagerly loaded which is now fixed in master and 5.x.\r\nThanks @robinpower ! ",
"created_at": "2017-07-24T11:13:38Z"
}
],
"number": 25849,
"title": "Warnings of \"IllegalStateException: unexpected docvalues type NONE for field '_parent'\" on upgraded index"
} | {
"body": "The default _parent field tries to load global ordinals because it is created with eager_global_ordinals=true.\r\nThis leads to an IllegalStateException because this field does not have doc_values.\r\nThis change explicitely sets eager_global_ordinals to false in order to avoid the ISE on startup.\r\n\r\nFixes #25849",
"number": 25851,
"review_comments": [],
"title": "The default _parent field should not try to load global ordinals"
} | {
"commits": [
{
"message": "The default _parent field should not try to load global ordinals\n\nThe default _parent field tries to load global ordinals because it is created with eager_global_ordinals=true.\nThis leads to an IllegalStateException because this field does not have doc_values.\nThis change explicitely sets eager_global_ordinals to false in order to avoid the ISE on startup.\n\nFixes #25849"
},
{
"message": "Fix ut"
}
],
"files": [
{
"diff": "@@ -65,7 +65,7 @@ public static class Defaults {\n FIELD_TYPE.setIndexOptions(IndexOptions.NONE);\n FIELD_TYPE.setHasDocValues(true);\n FIELD_TYPE.setDocValuesType(DocValuesType.SORTED);\n- FIELD_TYPE.setEagerGlobalOrdinals(true);\n+ FIELD_TYPE.setEagerGlobalOrdinals(false);\n FIELD_TYPE.freeze();\n }\n }\n@@ -78,6 +78,8 @@ public static class Builder extends MetadataFieldMapper.Builder<Builder, ParentF\n \n public Builder(String documentType) {\n super(Defaults.NAME, new ParentFieldType(Defaults.FIELD_TYPE, documentType), Defaults.FIELD_TYPE);\n+ // Defaults to true\n+ eagerGlobalOrdinals(true);\n this.documentType = documentType;\n builder = this;\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -124,6 +124,8 @@ public void testNoParentNullFieldCreatedIfNoParentSpecified() throws Exception {\n Set<String> allFields = new HashSet<>(mapperService.simpleMatchToIndexNames(\"*\"));\n assertTrue(allFields.contains(\"_parent\"));\n assertFalse(allFields.contains(\"_parent#null\"));\n+ MappedFieldType fieldType = mapperService.fullName(\"_parent\");\n+ assertFalse(fieldType.eagerGlobalOrdinals());\n }\n \n private static int getNumberOfFieldWithParentPrefix(ParseContext.Document doc) {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/ParentFieldMapperTests.java",
"status": "modified"
}
]
} |
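To make the warning in the record above easier to interpret, here is a minimal, self-contained reproduction of the underlying Lucene behaviour: asking for sorted-set doc values on a field that was indexed without doc values raises the same IllegalStateException quoted in the warm-up log. This is a sketch only; the field name and in-memory index are arbitrary, and lucene-core plus lucene-analyzers-common (roughly the 6.x versions Elasticsearch 5.5 ships) are assumed on the classpath.

```java
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.store.RAMDirectory;

public class MissingDocValuesSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new KeywordAnalyzer()))) {
            Document doc = new Document();
            // Indexed but without doc values, like the default _parent field of an upgraded 2.x index.
            doc.add(new StringField("_parent", "1", Field.Store.NO));
            writer.addDocument(doc);
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            LeafReader leaf = reader.leaves().get(0).reader();
            // Expected to throw IllegalStateException: unexpected docvalues type NONE for field
            // '_parent' (expected one of [SORTED, SORTED_SET]) -- the message from the warm-up warning.
            DocValues.getSortedSet(leaf, "_parent");
        }
    }
}
```

The PR avoids this path by not requesting eager global ordinals for the default `_parent` field type, while explicitly mapped parent/child fields keep eager loading through the builder.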
{
"body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**:\r\n\"version\" : {\r\n \"number\" : \"5.5.0\",\r\n \"build_hash\" : \"260387d\",\r\n \"build_date\" : \"2017-06-30T23:16:05.735Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"6.6.0\"\r\n }\r\n\r\n**Plugins installed**: none\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_131\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_131-b11)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)\r\n\r\n**OS version** \r\nDISTRIB_ID=Ubuntu\r\nDISTRIB_RELEASE=14.04\r\nDISTRIB_CODENAME=trusty\r\nDISTRIB_DESCRIPTION=\"Ubuntu 14.04.5 LTS\"\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nAttempting to index a value of \"1.0\" to byte field mapped with coerce=true fails with the following error: number_format_exception\r\n\r\n**Steps to reproduce**:\r\n\r\ncurl -X PUT localhost:9200/testing -d '{ \"mappings\" : { \"coerceme\" : { \"properties\" : { \"test\" : { \"type\" : \"byte\", \"coerce\" : true}}}}}'\r\n\r\ncurl -X PUT localhost:9200/testing/coerceme/1 -d '{ \"test\" : \"1.0\" }'\r\n\r\nResponse:\r\n{\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse [test]\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse [test]\",\"caused_by\":{\"type\":\"number_format_exception\",\"reason\":\"For input string: \\\"1.0\\\"\"}},\"status\":400}\r\n\r\n\r\n\r\n\r\n",
"comments": [
{
"body": "On 5.5 the following will succeed:\r\n\r\ncurl -X PUT localhost:9200/testing/coerceme/2 -d '{ \"test\" : 1.1 }'\r\n\r\ncurl -X PUT localhost:9200/testing/coerceme/3 -d '{ \"test\" : \"1\" }'\r\n\r\nSo it appears it will coerce a decimal, but only coerce a numeric string if it is a whole number with no decimal element.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"created_at": "2017-07-20T21:22:12Z"
},
{
"body": "Oddly enough, I was just about to file this exact same bug report for integers.\r\n\r\nAre you sure you saw this work on 5.4.1? I was able to reproduce this on both a 5.3.2 and 2.3.2 cluster.\r\n\r\nFrom debugging it, this is where the exception is thrown:\r\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java#L156\r\n\r\nSo I am not sure this has ever worked, despite the documentation suggesting it should be able to coerce a value like \"5.0\" to an integer:\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/coerce.html",
"created_at": "2017-07-20T22:00:21Z"
},
{
"body": "Ah, I was looking at this code:\r\nhttps://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java#L389-L405\r\n\r\nThat Byte.Parse() will throw something similar; in the case that value is a string, it seems to disregard coerce altogether?\r\n\r\nI can ramp up my 5.4.1 instance again, but when (I thought) I saw it work, dynamic was not set to strict - so that may have had something to do with it?\r\n\r\n\r\n\r\n\r\n",
"created_at": "2017-07-20T22:20:46Z"
},
{
"body": "Just tried again on 5.4.1 and it failed regardless of the dynamic setting... Perhaps I made a typo or some other doofus error earlier. Have edited the issue to prevent (further) confusion :) ",
"created_at": "2017-07-20T22:32:48Z"
},
{
"body": "Agreed it would be nice to fix it. I added labels accordingly.",
"created_at": "2017-07-21T12:20:14Z"
},
{
"body": "I can put together a fix for this.",
"created_at": "2017-07-21T13:48:12Z"
}
],
"number": 25819,
"title": "Indexing a value of \"1.0\" to byte field with coerce=true fails on 5.5.0"
} | {
"body": "This changes makes it so you can index a value like \"1.0\" or \"1.1\" into whole\r\nnumber field types like byte and integer. Without this change then the above\r\nvalues would have resulted in an error, even with coerce set to true.\r\n\r\nCloses #25819",
"number": 25835,
"review_comments": [
{
"body": "This does not preserve the precision of large numbers that have a decimal part and exceed the fractional bits of double but are within min/max Long. For example, while both `4115420654264075766` and `\"4115420654264075766\"` will be indexed as `4115420654264075766`, something like `\"4115420654264075766.1\"` will not be indexed as `4115420654264075766` due to String -> Double -> Long conversion.\r\n\r\nDo we want to try to do something a bit more clever to handle this edge case?",
"created_at": "2017-07-21T18:56:18Z"
},
{
"body": "I removed this test since this change breaks it _but_ I don't think that this test makes sense in the current state. For example, before this change that test would also break if you changed it from `\"2.0\"` to a decimal literal like `2.5`. Now it is consistent (it will accept and truncate decimals, whether you quote them or not).",
"created_at": "2017-07-21T18:56:39Z"
},
{
"body": "Sorry I deleted your comment by mistake, so I am adding it back:\r\n\r\n> This only rejects coerce=false values for numbers with decimals but it won't reject strings, which seems to run contrary to how coercion is supposed to work. I did not change this behaviour. I just wanted to call it out as inconsistent with how the parse methods in AbstractXContentParser work, should this be changed?",
"created_at": "2017-07-24T12:25:25Z"
},
{
"body": "makes sense",
"created_at": "2017-07-24T12:33:24Z"
},
{
"body": "Agreed it should be changed to be consistent, but let's do it in a separate PR? Thinking more about it, I'm wondering whether we should remove the `coerce` option instead. I opened #25861.",
"created_at": "2017-07-24T12:39:12Z"
},
{
"body": "I think I'd just put a comment that we might not fail in all cases, but I don't think I would try to address it?",
"created_at": "2017-07-24T12:45:47Z"
},
{
"body": "Do you mean a comment in the code or in the documentation? (i.e. a warning here https://www.elastic.co/guide/en/elasticsearch/reference/current/coerce.html)",
"created_at": "2017-07-24T13:45:04Z"
},
{
"body": "I mean in the code but just noticed there was one already",
"created_at": "2017-07-25T14:34:44Z"
}
],
"title": "Coerce decimal strings for whole number types by truncating the decimal part"
} | {
"commits": [
{
"message": "Coerce decimal strings for whole number types by truncating the decimal part\n\nThis changes makes it so you can index a value like \"1.0\" or \"1.1\" into whole\nnumber field types like byte and integer. Without this change then the above\nvalues would have resulted in an error, even with coerce set to true.\n\nCloses #25819"
}
],
"files": [
{
"diff": "@@ -133,7 +133,7 @@ public short shortValue(boolean coerce) throws IOException {\n Token token = currentToken();\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Short.class);\n- return Short.parseShort(text());\n+ return (short) Double.parseDouble(text());\n }\n short result = doShortValue();\n ensureNumberConversion(coerce, result, Short.class);\n@@ -147,13 +147,12 @@ public int intValue() throws IOException {\n return intValue(DEFAULT_NUMBER_COERCE_POLICY);\n }\n \n-\n @Override\n public int intValue(boolean coerce) throws IOException {\n Token token = currentToken();\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Integer.class);\n- return Integer.parseInt(text());\n+ return (int) Double.parseDouble(text());\n }\n int result = doIntValue();\n ensureNumberConversion(coerce, result, Integer.class);\n@@ -172,7 +171,13 @@ public long longValue(boolean coerce) throws IOException {\n Token token = currentToken();\n if (token == Token.VALUE_STRING) {\n checkCoerceString(coerce, Long.class);\n- return Long.parseLong(text());\n+ // longs need special handling so we don't lose precision while parsing\n+ String stringValue = text();\n+ try {\n+ return Long.parseLong(stringValue);\n+ } catch (NumberFormatException e) {\n+ return (long) Double.parseDouble(stringValue);\n+ }\n }\n long result = doLongValue();\n ensureNumberConversion(coerce, result, Long.class);",
"filename": "core/src/main/java/org/elasticsearch/common/xcontent/support/AbstractXContentParser.java",
"status": "modified"
},
{
"diff": "@@ -312,13 +312,7 @@ public List<Field> createFields(String name, Number value,\n DOUBLE(\"double\", NumericType.DOUBLE) {\n @Override\n Double parse(Object value, boolean coerce) {\n- if (value instanceof Number) {\n- return ((Number) value).doubleValue();\n- }\n- if (value instanceof BytesRef) {\n- value = ((BytesRef) value).utf8ToString();\n- }\n- return Double.parseDouble(value.toString());\n+ return objectToDouble(value);\n }\n \n @Override\n@@ -389,20 +383,20 @@ public List<Field> createFields(String name, Number value,\n BYTE(\"byte\", NumericType.BYTE) {\n @Override\n Byte parse(Object value, boolean coerce) {\n+ double doubleValue = objectToDouble(value);\n+\n+ if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a byte\");\n+ }\n+ if (!coerce && doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n+\n if (value instanceof Number) {\n- double doubleValue = ((Number) value).doubleValue();\n- if (doubleValue < Byte.MIN_VALUE || doubleValue > Byte.MAX_VALUE) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a byte\");\n- }\n- if (!coerce && doubleValue % 1 != 0) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n- }\n return ((Number) value).byteValue();\n }\n- if (value instanceof BytesRef) {\n- value = ((BytesRef) value).utf8ToString();\n- }\n- return Byte.parseByte(value.toString());\n+\n+ return (byte) doubleValue;\n }\n \n @Override\n@@ -445,29 +439,25 @@ Number valueForSearch(Number value) {\n SHORT(\"short\", NumericType.SHORT) {\n @Override\n Short parse(Object value, boolean coerce) {\n+ double doubleValue = objectToDouble(value);\n+\n+ if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a short\");\n+ }\n+ if (!coerce && doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n+\n if (value instanceof Number) {\n- double doubleValue = ((Number) value).doubleValue();\n- if (doubleValue < Short.MIN_VALUE || doubleValue > Short.MAX_VALUE) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a short\");\n- }\n- if (!coerce && doubleValue % 1 != 0) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n- }\n return ((Number) value).shortValue();\n }\n- if (value instanceof BytesRef) {\n- value = ((BytesRef) value).utf8ToString();\n- }\n- return Short.parseShort(value.toString());\n+\n+ return (short) doubleValue;\n }\n \n @Override\n Short parse(XContentParser parser, boolean coerce) throws IOException {\n- int value = parser.intValue(coerce);\n- if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a short\");\n- }\n- return (short) value;\n+ return parser.shortValue(coerce);\n }\n \n @Override\n@@ -501,20 +491,20 @@ Number valueForSearch(Number value) {\n INTEGER(\"integer\", NumericType.INT) {\n @Override\n Integer parse(Object value, boolean coerce) {\n+ double doubleValue = objectToDouble(value);\n+\n+ if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for an integer\");\n+ }\n+ if (!coerce && doubleValue % 1 != 0) 
{\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n+\n if (value instanceof Number) {\n- double doubleValue = ((Number) value).doubleValue();\n- if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for an integer\");\n- }\n- if (!coerce && doubleValue % 1 != 0) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n- }\n return ((Number) value).intValue();\n }\n- if (value instanceof BytesRef) {\n- value = ((BytesRef) value).utf8ToString();\n- }\n- return Integer.parseInt(value.toString());\n+\n+ return (int) doubleValue;\n }\n \n @Override\n@@ -612,20 +602,27 @@ public List<Field> createFields(String name, Number value,\n LONG(\"long\", NumericType.LONG) {\n @Override\n Long parse(Object value, boolean coerce) {\n+ double doubleValue = objectToDouble(value);\n+\n+ if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a long\");\n+ }\n+ if (!coerce && doubleValue % 1 != 0) {\n+ throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n+ }\n+\n if (value instanceof Number) {\n- double doubleValue = ((Number) value).doubleValue();\n- if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] is out of range for a long\");\n- }\n- if (!coerce && doubleValue % 1 != 0) {\n- throw new IllegalArgumentException(\"Value [\" + value + \"] has a decimal part\");\n- }\n return ((Number) value).longValue();\n }\n- if (value instanceof BytesRef) {\n- value = ((BytesRef) value).utf8ToString();\n+\n+ // longs need special handling so we don't lose precision while parsing\n+ String stringValue = (value instanceof BytesRef) ? ((BytesRef) value).utf8ToString() : value.toString();\n+\n+ try {\n+ return Long.parseLong(stringValue);\n+ } catch (NumberFormatException e) {\n+ return (long) Double.parseDouble(stringValue);\n }\n- return Long.parseLong(value.toString());\n }\n \n @Override\n@@ -781,6 +778,23 @@ boolean hasDecimalPart(Object number) {\n return Math.signum(Double.parseDouble(value.toString()));\n }\n \n+ /**\n+ * Converts an Object to a double by checking it against known types first\n+ */\n+ private static double objectToDouble(Object value) {\n+ double doubleValue;\n+\n+ if (value instanceof Number) {\n+ doubleValue = ((Number) value).doubleValue();\n+ } else if (value instanceof BytesRef) {\n+ doubleValue = Double.parseDouble(((BytesRef) value).utf8ToString());\n+ } else {\n+ doubleValue = Double.parseDouble(value.toString());\n+ }\n+\n+ return doubleValue;\n+ }\n+\n }\n \n public static final class NumberFieldType extends MappedFieldType {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/NumberFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -34,6 +34,7 @@\n \n public abstract class AbstractNumericFieldMapperTestCase extends ESSingleNodeTestCase {\n protected Set<String> TYPES;\n+ protected Set<String> WHOLE_TYPES;\n protected IndexService indexService;\n protected DocumentMapperParser parser;\n \n@@ -92,6 +93,14 @@ public void testCoerce() throws Exception {\n \n protected abstract void doTestCoerce(String type) throws IOException;\n \n+ public void testDecimalCoerce() throws Exception {\n+ for (String type : WHOLE_TYPES) {\n+ doTestDecimalCoerce(type);\n+ }\n+ }\n+\n+ protected abstract void doTestDecimalCoerce(String type) throws IOException;\n+\n public void testNullValue() throws IOException {\n for (String type : TYPES) {\n doTestNullValue(type);",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/AbstractNumericFieldMapperTestCase.java",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,7 @@ public class NumberFieldMapperTests extends AbstractNumericFieldMapperTestCase {\n @Override\n protected void setTypeList() {\n TYPES = new HashSet<>(Arrays.asList(\"byte\", \"short\", \"integer\", \"long\", \"float\", \"double\"));\n+ WHOLE_TYPES = new HashSet<>(Arrays.asList(\"byte\", \"short\", \"integer\", \"long\"));\n }\n \n @Override\n@@ -185,6 +186,28 @@ public void doTestCoerce(String type) throws IOException {\n assertThat(e.getCause().getMessage(), containsString(\"passed as String\"));\n }\n \n+ @Override\n+ protected void doTestDecimalCoerce(String type) throws IOException {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", type).endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping));\n+\n+ assertEquals(mapping, mapper.mappingSource().toString());\n+\n+ ParsedDocument doc = mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"field\", \"7.89\")\n+ .endObject()\n+ .bytes(),\n+ XContentType.JSON));\n+\n+ IndexableField[] fields = doc.rootDoc().getFields(\"field\");\n+ IndexableField pointField = fields[0];\n+ assertEquals(7, pointField.numericValue().doubleValue(), 0d);\n+ }\n+\n public void testIgnoreMalformed() throws Exception {\n for (String type : TYPES) {\n doTestIgnoreMalformed(type);\n@@ -301,6 +324,7 @@ protected void doTestNullValue(String type) throws IOException {\n assertFalse(dvField.fieldType().stored());\n }\n \n+ @Override\n public void testEmptyName() throws IOException {\n // after version 5\n for (String type : TYPES) {\n@@ -314,4 +338,5 @@ public void testEmptyName() throws IOException {\n assertThat(e.getMessage(), containsString(\"name cannot be empty string\"));\n }\n }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/NumberFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.mapper;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.FloatPoint;\n import org.apache.lucene.document.HalfFloatPoint;\n@@ -35,6 +36,7 @@\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.TestUtil;\n import org.elasticsearch.index.mapper.MappedFieldType.Relation;\n@@ -43,6 +45,7 @@\n import org.junit.Before;\n \n import java.io.IOException;\n+import java.nio.charset.StandardCharsets;\n import java.util.Arrays;\n import java.util.function.Supplier;\n \n@@ -246,6 +249,37 @@ public void testConversions() {\n assertEquals(1.1d, NumberType.DOUBLE.parse(1.1, true));\n }\n \n+ public void testCoercions() {\n+ assertEquals((byte) 5, NumberType.BYTE.parse((short) 5, true));\n+ assertEquals((byte) 5, NumberType.BYTE.parse(\"5\", true));\n+ assertEquals((byte) 5, NumberType.BYTE.parse(\"5.0\", true));\n+ assertEquals((byte) 5, NumberType.BYTE.parse(\"5.9\", true));\n+ assertEquals((byte) 5, NumberType.BYTE.parse(new BytesRef(\"5.3\".getBytes(StandardCharsets.UTF_8)), true));\n+\n+ assertEquals((short) 5, NumberType.SHORT.parse((byte) 5, true));\n+ assertEquals((short) 5, NumberType.SHORT.parse(\"5\", true));\n+ assertEquals((short) 5, NumberType.SHORT.parse(\"5.0\", true));\n+ assertEquals((short) 5, NumberType.SHORT.parse(\"5.9\", true));\n+ assertEquals((short) 5, NumberType.SHORT.parse(new BytesRef(\"5.3\".getBytes(StandardCharsets.UTF_8)), true));\n+\n+ assertEquals(5, NumberType.INTEGER.parse((byte) 5, true));\n+ assertEquals(5, NumberType.INTEGER.parse(\"5\", true));\n+ assertEquals(5, NumberType.INTEGER.parse(\"5.0\", true));\n+ assertEquals(5, NumberType.INTEGER.parse(\"5.9\", true));\n+ assertEquals(5, NumberType.INTEGER.parse(new BytesRef(\"5.3\".getBytes(StandardCharsets.UTF_8)), true));\n+ assertEquals(Integer.MAX_VALUE, NumberType.INTEGER.parse(Integer.MAX_VALUE, true));\n+\n+ assertEquals((long) 5, NumberType.LONG.parse((byte) 5, true));\n+ assertEquals((long) 5, NumberType.LONG.parse(\"5\", true));\n+ assertEquals((long) 5, NumberType.LONG.parse(\"5.0\", true));\n+ assertEquals((long) 5, NumberType.LONG.parse(\"5.9\", true));\n+ assertEquals((long) 5, NumberType.LONG.parse(new BytesRef(\"5.3\".getBytes(StandardCharsets.UTF_8)), true));\n+\n+ // these will lose precision if they get treated as a double\n+ assertEquals(-4115420654264075766L, NumberType.LONG.parse(\"-4115420654264075766\", true));\n+ assertEquals(-4115420654264075766L, NumberType.LONG.parse(-4115420654264075766L, true));\n+ }\n+\n public void testHalfFloatRange() throws IOException {\n // make sure the accuracy loss of half floats only occurs at index time\n // this test checks that searching half floats yields the same results as",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/NumberFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -55,6 +55,7 @@ public class RangeFieldMapperTests extends AbstractNumericFieldMapperTestCase {\n @Override\n protected void setTypeList() {\n TYPES = new HashSet<>(Arrays.asList(\"date_range\", \"ip_range\", \"float_range\", \"double_range\", \"integer_range\", \"long_range\"));\n+ WHOLE_TYPES = new HashSet<>(Arrays.asList(\"integer_range\", \"long_range\"));\n }\n \n private Object getFrom(String type) {\n@@ -264,6 +265,40 @@ public void doTestCoerce(String type) throws IOException {\n containsString(\"failed to parse date\"), containsString(\"is not an IP string literal\")));\n }\n \n+ @Override\n+ protected void doTestDecimalCoerce(String type) throws IOException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", type);\n+\n+ mapping = mapping.endObject().endObject().endObject().endObject();\n+ DocumentMapper mapper = parser.parse(\"type\", new CompressedXContent(mapping.string()));\n+\n+ assertEquals(mapping.string(), mapper.mappingSource().toString());\n+\n+ ParsedDocument doc1 = mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"field\")\n+ .field(GT_FIELD.getPreferredName(), \"2.34\")\n+ .field(LT_FIELD.getPreferredName(), \"5.67\")\n+ .endObject()\n+ .endObject().bytes(),\n+ XContentType.JSON));\n+\n+ ParsedDocument doc2 = mapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"field\")\n+ .field(GT_FIELD.getPreferredName(), \"2\")\n+ .field(LT_FIELD.getPreferredName(), \"5\")\n+ .endObject()\n+ .endObject().bytes(),\n+ XContentType.JSON));\n+\n+ IndexableField[] fields1 = doc1.rootDoc().getFields(\"field\");\n+ IndexableField[] fields2 = doc2.rootDoc().getFields(\"field\");\n+\n+ assertEquals(fields1[1].binaryValue(), fields2[1].binaryValue());\n+ }\n+\n @Override\n protected void doTestNullValue(String type) throws IOException {\n XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n@@ -386,4 +421,5 @@ public void testSerializeDefaults() throws Exception {\n assertTrue(got, got.contains(\"\\\"locale\\\":\" + \"\\\"\" + Locale.ROOT + \"\\\"\") == type.equals(\"date_range\"));\n }\n }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/RangeFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.mapper;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.apache.lucene.document.DoubleRange;\n import org.apache.lucene.document.FloatRange;\n import org.apache.lucene.document.InetAddressPoint;",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -46,19 +46,6 @@ public void testParseValidFromStrings() throws Exception {\n assertNotNull(GeoGridAggregationBuilder.parse(\"geohash_grid\", stParser));\n }\n \n- public void testParseErrorOnNonIntPrecision() throws Exception {\n- XContentParser stParser = createParser(JsonXContent.jsonXContent, \"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":\\\"2.0\\\"}\");\n- XContentParser.Token token = stParser.nextToken();\n- assertSame(XContentParser.Token.START_OBJECT, token);\n- try {\n- GeoGridAggregationBuilder.parse(\"geohash_grid\", stParser);\n- fail();\n- } catch (ParsingException ex) {\n- assertThat(ex.getCause(), instanceOf(NumberFormatException.class));\n- assertEquals(\"For input string: \\\"2.0\\\"\", ex.getCause().getMessage());\n- }\n- }\n-\n public void testParseErrorOnBooleanPrecision() throws Exception {\n XContentParser stParser = createParser(JsonXContent.jsonXContent, \"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":false}\");\n XContentParser.Token token = stParser.nextToken();",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java",
"status": "modified"
}
]
} |
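
The diff in the record above rewrites the whole-number parsing so that a value is first converted to a double via `objectToDouble`, range-checked, and only then truncated, while longs are parsed from their string form so large values keep their precision. Below is a minimal standalone sketch of that flow; it is not the mapper's real code (the class name is made up, BytesRef handling is omitted, only the JDK is assumed):

```java
// Standalone sketch of the whole-type coercion flow from the NumberFieldMapper diff above.
public final class WholeNumberCoercion {

    // mirrors the objectToDouble(...) helper: convert known types to a double first
    static double objectToDouble(Object value) {
        if (value instanceof Number) {
            return ((Number) value).doubleValue();
        }
        return Double.parseDouble(value.toString());
    }

    // coerce a value to an int; decimal parts are rejected when coerce == false
    static int parseInteger(Object value, boolean coerce) {
        double doubleValue = objectToDouble(value);
        if (doubleValue < Integer.MIN_VALUE || doubleValue > Integer.MAX_VALUE) {
            throw new IllegalArgumentException("Value [" + value + "] is out of range for an integer");
        }
        if (!coerce && doubleValue % 1 != 0) {
            throw new IllegalArgumentException("Value [" + value + "] has a decimal part");
        }
        if (value instanceof Number) {
            return ((Number) value).intValue();
        }
        return (int) doubleValue;
    }

    // longs get special handling so values like -4115420654264075766 do not lose precision
    static long parseLong(Object value, boolean coerce) {
        double doubleValue = objectToDouble(value);
        if (doubleValue < Long.MIN_VALUE || doubleValue > Long.MAX_VALUE) {
            throw new IllegalArgumentException("Value [" + value + "] is out of range for a long");
        }
        if (!coerce && doubleValue % 1 != 0) {
            throw new IllegalArgumentException("Value [" + value + "] has a decimal part");
        }
        if (value instanceof Number) {
            return ((Number) value).longValue();
        }
        try {
            return Long.parseLong(value.toString());
        } catch (NumberFormatException e) {
            return (long) Double.parseDouble(value.toString());
        }
    }

    public static void main(String[] args) {
        System.out.println(parseInteger("5.9", true));               // 5
        System.out.println(parseLong("-4115420654264075766", true)); // precision preserved
    }
}
```
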
{
"body": "When a replica processes out of order operations, it can drop some due to version comparisons. In the past that would have resulted in a VersionConflictException being thrown and ignored higher up. We changed this to have a cleaner flow that doesn't use exceptions. However, when backporting that change from master, we also back ported a change that isn't good for 5.x: we started storing these out of order ops in the translog. This is needed for the sequence number push, which also gives us some mechanism to deal with it later on during recovery. With the seq# this is not needed and can lead to deletes being lost (see the added test `testRecoverFromStoreWithOutOfOrderDelete` which fails without the fix).\r\n\r\nNote that master also suffers from a similar issue but we will be pursuing a different solution there (still under discussion).",
"comments": [
{
"body": "Thx @jasontedor ",
"created_at": "2017-07-10T08:24:39Z"
}
],
"number": 25592,
"title": "Engine - Do not store operations that are not index into lucene in the translog (5.x only)"
} | {
"body": "When a replica processes out of order operations, it can drop some due to version comparisons. In the past that would have resulted in a VersionConflictException being thrown and the operation was totally ignored. With the seq# push, we started storing these operations in the translog (but not indexing them into lucene) in order to have complete op histories to facilitate ops based recoveries. This in turn had the undesired effect that deleted docs may be resurrected during recovery in some extreme edge situation (see a complete explanation below). This PR contains a simple fix, which is also an optimization for the recovery process, incoming operation that have a seq# lower than the current local checkpoint (i.e., have already been processed) should not be indexed into lucene. Note that sometimes we can also skip storing them in the translog, but this is not required for the fix and is more complicated.\r\n\r\nThis is the equivalent of #25592\r\n\r\n## More details on resurrected ops \r\n\r\nConsider two operations: \r\n - Index d1, seq no 1\r\n - Delete d1, seq no 3\r\n\r\nOn a replica they come out of order:\r\n - Translog gen 1 contains:\r\n - delete (seqNo 3)\r\n - Translog gen 2 contains:\r\n - index (seqNo 1) (wasn't indexed into lucene, but put into the translog)\r\n - another operation (seqNo 10)\r\n - Translog gen 3 \r\n - another op (seqNo 9)\r\n - Engine commits with:\r\n - local checkpoint 9\r\n - refers to gen 2 \r\n\r\nIf this replica becomes a primary:\r\n - Local recovery will replay translog gen 2 and up, causing index #1 to be re-index. \r\n - Even if recovery will start at gen 3, the translog retention policy will cause file based recovery to replay the entire translog. If it happens to start at gen 2 (but not 1), we will run into the same problem.\r\n\r\n#### Some context - out of order delivery involving deletes:\r\n\r\nOn normal operations, this relies on the gc_deletes setting. We assume that the setting represents an upper bound on the time between the index and the delete operation. The index operation will be detected as stale based on the tombstone map in the LiveVersionMap.\r\n\r\nRecovery presents a challenge as it can replay an old index operation that was in the translog and override a delete operation that was done when the engine was opened (and is not part of the replayed snapshot). To deal with this situation, we disable GC deletes (i.e. retain all deletes) for the duration of recoveries. This means that the delete operation will be remembered and the index operation ignored.\r\n\r\nBoth of the above scenarios (local recover + peer recovery) create a situation where the delete operation is never replayed. It this \"lost\" as lucene doesn't remember it happened and our LiveVersionMap is populated with it.\r\n\r\n#### Solution:\r\n\r\nNote that both local and peer recovery represent a scenario where we replay translog ops on top of an existing lucene index, potentially with ongoing indexing. Therefore we can treat them the same.\r\n\r\nThe local checkpoint in Lucene represent a marker indicating that all operations below it were performed on the index. This is the only form of \"memory\" that we have that relates to deletes. 
If we can achieve the following:\r\n1) All ops below the local checkpoint are not indexed to lucene.\r\n2) All ops above the local checkpoint are\r\n\r\nIt will mean that all variants are covered: (i# == index op seq#, d# == delete op seq#, lc == local checkpoint in commit)\r\n1) i# < d# <= lc - document is already deleted in lucene and stays that way.\r\n2) i# <= lc < d# - delete is replayed on index - document is deleted\r\n3) lc < i# < d# - index is replayed and then delete - document is deleted.\r\n\r\nMore formally - we want to make sure that for all ops that performed on the primary o1 and o2, if o2 is processed on a shard before o1, o1 will be dropped. We have the following scenarios\r\n\r\n1) If both o1 or o2 are not included in the replayed snapshot and are above it (i.e., have a higher seq#), they fall under the gc deletes assumption.\r\n2) If both o1 is part of the replayed snapshot but o2 is above it:\r\n\t- if o2 arrives first, o1 must arrive due to the recovery and potentially via replication as well. since gc deletes is disabled we are guaranteed to know of o2's existence.\r\n3) If both o2 and o1 are part of the replayed snapshot:\r\n\t- we fall under the same scenarios as #2 - disabling GC deletes ensures we know of o2 if it arrives first.\r\n4) If o1 falls before the snapshot and o2 is either part of the snapshot or higher:\r\n\t- Since the snapshot is guaranteed to contain all ops that are not part of lucene and are above the lc in the commit used, this means that o1 is part of lucene and o1 < local checkpoint. This means it won't be processed and we're not in the scenario we're discussing.\r\n5) If o2 falls before the snapshot but o1 is part of it:\r\n\t- by the same reasoning above, o2 is < local checkpoint. Since o1 < o2, we also get o1 < local checkpoint and this will be dropped.\r\n\r\n\r\n#### Implementation:\r\n\r\nFor local recovery, we can filter the ops we read of the translog and avoid replaying them. For peer recovery this is tricky as we do want to send the operations in order to have some history on the target shard. Filtering operations on the engine level (i.e., not indexing to lucene if op seq# <= lc) would work for both. \r\n\r\n\r\n\r\n",
"number": 25827,
"review_comments": [
{
"body": "can you add an assertion on the origin here? i.e. PEER_RECOVERY or REPLICA",
"created_at": "2017-07-21T14:10:10Z"
}
],
"title": "Engine - do not index operations with seq# lower than the local checkpoint into lucene"
} | {
"commits": [
{
"message": "failing test"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into engine_dont_index_below_lcp"
},
{
"message": "recovery failing test"
},
{
"message": "fix"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into engine_dont_index_below_lcp"
},
{
"message": "improve comment"
}
],
"files": [
{
"diff": "@@ -693,14 +693,23 @@ private IndexingStrategy planIndexingAsNonPrimary(Index index) throws IOExceptio\n // this allows to ignore the case where a document was found in the live version maps in\n // a delete state and return false for the created flag in favor of code simplicity\n final OpVsLuceneDocStatus opVsLucene;\n- if (index.seqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) {\n- opVsLucene = compareOpToLuceneDocBasedOnSeqNo(index);\n- } else {\n+ if (index.seqNo() == SequenceNumbersService.UNASSIGNED_SEQ_NO) {\n // This can happen if the primary is still on an old node and send traffic without seq# or we recover from translog\n // created by an old version.\n assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) :\n \"index is newly created but op has no sequence numbers. op: \" + index;\n opVsLucene = compareOpToLuceneDocBasedOnVersions(index);\n+ } else if (index.seqNo() <= seqNoService.getLocalCheckpoint()){\n+ // the operation seq# is lower then the current local checkpoint and thus was already put into lucene\n+ // this can happen during recovery where older operations are sent from the translog that are already\n+ // part of the lucene commit (either from a peer recovery or a local translog)\n+ // or due to concurrent indexing & recovery. For the former it is important to skip lucene as the operation in\n+ // question may have been deleted in an out of order op that is not replayed.\n+ // See testRecoverFromStoreWithOutOfOrderDelete for an example of local recovery\n+ // See testRecoveryWithOutOfOrderDelete for an example of peer recovery\n+ opVsLucene = OpVsLuceneDocStatus.OP_STALE_OR_EQUAL;\n+ } else {\n+ opVsLucene = compareOpToLuceneDocBasedOnSeqNo(index);\n }\n if (opVsLucene == OpVsLuceneDocStatus.OP_STALE_OR_EQUAL) {\n plan = IndexingStrategy.processButSkipLucene(false, index.seqNo(), index.version());\n@@ -979,12 +988,21 @@ private DeletionStrategy planDeletionAsNonPrimary(Delete delete) throws IOExcept\n // this allows to ignore the case where a document was found in the live version maps in\n // a delete state and return true for the found flag in favor of code simplicity\n final OpVsLuceneDocStatus opVsLucene;\n- if (delete.seqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) {\n- opVsLucene = compareOpToLuceneDocBasedOnSeqNo(delete);\n- } else {\n+ if (delete.seqNo() == SequenceNumbersService.UNASSIGNED_SEQ_NO) {\n assert config().getIndexSettings().getIndexVersionCreated().before(Version.V_6_0_0_alpha1) :\n \"index is newly created but op has no sequence numbers. op: \" + delete;\n opVsLucene = compareOpToLuceneDocBasedOnVersions(delete);\n+ } else if (delete.seqNo() <= seqNoService.getLocalCheckpoint()) {\n+ // the operation seq# is lower then the current local checkpoint and thus was already put into lucene\n+ // this can happen during recovery where older operations are sent from the translog that are already\n+ // part of the lucene commit (either from a peer recovery or a local translog)\n+ // or due to concurrent indexing & recovery. For the former it is important to skip lucene as the operation in\n+ // question may have been deleted in an out of order op that is not replayed.\n+ // See testRecoverFromStoreWithOutOfOrderDelete for an example of local recovery\n+ // See testRecoveryWithOutOfOrderDelete for an example of peer recovery\n+ opVsLucene = OpVsLuceneDocStatus.OP_STALE_OR_EQUAL;\n+ } else {\n+ opVsLucene = compareOpToLuceneDocBasedOnSeqNo(delete);\n }\n \n final DeletionStrategy plan;",
"filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -125,12 +125,14 @@\n import java.util.concurrent.atomic.AtomicLong;\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.function.BiConsumer;\n+import java.util.function.Consumer;\n import java.util.function.LongFunction;\n import java.util.stream.Collectors;\n import java.util.stream.IntStream;\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.emptySet;\n+import static org.elasticsearch.cluster.routing.TestShardRouting.newShardRouting;\n import static org.elasticsearch.common.lucene.Lucene.cleanLuceneIndex;\n import static org.elasticsearch.common.xcontent.ToXContent.EMPTY_PARAMS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -339,7 +341,7 @@ public void onFailure(Exception e) {\n // promote the replica\n final ShardRouting replicaRouting = indexShard.routingEntry();\n final ShardRouting primaryRouting =\n- TestShardRouting.newShardRouting(\n+ newShardRouting(\n replicaRouting.shardId(),\n replicaRouting.currentNodeId(),\n null,\n@@ -416,7 +418,7 @@ public void testPrimaryFillsSeqNoGapsOnPromotion() throws Exception {\n // promote the replica\n final ShardRouting replicaRouting = indexShard.routingEntry();\n final ShardRouting primaryRouting =\n- TestShardRouting.newShardRouting(\n+ newShardRouting(\n replicaRouting.shardId(),\n replicaRouting.currentNodeId(),\n null,\n@@ -458,13 +460,13 @@ public void testOperationPermitsOnPrimaryShards() throws InterruptedException, E\n \n if (randomBoolean()) {\n // relocation target\n- indexShard = newShard(TestShardRouting.newShardRouting(shardId, \"local_node\", \"other node\",\n+ indexShard = newShard(newShardRouting(shardId, \"local_node\", \"other node\",\n true, ShardRoutingState.INITIALIZING, AllocationId.newRelocation(AllocationId.newInitializing())));\n } else if (randomBoolean()) {\n // simulate promotion\n indexShard = newStartedShard(false);\n ShardRouting replicaRouting = indexShard.routingEntry();\n- ShardRouting primaryRouting = TestShardRouting.newShardRouting(replicaRouting.shardId(), replicaRouting.currentNodeId(), null,\n+ ShardRouting primaryRouting = newShardRouting(replicaRouting.shardId(), replicaRouting.currentNodeId(), null,\n true, ShardRoutingState.STARTED, replicaRouting.allocationId());\n indexShard.updateShardState(primaryRouting, indexShard.getPrimaryTerm() + 1, (shard, listener) -> {}, 0L,\n Collections.singleton(indexShard.routingEntry().allocationId().getId()),\n@@ -520,7 +522,7 @@ public void testOperationPermitOnReplicaShards() throws Exception {\n case 1: {\n // initializing replica / primary\n final boolean relocating = randomBoolean();\n- ShardRouting routing = TestShardRouting.newShardRouting(shardId, \"local_node\",\n+ ShardRouting routing = newShardRouting(shardId, \"local_node\",\n relocating ? \"sourceNode\" : null,\n relocating ? 
randomBoolean() : false,\n ShardRoutingState.INITIALIZING,\n@@ -533,7 +535,7 @@ public void testOperationPermitOnReplicaShards() throws Exception {\n // relocation source\n indexShard = newStartedShard(true);\n ShardRouting routing = indexShard.routingEntry();\n- routing = TestShardRouting.newShardRouting(routing.shardId(), routing.currentNodeId(), \"otherNode\",\n+ routing = newShardRouting(routing.shardId(), routing.currentNodeId(), \"otherNode\",\n true, ShardRoutingState.RELOCATING, AllocationId.newRelocation(routing.allocationId()));\n IndexShardTestCase.updateRoutingEntry(indexShard, routing);\n indexShard.relocated(\"test\", primaryContext -> {});\n@@ -1377,6 +1379,47 @@ protected void doRun() throws Exception {\n closeShards(shard);\n }\n \n+ public void testRecoverFromStoreWithOutOfOrderDelete() throws IOException {\n+ final IndexShard shard = newStartedShard(false);\n+ final Consumer<Mapping> mappingConsumer = getMappingUpdater(shard, \"test\");\n+ shard.applyDeleteOperationOnReplica(1, 1, 2, \"test\", \"id\", VersionType.EXTERNAL, mappingConsumer);\n+ shard.getEngine().rollTranslogGeneration(); // isolate the delete in it's own generation\n+ shard.applyIndexOperationOnReplica(0, 1, 1, VersionType.EXTERNAL, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, false,\n+ SourceToParse.source(shard.shardId().getIndexName(), \"test\", \"id\", new BytesArray(\"{}\"), XContentType.JSON), mappingConsumer);\n+\n+ // index a second item into the second generation, skipping seq# 2. Local checkpoint is now 1, which will make this generation stick\n+ // around\n+ shard.applyIndexOperationOnReplica(3, 1, 1, VersionType.EXTERNAL, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, false,\n+ SourceToParse.source(shard.shardId().getIndexName(), \"test\", \"id2\", new BytesArray(\"{}\"), XContentType.JSON), mappingConsumer);\n+\n+ final int translogOps;\n+ if (randomBoolean()) {\n+ logger.info(\"--> flushing shard\");\n+ flushShard(shard);\n+ translogOps = 2;\n+ } else if (randomBoolean()) {\n+ shard.getEngine().rollTranslogGeneration();\n+ translogOps = 3;\n+ } else {\n+ translogOps = 3;\n+ }\n+\n+ final ShardRouting replicaRouting = shard.routingEntry();\n+ IndexShard newShard = reinitShard(shard,\n+ newShardRouting(replicaRouting.shardId(), replicaRouting.currentNodeId(), true, ShardRoutingState.INITIALIZING,\n+ RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE));\n+ DiscoveryNode localNode = new DiscoveryNode(\"foo\", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT);\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.routingEntry(), localNode, null));\n+ assertTrue(newShard.recoverFromStore());\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().recoveredOperations());\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().totalOperations());\n+ assertEquals(translogOps, newShard.recoveryState().getTranslog().totalOperationsOnStart());\n+ assertEquals(100.0f, newShard.recoveryState().getTranslog().recoveredPercent(), 0.01f);\n+ updateRoutingEntry(newShard, ShardRoutingHelper.moveToStarted(newShard.routingEntry()));\n+ assertDocCount(newShard, 1);\n+ closeShards(newShard);\n+ }\n+\n public void testRecoverFromStore() throws IOException {\n final IndexShard shard = newStartedShard(true);\n int totalOps = randomInt(10);\n@@ -1939,7 +1982,7 @@ public void testRecoverFromLocalShard() throws IOException {\n sourceShard.refresh(\"test\");\n \n \n- ShardRouting targetRouting = TestShardRouting.newShardRouting(new ShardId(\"index_1\", 
\"index_1\", 0), \"n1\", true,\n+ ShardRouting targetRouting = newShardRouting(new ShardId(\"index_1\", \"index_1\", 0), \"n1\", true,\n ShardRoutingState.INITIALIZING, RecoverySource.LocalShardsRecoverySource.INSTANCE);\n \n final IndexShard targetShard;",
"filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java",
"status": "modified"
},
{
"diff": "@@ -19,9 +19,14 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.replication.ESIndexLevelReplicationTestCase;\n import org.elasticsearch.index.replication.RecoveryDuringReplicationTests;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -79,4 +84,52 @@ public void testRetentionPolicyChangeDuringRecovery() throws Exception {\n assertBusy(() -> assertThat(replica.getTranslog().totalOperations(), equalTo(0)));\n }\n }\n+\n+ public void testRecoveryWithOutOfOrderDelete() throws Exception {\n+ try (ReplicationGroup shards = createGroup(1)) {\n+ shards.startAll();\n+ // create out of order delete and index op on replica\n+ final IndexShard orgReplica = shards.getReplicas().get(0);\n+ orgReplica.applyDeleteOperationOnReplica(1, 1, 2, \"type\", \"id\", VersionType.EXTERNAL, u -> {});\n+ orgReplica.getTranslog().rollGeneration(); // isolate the delete in it's own generation\n+ orgReplica.applyIndexOperationOnReplica(0, 1, 1, VersionType.EXTERNAL, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, false,\n+ SourceToParse.source(orgReplica.shardId().getIndexName(), \"type\", \"id\", new BytesArray(\"{}\"), XContentType.JSON),\n+ u -> {});\n+\n+ // index a second item into the second generation, skipping seq# 2. Local checkpoint is now 1, which will make this generation\n+ // stick around\n+ orgReplica.applyIndexOperationOnReplica(3, 1, 1, VersionType.EXTERNAL, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, false,\n+ SourceToParse.source(orgReplica.shardId().getIndexName(), \"type\", \"id2\", new BytesArray(\"{}\"), XContentType.JSON), u -> {});\n+\n+ final int translogOps;\n+ if (randomBoolean()) {\n+ if (randomBoolean()) {\n+ logger.info(\"--> flushing shard (translog will be trimmed)\");\n+ IndexMetaData.Builder builder = IndexMetaData.builder(orgReplica.indexSettings().getIndexMetaData());\n+ builder.settings(Settings.builder().put(orgReplica.indexSettings().getSettings())\n+ .put(IndexSettings.INDEX_TRANSLOG_RETENTION_AGE_SETTING.getKey(), \"-1\")\n+ .put(IndexSettings.INDEX_TRANSLOG_RETENTION_SIZE_SETTING.getKey(), \"-1\")\n+ );\n+ orgReplica.indexSettings().updateIndexMetaData(builder.build());\n+ orgReplica.onSettingsChanged();\n+ translogOps = 3; // 2 ops + seqno gaps\n+ } else {\n+ logger.info(\"--> flushing shard (translog will be retained)\");\n+ translogOps = 4; // 3 ops + seqno gaps\n+ }\n+ flushShard(orgReplica);\n+ } else {\n+ translogOps = 4; // 3 ops + seqno gaps\n+ }\n+\n+ final IndexShard orgPrimary = shards.getPrimary();\n+ shards.promoteReplicaToPrimary(orgReplica).get(); // wait for primary/replica sync to make sure seq# gap is closed.\n+\n+ IndexShard newReplica = shards.addReplicaWithExistingPath(orgPrimary.shardPath(), orgPrimary.routingEntry().currentNodeId());\n+ shards.recoverReplica(newReplica);\n+ shards.assertAllEqual(1);\n+\n+ assertThat(newReplica.getTranslog().totalOperations(), equalTo(translogOps));\n+ }\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/indices/recovery/RecoveryTests.java",
"status": "modified"
}
]
} |
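
The fix in the diff above boils down to one extra branch in the replica's planning step: any operation whose seq# is at or below the local checkpoint is marked stale and never indexed into Lucene. A toy, self-contained restatement of that decision follows; the names are illustrative, the real logic lives in InternalEngine and also handles the version-based legacy path for pre-6.0 primaries:

```java
// Toy restatement of the replica-side planning branch added in this PR; not the real
// InternalEngine code, just the same decision expressed in isolation.
public class ReplicaPlanSketch {

    enum OpVsLuceneDocStatus { OP_NEWER, OP_STALE_OR_EQUAL }

    static final long UNASSIGNED_SEQ_NO = -2; // ops from pre-6.0 primaries carry no seq#

    static OpVsLuceneDocStatus plan(long opSeqNo, long localCheckpoint) {
        if (opSeqNo != UNASSIGNED_SEQ_NO && opSeqNo <= localCheckpoint) {
            // the op is already reflected in the Lucene commit (local or peer recovery replay,
            // or concurrent indexing & recovery), so indexing it again could resurrect a doc
            // that a later, out-of-order delete already removed
            return OpVsLuceneDocStatus.OP_STALE_OR_EQUAL;
        }
        // otherwise fall through to the usual seq#-based comparison against the live doc
        return OpVsLuceneDocStatus.OP_NEWER;
    }

    public static void main(String[] args) {
        // scenario from the PR description: delete of d1 (seq# 3) already applied, commit has
        // local checkpoint 9, and recovery replays the old index op for d1 with seq# 1
        System.out.println(plan(1, 9));   // OP_STALE_OR_EQUAL -> dropped, delete is preserved
        System.out.println(plan(10, 9));  // OP_NEWER -> indexed normally
    }
}
```
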
{
"body": "**Elasticsearch version**: 5.4.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): 1.8.0_131\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nsearch query fails when explain parameter is set to true.\r\n\r\n**Steps to reproduce**:\r\nUse transport client in java code.\r\nFails randomly in tests, looks like it is data dependent.\r\n\r\n**Provide logs (if relevant)**:\r\nException in thread \"elasticsearch[BHybiKz][search][T#6]\" java.lang.AssertionError: input 1.3065281278905116E-105 out of float scope for function score deviation: 1.0\r\n\tat org.elasticsearch.common.lucene.search.function.CombineFunction.toFloat(CombineFunction.java:133)\r\n\tat org.elasticsearch.index.query.functionscore.DecayFunctionBuilder$AbstractDistanceScoreFunction$1.explainScore(DecayFunctionBuilder.java:551)\r\n\tat org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery$CustomBoostFactorWeight.explain(FiltersFunctionScoreQuery.java:220)\r\n\tat org.apache.lucene.search.IndexSearcher.explain(IndexSearcher.java:723)\r\n\tat org.apache.lucene.search.IndexSearcher.explain(IndexSearcher.java:700)\r\n\tat org.elasticsearch.search.internal.ContextIndexSearcher.explain(ContextIndexSearcher.java:140)\r\n\tat org.elasticsearch.search.fetch.subphase.ExplainFetchSubPhase.hitExecute(ExplainFetchSubPhase.java:41)\r\n\tat org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:161)\r\n\tat org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:298)\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:273)\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:339)\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:336)\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)\r\n\tat org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:627)\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\n\r\n",
"comments": [
{
"body": "I think we should just remove this assertion that failed, we can't guaratee it will pass all the time.",
"created_at": "2017-06-21T11:30:45Z"
}
],
"number": 25330,
"title": "Search with explain sometimes fail"
} | {
"body": "We cannot guarantee that the result of computations will be in the float range,\r\nsince it depends on the data and how scores are computed. We already use doubles\r\nas intermediate representations and cast to a float as a final step, which is\r\nthe right thing to do. Small doubles will just be rounded to zero, there is not\r\nmuch we can or should do about it.\r\n\r\nCloses #25330",
"number": 25806,
"review_comments": [],
"title": "Remove assertion about deviation when casting to a float."
} | {
"commits": [
{
"message": "Remove assertion about deviation when casting to a float.\n\nWe cannot guarantee that the result of computations will be in the float range,\nsince it depends on the data and how scores are computed. We already use doubles\nas intermediate representations and cast to a float as a final step, which is\nthe right thing to do. Small doubles will just be rounded to zero, there is not\nmuch we can or should do about it.\n\nCloses #25330"
}
],
"files": [
{
"diff": "@@ -31,7 +31,7 @@ public enum CombineFunction implements Writeable {\n MULTIPLY {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat(queryScore * Math.min(funcScore, maxBoost));\n+ return (float) (queryScore * Math.min(funcScore, maxBoost));\n }\n \n @Override\n@@ -48,7 +48,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma\n REPLACE {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat(Math.min(funcScore, maxBoost));\n+ return (float) (Math.min(funcScore, maxBoost));\n }\n \n @Override\n@@ -64,7 +64,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma\n SUM {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat(queryScore + Math.min(funcScore, maxBoost));\n+ return (float) (queryScore + Math.min(funcScore, maxBoost));\n }\n \n @Override\n@@ -79,23 +79,23 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma\n AVG {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat((Math.min(funcScore, maxBoost) + queryScore) / 2.0);\n+ return (float) ((Math.min(funcScore, maxBoost) + queryScore) / 2.0);\n }\n \n @Override\n public Explanation explain(Explanation queryExpl, Explanation funcExpl, float maxBoost) {\n Explanation minExpl = Explanation.match(Math.min(funcExpl.getValue(), maxBoost), \"min of:\",\n funcExpl, Explanation.match(maxBoost, \"maxBoost\"));\n return Explanation.match(\n- toFloat((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), \"avg of\",\n+ (float) ((Math.min(funcExpl.getValue(), maxBoost) + queryExpl.getValue()) / 2.0), \"avg of\",\n queryExpl, minExpl);\n }\n \n },\n MIN {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat(Math.min(queryScore, Math.min(funcScore, maxBoost)));\n+ return (float) (Math.min(queryScore, Math.min(funcScore, maxBoost)));\n }\n \n @Override\n@@ -112,7 +112,7 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma\n MAX {\n @Override\n public float combine(double queryScore, double funcScore, double maxBoost) {\n- return toFloat(Math.max(queryScore, Math.min(funcScore, maxBoost)));\n+ return (float) (Math.max(queryScore, Math.min(funcScore, maxBoost)));\n }\n \n @Override\n@@ -129,16 +129,6 @@ public Explanation explain(Explanation queryExpl, Explanation funcExpl, float ma\n \n public abstract float combine(double queryScore, double funcScore, double maxBoost);\n \n- public static float toFloat(double input) {\n- assert deviation(input) <= 0.001 : \"input \" + input + \" out of float scope for function score deviation: \" + deviation(input);\n- return (float) input;\n- }\n-\n- private static double deviation(double input) { // only with assert!\n- float floatVersion = (float) input;\n- return Double.compare(floatVersion, input) == 0 || input == 0.0d ? 0 : 1.d - (floatVersion) / input;\n- }\n-\n public abstract Explanation explain(Explanation queryExpl, Explanation funcExpl, float maxBoost);\n \n @Override",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/CombineFunction.java",
"status": "modified"
},
{
"diff": "@@ -96,7 +96,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE\n String defaultStr = missing != null ? \"?:\" + missing : \"\";\n double score = score(docId, subQueryScore.getValue());\n return Explanation.match(\n- CombineFunction.toFloat(score),\n+ (float) score,\n String.format(Locale.ROOT,\n \"field value function: %s(doc['%s'].value%s * factor=%s)\", modifierStr, field, defaultStr, boostFactor));\n }",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/FieldValueFactorFunction.java",
"status": "modified"
},
{
"diff": "@@ -206,7 +206,7 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n FilterFunction filterFunction = filterFunctions[i];\n Explanation functionExplanation = filterFunction.function.getLeafScoreFunction(context).explainScore(doc, expl);\n double factor = functionExplanation.getValue();\n- float sc = CombineFunction.toFloat(factor);\n+ float sc = (float) factor;\n Explanation filterExplanation = Explanation.match(sc, \"function score, product of:\",\n Explanation.match(1.0f, \"match filter: \" + filterFunction.filter.toString()), functionExplanation);\n filterExplanations.add(filterExplanation);\n@@ -219,7 +219,7 @@ public Explanation explain(LeafReaderContext context, int doc) throws IOExceptio\n Explanation factorExplanation;\n if (filterExplanations.size() > 0) {\n factorExplanation = Explanation.match(\n- CombineFunction.toFloat(score),\n+ (float) score,\n \"function score, score mode [\" + scoreMode.toString().toLowerCase(Locale.ROOT) + \"]\",\n filterExplanations);\n ",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/FiltersFunctionScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -84,7 +84,7 @@ public double score(int docId, float subQueryScore) throws IOException {\n public Explanation explainScore(int docId, Explanation subQueryScore) throws IOException {\n String field = fieldData == null ? null : fieldData.getFieldName();\n return Explanation.match(\n- CombineFunction.toFloat(score(docId, subQueryScore.getValue())),\n+ (float) score(docId, subQueryScore.getValue()),\n \"random score function (seed: \" + originalSeed + \", field: \" + field + \")\");\n }\n };",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -109,7 +109,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE\n subQueryScore.getValue(), \"_score: \",\n subQueryScore);\n return Explanation.match(\n- CombineFunction.toFloat(score), explanation,\n+ (float) score, explanation,\n scoreExp);\n }\n return exp;",
"filename": "core/src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -543,7 +543,7 @@ public Explanation explainScore(int docId, Explanation subQueryScore) throws IOE\n return Explanation.noMatch(\"No value for the distance\");\n }\n return Explanation.match(\n- CombineFunction.toFloat(score(docId, subQueryScore.getValue())),\n+ (float) score(docId, subQueryScore.getValue()),\n \"Function for field \" + getFieldName() + \":\",\n func.explainFunction(getDistanceString(ctx, docId), distance.doubleValue(), scale));\n }",
"filename": "core/src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionBuilder.java",
"status": "modified"
}
]
} |
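
The failure in the record above is triggered purely by the double-to-float cast at the end of score combination: a score such as 1.3065281278905116E-105 is a perfectly valid double but underflows to 0.0f, so the removed deviation assertion could never be guaranteed. A two-line demonstration using only the JDK (no Elasticsearch code involved):

```java
public class FloatCastDemo {
    public static void main(String[] args) {
        // the score from the reported assertion failure: valid as a double, underflows as a float
        double tiny = 1.3065281278905116E-105;
        System.out.println((float) tiny);                        // 0.0
        // ordinary combined scores, e.g. a query score times a capped function score, cast fine
        System.out.println((float) (0.5 * Math.min(2.0, 3.0)));  // 1.0
    }
}
```
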
{
"body": "In 5.5.0 which has the new ip_range datatype, the search query fails with\r\n\"type\": \"class_cast_exception\",\r\n\"reason\": \"org.apache.lucene.util.BytesRef cannot be cast to java.lang.String\"\r\n\r\n```\r\nDELETE model_range_index\r\n\r\nPUT model_range_index\r\n{\r\n \"mappings\": {\r\n \"my_type\": {\r\n \"properties\": {\r\n \"my_ip_range\": {\r\n \"type\": \"ip_range\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT model_range_index/my_type/1\r\n{\r\n \"my_ip_range\" : { \r\n \"gte\" : \"0.0.0.0\",\r\n \"lte\" : \"255.255.255.255\"\r\n }\r\n}\r\n\r\nGET model_range_index/_mapping/my_type/field/my_ip_range\r\n\r\nPOST model_range_index/_search\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"my_ip_range\" : { \r\n \"from\" : \"0.0.0.0\",\r\n \"to\" : \"255.255.255.255\",\r\n \"relation\" : \"intersects\" \r\n }\r\n }\r\n }\r\n}\r\n```",
"comments": [],
"number": 25636,
"title": "Search by ip_range fails"
} | {
"body": "Closes #25636",
"number": 25768,
"review_comments": [],
"title": "Fix parsing of ip range queries."
} | {
"commits": [
{
"message": "Fix parsing of ip range queries.\n\nCloses #25636"
}
],
"files": [
{
"diff": "@@ -452,7 +452,14 @@ public InetAddress parseTo(RangeFieldType fieldType, XContentParser parser, bool\n }\n @Override\n public InetAddress parse(Object value, boolean coerce) {\n- return value instanceof InetAddress ? (InetAddress) value : InetAddresses.forString((String) value);\n+ if (value instanceof InetAddress) {\n+ return (InetAddress) value;\n+ } else {\n+ if (value instanceof BytesRef) {\n+ value = ((BytesRef) value).utf8ToString();\n+ }\n+ return InetAddresses.forString(value.toString());\n+ }\n }\n @Override\n public InetAddress minValue() {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -29,10 +29,12 @@\n import org.apache.lucene.queries.BinaryDocValuesRangeQuery;\n import org.apache.lucene.search.IndexOrDocValuesQuery;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.ShapeRelation;\n import org.elasticsearch.common.joda.Joda;\n+import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.mapper.RangeFieldMapper.RangeType;\n@@ -267,4 +269,10 @@ private Object nextTo(Object from) throws Exception {\n return (Float)from + DISTANCE;\n }\n }\n+\n+ public void testParseIp() {\n+ assertEquals(InetAddresses.forString(\"::1\"), RangeFieldMapper.RangeType.IP.parse(InetAddresses.forString(\"::1\"), randomBoolean()));\n+ assertEquals(InetAddresses.forString(\"::1\"), RangeFieldMapper.RangeType.IP.parse(\"::1\", randomBoolean()));\n+ assertEquals(InetAddresses.forString(\"::1\"), RangeFieldMapper.RangeType.IP.parse(new BytesRef(\"::1\"), randomBoolean()));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java",
"status": "modified"
}
]
} |
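
The root cause of the class_cast_exception above is that range query bounds can arrive as Lucene BytesRef terms, and the old parse method cast them straight to String. The following is a minimal sketch of the fixed branch, not the actual RangeFieldMapper code: it assumes lucene-core is on the classpath for BytesRef and uses the JDK's InetAddress.getByName as a stand-in for Elasticsearch's InetAddresses.forString:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.apache.lucene.util.BytesRef; // assumes lucene-core is available

public class IpRangeParseSketch {

    // mirrors the fixed RangeType.IP.parse(...): unwrap BytesRef before string parsing
    static InetAddress parse(Object value) throws UnknownHostException {
        if (value instanceof InetAddress) {
            return (InetAddress) value;
        }
        if (value instanceof BytesRef) {
            value = ((BytesRef) value).utf8ToString();
        }
        return InetAddress.getByName(value.toString()); // stand-in for InetAddresses.forString
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("0.0.0.0"));
        System.out.println(parse(new BytesRef("255.255.255.255"))); // no ClassCastException anymore
    }
}
```
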
{
"body": "raised here following [this conversation](https://discuss.elastic.co/t/when-using-inner-hits-on-nested-query-we-are-getting-an-index-out-of-bounds-exception/89968) with Martijn .\r\n\r\n<!-- Bug report -->\r\n\r\nConfig: ES 5.4, AWS Linux, single node (test server), 3 shards, 0 replicas.\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nLinux 4.9.27-14.31.amzn1.x86_64 #1 SMP Wed May 10 01:58:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Plugins installed**: [none]\r\n\r\n**JVM version** (`java -version`): build 1.8.0_131-b11\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWe have a set of documents which are replicated to Elastic from Couchbase.\r\n\r\nin the document, we have a \"holiday product\", this product has a nested array of \"prices\", this price object contains fields such as number of passengers, date, promo code, price etc. For each holiday product, this list of prices can be very long as there can be hundreds/thousands of permutations.\r\n\r\nWhen running a query against the data, we don't want to carry the 1000's of lines of data over the wire (can be >5MB) so are trying to use inner_hits to only return the rows that match. (approx 10KB)\r\n\r\nwe have a query such as: \r\n```\r\n{\r\n\"_source\": false,\r\n \"query\": {\r\n \"nested\": {\r\n \"path\": \"doc.products.calculatedPrices\",\r\n \"score_mode\": \"avg\",\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n { \"match\": { \"doc.products.calculatedPrices.promoCode\": \"DRH725C\" } },\r\n { \"match\": { \"doc.products.calculatedPrices.pax\": 2 } },\r\n { \"range\": { \"doc.products.calculatedPrices.now\" : {\"gte\": 100, \"lte\":120} } }\r\n ]\r\n } \r\n\t },\r\n\t \"inner_hits\": {}\r\n }\r\n }\r\n}\r\n```\r\nthis generates the error:\r\n```\r\n{\r\n \"took\": 41,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 3,\r\n \"successful\": 2,\r\n \"failed\": 1,\r\n \"failures\": [\r\n {\r\n \"shard\": 2,\r\n \"index\": \"testavail5\",\r\n \"node\": \"AfXKKfqrSPqOrNO7mLreEQ\",\r\n \"reason\": {\r\n \"type\": \"index_out_of_bounds_exception\",\r\n \"reason\": \"Index: 7290, Size: 134\"\r\n }\r\n }\r\n ]\r\n },\r\n \"hits\": {\r\n \"total\": 62,\r\n \"max_score\": 5.845086,\r\n \"hits\": []\r\n }\r\n}\r\n```\r\nOn this test server, we are running a single node with 3 shards. 
so not sure why one shard reports as failed.\r\n\r\nif we change the\r\n`inner_hits\": {}` to `inner_hits\": {\"_source\":false}`\r\n\r\nwe get results but obviously don't get any useful information in the output!\r\n\r\ntrace log:\r\n```\r\n[2017-06-20T11:46:58,972][TRACE][o.e.s.SearchService ] [AfXKKfq] Fetch phase failed\r\njava.lang.IndexOutOfBoundsException: Index: 7290, Size: 134\r\n at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[?:1.8.0_131]\r\n at java.util.ArrayList.get(ArrayList.java:429) ~[?:1.8.0_131]\r\n at org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:256) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitExecute(InnerHitsFetchSubPhase.java:65) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:161) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:417) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:394) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:391) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:627) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.0.jar:5.4.0]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-06-20T11:46:58,973][DEBUG][o.e.a.s.TransportSearchAction] [AfXKKfq] [6] Failed to execute fetch phase\r\norg.elasticsearch.transport.RemoteTransportException: [AfXKKfq][172.31.35.8:9300][indices:data/read/search[phase/fetch/id]]\r\nCaused by: java.lang.IndexOutOfBoundsException: Index: 7290, Size: 134\r\n at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[?:1.8.0_131]\r\n at java.util.ArrayList.get(ArrayList.java:429) ~[?:1.8.0_131]\r\n at org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:256) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitExecute(InnerHitsFetchSubPhase.java:65) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:161) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:417) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:394) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at 
org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:391) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:627) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.4.0.jar:5.4.0]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.4.0.jar:5.4.0]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n```\r\n\r\n",
"comments": [
{
"body": "@BLZB0B Are you able to share the 62 documents that matched with that query? (sending it me privately is good too) This would make it easier to reproduce the error. I haven't been able yet to figure out what the cause is.",
"created_at": "2017-06-21T09:07:15Z"
},
{
"body": "Hi Martijn,\r\n\r\nI've sent a link to your gmail account..\r\n\r\nRegards\r\n\r\nPhil\r\n",
"created_at": "2017-06-21T11:17:50Z"
},
{
"body": "@BLZB0B I've taken a look at the document that causes this error and your mapping and the reason this happens is because the logic that extracts the relevant nested part from the `_source` doesn't work well when a nested object field is wrapped inside an object field. \r\n\r\nIn your case the `calculatedPrices` nested field is wrapped in a `products` object field, which is again wrapped in a `doc` object field. The extract nested source logic falsely assumes that all `calculatedPrices` elements are in the first level, while these elements actually are on the third level.\r\n\r\nI think the nested source extraction logic can be fixed to flatten all the levels that don't use a nested field mapper, but it is going to make the this logic more complicated and unfortunately it is already complicated. I currently lean towards throwing a descriptive error (including a hint with a workaround) in the case of when nested fields are wrapped by regular object fields. Also if we in the future decide to change how the source is stored (as is descibed in #9034) then the extraction of the nested source is no longer needed.\r\n\r\nThere are two workarounds:\r\n* Change `doc` and `products` field to be of type `nested` then the extract nested source logic should work. This does also mean that you need to nested the `nested` query, so for the query you shared, you will need to first use `nested` query for `doc` level then for the `doc.products` level and then `doc.products.calculatedPrices` level.\r\n* The workaround that you're already using; disabling fetching the nested source (`_source` to `false` inside inner hits definition) and enable fetching nested doc values fields (using `docvalue_fields` parameter in the inner hits definition).",
"created_at": "2017-06-21T14:32:20Z"
},
{
"body": "Thanks Martijn,\r\n\r\nI've updated our mapping as suggested and are getting results back with \"inner_hits\": { } rather than the error mentioned above.\r\n\r\nA more useful error message would help as you suggested.\r\n\r\nI have a question related to the above but not part of the bug, so will switch to the forum rather than ask here if that's OK.\r\n\r\nRegards\r\n\r\nPhil",
"created_at": "2017-06-22T16:20:18Z"
},
{
"body": "This was discussed in the fix it friday meeting and there was agreement on making the nested source extraction not more complicated than it already is. So we should throw a descriptive error and document the two possible workarounds.",
"created_at": "2017-06-23T13:44:55Z"
},
{
"body": "Thanks @martijnvg ",
"created_at": "2017-06-23T15:08:28Z"
}
],
"number": 25315,
"title": "Using inner_hits on nested query causes an index_out_of_bounds_exception"
} | {
"body": "The nested source fetch logic can't properly select the part of the source that belongs to a specific nested document if a nested object field's parent object field is non nested.\r\n\r\nPR for #25315",
"number": 25749,
"review_comments": [
{
"body": "You are only checking the first one, but I think we need to iterate over entries from 0 to nested.getOffset() ?",
"created_at": "2017-08-08T10:34:18Z"
},
{
"body": "When the parent object is not nested then `XContentMapValues.extractValue(...)` extracts the values from two or more layers resulting in a list of list being returned, because there is one nested level, but in the _source there are two ore more levels. This is why I think only the first element of nestedParsedSource needs to be checked. Checking upto `nested.getOffset()` will likely cause an AOBE. \r\n\r\nI'll add a comment explaining that.",
"created_at": "2017-08-09T06:56:03Z"
}
],
"title": "Do not allow inner hits that fetch _source and have a non nested object field as parent"
} | {
"commits": [
{
"message": "inner hits: Do not allow inner hits that use _source and have a non nested object field as parent\n\nCloses #25315"
},
{
"message": "Moved the check to fetch phase. This basically means that we throw\na better error message instead of an AOBE and not adding more restrictions."
},
{
"message": "fix line length violation"
},
{
"message": "added comment"
}
],
"files": [
{
"diff": "@@ -239,7 +239,7 @@ public IndexWarmer.TerminationHandle warmReader(final IndexShard indexShard, fin\n hasNested = true;\n for (ObjectMapper objectMapper : docMapper.objectMappers().values()) {\n if (objectMapper.nested().isNested()) {\n- ObjectMapper parentObjectMapper = docMapper.findParentObjectMapper(objectMapper);\n+ ObjectMapper parentObjectMapper = objectMapper.getParentObjectMapper(mapperService);\n if (parentObjectMapper != null && parentObjectMapper.nested().isNested()) {\n warmUp.add(parentObjectMapper.nestedTypeFilter());\n }",
"filename": "core/src/main/java/org/elasticsearch/index/cache/bitset/BitsetFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -292,21 +292,6 @@ public ObjectMapper findNestedObjectMapper(int nestedDocId, SearchContext sc, Le\n return nestedObjectMapper;\n }\n \n- /**\n- * Returns the parent {@link ObjectMapper} instance of the specified object mapper or <code>null</code> if there\n- * isn't any.\n- */\n- // TODO: We should add: ObjectMapper#getParentObjectMapper()\n- public ObjectMapper findParentObjectMapper(ObjectMapper objectMapper) {\n- int indexOfLastDot = objectMapper.fullPath().lastIndexOf('.');\n- if (indexOfLastDot != -1) {\n- String parentNestObjectPath = objectMapper.fullPath().substring(0, indexOfLastDot);\n- return objectMappers().get(parentNestObjectPath);\n- } else {\n- return null;\n- }\n- }\n-\n public boolean isParent(String type) {\n return mapperService.getParentTypes().contains(type);\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -396,6 +396,35 @@ public final Dynamic dynamic() {\n return dynamic;\n }\n \n+ /**\n+ * Returns the parent {@link ObjectMapper} instance of the specified object mapper or <code>null</code> if there\n+ * isn't any.\n+ */\n+ public ObjectMapper getParentObjectMapper(MapperService mapperService) {\n+ int indexOfLastDot = fullPath().lastIndexOf('.');\n+ if (indexOfLastDot != -1) {\n+ String parentNestObjectPath = fullPath().substring(0, indexOfLastDot);\n+ return mapperService.getObjectMapper(parentNestObjectPath);\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ /**\n+ * Returns whether all parent objects fields are nested too.\n+ */\n+ public boolean parentObjectMapperAreNested(MapperService mapperService) {\n+ for (ObjectMapper parent = getParentObjectMapper(mapperService);\n+ parent != null;\n+ parent = parent.getParentObjectMapper(mapperService)) {\n+\n+ if (parent.nested().isNested() == false) {\n+ return false;\n+ }\n+ }\n+ return true;\n+ }\n+\n @Override\n public ObjectMapper merge(Mapper mergeWith, boolean updateAllTypes) {\n if (!(mergeWith instanceof ObjectMapper)) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@\n import org.elasticsearch.index.fieldvisitor.FieldsVisitor;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.ObjectMapper;\n import org.elasticsearch.index.mapper.SourceFieldMapper;\n import org.elasticsearch.index.mapper.Uid;\n@@ -246,7 +247,7 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI\n ObjectMapper nestedObjectMapper = documentMapper.findNestedObjectMapper(nestedSubDocId, context, subReaderContext);\n assert nestedObjectMapper != null;\n SearchHit.NestedIdentity nestedIdentity =\n- getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, documentMapper, nestedObjectMapper);\n+ getInternalNestedIdentity(context, nestedSubDocId, subReaderContext, context.mapperService(), nestedObjectMapper);\n \n if (source != null) {\n Tuple<XContentType, Map<String, Object>> tuple = XContentHelper.convertToMap(source, true);\n@@ -262,18 +263,28 @@ private SearchHit createNestedSearchHit(SearchContext context, int nestedTopDocI\n String nestedPath = nested.getField().string();\n current.put(nestedPath, new HashMap<>());\n Object extractedValue = XContentMapValues.extractValue(nestedPath, sourceAsMap);\n- List<Map<String, Object>> nestedParsedSource;\n+ List<?> nestedParsedSource;\n if (extractedValue instanceof List) {\n // nested field has an array value in the _source\n- nestedParsedSource = (List<Map<String, Object>>) extractedValue;\n+ nestedParsedSource = (List<?>) extractedValue;\n } else if (extractedValue instanceof Map) {\n // nested field has an object value in the _source. This just means the nested field has just one inner object,\n // which is valid, but uncommon.\n- nestedParsedSource = Collections.singletonList((Map<String, Object>) extractedValue);\n+ nestedParsedSource = Collections.singletonList(extractedValue);\n } else {\n throw new IllegalStateException(\"extracted source isn't an object or an array\");\n }\n- sourceAsMap = nestedParsedSource.get(nested.getOffset());\n+ if ((nestedParsedSource.get(0) instanceof Map) == false &&\n+ nestedObjectMapper.parentObjectMapperAreNested(context.mapperService()) == false) {\n+ // When one of the parent objects are not nested then XContentMapValues.extractValue(...) extracts the values\n+ // from two or more layers resulting in a list of list being returned. This is because nestedPath\n+ // encapsulates two or more object layers in the _source.\n+ //\n+ // This is why only the first element of nestedParsedSource needs to be checked.\n+ throw new IllegalArgumentException(\"Cannot execute inner hits. One or more parent object fields of nested field [\" +\n+ nestedObjectMapper.name() + \"] are not nested. 
All parent fields need to be nested fields too\");\n+ }\n+ sourceAsMap = (Map<String, Object>) nestedParsedSource.get(nested.getOffset());\n if (nested.getChild() == null) {\n current.put(nestedPath, sourceAsMap);\n } else {\n@@ -312,7 +323,8 @@ private Map<String, DocumentField> getSearchFields(SearchContext context, int ne\n }\n \n private SearchHit.NestedIdentity getInternalNestedIdentity(SearchContext context, int nestedSubDocId,\n- LeafReaderContext subReaderContext, DocumentMapper documentMapper,\n+ LeafReaderContext subReaderContext,\n+ MapperService mapperService,\n ObjectMapper nestedObjectMapper) throws IOException {\n int currentParent = nestedSubDocId;\n ObjectMapper nestedParentObjectMapper;\n@@ -321,7 +333,7 @@ private SearchHit.NestedIdentity getInternalNestedIdentity(SearchContext context\n SearchHit.NestedIdentity nestedIdentity = null;\n do {\n Query parentFilter;\n- nestedParentObjectMapper = documentMapper.findParentObjectMapper(current);\n+ nestedParentObjectMapper = current.getParentObjectMapper(mapperService);\n if (nestedParentObjectMapper != null) {\n if (nestedParentObjectMapper.nested().isNested() == false) {\n current = nestedParentObjectMapper;",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,7 @@\n import java.util.Collections;\n import java.util.function.Function;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n@@ -428,4 +429,35 @@ public void testLimitOfNestedFieldsWithMultiTypePerIndex() throws Exception {\n createIndex(\"test5\", Settings.builder().put(MapperService.INDEX_MAPPING_NESTED_FIELDS_LIMIT_SETTING.getKey(), 0).build())\n .mapperService().merge(\"type\", new CompressedXContent(mapping.apply(\"type\")), MergeReason.MAPPING_RECOVERY, false);\n }\n+\n+ public void testParentObjectMapperAreNested() throws Exception {\n+ MapperService mapperService = createIndex(\"index1\", Settings.EMPTY, \"doc\", jsonBuilder().startObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"messages\")\n+ .field(\"type\", \"nested\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()).mapperService();\n+ ObjectMapper objectMapper = mapperService.getObjectMapper(\"comments.messages\");\n+ assertTrue(objectMapper.parentObjectMapperAreNested(mapperService));\n+\n+ mapperService = createIndex(\"index2\", Settings.EMPTY, \"doc\", jsonBuilder().startObject()\n+ .startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"messages\")\n+ .field(\"type\", \"nested\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()).mapperService();\n+ objectMapper = mapperService.getObjectMapper(\"comments.messages\");\n+ assertFalse(objectMapper.parentObjectMapperAreNested(mapperService));\n+ }\n+\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/NestedObjectMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -402,32 +402,54 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n List<IndexRequestBuilder> requests = new ArrayList<>();\n requests.add(client().prepareIndex(\"articles\", \"article\", \"1\").setSource(jsonBuilder().startObject()\n .field(\"title\", \"quick brown fox\")\n- .startObject(\"comments\")\n- .startArray(\"messages\")\n- .startObject().field(\"message\", \"fox eat quick\").endObject()\n- .startObject().field(\"message\", \"bear eat quick\").endObject()\n+ .startArray(\"comments\")\n+ .startObject()\n+ .startArray(\"messages\")\n+ .startObject().field(\"message\", \"fox eat quick\").endObject()\n+ .startObject().field(\"message\", \"bear eat quick\").endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .startArray(\"messages\")\n+ .startObject().field(\"message\", \"no fox\").endObject()\n+ .endArray()\n+ .endObject()\n .endArray()\n- .endObject()\n .endObject()));\n indexRandom(true, requests);\n \n- SearchResponse response = client().prepareSearch(\"articles\")\n+ SearchResponse response = client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments.messages\",\n+ matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg).innerHit(new InnerHitBuilder())).get();\n+ assertEquals(\"Cannot execute inner hits. One or more parent object fields of nested field [comments.messages] are \" +\n+ \"not nested. All parent fields need to be nested fields too\", response.getShardFailures()[0].getCause().getMessage());\n+\n+ response = client().prepareSearch(\"articles\").setQuery(nestedQuery(\"comments.messages\",\n+ matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg).innerHit(new InnerHitBuilder()\n+ .setFetchSourceContext(new FetchSourceContext(true)))).get();\n+ assertEquals(\"Cannot execute inner hits. One or more parent object fields of nested field [comments.messages] are \" +\n+ \"not nested. 
All parent fields need to be nested fields too\", response.getShardFailures()[0].getCause().getMessage());\n+\n+ response = client().prepareSearch(\"articles\")\n .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg)\n- .innerHit(new InnerHitBuilder())).get();\n+ .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n SearchHit hit = response.getHits().getAt(0);\n assertThat(hit.getId(), equalTo(\"1\"));\n SearchHits messages = hit.getInnerHits().get(\"comments.messages\");\n- assertThat(messages.getTotalHits(), equalTo(1L));\n+ assertThat(messages.getTotalHits(), equalTo(2L));\n assertThat(messages.getAt(0).getId(), equalTo(\"1\"));\n assertThat(messages.getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n- assertThat(messages.getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(messages.getAt(0).getNestedIdentity().getOffset(), equalTo(2));\n assertThat(messages.getAt(0).getNestedIdentity().getChild(), nullValue());\n+ assertThat(messages.getAt(1).getId(), equalTo(\"1\"));\n+ assertThat(messages.getAt(1).getNestedIdentity().getField().string(), equalTo(\"comments.messages\"));\n+ assertThat(messages.getAt(1).getNestedIdentity().getOffset(), equalTo(0));\n+ assertThat(messages.getAt(1).getNestedIdentity().getChild(), nullValue());\n \n response = client().prepareSearch(\"articles\")\n .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"bear\"), ScoreMode.Avg)\n- .innerHit(new InnerHitBuilder())).get();\n+ .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n hit = response.getHits().getAt(0);\n@@ -448,7 +470,7 @@ public void testInnerHitsWithObjectFieldThatHasANestedField() throws Exception {\n indexRandom(true, requests);\n response = client().prepareSearch(\"articles\")\n .setQuery(nestedQuery(\"comments.messages\", matchQuery(\"comments.messages.message\", \"fox\"), ScoreMode.Avg)\n- .innerHit(new InnerHitBuilder())).get();\n+ .innerHit(new InnerHitBuilder().setFetchSourceContext(new FetchSourceContext(false)))).get();\n assertNoFailures(response);\n assertHitCount(response, 1);\n hit = response.getHits().getAt(0);;",
"filename": "core/src/test/java/org/elasticsearch/search/fetch/subphase/InnerHitsIT.java",
"status": "modified"
}
]
} |
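Editor's note: the diff above introduces `ObjectMapper#getParentObjectMapper` and `ObjectMapper#parentObjectMapperAreNested`, and `FetchPhase` uses them to reject inner hits whose nested field sits under a non-nested parent object. The minimal sketch below only illustrates how a caller could apply those helpers; the class name, the surrounding method, and the example path "comments.messages" are assumptions for illustration, not code from the PR.

```java
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.ObjectMapper;

class NestedParentCheckSketch {
    // Sketch only: fail early when a nested field has a parent object that is not nested itself,
    // mirroring the validation FetchPhase performs in the diff above.
    static void ensureAllParentsNested(MapperService mapperService, String nestedPath) {
        ObjectMapper nestedObjectMapper = mapperService.getObjectMapper(nestedPath); // e.g. "comments.messages"
        if (nestedObjectMapper != null && nestedObjectMapper.parentObjectMapperAreNested(mapperService) == false) {
            throw new IllegalArgumentException("Cannot execute inner hits. One or more parent object fields of nested field ["
                + nestedObjectMapper.name() + "] are not nested. All parent fields need to be nested fields too");
        }
    }
}
```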
{
"body": "Hi,\r\n\r\nI have issues to delete snapshot on a repository managed with Azure plugin : \r\n* Our configuration is the following : \r\n = ElasticSearch version : 5.4.1 (via https://artifacts.elastic.co/packages/5.x/yum)\r\n = Plugins installed : x-pack, repository-azure\r\n = JVM version : Openjdk 1.8.0.121 (RPM : java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64)\r\n = OS Version : Centos 7.3 / Linux xxx 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n = Point to note : We use the Azure plugin but we are not in the Azure infrastructure\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nExpected behaviour : \r\n* Possibility to delete more than one snapshot from Azure repository \r\nActual Behaviour: \r\n* The DELETE Request never ends up\r\n* When you ask the status of the Snapshot, you take a 404 snapshot_missing_exception\r\n* When you try to delete an other snapshot, you get a 503 concurrent_snapshot_execution_exception\r\n\r\n**Steps to reproduce**:\r\n\r\nI always achieve to reproduce it on my test environnement (happens first on production) : \r\n1) curl -v -X DELETE http://X.Y.Z.T:9200/_snapshot/testbackups-azure/snapshot_XXX . The request hangs up\r\n2) In the same time, on the second terminal , curl -v http://X.Y.Z.T:9200/_snapshot/testbackups-azure/snapshot_XXX. You get \r\n{\"error\":{\"root_cause\":[{\"type\":\"snapshot_missing_exception\",\"reason\":\"[testbackups-azure:snapshot_XXX] is missing\"}],\"type\":\"snapshot_missing_exception\",\"reason\":\"[testbackups-azure:snapshot_XXX] is missing\"},\"status\":404}\r\n3) On the second terminal, ask for curl -v -X DELETE http://X.Y.Z.T:9200/_snapshot/testbackups-azure/snapshot_YYY (a different Snapshot this time).\r\n{\"error\":{\"root_cause\":[{\"type\":\"concurrent_snapshot_execution_exception\",\"reason\":\"[testbackups-azure:snapshot_YYY/ZZZ] cannot delete - another snapshot is currently being deleted\"}],\"type\":\"concurrent_snapshot_execution_exception\",\"reason\":\"[testbackups-azure:snapshot_YYY/ZZZ] cannot delete - another snapshot is currently being deleted\"},\"status\":503}\r\n\r\nWhen I do a rolling restart of the cluster ES, it come back to normal and the first snapshot (XXX in our case) is no more present and it is possible to delete an snapshot. \r\n\r\n",
"comments": [
{
"body": "You cannot delete a snapshot if one is being executed or deleted. In your case, the first operation 1) hangs and it forbids any deletion like 3) to happen.\r\n\r\nBefore deleting a snapshot, the master node retrieves the list of snapshots from the remote repository (and it can takes time... in the meanwhile if your execute some get snapshot like 2) it will answer that it doesn't know the snapshot) and after that it update the cluster state to inform that a deletion is going to be executed. When the cluster state is updated, it deletes the snapshot on the repository.\r\n\r\nIt would be interesting to know why the first operation takes time. Maybe you have a lot of snapshots on the Azure repository?",
"created_at": "2017-06-27T12:59:08Z"
},
{
"body": "The key will be finding out the cause of the first operation taking a while. Given the above described scenario, it looks like all of the snapshot files are deleted appropriately from Azure, but the snapshot deletion in progress is not removed from the cluster state, which is the last step in the process of deleting a snapshot. We will likely need to see your logs from the master node to help diagnose the problem. It seems like the master node is hanging on something.",
"created_at": "2017-06-27T14:02:03Z"
},
{
"body": "Hi,\r\n\r\nThank you for your help. Here are the new elements : \r\n@tlrx, today I waited a long time and after a lot of time the process of the remove of snapshot finished : \r\n* 7 minutes on the test cluster (1800 snapshot and a volumetry of the cluster estimated at 600 Gbytes) \r\n* on production, it doesn't finished after a timeout of 3 hours (aroud 50 snapshots for a volumetry of the cluster estimated to 6 Tbytes)\r\n@abeyad , I see no logs on any of our nodes on (we have log at info level). \r\n\r\nIs there some request (HTTP or transport API) that can give me the process of delete of the snapshot . I see nothing on pending_tasks nor snapshot API endpoints ? or some log I could activate ?\r\n",
"created_at": "2017-06-27T16:06:40Z"
},
{
"body": "No, deletions should be quick, so we don't provide a status endpoint for those. Is there nothing in the logs whatsoever? Can you increase the logging for snapshotting? You can use:\r\n\r\n```\r\nPUT /_cluster/settings\r\n{\r\n \"transient\" : {\r\n \"logger.org.elasticsearch.snapshots\" : \"DEBUG\",\r\n \"logger.org.elasticsearch.cluster.service\": \"DEBUG\"\r\n }\r\n}\r\n```",
"created_at": "2017-06-27T16:50:16Z"
},
{
"body": "I suspect the large number of snapshots being an issue here, but I have no clue so I :+1: @abeyad suggestion to grab more information by logging at the debug level.",
"created_at": "2017-06-27T18:46:10Z"
},
{
"body": "I put the log in debug and it was acknowledged correctly \r\n```\r\n{\r\n \"acknowledged\": true,\r\n \"persistent\": {},\r\n \"transient\": {\r\n \"logger\": {\r\n \"org\": {\r\n \"elasticsearch\": {\r\n \"snapshots\": \"DEBUG\",\r\n \"cluster\": {\r\n \"service\": \"DEBUG\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nI retried to remove a snapshot on my test platform : \r\n11:18:56 change of the transient parameters \r\ncurator : \r\n2017-06-28 11:20:34,648 INFO Preparing Action ID: 1, \"delete_snapshots\"\r\n2017-06-28 11:20:34,657 INFO Trying Action ID: 1, \"delete_snapshots\": Delete snapshots from the selected repository older than 2 days\r\n2017-06-28 11:24:05,369 INFO Deleting selected snapshots\r\n2017-06-28 11:27:21,182 INFO Deleting snapshot snapshot_2017_06_10_02_56_11...\r\n...\r\nTake 4 minutes to have the list of the snapshots and never acknowledge the delete . \r\n\r\nes master node : \r\n[2017-06-28T11:18:56,066][DEBUG][o.e.c.s.ClusterService ] [es3-main] set local cluster state to version 197349\r\n[2017-06-28T11:18:56,070][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [cluster_update_settings]: took [35ms] done applying updated cluster_state (version: 197349, uuid: lZr4L7UrQKyKJe42pAlnlg)\r\n[2017-06-28T11:18:56,070][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [reroute_after_cluster_update_settings]: execute\r\n[2017-06-28T11:18:56,076][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [reroute_after_cluster_update_settings]: took [5ms] no change in cluster_state\r\n[2017-06-28T11:27:22,379][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [delete snapshot]: execute\r\n[2017-06-28T11:27:22,379][DEBUG][o.e.c.s.ClusterService ] [es3-main] cluster state updated, version [197350], source [delete snapshot]\r\n[2017-06-28T11:27:22,380][DEBUG][o.e.c.s.ClusterService ] [es3-main] publishing cluster state version [197350]\r\n[2017-06-28T11:27:22,392][DEBUG][o.e.c.s.ClusterService ] [es3-main] applying cluster state version 197350\r\n[2017-06-28T11:27:22,392][DEBUG][o.e.c.s.ClusterService ] [es3-main] set local cluster state to version 197350\r\n[2017-06-28T11:27:22,409][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [delete snapshot]: took [30ms] done applying updated cluster_state (version: 197350, uuid: XXXXXXXXXXXXXXXXXXXXX)\r\n\r\nIt confirm that the delete part is fast but then it hangs up. \r\nUnfortunately, I have no logs on Snapshot classes. \r\n\r\nRegards,\r\n\r\nEtienne",
"created_at": "2017-06-28T10:15:42Z"
},
{
"body": "Complement on the logfile with the end of the remove snapshot : \r\n[2017-06-28T11:27:22,379][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [delete snapshot]: execute\r\n[2017-06-28T11:27:22,379][DEBUG][o.e.c.s.ClusterService ] [es3-main] cluster state updated, version [197350], source [delete snapshot]\r\n[2017-06-28T11:27:22,380][DEBUG][o.e.c.s.ClusterService ] [es3-main] publishing cluster state version [197350]\r\n[2017-06-28T11:27:22,392][DEBUG][o.e.c.s.ClusterService ] [es3-main] applying cluster state version 197350\r\n[2017-06-28T11:27:22,392][DEBUG][o.e.c.s.ClusterService ] [es3-main] set local cluster state to version 197350\r\n[2017-06-28T11:27:22,409][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [delete snapshot]: took [30ms] done applying updated cluster_state (version: 197350, uuid: XXXXXXXXXXXXXXXXXXXXX)\r\n[2017-06-28T12:35:10,224][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [remove snapshot deletion metadata]: execute\r\n[2017-06-28T12:35:10,225][DEBUG][o.e.c.s.ClusterService ] [es3-main] cluster state updated, version [197351], source [remove snapshot deletion metadata]\r\n[2017-06-28T12:35:10,225][DEBUG][o.e.c.s.ClusterService ] [es3-main] publishing cluster state version [197351]\r\n[2017-06-28T12:35:10,236][DEBUG][o.e.c.s.ClusterService ] [es3-main] applying cluster state version 197351\r\n[2017-06-28T12:35:10,236][DEBUG][o.e.c.s.ClusterService ] [es3-main] set local cluster state to version 197351\r\n[2017-06-28T12:35:10,242][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [remove snapshot deletion metadata]: took [17ms] done applying updated cluster_state (version: 197351, uuid: XXXXXXXXXXXXXXXXXXXXX)\r\n",
"created_at": "2017-06-28T11:35:31Z"
},
{
"body": "This is problematic:\r\n```\r\n[2017-06-28T11:27:22,409][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [delete snapshot]: took [30ms] done applying updated cluster_state (version: 197350, uuid: XXXXXXXXXXXXXXXXXXXXX)\r\n[2017-06-28T12:35:10,224][DEBUG][o.e.c.s.ClusterService ] [es3-main] processing [remove snapshot deletion metadata]: execute\r\n```\r\n\r\nThe deletion started at 11:27 and we only begin to remove the snapshot deletion from the cluster state at 12:35, a full hour after the deletion started. It seems to me the Azure access is very slow.\r\n\r\nApologies, but can you kindly re-run the test with DEBUG logging enabled as follows:\r\n```\r\nPUT /_cluster/settings\r\n{\r\n \"transient\" : {\r\n \"logger.org.elasticsearch.snapshots\" : \"DEBUG\",\r\n \"logger.org.elasticsearch.cluster.service\": \"DEBUG\",\r\n \"logger.org.elasticsearch.repositories\": \"DEBUG\",\r\n \"logger.org.elasticsearch.cloud.azure\": \"DEBUG\",\r\n \"logger.com.microsoft.azure.storage\": \"DEBUG\"\r\n }\r\n}\r\n```\r\n\r\nand send the logs, as complete as possible? ",
"created_at": "2017-06-28T15:24:02Z"
},
{
"body": "Hi,\r\n\r\nIn the same time, we upgraded to 5.4.3 our testing database so the new test was with 5.4.3 . The snapshot log is on gist : https://gist.github.com/etiennecarriere/cc21d7e079fb24d8b7d0c65449065f65\r\nI remove all unique ID and replace them with XXX, YYY, ZZZ, TTT . \r\nIt seems to prove it is the listing of the azure container that is very slow (50 seconds at each time) \r\n\r\nRegards,\r\n\r\nEtienne ",
"created_at": "2017-06-29T13:41:52Z"
},
{
"body": "Hi, \r\n\r\nI confirm what seems to be a very ineffective implementation of the list in the azure plugin. On a bad Internet access (6 Mbits download/ 1 Mbits upload / 50 ms latency to Azure DC), I come from 1162 seconds to remove one snapshot to 185 seconds with my patch to remove a snapshot from a repository with 45 snapshots of 90 shards each. \r\n\r\nI see with my employer Monday how I can pull request it. \r\n\r\nRegards,\r\n\r\nEtienne ",
"created_at": "2017-07-01T17:45:40Z"
},
{
"body": "@abeyad , I permit to come back to you to see if you had time to have an opinion on the pull request I proposed ",
"created_at": "2017-07-19T13:20:11Z"
}
],
"number": 25424,
"title": "Issue on deleting ES Snapshot with Azure plugin"
} | {
"body": "Close #25424 . \r\n\r\nCurrent behaviour of the Listing of a container in azure plugin : \r\n* List all the blob\r\n* For each blob, request the medata of blob\r\n=> it is ineffective as we have N+1 requests for N blobs \r\n\r\nProposed behaviour\r\n* List all blob and retrieving the metadata in the same request",
"number": 25710,
"review_comments": [
{
"body": "This doPrivileged needs to stay.",
"created_at": "2017-07-13T20:46:50Z"
},
{
"body": "I don't think we need this check, it should be an error if we don't get back CloudBlockBlob (which is what we expect for all keys here). I think it can be a hard assert.",
"created_at": "2017-07-13T20:53:09Z"
},
{
"body": "nit: please use spaces after the commas\r\n\r\nAlso, check the length of this line, it looks close to the 140 char limit",
"created_at": "2017-07-13T20:55:10Z"
},
{
"body": "This can simply be `randomLong()`. However, I don't think this will work because the cleanup in `wipeAzureRepositories()` would then not know about the generated name. If these need to be unique, then this method needs to stash the generated name so that the repository can be cleaned up after the test.",
"created_at": "2017-07-13T20:59:06Z"
},
{
"body": "I don't think these should be removed, but if they should, then please explain why.",
"created_at": "2017-07-13T20:59:35Z"
},
{
"body": "I agree it is the assumption on the previous version of code but I prefer to be defensive : if the azure storage container has been modified by an outside component, it could have other type of blob (Append/Page) so I propose ignoring it and put a warning. ",
"created_at": "2017-07-13T22:50:31Z"
},
{
"body": "I am +1 on what @rjernst said - silently ignoring is not the way to go here, very few read log messages, its more important to fail hard if something changed that alters our underlying assumptions of what object type is returned there.",
"created_at": "2017-07-14T01:38:05Z"
},
{
"body": "After reflexion, ok to keep it but we have to add the MockFSIndexStore.TestPlugin so that the configuration variable are known. If we don't have, we have random error as MockFSIndexStore.TestPlugin is added randomly in ESIntegTestCase superclass",
"created_at": "2017-07-14T09:05:49Z"
},
{
"body": "you are right : I rollback as it will not work in wipeAzureRepository . ",
"created_at": "2017-07-14T09:10:39Z"
},
{
"body": "nit: please keep the spacing around `==` and between arguments (after `,`)",
"created_at": "2017-08-17T23:06:27Z"
},
{
"body": "typo: an -> a, concat -> concatenated",
"created_at": "2017-08-17T23:10:26Z"
},
{
"body": "typo: hypens -> hyphens",
"created_at": "2017-08-17T23:10:40Z"
}
],
"title": "Snapshot : azure module - accelerate the listing of files (used in delete snapshot)"
} | {
"commits": [
{
"message": "Update Integrations tests"
},
{
"message": " Update the ListBlob in Container in order to optimize the network requests"
},
{
"message": "Corrections following the remarks on the pull request"
},
{
"message": "Remarks taken in account in the integration test"
},
{
"message": "Minor modifications on comments"
},
{
"message": "Merge branch 'master' into 25424_azure_improve_delete_snapshot"
},
{
"message": "Change to comply with the change of API"
}
],
"files": [
{
"diff": "@@ -24,6 +24,7 @@\n import com.microsoft.azure.storage.RetryExponentialRetry;\n import com.microsoft.azure.storage.RetryPolicy;\n import com.microsoft.azure.storage.StorageException;\n+import com.microsoft.azure.storage.blob.BlobListingDetails;\n import com.microsoft.azure.storage.blob.BlobProperties;\n import com.microsoft.azure.storage.blob.CloudBlobClient;\n import com.microsoft.azure.storage.blob.CloudBlobContainer;\n@@ -45,6 +46,7 @@\n import java.io.OutputStream;\n import java.net.URI;\n import java.net.URISyntaxException;\n+import java.util.EnumSet;\n import java.util.HashMap;\n import java.util.Map;\n \n@@ -291,33 +293,26 @@ public Map<String, BlobMetaData> listBlobsByPrefix(String account, LocationMode\n \n logger.debug(\"listing container [{}], keyPath [{}], prefix [{}]\", container, keyPath, prefix);\n MapBuilder<String, BlobMetaData> blobsBuilder = MapBuilder.newMapBuilder();\n+ EnumSet<BlobListingDetails> enumBlobListingDetails = EnumSet.of(BlobListingDetails.METADATA);\n CloudBlobClient client = this.getSelectedClient(account, mode);\n CloudBlobContainer blobContainer = client.getContainerReference(container);\n-\n SocketAccess.doPrivilegedVoidException(() -> {\n if (blobContainer.exists()) {\n- for (ListBlobItem blobItem : blobContainer.listBlobs(keyPath + (prefix == null ? \"\" : prefix))) {\n+ for (ListBlobItem blobItem : blobContainer.listBlobs(keyPath + (prefix == null ? \"\" : prefix), false,\n+ enumBlobListingDetails, null, null)) {\n URI uri = blobItem.getUri();\n logger.trace(\"blob url [{}]\", uri);\n \n // uri.getPath is of the form /container/keyPath.* and we want to strip off the /container/\n // this requires 1 + container.length() + 1, with each 1 corresponding to one of the /\n String blobPath = uri.getPath().substring(1 + container.length() + 1);\n-\n- CloudBlockBlob blob = blobContainer.getBlockBlobReference(blobPath);\n-\n- // fetch the blob attributes from Azure (getBlockBlobReference does not do this)\n- // this is needed to retrieve the blob length (among other metadata) from Azure Storage\n- blob.downloadAttributes();\n-\n- BlobProperties properties = blob.getProperties();\n+ BlobProperties properties = ((CloudBlockBlob) blobItem).getProperties();\n String name = blobPath.substring(keyPath.length());\n logger.trace(\"blob url [{}], name [{}], size [{}]\", uri, name, properties.getLength());\n blobsBuilder.put(name, new PlainBlobMetaData(name, properties.getLength()));\n }\n }\n });\n-\n return blobsBuilder.immutableMap();\n }\n ",
"filename": "plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageServiceImpl.java",
"status": "modified"
},
{
"diff": "@@ -36,18 +36,24 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.plugin.repository.azure.AzureRepositoryPlugin;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.repositories.RepositoryMissingException;\n import org.elasticsearch.repositories.RepositoryVerificationException;\n import org.elasticsearch.repositories.azure.AzureRepository.Repository;\n import org.elasticsearch.snapshots.SnapshotMissingException;\n+import org.elasticsearch.snapshots.SnapshotRestoreException;\n import org.elasticsearch.snapshots.SnapshotState;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.store.MockFSDirectoryService;\n+import org.elasticsearch.test.store.MockFSIndexStore;\n import org.junit.After;\n import org.junit.Before;\n \n import java.net.URISyntaxException;\n+import java.util.Arrays;\n+import java.util.Collection;\n import java.util.Locale;\n import java.util.concurrent.TimeUnit;\n \n@@ -65,13 +71,24 @@\n supportsDedicatedMasters = false, numDataNodes = 1,\n transportClientRatio = 0.0)\n public class AzureSnapshotRestoreTests extends AbstractAzureWithThirdPartyIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return Arrays.asList(AzureRepositoryPlugin.class, MockFSIndexStore.TestPlugin.class);\n+ }\n+\n private String getRepositoryPath() {\n String testName = \"it-\" + getTestName();\n return testName.contains(\" \") ? Strings.split(testName, \" \")[0] : testName;\n }\n \n public static String getContainerName() {\n- String testName = \"snapshot-itest-\".concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\n+ /* Have a different name per test so that there is no possible race condition. As the long can be negative,\n+ * there mustn't be a hyphen between the 2 concatenated numbers\n+ * (can't have 2 consecutives hyphens on Azure containers)\n+ */\n+ String testName = \"snapshot-itest-\"\n+ .concat(RandomizedTest.getContext().getRunnerSeedAsString().toLowerCase(Locale.ROOT));\n return testName.contains(\" \") ? 
Strings.split(testName, \" \")[0] : testName;\n }\n \n@@ -95,9 +112,10 @@ public final void wipeAzureRepositories() throws StorageException, URISyntaxExce\n }\n \n public void testSimpleWorkflow() {\n+ String repo_name = \"test-repo-simple\";\n Client client = client();\n logger.info(\"--> creating azure repository with path [{}]\", getRepositoryPath());\n- PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n+ PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(repo_name)\n .setType(\"azure\").setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n@@ -120,13 +138,13 @@ public void testSimpleWorkflow() {\n assertThat(client.prepareSearch(\"test-idx-3\").setSize(0).get().getHits().getTotalHits(), equalTo(100L));\n \n logger.info(\"--> snapshot\");\n- CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(repo_name, \"test-snap\")\n .setWaitForCompletion(true).setIndices(\"test-idx-*\", \"-test-idx-3\").get();\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(),\n equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n \n- assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get().getSnapshots()\n+ assertThat(client.admin().cluster().prepareGetSnapshots(repo_name).setSnapshots(\"test-snap\").get().getSnapshots()\n .get(0).state(), equalTo(SnapshotState.SUCCESS));\n \n logger.info(\"--> delete some data\");\n@@ -148,7 +166,7 @@ public void testSimpleWorkflow() {\n client.admin().indices().prepareClose(\"test-idx-1\", \"test-idx-2\").get();\n \n logger.info(\"--> restore all indices from the snapshot\");\n- RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(repo_name, \"test-snap\")\n .setWaitForCompletion(true).get();\n assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n \n@@ -161,7 +179,7 @@ public void testSimpleWorkflow() {\n logger.info(\"--> delete indices\");\n cluster().wipeIndices(\"test-idx-1\", \"test-idx-2\");\n logger.info(\"--> restore one index after deletion\");\n- restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true)\n+ restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(repo_name, \"test-snap\").setWaitForCompletion(true)\n .setIndices(\"test-idx-*\", \"-test-idx-2\").get();\n assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n ensureGreen();\n@@ -177,7 +195,7 @@ public void testSimpleWorkflow() {\n public void testMultipleSnapshots() throws URISyntaxException, StorageException {\n final String indexName = \"test-idx-1\";\n final String typeName = \"doc\";\n- final String repositoryName = \"test-repo\";\n+ final String repositoryName = \"test-repo-multiple-snapshot\";\n final String snapshot1Name = \"test-snap-1\";\n final String snapshot2Name = \"test-snap-2\";\n \n@@ -314,6 +332,7 @@ public void 
testMultipleRepositories() {\n * For issue #26: https://github.com/elastic/elasticsearch-cloud-azure/issues/26\n */\n public void testListBlobs_26() throws StorageException, URISyntaxException {\n+ final String repositoryName=\"test-repo-26\";\n createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n ensureGreen();\n \n@@ -327,45 +346,45 @@ public void testListBlobs_26() throws StorageException, URISyntaxException {\n \n ClusterAdminClient client = client().admin().cluster();\n logger.info(\"--> creating azure repository without any path\");\n- PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(\"test-repo\").setType(\"azure\")\n+ PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(repositoryName).setType(\"azure\")\n .setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n ).get();\n assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n \n // Get all snapshots - should be empty\n- assertThat(client.prepareGetSnapshots(\"test-repo\").get().getSnapshots().size(), equalTo(0));\n+ assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(0));\n \n logger.info(\"--> snapshot\");\n- CreateSnapshotResponse createSnapshotResponse = client.prepareCreateSnapshot(\"test-repo\", \"test-snap-26\")\n+ CreateSnapshotResponse createSnapshotResponse = client.prepareCreateSnapshot(repositoryName, \"test-snap-26\")\n .setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n \n // Get all snapshots - should have one\n- assertThat(client.prepareGetSnapshots(\"test-repo\").get().getSnapshots().size(), equalTo(1));\n+ assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(1));\n \n // Clean the snapshot\n- client.prepareDeleteSnapshot(\"test-repo\", \"test-snap-26\").get();\n- client.prepareDeleteRepository(\"test-repo\").get();\n+ client.prepareDeleteSnapshot(repositoryName, \"test-snap-26\").get();\n+ client.prepareDeleteRepository(repositoryName).get();\n \n logger.info(\"--> creating azure repository path [{}]\", getRepositoryPath());\n- putRepositoryResponse = client.preparePutRepository(\"test-repo\").setType(\"azure\")\n+ putRepositoryResponse = client.preparePutRepository(repositoryName).setType(\"azure\")\n .setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n ).get();\n assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n \n // Get all snapshots - should be empty\n- assertThat(client.prepareGetSnapshots(\"test-repo\").get().getSnapshots().size(), equalTo(0));\n+ assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(0));\n \n logger.info(\"--> snapshot\");\n- createSnapshotResponse = client.prepareCreateSnapshot(\"test-repo\", \"test-snap-26\").setWaitForCompletion(true)\n+ createSnapshotResponse = client.prepareCreateSnapshot(repositoryName, \"test-snap-26\").setWaitForCompletion(true)\n .setIndices(\"test-idx-*\").get();\n assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n \n // Get all snapshots - should have one\n- assertThat(client.prepareGetSnapshots(\"test-repo\").get().getSnapshots().size(), equalTo(1));\n+ assertThat(client.prepareGetSnapshots(repositoryName).get().getSnapshots().size(), equalTo(1));\n \n \n }\n@@ 
-374,23 +393,24 @@ public void testListBlobs_26() throws StorageException, URISyntaxException {\n * For issue #28: https://github.com/elastic/elasticsearch-cloud-azure/issues/28\n */\n public void testGetDeleteNonExistingSnapshot_28() throws StorageException, URISyntaxException {\n+ final String repositoryName=\"test-repo-28\";\n ClusterAdminClient client = client().admin().cluster();\n logger.info(\"--> creating azure repository without any path\");\n- PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(\"test-repo\").setType(\"azure\")\n+ PutRepositoryResponse putRepositoryResponse = client.preparePutRepository(repositoryName).setType(\"azure\")\n .setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n ).get();\n assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n \n try {\n- client.prepareGetSnapshots(\"test-repo\").addSnapshots(\"nonexistingsnapshotname\").get();\n+ client.prepareGetSnapshots(repositoryName).addSnapshots(\"nonexistingsnapshotname\").get();\n fail(\"Shouldn't be here\");\n } catch (SnapshotMissingException ex) {\n // Expected\n }\n \n try {\n- client.prepareDeleteSnapshot(\"test-repo\", \"nonexistingsnapshotname\").get();\n+ client.prepareDeleteSnapshot(repositoryName, \"nonexistingsnapshotname\").get();\n fail(\"Shouldn't be here\");\n } catch (SnapshotMissingException ex) {\n // Expected\n@@ -419,18 +439,19 @@ public void testForbiddenContainerName() throws Exception {\n * @param correct Is this container name correct\n */\n private void checkContainerName(final String container, final boolean correct) throws Exception {\n+ String repositoryName = \"test-repo-checkContainerName\";\n logger.info(\"--> creating azure repository with container name [{}]\", container);\n // It could happen that we just removed from a previous test the same container so\n // we can not create it yet.\n assertBusy(() -> {\n try {\n- PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository(\"test-repo\")\n+ PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository(repositoryName)\n .setType(\"azure\").setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), container)\n .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n .put(Repository.CHUNK_SIZE_SETTING.getKey(), randomIntBetween(1000, 10000), ByteSizeUnit.BYTES)\n ).get();\n- client().admin().cluster().prepareDeleteRepository(\"test-repo\").get();\n+ client().admin().cluster().prepareDeleteRepository(repositoryName).get();\n try {\n logger.info(\"--> remove container [{}]\", container);\n cleanRepositoryFiles(container);\n@@ -451,9 +472,10 @@ private void checkContainerName(final String container, final boolean correct) t\n * Test case for issue #23: https://github.com/elastic/elasticsearch-cloud-azure/issues/23\n */\n public void testNonExistingRepo_23() {\n+ final String repositoryName = \"test-repo-test23\";\n Client client = client();\n logger.info(\"--> creating azure repository with path [{}]\", getRepositoryPath());\n- PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n+ PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(repositoryName)\n .setType(\"azure\").setSettings(Settings.builder()\n .put(Repository.CONTAINER_SETTING.getKey(), getContainerName())\n .put(Repository.BASE_PATH_SETTING.getKey(), getRepositoryPath())\n@@ -463,9 +485,9 @@ public 
void testNonExistingRepo_23() {\n \n logger.info(\"--> restore non existing snapshot\");\n try {\n- client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"no-existing-snapshot\").setWaitForCompletion(true).get();\n+ client.admin().cluster().prepareRestoreSnapshot(repositoryName, \"no-existing-snapshot\").setWaitForCompletion(true).get();\n fail(\"Shouldn't be here\");\n- } catch (SnapshotMissingException ex) {\n+ } catch (SnapshotRestoreException ex) {\n // Expected\n }\n }\n@@ -475,9 +497,8 @@ public void testNonExistingRepo_23() {\n */\n public void testRemoveAndCreateContainer() throws Exception {\n final String container = getContainerName().concat(\"-testremove\");\n- final AzureStorageService storageService = new AzureStorageServiceImpl(internalCluster().getDefaultSettings(),\n- AzureStorageSettings.load(internalCluster().getDefaultSettings()));\n-\n+ final AzureStorageService storageService = new AzureStorageServiceImpl(nodeSettings(0),AzureStorageSettings.load(nodeSettings(0)));\n+ \n // It could happen that we run this test really close to a previous one\n // so we might need some time to be able to create the container\n assertBusy(() -> {",
"filename": "plugins/repository-azure/src/test/java/org/elasticsearch/repositories/azure/AzureSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
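Editor's note: the core of PR #25710 above is to stop calling `downloadAttributes()` once per blob and instead ask the listing call itself to return the blob properties. The sketch below shows that pattern with the Azure storage SDK types used in the diff; the class name, the surrounding method, and the `container`/`prefix` parameters are assumptions for illustration, not code from the PR.

```java
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.BlobListingDetails;
import com.microsoft.azure.storage.blob.BlobProperties;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.ListBlobItem;

import java.util.EnumSet;

class BlobListingSketch {
    // Sketch only: a single listing request returns the properties (including length) of every blob,
    // instead of one extra downloadAttributes() round trip per blob (N+1 requests for N blobs).
    static void listWithMetadata(CloudBlobContainer container, String prefix) throws StorageException {
        EnumSet<BlobListingDetails> details = EnumSet.of(BlobListingDetails.METADATA);
        for (ListBlobItem item : container.listBlobs(prefix, false, details, null, null)) {
            BlobProperties properties = ((CloudBlockBlob) item).getProperties();
            System.out.println(item.getUri() + " -> " + properties.getLength() + " bytes");
        }
    }
}
```

On a repository with many snapshot files this turns per-blob metadata requests into a handful of paged listing calls, which is consistent with the speedup reported in the issue comments (1162 s down to 185 s for one snapshot deletion).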
{
"body": "**Elasticsearch version**: 2.4.5\r\n\r\n**Description of the problem including expected versus actual behavior**: In all versions 2.4.2+ `_mget` used for an alias that has routing defined will not use the routing value to get the document and will report back an error:\r\n\r\n```\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"routing is required for [test]/[test]/[1]\"\r\n```\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\nDELETE test\r\n\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"test\": {\r\n \"_routing\": {\r\n \"required\": true\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST test/test/_bulk\r\n{\"index\":{\"_routing\":123,\"_id\":1}}\r\n{\"some_field\":\"bla\"}\r\n{\"index\":{\"_routing\":123,\"_id\":2}}\r\n{\"some_field\":\"bla\"}\r\n\r\nPOST /_aliases\r\n{\r\n \"actions\": [\r\n {\r\n \"add\": {\r\n \"index\": \"test\",\r\n \"alias\": \"test_alias\",\r\n \"routing\": \"123\"\r\n }\r\n }\r\n ]\r\n}\r\n\r\nGET /_mget\r\n{\r\n \"docs\": [\r\n {\r\n \"_index\": \"test_alias\",\r\n \"_type\": \"test\",\r\n \"_routing\": \"123\",\r\n \"_id\": \"1\" \r\n }\r\n ]\r\n}\r\n\r\nGET /_mget\r\n{\r\n \"docs\": [\r\n {\r\n \"_index\": \"test_alias\",\r\n \"_type\": \"test\",\r\n \"_id\": \"1\" \r\n }\r\n ]\r\n}\r\n```",
"comments": [
{
"body": "@javanna please could you take a look at this. Bug introduced in https://github.com/elastic/elasticsearch/pull/20659",
"created_at": "2017-07-13T10:16:35Z"
},
{
"body": "@clintongormley there's already a PR for it, which I reviewed a few hours ago ;) See #25697 .",
"created_at": "2017-07-13T10:23:46Z"
},
{
"body": "🥇 ",
"created_at": "2017-07-13T10:25:27Z"
}
],
"number": 25696,
"title": "_mget with an alias using routing doesn't automatically use the routing value"
} | {
"body": "Closes #25696\r\n\r\n",
"number": 25697,
"review_comments": [
{
"body": "assertEquals and assertFalse?",
"created_at": "2017-07-13T08:26:03Z"
}
],
"title": "mget with an alias shouldn't ignore alias routing"
} | {
"commits": [
{
"message": "mget with an alias shouldn't ignore alias routing\n\nCloses #25696"
},
{
"message": "Address @javanna's comments"
}
],
"files": [
{
"diff": "@@ -68,7 +68,7 @@ protected void doExecute(final MultiGetRequest request, final ActionListener<Mul\n try {\n concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, item).getName();\n \n- item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), concreteSingleIndex));\n+ item.routing(clusterState.metaData().resolveIndexRouting(item.parent(), item.routing(), item.index()));\n if ((item.routing() == null) && (clusterState.getMetaData().routingRequired(concreteSingleIndex, item.type()))) {\n String message = \"routing is required for [\" + concreteSingleIndex + \"]/[\" + item.type() + \"]/[\" + item.id() + \"]\";\n responses.set(i, newItemFailure(concreteSingleIndex, item.type(), item.id(), new IllegalArgumentException(message)));",
"filename": "core/src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java",
"status": "modified"
},
{
"diff": "@@ -106,6 +106,23 @@ public void testThatMgetShouldWorkWithMultiIndexAlias() throws IOException {\n assertThat(mgetResponse.getResponses()[0].getFailure().getMessage(), containsString(\"more than one indices\"));\n }\n \n+ public void testThatMgetShouldWorkWithAliasRouting() throws IOException {\n+ assertAcked(prepareCreate(\"test\").addAlias(new Alias(\"alias1\").routing(\"abc\"))\n+ .addMapping(\"test\", jsonBuilder()\n+ .startObject().startObject(\"test\").startObject(\"_routing\").field(\"required\", true).endObject().endObject().endObject()));\n+\n+ client().prepareIndex(\"alias1\", \"test\", \"1\").setSource(jsonBuilder().startObject().field(\"foo\", \"bar\").endObject())\n+ .setRefreshPolicy(IMMEDIATE).get();\n+\n+ MultiGetResponse mgetResponse = client().prepareMultiGet()\n+ .add(new MultiGetRequest.Item(\"alias1\", \"test\", \"1\"))\n+ .get();\n+ assertEquals(1, mgetResponse.getResponses().length);\n+\n+ assertEquals(\"test\", mgetResponse.getResponses()[0].getIndex());\n+ assertFalse(mgetResponse.getResponses()[0].isFailed());\n+ }\n+\n public void testThatParentPerDocumentIsSupported() throws Exception {\n assertAcked(prepareCreate(\"test\").addAlias(new Alias(\"alias\"))\n .addMapping(\"test\", jsonBuilder()",
"filename": "core/src/test/java/org/elasticsearch/mget/SimpleMgetIT.java",
"status": "modified"
}
]
} |
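Editor's note: the fix in PR #25697 above is a one-liner because alias routing can only be recovered from the name the request was addressed to (possibly an alias), not from the already-resolved concrete index. The sketch below shows the call pattern from the new test: an mget item addressed to an alias that declares routing, with no explicit `_routing` on the item. The class name and surrounding method are assumptions for illustration; the alias, type, and id values follow the test above.

```java
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.client.Client;

class MgetAliasRoutingSketch {
    // Sketch only: with the fix, this mget through alias "alias1" (which carries routing "abc")
    // resolves the alias routing exactly like a single get through the same alias would.
    static MultiGetResponse getThroughAlias(Client client) {
        return client.prepareMultiGet()
            .add(new MultiGetRequest.Item("alias1", "test", "1"))
            .get();
    }
}
```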
{
"body": "We allow for specifying a config directory that resides outside of `%ES_HOME%` by specifying [%CONF_DIR%](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch-service.bat#L227).\r\n\r\nHowever setting the [jvm.options path is hard-coded](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch-service.bat#L146) to `%ES_HOME%/config` and doesn't use `%CONF_DIR%` which results in either an error or causes the wrong jvm.options file to be used when it's set.",
"comments": [
{
"body": "@gmarz the location of the `jvm.options` file should be configurable with the `ES_JVM_OPTIONS` environment variable: https://www.elastic.co/guide/en/elasticsearch/reference/5.2/breaking_50_packaging.html#_jvm_options",
"created_at": "2017-02-07T10:49:04Z"
},
{
"body": "@clintongormley but it defaults to `ES_HOME\\config\\jvm.options` which is problematic if `CONF_DIR` is set but not `ES_JVM_OPTIONS`.\r\n\r\nI think it should default to `CONF_DIR\\jvm.options` instead, and `CONF_DIR` default to `ES_HOME\\config`, which it already does.\r\n\r\n",
"created_at": "2017-02-07T14:38:03Z"
},
{
"body": "Had a related conversation with @jasontedor on slack and we discussed removing the ability to specify a config path altogether.",
"created_at": "2017-02-07T16:06:14Z"
},
{
"body": "We run multiple elasticsearch datanodes on the same system - as we change the CONF_DIR to point to the correct location for the configuration of each elasticsearch instance, it becomes an issue to have a single jvm.options file. Previously this was set via the ES_HEAP_SIZE variable in /etc/sysconfig/elasticsearch-data-node-0X however this variable is no longer supported. \r\n\r\nOur current situation is that all instances running use the same jvm options, so it's not a deal-breaker, however now I'm coming to puppetise the deployment, it will be a problem based on our previous puppet design using version 2.X.",
"created_at": "2017-02-21T01:21:02Z"
},
{
"body": "@gmarz Please don't remove the ability to specify a config path.",
"created_at": "2017-02-27T11:35:05Z"
},
{
"body": "> Please don't remove the ability to specify a config path.\r\n\r\n@roweryan Would you mind explaining your use case so we have a better understanding why you're requesting this?",
"created_at": "2017-02-27T11:52:05Z"
},
{
"body": "@jasontedor Sure. I need to keep my log files and config separate from my binaries. If the config lives inside the binary directory this makes my life more difficult :)",
"created_at": "2017-02-27T12:05:48Z"
},
{
"body": "> I need to keep my log files and config separate from my binaries.\r\n\r\nThis only pushed the question back one layer. 😄\r\n\r\nWhy do you need this?",
"created_at": "2017-02-27T12:11:10Z"
},
{
"body": "The unpacked tar.gz file needs to match what is in the tar.gz file I can get from your download page. If there are changed files or extra files in there then our check fails.",
"created_at": "2017-02-27T12:31:50Z"
}
],
"number": 23004,
"title": "Windows service script doesn't respect %CONF_DIR% when setting the jvm.options path"
} | {
"body": "This commit removes the environment variable ES_JVM_OPTIONS that allows the jvm.options file to sit separately from the rest of the config directory. Instead, we use the CONF_DIR environment variable for custom configuration location just as we do for the other configuration files.\r\n\r\nCloses #23004\r\n",
"number": 25679,
"review_comments": [],
"title": "Use config directory to find jvm.options"
} | {
"commits": [
{
"message": "Use config directory to find jvm.options\n\nThis commit removes the environment variable ES_JVM_OPTIONS that allows\nthe jvm.options file to sit separately from the rest of the config\ndirectory. Instead, we use the CONF_DIR environment variable for custom\nconfiguration location just as we do for the other configuration files."
},
{
"message": "Merge branch 'master' into remove-es_jvm_options\n\n* master:\n Fix inadvertent rename of systemd tests\n Adding basic search request documentation for high level client (#25651)\n Disallow lang to be used with Stored Scripts (#25610)\n Fix typo in ScriptDocValues deprecation warnings (#25672)\n Changes DocValueFieldsFetchSubPhase to reuse doc values iterators for multiple hits (#25644)\n Query range fields by doc values when they are expected to be more efficient than points.\n Remove SearchHit#internalHits (#25653)\n [DOCS] Reorganized the highlighting topic so it's less confusing."
},
{
"message": "Add systemd test too"
}
],
"files": [
{
"diff": "@@ -158,7 +158,7 @@ class NodeInfo {\n args.add(\"${property.key.substring('tests.es.'.size())}=${property.value}\")\n }\n }\n- env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))\n+ env.put('CONF_DIR', confDir)\n if (Version.fromString(nodeVersion).major == 5) {\n args.addAll(\"-E\", \"path.conf=${confDir}\")\n } else {",
"filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy",
"status": "modified"
},
{
"diff": "@@ -80,7 +80,7 @@ DAEMON_OPTS=\"-d -p $PID_FILE --path.conf $CONF_DIR\"\n export ES_JAVA_OPTS\n export JAVA_HOME\n export ES_INCLUDE\n-export ES_JVM_OPTIONS\n+export CONF_DIR\n \n if [ ! -x \"$DAEMON\" ]; then\n \techo \"The elasticsearch startup script does not exists or it is not executable, tried: $DAEMON\"",
"filename": "distribution/deb/src/main/packaging/init.d/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -64,7 +64,7 @@ pidfile=\"$PID_DIR/${prog}.pid\"\n export ES_JAVA_OPTS\n export JAVA_HOME\n export ES_INCLUDE\n-export ES_JVM_OPTIONS\n+export CONF_DIR\n export ES_STARTUP_SLEEP_TIME\n \n lockfile=/var/lock/subsys/$prog",
"filename": "distribution/rpm/src/main/packaging/init.d/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -6,7 +6,7 @@\n # behavior, those variables are:\n #\n # ES_CLASSPATH -- A Java classpath containing everything necessary to run.\n-# ES_JVM_OPTIONS -- Path to file containing JVM options\n+# CONF_DIR -- Path to config directory\n # ES_JAVA_OPTS -- External Java Opts on top of the defaults set\n #\n # Optionally, exact memory values can be set using the `ES_JAVA_OPTS`.\n@@ -81,14 +81,10 @@ ES_HOME=`dirname \"$SCRIPT\"`/..\n # make ELASTICSEARCH_HOME absolute\n ES_HOME=`cd \"$ES_HOME\"; pwd`\n \n-if [ -z \"$ES_JVM_OPTIONS\" ]; then\n- for jvm_options in \"$ES_HOME\"/config/jvm.options \\\n- /etc/elasticsearch/jvm.options; do\n- if [ -r \"$jvm_options\" ]; then\n- ES_JVM_OPTIONS=$jvm_options\n- break\n- fi\n- done\n+if [ -z \"$CONF_DIR\" ]; then\n+ ES_JVM_OPTIONS=\"$ES_HOME\"/config/jvm.options\n+else\n+ ES_JVM_OPTIONS=\"$CONF_DIR\"/jvm.options\n fi\n \n ES_JAVA_OPTS=\"$(parse_jvm_options \"$ES_JVM_OPTIONS\") $ES_JAVA_OPTS\"",
"filename": "distribution/src/main/resources/bin/elasticsearch",
"status": "modified"
},
{
"diff": "@@ -126,9 +126,11 @@ if exist \"%JAVA_HOME%\\bin\\client\\jvm.dll\" (\n )\n \n :foundJVM\n-if \"%ES_JVM_OPTIONS%\" == \"\" (\n-set ES_JVM_OPTIONS=%ES_HOME%\\config\\jvm.options\n-)\n+CALL \"%ES_HOME%\\bin\\elasticsearch.in.bat\"\n+\n+if \"%CONF_DIR%\" == \"\" set CONF_DIR=%ES_HOME%\\config\n+\n+set ES_JVM_OPTIONS=%CONF_DIR%\\jvm.options\n \n if not \"%ES_JAVA_OPTS%\" == \"\" set ES_JAVA_OPTS=%ES_JAVA_OPTS: =;%\n \n@@ -205,10 +207,6 @@ if \"%JVM_SS%\" == \"\" (\n goto:eof\n )\n \n-CALL \"%ES_HOME%\\bin\\elasticsearch.in.bat\"\n-\n-if \"%CONF_DIR%\" == \"\" set CONF_DIR=%ES_HOME%\\config\n-\n set ES_PARAMS=-Delasticsearch;-Des.path.home=\"%ES_HOME%\"\n \n if \"%ES_START_TYPE%\" == \"\" set ES_START_TYPE=manual",
"filename": "distribution/src/main/resources/bin/elasticsearch-service.bat",
"status": "modified"
},
{
"diff": "@@ -35,9 +35,11 @@ FOR /F \"usebackq tokens=1* delims= \" %%A IN (!params!) DO (\n \n SET HOSTNAME=%COMPUTERNAME%\n \n-if \"%ES_JVM_OPTIONS%\" == \"\" (\n+if \"%CONF_DIR%\" == \"\" (\n rem '0' is the batch file, '~dp' appends the drive and path\n set \"ES_JVM_OPTIONS=%~dp0\\..\\config\\jvm.options\"\n+) else (\n+set \"ES_JVM_OPTIONS=%CONF_DIR%\\jvm.options\"\n )\n \n @setlocal",
"filename": "distribution/src/main/resources/bin/elasticsearch.bat",
"status": "modified"
},
{
"diff": "@@ -35,3 +35,11 @@ only. Related, this means that the environment variables `DATA_DIR` and\n We previously attempted to ensure that Elasticsearch could be started on 32-bit\n JVM (although a bootstrap check prevented using a 32-bit JVM in production). We\n are no longer maintaining this attempt.\n+\n+==== `ES_JVM_OPTIONS`is no longer supported\n+\n+The environment variable `ES_JVM_OPTIONS` that enabled a custom location for the\n+`jvm.options` file has been removed in favor of using the environment variable\n+`CONF_DIR`. This environment variable is already used in the packaging to\n+support relocating the configuration files so this change merely aligns the\n+other configuration files with the location of the `jvm.options` file.",
"filename": "docs/reference/migration/migrate_6_0/packaging.asciidoc",
"status": "modified"
},
{
"diff": "@@ -67,10 +67,9 @@ When using the zip or tarball packages, the `config`, `data`, `logs` and\n default.\n \n It is a good idea to place these directories in a different location so that\n-there is no chance of deleting them when upgrading Elasticsearch. These\n-custom paths can be <<path-settings,configured>> with the `path.conf`,\n-`path.logs`, and `path.data` settings, and using `ES_JVM_OPTIONS` to specify\n-the location of the `jvm.options` file.\n+there is no chance of deleting them when upgrading Elasticsearch. These custom\n+paths can be <<path-settings,configured>> with the `CONF_DIR` environment\n+variable, and the `path.logs`, and `path.data` settings.\n \n The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the\n appropriate place for each operating system.",
"filename": "docs/reference/setup/rolling_upgrade.asciidoc",
"status": "modified"
},
{
"diff": "@@ -111,8 +111,10 @@ setup() {\n \n @test \"[TAR] start Elasticsearch with custom JVM options\" {\n local es_java_opts=$ES_JAVA_OPTS\n- local es_jvm_options=$ES_JVM_OPTIONS\n+ local conf_dir=$CONF_DIR\n local temp=`mktemp -d`\n+ cp \"$ESCONFIG\"/elasticsearch.yml \"$temp\"\n+ cp \"$ESCONFIG\"/log4j2.properties \"$temp\"\n touch \"$temp/jvm.options\"\n chown -R elasticsearch:elasticsearch \"$temp\"\n echo \"-Xms512m\" >> \"$temp/jvm.options\"\n@@ -121,13 +123,13 @@ setup() {\n # manager exception before we have configured logging; this will fail\n # startup since we detect usages of logging before it is configured\n echo \"-Dlog4j2.disable.jmx=true\" >> \"$temp/jvm.options\"\n- export ES_JVM_OPTIONS=\"$temp/jvm.options\"\n+ export CONF_DIR=\"$temp\"\n export ES_JAVA_OPTS=\"-XX:-UseCompressedOops\"\n start_elasticsearch_service\n curl -s -XGET localhost:9200/_nodes | fgrep '\"heap_init_in_bytes\":536870912'\n curl -s -XGET localhost:9200/_nodes | fgrep '\"using_compressed_ordinary_object_pointers\":\"false\"'\n stop_elasticsearch_service\n- export ES_JVM_OPTIONS=$es_jvm_options\n+ export CONF_DIR=$CONF_DIR\n export ES_JAVA_OPTS=$es_java_opts\n }\n ",
"filename": "qa/vagrant/src/test/resources/packaging/tests/20_tar_package.bats",
"status": "modified"
},
{
"diff": "@@ -187,6 +187,30 @@ setup() {\n systemctl stop elasticsearch.service\n }\n \n+@test \"[SYSTEMD] start Elasticsearch with custom JVM options\" {\n+ assert_file_exist $ESENVFILE\n+ local temp=`mktemp -d`\n+ cp \"$ESCONFIG\"/elasticsearch.yml \"$temp\"\n+ cp \"$ESCONFIG\"/log4j2.properties \"$temp\"\n+ touch \"$temp/jvm.options\"\n+ chown -R elasticsearch:elasticsearch \"$temp\"\n+ echo \"-Xms512m\" >> \"$temp/jvm.options\"\n+ echo \"-Xmx512m\" >> \"$temp/jvm.options\"\n+ # we have to disable Log4j from using JMX lest it will hit a security\n+ # manager exception before we have configured logging; this will fail\n+ # startup since we detect usages of logging before it is configured\n+ echo \"-Dlog4j2.disable.jmx=true\" >> \"$temp/jvm.options\"\n+ cp $ESENVFILE \"$temp/elasticsearch\"\n+ echo \"CONF_DIR=\\\"$temp\\\"\" >> $ESENVFILE\n+ echo \"ES_JAVA_OPTS=\\\"-XX:-UseCompressedOops\\\"\" >> $ESENVFILE\n+ service elasticsearch start\n+ wait_for_elasticsearch_status\n+ curl -s -XGET localhost:9200/_nodes | fgrep '\"heap_init_in_bytes\":536870912'\n+ curl -s -XGET localhost:9200/_nodes | fgrep '\"using_compressed_ordinary_object_pointers\":\"false\"'\n+ service elasticsearch stop\n+ cp \"$temp/elasticsearch\" $ESENVFILE\n+}\n+\n @test \"[SYSTEMD] masking systemd-sysctl\" {\n clean_before_test\n ",
"filename": "qa/vagrant/src/test/resources/packaging/tests/60_systemd.bats",
"status": "modified"
},
{
"diff": "@@ -120,9 +120,9 @@ setup() {\n \n @test \"[INIT.D] start Elasticsearch with custom JVM options\" {\n assert_file_exist $ESENVFILE\n- local es_java_opts=$ES_JAVA_OPTS\n- local es_jvm_options=$ES_JVM_OPTIONS\n local temp=`mktemp -d`\n+ cp \"$ESCONFIG\"/elasticsearch.yml \"$temp\"\n+ cp \"$ESCONFIG\"/log4j2.properties \"$temp\"\n touch \"$temp/jvm.options\"\n chown -R elasticsearch:elasticsearch \"$temp\"\n echo \"-Xms512m\" >> \"$temp/jvm.options\"\n@@ -132,7 +132,7 @@ setup() {\n # startup since we detect usages of logging before it is configured\n echo \"-Dlog4j2.disable.jmx=true\" >> \"$temp/jvm.options\"\n cp $ESENVFILE \"$temp/elasticsearch\"\n- echo \"ES_JVM_OPTIONS=\\\"$temp/jvm.options\\\"\" >> $ESENVFILE\n+ echo \"CONF_DIR=\\\"$temp\\\"\" >> $ESENVFILE\n echo \"ES_JAVA_OPTS=\\\"-XX:-UseCompressedOops\\\"\" >> $ESENVFILE\n service elasticsearch start\n wait_for_elasticsearch_status",
"filename": "qa/vagrant/src/test/resources/packaging/tests/70_sysv_initd.bats",
"status": "modified"
},
{
"diff": "@@ -147,7 +147,7 @@ fi\n move_config\n \n CONF_DIR=\"$ESCONFIG\" install_jvm_example\n- CONF_DIR=\"$ESCONFIG\" ES_JVM_OPTIONS=\"$ESCONFIG/jvm.options\" start_elasticsearch_service\n+ CONF_DIR=\"$ESCONFIG\" start_elasticsearch_service\n diff <(curl -s localhost:9200/_cat/configured_example | sed 's/ //g') <(echo \"foo\")\n stop_elasticsearch_service\n CONF_DIR=\"$ESCONFIG\" remove_jvm_example",
"filename": "qa/vagrant/src/test/resources/packaging/tests/module_and_plugin_test_cases.bash",
"status": "modified"
},
{
"diff": "@@ -339,10 +339,8 @@ run_elasticsearch_service() {\n if [ ! -z \"$CONF_DIR\" ] ; then\n if is_dpkg ; then\n echo \"CONF_DIR=$CONF_DIR\" >> /etc/default/elasticsearch;\n- echo \"ES_JVM_OPTIONS=$ES_JVM_OPTIONS\" >> /etc/default/elasticsearch;\n elif is_rpm; then\n echo \"CONF_DIR=$CONF_DIR\" >> /etc/sysconfig/elasticsearch;\n- echo \"ES_JVM_OPTIONS=$ES_JVM_OPTIONS\" >> /etc/sysconfig/elasticsearch\n fi\n fi\n \n@@ -370,7 +368,7 @@ run_elasticsearch_service() {\n # This line is attempting to emulate the on login behavior of /usr/share/upstart/sessions/jayatana.conf\n [ -f /usr/share/java/jayatanaag.jar ] && export JAVA_TOOL_OPTIONS=\"-javaagent:/usr/share/java/jayatanaag.jar\"\n # And now we can start Elasticsearch normally, in the background (-d) and with a pidfile (-p).\n-export ES_JVM_OPTIONS=$ES_JVM_OPTIONS\n+export CONF_DIR=$CONF_DIR\n export ES_JAVA_OPTS=$ES_JAVA_OPTS\n $timeoutCommand/tmp/elasticsearch/bin/elasticsearch $background -p /tmp/elasticsearch/elasticsearch.pid $ES_PATH_CONF $commandLineArgs\n BASH",
"filename": "qa/vagrant/src/test/resources/packaging/utils/utils.bash",
"status": "modified"
}
]
} |
{
"body": "Doc values have moved to an iterator API in Lucene 7. We should fix DocValueFieldsFetchSubPhase to only pull the iterator once and then iterate over top hits in doc id order in order to limit the cost of calling `advance` on the doc value iterators.",
"comments": [
{
"body": "This probably also means we should try to benchmark `doc_values` fields as we might have to update our recommendations as to when it is a good idea, eg. compared to using `_source`.",
"created_at": "2017-05-31T17:01:41Z"
}
],
"number": 24986,
"title": "DocValueFieldsFetchSubPhase should pull one iterator for all top hits"
} | {
"body": "\r\nCloses #24986",
"number": 25644,
"review_comments": [
{
"body": "remove this commented code?",
"created_at": "2017-07-11T10:06:03Z"
},
{
"body": "isn't this also changing the order in which the hits are being serialized?",
"created_at": "2017-07-11T10:07:06Z"
},
{
"body": "Yep, I pushed a change just before you commented :)",
"created_at": "2017-07-11T10:08:44Z"
},
{
"body": "cool, thx!",
"created_at": "2017-07-11T10:09:10Z"
},
{
"body": "Yep, only leaving it here until I ensure the tests pass so I have a reference of the old code",
"created_at": "2017-07-11T10:09:11Z"
},
{
"body": "matter of taste but I tend to like method refs better, ie. `Arrays.sort(hits, Comparators.comparing(SearchHit::docId))`",
"created_at": "2017-07-11T12:01:34Z"
},
{
"body": "you could do `if (subReaderContext == null || hit.docId() >= subReaderContext.docBase + subReaderContext.reader().maxDoc())` to avoid doing the `ReaderUtil.subIndex` binary search for every doc",
"created_at": "2017-07-11T12:07:26Z"
},
{
"body": "I'm wondering whether we should keep things this way here and do the cloning in `get(int index)`/`getValue()` to help GC by having even shorter lived objects, and potentially make escape analysis more likely to not ever create those objects.",
"created_at": "2017-07-12T08:12:45Z"
}
],
"title": "Changes DocValueFieldsFetchSubPhase to reuse doc values iterators for multiple hits"
} | {
"commits": [
{
"message": "Changes DocValueFieldsFetchSubPhase to reuse doc values iterators for multiple hits\n\nCloses #24986"
},
{
"message": "iter"
},
{
"message": "Update ScriptDocValues to not reuse GeoPoint and Date objects"
},
{
"message": "added Javadoc about script value re-use"
}
],
"files": [
{
"diff": "@@ -43,8 +43,13 @@\n \n \n /**\n- * Script level doc values, the assumption is that any implementation will implement a <code>getValue</code>\n- * and a <code>getValues</code> that return the relevant type that then can be used in scripts.\n+ * Script level doc values, the assumption is that any implementation will\n+ * implement a <code>getValue</code> and a <code>getValues</code> that return\n+ * the relevant type that then can be used in scripts.\n+ * \n+ * Implementations should not internally re-use objects for the values that they\n+ * return as a single {@link ScriptDocValues} instance can be reused to return\n+ * values form multiple documents.\n */\n public abstract class ScriptDocValues<T> extends AbstractList<T> {\n /**\n@@ -266,7 +271,7 @@ void refreshArray() throws IOException {\n return;\n }\n for (int i = 0; i < count; i++) {\n- dates[i].setMillis(in.nextValue());\n+ dates[i] = new MutableDateTime(in.nextValue(), DateTimeZone.UTC);\n }\n }\n }\n@@ -340,7 +345,7 @@ public void setNextDocId(int docId) throws IOException {\n resize(in.docValueCount());\n for (int i = 0; i < count; i++) {\n GeoPoint point = in.nextValue();\n- values[i].reset(point.lat(), point.lon());\n+ values[i] = new GeoPoint(point.lat(), point.lon());\n }\n } else {\n resize(0);",
"filename": "core/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java",
"status": "modified"
},
{
"diff": "@@ -18,15 +18,19 @@\n */\n package org.elasticsearch.search.fetch.subphase;\n \n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.ReaderUtil;\n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.index.fielddata.AtomicFieldData;\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n \n@@ -38,7 +42,8 @@\n public final class DocValueFieldsFetchSubPhase implements FetchSubPhase {\n \n @Override\n- public void hitExecute(SearchContext context, HitContext hitContext) throws IOException {\n+ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException {\n+\n if (context.collapse() != null) {\n // retrieve the `doc_value` associated with the collapse field\n String name = context.collapse().getFieldType().name();\n@@ -48,26 +53,40 @@ public void hitExecute(SearchContext context, HitContext hitContext) throws IOEx\n context.docValueFieldsContext().fields().add(name);\n }\n }\n+\n if (context.docValueFieldsContext() == null) {\n return;\n }\n+\n+ hits = hits.clone(); // don't modify the incoming hits\n+ Arrays.sort(hits, (a, b) -> Integer.compare(a.docId(), b.docId()));\n+\n for (String field : context.docValueFieldsContext().fields()) {\n- if (hitContext.hit().fieldsOrNull() == null) {\n- hitContext.hit().fields(new HashMap<>(2));\n- }\n- DocumentField hitField = hitContext.hit().getFields().get(field);\n- if (hitField == null) {\n- hitField = new DocumentField(field, new ArrayList<>(2));\n- hitContext.hit().getFields().put(field, hitField);\n- }\n MappedFieldType fieldType = context.mapperService().fullName(field);\n if (fieldType != null) {\n- /* Because this is called once per document we end up creating a new ScriptDocValues for every document which is important\n- * because the values inside ScriptDocValues might be reused for different documents (Dates do this). 
*/\n- AtomicFieldData data = context.fieldData().getForField(fieldType).load(hitContext.readerContext());\n- ScriptDocValues<?> values = data.getScriptValues();\n- values.setNextDocId(hitContext.docId());\n- hitField.getValues().addAll(values);\n+ LeafReaderContext subReaderContext = null;\n+ AtomicFieldData data = null;\n+ ScriptDocValues<?> values = null;\n+ for (SearchHit hit : hits) {\n+ // if the reader index has changed we need to get a new doc values reader instance\n+ if (subReaderContext == null || hit.docId() >= subReaderContext.docBase + subReaderContext.reader().maxDoc()) {\n+ int readerIndex = ReaderUtil.subIndex(hit.docId(), context.searcher().getIndexReader().leaves());\n+ subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex);\n+ data = context.fieldData().getForField(fieldType).load(subReaderContext);\n+ values = data.getScriptValues();\n+ }\n+ int subDocId = hit.docId() - subReaderContext.docBase;\n+ values.setNextDocId(subDocId);\n+ if (hit.fieldsOrNull() == null) {\n+ hit.fields(new HashMap<>(2));\n+ }\n+ DocumentField hitField = hit.getFields().get(field);\n+ if (hitField == null) {\n+ hitField = new DocumentField(field, new ArrayList<>(2));\n+ hit.getFields().put(field, hitField);\n+ }\n+ hitField.getValues().addAll(values);\n+ }\n }\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java",
"status": "modified"
}
]
} |
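
The record above replaces the per-hit `hitExecute` callback with a `hitsExecute` pass that sorts the hits by doc id and pulls at most one doc-values iterator per segment. As a rough, standalone illustration of that iterator-reuse pattern, here is a sketch against plain Lucene's `SortedNumericDocValues`; the class name `PerSegmentDocValuesFetcher`, the numeric-field assumption, and the return type are illustrative only and are not part of the Elasticsearch code in the diff.

```java
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.ReaderUtil;
import org.apache.lucene.index.SortedNumericDocValues;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Collects doc values for a numeric field across a set of top-level doc ids,
 * pulling at most one SortedNumericDocValues iterator per segment and visiting
 * hits in doc id order so the forward-only iterator never has to rewind.
 * Assumes the doc ids are distinct, as they are in a top-hits list.
 */
public class PerSegmentDocValuesFetcher {

    public static Map<Integer, List<Long>> fetch(IndexReader reader, String field, int[] topDocIds) throws IOException {
        int[] docIds = topDocIds.clone();   // don't reorder the caller's hits
        Arrays.sort(docIds);                // the iterator API only moves forward

        Map<Integer, List<Long>> valuesPerDoc = new HashMap<>();
        LeafReaderContext leaf = null;
        SortedNumericDocValues dv = null;

        for (int docId : docIds) {
            // only load a new per-segment iterator when we cross a segment boundary
            if (leaf == null || docId >= leaf.docBase + leaf.reader().maxDoc()) {
                int leafIndex = ReaderUtil.subIndex(docId, reader.leaves());
                leaf = reader.leaves().get(leafIndex);
                dv = DocValues.getSortedNumeric(leaf.reader(), field);
            }
            List<Long> values = new ArrayList<>();
            if (dv.advanceExact(docId - leaf.docBase)) {   // segment-relative doc id
                for (int i = 0; i < dv.docValueCount(); i++) {
                    values.add(dv.nextValue());
                }
            }
            valuesPerDoc.put(docId, values);
        }
        return valuesPerDoc;
    }
}
```

The `docId >= leaf.docBase + leaf.reader().maxDoc()` boundary check mirrors the review suggestion in the record: it avoids repeating the `ReaderUtil.subIndex` binary search for every hit and only pays that cost when the next hit falls in a later segment.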
{
"body": "**Elasticsearch version**: master\r\n**Plugins installed**: none\r\n**JVM version**: 1.8.0_121\r\n**OS version**: macOS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen running a search on a remote cluster, remote documents include the cluster alias in the `_index` field so that the document can be properly addressed. This behavior does not apply to aggregations. When doing a `terms` agg on `_index` for instance, the terms do not include the cluster alias. The `top_hits` agg also doesn't enhance `_index` with the cluster alias.\r\n\r\n**Steps to reproduce**:\r\n 1. Start a node that remotes to itself\r\n\r\n ```sh\r\n ./bin/elasticsearch -E search.remote.local.seeds=localhost:9300\r\n ```\r\n\r\n 2. Index a document\r\n\r\n ```sh\r\n curl -XPOST \"http://localhost:9200/index/doc/id\" -H 'Content-Type: application/json' -d'\r\n {\r\n \"foo\": \"bar\"\r\n }'\r\n ```\r\n\r\n 3. Execute a search for the document via local and remote index names, which results in two unique documents (correct behavior) but one index term in the aggs and what appears to be two identical `top_hits`\r\n\r\n ```sh\r\n curl -XPOST \"http://localhost:9200/index,local:index/_search\" -H 'Content-Type: application/json' -d'\r\n {\r\n \"aggs\": {\r\n \"indices\": {\r\n \"terms\": {\r\n \"field\": \"_index\"\r\n },\r\n \"aggs\": {\r\n \"hits\": {\r\n \"top_hits\": {\r\n \"size\": 10\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }'\r\n ```\r\n\r\n results:\r\n ```json\r\n {\r\n \"took\": 7,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 10,\r\n \"successful\": 10,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n },\r\n {\r\n \"_index\": \"local:index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n }\r\n ]\r\n },\r\n \"aggregations\": {\r\n \"indices\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"key\": \"index\",\r\n \"doc_count\": 2,\r\n \"hits\": {\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n },\r\n {\r\n \"_index\": \"index\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"id\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"foo\": \"bar\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n ```\r\n",
"comments": [],
"number": 25606,
"title": "Cross cluster aggregations don't include cluster alias in _index"
} | {
"body": "We lost the cluster alias due to some special casing in inner hits\r\nand due to the fact that we didn't pass on the alias to the shard request.\r\nThis change ensures that we have the cluster alias present on the shard to\r\nensure all SearchShardTarget reads preserve the alias.\r\n\r\nRelates to #25606\r\n",
"number": 25627,
"review_comments": [
{
"body": "I wonder what happens when we parse this back for the high level REST client. We will not split the string but that should be ok, getIndex just returns `clusterAlias:index` .",
"created_at": "2017-07-10T12:39:42Z"
},
{
"body": "yeah that was my plan actually. :) I checked the code to make sure this happens?!",
"created_at": "2017-07-11T09:33:09Z"
}
],
"title": "Ensure remote cluster alias is preserved in inner hits aggs"
} | {
"commits": [
{
"message": "Ensure remote cluster alias is preserved in inner hits aggs\n\nWe lost the cluster alias due to some special caseing in inner hits\nand due to the fact that we didn't pass on the alias to the shard request.\nThis change ensures that we have the cluster alias present on the shard to\nensure all SearchShardTarget reads preserve the alias.\n\nRelates to #25606"
},
{
"message": "add missing files"
},
{
"message": "fix tests"
},
{
"message": "fix checkstyle"
},
{
"message": "Merge branch 'master' into fix_inner_hits_remote_cluster_index_name"
},
{
"message": "fix complilation due to renamed version constant in master"
}
],
"files": [
{
"diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.search.SearchPhaseResult;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.internal.AliasFilter;\n@@ -297,11 +298,12 @@ public final void onFailure(Exception e) {\n }\n \n public final ShardSearchTransportRequest buildShardSearchRequest(SearchShardIterator shardIt) {\n+ String clusterAlias = shardIt.getClusterAlias();\n AliasFilter filter = aliasFilter.get(shardIt.shardId().getIndex().getUUID());\n assert filter != null;\n float indexBoost = concreteIndexBoosts.getOrDefault(shardIt.shardId().getIndex().getUUID(), DEFAULT_INDEX_BOOST);\n return new ShardSearchTransportRequest(shardIt.getOriginalIndices(), request, shardIt.shardId(), getNumShards(),\n- filter, indexBoost, timeProvider.getAbsoluteStartMillis());\n+ filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias);\n }\n \n /**",
"filename": "core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java",
"status": "modified"
},
{
"diff": "@@ -216,9 +216,6 @@ static BiFunction<String, String, DiscoveryNode> processRemoteShards(Map<String,\n //add the cluster name to the remote index names for indices disambiguation\n //this ends up in the hits returned with the search response\n ShardId shardId = clusterSearchShardsGroup.getShardId();\n- Index remoteIndex = shardId.getIndex();\n- Index index = new Index(RemoteClusterAware.buildRemoteIndexName(clusterAlias, remoteIndex.getName()),\n- remoteIndex.getUUID());\n final AliasFilter aliasFilter;\n if (indicesAndFilters == null) {\n aliasFilter = AliasFilter.EMPTY;\n@@ -229,10 +226,10 @@ static BiFunction<String, String, DiscoveryNode> processRemoteShards(Map<String,\n String[] aliases = aliasFilter.getAliases();\n String[] finalIndices = aliases.length == 0 ? new String[] {shardId.getIndexName()} : aliases;\n // here we have to map the filters to the UUID since from now on we use the uuid for the lookup\n- aliasFilterMap.put(remoteIndex.getUUID(), aliasFilter);\n+ aliasFilterMap.put(shardId.getIndex().getUUID(), aliasFilter);\n final OriginalIndices originalIndices = remoteIndicesByCluster.get(clusterAlias);\n assert originalIndices != null : \"original indices are null for clusterAlias: \" + clusterAlias;\n- SearchShardIterator shardIterator = new SearchShardIterator(clusterAlias, new ShardId(index, shardId.getId()),\n+ SearchShardIterator shardIterator = new SearchShardIterator(clusterAlias, shardId,\n Arrays.asList(clusterSearchShardsGroup.getShards()), new OriginalIndices(finalIndices,\n originalIndices.indicesOptions()));\n remoteShardIterators.add(shardIterator);",
"filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;\n import org.elasticsearch.search.lookup.SourceLookup;\n import org.elasticsearch.search.suggest.completion.CompletionSuggestion;\n+import org.elasticsearch.transport.RemoteClusterAware;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -106,6 +107,7 @@ public final class SearchHit implements Streamable, ToXContentObject, Iterable<D\n private SearchShardTarget shard;\n \n private transient String index;\n+ private transient String clusterAlias;\n \n private Map<String, Object> sourceAsMap;\n \n@@ -329,9 +331,17 @@ public void shard(SearchShardTarget target) {\n this.shard = target;\n if (target != null) {\n this.index = target.getIndex();\n+ this.clusterAlias = target.getClusterAlias();\n }\n }\n \n+ /**\n+ * Returns the cluster alias this hit comes from or null if it comes from a local cluster\n+ */\n+ public String getClusterAlias() {\n+ return clusterAlias;\n+ }\n+\n public void matchedQueries(String[] matchedQueries) {\n this.matchedQueries = matchedQueries;\n }\n@@ -408,7 +418,7 @@ public XContentBuilder toInnerXContent(XContentBuilder builder, Params params) t\n nestedIdentity.toXContent(builder, params);\n } else {\n if (index != null) {\n- builder.field(Fields._INDEX, index);\n+ builder.field(Fields._INDEX, RemoteClusterAware.buildRemoteIndexName(clusterAlias, index));\n }\n if (type != null) {\n builder.field(Fields._TYPE, type);",
"filename": "core/src/main/java/org/elasticsearch/search/SearchHit.java",
"status": "modified"
},
{
"diff": "@@ -61,12 +61,6 @@ public SearchHits(SearchHit[] hits, long totalHits, float maxScore) {\n this.maxScore = maxScore;\n }\n \n- public void shardTarget(SearchShardTarget shardTarget) {\n- for (SearchHit hit : hits) {\n- hit.shard(shardTarget);\n- }\n- }\n-\n /**\n * The total number of hits that matches the search request.\n */",
"filename": "core/src/main/java/org/elasticsearch/search/SearchHits.java",
"status": "modified"
},
{
"diff": "@@ -512,7 +512,7 @@ public DefaultSearchContext createSearchContext(ShardSearchRequest request, Time\n IndexService indexService = indicesService.indexServiceSafe(request.shardId().getIndex());\n IndexShard indexShard = indexService.getShard(request.shardId().getId());\n SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().getId(),\n- indexShard.shardId(), null, OriginalIndices.NONE);\n+ indexShard.shardId(), request.getClusterAlias(), OriginalIndices.NONE);\n Engine.Searcher engineSearcher = searcher == null ? indexShard.acquireSearcher(\"search\") : searcher;\n \n final DefaultSearchContext searchContext = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget,",
"filename": "core/src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.OriginalIndices;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -27,6 +28,7 @@\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.transport.RemoteClusterAware;\n \n import java.io.IOException;\n \n@@ -40,7 +42,7 @@ public final class SearchShardTarget implements Writeable, Comparable<SearchShar\n //original indices and cluster alias are only needed in the coordinating node throughout the search request execution.\n //no need to serialize them as part of SearchShardTarget.\n private final transient OriginalIndices originalIndices;\n- private final transient String clusterAlias;\n+ private final String clusterAlias;\n \n public SearchShardTarget(StreamInput in) throws IOException {\n if (in.readBoolean()) {\n@@ -50,7 +52,11 @@ public SearchShardTarget(StreamInput in) throws IOException {\n }\n shardId = ShardId.readShardId(in);\n this.originalIndices = null;\n- this.clusterAlias = null;\n+ if (in.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ clusterAlias = in.readOptionalString();\n+ } else {\n+ clusterAlias = null;\n+ }\n }\n \n public SearchShardTarget(String nodeId, ShardId shardId, String clusterAlias, OriginalIndices originalIndices) {\n@@ -61,8 +67,8 @@ public SearchShardTarget(String nodeId, ShardId shardId, String clusterAlias, Or\n }\n \n //this constructor is only used in tests\n- public SearchShardTarget(String nodeId, Index index, int shardId) {\n- this(nodeId, new ShardId(index, shardId), null, OriginalIndices.NONE);\n+ public SearchShardTarget(String nodeId, Index index, int shardId, String clusterAlias) {\n+ this(nodeId, new ShardId(index, shardId), clusterAlias, OriginalIndices.NONE);\n }\n \n @Nullable\n@@ -108,6 +114,9 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeText(nodeId);\n }\n shardId.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ out.writeOptionalString(clusterAlias);\n+ }\n }\n \n @Override\n@@ -117,7 +126,7 @@ public boolean equals(Object o) {\n SearchShardTarget that = (SearchShardTarget) o;\n if (shardId.equals(that.shardId) == false) return false;\n if (nodeId != null ? !nodeId.equals(that.nodeId) : that.nodeId != null) return false;\n-\n+ if (clusterAlias != null ? !clusterAlias.equals(that.clusterAlias) : that.clusterAlias != null) return false;\n return true;\n }\n \n@@ -126,14 +135,17 @@ public int hashCode() {\n int result = nodeId != null ? nodeId.hashCode() : 0;\n result = 31 * result + (shardId.getIndexName() != null ? shardId.getIndexName().hashCode() : 0);\n result = 31 * result + shardId.hashCode();\n+ result = 31 * result + (clusterAlias != null ? clusterAlias.hashCode() : 0);\n return result;\n }\n \n @Override\n public String toString() {\n+ String shardToString = \"[\" + RemoteClusterAware.buildRemoteIndexName(clusterAlias, shardId.getIndexName()) + \"][\" + shardId.getId()\n+ + \"]\";\n if (nodeId == null) {\n- return \"[_na_]\" + shardId;\n+ return \"[_na_]\" + shardToString;\n }\n- return \"[\" + nodeId + \"]\" + shardId;\n+ return \"[\" + nodeId + \"]\" + shardToString;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/SearchShardTarget.java",
"status": "modified"
},
{
"diff": "@@ -59,6 +59,7 @@\n \n public class ShardSearchLocalRequest implements ShardSearchRequest {\n \n+ private String clusterAlias;\n private ShardId shardId;\n private int numberOfShards;\n private SearchType searchType;\n@@ -76,11 +77,12 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n }\n \n ShardSearchLocalRequest(SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, float indexBoost, long nowInMillis) {\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) {\n this(shardId, numberOfShards, searchRequest.searchType(),\n searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter, indexBoost);\n this.scroll = searchRequest.scroll();\n this.nowInMillis = nowInMillis;\n+ this.clusterAlias = clusterAlias;\n }\n \n public ShardSearchLocalRequest(ShardId shardId, String[] types, long nowInMillis, AliasFilter aliasFilter) {\n@@ -197,6 +199,9 @@ protected void innerReadFrom(StreamInput in) throws IOException {\n }\n nowInMillis = in.readVLong();\n requestCache = in.readOptionalBoolean();\n+ if (in.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ clusterAlias = in.readOptionalString();\n+ }\n }\n \n protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException {\n@@ -216,6 +221,9 @@ protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException\n out.writeVLong(nowInMillis);\n }\n out.writeOptionalBoolean(requestCache);\n+ if (out.getVersion().onOrAfter(Version.V_6_0_0_beta1)) {\n+ out.writeOptionalString(clusterAlias);\n+ }\n }\n \n @Override\n@@ -238,4 +246,9 @@ public void rewrite(QueryShardContext context) throws IOException {\n }\n this.source = source;\n }\n+\n+ @Override\n+ public String getClusterAlias() {\n+ return clusterAlias;\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java",
"status": "modified"
},
{
"diff": "@@ -141,4 +141,10 @@ static QueryBuilder parseAliasFilter(CheckedFunction<byte[], QueryBuilder, IOExc\n }\n }\n \n+ /**\n+ * Returns the cluster alias if this request is for a remote cluster or <code>null</code> if the request if targeted to the local\n+ * cluster.\n+ */\n+ String getClusterAlias();\n+\n }",
"filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java",
"status": "modified"
},
{
"diff": "@@ -54,9 +54,9 @@ public ShardSearchTransportRequest(){\n }\n \n public ShardSearchTransportRequest(OriginalIndices originalIndices, SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, float indexBoost, long nowInMillis) {\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) {\n this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardId, numberOfShards, aliasFilter, indexBoost,\n- nowInMillis);\n+ nowInMillis, clusterAlias);\n this.originalIndices = originalIndices;\n }\n \n@@ -141,6 +141,7 @@ public void readFrom(StreamInput in) throws IOException {\n shardSearchLocalRequest = new ShardSearchLocalRequest();\n shardSearchLocalRequest.innerReadFrom(in);\n originalIndices = OriginalIndices.readOriginalIndices(in);\n+\n }\n \n @Override\n@@ -180,4 +181,9 @@ public String getDescription() {\n // Shard id is enough here, the request itself can be found by looking at the parent task description\n return \"shardId[\" + shardSearchLocalRequest.shardId() + \"]\";\n }\n+\n+ @Override\n+ public String getClusterAlias() {\n+ return shardSearchLocalRequest.getClusterAlias();\n+ }\n }",
"filename": "core/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java",
"status": "modified"
},
{
"diff": "@@ -161,6 +161,6 @@ private static InetSocketAddress parseSeedAddress(String remoteHost) {\n }\n \n public static final String buildRemoteIndexName(String clusterAlias, String indexName) {\n- return clusterAlias + REMOTE_CLUSTER_INDEX_SEPARATOR + indexName;\n+ return clusterAlias != null ? clusterAlias + REMOTE_CLUSTER_INDEX_SEPARATOR + indexName : indexName;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/transport/RemoteClusterAware.java",
"status": "modified"
},
{
"diff": "@@ -114,9 +114,9 @@ public void testGuessRootCause() {\n assertEquals(ElasticsearchException.getExceptionName(rootCauses[0]), \"index_not_found_exception\");\n assertEquals(rootCauses[0].getMessage(), \"no such index\");\n ShardSearchFailure failure = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2, null));\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\",\n new ShardSearchFailure[]{failure, failure1});\n if (randomBoolean()) {\n@@ -135,11 +135,11 @@ public void testGuessRootCause() {\n {\n ShardSearchFailure failure = new ShardSearchFailure(\n new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new QueryShardException(new Index(\"foo1\", \"_na_\"), \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 1, null));\n ShardSearchFailure failure2 = new ShardSearchFailure(new QueryShardException(new Index(\"foo1\", \"_na_\"), \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 2, null));\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\",\n new ShardSearchFailure[]{failure, failure1, failure2});\n final ElasticsearchException[] rootCauses = ex.guessRootCauses();\n@@ -166,9 +166,9 @@ public void testGuessRootCause() {\n public void testDeduplicate() throws IOException {\n {\n ShardSearchFailure failure = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2, null));\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\",\n randomBoolean() ? 
failure1.getCause() : failure.getCause(), new ShardSearchFailure[]{failure, failure1});\n XContentBuilder builder = XContentFactory.jsonBuilder();\n@@ -182,11 +182,11 @@ public void testDeduplicate() throws IOException {\n }\n {\n ShardSearchFailure failure = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new QueryShardException(new Index(\"foo1\", \"_na_\"), \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 1, null));\n ShardSearchFailure failure2 = new ShardSearchFailure(new QueryShardException(new Index(\"foo1\", \"_na_\"), \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo1\", \"_na_\"), 2, null));\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\",\n new ShardSearchFailure[]{failure, failure1, failure2});\n XContentBuilder builder = XContentFactory.jsonBuilder();\n@@ -202,9 +202,9 @@ public void testDeduplicate() throws IOException {\n }\n {\n ShardSearchFailure failure = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2, null));\n NullPointerException nullPointerException = new NullPointerException();\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\", nullPointerException,\n new ShardSearchFailure[]{failure, failure1});\n@@ -891,7 +891,7 @@ public static Tuple<Throwable, ElasticsearchException> randomExceptions() {\n actual = new SearchPhaseExecutionException(\"search\", \"all shards failed\",\n new ShardSearchFailure[]{\n new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1))\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null))\n });\n expected = new ElasticsearchException(\"Elasticsearch exception [type=search_phase_execution_exception, \" +\n \"reason=all shards failed]\");",
"filename": "core/src/test/java/org/elasticsearch/ElasticsearchExceptionTests.java",
"status": "modified"
},
{
"diff": "@@ -65,7 +65,6 @@\n import org.elasticsearch.indices.IndexTemplateMissingException;\n import org.elasticsearch.indices.InvalidIndexTemplateException;\n import org.elasticsearch.indices.recovery.RecoverFilesRecoveryException;\n-import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.repositories.RepositoryException;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException;\n@@ -284,7 +283,7 @@ public void testQueryShardException() throws IOException {\n }\n \n public void testSearchException() throws IOException {\n- SearchShardTarget target = new SearchShardTarget(\"foo\", new Index(\"bar\", \"_na_\"), 1);\n+ SearchShardTarget target = new SearchShardTarget(\"foo\", new Index(\"bar\", \"_na_\"), 1, null);\n SearchException ex = serialize(new SearchException(target, \"hello world\"));\n assertEquals(target, ex.shard());\n assertEquals(ex.getMessage(), \"hello world\");",
"filename": "core/src/test/java/org/elasticsearch/ExceptionSerializationTests.java",
"status": "modified"
},
{
"diff": "@@ -61,13 +61,13 @@ public void testCollect() throws InterruptedException {\n DfsSearchResult dfsSearchResult = new DfsSearchResult(shardID, null);\n dfsSearchResult.setShardIndex(shardID);\n dfsSearchResult.setSearchShardTarget(new SearchShardTarget(\"foo\",\n- new Index(\"bar\", \"baz\"), shardID));\n+ new Index(\"bar\", \"baz\"), shardID, null));\n collector.onResult(dfsSearchResult);});\n break;\n case 2:\n state.add(2);\n executor.execute(() -> collector.onFailure(shardID, new SearchShardTarget(\"foo\", new Index(\"bar\", \"baz\"),\n- shardID), new RuntimeException(\"boom\")));\n+ shardID, null), new RuntimeException(\"boom\")));\n break;\n default:\n fail(\"unknown state\");",
"filename": "core/src/test/java/org/elasticsearch/action/search/CountedCollectorTests.java",
"status": "modified"
},
{
"diff": "@@ -51,8 +51,8 @@ private static DfsSearchResult newSearchResult(int shardIndex, long requestId, S\n public void testDfsWith2Shards() throws IOException {\n AtomicArray<DfsSearchResult> results = new AtomicArray<>(2);\n AtomicReference<AtomicArray<SearchPhaseResult>> responseRef = new AtomicReference<>();\n- results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0)));\n- results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0)));\n+ results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null)));\n+ results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0, null)));\n results.get(0).termsStatistics(new Term[0], new TermStatistics[0]);\n results.get(1).termsStatistics(new Term[0], new TermStatistics[0]);\n \n@@ -64,12 +64,14 @@ public void testDfsWith2Shards() throws IOException {\n public void sendExecuteQuery(Transport.Connection connection, QuerySearchRequest request, SearchTask task,\n SearchActionListener<QuerySearchResult> listener) {\n if (request.id() == 1) {\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0,\n+ null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(2); // the size of the result set\n listener.onResponse(queryResult);\n } else if (request.id() == 2) {\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0,\n+ null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(84, 2.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(2); // the size of the result set\n listener.onResponse(queryResult);\n@@ -106,8 +108,8 @@ public void run() throws IOException {\n public void testDfsWith1ShardFailed() throws IOException {\n AtomicArray<DfsSearchResult> results = new AtomicArray<>(2);\n AtomicReference<AtomicArray<SearchPhaseResult>> responseRef = new AtomicReference<>();\n- results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0)));\n- results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0)));\n+ results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null)));\n+ results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0, null)));\n results.get(0).termsStatistics(new Term[0], new TermStatistics[0]);\n results.get(1).termsStatistics(new Term[0], new TermStatistics[0]);\n \n@@ -119,7 +121,8 @@ public void testDfsWith1ShardFailed() throws IOException {\n public void sendExecuteQuery(Transport.Connection connection, QuerySearchRequest request, SearchTask task,\n SearchActionListener<QuerySearchResult> listener) {\n if (request.id() == 1) {\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0,\n+ null));\n queryResult.topDocs(new 
TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(2); // the size of the result set\n listener.onResponse(queryResult);\n@@ -161,8 +164,8 @@ public void run() throws IOException {\n public void testFailPhaseOnException() throws IOException {\n AtomicArray<DfsSearchResult> results = new AtomicArray<>(2);\n AtomicReference<AtomicArray<SearchPhaseResult>> responseRef = new AtomicReference<>();\n- results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0)));\n- results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0)));\n+ results.set(0, newSearchResult(0, 1, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null)));\n+ results.set(1, newSearchResult(1, 2, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 0, null)));\n results.get(0).termsStatistics(new Term[0], new TermStatistics[0]);\n results.get(1).termsStatistics(new Term[0], new TermStatistics[0]);\n \n@@ -174,7 +177,8 @@ public void testFailPhaseOnException() throws IOException {\n public void sendExecuteQuery(Transport.Connection connection, QuerySearchRequest request, SearchTask task,\n SearchActionListener<QuerySearchResult> listener) {\n if (request.id() == 1) {\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0,\n+ null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(2); // the size of the result set\n listener.onResponse(queryResult);",
"filename": "core/src/test/java/org/elasticsearch/action/search/DfsQueryPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.search.fetch.FetchSearchResult;\n import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n-import org.elasticsearch.search.internal.InternalSearchResponse;\n import org.elasticsearch.search.query.QuerySearchResult;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.transport.Transport;\n@@ -91,13 +90,13 @@ public void testFetchTwoDocument() throws IOException {\n controller.newSearchPhaseResults(mockSearchPhaseContext.getRequest(), 2);\n AtomicReference<SearchResponse> responseRef = new AtomicReference<>();\n int resultSetSize = randomIntBetween(2, 10);\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize); // the size of the result set\n queryResult.setShardIndex(0);\n results.consumeResult(queryResult);\n \n- queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1));\n+ queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(84, 2.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize);\n queryResult.setShardIndex(1);\n@@ -145,13 +144,13 @@ public void testFailFetchOneDoc() throws IOException {\n controller.newSearchPhaseResults(mockSearchPhaseContext.getRequest(), 2);\n AtomicReference<SearchResponse> responseRef = new AtomicReference<>();\n int resultSetSize = randomIntBetween(2, 10);\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize); // the size of the result set\n queryResult.setShardIndex(0);\n results.consumeResult(queryResult);\n \n- queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1));\n+ queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(84, 2.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize);\n queryResult.setShardIndex(1);\n@@ -204,7 +203,7 @@ public void testFetchDocsConcurrently() throws IOException, InterruptedException\n controller.newSearchPhaseResults(mockSearchPhaseContext.getRequest(), numHits);\n AtomicReference<SearchResponse> responseRef = new AtomicReference<>();\n for (int i = 0; i < numHits; i++) {\n- QuerySearchResult queryResult = new QuerySearchResult(i, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(i, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(i+1, i)}, i), new DocValueFormat[0]);\n queryResult.size(resultSetSize); // the size of the result set\n 
queryResult.setShardIndex(i);\n@@ -259,13 +258,13 @@ public void testExceptionFailsPhase() throws IOException {\n controller.newSearchPhaseResults(mockSearchPhaseContext.getRequest(), 2);\n AtomicReference<SearchResponse> responseRef = new AtomicReference<>();\n int resultSetSize = randomIntBetween(2, 10);\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize); // the size of the result set\n queryResult.setShardIndex(0);\n results.consumeResult(queryResult);\n \n- queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1));\n+ queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(84, 2.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize);\n queryResult.setShardIndex(1);\n@@ -312,13 +311,13 @@ public void testCleanupIrrelevantContexts() throws IOException { // contexts tha\n controller.newSearchPhaseResults(mockSearchPhaseContext.getRequest(), 2);\n AtomicReference<SearchResponse> responseRef = new AtomicReference<>();\n int resultSetSize = 1;\n- QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0));\n+ QuerySearchResult queryResult = new QuerySearchResult(123, new SearchShardTarget(\"node1\", new Index(\"test\", \"na\"), 0, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(42, 1.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize); // the size of the result set\n queryResult.setShardIndex(0);\n results.consumeResult(queryResult);\n \n- queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1));\n+ queryResult = new QuerySearchResult(321, new SearchShardTarget(\"node2\", new Index(\"test\", \"na\"), 1, null));\n queryResult.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(84, 2.0F)}, 2.0F), new DocValueFormat[0]);\n queryResult.size(resultSetSize);\n queryResult.setShardIndex(1);",
"filename": "core/src/test/java/org/elasticsearch/action/search/FetchSearchPhaseTests.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,6 @@\n import com.carrotsearch.randomizedtesting.RandomizedContext;\n import org.apache.lucene.search.ScoreDoc;\n import org.apache.lucene.search.TopDocs;\n-import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.text.Text;\n import org.elasticsearch.common.util.BigArrays;\n@@ -43,7 +42,6 @@\n import org.elasticsearch.search.suggest.Suggest;\n import org.elasticsearch.search.suggest.completion.CompletionSuggestion;\n import org.elasticsearch.test.ESTestCase;\n-import org.elasticsearch.test.TestCluster;\n import org.junit.Before;\n \n import java.io.IOException;\n@@ -54,8 +52,6 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Optional;\n-import java.util.concurrent.Callable;\n-import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.stream.Collectors;\n import java.util.stream.Stream;\n@@ -188,7 +184,7 @@ private AtomicArray<SearchPhaseResult> generateQueryResults(int nShards,\n AtomicArray<SearchPhaseResult> queryResults = new AtomicArray<>(nShards);\n for (int shardIndex = 0; shardIndex < nShards; shardIndex++) {\n QuerySearchResult querySearchResult = new QuerySearchResult(shardIndex,\n- new SearchShardTarget(\"\", new Index(\"\", \"\"), shardIndex));\n+ new SearchShardTarget(\"\", new Index(\"\", \"\"), shardIndex, null));\n TopDocs topDocs = new TopDocs(0, new ScoreDoc[0], 0);\n if (searchHitsSize > 0) {\n int nDocs = randomIntBetween(0, searchHitsSize);\n@@ -256,7 +252,7 @@ private AtomicArray<SearchPhaseResult> generateFetchResults(int nShards, ScoreDo\n AtomicArray<SearchPhaseResult> fetchResults = new AtomicArray<>(nShards);\n for (int shardIndex = 0; shardIndex < nShards; shardIndex++) {\n float maxScore = -1F;\n- SearchShardTarget shardTarget = new SearchShardTarget(\"\", new Index(\"\", \"\"), shardIndex);\n+ SearchShardTarget shardTarget = new SearchShardTarget(\"\", new Index(\"\", \"\"), shardIndex, null);\n FetchSearchResult fetchSearchResult = new FetchSearchResult(shardIndex, shardTarget);\n List<SearchHit> searchHits = new ArrayList<>();\n for (ScoreDoc scoreDoc : mergedSearchDocs) {\n@@ -293,23 +289,23 @@ public void testConsumer() {\n request.source(new SearchSourceBuilder().aggregation(AggregationBuilders.avg(\"foo\")));\n request.setBatchedReduceSize(bufferSize);\n InitialSearchPhase.SearchPhaseResults<SearchPhaseResult> consumer = searchPhaseController.newSearchPhaseResults(request, 3);\n- QuerySearchResult result = new QuerySearchResult(0, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0));\n+ QuerySearchResult result = new QuerySearchResult(0, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0, null));\n result.topDocs(new TopDocs(0, new ScoreDoc[0], 0.0F), new DocValueFormat[0]);\n InternalAggregations aggs = new InternalAggregations(Arrays.asList(new InternalMax(\"test\", 1.0D, DocValueFormat.RAW,\n Collections.emptyList(), Collections.emptyMap())));\n result.aggregations(aggs);\n result.setShardIndex(0);\n consumer.consumeResult(result);\n \n- result = new QuerySearchResult(1, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0));\n+ result = new QuerySearchResult(1, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0, null));\n result.topDocs(new TopDocs(0, new ScoreDoc[0], 0.0F), new DocValueFormat[0]);\n aggs = new InternalAggregations(Arrays.asList(new InternalMax(\"test\", 3.0D, DocValueFormat.RAW,\n Collections.emptyList(), 
Collections.emptyMap())));\n result.aggregations(aggs);\n result.setShardIndex(2);\n consumer.consumeResult(result);\n \n- result = new QuerySearchResult(1, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0));\n+ result = new QuerySearchResult(1, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), 0, null));\n result.topDocs(new TopDocs(0, new ScoreDoc[0], 0.0F), new DocValueFormat[0]);\n aggs = new InternalAggregations(Arrays.asList(new InternalMax(\"test\", 2.0D, DocValueFormat.RAW,\n Collections.emptyList(), Collections.emptyMap())));\n@@ -348,7 +344,7 @@ public void testConsumerConcurrently() throws InterruptedException {\n threads[i] = new Thread(() -> {\n int number = randomIntBetween(1, 1000);\n max.updateAndGet(prev -> Math.max(prev, number));\n- QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id));\n+ QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id, null));\n result.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(0, number)}, number), new DocValueFormat[0]);\n InternalAggregations aggs = new InternalAggregations(Arrays.asList(new InternalMax(\"test\", (double) number,\n DocValueFormat.RAW, Collections.emptyList(), Collections.emptyMap())));\n@@ -385,7 +381,7 @@ public void testConsumerOnlyAggs() throws InterruptedException {\n int id = i;\n int number = randomIntBetween(1, 1000);\n max.updateAndGet(prev -> Math.max(prev, number));\n- QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id));\n+ QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id, null));\n result.topDocs(new TopDocs(1, new ScoreDoc[0], number), new DocValueFormat[0]);\n InternalAggregations aggs = new InternalAggregations(Arrays.asList(new InternalMax(\"test\", (double) number,\n DocValueFormat.RAW, Collections.emptyList(), Collections.emptyMap())));\n@@ -418,7 +414,7 @@ public void testConsumerOnlyHits() throws InterruptedException {\n int id = i;\n int number = randomIntBetween(1, 1000);\n max.updateAndGet(prev -> Math.max(prev, number));\n- QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id));\n+ QuerySearchResult result = new QuerySearchResult(id, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), id, null));\n result.topDocs(new TopDocs(1, new ScoreDoc[] {new ScoreDoc(0, number)}, number), new DocValueFormat[0]);\n result.setShardIndex(id);\n result.size(1);\n@@ -474,7 +470,7 @@ public void testReduceTopNWithFromOffset() {\n searchPhaseController.newSearchPhaseResults(request, 4);\n int score = 100;\n for (int i = 0; i < 4; i++) {\n- QuerySearchResult result = new QuerySearchResult(i, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), i));\n+ QuerySearchResult result = new QuerySearchResult(i, new SearchShardTarget(\"node\", new Index(\"a\", \"b\"), i, null));\n ScoreDoc[] docs = new ScoreDoc[3];\n for (int j = 0; j < docs.length; j++) {\n docs[j] = new ScoreDoc(0, score--);",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchPhaseControllerTests.java",
"status": "modified"
},
{
"diff": "@@ -49,11 +49,11 @@ public void testToXContent() throws IOException {\n SearchPhaseExecutionException exception = new SearchPhaseExecutionException(\"test\", \"all shards failed\",\n new ShardSearchFailure[]{\n new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 0)),\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 0, null)),\n new ShardSearchFailure(new IndexShardClosedException(new ShardId(new Index(\"foo\", \"_na_\"), 1)),\n- new SearchShardTarget(\"node_2\", new Index(\"foo\", \"_na_\"), 1)),\n+ new SearchShardTarget(\"node_2\", new Index(\"foo\", \"_na_\"), 1, null)),\n new ShardSearchFailure(new ParsingException(5, 7, \"foobar\", null),\n- new SearchShardTarget(\"node_3\", new Index(\"foo\", \"_na_\"), 2)),\n+ new SearchShardTarget(\"node_3\", new Index(\"foo\", \"_na_\"), 2, null)),\n });\n \n // Failures are grouped (by default)\n@@ -150,7 +150,7 @@ public void testToAndFromXContent() throws IOException {\n new TimestampParsingException(\"foo\", null),\n new NullPointerException()\n );\n- shardSearchFailures[i] = new ShardSearchFailure(cause, new SearchShardTarget(\"node_\" + i, new Index(\"test\", \"_na_\"), i));\n+ shardSearchFailures[i] = new ShardSearchFailure(cause, new SearchShardTarget(\"node_\" + i, new Index(\"test\", \"_na_\"), i, null));\n }\n \n final String phase = randomFrom(\"query\", \"search\", \"other\");",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchPhaseExecutionExceptionTests.java",
"status": "modified"
},
{
"diff": "@@ -71,7 +71,7 @@ protected void executeInitialPhase(Transport.Connection connection, InternalScro\n SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), connection.getNode());\n testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(connection.getNode().getId(),\n- new Index(\"test\", \"_na_\"), 1));\n+ new Index(\"test\", \"_na_\"), 1, null));\n searchActionListener.onResponse(testSearchPhaseResult);\n }).start();\n }\n@@ -162,7 +162,7 @@ protected void executeInitialPhase(Transport.Connection connection, InternalScro\n SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), connection.getNode());\n testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(connection.getNode().getId(),\n- new Index(\"test\", \"_na_\"), 1));\n+ new Index(\"test\", \"_na_\"), 1, null));\n searchActionListener.onResponse(testSearchPhaseResult);\n }).start();\n }\n@@ -235,7 +235,7 @@ protected void executeInitialPhase(Transport.Connection connection, InternalScro\n SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), connection.getNode());\n testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(connection.getNode().getId(),\n- new Index(\"test\", \"_na_\"), 1));\n+ new Index(\"test\", \"_na_\"), 1, null));\n searchActionListener.onResponse(testSearchPhaseResult);\n }).start();\n }\n@@ -312,7 +312,7 @@ protected void executeInitialPhase(Transport.Connection connection, InternalScro\n SearchAsyncActionTests.TestSearchPhaseResult testSearchPhaseResult =\n new SearchAsyncActionTests.TestSearchPhaseResult(internalRequest.id(), connection.getNode());\n testSearchPhaseResult.setSearchShardTarget(new SearchShardTarget(connection.getNode().getId(),\n- new Index(\"test\", \"_na_\"), 1));\n+ new Index(\"test\", \"_na_\"), 1, null));\n searchActionListener.onResponse(testSearchPhaseResult);\n }\n }).start();",
"filename": "core/src/test/java/org/elasticsearch/action/search/SearchScrollAsyncActionTests.java",
"status": "modified"
},
{
"diff": "@@ -198,7 +198,8 @@ public void testProcessRemoteShards() throws IOException {\n assertArrayEquals(new String[]{\"some_alias_for_foo\", \"some_other_foo_alias\"},\n iterator.getOriginalIndices().indices());\n assertTrue(iterator.shardId().getId() == 0 || iterator.shardId().getId() == 1);\n- assertEquals(\"test_cluster_1:foo\", iterator.shardId().getIndexName());\n+ assertEquals(\"test_cluster_1\", iterator.getClusterAlias());\n+ assertEquals(\"foo\", iterator.shardId().getIndexName());\n ShardRouting shardRouting = iterator.nextOrNull();\n assertNotNull(shardRouting);\n assertEquals(shardRouting.getIndexName(), \"foo\");\n@@ -209,7 +210,8 @@ public void testProcessRemoteShards() throws IOException {\n } else if (iterator.shardId().getIndexName().endsWith(\"bar\")) {\n assertArrayEquals(new String[]{\"bar\"}, iterator.getOriginalIndices().indices());\n assertEquals(0, iterator.shardId().getId());\n- assertEquals(\"test_cluster_1:bar\", iterator.shardId().getIndexName());\n+ assertEquals(\"test_cluster_1\", iterator.getClusterAlias());\n+ assertEquals(\"bar\", iterator.shardId().getIndexName());\n ShardRouting shardRouting = iterator.nextOrNull();\n assertNotNull(shardRouting);\n assertEquals(shardRouting.getIndexName(), \"bar\");\n@@ -220,7 +222,8 @@ public void testProcessRemoteShards() throws IOException {\n } else if (iterator.shardId().getIndexName().endsWith(\"xyz\")) {\n assertArrayEquals(new String[]{\"some_alias_for_xyz\"}, iterator.getOriginalIndices().indices());\n assertEquals(0, iterator.shardId().getId());\n- assertEquals(\"test_cluster_2:xyz\", iterator.shardId().getIndexName());\n+ assertEquals(\"xyz\", iterator.shardId().getIndexName());\n+ assertEquals(\"test_cluster_2\", iterator.getClusterAlias());\n ShardRouting shardRouting = iterator.nextOrNull();\n assertNotNull(shardRouting);\n assertEquals(shardRouting.getIndexName(), \"xyz\");",
"filename": "core/src/test/java/org/elasticsearch/action/search/TransportSearchActionTests.java",
"status": "modified"
},
{
"diff": "@@ -127,6 +127,11 @@ public BytesReference cacheKey() throws IOException {\n @Override\n public void rewrite(QueryShardContext context) throws IOException {\n }\n+\n+ @Override\n+ public String getClusterAlias() {\n+ return null;\n+ }\n };\n @Override\n public ShardSearchRequest request() {",
"filename": "core/src/test/java/org/elasticsearch/index/SearchSlowLogTests.java",
"status": "modified"
},
{
"diff": "@@ -151,9 +151,9 @@ public void testConvert() throws IOException {\n RestRequest request = new FakeRestRequest();\n RestChannel channel = new DetailedExceptionRestChannel(request);\n ShardSearchFailure failure = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 1, null));\n ShardSearchFailure failure1 = new ShardSearchFailure(new ParsingException(1, 2, \"foobar\", null),\n- new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2));\n+ new SearchShardTarget(\"node_1\", new Index(\"foo\", \"_na_\"), 2, null));\n SearchPhaseExecutionException ex = new SearchPhaseExecutionException(\"search\", \"all shards failed\", new ShardSearchFailure[] {failure, failure1});\n BytesRestResponse response = new BytesRestResponse(channel, new RemoteTransportException(\"foo\", ex));\n String text = response.content().utf8ToString();",
"filename": "core/src/test/java/org/elasticsearch/rest/BytesRestResponseTests.java",
"status": "modified"
},
{
"diff": "@@ -207,7 +207,8 @@ public void testToXContent() throws IOException {\n }\n \n public void testSerializeShardTarget() throws Exception {\n- SearchShardTarget target = new SearchShardTarget(\"_node_id\", new Index(\"_index\", \"_na_\"), 0);\n+ String clusterAlias = randomBoolean() ? null : \"cluster_alias\";\n+ SearchShardTarget target = new SearchShardTarget(\"_node_id\", new Index(\"_index\", \"_na_\"), 0, clusterAlias);\n \n Map<String, SearchHits> innerHits = new HashMap<>();\n SearchHit innerHit1 = new SearchHit(0, \"_id\", new Text(\"_type\"), null);\n@@ -233,6 +234,7 @@ public void testSerializeShardTarget() throws Exception {\n \n SearchHits hits = new SearchHits(new SearchHit[]{hit1, hit2}, 2, 1f);\n \n+\n BytesStreamOutput output = new BytesStreamOutput();\n hits.writeTo(output);\n InputStream input = output.bytes().streamInput();\n@@ -242,6 +244,17 @@ public void testSerializeShardTarget() throws Exception {\n assertThat(results.getAt(0).getInnerHits().get(\"1\").getAt(0).getInnerHits().get(\"1\").getAt(0).getShard(), notNullValue());\n assertThat(results.getAt(0).getInnerHits().get(\"1\").getAt(1).getShard(), notNullValue());\n assertThat(results.getAt(0).getInnerHits().get(\"2\").getAt(0).getShard(), notNullValue());\n+ for (SearchHit hit : results) {\n+ assertEquals(clusterAlias, hit.getClusterAlias());\n+ if (hit.getInnerHits() != null) {\n+ for (SearchHits innerhits : hit.getInnerHits().values()) {\n+ for (SearchHit innerHit : innerhits) {\n+ assertEquals(clusterAlias, innerHit.getClusterAlias());\n+ }\n+ }\n+ }\n+ }\n+\n assertThat(results.getAt(1).getShard(), equalTo(target));\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/SearchHitTests.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,7 @@ public int numberOfShards() {\n \n @Override\n public SearchShardTarget shardTarget() {\n- return new SearchShardTarget(\"no node, this is a unit test\", new Index(\"no index, this is a unit test\", \"_na_\"), 0);\n+ return new SearchShardTarget(\"no node, this is a unit test\", new Index(\"no index, this is a unit test\", \"_na_\"), 0, null);\n }\n }\n ",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/significant/SignificanceHeuristicTests.java",
"status": "modified"
},
{
"diff": "@@ -97,7 +97,7 @@ private ShardSearchTransportRequest createShardSearchTransportRequest() throws I\n filteringAliases = new AliasFilter(null, Strings.EMPTY_ARRAY);\n }\n return new ShardSearchTransportRequest(new OriginalIndices(searchRequest), searchRequest, shardId,\n- randomIntBetween(1, 100), filteringAliases, randomBoolean() ? 1.0f : randomFloat(), Math.abs(randomLong()));\n+ randomIntBetween(1, 100), filteringAliases, randomBoolean() ? 1.0f : randomFloat(), Math.abs(randomLong()), null);\n }\n \n public void testFilteringAliases() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,39 @@\n+---\n+\"Test that remote index names are preserved in top hits\":\n+\n+ - do:\n+ indices.create:\n+ index: single_doc_index\n+ body:\n+ settings:\n+ index:\n+ number_of_shards: 1\n+ number_of_replicas: 0\n+\n+ - do:\n+ bulk:\n+ refresh: true\n+ body:\n+ - '{\"index\": {\"_index\": \"single_doc_index\", \"_type\": \"test_type\"}}'\n+ - '{\"f1\": \"local_cluster\", \"sort_field\": 0}'\n+ - do:\n+ search:\n+ index: \"single_doc_index,my_remote_cluster:single_doc_index\"\n+ body:\n+ sort: \"sort_field\"\n+ aggs:\n+ cluster:\n+ top_hits:\n+ size: 2\n+ sort: \"sort_field\"\n+\n+ - match: { _shards.total: 2 }\n+ - match: { hits.total: 2 }\n+ - match: { hits.hits.0._index: \"single_doc_index\"}\n+ - match: { hits.hits.1._index: \"my_remote_cluster:single_doc_index\"}\n+\n+ - length: { aggregations.cluster.hits.hits: 2 }\n+ - match: { aggregations.cluster.hits.hits.0._index: \"single_doc_index\" }\n+ - match: { aggregations.cluster.hits.hits.0._source.f1: \"local_cluster\" }\n+ - match: { aggregations.cluster.hits.hits.1._index: \"my_remote_cluster:single_doc_index\" }\n+ - match: { aggregations.cluster.hits.hits.1._source.f1: \"remote_cluster\" }",
"filename": "qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/60_tophits.yml",
"status": "added"
},
{
"diff": "@@ -1,6 +1,22 @@\n ---\n \"Index data and search on the old cluster\":\n \n+ - do:\n+ indices.create:\n+ index: single_doc_index\n+ body:\n+ settings:\n+ index:\n+ number_of_shards: 1\n+ number_of_replicas: 0\n+\n+ - do:\n+ bulk:\n+ refresh: true\n+ body:\n+ - '{\"index\": {\"_index\": \"single_doc_index\", \"_type\": \"test_type\"}}'\n+ - '{\"f1\": \"remote_cluster\", \"sort_field\": 1}'\n+\n - do:\n indices.create:\n index: field_caps_index_1",
"filename": "qa/multi-cluster-search/src/test/resources/rest-api-spec/test/remote_cluster/10_basic.yml",
"status": "modified"
},
{
"diff": "@@ -45,7 +45,7 @@ public void testAssertNoInFlightContext() {\n \n @Override\n public SearchShardTarget shardTarget() {\n- return new SearchShardTarget(\"node\", new Index(\"idx\", \"ignored\"), 0);\n+ return new SearchShardTarget(\"node\", new Index(\"idx\", \"ignored\"), 0, null);\n }\n \n @Override",
"filename": "test/framework/src/test/java/org/elasticsearch/search/MockSearchServiceTests.java",
"status": "modified"
}
]
} |
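The diffs in this record thread a nullable cluster alias through `SearchShardTarget` and the shard iterators, and the added YAML test expects remote hits to come back as `my_remote_cluster:single_doc_index` while local hits keep the plain index name. A minimal sketch of that display convention, using an illustrative class name that is not part of the codebase:

```java
// Illustrative sketch (not Elasticsearch source): the naming convention the
// YAML test above asserts on. A hit that came from a remote cluster is
// reported as "<cluster_alias>:<index>", while local hits keep the bare
// index name; the shard id itself always stores the bare index name.
final class RemoteIndexDisplayName {

    static String displayIndex(String clusterAlias, String indexName) {
        return clusterAlias == null ? indexName : clusterAlias + ":" + indexName;
    }

    public static void main(String[] args) {
        System.out.println(displayIndex(null, "single_doc_index"));
        // single_doc_index
        System.out.println(displayIndex("my_remote_cluster", "single_doc_index"));
        // my_remote_cluster:single_doc_index
    }
}
```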
{
"body": "Run something like this:\r\n\r\n```\r\nDELETE /source\r\n\r\nPUT /source\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"join_field\": {\r\n \"type\": \"join\",\r\n \"relations\": {\r\n \"parent\": \"child\",\r\n \"child\": \"grand_child\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nDELETE /dest\r\n\r\nPUT /dest\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"join_field\": {\r\n \"type\": \"join\",\r\n \"relations\": {\r\n \"parent\": \"child\",\r\n \"child\": \"grand_child\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n\r\nPUT /source/doc/1\r\n{ \"join_field\": { \"name\": \"parent\" } }\r\n\r\nPUT /source/doc/2?routing=1\r\n{ \"join_field\": { \"name\": \"child\", \"parent\": \"1\" } }\r\n\r\nPUT /source/doc/3?routing=1\r\n{ \"join_field\": { \"name\": \"grand_child\", \"parent\": \"2\" } }\r\n\r\nPOST /_refresh\r\n\r\nPOST /_reindex?refresh\r\n{\r\n \"source\": {\r\n \"index\": \"source\",\r\n \"remote\": {\r\n \"host\": \"http://127.0.0.1:9200\"\r\n }\r\n },\r\n \"dest\": {\r\n \"index\": \"dest\"\r\n }\r\n}\r\n```\r\n\r\nAnd it'll blow up with a big stack trace that comes down to:\r\n```\r\n \"reason\": \"[fields] unknown field [join_field], parser not found\"\r\n```\r\n\r\nThis is because `join` always returns the join field whether you ask for it or not:\r\n```\r\nPOST /source/_search\r\n```\r\nreturns:\r\n```\r\n{\r\n ...\r\n \"hits\": {\r\n \"total\": 3,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"source\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"join_field\": {\r\n \"name\": \"parent\"\r\n }\r\n },\r\n \"fields\": {\r\n \"join_field\": [ <---- This\r\n \"parent\"\r\n ]\r\n }\r\n },\r\n {\r\n \"_index\": \"source\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"2\",\r\n \"_score\": 1,\r\n \"_routing\": \"1\",\r\n \"_source\": {\r\n \"join_field\": {\r\n \"name\": \"child\",\r\n \"parent\": \"1\"\r\n }\r\n },\r\n \"fields\": {\r\n \"join_field#parent\": [ <---- This\r\n \"1\"\r\n ],\r\n \"join_field\": [ <---- This\r\n \"child\"\r\n ]\r\n }\r\n },\r\n {\r\n \"_index\": \"source\",\r\n \"_type\": \"doc\",\r\n \"_id\": \"3\",\r\n \"_score\": 1,\r\n \"_routing\": \"1\",\r\n \"_source\": {\r\n \"join_field\": {\r\n \"name\": \"grand_child\",\r\n \"parent\": \"2\"\r\n }\r\n },\r\n \"fields\": {\r\n \"join_field\": [ <---- This\r\n \"grand_child\"\r\n ],\r\n \"join_field#child\": [ <---- This\r\n \"2\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nReindex can fix this on its side by ignoring fields it doesn't know about or by doing more complex things like checking the source mapping first. I wonder if a better solution is for `join` not to return the field if it wasn't asked for. I don't believe reindex needs the field to do its job.",
"comments": [
{
"body": "Good point. I believe it was added just because the _parent field did this too, but since join is now always in the _source (_parent field was really a meta field), it is kind of duplicated information and therefor not really needed. ",
"created_at": "2017-06-22T19:48:47Z"
},
{
"body": "@nik9000 Can this issue be closed now that #25550 has been merged?",
"created_at": "2017-07-06T11:31:33Z"
},
{
"body": "> @nik9000 Can this issue be closed now that #25550 has been merged?\r\n\r\nI think we just have to remove [this](https://github.com/elastic/elasticsearch/blob/master/qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/reindex/50_reindex_with_parent_join.yml#L94-L96) to close it.",
"created_at": "2017-07-06T13:54:16Z"
},
{
"body": "Thanks @martijnvg! I was about to look at this again but you got to it before I did. Thanks for fixing the whole thing!",
"created_at": "2017-07-07T14:46:31Z"
}
],
"number": 25363,
"title": "Reindex from remote doesn't work with master's `join` module"
} | {
"body": "The `ParentJoinFieldSubFetchPhase` adds the parent type and parent id as separate fields to the each search hit. This information is available in the source too, so it is duplicated. Therefore I think we should not add that and remove the `ParentJoinFieldSubFetchPhase`.\r\n\r\nAlso reindex has issues with this, because it doesn't know what to do with these fields: \r\n#25363",
"number": 25550,
"review_comments": [
{
"body": "The same change is needed in `reference/search/request/inner-hits` since we also check the output in this doc.",
"created_at": "2017-07-06T04:58:08Z"
}
],
"title": "Remove ParentJoinFieldSubFetchPhase"
} | {
"commits": [
{
"message": "parent/child: Removed ParentJoinFieldSubFetchPhase"
}
],
"files": [
{
"diff": "@@ -150,12 +150,7 @@ Will return:\n \"_score\": null,\n \"_source\": {\n \"text\": \"This is a parent document\",\n- \"my_join_field\": \"my_parent\"\n- },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_parent\" <1>\n- ]\n+ \"my_join_field\": \"my_parent\" <1>\n },\n \"sort\": [\n \"1\"\n@@ -168,12 +163,7 @@ Will return:\n \"_score\": null,\n \"_source\": {\n \"text\": \"This is a another parent document\",\n- \"my_join_field\": \"my_parent\"\n- },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_parent\" <2>\n- ]\n+ \"my_join_field\": \"my_parent\" <2>\n },\n \"sort\": [\n \"2\"\n@@ -192,14 +182,6 @@ Will return:\n \"parent\": \"1\" <4>\n }\n },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_child\"\n- ],\n- \"my_join_field#my_parent\": [\n- \"1\"\n- ]\n- },\n \"sort\": [\n \"3\"\n ]\n@@ -217,14 +199,6 @@ Will return:\n \"parent\": \"1\"\n }\n },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_child\"\n- ],\n- \"my_join_field#my_parent\": [\n- \"1\"\n- ]\n- },\n \"sort\": [\n \"4\"\n ]",
"filename": "docs/reference/mapping/types/parent-join.asciidoc",
"status": "modified"
},
{
"diff": "@@ -498,11 +498,6 @@ An example of a response snippet that could be generated from the above search r\n \"number\": 1,\n \"my_join_field\": \"my_parent\"\n },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_parent\"\n- ]\n- },\n \"inner_hits\": {\n \"my_child\": {\n \"hits\": {\n@@ -520,14 +515,6 @@ An example of a response snippet that could be generated from the above search r\n \"name\": \"my_child\",\n \"parent\": \"1\"\n }\n- },\n- \"fields\": {\n- \"my_join_field\": [\n- \"my_child\"\n- ],\n- \"my_join_field#my_parent\": [\n- \"1\"\n- ]\n }\n }\n ]",
"filename": "docs/reference/search/request/inner-hits.asciidoc",
"status": "modified"
},
{
"diff": "@@ -19,27 +19,26 @@\n \n package org.elasticsearch.join;\n \n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder;\n import org.elasticsearch.join.aggregations.InternalChildren;\n-import org.elasticsearch.join.fetch.ParentJoinFieldSubFetchPhase;\n import org.elasticsearch.join.mapper.ParentJoinFieldMapper;\n import org.elasticsearch.join.query.HasChildQueryBuilder;\n import org.elasticsearch.join.query.HasParentQueryBuilder;\n import org.elasticsearch.join.query.ParentIdQueryBuilder;\n import org.elasticsearch.plugins.MapperPlugin;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.plugins.SearchPlugin;\n-import org.elasticsearch.search.fetch.FetchSubPhase;\n \n import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n \n public class ParentJoinPlugin extends Plugin implements SearchPlugin, MapperPlugin {\n- public ParentJoinPlugin(Settings settings) {}\n+\n+ public ParentJoinPlugin() {\n+ }\n \n @Override\n public List<QuerySpec<?>> getQueries() {\n@@ -62,9 +61,4 @@ public List<AggregationSpec> getAggregations() {\n public Map<String, Mapper.TypeParser> getMappers() {\n return Collections.singletonMap(ParentJoinFieldMapper.CONTENT_TYPE, new ParentJoinFieldMapper.TypeParser());\n }\n-\n- @Override\n- public List<FetchSubPhase> getFetchSubPhases(FetchPhaseConstructionContext context) {\n- return Collections.singletonList(new ParentJoinFieldSubFetchPhase());\n- }\n }",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/ParentJoinPlugin.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,8 @@\n package org.elasticsearch.join.query;\n \n import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.ReaderUtil;\n+import org.apache.lucene.index.SortedDocValues;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n@@ -32,6 +34,8 @@\n import org.apache.lucene.search.TopScoreDocCollector;\n import org.apache.lucene.search.TotalHitCountCollector;\n import org.apache.lucene.search.Weight;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.index.mapper.DocumentMapper;\n@@ -49,6 +53,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.search.fetch.subphase.InnerHitsContext.intersect;\n@@ -126,8 +131,8 @@ public TopDocs[] topDocs(SearchHit[] hits) throws IOException {\n TopDocs[] result = new TopDocs[hits.length];\n for (int i = 0; i < hits.length; i++) {\n SearchHit hit = hits[i];\n- DocumentField joinField = hit.getFields().get(joinFieldMapper.name());\n- if (joinField == null) {\n+ String joinName = getSortedDocValue(joinFieldMapper.name(), context, hit.docId());\n+ if (joinName == null) {\n result[i] = Lucene.EMPTY_TOP_DOCS;\n continue;\n }\n@@ -150,8 +155,8 @@ public TopDocs[] topDocs(SearchHit[] hits) throws IOException {\n .add(joinFieldMapper.fieldType().termQuery(typeName, qsc), BooleanClause.Occur.FILTER)\n .build();\n } else {\n- DocumentField parentIdField = hit.getFields().get(parentIdFieldMapper.name());\n- q = context.mapperService().fullName(IdFieldMapper.NAME).termQuery(parentIdField.getValue(), qsc);\n+ String parentId = getSortedDocValue(parentIdFieldMapper.name(), context, hit.docId());\n+ q = context.mapperService().fullName(IdFieldMapper.NAME).termQuery(parentId, qsc);\n }\n \n Weight weight = context.searcher().createNormalizedWeight(q, false);\n@@ -181,6 +186,24 @@ public TopDocs[] topDocs(SearchHit[] hits) throws IOException {\n }\n return result;\n }\n+\n+ private String getSortedDocValue(String field, SearchContext context, int docId) {\n+ try {\n+ List<LeafReaderContext> ctxs = context.searcher().getIndexReader().leaves();\n+ LeafReaderContext ctx = ctxs.get(ReaderUtil.subIndex(docId, ctxs));\n+ SortedDocValues docValues = ctx.reader().getSortedDocValues(field);\n+ int segmentDocId = docId - ctx.docBase;\n+ if (docValues == null || docValues.advanceExact(segmentDocId) == false) {\n+ return null;\n+ }\n+ int ord = docValues.ordValue();\n+ BytesRef joinName = docValues.lookupOrd(ord);\n+ return joinName.utf8ToString();\n+ } catch (IOException e) {\n+ throw ExceptionsHelper.convertToElastic(e);\n+ }\n+ }\n+\n }\n \n static final class ParentChildInnerHitSubContext extends InnerHitsContext.InnerHitSubContext {",
"filename": "modules/parent-join/src/main/java/org/elasticsearch/join/query/ParentChildInnerHitContextBuilder.java",
"status": "modified"
},
{
"diff": "@@ -60,6 +60,7 @@\n import java.util.Set;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.common.xcontent.support.XContentMapValues.extractValue;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n@@ -216,8 +217,8 @@ public void testSimpleChildQuery() throws Exception {\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L));\n assertThat(searchResponse.getHits().getAt(0).getId(), equalTo(\"c1\"));\n- assertThat(searchResponse.getHits().getAt(0).field(\"join_field\").getValue(), equalTo(\"child\"));\n- assertThat(searchResponse.getHits().getAt(0).field(\"join_field#parent\").getValue(), equalTo(\"p1\"));\n+ assertThat(extractValue(\"join_field.name\", searchResponse.getHits().getAt(0).getSourceAsMap()), equalTo(\"child\"));\n+ assertThat(extractValue(\"join_field.parent\", searchResponse.getHits().getAt(0).getSourceAsMap()), equalTo(\"p1\"));\n }\n \n // TEST matching on parent\n@@ -236,11 +237,11 @@ public void testSimpleChildQuery() throws Exception {\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(2L));\n assertThat(searchResponse.getHits().getAt(0).getId(), anyOf(equalTo(\"c1\"), equalTo(\"c2\")));\n- assertThat(searchResponse.getHits().getAt(0).field(\"join_field\").getValue(), equalTo(\"child\"));\n- assertThat(searchResponse.getHits().getAt(0).field(\"join_field#parent\").getValue(), equalTo(\"p1\"));\n+ assertThat(extractValue(\"join_field.name\", searchResponse.getHits().getAt(0).getSourceAsMap()), equalTo(\"child\"));\n+ assertThat(extractValue(\"join_field.parent\", searchResponse.getHits().getAt(0).getSourceAsMap()), equalTo(\"p1\"));\n assertThat(searchResponse.getHits().getAt(1).getId(), anyOf(equalTo(\"c1\"), equalTo(\"c2\")));\n- assertThat(searchResponse.getHits().getAt(1).field(\"join_field\").getValue(), equalTo(\"child\"));\n- assertThat(searchResponse.getHits().getAt(1).field(\"join_field#parent\").getValue(), equalTo(\"p1\"));\n+ assertThat(extractValue(\"join_field.name\", searchResponse.getHits().getAt(1).getSourceAsMap()), equalTo(\"child\"));\n+ assertThat(extractValue(\"join_field.parent\", searchResponse.getHits().getAt(1).getSourceAsMap()), equalTo(\"p1\"));\n }\n \n if (legacy()) {",
"filename": "modules/parent-join/src/test/java/org/elasticsearch/join/query/ChildQuerySearchIT.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,6 @@\n import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.InnerHitBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.join.ParentJoinPlugin;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.script.MockScriptEngine;\n import org.elasticsearch.script.MockScriptPlugin;\n@@ -40,7 +39,6 @@\n import org.elasticsearch.search.sort.SortOrder;\n \n import java.util.ArrayList;\n-import java.util.Arrays;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.List;\n@@ -49,6 +47,7 @@\n import java.util.function.Function;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.common.xcontent.support.XContentMapValues.extractValue;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n@@ -550,7 +549,8 @@ public void testNestedInnerHitWrappedInParentChildInnerhit() throws Exception {\n if (legacy()) {\n assertThat(hit.getInnerHits().get(\"child_type\").getAt(0).field(\"_parent\").getValue(), equalTo(\"1\"));\n } else {\n- assertThat(hit.getInnerHits().get(\"child_type\").getAt(0).field(\"join_field#parent_type\").getValue(), equalTo(\"1\"));\n+ String parentId = (String) extractValue(\"join_field.parent\", hit.getInnerHits().get(\"child_type\").getAt(0).getSourceAsMap());\n+ assertThat(parentId, equalTo(\"1\"));\n }\n assertThat(hit.getInnerHits().get(\"child_type\").getAt(0).getInnerHits().get(\"nested_type\").getAt(0).field(\"_parent\"), nullValue());\n }",
"filename": "modules/parent-join/src/test/java/org/elasticsearch/join/query/InnerHitsIT.java",
"status": "modified"
},
{
"diff": "@@ -71,36 +71,36 @@ setup:\n - match: { hits.hits.0._index: \"test\" }\n - match: { hits.hits.0._type: \"doc\" }\n - match: { hits.hits.0._id: \"3\" }\n- - match: { hits.hits.0.fields.join_field: [\"child\"] }\n- - match: { hits.hits.0.fields.join_field#parent: [\"1\"] }\n+ - match: { hits.hits.0._source.join_field.name: \"child\" }\n+ - match: { hits.hits.0._source.join_field.parent: \"1\" }\n - is_false: hits.hits.0.fields.join_field#child }\n - match: { hits.hits.1._index: \"test\" }\n - match: { hits.hits.1._type: \"doc\" }\n - match: { hits.hits.1._id: \"4\" }\n- - match: { hits.hits.1.fields.join_field: [\"child\"] }\n- - match: { hits.hits.1.fields.join_field#parent: [\"1\"] }\n+ - match: { hits.hits.1._source.join_field.name: \"child\" }\n+ - match: { hits.hits.1._source.join_field.parent: \"1\" }\n - is_false: hits.hits.1.fields.join_field#child }\n - match: { hits.hits.2._index: \"test\" }\n - match: { hits.hits.2._type: \"doc\" }\n - match: { hits.hits.2._id: \"5\" }\n- - match: { hits.hits.2.fields.join_field: [\"child\"] }\n- - match: { hits.hits.2.fields.join_field#parent: [\"2\"] }\n+ - match: { hits.hits.2._source.join_field.name: \"child\" }\n+ - match: { hits.hits.2._source.join_field.parent: \"2\" }\n - is_false: hits.hits.2.fields.join_field#child }\n - match: { hits.hits.3._index: \"test\" }\n - match: { hits.hits.3._type: \"doc\" }\n - match: { hits.hits.3._id: \"6\" }\n- - match: { hits.hits.3.fields.join_field: [\"grand_child\"] }\n- - match: { hits.hits.3.fields.join_field#child: [\"5\"] }\n+ - match: { hits.hits.3._source.join_field.name: \"grand_child\" }\n+ - match: { hits.hits.3._source.join_field.parent: \"5\" }\n - match: { hits.hits.4._index: \"test\" }\n - match: { hits.hits.4._type: \"doc\" }\n - match: { hits.hits.4._id: \"1\" }\n- - match: { hits.hits.4.fields.join_field: [\"parent\"] }\n- - is_false: hits.hits.4.fields.join_field#parent\n+ - match: { hits.hits.4._source.join_field.name: \"parent\" }\n+ - is_false: hits.hits.4._source.join_field.parent\n - match: { hits.hits.5._index: \"test\" }\n - match: { hits.hits.5._type: \"doc\" }\n - match: { hits.hits.5._id: \"2\" }\n- - match: { hits.hits.5.fields.join_field: [\"parent\"] }\n- - is_false: hits.hits.5.fields.join_field#parent\n+ - match: { hits.hits.5._source.join_field.name: \"parent\" }\n+ - is_false: hits.hits.5._source.join_field.parent\n \n ---\n \"Test parent_id query\":\n@@ -121,12 +121,12 @@ setup:\n - match: { hits.hits.0._index: \"test\" }\n - match: { hits.hits.0._type: \"doc\" }\n - match: { hits.hits.0._id: \"3\" }\n- - match: { hits.hits.0.fields.join_field: [\"child\"] }\n- - match: { hits.hits.0.fields.join_field#parent: [\"1\"] }\n+ - match: { hits.hits.0._source.join_field.name: \"child\" }\n+ - match: { hits.hits.0._source.join_field.parent: \"1\" }\n - match: { hits.hits.1._index: \"test\" }\n - match: { hits.hits.1._type: \"doc\" }\n - match: { hits.hits.1._id: \"4\" }\n- - match: { hits.hits.1.fields.join_field: [\"child\"] }\n- - match: { hits.hits.1.fields.join_field#parent: [\"1\"] }\n+ - match: { hits.hits.1._source.join_field.name: \"child\" }\n+ - match: { hits.hits.1._source.join_field.parent: \"1\" }\n \n ",
"filename": "modules/parent-join/src/test/resources/rest-api-spec/test/20_parent_join.yml",
"status": "modified"
}
]
} |
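With `ParentJoinFieldSubFetchPhase` removed, the join relation is no longer echoed under `fields`; the updated tests read it from `_source` instead, via `XContentMapValues.extractValue("join_field.name", ...)`. A rough standalone sketch of that dotted-path lookup over a source map (plain Java, hypothetical class and method names):

```java
import java.util.Map;

// Hypothetical helper, shown only to illustrate how a client can recover the
// join relation from _source now that it is no longer returned under "fields".
final class JoinFieldFromSource {

    @SuppressWarnings("unchecked")
    static String extract(Map<String, Object> source, String dottedPath) {
        Object current = source;
        for (String key : dottedPath.split("\\.")) {
            if (current instanceof Map == false) {
                return null;
            }
            current = ((Map<String, Object>) current).get(key);
        }
        return current == null ? null : current.toString();
    }

    public static void main(String[] args) {
        // Mirrors the documents in the issue: {"join_field": {"name": "child", "parent": "1"}}
        Map<String, Object> source = Map.of("join_field", Map.of("name", "child", "parent", "1"));
        System.out.println(extract(source, "join_field.name"));   // child
        System.out.println(extract(source, "join_field.parent")); // 1
    }
}
```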
{
"body": "**Elasticsearch version**: 5.5.0-SNAPSHOT\r\n**Plugins installed**: None\r\n\r\n**Steps to reproduce**:\r\n\r\nA cross-cluster-search that has a remote wildcard that doesn't match anything, ends up running a local search across all indices.\r\n\r\nThat is: `GET /my_remote:does_not_exist*/_search` behaves like `GET /_search`\r\n\r\nReproduction:\r\n```\r\n# Start \"local\" cluster\r\n$ bin/elasticsearch -Epath.data=./data.local -Epath.logs=logs.local -Ecluster.name=local -Etransport.tcp.port=9300 -Ehttp.port=9200 -d\r\n\r\n# Start \"remote\" cluster\r\n$ bin/elasticsearch -Epath.data=./data.remote -Epath.logs=logs.remote -Ecluster.name=remote -Etransport.tcp.port=9310 -Ehttp.port=9210 -d\r\n\r\n# Put local index\r\n$ curl -XPUT http://localhost:9200/test-index/doc/1 -d'{ \"cluster\": \"local\" }'\r\n{\"_index\":\"test-index\",\"_type\":\"doc\",\"_id\":\"1\",\"_version\":1,\"result\":\"created\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\r\n\r\n# Put remote index\r\n$ curl -XPUT http://localhost:9210/remote-index/doc/1 -d'{ \"cluster\": \"remote\" }'\r\n{\"_index\":\"remote-index\",\"_type\":\"doc\",\"_id\":\"1\",\"_version\":1,\"result\":\"created\",\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\r\n\r\n# Put cluster seed\r\n$ curl -XPUT 'http://localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'\r\n{\r\n \"transient\": {\r\n \"search.remote.remote_cluster.seeds\": \"127.0.0.1:9310\"\r\n }\r\n}'\r\n{\r\n \"acknowledged\" : true,\r\n \"persistent\" : { },\r\n \"transient\" : {\r\n \"search\" : {\r\n \"remote\" : {\r\n \"remote_cluster\" : {\r\n \"seeds\" : \"127.0.0.1:9310\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n# Search remote cluster, find remote index [correct]\r\n$ curl 'http://localhost:9200/remote_cluster:*/_search'\r\n{\"took\":152,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"remote_cluster:remote-index\",\"_type\":\"doc\",\"_id\":\"1\",\"_score\":1.0,\"_source\":{ \"cluster\": \"remote\" }}]}}\r\n\r\n# Search for non-existent index, get appropriate error [correct]\r\n$ curl 'http://localhost:9200/remote_cluster:does_not_exist/_search'\r\n{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"index_uuid\":\"_na_\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"does_not_exist\",\"index\":\"does_not_exist\"}],\"type\":\"transport_exception\",\"reason\":\"unable to communicate with remote cluster [remote_cluster]\",\"caused_by\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"index_uuid\":\"_na_\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"does_not_exist\",\"index\":\"does_not_exist\"}},\"status\":500}\r\n\r\n# Search for non-existent wildcard, find local documents [incorrect]\r\n$ curl 'http://localhost:9200/remote_cluster:does_not_exist*/_search'\r\n{\"took\":20,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"test-index\",\"_type\":\"doc\",\"_id\":\"1\",\"_score\":1.0,\"_source\":{ \"cluster\": \"local\" }}]}}\r\n```\r\n\r\n",
"comments": [],
"number": 25426,
"title": "Cross-Cluster-Search with non-matching wildcard does local search instead"
} | {
"body": "This commit changes how we determine if there were any remote indices that a search should have\r\nbeen executed against. Previously, we used the list of remote shard iterators but if the remote\r\nindex pattern resolved to no indices there would be no remote shard iterators even though the\r\nrequest specified remote indices. The map of remote cluster names to the original indices is used\r\ninstead so that we can determine if there were remote indices even when there are no remote shard\r\niterators.\r\n\r\nCloses #25426",
"number": 25436,
"review_comments": [],
"title": "Do not search locally if remote index pattern resolves to no indices"
} | {
"commits": [
{
"message": "Do not search locally if remote index pattern resolves to no indices\n\nThis commit changes how we determine if there were any remote indices that a search should have\nbeen executed against. Previously, we used the list of remote shard iterators but if the remote\nindex pattern resolved to no indices there would be no remote shard iterators even though the\nrequest specified remote indices. The map of remote cluster names to the original indices is used\ninstead so that we can determine if there were remote indices even when there are no remote shard\niterators.\n\nCloses #25426"
}
],
"files": [
{
"diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.cluster.service.ClusterService;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Setting;\n import org.elasticsearch.common.settings.Setting.Property;\n@@ -184,7 +183,7 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener<\n searchRequest.indices(), idx -> indexNameExpressionResolver.hasIndexOrAlias(idx, clusterState));\n OriginalIndices localIndices = remoteClusterIndices.remove(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY);\n if (remoteClusterIndices.isEmpty()) {\n- executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, Collections.emptyList(),\n+ executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, remoteClusterIndices, Collections.emptyList(),\n (clusterName, nodeId) -> null, clusterState, Collections.emptyMap(), listener);\n } else {\n remoteClusterService.collectSearchShards(searchRequest.indicesOptions(), searchRequest.preference(), searchRequest.routing(),\n@@ -193,7 +192,7 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener<\n Map<String, AliasFilter> remoteAliasFilters = new HashMap<>();\n BiFunction<String, String, DiscoveryNode> clusterNodeLookup = processRemoteShards(searchShardsResponses,\n remoteClusterIndices, remoteShardIterators, remoteAliasFilters);\n- executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, remoteShardIterators,\n+ executeSearch((SearchTask) task, timeProvider, searchRequest, localIndices, remoteClusterIndices, remoteShardIterators,\n clusterNodeLookup, clusterState, remoteAliasFilters, listener);\n }, listener::onFailure));\n }\n@@ -249,16 +248,16 @@ static BiFunction<String, String, DiscoveryNode> processRemoteShards(Map<String,\n }\n \n private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, SearchRequest searchRequest, OriginalIndices localIndices,\n- List<SearchShardIterator> remoteShardIterators, BiFunction<String, String, DiscoveryNode> remoteConnections,\n- ClusterState clusterState, Map<String, AliasFilter> remoteAliasMap,\n- ActionListener<SearchResponse> listener) {\n+ Map<String, OriginalIndices> remoteClusterIndices, List<SearchShardIterator> remoteShardIterators,\n+ BiFunction<String, String, DiscoveryNode> remoteConnections, ClusterState clusterState,\n+ Map<String, AliasFilter> remoteAliasMap, ActionListener<SearchResponse> listener) {\n \n clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);\n // TODO: I think startTime() should become part of ActionRequest and that should be used both for index name\n // date math expressions and $now in scripts. This way all apis will deal with now in the same way instead\n // of just for the _search api\n final Index[] indices;\n- if (localIndices.indices().length == 0 && remoteShardIterators.size() > 0) {\n+ if (localIndices.indices().length == 0 && remoteClusterIndices.isEmpty() == false) {\n indices = Index.EMPTY_ARRAY; // don't search on _all if only remote indices were specified\n } else {\n indices = indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(),",
"filename": "core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,44 @@\n+---\n+\"Search with missing remote index pattern\":\n+ - do:\n+ catch: \"request\"\n+ search:\n+ index: \"my_remote_cluster:foo\"\n+\n+ - do:\n+ search:\n+ index: \"my_remote_cluster:fooo*\"\n+ - match: { _shards.total: 0 }\n+ - match: { hits.total: 0 }\n+\n+ - do:\n+ search:\n+ index: \"*:foo*\"\n+\n+ - match: { _shards.total: 0 }\n+ - match: { hits.total: 0 }\n+\n+ - do:\n+ search:\n+ index: \"my_remote_cluster:test_index,my_remote_cluster:foo*\"\n+ body:\n+ aggs:\n+ cluster:\n+ terms:\n+ field: f1.keyword\n+\n+ - match: { _shards.total: 3 }\n+ - match: { hits.total: 6 }\n+ - length: { aggregations.cluster.buckets: 1 }\n+ - match: { aggregations.cluster.buckets.0.key: \"remote_cluster\" }\n+ - match: { aggregations.cluster.buckets.0.doc_count: 6 }\n+\n+ - do:\n+ catch: \"request\"\n+ search:\n+ index: \"my_remote_cluster:test_index,my_remote_cluster:foo\"\n+ body:\n+ aggs:\n+ cluster:\n+ terms:\n+ field: f1.keyword",
"filename": "qa/multi-cluster-search/src/test/resources/rest-api-spec/test/multi_cluster/50_missing.yml",
"status": "added"
}
]
} |
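The core of the fix in this record is that the local-search fallback is decided from the map of requested remote clusters rather than from the resolved remote shard iterators, so a remote wildcard that matches nothing no longer degenerates into a local search over all indices. A simplified sketch of that guard, using illustrative types rather than the actual `TransportSearchAction` internals:

```java
import java.util.List;
import java.util.Map;

// Sketch of the guard: if the request named no local indices but did name at
// least one remote cluster (even one whose index pattern matched nothing),
// the local cluster must not expand the request to all of its own indices.
final class LocalFallbackGuard {

    static boolean searchOnlyRemotes(String[] localIndices,
                                     Map<String, List<String>> remoteClusterIndices) {
        return localIndices.length == 0 && remoteClusterIndices.isEmpty() == false;
    }

    public static void main(String[] args) {
        // "my_remote_cluster:does_not_exist*" resolves to zero remote shards,
        // but the cluster entry is still present, so no local fallback happens.
        boolean skipLocal = searchOnlyRemotes(new String[0],
            Map.of("my_remote_cluster", List.of()));
        System.out.println(skipLocal); // true
    }
}
```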
{
"body": "**Elasticsearch version**: ES 5.x\r\n\r\n**Plugins installed**: none\r\n\r\n**JVM version**: Any\r\n\r\n**OS version**: Any\r\n\r\n[1] Using `-1` for the size returns top 10 results\r\n\r\n```\r\nPOST /_search\r\n{\r\n \"size\": -1\r\n}\r\n```\r\n\r\n[2] Using any number `<-1` returns an unhandled Lucene error:\r\n\r\n```\r\nPOST /_search\r\n{\r\n \"size\": -2\r\n}\r\n\r\n\r\n# Result\r\n\r\n\r\n[2017-01-10T13:30:28,890][DEBUG][o.e.a.s.TransportSearchAction] [oZ4lS-x] All shards failed for phase: [query]\r\norg.elasticsearch.transport.RemoteTransportException: [oZ4lS-x][127.0.0.1:9300][indices:data/read/search[phase/query]]\r\nCaused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Query Failed [Failed to execute main query]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:405) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:106) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:259) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:273) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:300) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:297) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:577) [elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) [elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.1.0.jar:5.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]\r\n\tat java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]\r\nCaused by: java.lang.IllegalArgumentException: numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count\r\n\tat org.apache.lucene.search.TopScoreDocCollector.create(TopScoreDocCollector.java:170) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:219) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:106) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:259) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:273) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:300) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:297) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat 
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:577) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.0.jar:5.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_45]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_45]\r\n\tat java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_45]\r\n```\r\n\r\n\r\nI think that the `QueryPhase` class should catch this before being an exception, so we avoid the exception thrown by `TopScoreDocCollector`. In other words, any number below 0 should return a custom Elasticsearch exception.\r\n\r\nAlso, reading the code i don't understand why `-1` is just handled correctly.",
"comments": [
{
"body": "@gmoskovicz because -1 is taken as not set and then it will be set to default value.",
"created_at": "2017-01-12T02:55:13Z"
},
{
"body": "@clintongormley I give a pull request to fix this issue. it is: https://github.com/elastic/elasticsearch/pull/22579\r\ncan you please help to review it? thanks",
"created_at": "2017-01-12T09:18:55Z"
}
],
"number": 22530,
"title": "Unhandled query size parameter"
} | {
"body": "This change adds a check to `SearchSourceBuilder` to throw and exception if the size set on it is set to a negative value.\r\n\r\nCloses #22530\r\n",
"number": 25397,
"review_comments": [
{
"body": "Maybe we can set the size to the default of 10, that's what `SearchService#createContext` defaults to later on anyway if it finds the current `-1` value. I just find it easier to reason about if the value is set to the default in this case.",
"created_at": "2017-06-26T13:19:02Z"
},
{
"body": "nit: maybe you could test -1 and another random negative int instead?",
"created_at": "2017-06-26T13:19:07Z"
},
{
"body": "I think it's best to keep the default as-is here and let it be handled when creating the context as today so that we can distinguish between set and set from default.",
"created_at": "2017-06-26T13:43:27Z"
},
{
"body": "Maybe include the illegal value?",
"created_at": "2017-06-26T13:47:05Z"
},
{
"body": "Okay, the set/unset distinction makes sense e.g. when using the builder on the client side, otherwise we would always render the `size` even if not set. What makes me not like this is the fact that we reject \"-1\" as illegal but still use it as a \"legal\" value meaning something internally. To me it feels a bit weird when you are just looking at this class.",
"created_at": "2017-06-26T13:55:07Z"
},
{
"body": "I think you should just remove this line entirely.",
"created_at": "2017-07-03T13:12:28Z"
},
{
"body": "Well spotted, I had meant to remove this",
"created_at": "2017-07-03T13:15:20Z"
}
],
"title": "Adds check for negative search request size"
} | {
"commits": [
{
"message": "Adds check for negative search request size\n\nThis change adds a check to `SearchSourceBuilder` to throw and exception if the size set on it is set to a negative value.\n\nCloses #22530"
},
{
"message": "fix error in reindex"
},
{
"message": "update re-index tests"
},
{
"message": "Addresses review comment"
},
{
"message": "Fixed tests"
},
{
"message": "Added random negative size test"
},
{
"message": "Fixes test"
}
],
"files": [
{
"diff": "@@ -171,6 +171,9 @@ public int getSize() {\n * documents.\n */\n public Self setSize(int size) {\n+ if (size < 0) {\n+ throw new IllegalArgumentException(\"[size] parameter cannot be negative, found [\" + size + \"]\");\n+ }\n this.size = size;\n return self();\n }\n@@ -367,10 +370,13 @@ protected Self doForSlice(Self request, TaskId slicingTask) {\n .setShouldStoreResult(false)\n // Split requests per second between all slices\n .setRequestsPerSecond(requestsPerSecond / slices)\n- // Size is split between workers. This means the size might round down!\n- .setSize(size == SIZE_ALL_MATCHES ? SIZE_ALL_MATCHES : size / slices)\n // Sub requests don't have workers\n .setSlices(1);\n+ if (size != -1) {\n+ // Size is split between workers. This means the size might round\n+ // down!\n+ request.setSize(size == SIZE_ALL_MATCHES ? SIZE_ALL_MATCHES : size / slices);\n+ }\n // Set the parent task so this task is cancelled if we cancel the parent\n request.setParentTask(slicingTask);\n // TODO It'd be nice not to refresh on every slice. Instead we should refresh after the sub requests finish.",
"filename": "core/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequest.java",
"status": "modified"
},
{
"diff": "@@ -346,6 +346,9 @@ public int from() {\n * The number of search hits to return. Defaults to <tt>10</tt>.\n */\n public SearchSourceBuilder size(int size) {\n+ if (size < 0) {\n+ throw new IllegalArgumentException(\"[size] parameter cannot be negative, found [\" + size + \"]\");\n+ }\n this.size = size;\n return this;\n }",
"filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,9 @@ public void testForSlice() {\n original.setSlices(between(2, 1000));\n original.setRequestsPerSecond(\n randomBoolean() ? Float.POSITIVE_INFINITY : randomValueOtherThanMany(r -> r < 0, ESTestCase::randomFloat));\n- original.setSize(randomBoolean() ? AbstractBulkByScrollRequest.SIZE_ALL_MATCHES : between(0, Integer.MAX_VALUE));\n+ if (randomBoolean()) {\n+ original.setSize(between(0, Integer.MAX_VALUE));\n+ }\n \n TaskId slicingTask = new TaskId(randomAlphaOfLength(5), randomLong());\n SearchRequest sliceRequest = new SearchRequest();",
"filename": "core/src/test/java/org/elasticsearch/index/reindex/AbstractBulkByScrollRequestTestCase.java",
"status": "modified"
},
{
"diff": "@@ -365,6 +365,15 @@ public void testNegativeFromErrors() {\n assertEquals(\"[from] parameter cannot be negative\", expected.getMessage());\n }\n \n+ public void testNegativeSizeErrors() {\n+ int randomSize = randomIntBetween(-100000, -2);\n+ IllegalArgumentException expected = expectThrows(IllegalArgumentException.class,\n+ () -> new SearchSourceBuilder().size(randomSize));\n+ assertEquals(\"[size] parameter cannot be negative, found [\" + randomSize + \"]\", expected.getMessage());\n+ expected = expectThrows(IllegalArgumentException.class, () -> new SearchSourceBuilder().size(-1));\n+ assertEquals(\"[size] parameter cannot be negative, found [-1]\", expected.getMessage());\n+ }\n+\n private void assertIndicesBoostParseErrorMessage(String restContent, String expectedErrorMessage) throws IOException {\n try (XContentParser parser = createParser(JsonXContent.jsonXContent, restContent)) {\n ParsingException e = expectThrows(ParsingException.class, () -> SearchSourceBuilder.fromXContent(createParseContext(parser)));",
"filename": "core/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@ way to reindex old indices is to use the `reindex` API.\n * <<breaking_60_aggregations_changes>>\n * <<breaking_60_mappings_changes>>\n * <<breaking_60_docs_changes>>\n+* <<breaking_60_reindex_changes>>\n * <<breaking_60_cluster_changes>>\n * <<breaking_60_settings_changes>>\n * <<breaking_60_plugins_changes>>\n@@ -55,6 +56,8 @@ include::migrate_6_0/mappings.asciidoc[]\n \n include::migrate_6_0/docs.asciidoc[]\n \n+include::migrate_6_0/reindex.asciidoc[]\n+\n include::migrate_6_0/cluster.asciidoc[]\n \n include::migrate_6_0/settings.asciidoc[]",
"filename": "docs/reference/migration/migrate_6_0.asciidoc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,6 @@\n+[[breaking_60_reindex_changes]]\n+=== Reindex changes\n+\n+==== `size` parameter\n+\n+The `size` parameter can no longer be explicitly set to `-1`. If all documents are required then the `size` parameter should not be set.\n\\ No newline at end of file",
"filename": "docs/reference/migration/migrate_6_0/reindex.asciidoc",
"status": "added"
},
{
"diff": "@@ -32,8 +32,6 @@\n import java.util.Map;\n import java.util.function.Consumer;\n \n-import static org.elasticsearch.index.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;\n-\n /**\n * Rest handler for reindex actions that accepts a search request like Update-By-Query or Delete-By-Query\n */\n@@ -52,7 +50,6 @@ protected void parseInternalRequest(Request internal, RestRequest restRequest,\n \n SearchRequest searchRequest = internal.getSearchRequest();\n int scrollSize = searchRequest.source().size();\n- searchRequest.source().size(SIZE_ALL_MATCHES);\n \n try (XContentParser parser = extractRequestSpecificFields(restRequest, bodyConsumers)) {\n RestSearchAction.parseSearchRequest(searchRequest, restRequest, parser);",
"filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractBulkByQueryRestHandler.java",
"status": "modified"
},
{
"diff": "@@ -131,7 +131,9 @@ public void testDeleteByQueryRequest() throws IOException {\n private void randomRequest(AbstractBulkByScrollRequest<?> request) {\n request.getSearchRequest().indices(\"test\");\n request.getSearchRequest().source().size(between(1, 1000));\n- request.setSize(random().nextBoolean() ? between(1, Integer.MAX_VALUE) : -1);\n+ if (randomBoolean()) {\n+ request.setSize(between(1, Integer.MAX_VALUE));\n+ }\n request.setAbortOnVersionConflict(random().nextBoolean());\n request.setRefresh(rarely());\n request.setTimeout(TimeValue.parseTimeValue(randomTimeValue(), null, \"test\"));",
"filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/RoundTripTests.java",
"status": "modified"
},
{
"diff": "@@ -44,7 +44,7 @@\n id: 1\n body: { \"text\": \"test\" }\n - do:\n- catch: /size should be greater than 0 if the request is limited to some number of documents or -1 if it isn't but it was \\[-4\\]/\n+ catch: /\\[size\\] parameter cannot be negative, found \\[-4\\]/\n delete_by_query:\n index: test\n size: -4",
"filename": "modules/reindex/src/test/resources/rest-api-spec/test/delete_by_query/20_validation.yml",
"status": "modified"
},
{
"diff": "@@ -104,7 +104,7 @@\n id: 1\n body: { \"text\": \"test\" }\n - do:\n- catch: /size should be greater than 0 if the request is limited to some number of documents or -1 if it isn't but it was \\[-4\\]/\n+ catch: /\\[size\\] parameter cannot be negative, found \\[-4\\]/\n reindex:\n body:\n source:",
"filename": "modules/reindex/src/test/resources/rest-api-spec/test/reindex/20_validation.yml",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,7 @@\n id: 1\n body: { \"text\": \"test\" }\n - do:\n- catch: /size should be greater than 0 if the request is limited to some number of documents or -1 if it isn't but it was \\[-4\\]/\n+ catch: /\\[size\\] parameter cannot be negative, found \\[-4\\]/\n update_by_query:\n index: test\n size: -4",
"filename": "modules/reindex/src/test/resources/rest-api-spec/test/update_by_query/20_validation.yml",
"status": "modified"
}
]
} |
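The change in this record is a plain argument check in the setters (`SearchSourceBuilder#size` and `AbstractBulkByScrollRequest#setSize`), with "not set" expressed by never calling the setter at all. A self-contained sketch of the same pattern on a hypothetical builder, not `SearchSourceBuilder` itself:

```java
// Hypothetical builder: rejects negative sizes at set time and treats
// "never set" as the signal to fall back to the server-side default.
final class SizedRequestBuilder {

    private Integer size; // null means "not set"; the server applies its default

    SizedRequestBuilder size(int size) {
        if (size < 0) {
            throw new IllegalArgumentException(
                "[size] parameter cannot be negative, found [" + size + "]");
        }
        this.size = size;
        return this;
    }

    Integer size() {
        return size;
    }

    public static void main(String[] args) {
        System.out.println(new SizedRequestBuilder().size(10).size()); // 10
        try {
            new SizedRequestBuilder().size(-2);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // [size] parameter cannot be negative, found [-2]
        }
    }
}
```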
{
"body": "**Elasticsearch version**: 5.3\r\n\r\n**Plugins installed**: Cerebro, X-Pack\r\n\r\n**JVM version** (`java -version`): 1.8.0_66\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): RHEL 7.3\r\n\r\n**Description of the problem including expected versus actual behavior**: When trying to call shrink API and the destination index matches a template that provides mappings they get an error:\r\n\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[xxx-master][x.x.x.x:9300][indices:admin/shrink]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"mappings are not allowed when shrinking indices, all mappings are copied from the source index\"},\"status\":400}\r\n```\r\n\r\nIt would be nice if the shrink API bypassed checking for index templates that match the destination index. The error is misleading too since no mappings were provided by the user.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create index template for FooIndex* that includes a mapping.\r\n 2. Create index that matches index pattern FooIndex*\r\n 3. Call Shrink API for index created in step 2, making destination index also match the pattern FooIndex*.\r\n",
"comments": [
{
"body": "@clintongormley I want to take this issue.",
"created_at": "2017-06-06T07:32:14Z"
},
{
"body": "@fred84 go for it",
"created_at": "2017-06-06T11:19:12Z"
},
{
"body": "@clintongormley Hello. I have fixed this this bug, but need advice on further implementation.\r\n\r\n1. There are also aliases and customs applied to index from template. Should we also ignore them when shrinking index?\r\n2. Anonymous subclass of AckedClusterStateUpdateTask in MetaDataCreateIndexService contains lots of logic inside. I think we may convert it to inner class so it will be possible to unit-test this class. Here is code demonstrating this idea: https://github.com/fred84/elasticsearch/pull/1/files . Am I going in right direction?",
"created_at": "2017-06-12T19:21:23Z"
},
{
"body": "Hi @fred84 \r\n\r\nThanks for taking this on! I think that the shrunk index should ignore anything from templates and instead take its mappings, aliases, and settings from the original index, plus any new settings and aliases passed in with the shrink request.\r\n\r\nAs far as the direction you're going, I'll defer to @s1monw on that",
"created_at": "2017-06-13T08:54:12Z"
},
{
"body": "Closed by #25380",
"created_at": "2017-07-12T22:27:59Z"
}
],
"number": 25035,
"title": "Shrink API attempts to apply mapping from index templates."
} | {
"body": "@jasontedor fix only for #25035 \r\n",
"number": 25380,
"review_comments": [],
"title": "Shrink API should ignore templates"
} | {
"commits": [
{
"message": "Shrink API should ignore templates"
},
{
"message": "Merge branch 'master' into 25035_shrink_api_fix_only"
},
{
"message": "Number of replicas specified in request instead of template in PartitionedRoutingIT::testShrinking"
},
{
"message": "Shrink mapping yml test should allocate documents only on master"
},
{
"message": "Merge branch 'master' into 25035_shrink_api_fix_only"
},
{
"message": "Merge branch 'master' into 25035_shrink_api_fix_only\n\n* master: (181 commits)\n Use a non default port range in MockTransportService\n Add a shard filter search phase to pre-filter shards based on query rewriting (#25658)\n Prevent excessive disk consumption by log files\n Migrate RestHttpResponseHeadersIT to ESRestTestCase (#25675)\n Use config directory to find jvm.options\n Fix inadvertent rename of systemd tests\n Adding basic search request documentation for high level client (#25651)\n Disallow lang to be used with Stored Scripts (#25610)\n Fix typo in ScriptDocValues deprecation warnings (#25672)\n Changes DocValueFieldsFetchSubPhase to reuse doc values iterators for multiple hits (#25644)\n Query range fields by doc values when they are expected to be more efficient than points.\n Remove SearchHit#internalHits (#25653)\n [DOCS] Reorganized the highlighting topic so it's less confusing.\n Add an underscore to flood stage setting\n Avoid failing install if system-sysctl is masked\n Add another parent value option to join documentation (#25609)\n Ensure we rewrite common queries to `match_none` if possible (#25650)\n Remove reference to field-stats docs.\n Optimize the order of bytes in uuids for better compression. (#24615)\n Fix BytesReferenceStreamInput#skip with offset (#25634)\n ..."
}
],
"files": [
{
"diff": "@@ -264,58 +264,64 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n customs.put(entry.getKey(), entry.getValue());\n }\n \n- // apply templates, merging the mappings into the request mapping if exists\n- for (IndexTemplateMetaData template : templates) {\n- templateNames.add(template.getName());\n- for (ObjectObjectCursor<String, CompressedXContent> cursor : template.mappings()) {\n- String mappingString = cursor.value.string();\n- if (mappings.containsKey(cursor.key)) {\n- XContentHelper.mergeDefaults(mappings.get(cursor.key),\n+ final Index shrinkFromIndex = request.shrinkFrom();\n+\n+ if (shrinkFromIndex == null) {\n+ // apply templates, merging the mappings into the request mapping if exists\n+ for (IndexTemplateMetaData template : templates) {\n+ templateNames.add(template.getName());\n+ for (ObjectObjectCursor<String, CompressedXContent> cursor : template.mappings()) {\n+ String mappingString = cursor.value.string();\n+ if (mappings.containsKey(cursor.key)) {\n+ XContentHelper.mergeDefaults(mappings.get(cursor.key),\n MapperService.parseMapping(xContentRegistry, mappingString));\n- } else {\n- mappings.put(cursor.key,\n- MapperService.parseMapping(xContentRegistry, mappingString));\n- }\n- }\n- // handle custom\n- for (ObjectObjectCursor<String, Custom> cursor : template.customs()) {\n- String type = cursor.key;\n- IndexMetaData.Custom custom = cursor.value;\n- IndexMetaData.Custom existing = customs.get(type);\n- if (existing == null) {\n- customs.put(type, custom);\n- } else {\n- IndexMetaData.Custom merged = existing.mergeWith(custom);\n- customs.put(type, merged);\n- }\n- }\n- //handle aliases\n- for (ObjectObjectCursor<String, AliasMetaData> cursor : template.aliases()) {\n- AliasMetaData aliasMetaData = cursor.value;\n- //if an alias with same name came with the create index request itself,\n- // ignore this one taken from the index template\n- if (request.aliases().contains(new Alias(aliasMetaData.alias()))) {\n- continue;\n+ } else {\n+ mappings.put(cursor.key,\n+ MapperService.parseMapping(xContentRegistry, mappingString));\n+ }\n }\n- //if an alias with same name was already processed, ignore this one\n- if (templatesAliases.containsKey(cursor.key)) {\n- continue;\n+ // handle custom\n+ for (ObjectObjectCursor<String, Custom> cursor : template.customs()) {\n+ String type = cursor.key;\n+ IndexMetaData.Custom custom = cursor.value;\n+ IndexMetaData.Custom existing = customs.get(type);\n+ if (existing == null) {\n+ customs.put(type, custom);\n+ } else {\n+ IndexMetaData.Custom merged = existing.mergeWith(custom);\n+ customs.put(type, merged);\n+ }\n }\n-\n- //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to\n- if (aliasMetaData.alias().contains(\"{index}\")) {\n- String templatedAlias = aliasMetaData.alias().replace(\"{index}\", request.index());\n- aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias);\n+ //handle aliases\n+ for (ObjectObjectCursor<String, AliasMetaData> cursor : template.aliases()) {\n+ AliasMetaData aliasMetaData = cursor.value;\n+ //if an alias with same name came with the create index request itself,\n+ // ignore this one taken from the index template\n+ if (request.aliases().contains(new Alias(aliasMetaData.alias()))) {\n+ continue;\n+ }\n+ //if an alias with same name was already processed, ignore this one\n+ if (templatesAliases.containsKey(cursor.key)) {\n+ continue;\n+ }\n+\n+ //Allow templatesAliases to be templated by 
replacing a token with the name of the index that we are applying it to\n+ if (aliasMetaData.alias().contains(\"{index}\")) {\n+ String templatedAlias = aliasMetaData.alias().replace(\"{index}\", request.index());\n+ aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias);\n+ }\n+\n+ aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData());\n+ templatesAliases.put(aliasMetaData.alias(), aliasMetaData);\n }\n-\n- aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData());\n- templatesAliases.put(aliasMetaData.alias(), aliasMetaData);\n }\n }\n Settings.Builder indexSettingsBuilder = Settings.builder();\n- // apply templates, here, in reverse order, since first ones are better matching\n- for (int i = templates.size() - 1; i >= 0; i--) {\n- indexSettingsBuilder.put(templates.get(i).settings());\n+ if (shrinkFromIndex == null) {\n+ // apply templates, here, in reverse order, since first ones are better matching\n+ for (int i = templates.size() - 1; i >= 0; i--) {\n+ indexSettingsBuilder.put(templates.get(i).settings());\n+ }\n }\n // now, put the request settings, so they override templates\n indexSettingsBuilder.put(request.settings());\n@@ -340,7 +346,6 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n }\n indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName());\n indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID());\n- final Index shrinkFromIndex = request.shrinkFrom();\n final IndexMetaData.Builder tmpImdBuilder = IndexMetaData.builder(request.index());\n \n final int routingNumShards;",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -67,6 +67,7 @@ public void testShrinking() throws Exception {\n client().admin().indices().prepareCreate(index)\n .setSettings(Settings.builder()\n .put(\"index.number_of_shards\", currentShards)\n+ .put(\"index.number_of_replicas\", numberOfReplicas())\n .put(\"index.routing_partition_size\", partitionSize))\n .addMapping(\"type\", \"{\\\"type\\\":{\\\"_routing\\\":{\\\"required\\\":true}}}\", XContentType.JSON)\n .execute().actionGet();\n@@ -107,6 +108,7 @@ public void testShrinking() throws Exception {\n client().admin().indices().prepareShrinkIndex(previousIndex, index)\n .setSettings(Settings.builder()\n .put(\"index.number_of_shards\", currentShards)\n+ .put(\"index.number_of_replicas\", numberOfReplicas())\n .build()).get();\n ensureGreen();\n }",
"filename": "core/src/test/java/org/elasticsearch/routing/PartitionedRoutingIT.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,77 @@\n+---\n+\"Shrink index ignores target template mapping\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: bug fixed in 6.0\n+\n+ - do:\n+ cluster.state: {}\n+ # Get master node id\n+\n+ - set: { master_node: master }\n+\n+ # create index\n+ - do:\n+ indices.create:\n+ index: source\n+ wait_for_active_shards: 1\n+ body:\n+ settings:\n+ # ensure everything is allocated on a single node\n+ index.routing.allocation.include._id: $master\n+ number_of_replicas: 0\n+ mappings:\n+ test:\n+ properties:\n+ count:\n+ type: text\n+\n+ # index document\n+ - do:\n+ index:\n+ index: source\n+ type: test\n+ id: \"1\"\n+ body: { \"count\": \"1\" }\n+\n+ # create template matching shrink tagret\n+ - do:\n+ indices.put_template:\n+ name: tpl1\n+ body:\n+ index_patterns: targ*\n+ mappings:\n+ test:\n+ properties:\n+ count:\n+ type: integer\n+\n+ # make it read-only\n+ - do:\n+ indices.put_settings:\n+ index: source\n+ body:\n+ index.blocks.write: true\n+ index.number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+ index: source\n+\n+ # now we do the actual shrink\n+ - do:\n+ indices.shrink:\n+ index: \"source\"\n+ target: \"target\"\n+ wait_for_active_shards: 1\n+ master_timeout: 10s\n+ body:\n+ settings:\n+ index.number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.shrink/20_source_mapping.yml",
"status": "added"
}
]
} |
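For readers skimming the diff above: the new `20_source_mapping.yml` test captures the fixed behaviour, and a minimal console-style sketch of the same scenario follows. It assumes a shrinkable index named `source` already exists and is fully allocated on one node; `tpl1`, `targ*` and `target` are the hypothetical names also used in that test. After this change, a template matching the shrink target is ignored and the shrunken index copies its mappings from `source` instead.

```
# Template whose pattern matches the shrink target; previously a matching
# template could cause the shrink request to be rejected or to pick up
# unwanted template mappings.
PUT /_template/tpl1
{
  "index_patterns": ["targ*"],
  "mappings": {
    "test": { "properties": { "count": { "type": "integer" } } }
  }
}

# The source index must be read-only before it can be shrunk.
PUT /source/_settings
{
  "index.blocks.write": true
}

# Shrink: the target now takes mappings, aliases and settings from
# "source" and ignores anything contributed by "tpl1".
POST /source/_shrink/target
{
  "settings": { "index.number_of_replicas": 0 }
}
```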
{
"body": "<!--\nGitHub is reserved for bug reports and feature requests. The best place\nto ask a general question is at the Elastic Discourse forums at\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\na feature request, please include one and only one of the below blocks\nin your new issue.\n-->\n\n<!--\nIf you are filing a feature request, please remove the above bug\nreport block and provide responses for all of the below items.\n-->\n\n**Describe the feature**:\n\nWhen aggregating documents with a date field mapping using `format: epoch_seconds` I would like to use epoch seconds in the `to` and `from` fields of a date-range aggregation. At present dates passed in `to` and `from` are not converted to milliseconds before the date-range aggregation.\n\nThis seems inconsistent with how dates are converted from the mapping format to the internal usage format when PUTing documents when the date-field format is specified.\n",
"comments": [
{
"body": "@hrfuller the `format` parameter in the agg is for the display format only. For input formats, you can add `epoch_seconds` to your field mapping, eg:\n\n```\n\"mappings\": {\n \"t\": {\n \"properties\": {\n \"date\": {\n \"type\": \"date\",\n \"format\": \"strict_date_optional_time||epoch_millis\"\n }\n }\n }\n}\n```\n",
"created_at": "2016-04-25T18:07:50Z"
},
{
"body": "@clintongormley My mapping has a date property formatted as `epoch_seconds`. When using a date-range aggregation and passing a date in `epoch_seconds` as the value of the `to` and `from` fields.\nI.e\n\n``` python\n'date_range': {\n 'field': 'dims.time',\n 'ranges': [\n {\n 'from': 1454463970,\n 'to': 1455150295,\n }\n ]\n},\n```\n\n the date-range aggregator interprets the timestamps as `epoch_millis`. Of course I can easily convert the `epoch_seconds` to `epoch_millis`.\n\nThis was a feature request because it seems like it would improve the consistency of the elasticsearch API if date ranges were query-able/aggregate-able using the date input format specified in the date mapping.\n\n```\n\"version\" : {\n \"number\" : \"2.3.1\",\n \"build_hash\" : \"bd980929010aef404e7cb0843e61d0665269fc39\",\n \"build_timestamp\" : \"2016-04-04T12:25:05Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"5.5.0\"\n },\n```\n",
"created_at": "2016-04-25T23:15:33Z"
},
{
"body": "Sorry, I completely misread the original description. I thought you were setting the format in the agg, not in the mapping. This is a bug.\n",
"created_at": "2016-04-26T14:10:43Z"
},
{
"body": "does it support `unbounded range`. For example `gt` : `unix-timestamp`",
"created_at": "2017-12-14T02:59:42Z"
}
],
"number": 17920,
"title": "Use epoch seconds in `to` and `from` fields of date range aggregation `ranges` objects."
} | {
"body": "Currently the `to` and `from` parameter in the `date_range` aggregation is not parsed with the correct date field format from the mappings or the aggregation if the argument is numeric, but always treated as a long value specifying `epoch_millis`. This leads to problems e.g. when the format is `epoch_second`, but the `to` and `from` are currently treated as millis.\r\n\r\n#Closes #17920 ",
"number": 25376,
"review_comments": [
{
"body": "Although this will solve the case where the format is `epoch_millis` or `epoch_seconds`, does this not now mean that if you have a string based date format (e.g. `strict_date_optional_time`) you can no longer supply the from and to values as a long because the parser will throw an exception stating an invalid format?",
"created_at": "2017-06-27T07:57:44Z"
},
{
"body": "That's true, but to me that's also what I would expect. If the `format` in the mapping is `strict_hour_minute_second` I would expect this to be the format of dates at index and at query time (if not specified otherwise). Note that the user could either use `strict_hour_minute_second||epoch_millis` in the mappings or use `\"format\" : \"epoch_millis\"` in the aggregation to still be able to supply the to and from as long.",
"created_at": "2017-06-27T11:16:07Z"
},
{
"body": "In that case, if we are always going to interpret the users input as a formatted date, then we should just have the parser get the string value of the from and to fields in the request and not have the parser try to work out if the input is a string or a number and convert to string at that point. That will mean we don't have the conversion `double --> long --> string --> double` and instead always have `string --> double`. \r\n\r\nSaying that I'm not sure if the original intended fix to #17920 was to also remove the ability to specific epoch time as a long when the format is non-epoch based /cc @clintongormley ",
"created_at": "2017-06-27T12:14:16Z"
},
{
"body": "The original intention of https://github.com/elastic/elasticsearch/issues/10971 was that dates should always be parsed according to their listed formats. Part of this change, which was never added, was to support a query time syntax that could be used to specify `ms`, regardless of the listed formats. so kibana would be able to use `ms:12345` (see the \"Query time\" heading on that issue)",
"created_at": "2017-06-29T17:15:16Z"
},
{
"body": "@clintongormley I'm not sure I understand what this means for this PR. For the `date_range` aggregation the user can specify `\"format\" : \"epoch_millis\"` and then use numeric ms values in \"to\" and \"from\", regardless of whats in the mapping. The change here is that before we treated _every_ numeric value as if it was a millisecond date, now we either use the format in the mappings or in the aggregation itself, but throw an error if neither of it can parse a millisecond/second numeric date.",
"created_at": "2017-06-30T10:45:22Z"
},
{
"body": "makes sense, thanks",
"created_at": "2017-06-30T11:19:09Z"
},
{
"body": "> we should just have the parser get the string value of the from and to fields in the request and not have the parser try to work out if the input is a string or a number and convert to string at that point \r\n\r\nTalked about this with @colings86 f2f, the problem with parsing all numeric values back to strings in the aggregation parser is that we end up with different Range objects that are no longer `equal` (something we check e.g. in `DateRangeTests`). Also, doing the conversion at parsing will e.g. still leave the door open for current users of the transport client to still use numeric to/from values by accident which we then don't convert. We aggreed on sticking with the current solution.",
"created_at": "2017-07-10T11:20:26Z"
}
],
"title": "Change parsing of numeric `to` and `from` parameters in `date_range` aggregation"
} | {
"commits": [
{
"message": "Fix #17920"
},
{
"message": "Adding more test cases"
},
{
"message": "Skip range date test for versions befor 6.0"
},
{
"message": "Adding note to migration docs"
}
],
"files": [
{
"diff": "@@ -26,15 +26,13 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range;\n import org.elasticsearch.search.aggregations.support.ValuesSource;\n-import org.elasticsearch.search.aggregations.support.ValuesSource.Numeric;\n import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;\n-import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;\n-import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Objects;\n+import java.util.function.Function;\n \n public abstract class AbstractRangeBuilder<AB extends AbstractRangeBuilder<AB, R>, R extends Range>\n extends ValuesSourceAggregationBuilder<ValuesSource.Numeric, AB> {\n@@ -63,10 +61,10 @@ protected AbstractRangeBuilder(StreamInput in, InternalRange.Factory<?, ?> range\n * Resolve any strings in the ranges so we have a number value for the from\n * and to of each range. The ranges are also sorted before being returned.\n */\n- protected Range[] processRanges(SearchContext context, ValuesSourceConfig<Numeric> config) {\n+ protected Range[] processRanges(Function<Range, Range> rangeProcessor) {\n Range[] ranges = new Range[this.ranges.size()];\n for (int i = 0; i < ranges.length; i++) {\n- ranges[i] = this.ranges.get(i).process(config.format(), context);\n+ ranges[i] = rangeProcessor.apply(this.ranges.get(i));\n }\n sortRanges(ranges);\n return ranges;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -218,7 +219,7 @@ public DateRangeAggregationBuilder addRange(String key, DateTime from, DateTime\n return this;\n }\n \n- private Double convertDateTime(DateTime dateTime) {\n+ private static Double convertDateTime(DateTime dateTime) {\n if (dateTime == null) {\n return null;\n } else {\n@@ -281,7 +282,27 @@ protected DateRangeAggregatorFactory innerBuild(SearchContext context, ValuesSou\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n // We need to call processRanges here so they are parsed and we know whether `now` has been used before we make\n // the decision of whether to cache the request\n- RangeAggregator.Range[] ranges = processRanges(context, config);\n+ RangeAggregator.Range[] ranges = processRanges(range -> {\n+ DocValueFormat parser = config.format();\n+ assert parser != null;\n+ double from = range.getFrom();\n+ double to = range.getTo();\n+ String fromAsString = range.getFromAsString();\n+ String toAsString = range.getToAsString();\n+ if (fromAsString != null) {\n+ from = parser.parseDouble(fromAsString, false, context.getQueryShardContext()::nowInMillis);\n+ } else if (Double.isFinite(from)) {\n+ // from/to provided as double should be converted to string and parsed regardless to support\n+ // different formats like `epoch_millis` vs. `epoch_second` with numeric input\n+ from = parser.parseDouble(Long.toString((long) from), false, context.getQueryShardContext()::nowInMillis);\n+ }\n+ if (toAsString != null) {\n+ to = parser.parseDouble(toAsString, false, context.getQueryShardContext()::nowInMillis);\n+ } else if (Double.isFinite(to)) {\n+ to = parser.parseDouble(Long.toString((long) to), false, context.getQueryShardContext()::nowInMillis);\n+ }\n+ return new RangeAggregator.Range(range.getKey(), from, fromAsString, to, toAsString);\n+ });\n if (ranges.length == 0) {\n throw new IllegalArgumentException(\"No [ranges] specified for the [\" + this.getName() + \"] aggregation\");\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/DateRangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n@@ -138,7 +139,19 @@ public RangeAggregationBuilder addUnboundedFrom(double from) {\n protected RangeAggregatorFactory innerBuild(SearchContext context, ValuesSourceConfig<Numeric> config,\n AggregatorFactory<?> parent, Builder subFactoriesBuilder) throws IOException {\n // We need to call processRanges here so they are parsed before we make the decision of whether to cache the request\n- Range[] ranges = processRanges(context, config);\n+ Range[] ranges = processRanges(range -> {\n+ DocValueFormat parser = config.format();\n+ assert parser != null;\n+ Double from = range.from;\n+ Double to = range.to;\n+ if (range.fromAsStr != null) {\n+ from = parser.parseDouble(range.fromAsStr, false, context.getQueryShardContext()::nowInMillis);\n+ }\n+ if (range.toAsStr != null) {\n+ to = parser.parseDouble(range.toAsStr, false, context.getQueryShardContext()::nowInMillis);\n+ }\n+ return new Range(range.key, from, range.fromAsStr, to, range.toAsStr);\n+ });\n if (ranges.length == 0) {\n throw new IllegalArgumentException(\"No [ranges] specified for the [\" + this.getName() + \"] aggregation\");\n }",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregationBuilder.java",
"status": "modified"
},
{
"diff": "@@ -90,8 +90,27 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeDouble(to);\n }\n \n+ public double getFrom() {\n+ return this.from;\n+ }\n+\n+ public double getTo() {\n+ return this.to;\n+ }\n+\n+ public String getFromAsString() {\n+ return this.fromAsStr;\n+ }\n+\n+ public String getToAsString() {\n+ return this.toAsStr;\n+ }\n+\n+ public String getKey() {\n+ return this.key;\n+ }\n \n- protected Range(String key, Double from, String fromAsStr, Double to, String toAsStr) {\n+ public Range(String key, Double from, String fromAsStr, Double to, String toAsStr) {\n this.key = key;\n this.from = from == null ? Double.NEGATIVE_INFINITY : from;\n this.fromAsStr = fromAsStr;\n@@ -108,19 +127,6 @@ public String toString() {\n return \"[\" + from + \" to \" + to + \")\";\n }\n \n- public Range process(DocValueFormat parser, SearchContext context) {\n- assert parser != null;\n- Double from = this.from;\n- Double to = this.to;\n- if (fromAsStr != null) {\n- from = parser.parseDouble(fromAsStr, false, context.getQueryShardContext()::nowInMillis);\n- }\n- if (toAsStr != null) {\n- to = parser.parseDouble(toAsStr, false, context.getQueryShardContext()::nowInMillis);\n- }\n- return new Range(key, from, fromAsStr, to, toAsStr);\n- }\n-\n public static Range fromXContent(XContentParser parser) throws IOException {\n XContentParser.Token token;\n String currentFieldName = null;",
"filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/RangeAggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -136,7 +137,6 @@ public void testDateMath() throws Exception {\n assertThat(range.getName(), equalTo(\"range\"));\n assertThat(range.getBuckets().size(), equalTo(3));\n \n- // TODO: use diamond once JI-9019884 is fixed\n List<Range.Bucket> buckets = new ArrayList<>(range.getBuckets());\n \n Range.Bucket bucket = buckets.get(0);\n@@ -855,7 +855,6 @@ public void testEmptyAggregation() throws Exception {\n assertThat(bucket, Matchers.notNullValue());\n \n Range dateRange = bucket.getAggregations().get(\"date_range\");\n- // TODO: use diamond once JI-9019884 is fixed\n List<Range.Bucket> buckets = new ArrayList<>(dateRange.getBuckets());\n assertThat(dateRange, Matchers.notNullValue());\n assertThat(dateRange.getName(), equalTo(\"date_range\"));\n@@ -926,4 +925,142 @@ public void testDontCacheScripts() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n .getMissCount(), equalTo(1L));\n }\n+\n+ /**\n+ * Test querying ranges on date mapping specifying a format with to/from\n+ * values specified as Strings\n+ */\n+ public void testRangeWithFormatStringValue() throws Exception {\n+ String indexName = \"dateformat_test_idx\";\n+ assertAcked(prepareCreate(indexName).addMapping(\"type\", \"date\", \"type=date,format=strict_hour_minute_second\"));\n+ indexRandom(true,\n+ client().prepareIndex(indexName, \"type\", \"1\").setSource(jsonBuilder().startObject().field(\"date\", \"00:16:40\").endObject()),\n+ client().prepareIndex(indexName, \"type\", \"2\").setSource(jsonBuilder().startObject().field(\"date\", \"00:33:20\").endObject()),\n+ client().prepareIndex(indexName, \"type\", \"3\").setSource(jsonBuilder().startObject().field(\"date\", \"00:50:00\").endObject()));\n+\n+ // using no format should work when to/from is compatible with format in\n+ // mapping\n+ SearchResponse searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(\"00:16:40\", \"00:50:00\").addRange(\"00:50:00\", \"01:06:40\"))\n+ .get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ List<Range.Bucket> buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"00:16:40-00:50:00\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"00:50:00-01:06:40\", 3000000L, 4000000L);\n+\n+ // using different format should work when to/from is compatible with\n+ // format in aggregation\n+ searchResponse = client().prepareSearch(indexName).setSize(0).addAggregation(\n+ dateRange(\"date_range\").field(\"date\").addRange(\"00.16.40\", \"00.50.00\").addRange(\"00.50.00\", \"01.06.40\").format(\"HH.mm.ss\"))\n+ .get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"00.16.40-00.50.00\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"00.50.00-01.06.40\", 3000000L, 4000000L);\n+\n+ // providing numeric input with format should work, but bucket keys are\n+ // different 
now\n+ searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(\n+ dateRange(\"date_range\").field(\"date\").addRange(1000000, 3000000).addRange(3000000, 4000000).format(\"epoch_millis\"))\n+ .get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"1000000-3000000\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"3000000-4000000\", 3000000L, 4000000L);\n+\n+ // providing numeric input without format should throw an exception\n+ Exception e = expectThrows(Exception.class, () -> client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(1000000, 3000000).addRange(3000000, 4000000)).get());\n+ Throwable cause = e.getCause();\n+ assertThat(cause, instanceOf(ElasticsearchParseException.class));\n+ assertEquals(\"failed to parse date field [1000000] with format [strict_hour_minute_second]\", cause.getMessage());\n+ }\n+\n+ /**\n+ * Test querying ranges on date mapping specifying a format with to/from\n+ * values specified as numeric value\n+ */\n+ public void testRangeWithFormatNumericValue() throws Exception {\n+ String indexName = \"dateformat_numeric_test_idx\";\n+ assertAcked(prepareCreate(indexName).addMapping(\"type\", \"date\", \"type=date,format=epoch_second\"));\n+ indexRandom(true,\n+ client().prepareIndex(indexName, \"type\", \"1\").setSource(jsonBuilder().startObject().field(\"date\", 1000).endObject()),\n+ client().prepareIndex(indexName, \"type\", \"2\").setSource(jsonBuilder().startObject().field(\"date\", 2000).endObject()),\n+ client().prepareIndex(indexName, \"type\", \"3\").setSource(jsonBuilder().startObject().field(\"date\", 3000).endObject()));\n+\n+ // using no format should work when to/from is compatible with format in\n+ // mapping\n+ SearchResponse searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(1000, 3000).addRange(3000, 4000)).get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ List<Bucket> buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"1000-3000\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"3000-4000\", 3000000L, 4000000L);\n+\n+ // using no format should also work when and to/from are string values\n+ searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(\"1000\", \"3000\").addRange(\"3000\", \"4000\")).get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"1000-3000\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"3000-4000\", 3000000L, 4000000L);\n+\n+ // also e-notation should work, fractional parts should be truncated\n+ searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(1.0e3, 3000.8123).addRange(3000.8123, 4.0e3)).get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"1000-3000\", 1000000L, 3000000L);\n+ 
assertBucket(buckets.get(1), 1L, \"3000-4000\", 3000000L, 4000000L);\n+\n+ // however, e-notation should and fractional parts provided as string\n+ // should be parsed and error if not compatible\n+ Exception e = expectThrows(Exception.class, () -> client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(\"1.0e3\", \"3.0e3\").addRange(\"3.0e3\", \"4.0e3\")).get());\n+ assertThat(e.getCause(), instanceOf(ElasticsearchParseException.class));\n+ assertEquals(\"failed to parse date field [1.0e3] with format [epoch_second]\", e.getCause().getMessage());\n+\n+ e = expectThrows(Exception.class, () -> client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(dateRange(\"date_range\").field(\"date\").addRange(\"1000.123\", \"3000.8\").addRange(\"3000.8\", \"4000.3\")).get());\n+ assertThat(e.getCause(), instanceOf(ElasticsearchParseException.class));\n+ assertEquals(\"failed to parse date field [1000.123] with format [epoch_second]\", e.getCause().getMessage());\n+\n+ // using different format should work when to/from is compatible with\n+ // format in aggregation\n+ searchResponse = client().prepareSearch(indexName).setSize(0).addAggregation(\n+ dateRange(\"date_range\").field(\"date\").addRange(\"00.16.40\", \"00.50.00\").addRange(\"00.50.00\", \"01.06.40\").format(\"HH.mm.ss\"))\n+ .get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"00.16.40-00.50.00\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"00.50.00-01.06.40\", 3000000L, 4000000L);\n+\n+ // providing different numeric input with format should work, but bucket\n+ // keys are different now\n+ searchResponse = client().prepareSearch(indexName).setSize(0)\n+ .addAggregation(\n+ dateRange(\"date_range\").field(\"date\").addRange(1000000, 3000000).addRange(3000000, 4000000).format(\"epoch_millis\"))\n+ .get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(3L));\n+ buckets = checkBuckets(searchResponse.getAggregations().get(\"date_range\"), \"date_range\", 2);\n+ assertBucket(buckets.get(0), 2L, \"1000000-3000000\", 1000000L, 3000000L);\n+ assertBucket(buckets.get(1), 1L, \"3000000-4000000\", 3000000L, 4000000L);\n+ }\n+\n+ private static List<Range.Bucket> checkBuckets(Range dateRange, String expectedAggName, long expectedBucketsSize) {\n+ assertThat(dateRange, Matchers.notNullValue());\n+ assertThat(dateRange.getName(), equalTo(expectedAggName));\n+ List<Range.Bucket> buckets = new ArrayList<>(dateRange.getBuckets());\n+ assertThat(buckets.size(), is(2));\n+ return buckets;\n+ }\n+\n+ private static void assertBucket(Bucket bucket, long bucketSize, String expectedKey, long expectedFrom, long expectedTo) {\n+ assertThat(bucket.getDocCount(), equalTo(bucketSize));\n+ assertThat((String) bucket.getKey(), equalTo(expectedKey));\n+ assertThat(((DateTime) bucket.getFrom()).getMillis(), equalTo(expectedFrom));\n+ assertThat(((DateTime) bucket.getTo()).getMillis(), equalTo(expectedTo));\n+ assertThat(bucket.getAggregations().asList().isEmpty(), is(true));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/DateRangeIT.java",
"status": "modified"
},
{
"diff": "@@ -49,3 +49,11 @@ POST /twitter/_search?size=0\n --------------------------------------------------\n // CONSOLE\n // TEST[setup:twitter]\n+\n+==== Numeric `to` and `from` parameters in `date_range` aggregation are interpreted according to `format` now\n+\n+Numeric `to` and `from` parameters in `date_range` aggregations used to always be interpreted as `epoch_millis`,\n+making other numeric formats like `epoch_seconds` unusable for numeric input values. \n+Now we interpret these parameters according to the `format` of the target field. \n+If the `format` in the mappings is not compatible with the numeric input value, a compatible \n+`format` (e.g. `epoch_millis`, `epoch_second`) must be specified in the `date_range` aggregation, otherwise an error is thrown.",
"filename": "docs/reference/migration/migrate_6_0/aggregations.asciidoc",
"status": "modified"
},
{
"diff": "@@ -14,6 +14,7 @@ setup:\n type: double\n date:\n type: date\n+ format: epoch_second\n \n - do:\n cluster.health:\n@@ -225,3 +226,50 @@ setup:\n \n - match: { aggregations.ip_range.buckets.1.doc_count: 2 } \n \n+---\n+\"Date range\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: before 6.0, numeric date_range to/from parameters were always parsed as if they are epoch_millis (#17920)\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 1\n+ body: { \"date\" : 1000 }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 2\n+ body: { \"date\" : 2000 }\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 3\n+ body: { \"date\" : 3000 }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ search:\n+ body: { \"size\" : 0, \"aggs\" : { \"date_range\" : { \"date_range\" : { \"field\" : \"date\", \"ranges\": [ { \"from\" : 1000, \"to\": 3000 }, { \"from\": 3000, \"to\": 4000 } ] } } } }\n+\n+ - match: { hits.total: 3 }\n+\n+ - length: { aggregations.date_range.buckets: 2 }\n+\n+ - match: { aggregations.date_range.buckets.0.doc_count: 2 }\n+ - match: { aggregations.date_range.buckets.0.key: \"1000-3000\" }\n+ - match: { aggregations.date_range.buckets.0.from: 1000000 }\n+ - match: { aggregations.date_range.buckets.0.to: 3000000 }\n+ \n+ - match: { aggregations.date_range.buckets.1.doc_count: 1 }\n+ - match: { aggregations.date_range.buckets.1.key: \"3000-4000\" }\n+ - match: { aggregations.date_range.buckets.1.from: 3000000 }\n+ - match: { aggregations.date_range.buckets.1.to: 4000000 }\n+",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml",
"status": "modified"
}
]
} |
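As a quick illustration of the migration note and YAML test in the record above, here is a console-style sketch; the index name `seconds_idx` and the sample document are made up for the example. With the field mapped as `epoch_second`, numeric `from`/`to` values in the `date_range` aggregation are now parsed as seconds rather than silently treated as `epoch_millis`; supplying `"format": "epoch_millis"` inside the aggregation would switch back to millisecond input.

```
# Hypothetical index whose date field uses the epoch_second format.
PUT /seconds_idx
{
  "mappings": {
    "doc": {
      "properties": {
        "date": { "type": "date", "format": "epoch_second" }
      }
    }
  }
}

PUT /seconds_idx/doc/1
{ "date": 2000 }

# from/to are parsed with the field's epoch_second format, so this range
# covers 1000s..3000s since the epoch and matches the document above.
GET /seconds_idx/_search
{
  "size": 0,
  "aggs": {
    "recent": {
      "date_range": {
        "field": "date",
        "ranges": [ { "from": 1000, "to": 3000 } ]
      }
    }
  }
}
```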
{
"body": "**Elasticsearch version**: 5.3\r\n\r\n**Plugins installed**: Cerebro, X-Pack\r\n\r\n**JVM version** (`java -version`): 1.8.0_66\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): RHEL 7.3\r\n\r\n**Description of the problem including expected versus actual behavior**: When trying to call shrink API and the destination index matches a template that provides mappings they get an error:\r\n\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"remote_transport_exception\",\"reason\":\"[xxx-master][x.x.x.x:9300][indices:admin/shrink]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"mappings are not allowed when shrinking indices, all mappings are copied from the source index\"},\"status\":400}\r\n```\r\n\r\nIt would be nice if the shrink API bypassed checking for index templates that match the destination index. The error is misleading too since no mappings were provided by the user.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create index template for FooIndex* that includes a mapping.\r\n 2. Create index that matches index pattern FooIndex*\r\n 3. Call Shrink API for index created in step 2, making destination index also match the pattern FooIndex*.\r\n",
"comments": [
{
"body": "@clintongormley I want to take this issue.",
"created_at": "2017-06-06T07:32:14Z"
},
{
"body": "@fred84 go for it",
"created_at": "2017-06-06T11:19:12Z"
},
{
"body": "@clintongormley Hello. I have fixed this this bug, but need advice on further implementation.\r\n\r\n1. There are also aliases and customs applied to index from template. Should we also ignore them when shrinking index?\r\n2. Anonymous subclass of AckedClusterStateUpdateTask in MetaDataCreateIndexService contains lots of logic inside. I think we may convert it to inner class so it will be possible to unit-test this class. Here is code demonstrating this idea: https://github.com/fred84/elasticsearch/pull/1/files . Am I going in right direction?",
"created_at": "2017-06-12T19:21:23Z"
},
{
"body": "Hi @fred84 \r\n\r\nThanks for taking this on! I think that the shrunk index should ignore anything from templates and instead take its mappings, aliases, and settings from the original index, plus any new settings and aliases passed in with the shrink request.\r\n\r\nAs far as the direction you're going, I'll defer to @s1monw on that",
"created_at": "2017-06-13T08:54:12Z"
},
{
"body": "Closed by #25380",
"created_at": "2017-07-12T22:27:59Z"
}
],
"number": 25035,
"title": "Shrink API attempts to apply mapping from index templates."
} | {
"body": "Aliases, mapping, customs and settings from templates should be ignored when shrinking index #25035 ",
"number": 25373,
"review_comments": [],
"title": "Shrink api should ignore templates"
} | {
"commits": [
{
"message": "refactoring of MetaDataCreateIndexService::onlyCreateIndex#"
},
{
"message": "Merge branch 'master' into 25034_shrink_api_should_ignore_mapping_from_template"
},
{
"message": "extract task from anonymous class to inner in onlyCreateIndex"
},
{
"message": "unit test for index creation task"
},
{
"message": "index creation task refactoring"
},
{
"message": "tests for MetaDataCreateIndex task"
},
{
"message": "more tests on IndexCreationTask"
},
{
"message": "more tests in index creation task"
},
{
"message": "more tests for create index action"
},
{
"message": "test for shring index creation"
},
{
"message": "resolve conflict with master (primary terms for shrunk index)"
},
{
"message": "index creation action refactoring"
},
{
"message": "Merge branch 'master' into 25035_shrink_api_should_ignore_mapping_from_template"
},
{
"message": "minor cleanup in IndexCreationTask and fix integration tests"
},
{
"message": "Merge branch 'master' into 25034_shrink_api_should_ignore_mapping_from_template"
},
{
"message": "Merge branch 'master' into 25035_shrink_api_should_ignore_mapping_from_template"
},
{
"message": "skip settings from template when shrinking index"
}
],
"files": [
{
"diff": "@@ -19,11 +19,10 @@\n \n package org.elasticsearch.cluster.metadata;\n \n-import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n+import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n-import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ResourceAlreadyExistsException;\n import org.elasticsearch.Version;\n@@ -80,20 +79,24 @@\n import java.io.IOException;\n import java.io.UnsupportedEncodingException;\n import java.nio.file.Path;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.Comparator;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Locale;\n import java.util.Map;\n+import java.util.List;\n+import java.util.HashMap;\n+import java.util.Collections;\n import java.util.Set;\n+import java.util.Locale;\n+import java.util.ArrayList;\n+import java.util.Iterator;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.function.BiFunction;\n import java.util.function.Predicate;\n+import java.util.stream.Stream;\n+import java.util.stream.StreamSupport;\n import java.util.stream.IntStream;\n-\n-import static org.elasticsearch.action.support.ContextPreservingActionListener.wrapPreservingContext;\n+import static java.util.Comparator.comparingInt;\n+import static java.util.stream.Collectors.toList;\n+import static java.util.stream.Collectors.toMap;\n+import static java.util.stream.Collectors.toSet;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_CREATION_DATE;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_INDEX_UUID;\n@@ -118,6 +121,408 @@ public class MetaDataCreateIndexService extends AbstractComponent {\n private final NamedXContentRegistry xContentRegistry;\n private final ThreadPool threadPool;\n \n+ interface IndexValidator {\n+ void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state);\n+ }\n+\n+ static class IndexCreationTask extends AckedClusterStateUpdateTask<ClusterStateUpdateResponse> {\n+\n+ private final IndicesService indicesService;\n+ private final AliasValidator aliasValidator;\n+ private final NamedXContentRegistry xContentRegistry;\n+ private final CreateIndexClusterStateUpdateRequest request;\n+ private final Logger logger;\n+ private final AllocationService allocationService;\n+ private final Settings settings;\n+ private final IndexValidator validator;\n+\n+ IndexCreationTask(Logger logger, AllocationService allocationService, CreateIndexClusterStateUpdateRequest request,\n+ ActionListener<ClusterStateUpdateResponse> listener, IndicesService indicesService,\n+ AliasValidator aliasValidator, NamedXContentRegistry xContentRegistry,\n+ Settings settings, IndexValidator validator) {\n+ super(Priority.URGENT, request, listener);\n+ this.request = request;\n+ this.logger = logger;\n+ this.allocationService = allocationService;\n+ this.indicesService = indicesService;\n+ this.aliasValidator = aliasValidator;\n+ this.xContentRegistry = xContentRegistry;\n+ this.settings = settings;\n+ this.validator = validator;\n+ }\n+\n+ @Override\n+ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n+ return new ClusterStateUpdateResponse(acknowledged);\n+ }\n+\n+ @Override\n+ public ClusterState 
execute(ClusterState currentState) throws Exception {\n+ Index createdIndex = null;\n+ String removalExtraInfo = null;\n+ IndexRemovalReason removalReason = IndexRemovalReason.FAILURE;\n+ try {\n+ validator.validate(request, currentState);\n+ request.aliases().forEach(a -> aliasValidator.validateAlias(a, request.index(), currentState.metaData()));\n+\n+ final Index shrinkFromIndex = request.shrinkFrom();\n+ final Map<String, Custom> customs = new HashMap<>(request.customs());\n+ // add the request mapping\n+ final Map<String, Map<String, Object>> mappings = getRequestMappings();\n+ // we only find a template when its an API call (a new index)\n+ // find templates, highest order are better matching\n+ final List<IndexTemplateMetaData> templates = findTemplates(request, currentState);\n+ final List<String> templateNames = new ArrayList<>();\n+ final Map<String, AliasMetaData> templatesAliases = new HashMap<>();\n+\n+ // apply templates, merging the mappings into the request mapping if exists\n+ if (shrinkFromIndex == null) {\n+ fillFromTemplates(currentState, customs, mappings, templateNames, templatesAliases, templates);\n+ }\n+\n+ final Settings.Builder indexSettingsBuilder = getIndexSettingsBuilder(request, currentState, templates,\n+ shrinkFromIndex != null);\n+\n+ if (shrinkFromIndex != null) {\n+ prepareShrinkIndexSettings(currentState, mappings.keySet(), indexSettingsBuilder, shrinkFromIndex, request.index());\n+ }\n+\n+ final Settings actualIndexSettings = indexSettingsBuilder.build();\n+ final int routingNumShards = getRoutingShardNum(shrinkFromIndex, currentState, actualIndexSettings);\n+ final IndexMetaData.Builder tmpImdBuilder = getTmpIndexMetaDataBuilder(actualIndexSettings, routingNumShards);\n+\n+ if (shrinkFromIndex != null) {\n+ applyPrimaryTermFromSource(currentState, shrinkFromIndex, tmpImdBuilder);\n+ }\n+\n+ // Set up everything, now locally create the index to see that things are ok, and apply\n+ final IndexMetaData tmpImd = tmpImdBuilder.build();\n+\n+ validateWaitForActiveShardsValue(tmpImd);\n+\n+ // create the index here (on the master) to validate it can be created, as well as adding the mapping\n+ final IndexService indexService = indicesService.createIndex(tmpImd, Collections.emptyList());\n+ createdIndex = indexService.index();\n+\n+ // now add the mappings\n+ final MapperService mapperService = indexService.mapperService();\n+\n+ try {\n+ mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes());\n+ } catch (Exception e) {\n+ removalExtraInfo = \"failed on parsing default mapping/mappings on index creation\";\n+ throw e;\n+ }\n+\n+ if (request.shrinkFrom() == null) {\n+ // now that the mapping is merged we can validate the index sort.\n+ // we cannot validate for index shrinking since the mapping is empty\n+ // at this point. 
The validation will take place later in the process\n+ // (when all shards are copied in a single place).\n+ validateIndexSort(indexService);\n+ }\n+\n+ final Set<AliasMetaData> aliasesMetaData = getAliasMetaData(templatesAliases, indexService);\n+ // now, update the mappings with the actual source\n+ final Map<String, MappingMetaData> mappingsMetaData = getMappingsMetaData(mapperService);\n+ final IndexMetaData.Builder indexMetaDataBuilder = createMetaDataBuilder(request.index(), actualIndexSettings,\n+ routingNumShards, mappingsMetaData, request.state(), aliasesMetaData, customs, tmpImd);\n+\n+ final IndexMetaData indexMetaData;\n+ try {\n+ indexMetaData = indexMetaDataBuilder.build();\n+ } catch (Exception e) {\n+ removalExtraInfo = \"failed to build index metadata\";\n+ throw e;\n+ }\n+\n+ indexService.getIndexEventListener().beforeIndexAddedToCluster(indexMetaData.getIndex(), indexMetaData.getSettings());\n+\n+ final MetaData newMetaData = MetaData.builder(currentState.metaData())\n+ .put(indexMetaData, false)\n+ .build();\n+\n+ logger.info(\"[{}] creating index, cause [{}], templates {}, shards [{}]/[{}], mappings {}\",\n+ request.index(), request.cause(), templateNames, indexMetaData.getNumberOfShards(),\n+ indexMetaData.getNumberOfReplicas(), mappings.keySet());\n+\n+ final ClusterBlocks.Builder blocks = getClusterBlocksBuilder(currentState, indexMetaData);\n+\n+ ClusterState updatedState = ClusterState.builder(currentState).blocks(blocks).metaData(newMetaData).build();\n+\n+ if (request.state() == State.OPEN) {\n+ RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable())\n+ .addAsNew(updatedState.metaData().index(request.index()));\n+ updatedState = allocationService.reroute(\n+ ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(),\n+ \"index [\" + request.index() + \"] created\");\n+ }\n+ removalExtraInfo = \"cleaning up after validating index on master\";\n+ removalReason = IndexRemovalReason.NO_LONGER_ASSIGNED;\n+ return updatedState;\n+ } finally {\n+ if (createdIndex != null) {\n+ // Index was already partially created - need to clean up\n+ indicesService.removeIndex(createdIndex, removalReason, removalExtraInfo);\n+ }\n+ }\n+ }\n+\n+ private void validateWaitForActiveShardsValue(IndexMetaData tmpImd) {\n+ if (!getWaitForActiveShards(tmpImd).validate(tmpImd.getNumberOfReplicas())) {\n+ throw new IllegalArgumentException(\"invalid wait_for_active_shards[\" + request.waitForActiveShards() +\n+ \"]: cannot be greater than number of shard copies [\" +\n+ (tmpImd.getNumberOfReplicas() + 1) + \"]\");\n+ }\n+ }\n+\n+ private IndexMetaData.Builder getTmpIndexMetaDataBuilder(Settings actualIndexSettings, int routingNumShards) {\n+ return IndexMetaData.builder(request.index()).setRoutingNumShards(routingNumShards).settings(actualIndexSettings);\n+ }\n+\n+ private ClusterBlocks.Builder getClusterBlocksBuilder(ClusterState currentState, IndexMetaData indexMetaData) {\n+ final ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n+ request.blocks().forEach(block -> blocks.addIndexBlock(request.index(), block));\n+ blocks.updateBlocks(indexMetaData);\n+ return blocks;\n+ }\n+\n+ private Map<String, Map<String, Object>> getRequestMappings() throws Exception {\n+ final Map<String, Map<String, Object>> mappings = new HashMap<>();\n+ for (Map.Entry<String, String> entry : request.mappings().entrySet()) {\n+ mappings.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, 
entry.getValue()));\n+ }\n+ return mappings;\n+ }\n+\n+ private Map<String, MappingMetaData> getMappingsMetaData(MapperService mapperService) {\n+ return StreamSupport.stream(mapperService.docMappers(true).spliterator(), false)\n+ .collect(toMap(DocumentMapper::type, MappingMetaData::new));\n+ }\n+\n+ private ActiveShardCount getWaitForActiveShards(IndexMetaData tmpImd) {\n+ if (request.waitForActiveShards() == ActiveShardCount.DEFAULT) {\n+ return tmpImd.getWaitForActiveShards();\n+ }\n+ return request.waitForActiveShards();\n+ }\n+\n+ private Set<AliasMetaData> getAliasMetaData(Map<String, AliasMetaData> templatesAliases, IndexService indexService) {\n+ // the context is only used for validation so it's fine to pass fake values for the shard id and the current\n+ // timestamp\n+ final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L);\n+\n+ request\n+ .aliases()\n+ .stream()\n+ .filter(alias -> Strings.hasLength(alias.filter()))\n+ .forEach(alias -> aliasValidator.validateAliasFilter(alias.name(), alias.filter(),\n+ queryShardContext, xContentRegistry)\n+ );\n+\n+ templatesAliases\n+ .values()\n+ .stream()\n+ .filter(aliasMetaData -> aliasMetaData.filter() != null)\n+ .forEach(aliasMetaData -> aliasValidator.validateAliasFilter(aliasMetaData.alias(),\n+ aliasMetaData.filter().uncompressed(), queryShardContext, xContentRegistry)\n+ );\n+\n+ final Set<AliasMetaData> aliasesMetaData = aliasesToMetaData(request.aliases());\n+ aliasesMetaData.addAll(templatesAliases.values());\n+ return aliasesMetaData;\n+ }\n+\n+ private void applyPrimaryTermFromSource(ClusterState currentState, Index shrinkFromIndex, IndexMetaData.Builder tmpImdBuilder) {\n+ /*\n+ * We need to arrange that the primary term on all the shards in the shrunken index is at least as large as\n+ * the maximum primary term on all the shards in the source index. 
This ensures that we have correct\n+ * document-level semantics regarding sequence numbers in the shrunken index.\n+ */\n+ final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);\n+ final long primaryTerm = IntStream\n+ .range(0, sourceMetaData.getNumberOfShards())\n+ .mapToLong(sourceMetaData::primaryTerm)\n+ .max()\n+ .getAsLong();\n+ for (int shardId = 0; shardId < tmpImdBuilder.numberOfShards(); shardId++) {\n+ tmpImdBuilder.primaryTerm(shardId, primaryTerm);\n+ }\n+ }\n+\n+ private void validateIndexSort(IndexService indexService) {\n+ indexService.getIndexSortSupplier().get();\n+ }\n+\n+ private void fillFromTemplates(ClusterState currentState, Map<String, Custom> customs, Map<String, Map<String, Object>> mappings,\n+ List<String> templateNames, Map<String, AliasMetaData> templatesAliases,\n+ List<IndexTemplateMetaData> templates) throws Exception {\n+ for (IndexTemplateMetaData template : templates) {\n+ templateNames.add(template.getName());\n+ fillMappingsFromTemplate(mappings, template);\n+ fillCustomsFromTemplate(customs, template);\n+ fillAliasesFromTemplate(currentState, templatesAliases, template);\n+ }\n+ }\n+\n+ private int getRoutingShardNum(Index shrinkFromIndex, ClusterState currentState, Settings settings) {\n+ if (shrinkFromIndex != null) {\n+ return currentState.metaData().getIndexSafe(shrinkFromIndex).getRoutingNumShards();\n+ }\n+ return IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(settings);\n+ }\n+\n+ private void fillAliasesFromTemplate(ClusterState currentState, Map<String, AliasMetaData> templatesAliases,\n+ IndexTemplateMetaData template) {\n+ for (ObjectObjectCursor<String, AliasMetaData> cursor : template.aliases()) {\n+ AliasMetaData aliasMetaData = cursor.value;\n+ //if an alias with same name came with the create index request itself,\n+ // ignore this one taken from the index template\n+ if (request.aliases().contains(new Alias(aliasMetaData.alias()))) {\n+ continue;\n+ }\n+ //if an alias with same name was already processed, ignore this one\n+ if (templatesAliases.containsKey(cursor.key)) {\n+ continue;\n+ }\n+\n+ //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to\n+ if (aliasMetaData.alias().contains(\"{index}\")) {\n+ String templatedAlias = aliasMetaData.alias().replace(\"{index}\", request.index());\n+ aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias);\n+ }\n+\n+ aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData());\n+ templatesAliases.put(aliasMetaData.alias(), aliasMetaData);\n+ }\n+ }\n+\n+ private void fillCustomsFromTemplate(Map<String, Custom> customs, IndexTemplateMetaData template) {\n+ for (ObjectObjectCursor<String, Custom> cursor : template.customs()) {\n+ String type = cursor.key;\n+ Custom custom = cursor.value;\n+ Custom existing = customs.get(type);\n+ if (existing == null) {\n+ customs.put(type, custom);\n+ } else {\n+ Custom merged = existing.mergeWith(custom);\n+ customs.put(type, merged);\n+ }\n+ }\n+ }\n+\n+ private void fillMappingsFromTemplate(Map<String, Map<String, Object>> mappings, IndexTemplateMetaData template) throws Exception {\n+ for (ObjectObjectCursor<String, CompressedXContent> cursor : template.mappings()) {\n+ String mappingString = cursor.value.string();\n+ if (mappings.containsKey(cursor.key)) {\n+ XContentHelper.mergeDefaults(mappings.get(cursor.key),\n+ MapperService.parseMapping(xContentRegistry, mappingString));\n+ } else {\n+ 
mappings.put(cursor.key,\n+ MapperService.parseMapping(xContentRegistry, mappingString));\n+ }\n+ }\n+ }\n+\n+ private static Set<AliasMetaData> aliasesToMetaData(Set<Alias> aliases) {\n+ return aliases\n+ .stream()\n+ .map(alias -> AliasMetaData\n+ .builder(alias.name())\n+ .filter(alias.filter())\n+ .indexRouting(alias.indexRouting())\n+ .searchRouting(alias.searchRouting()).build())\n+ .collect(toSet());\n+ }\n+\n+ private IndexMetaData.Builder createMetaDataBuilder(String index, Settings actualIndexSettings, int routingNumShards,\n+ Map<String, MappingMetaData> mappingsMetaData, State state,\n+ Set<AliasMetaData> aliasesMetaData, Map<String, Custom> customs,\n+ IndexMetaData tmpImd) {\n+ final IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData\n+ .builder(index)\n+ .settings(actualIndexSettings)\n+ .setRoutingNumShards(routingNumShards);\n+\n+ for (int shardId = 0; shardId < tmpImd.getNumberOfShards(); shardId++) {\n+ indexMetaDataBuilder.primaryTerm(shardId, tmpImd.primaryTerm(shardId));\n+ }\n+\n+ mappingsMetaData.values().forEach(indexMetaDataBuilder::putMapping);\n+ aliasesMetaData.forEach(indexMetaDataBuilder::putAlias);\n+ customs.forEach(indexMetaDataBuilder::putCustom);\n+ indexMetaDataBuilder.state(state);\n+\n+ return indexMetaDataBuilder;\n+ }\n+\n+ private static <T> Stream<T> asStream(Iterator<T> sourceIterator) {\n+ final Iterable<T> iterable = () -> sourceIterator;\n+ return StreamSupport.stream(iterable.spliterator(), false);\n+ }\n+\n+ private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request,\n+ ClusterState state) throws IOException {\n+ return asStream(state.metaData().templates().values().iterator())\n+ .map(cursor -> cursor.value)\n+ .filter(metadata -> metadata.patterns().stream().anyMatch(template -> Regex.simpleMatch(template, request.index())))\n+ .sorted(comparingInt(IndexTemplateMetaData::order).reversed()) // timsort is default in JDK8\n+ .collect(toList());\n+ }\n+\n+ private Settings.Builder getIndexSettingsBuilder(CreateIndexClusterStateUpdateRequest request, ClusterState currentState,\n+ List<IndexTemplateMetaData> templates, boolean isShrinking) {\n+ Settings.Builder indexSettingsBuilder = Settings.builder();\n+\n+ if (!isShrinking) {\n+ // apply templates, here, in reverse order, since first ones are better matching\n+ for (int i = templates.size() - 1; i >= 0; i--) {\n+ indexSettingsBuilder.put(templates.get(i).settings());\n+ }\n+ }\n+ // now, put the request settings, so they override templates\n+ indexSettingsBuilder.put(request.settings());\n+ applyDefaultSettings(request, currentState, indexSettingsBuilder);\n+\n+ return indexSettingsBuilder;\n+ }\n+\n+ private void applyDefaultSettings(CreateIndexClusterStateUpdateRequest request, ClusterState currentState,\n+ Settings.Builder indexSettingsBuilder) {\n+ if (indexSettingsBuilder.get(SETTING_NUMBER_OF_SHARDS) == null) {\n+ indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, settings.getAsInt(SETTING_NUMBER_OF_SHARDS, 5));\n+ }\n+ if (indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) {\n+ indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1));\n+ }\n+ if (settings.get(SETTING_AUTO_EXPAND_REPLICAS) != null && indexSettingsBuilder.get(SETTING_AUTO_EXPAND_REPLICAS) == null) {\n+ indexSettingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, settings.get(SETTING_AUTO_EXPAND_REPLICAS));\n+ }\n+\n+ if (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) {\n+ final DiscoveryNodes nodes = 
currentState.nodes();\n+ final Version createdVersion = Version.min(Version.CURRENT, nodes.getSmallestNonClientNodeVersion());\n+ indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion);\n+ }\n+\n+ if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) {\n+ indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis());\n+ }\n+ indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName());\n+ indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID());\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Exception e) {\n+ if (e instanceof ResourceAlreadyExistsException) {\n+ logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n+ } else {\n+ logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n+ }\n+ super.onFailure(source, e);\n+ }\n+ }\n+\n @Inject\n public MetaDataCreateIndexService(Settings settings, ClusterService clusterService,\n IndicesService indicesService, AllocationService allocationService,\n@@ -223,319 +628,14 @@ private void onlyCreateIndex(final CreateIndexClusterStateUpdateRequest request,\n request.settings(updatedSettingsBuilder.build());\n \n clusterService.submitStateUpdateTask(\"create-index [\" + request.index() + \"], cause [\" + request.cause() + \"]\",\n- new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(Priority.URGENT, request,\n- wrapPreservingContext(listener, threadPool.getThreadContext())) {\n-\n- @Override\n- protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n- return new ClusterStateUpdateResponse(acknowledged);\n- }\n-\n- @Override\n- public ClusterState execute(ClusterState currentState) throws Exception {\n- Index createdIndex = null;\n- String removalExtraInfo = null;\n- IndexRemovalReason removalReason = IndexRemovalReason.FAILURE;\n- try {\n- validate(request, currentState);\n-\n- for (Alias alias : request.aliases()) {\n- aliasValidator.validateAlias(alias, request.index(), currentState.metaData());\n- }\n-\n- // we only find a template when its an API call (a new index)\n- // find templates, highest order are better matching\n- List<IndexTemplateMetaData> templates = findTemplates(request, currentState);\n-\n- Map<String, Custom> customs = new HashMap<>();\n-\n- // add the request mapping\n- Map<String, Map<String, Object>> mappings = new HashMap<>();\n-\n- Map<String, AliasMetaData> templatesAliases = new HashMap<>();\n-\n- List<String> templateNames = new ArrayList<>();\n-\n- for (Map.Entry<String, String> entry : request.mappings().entrySet()) {\n- mappings.put(entry.getKey(), MapperService.parseMapping(xContentRegistry, entry.getValue()));\n- }\n-\n- for (Map.Entry<String, Custom> entry : request.customs().entrySet()) {\n- customs.put(entry.getKey(), entry.getValue());\n- }\n-\n- // apply templates, merging the mappings into the request mapping if exists\n- for (IndexTemplateMetaData template : templates) {\n- templateNames.add(template.getName());\n- for (ObjectObjectCursor<String, CompressedXContent> cursor : template.mappings()) {\n- String mappingString = cursor.value.string();\n- if (mappings.containsKey(cursor.key)) {\n- XContentHelper.mergeDefaults(mappings.get(cursor.key),\n- MapperService.parseMapping(xContentRegistry, mappingString));\n- } else {\n- mappings.put(cursor.key,\n- MapperService.parseMapping(xContentRegistry, mappingString));\n- }\n- }\n- // handle custom\n- for 
(ObjectObjectCursor<String, Custom> cursor : template.customs()) {\n- String type = cursor.key;\n- IndexMetaData.Custom custom = cursor.value;\n- IndexMetaData.Custom existing = customs.get(type);\n- if (existing == null) {\n- customs.put(type, custom);\n- } else {\n- IndexMetaData.Custom merged = existing.mergeWith(custom);\n- customs.put(type, merged);\n- }\n- }\n- //handle aliases\n- for (ObjectObjectCursor<String, AliasMetaData> cursor : template.aliases()) {\n- AliasMetaData aliasMetaData = cursor.value;\n- //if an alias with same name came with the create index request itself,\n- // ignore this one taken from the index template\n- if (request.aliases().contains(new Alias(aliasMetaData.alias()))) {\n- continue;\n- }\n- //if an alias with same name was already processed, ignore this one\n- if (templatesAliases.containsKey(cursor.key)) {\n- continue;\n- }\n-\n- //Allow templatesAliases to be templated by replacing a token with the name of the index that we are applying it to\n- if (aliasMetaData.alias().contains(\"{index}\")) {\n- String templatedAlias = aliasMetaData.alias().replace(\"{index}\", request.index());\n- aliasMetaData = AliasMetaData.newAliasMetaData(aliasMetaData, templatedAlias);\n- }\n-\n- aliasValidator.validateAliasMetaData(aliasMetaData, request.index(), currentState.metaData());\n- templatesAliases.put(aliasMetaData.alias(), aliasMetaData);\n- }\n- }\n- Settings.Builder indexSettingsBuilder = Settings.builder();\n- // apply templates, here, in reverse order, since first ones are better matching\n- for (int i = templates.size() - 1; i >= 0; i--) {\n- indexSettingsBuilder.put(templates.get(i).settings());\n- }\n- // now, put the request settings, so they override templates\n- indexSettingsBuilder.put(request.settings());\n- if (indexSettingsBuilder.get(SETTING_NUMBER_OF_SHARDS) == null) {\n- indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, settings.getAsInt(SETTING_NUMBER_OF_SHARDS, 5));\n- }\n- if (indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) {\n- indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1));\n- }\n- if (settings.get(SETTING_AUTO_EXPAND_REPLICAS) != null && indexSettingsBuilder.get(SETTING_AUTO_EXPAND_REPLICAS) == null) {\n- indexSettingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, settings.get(SETTING_AUTO_EXPAND_REPLICAS));\n- }\n-\n- if (indexSettingsBuilder.get(SETTING_VERSION_CREATED) == null) {\n- DiscoveryNodes nodes = currentState.nodes();\n- final Version createdVersion = Version.min(Version.CURRENT, nodes.getSmallestNonClientNodeVersion());\n- indexSettingsBuilder.put(SETTING_VERSION_CREATED, createdVersion);\n- }\n-\n- if (indexSettingsBuilder.get(SETTING_CREATION_DATE) == null) {\n- indexSettingsBuilder.put(SETTING_CREATION_DATE, new DateTime(DateTimeZone.UTC).getMillis());\n- }\n- indexSettingsBuilder.put(IndexMetaData.SETTING_INDEX_PROVIDED_NAME, request.getProvidedName());\n- indexSettingsBuilder.put(SETTING_INDEX_UUID, UUIDs.randomBase64UUID());\n- final Index shrinkFromIndex = request.shrinkFrom();\n- final IndexMetaData.Builder tmpImdBuilder = IndexMetaData.builder(request.index());\n-\n- final int routingNumShards;\n- if (shrinkFromIndex == null) {\n- routingNumShards = IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.get(indexSettingsBuilder.build());\n- } else {\n- final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);\n- routingNumShards = sourceMetaData.getRoutingNumShards();\n- }\n- 
tmpImdBuilder.setRoutingNumShards(routingNumShards);\n-\n- if (shrinkFromIndex != null) {\n- prepareShrinkIndexSettings(\n- currentState, mappings.keySet(), indexSettingsBuilder, shrinkFromIndex, request.index());\n- }\n- final Settings actualIndexSettings = indexSettingsBuilder.build();\n- tmpImdBuilder.settings(actualIndexSettings);\n-\n- if (shrinkFromIndex != null) {\n- /*\n- * We need to arrange that the primary term on all the shards in the shrunken index is at least as large as\n- * the maximum primary term on all the shards in the source index. This ensures that we have correct\n- * document-level semantics regarding sequence numbers in the shrunken index.\n- */\n- final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);\n- final long primaryTerm =\n- IntStream\n- .range(0, sourceMetaData.getNumberOfShards())\n- .mapToLong(sourceMetaData::primaryTerm)\n- .max()\n- .getAsLong();\n- for (int shardId = 0; shardId < tmpImdBuilder.numberOfShards(); shardId++) {\n- tmpImdBuilder.primaryTerm(shardId, primaryTerm);\n- }\n- }\n-\n- // Set up everything, now locally create the index to see that things are ok, and apply\n- final IndexMetaData tmpImd = tmpImdBuilder.build();\n- ActiveShardCount waitForActiveShards = request.waitForActiveShards();\n- if (waitForActiveShards == ActiveShardCount.DEFAULT) {\n- waitForActiveShards = tmpImd.getWaitForActiveShards();\n- }\n- if (waitForActiveShards.validate(tmpImd.getNumberOfReplicas()) == false) {\n- throw new IllegalArgumentException(\"invalid wait_for_active_shards[\" + request.waitForActiveShards() +\n- \"]: cannot be greater than number of shard copies [\" +\n- (tmpImd.getNumberOfReplicas() + 1) + \"]\");\n- }\n- // create the index here (on the master) to validate it can be created, as well as adding the mapping\n- final IndexService indexService = indicesService.createIndex(tmpImd, Collections.emptyList());\n- createdIndex = indexService.index();\n- // now add the mappings\n- MapperService mapperService = indexService.mapperService();\n- try {\n- mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes());\n- } catch (Exception e) {\n- removalExtraInfo = \"failed on parsing default mapping/mappings on index creation\";\n- throw e;\n- }\n-\n- if (request.shrinkFrom() == null) {\n- // now that the mapping is merged we can validate the index sort.\n- // we cannot validate for index shrinking since the mapping is empty\n- // at this point. 
The validation will take place later in the process\n- // (when all shards are copied in a single place).\n- indexService.getIndexSortSupplier().get();\n- }\n-\n- // the context is only used for validation so it's fine to pass fake values for the shard id and the current\n- // timestamp\n- final QueryShardContext queryShardContext = indexService.newQueryShardContext(0, null, () -> 0L);\n-\n- for (Alias alias : request.aliases()) {\n- if (Strings.hasLength(alias.filter())) {\n- aliasValidator.validateAliasFilter(alias.name(), alias.filter(), queryShardContext, xContentRegistry);\n- }\n- }\n- for (AliasMetaData aliasMetaData : templatesAliases.values()) {\n- if (aliasMetaData.filter() != null) {\n- aliasValidator.validateAliasFilter(aliasMetaData.alias(), aliasMetaData.filter().uncompressed(),\n- queryShardContext, xContentRegistry);\n- }\n- }\n-\n- // now, update the mappings with the actual source\n- Map<String, MappingMetaData> mappingsMetaData = new HashMap<>();\n- for (DocumentMapper mapper : mapperService.docMappers(true)) {\n- MappingMetaData mappingMd = new MappingMetaData(mapper);\n- mappingsMetaData.put(mapper.type(), mappingMd);\n- }\n-\n- final IndexMetaData.Builder indexMetaDataBuilder = IndexMetaData.builder(request.index())\n- .settings(actualIndexSettings)\n- .setRoutingNumShards(routingNumShards);\n-\n- for (int shardId = 0; shardId < tmpImd.getNumberOfShards(); shardId++) {\n- indexMetaDataBuilder.primaryTerm(shardId, tmpImd.primaryTerm(shardId));\n- }\n-\n- for (MappingMetaData mappingMd : mappingsMetaData.values()) {\n- indexMetaDataBuilder.putMapping(mappingMd);\n- }\n-\n- for (AliasMetaData aliasMetaData : templatesAliases.values()) {\n- indexMetaDataBuilder.putAlias(aliasMetaData);\n- }\n- for (Alias alias : request.aliases()) {\n- AliasMetaData aliasMetaData = AliasMetaData.builder(alias.name()).filter(alias.filter())\n- .indexRouting(alias.indexRouting()).searchRouting(alias.searchRouting()).build();\n- indexMetaDataBuilder.putAlias(aliasMetaData);\n- }\n-\n- for (Map.Entry<String, Custom> customEntry : customs.entrySet()) {\n- indexMetaDataBuilder.putCustom(customEntry.getKey(), customEntry.getValue());\n- }\n-\n- indexMetaDataBuilder.state(request.state());\n-\n- final IndexMetaData indexMetaData;\n- try {\n- indexMetaData = indexMetaDataBuilder.build();\n- } catch (Exception e) {\n- removalExtraInfo = \"failed to build index metadata\";\n- throw e;\n- }\n-\n- indexService.getIndexEventListener().beforeIndexAddedToCluster(indexMetaData.getIndex(),\n- indexMetaData.getSettings());\n-\n- MetaData newMetaData = MetaData.builder(currentState.metaData())\n- .put(indexMetaData, false)\n- .build();\n-\n- logger.info(\"[{}] creating index, cause [{}], templates {}, shards [{}]/[{}], mappings {}\",\n- request.index(), request.cause(), templateNames, indexMetaData.getNumberOfShards(),\n- indexMetaData.getNumberOfReplicas(), mappings.keySet());\n-\n- ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n- if (!request.blocks().isEmpty()) {\n- for (ClusterBlock block : request.blocks()) {\n- blocks.addIndexBlock(request.index(), block);\n- }\n- }\n- blocks.updateBlocks(indexMetaData);\n-\n- ClusterState updatedState = ClusterState.builder(currentState).blocks(blocks).metaData(newMetaData).build();\n-\n- if (request.state() == State.OPEN) {\n- RoutingTable.Builder routingTableBuilder = RoutingTable.builder(updatedState.routingTable())\n- .addAsNew(updatedState.metaData().index(request.index()));\n- updatedState = allocationService.reroute(\n- 
ClusterState.builder(updatedState).routingTable(routingTableBuilder.build()).build(),\n- \"index [\" + request.index() + \"] created\");\n- }\n- removalExtraInfo = \"cleaning up after validating index on master\";\n- removalReason = IndexRemovalReason.NO_LONGER_ASSIGNED;\n- return updatedState;\n- } finally {\n- if (createdIndex != null) {\n- // Index was already partially created - need to clean up\n- indicesService.removeIndex(createdIndex, removalReason, removalExtraInfo);\n- }\n- }\n+ new IndexCreationTask(logger, allocationService, request, listener,\n+ indicesService, aliasValidator, xContentRegistry, settings,\n+ (req, state) -> {\n+ validateIndexName(req.index(), state);\n+ validateIndexSettings(req.index(), req.settings());\n }\n-\n- @Override\n- public void onFailure(String source, Exception e) {\n- if (e instanceof ResourceAlreadyExistsException) {\n- logger.trace((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n- } else {\n- logger.debug((Supplier<?>) () -> new ParameterizedMessage(\"[{}] failed to create\", request.index()), e);\n- }\n- super.onFailure(source, e);\n- }\n- });\n- }\n-\n- private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state) throws IOException {\n- List<IndexTemplateMetaData> templateMetadata = new ArrayList<>();\n- for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {\n- IndexTemplateMetaData metadata = cursor.value;\n- for (String template: metadata.patterns()) {\n- if (Regex.simpleMatch(template, request.index())) {\n- templateMetadata.add(metadata);\n- break;\n- }\n- }\n- }\n-\n- CollectionUtil.timSort(templateMetadata, Comparator.comparingInt(IndexTemplateMetaData::order).reversed());\n- return templateMetadata;\n- }\n-\n- private void validate(CreateIndexClusterStateUpdateRequest request, ClusterState state) {\n- validateIndexName(request.index(), state);\n- validateIndexSettings(request.index(), request.settings());\n+ )\n+ );\n }\n \n public void validateIndexSettings(String indexName, Settings settings) throws IndexCreationException {\n@@ -555,7 +655,8 @@ List<String> getIndexSettingsValidationErrors(Settings settings) {\n } else if (Strings.isEmpty(customPath) == false) {\n Path resolvedPath = PathUtils.get(new Path[]{env.sharedDataFile()}, customPath);\n if (resolvedPath == null) {\n- validationErrors.add(\"custom path [\" + customPath + \"] is not a sub-path of path.shared_data [\" + env.sharedDataFile() + \"]\");\n+ validationErrors.add(\n+ \"custom path [\" + customPath + \"] is not a sub-path of path.shared_data [\" + env.sharedDataFile() + \"]\");\n }\n }\n return validationErrors;\n@@ -619,7 +720,8 @@ static List<String> validateShrinkIndex(ClusterState state, String sourceIndex,\n return nodesToAllocateOn;\n }\n \n- static void prepareShrinkIndexSettings(ClusterState currentState, Set<String> mappingKeys, Settings.Builder indexSettingsBuilder, Index shrinkFromIndex, String shrinkIntoName) {\n+ static void prepareShrinkIndexSettings(ClusterState currentState, Set<String> mappingKeys, Settings.Builder indexSettingsBuilder,\n+ Index shrinkFromIndex, String shrinkIntoName) {\n final IndexMetaData sourceMetaData = currentState.metaData().index(shrinkFromIndex.getName());\n \n final List<String> nodesToAllocateOn = validateShrinkIndex(currentState, shrinkFromIndex.getName(),\n@@ -642,5 +744,4 @@ static void prepareShrinkIndexSettings(ClusterState currentState, Set<String> ma\n 
.put(IndexMetaData.INDEX_SHRINK_SOURCE_NAME.getKey(), shrinkFromIndex.getName())\n .put(IndexMetaData.INDEX_SHRINK_SOURCE_UUID.getKey(), shrinkFromIndex.getUUID());\n }\n-\n }",
"filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,450 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.metadata;\n+\n+import org.apache.logging.log4j.Logger;\n+import org.apache.lucene.search.Sort;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;\n+import org.elasticsearch.action.support.ActiveShardCount;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.block.ClusterBlocks;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.TestShardRouting;\n+import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.compress.CompressedXContent;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.ParentFieldMapper;\n+import org.elasticsearch.index.mapper.RoutingFieldMapper;\n+import org.elasticsearch.index.shard.IndexEventListener;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ESTestCase;\n+import org.mockito.ArgumentCaptor;\n+\n+import java.io.IOException;\n+import java.util.Map;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.Collections;\n+import java.util.Arrays;\n+import java.util.function.Supplier;\n+\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.mockito.Matchers.anyBoolean;\n+import static org.mockito.Matchers.anyObject;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.when;\n+import static org.mockito.Mockito.doThrow;\n+import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.anyMap;\n+import static org.mockito.Mockito.times;\n+import static org.mockito.Mockito.eq;\n+\n+public class IndexCreationTaskTests extends ESTestCase {\n+\n+ private final IndicesService indicesService = 
mock(IndicesService.class);\n+ private final AliasValidator aliasValidator = mock(AliasValidator.class);\n+ private final NamedXContentRegistry xContentRegistry = mock(NamedXContentRegistry.class);\n+ private final CreateIndexClusterStateUpdateRequest request = mock(CreateIndexClusterStateUpdateRequest.class);\n+ private final Logger logger = mock(Logger.class);\n+ private final AllocationService allocationService = mock(AllocationService.class);\n+ private final MetaDataCreateIndexService.IndexValidator validator = mock(MetaDataCreateIndexService.IndexValidator.class);\n+ private final ActionListener listener = mock(ActionListener.class);\n+ private final ClusterState state = mock(ClusterState.class);\n+ private final Settings.Builder clusterStateSettings = Settings.builder();\n+ private final MapperService mapper = mock(MapperService.class);\n+\n+ private final ImmutableOpenMap.Builder<String, IndexTemplateMetaData> tplBuilder = ImmutableOpenMap.builder();\n+ private final ImmutableOpenMap.Builder<String, MetaData.Custom> customBuilder = ImmutableOpenMap.builder();\n+ private final ImmutableOpenMap.Builder<String, IndexMetaData> idxBuilder = ImmutableOpenMap.builder();\n+\n+ private final Settings.Builder reqSettings = Settings.builder();\n+ private final Set<ClusterBlock> reqBlocks = Sets.newHashSet();\n+ private final MetaData.Builder currentStateMetaDataBuilder = MetaData.builder();\n+ private final ClusterBlocks currentStateBlocks = mock(ClusterBlocks.class);\n+ private final RoutingTable.Builder routingTableBuilder = RoutingTable.builder();\n+ private final DocumentMapper docMapper = mock(DocumentMapper.class);\n+\n+ private ActiveShardCount waitForActiveShardsNum = ActiveShardCount.DEFAULT;\n+\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ setupIndicesService();\n+ setupClusterState();\n+ }\n+\n+ public void testMatchTemplates() throws Exception {\n+ tplBuilder.put(\"template_1\", createTemplateMetadata(\"template_1\", \"te*\"));\n+ tplBuilder.put(\"template_2\", createTemplateMetadata(\"template_2\", \"tes*\"));\n+ tplBuilder.put(\"template_3\", createTemplateMetadata(\"template_3\", \"zzz*\"));\n+\n+ final ClusterState result = executeTask();\n+\n+ assertTrue(result.metaData().index(\"test\").getAliases().containsKey(\"alias_from_template_1\"));\n+ assertTrue(result.metaData().index(\"test\").getAliases().containsKey(\"alias_from_template_2\"));\n+ assertFalse(result.metaData().index(\"test\").getAliases().containsKey(\"alias_from_template_3\"));\n+ }\n+\n+ public void testApplyDataFromTemplate() throws Exception {\n+ addMatchingTemplate(builder -> builder\n+ .putAlias(AliasMetaData.builder(\"alias1\"))\n+ .putMapping(\"mapping1\", createMapping())\n+ .putCustom(\"custom1\", createCustom())\n+ .settings(Settings.builder().put(\"key1\", \"value1\"))\n+ );\n+\n+ final ClusterState result = executeTask();\n+\n+ assertTrue(result.metaData().index(\"test\").getAliases().containsKey(\"alias1\"));\n+ assertTrue(result.metaData().index(\"test\").getCustoms().containsKey(\"custom1\"));\n+ assertEquals(\"value1\", result.metaData().index(\"test\").getSettings().get(\"key1\"));\n+ assertTrue(getMappingsFromResponse().containsKey(\"mapping1\"));\n+ }\n+\n+ public void testApplyDataFromRequest() throws Exception {\n+ setupRequestAlias(new Alias(\"alias1\"));\n+ setupRequestMapping(\"mapping1\", createMapping());\n+ setupRequestCustom(\"custom1\", createCustom());\n+ reqSettings.put(\"key1\", \"value1\");\n+\n+ final ClusterState result = executeTask();\n+\n+ 
assertTrue(result.metaData().index(\"test\").getAliases().containsKey(\"alias1\"));\n+ assertTrue(result.metaData().index(\"test\").getCustoms().containsKey(\"custom1\"));\n+ assertEquals(\"value1\", result.metaData().index(\"test\").getSettings().get(\"key1\"));\n+ assertTrue(getMappingsFromResponse().containsKey(\"mapping1\"));\n+ }\n+\n+ public void testRequestDataHavePriorityOverTemplateData() throws Exception {\n+ final IndexMetaData.Custom tplCustom = createCustom();\n+ final IndexMetaData.Custom reqCustom = createCustom();\n+ final IndexMetaData.Custom mergedCustom = createCustom();\n+ when(reqCustom.mergeWith(tplCustom)).thenReturn(mergedCustom);\n+\n+ final CompressedXContent tplMapping = createMapping(\"text\");\n+ final CompressedXContent reqMapping = createMapping(\"keyword\");\n+\n+ addMatchingTemplate(builder -> builder\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"fromTpl\").build())\n+ .putMapping(\"mapping1\", tplMapping)\n+ .putCustom(\"custom1\", tplCustom)\n+ .settings(Settings.builder().put(\"key1\", \"tplValue\"))\n+ );\n+\n+ setupRequestAlias(new Alias(\"alias1\").searchRouting(\"fromReq\"));\n+ setupRequestMapping(\"mapping1\", reqMapping);\n+ setupRequestCustom(\"custom1\", reqCustom);\n+ reqSettings.put(\"key1\", \"reqValue\");\n+\n+ final ClusterState result = executeTask();\n+\n+ assertEquals(mergedCustom, result.metaData().index(\"test\").getCustoms().get(\"custom1\"));\n+ assertEquals(\"fromReq\", result.metaData().index(\"test\").getAliases().get(\"alias1\").getSearchRouting());\n+ assertEquals(\"reqValue\", result.metaData().index(\"test\").getSettings().get(\"key1\"));\n+ assertEquals(\"{type={properties={field={type=keyword}}}}\", getMappingsFromResponse().get(\"mapping1\").toString());\n+ }\n+\n+ public void testDefaultSettings() throws Exception {\n+ final ClusterState result = executeTask();\n+\n+ assertEquals(\"5\", result.getMetaData().index(\"test\").getSettings().get(SETTING_NUMBER_OF_SHARDS));\n+ }\n+\n+ public void testSettingsFromClusterState() throws Exception {\n+ clusterStateSettings.put(SETTING_NUMBER_OF_SHARDS, 15);\n+\n+ final ClusterState result = executeTask();\n+\n+ assertEquals(\"15\", result.getMetaData().index(\"test\").getSettings().get(SETTING_NUMBER_OF_SHARDS));\n+ }\n+\n+ public void testTemplateOrder() throws Exception {\n+ addMatchingTemplate(builder -> builder\n+ .order(1)\n+ .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 10))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"1\").build())\n+ );\n+ addMatchingTemplate(builder -> builder\n+ .order(2)\n+ .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 11))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"2\").build())\n+ );\n+ addMatchingTemplate(builder -> builder\n+ .order(3)\n+ .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 12))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"3\").build())\n+ );\n+ final ClusterState result = executeTask();\n+\n+ assertEquals(\"12\", result.getMetaData().index(\"test\").getSettings().get(SETTING_NUMBER_OF_SHARDS));\n+ assertEquals(\"3\", result.metaData().index(\"test\").getAliases().get(\"alias1\").getSearchRouting());\n+ }\n+\n+ public void testTemplateOrder2() throws Exception {\n+ addMatchingTemplate(builder -> builder\n+ .order(3)\n+ .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 12))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"3\").build())\n+ );\n+ addMatchingTemplate(builder -> builder\n+ .order(2)\n+ 
.settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 11))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"2\").build())\n+ );\n+ addMatchingTemplate(builder -> builder\n+ .order(1)\n+ .settings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 10))\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"1\").build())\n+ );\n+ final ClusterState result = executeTask();\n+\n+ assertEquals(\"12\", result.getMetaData().index(\"test\").getSettings().get(SETTING_NUMBER_OF_SHARDS));\n+ assertEquals(\"3\", result.metaData().index(\"test\").getAliases().get(\"alias1\").getSearchRouting());\n+ }\n+\n+ public void testRequestStateOpen() throws Exception {\n+\n+ when(request.state()).thenReturn(IndexMetaData.State.OPEN);\n+\n+ executeTask();\n+\n+ verify(allocationService, times(1)).reroute(anyObject(), anyObject());\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ public void testIndexRemovalOnFailure() throws Exception {\n+ doThrow(new RuntimeException(\"oops\")).when(mapper).merge(anyMap(), anyObject(), anyBoolean());\n+\n+ try {\n+ executeTask();\n+ fail(\"exception not thrown\");\n+ } catch (RuntimeException e) {\n+ verify(indicesService, times(1)).removeIndex(anyObject(), anyObject(), anyObject());\n+ }\n+ }\n+\n+ public void testShrinkIndexIgnoresTemplates() throws Exception {\n+ final Index source = new Index(\"source_idx\", \"aaa111bbb222\");\n+\n+ when(request.shrinkFrom()).thenReturn(source);\n+\n+ currentStateMetaDataBuilder.put(createIndexMetaDataBuilder(\"source_idx\", \"aaa111bbb222\", 2, 2));\n+\n+ routingTableBuilder.add(createIndexRoutingTableWithStartedShards(source));\n+\n+ when(currentStateBlocks.indexBlocked(eq(ClusterBlockLevel.WRITE), eq(\"source_idx\"))).thenReturn(true);\n+ reqSettings.put(SETTING_NUMBER_OF_SHARDS, 1);\n+\n+ addMatchingTemplate(builder -> builder\n+ .putAlias(AliasMetaData.builder(\"alias1\").searchRouting(\"fromTpl\").build())\n+ .putMapping(\"mapping1\", createMapping())\n+ .putCustom(\"custom1\", createCustom())\n+ .settings(Settings.builder().put(\"key1\", \"tplValue\"))\n+ );\n+\n+ final ClusterState result = executeTask();\n+\n+ assertFalse(result.metaData().index(\"test\").getAliases().containsKey(\"alias1\"));\n+ assertFalse(result.metaData().index(\"test\").getCustoms().containsKey(\"custom1\"));\n+ assertNull(result.metaData().index(\"test\").getSettings().get(\"key1\"));\n+ assertFalse(getMappingsFromResponse().containsKey(\"mapping1\"));\n+ }\n+\n+ public void testValidateWaitForActiveShardsFailure() throws Exception {\n+ waitForActiveShardsNum = ActiveShardCount.from(1000);\n+\n+ try {\n+ executeTask();\n+ fail(\"validation exception expected\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid wait_for_active_shards\"));\n+ }\n+ }\n+\n+ private IndexRoutingTable createIndexRoutingTableWithStartedShards(Index index) {\n+ final IndexRoutingTable idxRoutingTable = mock(IndexRoutingTable.class);\n+\n+ when(idxRoutingTable.getIndex()).thenReturn(index);\n+ when(idxRoutingTable.shardsWithState(eq(ShardRoutingState.STARTED))).thenReturn(Arrays.asList(\n+ TestShardRouting.newShardRouting(index.getName(), 0, \"1\", randomBoolean(), ShardRoutingState.INITIALIZING).moveToStarted(),\n+ TestShardRouting.newShardRouting(index.getName(), 0, \"1\", randomBoolean(), ShardRoutingState.INITIALIZING).moveToStarted()\n+\n+ ));\n+\n+ return idxRoutingTable;\n+ }\n+\n+ private IndexMetaData.Builder createIndexMetaDataBuilder(String name, String uuid, int numShards, int numReplicas) {\n+ return 
IndexMetaData\n+ .builder(name)\n+ .settings(Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, uuid))\n+ .putMapping(new MappingMetaData(docMapper))\n+ .numberOfShards(numShards)\n+ .numberOfReplicas(numReplicas);\n+ }\n+\n+ private IndexMetaData.Custom createCustom() {\n+ return mock(IndexMetaData.Custom.class);\n+ }\n+\n+ private interface MetaDataBuilderConfigurator {\n+ void configure(IndexTemplateMetaData.Builder builder) throws IOException;\n+ }\n+\n+ private void addMatchingTemplate(MetaDataBuilderConfigurator configurator) throws IOException {\n+ final IndexTemplateMetaData.Builder builder = metaDataBuilder(\"template1\", \"te*\");\n+ configurator.configure(builder);\n+\n+ tplBuilder.put(\"template\" + builder.hashCode(), builder.build());\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private Map<String, Map<String, Object>> getMappingsFromResponse() {\n+ final ArgumentCaptor<Map> argument = ArgumentCaptor.forClass(Map.class);\n+ verify(mapper).merge(argument.capture(), anyObject(), anyBoolean());\n+ return argument.getValue();\n+ }\n+\n+ private void setupRequestAlias(Alias alias) {\n+ when(request.aliases()).thenReturn(new HashSet<>(Collections.singletonList(alias)));\n+ }\n+\n+ private void setupRequestMapping(String mappingKey, CompressedXContent mapping) throws IOException {\n+ when(request.mappings()).thenReturn(Collections.singletonMap(mappingKey, mapping.string()));\n+ }\n+\n+ private void setupRequestCustom(String customKey, IndexMetaData.Custom custom) throws IOException {\n+ when(request.customs()).thenReturn(Collections.singletonMap(customKey, custom));\n+ }\n+\n+ private CompressedXContent createMapping() throws IOException {\n+ return createMapping(\"text\");\n+ }\n+\n+ private CompressedXContent createMapping(String fieldType) throws IOException {\n+ final String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"field\")\n+ .field(\"type\", fieldType)\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().string();\n+\n+ return new CompressedXContent(mapping);\n+ }\n+\n+ private IndexTemplateMetaData.Builder metaDataBuilder(String name, String pattern) {\n+ return IndexTemplateMetaData\n+ .builder(name)\n+ .patterns(Collections.singletonList(pattern));\n+ }\n+\n+ private IndexTemplateMetaData createTemplateMetadata(String name, String pattern) {\n+ return IndexTemplateMetaData\n+ .builder(name)\n+ .patterns(Collections.singletonList(pattern))\n+ .putAlias(AliasMetaData.builder(\"alias_from_\" + name).build())\n+ .build();\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private ClusterState executeTask() throws Exception {\n+ setupState();\n+ setupRequest();\n+ final MetaDataCreateIndexService.IndexCreationTask task = new MetaDataCreateIndexService.IndexCreationTask(\n+ logger, allocationService, request, listener, indicesService, aliasValidator, xContentRegistry, clusterStateSettings.build(),\n+ validator\n+ );\n+ return task.execute(state);\n+ }\n+\n+ private void setupState() {\n+ final ImmutableOpenMap.Builder<String, ClusterState.Custom> stateCustomsBuilder = ImmutableOpenMap.builder();\n+\n+ currentStateMetaDataBuilder\n+ .customs(customBuilder.build())\n+ .templates(tplBuilder.build())\n+ .indices(idxBuilder.build());\n+\n+ when(state.metaData()).thenReturn(currentStateMetaDataBuilder.build());\n+\n+ final ImmutableOpenMap.Builder<String, Set<ClusterBlock>> blockIdxBuilder = 
ImmutableOpenMap.builder();\n+\n+ when(currentStateBlocks.indices()).thenReturn(blockIdxBuilder.build());\n+\n+ when(state.blocks()).thenReturn(currentStateBlocks);\n+ when(state.customs()).thenReturn(stateCustomsBuilder.build());\n+ when(state.routingTable()).thenReturn(routingTableBuilder.build());\n+ }\n+\n+ private void setupRequest() {\n+ when(request.settings()).thenReturn(reqSettings.build());\n+ when(request.index()).thenReturn(\"test\");\n+ when(request.waitForActiveShards()).thenReturn(waitForActiveShardsNum);\n+ when(request.blocks()).thenReturn(reqBlocks);\n+ }\n+\n+ private void setupClusterState() {\n+ final DiscoveryNodes nodes = mock(DiscoveryNodes.class);\n+ when(nodes.getSmallestNonClientNodeVersion()).thenReturn(Version.CURRENT);\n+\n+ when(state.nodes()).thenReturn(nodes);\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void setupIndicesService() throws Exception {\n+ final RoutingFieldMapper routingMapper = mock(RoutingFieldMapper.class);\n+ when(routingMapper.required()).thenReturn(false);\n+\n+ when(docMapper.routingFieldMapper()).thenReturn(routingMapper);\n+ when(docMapper.parentFieldMapper()).thenReturn(mock(ParentFieldMapper.class));\n+\n+ when(mapper.docMappers(anyBoolean())).thenReturn(Collections.singletonList(docMapper));\n+\n+ final Index index = new Index(\"target\", \"tgt1234\");\n+ final Supplier<Sort> supplier = mock(Supplier.class);\n+ final IndexService service = mock(IndexService.class);\n+ when(service.index()).thenReturn(index);\n+ when(service.mapperService()).thenReturn(mapper);\n+ when(service.getIndexSortSupplier()).thenReturn(supplier);\n+ when(service.getIndexEventListener()).thenReturn(mock(IndexEventListener.class));\n+\n+ when(indicesService.createIndex(anyObject(), anyObject())).thenReturn(service);\n+ }\n+}",
"filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexCreationTaskTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,67 @@\n+---\n+\"Shrink index ignores target template mapping\":\n+ - skip:\n+ version: \" - 5.99.99\"\n+ reason: bug fixed in 6.0\n+\n+ # create index\n+ - do:\n+ indices.create:\n+ index: source\n+ wait_for_active_shards: 1\n+ body:\n+ mappings:\n+ test:\n+ properties:\n+ count:\n+ type: text\n+\n+ # index document\n+ - do:\n+ index:\n+ index: source\n+ type: test\n+ id: \"1\"\n+ body: { \"count\": \"1\" }\n+\n+ # create template matching shrink tagret\n+ - do:\n+ indices.put_template:\n+ name: tpl1\n+ body:\n+ index_patterns: targ*\n+ mappings:\n+ test:\n+ properties:\n+ count:\n+ type: integer\n+\n+ # make it read-only\n+ - do:\n+ indices.put_settings:\n+ index: source\n+ body:\n+ index.blocks.write: true\n+ index.number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+ index: source\n+\n+ # now we do the actual shrink\n+ - do:\n+ indices.shrink:\n+ index: \"source\"\n+ target: \"target\"\n+ wait_for_active_shards: 1\n+ master_timeout: 10s\n+ body:\n+ settings:\n+ index.number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+",
"filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.shrink/20_source_mapping.yml",
"status": "added"
}
]
} |
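Aside: the template-ordering rule exercised by `testTemplateOrder` and `testTemplateOrder2` above (matching templates are sorted by descending `order`, then their settings are applied in reverse so the highest-order template wins) can be illustrated with a minimal, self-contained sketch. The `Tpl` class and the shard values below are hypothetical stand-ins, not Elasticsearch types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the ordering rule exercised by testTemplateOrder/testTemplateOrder2.
// Tpl is a hypothetical stand-in for a matching index template, not an Elasticsearch type.
public class TemplateOrderSketch {

    static final class Tpl {
        final int order;
        final String shards;
        Tpl(int order, String shards) { this.order = order; this.shards = shards; }
    }

    public static void main(String[] args) {
        List<Tpl> matching = new ArrayList<>(Arrays.asList(
            new Tpl(1, "10"), new Tpl(3, "12"), new Tpl(2, "11")));

        // highest order first, mirroring findTemplates()
        matching.sort(Comparator.comparingInt((Tpl t) -> t.order).reversed());

        // apply settings in reverse so the first (highest-order) template overrides the rest
        Map<String, String> settings = new HashMap<>();
        for (int i = matching.size() - 1; i >= 0; i--) {
            settings.put("index.number_of_shards", matching.get(i).shards);
        }

        System.out.println(settings.get("index.number_of_shards")); // prints 12
    }
}
```

Running it prints `12`, matching the assertions in both ordering tests above.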
{
"body": "Here is a recreation:\r\n\r\n```\r\nDELETE index\r\n\r\nPUT index\r\n{\r\n \"mappings\": {\r\n \"doc\": {\r\n \"properties\": {\r\n \"my_date\": {\r\n \"type\": \"date\", \r\n \"format\": \"yyyy/MM/dd\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT index/_mapping/doc\r\n{\r\n \"doc\": {\r\n \"properties\": {\r\n \"my_date\": {\r\n \"type\": \"date\", \r\n \"format\": \"yyyy-MM-dd\"\r\n }\r\n }\r\n }\r\n}\r\n\r\nGET index/_mapping\r\n```\r\n\r\n`date` allows its `format` to be updated. This is trappy because then it could be changed to a format that doesn't work with already indexed documents, or even worse: a format that parses to a different date.\r\n\r\nI think we need to ensure that the format either cannot be changed, or make it a list that is append-only so that it is only possible to append date formats.",
"comments": [
{
"body": "~~++ to move to an append-able list!~~ we discussed this in fixit friday and we are convinced that we should not move to a list due to the ordering aspects. If we'd allow only adding to it but not sending the list is that we need to barf if the order changes which is weird. The right fix here is to make the date immutable and reject any updates to it.",
"created_at": "2017-06-16T13:07:28Z"
}
],
"number": 25271,
"title": "`date`'s `format` updateability is trappy"
} | {
"body": "Disable date field changing in mapping.\r\n\r\nI also modified a little `DocumentMapperMergeTests` in format.\r\n\r\nCloses #25271",
"number": 25285,
"review_comments": [
{
"body": "Can you move this test to `DateFieldMapperTests` instead and undo changes to this file?",
"created_at": "2017-06-22T11:14:37Z"
},
{
"body": "Could you only check the `format` property and make the error message more specific?",
"created_at": "2017-06-22T11:16:07Z"
},
{
"body": "++, thanks for using `expectThrows`",
"created_at": "2017-06-22T11:16:25Z"
},
{
"body": "it is currently a bit confusing since eg. `ignore_malformed` can be updated",
"created_at": "2017-06-22T11:17:05Z"
},
{
"body": "Thx for your comments, I modified the error message and specified it's _date field's format_ which cannot be updated. And the `fieldType()`'s [`equals()`](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java#L193-L199) method has been re-implemented to just compare the format of a Date, isn't it ?",
"created_at": "2017-06-22T12:28:19Z"
},
{
"body": "it also checks for other properties through the super.equals call",
"created_at": "2017-06-22T13:48:37Z"
}
],
"title": "Disable date field mapping changing"
} | {
"commits": [
{
"message": "Disable date field mapping changing\n\nMake date field mapping unchangeable.\n\nCloses #25271"
},
{
"message": "modification"
},
{
"message": "apply checkCompatibility for DateFieldType doMerge"
},
{
"message": "remove checkCompatibility for doMerge"
}
],
"files": [
{
"diff": "@@ -54,6 +54,7 @@\n import java.util.Locale;\n import java.util.Map;\n import java.util.Objects;\n+import java.util.ArrayList;\n import static org.elasticsearch.index.mapper.TypeParsers.parseDateTimeFormatter;\n \n /** A {@link FieldMapper} for ip addresses. */\n@@ -211,16 +212,12 @@ public String typeName() {\n @Override\n public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts, boolean strict) {\n super.checkCompatibility(fieldType, conflicts, strict);\n- if (strict) {\n- DateFieldType other = (DateFieldType)fieldType;\n- if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) {\n- conflicts.add(\"mapper [\" + name()\n- + \"] is used by multiple types. Set update_all_types to true to update [format] across all types.\");\n- }\n- if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) {\n- conflicts.add(\"mapper [\" + name()\n- + \"] is used by multiple types. Set update_all_types to true to update [locale] across all types.\");\n- }\n+ DateFieldType other = (DateFieldType) fieldType;\n+ if (Objects.equals(dateTimeFormatter().format(), other.dateTimeFormatter().format()) == false) {\n+ conflicts.add(\"mapper [\" + name() + \"] has different [format] values\");\n+ }\n+ if (Objects.equals(dateTimeFormatter().locale(), other.dateTimeFormatter().locale()) == false) {\n+ conflicts.add(\"mapper [\" + name() + \"] has different [locale] values\");\n }\n }\n \n@@ -490,8 +487,8 @@ protected void parseCreateField(ParseContext context, List<IndexableField> field\n \n @Override\n protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ final DateFieldMapper other = (DateFieldMapper) mergeWith;\n super.doMerge(mergeWith, updateAllTypes);\n- DateFieldMapper other = (DateFieldMapper) mergeWith;\n this.includeInAll = other.includeInAll;\n if (other.ignoreMalformed.explicit()) {\n this.ignoreMalformed = other.ignoreMalformed;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/DateFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -296,19 +296,13 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n protected void doMerge(Mapper mergeWith, boolean updateAllTypes) {\n+ ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith;\n ParentFieldType currentFieldType = (ParentFieldType) fieldType.clone();\n super.doMerge(mergeWith, updateAllTypes);\n- ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith;\n if (fieldMergeWith.parentType != null && Objects.equals(parentType, fieldMergeWith.parentType) == false) {\n throw new IllegalArgumentException(\"The _parent field's type option can't be changed: [\" + parentType + \"]->[\" + fieldMergeWith.parentType + \"]\");\n }\n \n- List<String> conflicts = new ArrayList<>();\n- fieldType().checkCompatibility(fieldMergeWith.fieldType, conflicts, true);\n- if (conflicts.isEmpty() == false) {\n- throw new IllegalArgumentException(\"Merge conflicts: \" + conflicts);\n- }\n-\n if (active()) {\n fieldType = currentFieldType;\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@\n import java.util.Collection;\n \n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.notNullValue;\n \n public class DateFieldMapperTests extends ESSingleNodeTestCase {\n \n@@ -345,4 +346,26 @@ public void testTimeZoneParsing() throws Exception {\n \n assertEquals(randomDate.withZone(DateTimeZone.UTC).getMillis(), fields[0].numericValue().longValue());\n }\n+\n+ public void testMergeDate() throws IOException {\n+ String initMapping = XContentFactory.jsonBuilder().startObject().startObject(\"movie\")\n+ .startObject(\"properties\")\n+ .startObject(\"release_date\").field(\"type\", \"date\").field(\"format\", \"yyyy/MM/dd\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper initMapper = indexService.mapperService().merge(\"movie\", new CompressedXContent(initMapping),\n+ MapperService.MergeReason.MAPPING_UPDATE, randomBoolean());\n+\n+ assertThat(initMapper.mappers().getMapper(\"release_date\"), notNullValue());\n+ assertFalse(initMapper.mappers().getMapper(\"release_date\").fieldType().stored());\n+\n+ String updateFormatMapping = XContentFactory.jsonBuilder().startObject().startObject(\"movie\")\n+ .startObject(\"properties\")\n+ .startObject(\"release_date\").field(\"type\", \"date\").field(\"format\", \"epoch_millis\").endObject()\n+ .endObject().endObject().endObject().string();\n+\n+ Exception e = expectThrows(IllegalArgumentException.class,\n+ () -> indexService.mapperService().merge(\"movie\", new CompressedXContent(updateFormatMapping),\n+ MapperService.MergeReason.MAPPING_UPDATE, randomBoolean()));\n+ assertThat(e.getMessage(), containsString(\"[mapper [release_date] has different [format] values]\"));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DateFieldMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -58,13 +58,13 @@ protected MappedFieldType createDefaultFieldType() {\n @Before\n public void setupProperties() {\n setDummyNullValue(10);\n- addModifier(new Modifier(\"format\", true) {\n+ addModifier(new Modifier(\"format\", false) {\n @Override\n public void modify(MappedFieldType ft) {\n ((DateFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"basic_week_date\", Locale.ROOT));\n }\n });\n- addModifier(new Modifier(\"locale\", true) {\n+ addModifier(new Modifier(\"locale\", false) {\n @Override\n public void modify(MappedFieldType ft) {\n ((DateFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"date_optional_time\", Locale.CANADA));",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DateFieldTypeTests.java",
"status": "modified"
},
{
"diff": "@@ -25,13 +25,6 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n-import org.elasticsearch.index.mapper.DocumentFieldMappers;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.DocumentMapperParser;\n-import org.elasticsearch.index.mapper.MapperService;\n-import org.elasticsearch.index.mapper.Mapping;\n-import org.elasticsearch.index.mapper.ObjectMapper;\n-import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n import java.io.IOException;",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentMapperMergeTests.java",
"status": "modified"
}
]
} |
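Aside: the trap described in issue #25271 above, where a changed `format` can parse existing values to a different date, can be made concrete with a small `java.time` sketch. The patterns and the stored value below are illustrative assumptions, not taken from the issue.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical illustration of the #25271 trap: the same stored string parses to a
// different date once the mapping's "format" is changed underneath it.
public class DateFormatTrap {

    public static void main(String[] args) {
        String stored = "2017/03/04";

        // format the documents were originally indexed with
        LocalDate original = LocalDate.parse(stored, DateTimeFormatter.ofPattern("yyyy/MM/dd"));
        // format after a (formerly allowed) mapping update
        LocalDate afterUpdate = LocalDate.parse(stored, DateTimeFormatter.ofPattern("yyyy/dd/MM"));

        System.out.println(original);    // 2017-03-04
        System.out.println(afterUpdate); // 2017-04-03, a different date for the same source value
    }
}
```

The `checkCompatibility` change in the diff above now rejects such an update with a `has different [format] values` conflict instead of silently accepting it.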
{
"body": "**Elasticsearch version**: master 186c16ea41406b284bce896ab23771a93e93e7ec, 6.0.0-alpha1 and alpha2\r\n\r\n**Plugins installed**: repository-s3\r\n\r\n**JVM version** (`java -version`): 1.8.0_131-b11\r\n\r\n**OS version** :\r\n- Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64\r\n- ubuntu-1604 4.4.0-75-generic #96-Ubuntu SMP Thu Apr 20 09:56:33 UTC 2017 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen creating snapshots to S3, the first snapshot is often successful, a second snapshot fails most of the time with a SecurityException. This leads to a PARTIAL snapshot with errors in the log or a\r\n`{\"error\":{\"root_cause\":[{\"type\":\"access_control_exception\",\"reason\":\"access denied (\\\"java.net.SocketPermission\\\" \\\"54.231.134.114:443\\\" \\\"connect,resolve\\\")\"}],\"type\":\"access_control_exception\",\"reason\":\"access denied (\\\"java.net.SocketPermission\\\" \\\"54.231.134.114:443\\\" \\\"connect,resolve\\\")\"},\"status\":500}`\r\n`\r\n\r\n**Steps to reproduce**:\r\n 1. Create 10 empty indices\r\n 2. Register S3 repository \r\n 3. Create snapshots in a loop\r\n \r\n(python scripts attached)\r\n\r\n**Analysis**:\r\n\r\nThe exception occurs when a socket gets opened by a S3OutputStream.close() operation. The problem is that a plugin can only use its own code/jars to perform privileged operations. In this case the stack contains elements from the elasticsearch and the lucene-core jar which gets more obvious from the security debugging below.\r\n\r\nA snapshot might succeed if connections got opened using e.g. the listBucket or bucketExists methods and gets reused on S3OutputStream.close() calls.\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-06-13T10:53:06,284][INFO ][o.e.s.SnapshotShardsService] [PoQjxkm] snapshot [elasticsearch-local:20170613t0952-1/Y6AH1fueS2i8hfL8hASjEg] is done\r\n[2017-06-13T10:53:08,940][WARN ][o.e.s.SnapshotsService ] [PoQjxkm] failed to create snapshot [20170613t0952-2/4QbHE3LjTEyN9OR25QVQGg]\r\njava.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\tat 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:89) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:76) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:88) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) 
~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\tSuppressed: java.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\t\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\t\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\t\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\t\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\t\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\t\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:96) 
~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n[2017-06-13T10:53:08,963][WARN ][r.suppressed ] path: /_snapshot/elasticsearch-local/20170613t0952-2, params: {repository=elasticsearch-local, snapshot=20170613t0952-2}\r\njava.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\tat 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:89) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.apache.lucene.util.IOUtils.close(IOUtils.java:76) ~[lucene-core-7.0.0-snapshot-a0aef2f.jar:7.0.0-snapshot-a0aef2f 5ba761bcf693f3e553489642e2d9f5af09db44cc - nknize - 2017-05-16 17:08:03]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:88) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.io.Streams.copy(Streams.java:60) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat 
java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n\tSuppressed: java.security.AccessControlException: access denied (\"java.net.SocketPermission\" \"54.231.134.114:443\" \"connect,resolve\")\r\n\t\tat java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_131]\r\n\t\tat java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_131]\r\n\t\tat java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_131]\r\n\t\tat java.net.Socket.connect(Socket.java:584) ~[?:1.8.0_131]\r\n\t\tat sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542) ~[?:?]\r\n\t\tat org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412) ~[?:?]\r\n\t\tat com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179) ~[?:?]\r\n\t\tat org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612) ~[?:?]\r\n\t\tat org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447) ~[?:?]\r\n\t\tat org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]\r\n\t\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) ~[?:?]\r\n\t\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654) ~[?:?]\r\n\t\tat com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:96) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.write(ChecksumBlobStoreFormat.java:157) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat 
org.elasticsearch.repositories.blobstore.BlobStoreRepository.initializeSnapshot(BlobStoreRepository.java:327) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.beginSnapshot(SnapshotsService.java:364) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.access$700(SnapshotsService.java:105) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService$1.lambda$clusterStateProcessed$1(SnapshotsService.java:282) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) ~[elasticsearch-6.0.0-alpha3-SNAPSHOT.jar:6.0.0-alpha3-SNAPSHOT]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\n```\r\n-Djava.security.debug=\"access,failure,domain\"\r\n\r\n```\r\naccess: access denied (\"java.net.SocketPermission\" \"52.218.64.73:443\" \"connect,resolve\")\r\njava.lang.Exception: Stack trace\r\n at java.lang.Thread.dumpStack(Thread.java:1336)\r\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:462)\r\n at java.security.AccessController.checkPermission(AccessController.java:884)\r\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\r\n at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051)\r\n at java.net.Socket.connect(Socket.java:584)\r\n at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)\r\n at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:542)\r\n at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:412)\r\n at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:134)\r\n at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:179)\r\n at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:328)\r\n at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:612)\r\n at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:447)\r\n at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)\r\n at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)\r\n at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)\r\n at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)\r\n at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)\r\n at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)\r\n at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)\r\n at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)\r\n at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3654)\r\n at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1354)\r\n at org.elasticsearch.repositories.s3.DefaultS3OutputStream.doUpload(DefaultS3OutputStream.java:139)\r\n at 
org.elasticsearch.repositories.s3.DefaultS3OutputStream.upload(DefaultS3OutputStream.java:110)\r\n at org.elasticsearch.repositories.s3.DefaultS3OutputStream.flush(DefaultS3OutputStream.java:99)\r\n at org.elasticsearch.repositories.s3.S3OutputStream.flushBuffer(S3OutputStream.java:69)\r\n at org.elasticsearch.repositories.s3.S3OutputStream.close(S3OutputStream.java:87)\r\n at org.apache.lucene.util.IOUtils.close(IOUtils.java:89)\r\n at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:88)\r\n at org.elasticsearch.common.io.Streams.copy(Streams.java:60)\r\n at org.elasticsearch.repositories.s3.S3BlobContainer.lambda$writeBlob$2(S3BlobContainer.java:95)\r\n at java.security.AccessController.doPrivileged(Native Method)\r\n at org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedIOException(SocketAccess.java:48)\r\n at org.elasticsearch.repositories.s3.S3BlobContainer.writeBlob(S3BlobContainer.java:95)\r\n at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeBlob(ChecksumBlobStoreFormat.java:187)\r\n at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:136)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1008)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1242)\r\n at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:815)\r\n at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:380)\r\n at org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88)\r\n at org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:334)\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:748)\r\naccess: domain that failed ProtectionDomain (file:/Users/jodraeger/servers/elasticsearch-6.0.0-alpha3-SNAPSHOT/lib/lucene-core-7.0.0-snapshot-a0aef2f.jar <no signer certificates>)\r\n sun.misc.Launcher$AppClassLoader@18b4aac2\r\n <no principals>\r\n java.security.Permissions@47db5fa5 (\r\n (\"java.lang.RuntimePermission\" \"exitVM\")\r\n (\"java.io.FilePermission\" \"/Users/jodraeger/servers/elasticsearch-6.0.0-alpha3-SNAPSHOT/lib/lucene-core-7.0.0-snapshot-a0aef2f.jar\" \"read\")\r\n)\r\n```\r\n\r\n",
"comments": [
{
"body": "thanks for testing out 6.0 @joachimdraeger - i've made you an Elastic Pioneer\r\n\r\n@tbrooks8 please could you take a look",
"created_at": "2017-06-13T13:09:56Z"
},
{
"body": "This should be fixed by #25254",
"created_at": "2017-07-10T15:37:41Z"
}
],
"number": 25192,
"title": "intermittent SecurityException when creating s3-repository snapshots"
} | {
"body": "Moved SocketAccess.doPrivileged up the stack to DefaultS3OutputStream in repository-S3 plugin to avoid SecurityException by Streams.copy(). A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192\r\n",
"number": 25254,
"review_comments": [
{
"body": "Spelling: flushPrivileged",
"created_at": "2017-06-20T20:34:54Z"
},
{
"body": "Are there any guidelines which ports could be used in unit/integration tests? I think I wasn't able to use any other port than 9200 without changing further permissions.",
"created_at": "2017-06-21T14:02:55Z"
},
{
"body": "Can you elaborate a bit more why this necessary, exactly what we are simulating here?",
"created_at": "2017-06-27T00:49:46Z"
},
{
"body": "Maybe wrap this in a try-with-resources instead?",
"created_at": "2017-06-27T00:50:26Z"
},
{
"body": "How about throwing `UncheckedIOException` instead?",
"created_at": "2017-06-27T00:50:35Z"
},
{
"body": "It's not obvious to me what is going on here, can you please explain?",
"created_at": "2017-06-27T00:52:56Z"
},
{
"body": "I don't understand this?",
"created_at": "2017-06-27T00:53:04Z"
},
{
"body": "This thread can leak and fail the test, I think that you need to clean it up (join on it in tear down).",
"created_at": "2017-06-27T00:53:38Z"
},
{
"body": "We tend to static import these assert methods; would you mind doing the same here so this can only be `assertTrue` without the class qualifier?",
"created_at": "2017-07-03T16:31:56Z"
},
{
"body": "I think that this will generate an error in the logs most of the time when this test ends. As we will call `close` and I think `accept` throws an exception when the socket is closed. @jasontedor - is this alright?",
"created_at": "2017-07-03T16:38:36Z"
},
{
"body": "That's a good point @tbrooks8, we should not do this.",
"created_at": "2017-07-03T16:41:30Z"
},
{
"body": "Sure, no problem",
"created_at": "2017-07-04T10:58:36Z"
},
{
"body": "I'll remove it",
"created_at": "2017-07-04T10:58:57Z"
}
],
"title": "Avoid SecurityException in repository-S3 on DefaultS3OutputStream.flush()"
} | {
"commits": [
{
"message": "Moved SocketAccess.doPrivileged up the stack to DefaultS3OutputStream in repository-S3 plugin to avoid SecurityException by Streams.copy(). A plugin is only allowed to use its own jars when performing privileged operations. The S3 client might open a new Socket on close(). #25192"
},
{
"message": "PoC for checking SocketPetmissions in repository-s3 plugin"
},
{
"message": "Fix spelling"
},
{
"message": "Merge branch 's3-socket-perm-tests' into s3-security-exception-flush"
},
{
"message": "fix api checks"
},
{
"message": "Use MockSocket to simulate S3 connections"
},
{
"message": "More documentation to clarify the role of using a MockSocket in S3BlobStoreContainerTests\nand further improvements."
},
{
"message": "Removed obsolete IOException in test, static import of assertTrue"
}
],
"files": [
{
"diff": "@@ -78,6 +78,13 @@ class DefaultS3OutputStream extends S3OutputStream {\n \n @Override\n public void flush(byte[] bytes, int off, int len, boolean closing) throws IOException {\n+ SocketAccess.doPrivilegedIOException(() -> {\n+ flushPrivileged(bytes, off, len, closing);\n+ return null;\n+ });\n+ }\n+\n+ private void flushPrivileged(byte[] bytes, int off, int len, boolean closing) throws IOException {\n if (len > MULTIPART_MAX_SIZE.getBytes()) {\n throw new IOException(\"Unable to upload files larger than \" + MULTIPART_MAX_SIZE + \" to Amazon S3\");\n }",
"filename": "plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/DefaultS3OutputStream.java",
"status": "modified"
},
{
"diff": "@@ -92,7 +92,7 @@ public void writeBlob(String blobName, InputStream inputStream, long blobSize) t\n throw new FileAlreadyExistsException(\"blob [\" + blobName + \"] already exists, cannot overwrite\");\n }\n try (OutputStream stream = createOutput(blobName)) {\n- SocketAccess.doPrivilegedIOException(() -> Streams.copy(inputStream, stream));\n+ Streams.copy(inputStream, stream);\n }\n }\n ",
"filename": "plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3BlobContainer.java",
"status": "modified"
},
{
"diff": "@@ -40,20 +40,49 @@\n \n import java.io.IOException;\n import java.io.InputStream;\n+import java.io.UncheckedIOException;\n+import java.net.InetAddress;\n+import java.net.Socket;\n import java.security.DigestInputStream;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Map;\n import java.util.concurrent.ConcurrentHashMap;\n \n+import static org.junit.Assert.assertTrue;\n+\n class MockAmazonS3 extends AbstractAmazonS3 {\n \n+ private final int mockSocketPort;\n+\n private Map<String, InputStream> blobs = new ConcurrentHashMap<>();\n \n // in ESBlobStoreContainerTestCase.java, the maximum\n // length of the input data is 100 bytes\n private byte[] byteCounter = new byte[100];\n \n+\n+ MockAmazonS3(int mockSocketPort) {\n+ this.mockSocketPort = mockSocketPort;\n+ }\n+\n+ // Simulate a socket connection to check that SocketAccess.doPrivileged() is used correctly.\n+ // Any method of AmazonS3 might potentially open a socket to the S3 service. Firstly, a call\n+ // to any method of AmazonS3 has to be wrapped by SocketAccess.doPrivileged().\n+ // Secondly, each method on the stack from doPrivileged to opening the socket has to be\n+ // located in a jar that is provided by the plugin.\n+ // Thirdly, a SocketPermission has to be configured in plugin-security.policy.\n+ // By opening a socket in each method of MockAmazonS3 it is ensured that in production AmazonS3\n+ // is able to to open a socket to the S3 Service without causing a SecurityException\n+ private void simulateS3SocketConnection() {\n+ try (Socket socket = new Socket(InetAddress.getByName(\"127.0.0.1\"), mockSocketPort)) {\n+ assertTrue(socket.isConnected()); // NOOP to keep static analysis happy\n+ } catch (IOException e) {\n+ throw new UncheckedIOException(e);\n+ }\n+ }\n+\n+\n @Override\n public boolean doesBucketExist(String bucket) {\n return true;\n@@ -63,6 +92,7 @@ public boolean doesBucketExist(String bucket) {\n public ObjectMetadata getObjectMetadata(\n GetObjectMetadataRequest getObjectMetadataRequest)\n throws AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n String blobName = getObjectMetadataRequest.getKey();\n \n if (!blobs.containsKey(blobName)) {\n@@ -75,6 +105,7 @@ public ObjectMetadata getObjectMetadata(\n @Override\n public PutObjectResult putObject(PutObjectRequest putObjectRequest)\n throws AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n String blobName = putObjectRequest.getKey();\n DigestInputStream stream = (DigestInputStream) putObjectRequest.getInputStream();\n \n@@ -95,6 +126,7 @@ public PutObjectResult putObject(PutObjectRequest putObjectRequest)\n @Override\n public S3Object getObject(GetObjectRequest getObjectRequest)\n throws AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n // in ESBlobStoreContainerTestCase.java, the prefix is empty,\n // so the key and blobName are equivalent to each other\n String blobName = getObjectRequest.getKey();\n@@ -114,6 +146,7 @@ public S3Object getObject(GetObjectRequest getObjectRequest)\n @Override\n public ObjectListing listObjects(ListObjectsRequest listObjectsRequest)\n throws AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n MockObjectListing list = new MockObjectListing();\n list.setTruncated(false);\n \n@@ -147,6 +180,7 @@ public ObjectListing listObjects(ListObjectsRequest listObjectsRequest)\n @Override\n public CopyObjectResult copyObject(CopyObjectRequest copyObjectRequest)\n throws 
AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n String sourceBlobName = copyObjectRequest.getSourceKey();\n String targetBlobName = copyObjectRequest.getDestinationKey();\n \n@@ -167,6 +201,7 @@ public CopyObjectResult copyObject(CopyObjectRequest copyObjectRequest)\n @Override\n public void deleteObject(DeleteObjectRequest deleteObjectRequest)\n throws AmazonClientException, AmazonServiceException {\n+ simulateS3SocketConnection();\n String blobName = deleteObjectRequest.getKey();\n \n if (!blobs.containsKey(blobName)) {",
"filename": "plugins/repository-s3/src/test/java/org/elasticsearch/repositories/s3/MockAmazonS3.java",
"status": "modified"
},
{
"diff": "@@ -19,21 +19,61 @@\n \n package org.elasticsearch.repositories.s3;\n \n+import org.apache.logging.log4j.Level;\n+import org.apache.logging.log4j.Logger;\n import org.elasticsearch.common.blobstore.BlobStore;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.mocksocket.MockServerSocket;\n import org.elasticsearch.repositories.ESBlobStoreContainerTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n \n import java.io.IOException;\n+import java.net.InetAddress;\n+import java.net.ServerSocket;\n import java.util.Locale;\n \n public class S3BlobStoreContainerTests extends ESBlobStoreContainerTestCase {\n+\n+ private static final Logger logger = Loggers.getLogger(S3BlobStoreContainerTests.class);\n+\n+ private static ServerSocket mockS3ServerSocket;\n+\n+ private static Thread mockS3AcceptorThread;\n+\n+ // Opens a MockSocket to simulate connections to S3 checking that SocketPermissions are set up correctly.\n+ // See MockAmazonS3.simulateS3SocketConnection.\n+ @BeforeClass\n+ public static void openMockSocket() throws IOException {\n+ mockS3ServerSocket = new MockServerSocket(0, 50, InetAddress.getByName(\"127.0.0.1\"));\n+ mockS3AcceptorThread = new Thread(() -> {\n+ while (!mockS3ServerSocket.isClosed()) {\n+ try {\n+ // Accept connections from MockAmazonS3.\n+ mockS3ServerSocket.accept();\n+ } catch (IOException e) {\n+ }\n+ }\n+ });\n+ mockS3AcceptorThread.start();\n+ }\n+\n protected BlobStore newBlobStore() throws IOException {\n- MockAmazonS3 client = new MockAmazonS3();\n+ MockAmazonS3 client = new MockAmazonS3(mockS3ServerSocket.getLocalPort());\n String bucket = randomAlphaOfLength(randomIntBetween(1, 10)).toLowerCase(Locale.ROOT);\n \n return new S3BlobStore(Settings.EMPTY, client, bucket, false,\n new ByteSizeValue(10, ByteSizeUnit.MB), \"public-read-write\", \"standard\");\n }\n+\n+ @AfterClass\n+ public static void closeMockSocket() throws IOException, InterruptedException {\n+ mockS3ServerSocket.close();\n+ mockS3AcceptorThread.join();\n+ mockS3AcceptorThread = null;\n+ mockS3ServerSocket = null;\n+ }\n }",
"filename": "plugins/repository-s3/src/test/java/org/elasticsearch/repositories/s3/S3BlobStoreContainerTests.java",
"status": "modified"
}
]
} |
{
"body": "**Elasticsearch version**: 5.4.0\r\n\r\n**Plugins installed**: [X-Pack]\r\n\r\n**JVM version** (`java -version`):java version \"1.8.0_102\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_102-b14)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): macOS Sierra Version 10.12.4 (16E195)\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nThe minimum score of the nested document is applied to the parent, although \"score_mode\": \"max\" is specified.\r\n\r\n**Steps to reproduce**:\r\n\r\nCreate Index\r\n```\r\nPUT tests\r\n{\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"nestedDoc\": {\r\n \"type\": \"nested\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nIndex a test document\r\n```\r\nPUT tests/test/1\r\n{\r\n \"topVal\" : 1,\r\n \"nestedDoc\": [\r\n {\r\n \"nestedVal\": 2\r\n },\r\n {\r\n \"nestedVal\": 3\r\n }\r\n ]\r\n}\r\n```\r\n\r\nVerify the existence of the test document\r\n```\r\nGET tests/_search\r\n```\r\n\r\nDo a nested query with function scoring on the nestedVal. The expected score of the parent document should be 3, the maximum. However, 2 is returned. \"avg\" as score_mode works.\r\n```\r\nGET tests/test/_search\r\n{\r\n \"explain\": true, \r\n \"query\": {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"match_all\": {}\r\n },\r\n \"functions\" : [\r\n {\r\n \"filter\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"field_value_factor\" : {\r\n \"field\" : \"nestedDoc.nestedVal\",\r\n \"factor\" : 1.0,\r\n \"missing\" : 0.0,\r\n \"modifier\" : \"none\"\r\n }\r\n }\r\n ],\r\n \"score_mode\" : \"sum\",\r\n \"boost_mode\" : \"replace\"\r\n }\r\n },\r\n \"path\" : \"nestedDoc\",\r\n \"score_mode\" : \"max\",\r\n \"inner_hits\" : {\r\n \"name\" : \"nestedDoc\",\r\n \"ignore_unmapped\" : true,\r\n \"from\" : 0,\r\n \"size\" : 30,\r\n \"version\" : false,\r\n \"explain\" : true,\r\n \"track_scores\" : true,\r\n \"_source\" : false\r\n }\r\n }\r\n }\r\n}\r\n```\r\n",
"comments": [
{
"body": "Tested under 5.1.2 and 5.3.0 and it worked as expected.",
"created_at": "2017-05-12T14:09:06Z"
},
{
"body": "+1",
"created_at": "2017-05-12T16:41:19Z"
},
{
"body": "+1",
"created_at": "2017-05-17T15:07:33Z"
},
{
"body": "I found a problem in Lucene which might be related ...\r\nhttps://issues.apache.org/jira/browse/LUCENE-7833",
"created_at": "2017-05-17T17:08:37Z"
},
{
"body": "We from Holidu (@MaKuehn, @michaelsiebers and me) can confirm that the scoring bug originates from the mentioned bug in Lucene. We built our own lucene-join-6.5.0 jar and replaced it on our test cluster and the test query above returned `\"max_score\": 3`.",
"created_at": "2017-05-18T20:27:18Z"
},
{
"body": "Lucene 6.6 is now out and has the fix, so we just need to upgrade to have the fix in 5.5. Separately, I'll try to see how we can get the fix in 5.4.x as well.",
"created_at": "2017-06-07T11:50:07Z"
},
{
"body": "This will be fixed in 5.4.2 and 5.5.0.",
"created_at": "2017-06-15T09:29:28Z"
},
{
"body": "Thanks for fixing this! In the meantime, version 5.3.3 works as well.",
"created_at": "2017-06-15T21:38:59Z"
}
],
"number": 24647,
"title": "\"score_mode\": \"max\" is not working for nested Query and function score"
} | {
"body": "This PR backports https://issues.apache.org/jira/browse/LUCENE-7833 to 5.4.\r\n\r\nCloses #24647",
"number": 25216,
"review_comments": [
{
"body": "Can you add some kind of marking here that this is the only change compared to the class that ships with lucene 6.5?\r\n\r\n```\r\n// BEGIN CHANGE\r\nscore = Math.max(score, childScore);\r\n// END CHANGE\r\n```",
"created_at": "2017-06-14T09:46:26Z"
}
],
"title": "Fix the `max` score mode."
} | {
"commits": [
{
"message": "Fix the `max` score mode.\n\nThis PR backports https://issues.apache.org/jira/browse/LUCENE-7833 to 5.4.\n\nCloses #24647"
},
{
"message": "iter"
},
{
"message": "iter"
}
],
"files": [
{
"diff": "@@ -0,0 +1,405 @@\n+// Copied from Lucene 6.6.0, do not modify\n+\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.apache.lucene.search;\n+\n+import java.io.IOException;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.Locale;\n+\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.IndexWriter;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.FilterWeight;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.search.ScorerSupplier;\n+import org.apache.lucene.search.TwoPhaseIterator;\n+import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.join.BitSetProducer;\n+import org.apache.lucene.search.join.ScoreMode;\n+import org.apache.lucene.search.join.ToChildBlockJoinQuery;\n+import org.apache.lucene.util.BitSet;\n+\n+/**\n+ * This query requires that you index\n+ * children and parent docs as a single block, using the\n+ * {@link IndexWriter#addDocuments IndexWriter.addDocuments()} or {@link\n+ * IndexWriter#updateDocuments IndexWriter.updateDocuments()} API. In each block, the\n+ * child documents must appear first, ending with the parent\n+ * document. At search time you provide a Filter\n+ * identifying the parents, however this Filter must provide\n+ * an {@link BitSet} per sub-reader.\n+ *\n+ * <p>Once the block index is built, use this query to wrap\n+ * any sub-query matching only child docs and join matches in that\n+ * child document space up to the parent document space.\n+ * You can then use this Query as a clause with\n+ * other queries in the parent document space.</p>\n+ *\n+ * <p>See {@link ToChildBlockJoinQuery} if you need to join\n+ * in the reverse order.\n+ *\n+ * <p>The child documents must be orthogonal to the parent\n+ * documents: the wrapped child query must never\n+ * return a parent document.</p>\n+ *\n+ * <p>See {@link org.apache.lucene.search.join} for an\n+ * overview. 
</p>\n+ *\n+ */\n+public class XToParentBlockJoinQuery extends Query {\n+\n+ private final BitSetProducer parentsFilter;\n+ private final Query childQuery;\n+ private final ScoreMode scoreMode;\n+\n+ /** Create a ToParentBlockJoinQuery.\n+ *\n+ * @param childQuery Query matching child documents.\n+ * @param parentsFilter Filter identifying the parent documents.\n+ * @param scoreMode How to aggregate multiple child scores\n+ * into a single parent score.\n+ **/\n+ public XToParentBlockJoinQuery(Query childQuery, BitSetProducer parentsFilter, ScoreMode scoreMode) {\n+ super();\n+ this.childQuery = childQuery;\n+ this.parentsFilter = parentsFilter;\n+ this.scoreMode = scoreMode;\n+ }\n+\n+ @Override\n+ public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {\n+ return new BlockJoinWeight(this, childQuery.createWeight(searcher, needsScores), parentsFilter,\n+ needsScores ? scoreMode : ScoreMode.None);\n+ }\n+\n+ /** Return our child query. */\n+ public Query getChildQuery() {\n+ return childQuery;\n+ }\n+\n+ private static class BlockJoinWeight extends FilterWeight {\n+ private final BitSetProducer parentsFilter;\n+ private final ScoreMode scoreMode;\n+\n+ BlockJoinWeight(Query joinQuery, Weight childWeight, BitSetProducer parentsFilter, ScoreMode scoreMode) {\n+ super(joinQuery, childWeight);\n+ this.parentsFilter = parentsFilter;\n+ this.scoreMode = scoreMode;\n+ }\n+\n+ @Override\n+ public Scorer scorer(LeafReaderContext context) throws IOException {\n+ final ScorerSupplier scorerSupplier = scorerSupplier(context);\n+ if (scorerSupplier == null) {\n+ return null;\n+ }\n+ return scorerSupplier.get(false);\n+ }\n+\n+ // NOTE: acceptDocs applies (and is checked) only in the\n+ // parent document space\n+ @Override\n+ public ScorerSupplier scorerSupplier(LeafReaderContext context) throws IOException {\n+ final ScorerSupplier childScorerSupplier = in.scorerSupplier(context);\n+ if (childScorerSupplier == null) {\n+ return null;\n+ }\n+\n+ // NOTE: this does not take accept docs into account, the responsibility\n+ // to not match deleted docs is on the scorer\n+ final BitSet parents = parentsFilter.getBitSet(context);\n+ if (parents == null) {\n+ // No matches\n+ return null;\n+ }\n+\n+ return new ScorerSupplier() {\n+\n+ @Override\n+ public Scorer get(boolean randomAccess) throws IOException {\n+ return new BlockJoinScorer(BlockJoinWeight.this, childScorerSupplier.get(randomAccess), parents, scoreMode);\n+ }\n+\n+ @Override\n+ public long cost() {\n+ return childScorerSupplier.cost();\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public Explanation explain(LeafReaderContext context, int doc) throws IOException {\n+ BlockJoinScorer scorer = (BlockJoinScorer) scorer(context);\n+ if (scorer != null && scorer.iterator().advance(doc) == doc) {\n+ return scorer.explain(context, in);\n+ }\n+ return Explanation.noMatch(\"Not a match\");\n+ }\n+ }\n+\n+ private static class ParentApproximation extends DocIdSetIterator {\n+\n+ private final DocIdSetIterator childApproximation;\n+ private final BitSet parentBits;\n+ private int doc = -1;\n+\n+ ParentApproximation(DocIdSetIterator childApproximation, BitSet parentBits) {\n+ this.childApproximation = childApproximation;\n+ this.parentBits = parentBits;\n+ }\n+\n+ @Override\n+ public int docID() {\n+ return doc;\n+ }\n+\n+ @Override\n+ public int nextDoc() throws IOException {\n+ return advance(doc + 1);\n+ }\n+\n+ @Override\n+ public int advance(int target) throws IOException {\n+ if (target >= parentBits.length()) {\n+ return doc = 
NO_MORE_DOCS;\n+ }\n+ final int firstChildTarget = target == 0 ? 0 : parentBits.prevSetBit(target - 1) + 1;\n+ int childDoc = childApproximation.docID();\n+ if (childDoc < firstChildTarget) {\n+ childDoc = childApproximation.advance(firstChildTarget);\n+ }\n+ if (childDoc >= parentBits.length() - 1) {\n+ return doc = NO_MORE_DOCS;\n+ }\n+ return doc = parentBits.nextSetBit(childDoc + 1);\n+ }\n+\n+ @Override\n+ public long cost() {\n+ return childApproximation.cost();\n+ }\n+ }\n+\n+ private static class ParentTwoPhase extends TwoPhaseIterator {\n+\n+ private final ParentApproximation parentApproximation;\n+ private final DocIdSetIterator childApproximation;\n+ private final TwoPhaseIterator childTwoPhase;\n+\n+ ParentTwoPhase(ParentApproximation parentApproximation, TwoPhaseIterator childTwoPhase) {\n+ super(parentApproximation);\n+ this.parentApproximation = parentApproximation;\n+ this.childApproximation = childTwoPhase.approximation();\n+ this.childTwoPhase = childTwoPhase;\n+ }\n+\n+ @Override\n+ public boolean matches() throws IOException {\n+ assert childApproximation.docID() < parentApproximation.docID();\n+ do {\n+ if (childTwoPhase.matches()) {\n+ return true;\n+ }\n+ } while (childApproximation.nextDoc() < parentApproximation.docID());\n+ return false;\n+ }\n+\n+ @Override\n+ public float matchCost() {\n+ // TODO: how could we compute a match cost?\n+ return childTwoPhase.matchCost() + 10;\n+ }\n+ }\n+\n+ static class BlockJoinScorer extends Scorer {\n+ private final Scorer childScorer;\n+ private final BitSet parentBits;\n+ private final ScoreMode scoreMode;\n+ private final DocIdSetIterator childApproximation;\n+ private final TwoPhaseIterator childTwoPhase;\n+ private final ParentApproximation parentApproximation;\n+ private final ParentTwoPhase parentTwoPhase;\n+ private float score;\n+ private int freq;\n+\n+ BlockJoinScorer(Weight weight, Scorer childScorer, BitSet parentBits, ScoreMode scoreMode) {\n+ super(weight);\n+ //System.out.println(\"Q.init firstChildDoc=\" + firstChildDoc);\n+ this.parentBits = parentBits;\n+ this.childScorer = childScorer;\n+ this.scoreMode = scoreMode;\n+ childTwoPhase = childScorer.twoPhaseIterator();\n+ if (childTwoPhase == null) {\n+ childApproximation = childScorer.iterator();\n+ parentApproximation = new ParentApproximation(childApproximation, parentBits);\n+ parentTwoPhase = null;\n+ } else {\n+ childApproximation = childTwoPhase.approximation();\n+ parentApproximation = new ParentApproximation(childTwoPhase.approximation(), parentBits);\n+ parentTwoPhase = new ParentTwoPhase(parentApproximation, childTwoPhase);\n+ }\n+ }\n+\n+ @Override\n+ public Collection<ChildScorer> getChildren() {\n+ return Collections.singleton(new ChildScorer(childScorer, \"BLOCK_JOIN\"));\n+ }\n+\n+ @Override\n+ public DocIdSetIterator iterator() {\n+ if (parentTwoPhase == null) {\n+ // the approximation is exact\n+ return parentApproximation;\n+ } else {\n+ return TwoPhaseIterator.asDocIdSetIterator(parentTwoPhase);\n+ }\n+ }\n+\n+ @Override\n+ public TwoPhaseIterator twoPhaseIterator() {\n+ return parentTwoPhase;\n+ }\n+\n+ @Override\n+ public int docID() {\n+ return parentApproximation.docID();\n+ }\n+\n+ @Override\n+ public float score() throws IOException {\n+ setScoreAndFreq();\n+ return score;\n+ }\n+ \n+ @Override\n+ public int freq() throws IOException {\n+ setScoreAndFreq();\n+ return freq;\n+ }\n+\n+ private void setScoreAndFreq() throws IOException {\n+ if (childApproximation.docID() >= parentApproximation.docID()) {\n+ return;\n+ }\n+ double score = 
scoreMode == ScoreMode.None ? 0 : childScorer.score();\n+ int freq = 1;\n+ while (childApproximation.nextDoc() < parentApproximation.docID()) {\n+ if (childTwoPhase == null || childTwoPhase.matches()) {\n+ final float childScore = childScorer.score();\n+ freq += 1;\n+ switch (scoreMode) {\n+ case Total:\n+ case Avg:\n+ score += childScore;\n+ break;\n+ case Min:\n+ score = Math.min(score, childScore);\n+ break;\n+ case Max:\n+ // BEGIN CHANGE\n+ score = Math.max(score, childScore);\n+ // BEGIN CHANGE\n+ break;\n+ case None:\n+ break;\n+ default:\n+ throw new AssertionError();\n+ }\n+ }\n+ }\n+ if (childApproximation.docID() == parentApproximation.docID() && (childTwoPhase == null || childTwoPhase.matches())) {\n+ throw new IllegalStateException(\"Child query must not match same docs with parent filter. \"\n+ + \"Combine them as must clauses (+) to find a problem doc. \"\n+ + \"docId=\" + parentApproximation.docID() + \", \" + childScorer.getClass());\n+ }\n+ if (scoreMode == ScoreMode.Avg) {\n+ score /= freq;\n+ }\n+ this.score = (float) score;\n+ this.freq = freq;\n+ }\n+\n+ public Explanation explain(LeafReaderContext context, Weight childWeight) throws IOException {\n+ int prevParentDoc = parentBits.prevSetBit(parentApproximation.docID() - 1);\n+ int start = context.docBase + prevParentDoc + 1; // +1 b/c prevParentDoc is previous parent doc\n+ int end = context.docBase + parentApproximation.docID() - 1; // -1 b/c parentDoc is parent doc\n+\n+ Explanation bestChild = null;\n+ int matches = 0;\n+ for (int childDoc = start; childDoc <= end; childDoc++) {\n+ Explanation child = childWeight.explain(context, childDoc - context.docBase);\n+ if (child.isMatch()) {\n+ matches++;\n+ if (bestChild == null || child.getValue() > bestChild.getValue()) {\n+ bestChild = child;\n+ }\n+ }\n+ }\n+\n+ assert freq() == matches;\n+ return Explanation.match(score(), String.format(Locale.ROOT,\n+ \"Score based on %d child docs in range from %d to %d, best match:\", matches, start, end), bestChild\n+ );\n+ }\n+ }\n+\n+ @Override\n+ public Query rewrite(IndexReader reader) throws IOException {\n+ final Query childRewrite = childQuery.rewrite(reader);\n+ if (childRewrite != childQuery) {\n+ return new XToParentBlockJoinQuery(childRewrite,\n+ parentsFilter,\n+ scoreMode);\n+ } else {\n+ return super.rewrite(reader);\n+ }\n+ }\n+\n+ @Override\n+ public String toString(String field) {\n+ return \"ToParentBlockJoinQuery (\"+childQuery.toString()+\")\";\n+ }\n+\n+ @Override\n+ public boolean equals(Object other) {\n+ return sameClassAs(other) &&\n+ equalsTo(getClass().cast(other));\n+ }\n+\n+ private boolean equalsTo(XToParentBlockJoinQuery other) {\n+ return childQuery.equals(other.childQuery) &&\n+ parentsFilter.equals(other.parentsFilter) &&\n+ scoreMode == other.scoreMode;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ final int prime = 31;\n+ int hash = classHash();\n+ hash = prime * hash + childQuery.hashCode();\n+ hash = prime * hash + scoreMode.hashCode();\n+ hash = prime * hash + parentsFilter.hashCode();\n+ return hash;\n+ }\n+}",
"filename": "core/src/main/java/org/apache/lucene/search/XToParentBlockJoinQuery.java",
"status": "added"
},
{
"diff": "@@ -23,24 +23,24 @@\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.Weight;\n+import org.apache.lucene.search.XToParentBlockJoinQuery;\n import org.apache.lucene.search.join.BitSetProducer;\n import org.apache.lucene.search.join.ScoreMode;\n-import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n \n import java.io.IOException;\n import java.util.Objects;\n \n-/** A {@link ToParentBlockJoinQuery} that allows to retrieve its nested path. */\n+/** A {@link XToParentBlockJoinQuery} that allows to retrieve its nested path. */\n public final class ESToParentBlockJoinQuery extends Query {\n \n- private final ToParentBlockJoinQuery query;\n+ private final XToParentBlockJoinQuery query;\n private final String path;\n \n public ESToParentBlockJoinQuery(Query childQuery, BitSetProducer parentsFilter, ScoreMode scoreMode, String path) {\n- this(new ToParentBlockJoinQuery(childQuery, parentsFilter, scoreMode), path);\n+ this(new XToParentBlockJoinQuery(childQuery, parentsFilter, scoreMode), path);\n }\n \n- private ESToParentBlockJoinQuery(ToParentBlockJoinQuery query, String path) {\n+ private ESToParentBlockJoinQuery(XToParentBlockJoinQuery query, String path) {\n this.query = query;\n this.path = path;\n }\n@@ -65,8 +65,8 @@ public Query rewrite(IndexReader reader) throws IOException {\n // a MatchNoDocsQuery if it realizes that it cannot match any docs and rewrites\n // to a MatchNoDocsQuery. In that case it would be fine to lose information\n // about the nested path.\n- if (innerRewrite instanceof ToParentBlockJoinQuery) {\n- return new ESToParentBlockJoinQuery((ToParentBlockJoinQuery) innerRewrite, path);\n+ if (innerRewrite instanceof XToParentBlockJoinQuery) {\n+ return new ESToParentBlockJoinQuery((XToParentBlockJoinQuery) innerRewrite, path);\n } else {\n return innerRewrite;\n }",
"filename": "core/src/main/java/org/elasticsearch/index/search/ESToParentBlockJoinQuery.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,98 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.lucene.search;\n+\n+import org.apache.lucene.document.Field.Store;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.join.BitSetProducer;\n+import org.apache.lucene.search.join.QueryBitSetProducer;\n+import org.apache.lucene.search.join.ScoreMode;\n+import org.apache.lucene.search.similarities.BasicStats;\n+import org.apache.lucene.search.similarities.Similarity;\n+import org.apache.lucene.search.similarities.SimilarityBase;\n+import org.apache.lucene.store.Directory;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.Collections;\n+\n+public class XToParentBlockJoinQueryTests extends ESTestCase {\n+\n+ public void testScoreMode() throws IOException {\n+ Similarity sim = new SimilarityBase() {\n+\n+ @Override\n+ public String toString() {\n+ return \"TestSim\";\n+ }\n+\n+ @Override\n+ protected float score(BasicStats stats, float freq, float docLen) {\n+ return freq;\n+ }\n+ };\n+ Directory dir = newDirectory();\n+ RandomIndexWriter w = new RandomIndexWriter(random(), dir, newIndexWriterConfig().setSimilarity(sim));\n+ w.addDocuments(Arrays.asList(\n+ Collections.singleton(newTextField(\"foo\", \"bar bar\", Store.NO)),\n+ Collections.singleton(newTextField(\"foo\", \"bar\", Store.NO)),\n+ Collections.emptyList(),\n+ Collections.singleton(newStringField(\"type\", new BytesRef(\"parent\"), Store.NO))));\n+ DirectoryReader reader = w.getReader();\n+ w.close();\n+ IndexSearcher searcher = newSearcher(reader);\n+ searcher.setSimilarity(sim);\n+ BitSetProducer parents = new QueryBitSetProducer(new TermQuery(new Term(\"type\", \"parent\")));\n+ for (ScoreMode scoreMode : ScoreMode.values()) {\n+ Query query = new XToParentBlockJoinQuery(new TermQuery(new Term(\"foo\", \"bar\")), parents, scoreMode);\n+ TopDocs topDocs = searcher.search(query, 10);\n+ assertEquals(1, topDocs.totalHits);\n+ assertEquals(3, topDocs.scoreDocs[0].doc);\n+ float expectedScore;\n+ switch (scoreMode) {\n+ case Avg:\n+ expectedScore = 1.5f;\n+ break;\n+ case Max:\n+ expectedScore = 2f;\n+ break;\n+ case Min:\n+ expectedScore = 1f;\n+ break;\n+ case None:\n+ expectedScore = 0f;\n+ break;\n+ case Total:\n+ expectedScore = 3f;\n+ break;\n+ default:\n+ throw new AssertionError();\n+ }\n+ assertEquals(expectedScore, topDocs.scoreDocs[0].score, 0f);\n+ }\n+ reader.close();\n+ dir.close();\n+ }\n+\n+}",
"filename": "core/src/test/java/org/apache/lucene/search/XToParentBlockJoinQueryTests.java",
"status": "added"
}
]
} |