issue: dict
pr: dict
pr_details: dict
{ "body": "Returning metadata customs by default is dangerous because they could be from a plugin that is not installed on a transport client. This commit modifies the cluster state action so that metadata customs are not returned instead of explicitly asked for. We make the same change on the REST layer, and we require that if metadata customs are requested, so is metadata.\r\n\r\nRelates #30731", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-05-25T02:38:56Z" }, { "body": "retest this please", "created_at": "2018-05-25T15:52:19Z" }, { "body": "@ywelsch Are you okay with that being a follow-up?", "created_at": "2018-05-25T15:52:29Z" }, { "body": "retest this please", "created_at": "2018-05-25T17:29:20Z" }, { "body": "run gradle build tests", "created_at": "2018-05-26T11:34:45Z" }, { "body": "With the newly introduced flag, users with the default distribution now have to choose between upgrading to the x-pack client or not showing any custom metadata at all (not even OSS one). The PR is also a breaking change as we will stop showing OSS custom metadata by default (e.g. index graveyard, ingest pipelines, repositories). In more detail, with the current PR:\r\n\r\n- 6.2/6.3 clients will by default stop returning custom metadata (even if that custom metadata was from OSS, e.g. index graveyard, ingest pipelines, repositories, ...).\r\n- OSS 6.3 client cannot get clusterstate from default distrib 6.3 cluster if specifying all() on request and not explicitly excluding metadata customs afterwards while the same worked on a default distrib 6.2 cluster.\r\n- OSS 6.3 client cannot get OSS custom metadata from a default distrib 6.3 cluster. When specifying metaCustoms, it has to choose between all or nothing.\r\n\r\nI'm doubtful about the use of the new flag and that it truly solves the problem, and my suggestion is therefore for the short term to go with the following approach instead:\r\n- not return x-pack custom metadata through the cluster state API. This will only affect ML metadata. It already has dedicated APIs to access the information, so there's no need to expose this through the cluster state API. If need be, we could add a system property to allow exposing this for duration of the 6.x series.\r\n- For persistent tasks, which has moved to OSS in 6.3, we can return it if streamoutput is on 6.3 at least.", "created_at": "2018-05-28T15:44:21Z" }, { "body": "> I'm doubtful about the use of the new flag and that it truly solves the problem, and my suggestion is therefore for the short term to go with the following approach instead:\r\n\r\nI am not sure this is useful to anybody and I will push hard to remove this stuff form this API in 7.0 it's been a mistake to expose this to begin with. \r\n\r\n> not return x-pack custom metadata through the cluster state API. This will only affect ML metadata. It already has dedicated APIs to access the information, so there's no need to expose this through the cluster state API. If need be, we could add a system property to allow exposing this for duration of the 6.x series.\r\n\r\n+1 we should never return plugin metadata here at any time\r\n\r\n> For persistent tasks, which has moved to OSS in 6.3, we can return it if streamoutput is on 6.3 at least.\r\n\r\nthis sounds good to me too", "created_at": "2018-05-30T18:47:28Z" }, { "body": "Superseded by #31020", "created_at": "2018-06-01T15:12:08Z" } ], "number": 30857, "title": "Do not return metadata customs by default" }
{ "body": "This commit removes ML and persistent task custom metadata from the cluster state API to avoid the possibility of sending such custom metadata to a client that can not understand. This can arise in a rolling upgrade to the default distribution from a prior version that did not have X-Pack.\r\n\r\nRelates #30731, relates #30857 ", "number": 30945, "review_comments": [ { "body": "are we sure that's OK? this is also used when publishing a cluster state to nodes. What happens when people upgrade from 6.2 + xpack to 6.3 + xpack in a rolling fashion?", "created_at": "2018-05-30T07:03:52Z" }, { "body": "The effect of losing the persistent tasks in a 6.2 -> 6.3 upgrade would be that running ML jobs and datafeeds would be killed and would have to be manually restarted.\r\n\r\nIt's currently documented best practice to stop ML jobs during upgrades. We are trying to make things more robust so this advice can be removed, but because the advice exists today I don't think it would be a complete disaster if ML jobs got killed during a rolling upgrade to 6.3 if it wasn't followed. (For upgrades beyond 6.3 this wouldn't be acceptable, as ML will be supported in Cloud and they'll be relying on ML jobs staying open during rolling upgrades from 6.3 to higher versions.)\r\n\r\nWe could also potentially document the `es.persistent_tasks.custom_metadata_minimum_version` system property and tell people who want to leave ML jobs running during a rolling upgrade to 6.3 that they need to set it.", "created_at": "2018-05-30T10:53:38Z" } ], "title": "Remove metadata customs that can break serialization" }
{ "commits": [ { "message": "Remove metadata customs that can break serialization\n\nThis commit removes ML and persistent task custom metadata from the\ncluster state API to avoid the possibility of sending such custom\nmetadata to a client that can not understand. This can arise in a\nrolling upgrade to the default distribution from a prior version that\ndid not have X-Pack." }, { "message": "Static" }, { "message": "Blank line" } ], "files": [ { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.NamedDiff;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.Strings;\n@@ -187,9 +188,37 @@ public long getNumberOfTasksOnNode(String nodeId, String taskName) {\n task -> taskName.equals(task.taskName) && nodeId.equals(task.assignment.executorNode)).count();\n }\n \n+ private static final Version MINIMAL_SUPPORTED_VERSION;\n+\n+ static {\n+ /*\n+ * Prior to version 6.3.0 of Elasticsearch, persistent tasks were in X-Pack and we could assume that every node in the cluster and\n+ * all connected clients also had the X-Pack plugin. The only constraint against sending persistent tasks custom metadata in\n+ * response to a cluster state request was that the version of the client was at least 5.4.0. When we migrated the persistent task\n+ * code to the server codebase, we introduced the possibility that we could communicating with a client on a version between 5.4.0\n+ * (inclusive) and 6.3.0 (exclusive). However, such a client could not understand persistent task metadata. As such, we not set the\n+ * minimal supported version on persistent tasks custom metadata to 6.3.0 as that is the first version that we are certain any\n+ * client of that version can understand persistent tasks custom metdata. However, this is a breaking change from previous versions\n+ * where for persistent tasks custom metadata we assumed that any client would have X-Pack installed. To reinstate the previous\n+ * behavior (e.g., for debugging) we provide the following undocumented system property. 
This system property could be removed at\n+ * any time.\n+ */\n+ final String property = System.getProperty(\"es.persistent_tasks.custom_metadata_minimum_version\", \"6.3.0\");\n+ final Version minimalSupportedVersion = Version.fromString(property);\n+ if (minimalSupportedVersion.before(Version.V_5_4_0)) {\n+ throw new IllegalArgumentException(\n+ \"es.persistent_tasks.custom_metadata_minimum_version must be after [5.4.0] but was [\" + property + \"]\");\n+ } else if (minimalSupportedVersion.after(Version.CURRENT)) {\n+ throw new IllegalArgumentException(\n+ \"es.persistent_tasks.custom_metadata_minimum_version must be before [\" + Version.CURRENT.toString()\n+ + \"] but was [\" + property + \"]\");\n+ }\n+ MINIMAL_SUPPORTED_VERSION = minimalSupportedVersion;\n+ }\n+\n @Override\n public Version getMinimalSupportedVersion() {\n- return Version.V_5_4_0;\n+ return MINIMAL_SUPPORTED_VERSION;\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/persistent/PersistentTasksCustomMetaData.java", "status": "modified" }, { "diff": "@@ -122,9 +122,27 @@ public String getWriteableName() {\n return MLMetadataField.TYPE;\n }\n \n+ private static final EnumSet<MetaData.XContentContext> CONTEXT;\n+\n+ static {\n+ /*\n+ * We generally can not expose ML custom metadata because we might be communicating with a client that does not understand such\n+ * custom metadata. This can happen, for example, when a transport client without the X-Pack plugin is connected to a 6.3.0 cluster\n+ * running the default distribution. To avoid sending such a client a metadata custom that it can not understand, we do not return\n+ * ML custom metadata in response to cluster state requests. However, this a breaking change from previous versions where we assumed\n+ * that any client would have X-Pack installed. To reinstate the previous behavior (e.g., for debugging) we provide the following\n+ * undocumented system property. This system property could be removed at any time.\n+ */\n+ if (Boolean.parseBoolean(System.getProperty(\"es.xpack.ml.api_metadata_context\", \"false\"))) {\n+ CONTEXT = MetaData.ALL_CONTEXTS;\n+ } else {\n+ CONTEXT = EnumSet.of(MetaData.XContentContext.GATEWAY, MetaData.XContentContext.SNAPSHOT);\n+ }\n+ }\n+\n @Override\n public EnumSet<MetaData.XContentContext> context() {\n- return MetaData.ALL_CONTEXTS;\n+ return CONTEXT;\n }\n \n @Override", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/MlMetadata.java", "status": "modified" } ] }
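The diff above gates persistent-task and ML customs on a minimum wire version. As a hedged sketch of the idea the record relies on, the code below shows how such a gate is typically consulted at serialization time: a custom is only written to a receiver whose stream version is at least the custom's minimal supported version. It assumes the Elasticsearch core classes Version and StreamOutput are on the classpath; VersionGatedCustom and shouldSerialize are illustrative names, not the actual Elasticsearch interfaces.

import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamOutput;

// Illustrative stand-in for the contract that getMinimalSupportedVersion() belongs to.
interface VersionGatedCustom {
    Version getMinimalSupportedVersion();
}

final class CustomSerializationGate {

    // A custom is written to the stream only if the receiver's wire version is at
    // least the custom's minimal supported version; otherwise it is dropped, which
    // is what keeps clients older than the gate from ever seeing the custom.
    static boolean shouldSerialize(final StreamOutput out, final VersionGatedCustom custom) {
        return out.getVersion().onOrAfter(custom.getMinimalSupportedVersion());
    }
}

Raising getMinimalSupportedVersion() to 6.3.0, as the diff does, therefore makes a gate of this shape skip persistent-task metadata for any pre-6.3.0 client.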
{ "body": "Returning metadata customs by default is dangerous because they could be from a plugin that is not installed on a transport client. This commit modifies the cluster state action so that metadata customs are not returned instead of explicitly asked for. We make the same change on the REST layer, and we require that if metadata customs are requested, so is metadata.\r\n\r\nRelates #30731", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-05-25T02:38:56Z" }, { "body": "retest this please", "created_at": "2018-05-25T15:52:19Z" }, { "body": "@ywelsch Are you okay with that being a follow-up?", "created_at": "2018-05-25T15:52:29Z" }, { "body": "retest this please", "created_at": "2018-05-25T17:29:20Z" }, { "body": "run gradle build tests", "created_at": "2018-05-26T11:34:45Z" }, { "body": "With the newly introduced flag, users with the default distribution now have to choose between upgrading to the x-pack client or not showing any custom metadata at all (not even OSS one). The PR is also a breaking change as we will stop showing OSS custom metadata by default (e.g. index graveyard, ingest pipelines, repositories). In more detail, with the current PR:\r\n\r\n- 6.2/6.3 clients will by default stop returning custom metadata (even if that custom metadata was from OSS, e.g. index graveyard, ingest pipelines, repositories, ...).\r\n- OSS 6.3 client cannot get clusterstate from default distrib 6.3 cluster if specifying all() on request and not explicitly excluding metadata customs afterwards while the same worked on a default distrib 6.2 cluster.\r\n- OSS 6.3 client cannot get OSS custom metadata from a default distrib 6.3 cluster. When specifying metaCustoms, it has to choose between all or nothing.\r\n\r\nI'm doubtful about the use of the new flag and that it truly solves the problem, and my suggestion is therefore for the short term to go with the following approach instead:\r\n- not return x-pack custom metadata through the cluster state API. This will only affect ML metadata. It already has dedicated APIs to access the information, so there's no need to expose this through the cluster state API. If need be, we could add a system property to allow exposing this for duration of the 6.x series.\r\n- For persistent tasks, which has moved to OSS in 6.3, we can return it if streamoutput is on 6.3 at least.", "created_at": "2018-05-28T15:44:21Z" }, { "body": "> I'm doubtful about the use of the new flag and that it truly solves the problem, and my suggestion is therefore for the short term to go with the following approach instead:\r\n\r\nI am not sure this is useful to anybody and I will push hard to remove this stuff form this API in 7.0 it's been a mistake to expose this to begin with. \r\n\r\n> not return x-pack custom metadata through the cluster state API. This will only affect ML metadata. It already has dedicated APIs to access the information, so there's no need to expose this through the cluster state API. If need be, we could add a system property to allow exposing this for duration of the 6.x series.\r\n\r\n+1 we should never return plugin metadata here at any time\r\n\r\n> For persistent tasks, which has moved to OSS in 6.3, we can return it if streamoutput is on 6.3 at least.\r\n\r\nthis sounds good to me too", "created_at": "2018-05-30T18:47:28Z" }, { "body": "Superseded by #31020", "created_at": "2018-06-01T15:12:08Z" } ], "number": 30857, "title": "Do not return metadata customs by default" }
{ "body": "ML has dedicated APIs for datafeeds and jobs yet base test classes and some tests were relying on the cluster state for this state. This commit removes this usage in favor of using the dedicated endpoints.\r\n\r\nRelates #30731, relates #30857\r\n", "number": 30941, "review_comments": [], "title": "Use dedicated ML APIs in tests" }
{ "commits": [ { "message": "Use dedicated ML APIs in tests\n\nML has dedicated APIs for datafeeds and jobs yet base test classes and\nsome tests were relying on the cluster state for this state. This commit\nremoves this usage in favor of using the dedicated endpoints." } ], "files": [ { "diff": "@@ -6,6 +6,8 @@\n package org.elasticsearch.xpack.core.ml.integration;\n \n import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.client.Request;\n+import org.elasticsearch.client.Response;\n import org.elasticsearch.client.RestClient;\n import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.test.rest.ESRestTestCase;\n@@ -35,10 +37,12 @@ public void clearMlMetadata() throws IOException {\n \n @SuppressWarnings(\"unchecked\")\n private void deleteAllDatafeeds() throws IOException {\n- Map<String, Object> clusterStateAsMap = testCase.entityAsMap(adminClient.performRequest(\"GET\", \"/_cluster/state\",\n- Collections.singletonMap(\"filter_path\", \"metadata.ml.datafeeds\")));\n- List<Map<String, Object>> datafeeds =\n- (List<Map<String, Object>>) XContentMapValues.extractValue(\"metadata.ml.datafeeds\", clusterStateAsMap);\n+ final Request datafeedsRequest = new Request(\"GET\", \"/_xpack/ml/datafeeds\");\n+ datafeedsRequest.addParameter(\"filter_path\", \"datafeeds\");\n+ final Response datafeedsResponse = adminClient.performRequest(datafeedsRequest);\n+ @SuppressWarnings(\"unchecked\")\n+ final List<Map<String, Object>> datafeeds =\n+ (List<Map<String, Object>>) XContentMapValues.extractValue(\"datafeeds\", testCase.entityAsMap(datafeedsResponse));\n if (datafeeds == null) {\n return;\n }\n@@ -75,11 +79,12 @@ private void deleteAllDatafeeds() throws IOException {\n }\n \n private void deleteAllJobs() throws IOException {\n- Map<String, Object> clusterStateAsMap = testCase.entityAsMap(adminClient.performRequest(\"GET\", \"/_cluster/state\",\n- Collections.singletonMap(\"filter_path\", \"metadata.ml.jobs\")));\n+ final Request jobsRequest = new Request(\"GET\", \"/_xpack/ml/anomaly_detectors\");\n+ jobsRequest.addParameter(\"filter_path\", \"jobs\");\n+ final Response response = adminClient.performRequest(jobsRequest);\n @SuppressWarnings(\"unchecked\")\n- List<Map<String, Object>> jobConfigs =\n- (List<Map<String, Object>>) XContentMapValues.extractValue(\"metadata.ml.jobs\", clusterStateAsMap);\n+ final List<Map<String, Object>> jobConfigs =\n+ (List<Map<String, Object>>) XContentMapValues.extractValue(\"jobs\", testCase.entityAsMap(response));\n if (jobConfigs == null) {\n return;\n }", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/integration/MlRestTestStateCleaner.java", "status": "modified" }, { "diff": "@@ -27,6 +27,10 @@\n import org.elasticsearch.test.MockHttpTransport;\n import org.elasticsearch.test.discovery.TestZenDiscovery;\n import org.elasticsearch.xpack.core.XPackSettings;\n+import org.elasticsearch.xpack.core.ml.action.GetDatafeedsAction;\n+import org.elasticsearch.xpack.core.ml.action.GetJobsAction;\n+import org.elasticsearch.xpack.core.ml.action.util.QueryPage;\n+import org.elasticsearch.xpack.core.ml.client.MachineLearningClient;\n import org.elasticsearch.xpack.ml.LocalStateMachineLearning;\n import org.elasticsearch.xpack.ml.MachineLearning;\n import org.elasticsearch.xpack.core.ml.MachineLearningField;\n@@ -271,7 +275,9 @@ public static GetDatafeedsStatsAction.Response.DatafeedStats getDatafeedStats(St\n }\n \n public static void deleteAllDatafeeds(Logger logger, Client client) throws 
Exception {\n- MlMetadata mlMetadata = MlMetadata.getMlMetadata(client.admin().cluster().prepareState().get().getState());\n+ final MachineLearningClient mlClient = new MachineLearningClient(client);\n+ final QueryPage<DatafeedConfig> datafeeds =\n+ mlClient.getDatafeeds(new GetDatafeedsAction.Request(GetDatafeedsAction.ALL)).actionGet().getResponse();\n try {\n logger.info(\"Closing all datafeeds (using _all)\");\n StopDatafeedAction.Response stopResponse = client\n@@ -292,25 +298,25 @@ public static void deleteAllDatafeeds(Logger logger, Client client) throws Excep\n \"Had to resort to force-stopping datafeed, something went wrong?\", e1);\n }\n \n- for (DatafeedConfig datafeed : mlMetadata.getDatafeeds().values()) {\n- String datafeedId = datafeed.getId();\n+ for (final DatafeedConfig datafeed : datafeeds.results()) {\n assertBusy(() -> {\n try {\n- GetDatafeedsStatsAction.Request request = new GetDatafeedsStatsAction.Request(datafeedId);\n+ GetDatafeedsStatsAction.Request request = new GetDatafeedsStatsAction.Request(datafeed.getId());\n GetDatafeedsStatsAction.Response r = client.execute(GetDatafeedsStatsAction.INSTANCE, request).get();\n assertThat(r.getResponse().results().get(0).getDatafeedState(), equalTo(DatafeedState.STOPPED));\n } catch (InterruptedException | ExecutionException e) {\n throw new RuntimeException(e);\n }\n });\n DeleteDatafeedAction.Response deleteResponse =\n- client.execute(DeleteDatafeedAction.INSTANCE, new DeleteDatafeedAction.Request(datafeedId)).get();\n+ client.execute(DeleteDatafeedAction.INSTANCE, new DeleteDatafeedAction.Request(datafeed.getId())).get();\n assertTrue(deleteResponse.isAcknowledged());\n }\n }\n \n public static void deleteAllJobs(Logger logger, Client client) throws Exception {\n- MlMetadata mlMetadata = MlMetadata.getMlMetadata(client.admin().cluster().prepareState().get().getState());\n+ final MachineLearningClient mlClient = new MachineLearningClient(client);\n+ final QueryPage<Job> jobs = mlClient.getJobs(new GetJobsAction.Request(MetaData.ALL)).actionGet().getResponse();\n \n try {\n CloseJobAction.Request closeRequest = new CloseJobAction.Request(MetaData.ALL);\n@@ -334,15 +340,14 @@ public static void deleteAllJobs(Logger logger, Client client) throws Exception\n e1);\n }\n \n- for (Map.Entry<String, Job> entry : mlMetadata.getJobs().entrySet()) {\n- String jobId = entry.getKey();\n+ for (final Job job : jobs.results()) {\n assertBusy(() -> {\n GetJobsStatsAction.Response statsResponse =\n- client().execute(GetJobsStatsAction.INSTANCE, new GetJobsStatsAction.Request(jobId)).actionGet();\n+ client().execute(GetJobsStatsAction.INSTANCE, new GetJobsStatsAction.Request(job.getId())).actionGet();\n assertEquals(JobState.CLOSED, statsResponse.getResponse().results().get(0).getState());\n });\n DeleteJobAction.Response response =\n- client.execute(DeleteJobAction.INSTANCE, new DeleteJobAction.Request(jobId)).get();\n+ client.execute(DeleteJobAction.INSTANCE, new DeleteJobAction.Request(job.getId())).get();\n assertTrue(response.isAcknowledged());\n }\n }", "filename": "x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/support/BaseMlIntegTestCase.java", "status": "modified" }, { "diff": "@@ -548,10 +548,9 @@\n - do:\n headers:\n Authorization: \"Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==\" # run as x_pack_rest_user, i.e. 
the test setup superuser\n- cluster.state:\n- metric: [ metadata ]\n- filter_path: metadata.persistent_tasks\n- - match: {\"metadata.persistent_tasks.tasks.0.task.xpack/ml/job.status.state\": opened}\n+ xpack.ml.get_job_stats:\n+ job_id: jobs-crud-close-job\n+ - match: {\"jobs.0.state\": opened}\n \n - do:\n xpack.ml.close_job:\n@@ -561,11 +560,9 @@\n - do:\n headers:\n Authorization: \"Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==\" # run as x_pack_rest_user, i.e. the test setup superuser\n- cluster.state:\n- metric: [ metadata ]\n- filter_path: metadata.persistent_tasks\n- - match:\n- metadata.persistent_tasks.tasks: []\n+ xpack.ml.get_job_stats:\n+ job_id: jobs-crud-close-job\n+ - match: {\"jobs.0.state\": closed}\n \n ---\n \"Test closing a closed job isn't an error\":\n@@ -789,10 +786,9 @@\n - do:\n headers:\n Authorization: \"Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==\" # run as x_pack_rest_user, i.e. the test setup superuser\n- cluster.state:\n- metric: [ metadata ]\n- filter_path: metadata.persistent_tasks\n- - match: {\"metadata.persistent_tasks.tasks.0.task.xpack/ml/job.status.state\": opened}\n+ xpack.ml.get_job_stats:\n+ job_id: jobs-crud-force-close-job\n+ - match: {\"jobs.0.state\": opened}\n \n - do:\n xpack.ml.close_job:\n@@ -803,11 +799,9 @@\n - do:\n headers:\n Authorization: \"Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==\" # run as x_pack_rest_user, i.e. the test setup superuser\n- cluster.state:\n- metric: [ metadata ]\n- filter_path: metadata.persistent_tasks\n- - match:\n- metadata.persistent_tasks.tasks: []\n+ xpack.ml.get_job_stats:\n+ job_id: jobs-crud-force-close-job\n+ - match: {\"jobs.0.state\": closed}\n \n ---\n \"Test force closing a closed job isn't an error\":", "filename": "x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_crud.yml", "status": "modified" } ] }
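As a companion to the test changes above, here is a minimal sketch of calling the dedicated ML endpoints with the low-level REST client instead of filtering /_cluster/state. It assumes the Elasticsearch low-level REST client and Apache HttpCore are on the classpath and a 6.x node exposing the _xpack-prefixed ML routes used in the diff; the class and method names are illustrative.

import java.io.IOException;

import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

final class MlEndpointsSketch {

    // Lists datafeed configurations via the dedicated ML API rather than the cluster state.
    static String listDatafeedsJson(final RestClient adminClient) throws IOException {
        final Request request = new Request("GET", "/_xpack/ml/datafeeds");
        request.addParameter("filter_path", "datafeeds");
        final Response response = adminClient.performRequest(request);
        return EntityUtils.toString(response.getEntity());
    }

    // Lists anomaly detection job configurations the same way.
    static String listJobsJson(final RestClient adminClient) throws IOException {
        final Request request = new Request("GET", "/_xpack/ml/anomaly_detectors");
        request.addParameter("filter_path", "jobs");
        final Response response = adminClient.performRequest(request);
        return EntityUtils.toString(response.getEntity());
    }
}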
{ "body": "see https://github.com/elastic/elasticsearch/issues/12292\n\nThis should return a 400 error instead\n", "comments": [ { "body": "@elastic/es-search-aggs ", "created_at": "2018-03-20T23:05:07Z" }, { "body": "Hi i m a newbie, is this issue fixed? if not can i take this issue .", "created_at": "2018-05-28T22:10:06Z" }, { "body": "@gauravmishra123 the issue is being worked on, sorry. Take a look at issues labeled \"adoptme\" and probably also \"low hanging fruit\", they should be relatively easy to get started with. In any case, pinging on the issue like you just did is a great idea and much appreciated!", "created_at": "2018-05-28T22:15:09Z" } ], "number": 12315, "title": "Search template render API returns 500 error on malformed template" }
{ "body": "Currently failures to compile a script usually lead to a ScriptException, which\r\ninherits the 500 INTERNAL_SERVER_ERROR from ElasticsearchException if it does\r\nnot contain another root cause. Issue #12315 suggests this should be a 400\r\ninstead for template compile errors, but I assume more generally for script\r\ncompilation errors. This changes ScriptException to return 400 (bad request) as\r\nthe status code and changes MustacheScriptEngine to convert any internal\r\nMustacheException to the more general ScriptException.\r\n\r\nCloses #12315\r\n", "number": 30861, "review_comments": [], "title": "Change ScriptException status to 400 (bad request)" }
{ "commits": [ { "message": "Change ScriptException return status\n\nCurrently failures to compile a script usually lead to a ScriptException, which\ninherits the 500 INTERNAL_SERVER_ERROR from ElasticsearchException if it does\nnot contain another root cause. Issue #12315 suggests this should be a 400\ninstead for template compile errors, but I assume more generally for script\ncompilation errors. This changes ScriptException to return 400 (bad request) as\nthe status code and changes MustacheScriptEngine to convert any internal\nMustacheException to the more general ScriptException.\n\nCloses #12315" }, { "message": "Fix docs test" }, { "message": "Fixing another qa smoke test" }, { "message": "Merge branch 'master' into fix-12315" }, { "message": "Fix x-pack qa test" }, { "message": "Merge branch 'master' into fix-12315" }, { "message": "Merge branch 'master' into fix-12315" }, { "message": "Add notes to migration docs" } ], "files": [ { "diff": "@@ -48,7 +48,7 @@ Which shows that the class of `doc.first` is\n \"java_class\": \"org.elasticsearch.index.fielddata.ScriptDocValues$Longs\",\n ...\n },\n- \"status\": 500\n+ \"status\": 400\n }\n ---------------------------------------------------------\n // TESTRESPONSE[s/\\.\\.\\./\"script_stack\": $body.error.script_stack, \"script\": $body.error.script, \"lang\": $body.error.lang, \"caused_by\": $body.error.caused_by, \"root_cause\": $body.error.root_cause, \"reason\": $body.error.reason/]", "filename": "docs/painless/painless-debugging.asciidoc", "status": "modified" }, { "diff": "@@ -11,3 +11,9 @@ the getter methods for date objects were deprecated. These methods have\n now been removed. Instead, use `.value` on `date` fields, or explicitly\n parse `long` fields into a date object using\n `Instance.ofEpochMillis(doc[\"myfield\"].value)`.\n+\n+==== Script errors will return as `400` error codes\n+\n+Malformed scripts, either in search templates, ingest pipelines or search \n+requests, return `400 - Bad request` while they would previously return\n+`500 - Internal Server Error`. 
This also applies for stored scripts.", "filename": "docs/reference/migration/migrate_7_0/scripting.asciidoc", "status": "modified" }, { "diff": "@@ -43,7 +43,7 @@ The Search API returns `400 - Bad request` while it would previously return\n * the number of slices is too large\n * keep alive for scroll is too large\n * number of filters in the adjacency matrix aggregation is too large\n-\n+* script compilation errors\n \n ==== Scroll queries cannot use the `request_cache` anymore\n ", "filename": "docs/reference/migration/migrate_7_0/search.asciidoc", "status": "modified" }, { "diff": "@@ -19,9 +19,9 @@\n package org.elasticsearch.script.mustache;\n \n import com.github.mustachejava.Mustache;\n+import com.github.mustachejava.MustacheException;\n import com.github.mustachejava.MustacheFactory;\n \n-import java.io.StringReader;\n import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n@@ -31,12 +31,15 @@\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptEngine;\n+import org.elasticsearch.script.ScriptException;\n import org.elasticsearch.script.TemplateScript;\n \n import java.io.Reader;\n+import java.io.StringReader;\n import java.io.StringWriter;\n import java.security.AccessController;\n import java.security.PrivilegedAction;\n+import java.util.Collections;\n import java.util.Map;\n \n /**\n@@ -66,9 +69,14 @@ public <T> T compile(String templateName, String templateSource, ScriptContext<T\n }\n final MustacheFactory factory = createMustacheFactory(options);\n Reader reader = new StringReader(templateSource);\n- Mustache template = factory.compile(reader, \"query-template\");\n- TemplateScript.Factory compiled = params -> new MustacheExecutableScript(template, params);\n- return context.factoryClazz.cast(compiled);\n+ try {\n+ Mustache template = factory.compile(reader, \"query-template\");\n+ TemplateScript.Factory compiled = params -> new MustacheExecutableScript(template, params);\n+ return context.factoryClazz.cast(compiled);\n+ } catch (MustacheException ex) {\n+ throw new ScriptException(ex.getMessage(), ex, Collections.emptyList(), templateSource, NAME);\n+ }\n+\n }\n \n private CustomMustacheFactory createMustacheFactory(Map<String, String> options) {", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/MustacheScriptEngine.java", "status": "modified" }, { "diff": "@@ -18,6 +18,15 @@\n */\n package org.elasticsearch.script.mustache;\n \n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.script.ScriptEngine;\n+import org.elasticsearch.script.ScriptException;\n+import org.elasticsearch.script.TemplateScript;\n+import org.elasticsearch.test.ESTestCase;\n+import org.hamcrest.Matcher;\n+\n import java.net.URLEncoder;\n import java.nio.charset.StandardCharsets;\n import java.util.Arrays;\n@@ -29,15 +38,6 @@\n import java.util.Map;\n import java.util.Set;\n \n-import com.github.mustachejava.MustacheException;\n-import org.elasticsearch.common.bytes.BytesReference;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentHelper;\n-import org.elasticsearch.script.ScriptEngine;\n-import org.elasticsearch.script.TemplateScript;\n-import org.elasticsearch.test.ESTestCase;\n-import 
org.hamcrest.Matcher;\n-\n import static java.util.Collections.singleton;\n import static java.util.Collections.singletonMap;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -225,11 +225,17 @@ public void testSimpleListToJSON() throws Exception {\n }\n \n public void testsUnsupportedTagsToJson() {\n- MustacheException e = expectThrows(MustacheException.class, () -> compile(\"{{#toJson}}{{foo}}{{bar}}{{/toJson}}\"));\n+ final String script = \"{{#toJson}}{{foo}}{{bar}}{{/toJson}}\";\n+ ScriptException e = expectThrows(ScriptException.class, () -> compile(script));\n assertThat(e.getMessage(), containsString(\"Mustache function [toJson] must contain one and only one identifier\"));\n+ assertEquals(MustacheScriptEngine.NAME, e.getLang());\n+ assertEquals(script, e.getScript());\n \n- e = expectThrows(MustacheException.class, () -> compile(\"{{#toJson}}{{/toJson}}\"));\n+ final String script2 = \"{{#toJson}}{{/toJson}}\";\n+ e = expectThrows(ScriptException.class, () -> compile(script2));\n assertThat(e.getMessage(), containsString(\"Mustache function [toJson] must contain one and only one identifier\"));\n+ assertEquals(MustacheScriptEngine.NAME, e.getLang());\n+ assertEquals(script2, e.getScript());\n }\n \n public void testEmbeddedToJSON() throws Exception {\n@@ -312,11 +318,17 @@ public void testJoinWithToJson() {\n }\n \n public void testsUnsupportedTagsJoin() {\n- MustacheException e = expectThrows(MustacheException.class, () -> compile(\"{{#join}}{{/join}}\"));\n+ final String script = \"{{#join}}{{/join}}\";\n+ ScriptException e = expectThrows(ScriptException.class, () -> compile(script));\n assertThat(e.getMessage(), containsString(\"Mustache function [join] must contain one and only one identifier\"));\n+ assertEquals(MustacheScriptEngine.NAME, e.getLang());\n+ assertEquals(script, e.getScript());\n \n- e = expectThrows(MustacheException.class, () -> compile(\"{{#join delimiter='a'}}{{/join delimiter='b'}}\"));\n+ final String script2 = \"{{#join delimiter='a'}}{{/join delimiter='b'}}\";\n+ e = expectThrows(ScriptException.class, () -> compile(script2));\n assertThat(e.getMessage(), containsString(\"Mismatched start/end tags\"));\n+ assertEquals(MustacheScriptEngine.NAME, e.getLang());\n+ assertEquals(script2, e.getScript());\n }\n \n public void testJoinWithCustomDelimiter() {", "filename": "modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/MustacheTests.java", "status": "modified" }, { "diff": "@@ -35,7 +35,7 @@\n id: \"non_existing\"\n \n - do:\n- catch: request\n+ catch: bad_request\n put_script:\n id: \"1\"\n context: \"search\"", "filename": "modules/lang-painless/src/test/resources/rest-api-spec/test/painless/16_update2.yml", "status": "modified" }, { "diff": "@@ -133,7 +133,7 @@ setup:\n ---\n \"Scripted Field with script error\":\n - do:\n- catch: request\n+ catch: bad_request\n search:\n body:\n script_fields:", "filename": "modules/lang-painless/src/test/resources/rest-api-spec/test/painless/20_scriptfield.yml", "status": "modified" }, { "diff": "@@ -17,7 +17,7 @@\n indices.refresh: {}\n \n - do:\n- catch: request\n+ catch: bad_request\n reindex:\n body:\n source:", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/reindex/35_search_failures.yml", "status": "modified" }, { "diff": "@@ -446,7 +446,7 @@\n indices.refresh: {}\n \n - do:\n- catch: request\n+ catch: bad_request\n reindex:\n refresh: true\n body:", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/reindex/85_scripting.yml", 
"status": "modified" }, { "diff": "@@ -17,7 +17,7 @@\n indices.refresh: {}\n \n - do:\n- catch: request\n+ catch: bad_request\n update_by_query:\n index: source\n body:", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/update_by_query/35_search_failure.yml", "status": "modified" }, { "diff": "@@ -434,7 +434,7 @@\n indices.refresh: {}\n \n - do:\n- catch: request\n+ catch: bad_request\n update_by_query:\n index: twitter\n refresh: true", "filename": "modules/reindex/src/test/resources/rest-api-spec/test/update_by_query/80_scripting.yml", "status": "modified" }, { "diff": "@@ -332,7 +332,7 @@\n wait_for_status: green\n \n - do:\n- catch: request\n+ catch: bad_request\n ingest.put_pipeline:\n id: \"my_pipeline_1\"\n body: >\n@@ -348,5 +348,5 @@\n ]\n }\n - match: { error.header.processor_type: \"set\" }\n- - match: { error.type: \"general_script_exception\" }\n- - match: { error.reason: \"Failed to compile inline script [{{#join}}{{/join}}] using lang [mustache]\" }\n+ - match: { error.type: \"script_exception\" }\n+ - match: { error.reason: \"Mustache function [join] must contain one and only one identifier\" }", "filename": "qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/10_pipeline_with_mustache_templates.yml", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@\n ---\n \"Test script processor with syntax error in inline script\":\n - do:\n- catch: request\n+ catch: bad_request\n ingest.put_pipeline:\n id: \"my_pipeline\"\n body: >", "filename": "qa/smoke-test-ingest-with-all-dependencies/src/test/resources/rest-api-spec/test/ingest/50_script_processor_using_painless.yml", "status": "modified" }, { "diff": "@@ -1,5 +1,14 @@\n package org.elasticsearch.script;\n \n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.rest.RestStatus;\n+\n /*\n * Licensed to Elasticsearch under one or more contributor\n * license agreements. 
See the NOTICE file distributed with\n@@ -25,14 +34,6 @@\n import java.util.List;\n import java.util.Objects;\n \n-import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.xcontent.ToXContent;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-\n /**\n * Exception from a scripting engine.\n * <p>\n@@ -132,4 +133,9 @@ public String toJsonString() {\n throw new RuntimeException(e);\n }\n }\n+\n+ @Override\n+ public RestStatus status() {\n+ return RestStatus.BAD_REQUEST;\n+ }\n }", "filename": "server/src/main/java/org/elasticsearch/script/ScriptException.java", "status": "modified" }, { "diff": "@@ -5,7 +5,7 @@\n wait_for_status: green\n \n - do:\n- catch: request\n+ catch: bad_request\n xpack.watcher.put_watch:\n id: \"my_exe_watch\"\n body: >\n@@ -33,7 +33,7 @@\n }\n \n - is_true: error.script_stack\n- - match: { status: 500 }\n+ - match: { status: 400 }\n \n ---\n \"Test painless exceptions are returned when logging a broken response\":", "filename": "x-pack/qa/smoke-test-watcher/src/test/resources/rest-api-spec/test/painless/40_exception.yml", "status": "modified" } ] }
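A minimal sketch of what the status change above means in practice, assuming the PR's ScriptException.status() override is applied; the constructor arguments mirror the MustacheScriptEngine change in the diff, while the class name and hard-coded "mustache" language are illustrative.

import java.util.Collections;

import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.script.ScriptException;

final class ScriptExceptionStatusSketch {

    // Builds a ScriptException the way MustacheScriptEngine now does for compile
    // failures and reports the HTTP status it maps to.
    static RestStatus statusOf(final String reason, final String templateSource) {
        final ScriptException e = new ScriptException(reason, new IllegalArgumentException(reason),
            Collections.emptyList(), templateSource, "mustache");
        return e.status(); // RestStatus.BAD_REQUEST with this change; it inherited 500 before
    }
}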
{ "body": "It looks like I have an unusual umask on my laptop:\r\n```\r\n[manybubbles@localhost ~]$ umask\r\n0002\r\n```\r\n\r\nI'm not sure how I got it but now that I have it I notice that the packaging tests don't pass:\r\n```\r\n$ ./gradlew -p qa/vagrant/ packagingTest\r\n...\r\n# Expected privileges: 644, found 664 [/usr/share/elasticsearch/README.textile]\r\n```\r\n\r\nI think we should try and build the same packages regardless of my personal umask setting.", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-05-22T23:03:58Z" }, { "body": "It looks like this isn't actually based on the umask. There are other machines and users with the same umask that aren't getting this.", "created_at": "2018-05-23T16:59:11Z" }, { "body": "This isn't caused by my umask:\r\n```\r\n[manybubbles@localhost elasticsearch]$ ./gradlew -p distribution clean\r\n...\r\n[manybubbles@localhost elasticsearch]$ umask 0022\r\n[manybubbles@localhost elasticsearch]$ umask\r\n0022\r\n[manybubbles@localhost elasticsearch]$ ./gradlew -p qa/vagrant/ packagingTest\r\n...\r\n# Expected privileges: 644, found 664 [/usr/share/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar]\r\n```", "created_at": "2018-05-23T17:48:02Z" }, { "body": "I was wrong about it not being my umask. If I `git clean -xdf` after setting my umask to `0022` then I get *further*. The tests still fail because my clone of elasticsearch has `README.textile` as `664` rather than `644`. I *think* the umask that you had when you cloned Elasticsearch influences that one.\r\n\r\n", "created_at": "2018-05-23T18:30:39Z" } ], "number": 30799, "title": "Permissions on files in packages depends on *my* umask" }
{ "body": "Applies default file and directory permissions to zip distributions\r\nsimilar to how they're set for the tar distributions. Previously zip\r\ndistributions would retain permissions they had on the build host's\r\nworking tree, which could vary depending on its umask\r\n\r\nFor #30799", "number": 30854, "review_comments": [], "title": "stable filemode for zip distributions" }
{ "commits": [ { "message": "stable filemode for zip distributions\n\nApplies default file and directory permissions to zip distributions\nsimilar to how they're set for the tar distributions. Previously zip\ndistributions would retain permissions they had on the build host's\nworking tree, which could vary depending on its umask\n\nFor #30799" } ], "files": [ { "diff": "@@ -104,15 +104,23 @@ tasks.withType(AbstractArchiveTask) {\n baseName = \"elasticsearch${ subdir.contains('oss') ? '-oss' : ''}\"\n }\n \n+Closure commonZipConfig = {\n+ dirMode 0755\n+ fileMode 0644\n+}\n+\n task buildIntegTestZip(type: Zip) {\n+ configure(commonZipConfig)\n with archiveFiles(transportModulesFiles, 'zip', false)\n }\n \n task buildZip(type: Zip) {\n+ configure(commonZipConfig)\n with archiveFiles(modulesFiles(false), 'zip', false)\n }\n \n task buildOssZip(type: Zip) {\n+ configure(commonZipConfig)\n with archiveFiles(modulesFiles(true), 'zip', true)\n }\n ", "filename": "distribution/archives/build.gradle", "status": "modified" } ] }
{ "body": "Follow-up of #22146, which has been stalled for a long time and probably needs to be re-done entirely now.\r\n\r\nCurrently docvalues_fields return the values of the fields as they are stored\r\nin doc values. I don't like that it exposes implementation details, but there\r\nare also user-facing issues like the fact it cannot work with binary fields.\r\nThis change will also make it easier for users to reindex if they do not store\r\nthe source, since docvalues_fields will return data is such a format that it\r\ncan be put in an indexing request with the same mappings.", "comments": [ { "body": "Just a random aside, but it used to be the case with Kibana that they relied on\r\nthe non-formatting of doc_values for displaying data. For example, with dates\r\nthey used to ask for `fielddata_fields`/`docvalues_fields` of the date to get it\r\nback in the epoch long format, rather than `YYYY-mm-dd:HH:mm:ss,SSS` (Kibana may\r\nhave since changed not to use it, I just wanted to leave a warning that we\r\nshould check).\r\n", "created_at": "2017-10-17T13:30:32Z" }, { "body": "Absolutely, this is actually the reason that I delayed the merging since it required synchronization with Kibana at a time when I had few free cycles.", "created_at": "2017-10-17T22:05:54Z" }, { "body": "Pinging @elastic/es-search-aggs ", "created_at": "2018-03-23T21:57:24Z" }, { "body": "Fixed by #30831.", "created_at": "2019-02-01T08:45:01Z" } ], "number": 26948, "title": "Format doc values fields" }
{ "body": "Doc-value fields now return a value that is based on the mappings rather than\r\nthe script implementation by default.\r\n\r\nThis deprecates the special `use_field_mapping` docvalue format which was added\r\nin #29639 only to ease the transition to 7.x and it is not necessary anymore in\r\n7.0.\r\n\r\nCloses #26948", "number": 30831, "review_comments": [ { "body": "`with` -> `within`?", "created_at": "2018-06-22T12:07:27Z" }, { "body": "Not sure but should we have a test to check for warnings that runs on 6.4-7.0 versions for the mixed cluster bwc tests?", "created_at": "2018-06-22T12:12:19Z" }, { "body": "A test that would have \"6.3.99 - 6.99.99\" as a version would never run since 7.0 is not included?", "created_at": "2018-06-25T17:01:36Z" }, { "body": "hmmm I think I might have a slightly misunderstanding on how the mixed cluster tests work then as I thought we set up a mixed cluster (let say 5 modes with 3 as 6.x and 2 as 7.0) and then we ran the YAML tests against the 6.x nodes using the 6.x version to work out which tests to run. Though now that I'm writing this I realise that as long as the 6.x version tests that the warning is sent (which it does) it should be enough and we probably shouldn't have 7.0 effectively test somethign in 6.x.", "created_at": "2018-06-26T07:02:39Z" }, { "body": "Weeps I had missed that you were talking specifically about the bw tests. Yes, my preference would be to not test the warning, especially as things might depend on which node acts as a coordinating node for the search request.", "created_at": "2018-06-27T12:25:04Z" } ], "title": "Use mappings to format doc-value fields by default." }
{ "commits": [ { "message": "Use mappings to format doc-value fields by default.\n\nDoc-value fields now return a value that is based on the mappings rather than\nthe script implementation by default.\n\nThis deprecates the special `use_field_mapping` docvalue format which was added\nin #29639 only to ease the transition to 7.x and it is not necessary anymore in\n7.0." }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "iter" }, { "message": "iter" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "iter" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" }, { "message": "iter" }, { "message": "Merge branch 'master' into remove_use_field_mapping" }, { "message": "iter" } ], "files": [ { "diff": "@@ -122,6 +122,16 @@ using the \"all fields\" mode (\"default_field\": \"*\") or other fieldname expansions\n Search requests with extra content after the main object will no longer be accepted\n by the `_search` endpoint. A parsing exception will be thrown instead.\n \n+[float]\n+==== Doc-value fields default format\n+\n+The format of doc-value fields is changing to be the same as what could be\n+obtained in 6.x with the special `use_field_mapping` format. This is mostly a\n+change for date fields, which are now formatted based on the format that is\n+configured in the mappings by default. 
This behavior can be changed by\n+specifying a <<search-request-docvalue-fields,`format`>> within the doc-value\n+field.\n+\n [float]\n ==== Context Completion Suggester\n ", "filename": "docs/reference/migration/migrate_7_0/search.asciidoc", "status": "modified" }, { "diff": "@@ -12,9 +12,9 @@ GET /_search\n \"match_all\": {}\n },\n \"docvalue_fields\" : [\n+ \"my_ip_field\", <1>\n {\n- \"field\": \"my_ip_field\", <1>\n- \"format\": \"use_field_mapping\" <2>\n+ \"field\": \"my_keyword_field\" <2>\n },\n {\n \"field\": \"my_date_field\",\n@@ -25,10 +25,10 @@ GET /_search\n --------------------------------------------------\n // CONSOLE\n <1> the name of the field\n-<2> the special `use_field_mapping` format tells Elasticsearch to use the format from the mapping\n-<3> date fields may use a custom format\n+<2> an object notation is supported as well\n+<3> the object notation allows to specify a custom format\n \n-Doc value fields can work on fields that are not stored.\n+Doc value fields can work on fields that have doc-values enabled, regardless of whether they are stored\n \n `*` can be used as a wild card, for example:\n \n@@ -41,8 +41,8 @@ GET /_search\n },\n \"docvalue_fields\" : [\n {\n- \"field\": \"*field\", <1>\n- \"format\": \"use_field_mapping\" <2>\n+ \"field\": \"*_date_field\", <1>\n+ \"format\": \"epoch_millis\" <2>\n }\n ]\n }\n@@ -62,9 +62,8 @@ While most fields do not support custom formats, some of them do:\n - <<date,Date>> fields can take any <<mapping-date-format,date format>>.\n - <<number,Numeric>> fields accept a https://docs.oracle.com/javase/8/docs/api/java/text/DecimalFormat.html[DecimalFormat pattern].\n \n-All fields support the special `use_field_mapping` format, which tells\n-Elasticsearch to use the mappings to figure out a default format.\n+By default fields are formatted based on a sensible configuration that depends\n+on their mappings: `long`, `double` and other numeric fields are formatted as\n+numbers, `keyword` fields are formatted as strings, `date` fields are formatted\n+with the configured `date` format, etc.\n \n-NOTE: The default is currently to return the same output as\n-<<search-request-script-fields,script fields>>. 
However it will change in 7.0\n-to behave as if the `use_field_mapping` format was provided.", "filename": "docs/reference/search/request/docvalue-fields.asciidoc", "status": "modified" }, { "diff": "@@ -246,10 +246,7 @@ POST test/_search\n \"inner_hits\": {\n \"_source\" : false,\n \"docvalue_fields\" : [\n- {\n- \"field\": \"comments.text.keyword\",\n- \"format\": \"use_field_mapping\"\n- }\n+ \"comments.text.keyword\"\n ]\n }\n }", "filename": "docs/reference/search/request/inner-hits.asciidoc", "status": "modified" }, { "diff": "@@ -27,8 +27,7 @@ Which returns:\n \"size\" : 10,\n \"docvalue_fields\" : [\n {\n- \"field\": \"page_count\",\n- \"format\": \"use_field_mapping\"\n+ \"field\": \"page_count\"\n },\n {\n \"field\": \"release_date\",", "filename": "docs/reference/sql/endpoints/translate.asciidoc", "status": "modified" }, { "diff": "@@ -196,7 +196,7 @@ public void testQueryBuilderBWC() throws Exception {\n QueryBuilder expectedQueryBuilder = (QueryBuilder) CANDIDATES.get(i)[1];\n Request request = new Request(\"GET\", \"/\" + index + \"/_search\");\n request.setJsonEntity(\"{\\\"query\\\": {\\\"ids\\\": {\\\"values\\\": [\\\"\" + Integer.toString(i) + \"\\\"]}}, \" +\n- \"\\\"docvalue_fields\\\": [{\\\"field\\\":\\\"query.query_builder_field\\\", \\\"format\\\":\\\"use_field_mapping\\\"}]}\");\n+ \"\\\"docvalue_fields\\\": [{\\\"field\\\":\\\"query.query_builder_field\\\"}]}\");\n Response rsp = client().performRequest(request);\n assertEquals(200, rsp.getStatusLine().getStatusCode());\n Map<?, ?> hitRsp = (Map<?, ?>) ((List<?>) ((Map<?, ?>)toMap(rsp).get(\"hits\")).get(\"hits\")).get(0);", "filename": "qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/QueryBuilderBWCIT.java", "status": "modified" }, { "diff": "@@ -46,8 +46,8 @@ setup:\n \"Nested doc version and seqIDs\":\n \n - skip:\n- version: \" - 6.3.99\"\n- reason: \"object notation for docvalue_fields was introduced in 6.4\"\n+ version: \" - 6.99.99\"\n+ reason: \"Triggers warnings before 7.0\"\n \n - do:\n index:\n@@ -62,7 +62,7 @@ setup:\n - do:\n search:\n rest_total_hits_as_int: true\n- body: { \"query\" : { \"nested\" : { \"path\" : \"nested_field\", \"query\" : { \"match_all\" : {} }, \"inner_hits\" : { version: true, \"docvalue_fields\": [ { \"field\": \"_seq_no\", \"format\": \"use_field_mapping\" } ]} }}, \"version\": true, \"docvalue_fields\" : [ { \"field\": \"_seq_no\", \"format\": \"use_field_mapping\" } ] }\n+ body: { \"query\" : { \"nested\" : { \"path\" : \"nested_field\", \"query\" : { \"match_all\" : {} }, \"inner_hits\" : { version: true, \"docvalue_fields\": [ \"_seq_no\" ]} }}, \"version\": true, \"docvalue_fields\" : [ \"_seq_no\" ] }\n \n - match: { hits.total: 1 }\n - match: { hits.hits.0._index: \"test\" }\n@@ -86,7 +86,7 @@ setup:\n - do:\n search:\n rest_total_hits_as_int: true\n- body: { \"query\" : { \"nested\" : { \"path\" : \"nested_field\", \"query\" : { \"match_all\" : {} }, \"inner_hits\" : { version: true, \"docvalue_fields\": [ { \"field\": \"_seq_no\", \"format\": \"use_field_mapping\" } ]} }}, \"version\": true, \"docvalue_fields\" : [ { \"field\": \"_seq_no\", \"format\": \"use_field_mapping\" } ] }\n+ body: { \"query\" : { \"nested\" : { \"path\" : \"nested_field\", \"query\" : { \"match_all\" : {} }, \"inner_hits\" : { version: true, \"docvalue_fields\": [ \"_seq_no\" ]} }}, \"version\": true, \"docvalue_fields\" : [ \"_seq_no\" ] }\n \n - match: { hits.total: 1 }\n - match: { hits.hits.0._index: \"test\" }", "filename": 
"rest-api-spec/src/main/resources/rest-api-spec/test/search.inner_hits/10_basic.yml", "status": "modified" }, { "diff": "@@ -144,12 +144,9 @@ setup:\n ---\n \"docvalue_fields\":\n - skip:\n- version: \" - 6.4.0\"\n- reason: format option was added in 6.4 and the deprecation message changed in 6.4.1\n- features: warnings\n+ version: \" - 6.9.99\"\n+ reason: Triggers a deprecation warning before 7.0\n - do:\n- warnings:\n- - 'There are doc-value fields which are not using a format. The output will change in 7.0 when doc value fields get formatted based on mappings by default. It is recommended to pass [format=use_field_mapping] with a doc value field in order to opt in for the future behaviour and ease the migration to 7.0: [count]'\n search:\n body:\n docvalue_fields: [ \"count\" ]\n@@ -158,12 +155,9 @@ setup:\n ---\n \"multiple docvalue_fields\":\n - skip:\n- version: \" - 6.4.0\"\n- reason: format option was added in 6.4 and the deprecation message changed in 6.4.1\n- features: warnings\n+ version: \" - 6.9.99\"\n+ reason: Triggered a deprecation warning before 7.0\n - do:\n- warnings:\n- - 'There are doc-value fields which are not using a format. The output will change in 7.0 when doc value fields get formatted based on mappings by default. It is recommended to pass [format=use_field_mapping] with a doc value field in order to opt in for the future behaviour and ease the migration to 7.0: [count, include.field1.keyword]'\n search:\n body:\n docvalue_fields: [ \"count\", \"include.field1.keyword\" ]\n@@ -172,22 +166,22 @@ setup:\n ---\n \"docvalue_fields as url param\":\n - skip:\n- version: \" - 6.4.0\"\n- reason: format option was added in 6.4 and the deprecation message changed in 6.4.1\n- features: warnings\n+ version: \" - 6.99.99\"\n+ reason: Triggered a deprecation warning before 7.0\n - do:\n- warnings:\n- - 'There are doc-value fields which are not using a format. The output will change in 7.0 when doc value fields get formatted based on mappings by default. It is recommended to pass [format=use_field_mapping] with a doc value field in order to opt in for the future behaviour and ease the migration to 7.0: [count]'\n search:\n docvalue_fields: [ \"count\" ]\n - match: { hits.hits.0.fields.count: [1] }\n \n ---\n \"docvalue_fields with default format\":\n - skip:\n- version: \" - 6.3.99\"\n- reason: format option was added in 6.4\n+ version: \" - 6.99.99\"\n+ reason: Only triggers warnings on 7.0+\n+ features: warnings\n - do:\n+ warnings:\n+ - \"[use_field_mapping] is a special format that was only used to ease the transition to 7.x. It has become the default and shouldn't be set explicitly anymore.\"\n search:\n body:\n docvalue_fields:", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/10_source_filtering.yml", "status": "modified" }, { "diff": "@@ -67,8 +67,8 @@ setup:\n \"Docvalues_fields size limit\":\n \n - skip:\n- version: \" - 6.3.99\"\n- reason: \"The object notation for docvalue_fields is only supported on 6.4+\"\n+ version: \" - 6.99.99\"\n+ reason: \"Triggers warnings before 7.0\"\n - do:\n catch: /Trying to retrieve too many docvalue_fields\\. Must be less than or equal to[:] \\[2\\] but was \\[3\\]\\. 
This limit can be set by changing the \\[index.max_docvalue_fields_search\\] index level setting\\./\n search:\n@@ -78,12 +78,9 @@ setup:\n query:\n match_all: {}\n docvalue_fields:\n- - field: \"one\"\n- format: \"use_field_mapping\"\n- - field: \"two\"\n- format: \"use_field_mapping\"\n- - field: \"three\"\n- format: \"use_field_mapping\"\n+ - \"one\"\n+ - \"two\"\n+ - \"three\"\n \n ---\n \"Script_fields size limit\":", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/30_limits.yml", "status": "modified" }, { "diff": "@@ -38,8 +38,6 @@\n */\n public class DocValueFieldsContext {\n \n- public static final String USE_DEFAULT_FORMAT = \"use_field_mapping\";\n-\n /**\n * Wrapper around a field name and the format that should be used to\n * display values of this field.", "filename": "server/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsContext.java", "status": "modified" }, { "diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.index.fielddata.AtomicNumericFieldData;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.IndexNumericFieldData;\n-import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -46,7 +45,6 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Objects;\n-import java.util.stream.Collectors;\n \n /**\n * Query sub phase which pulls data from doc values\n@@ -55,7 +53,8 @@\n */\n public final class DocValueFieldsFetchSubPhase implements FetchSubPhase {\n \n- private static final DeprecationLogger deprecationLogger = new DeprecationLogger(\n+ private static final String USE_DEFAULT_FORMAT = \"use_field_mapping\";\n+ private static final DeprecationLogger DEPRECATION_LOGGER = new DeprecationLogger(\n LogManager.getLogger(DocValueFieldsFetchSubPhase.class));\n \n @Override\n@@ -66,9 +65,9 @@ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOExcept\n String name = context.collapse().getFieldName();\n if (context.docValueFieldsContext() == null) {\n context.docValueFieldsContext(new DocValueFieldsContext(\n- Collections.singletonList(new FieldAndFormat(name, DocValueFieldsContext.USE_DEFAULT_FORMAT))));\n+ Collections.singletonList(new FieldAndFormat(name, null))));\n } else if (context.docValueFieldsContext().fields().stream().map(ff -> ff.field).anyMatch(name::equals) == false) {\n- context.docValueFieldsContext().fields().add(new FieldAndFormat(name, DocValueFieldsContext.USE_DEFAULT_FORMAT));\n+ context.docValueFieldsContext().fields().add(new FieldAndFormat(name, null));\n }\n }\n \n@@ -79,33 +78,28 @@ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOExcept\n hits = hits.clone(); // don't modify the incoming hits\n Arrays.sort(hits, Comparator.comparingInt(SearchHit::docId));\n \n- List<String> noFormatFields = context.docValueFieldsContext().fields().stream().filter(f -> f.format == null).map(f -> f.field)\n- .collect(Collectors.toList());\n- if (noFormatFields.isEmpty() == false) {\n- deprecationLogger.deprecated(\"There are doc-value fields which are not using a format. The output will \"\n- + \"change in 7.0 when doc value fields get formatted based on mappings by default. 
It is recommended to pass \"\n- + \"[format={}] with a doc value field in order to opt in for the future behaviour and ease the migration to \"\n- + \"7.0: {}\", DocValueFieldsContext.USE_DEFAULT_FORMAT, noFormatFields);\n+ if (context.docValueFieldsContext().fields().stream()\n+ .map(f -> f.format)\n+ .filter(USE_DEFAULT_FORMAT::equals)\n+ .findAny()\n+ .isPresent()) {\n+ DEPRECATION_LOGGER.deprecated(\"[\" + USE_DEFAULT_FORMAT + \"] is a special format that was only used to \" +\n+ \"ease the transition to 7.x. It has become the default and shouldn't be set explicitly anymore.\");\n }\n \n for (FieldAndFormat fieldAndFormat : context.docValueFieldsContext().fields()) {\n String field = fieldAndFormat.field;\n MappedFieldType fieldType = context.mapperService().fullName(field);\n if (fieldType != null) {\n final IndexFieldData<?> indexFieldData = context.getForField(fieldType);\n- final DocValueFormat format;\n- if (fieldAndFormat.format == null) {\n- format = null;\n- } else {\n- String formatDesc = fieldAndFormat.format;\n- if (Objects.equals(formatDesc, DocValueFieldsContext.USE_DEFAULT_FORMAT)) {\n- formatDesc = null;\n- }\n- format = fieldType.docValueFormat(formatDesc, null);\n+ String formatDesc = fieldAndFormat.format;\n+ if (Objects.equals(formatDesc, USE_DEFAULT_FORMAT)) {\n+ // TODO: Remove in 8.x\n+ formatDesc = null;\n }\n+ final DocValueFormat format = fieldType.docValueFormat(formatDesc, null);\n LeafReaderContext subReaderContext = null;\n AtomicFieldData data = null;\n- ScriptDocValues<?> scriptValues = null; // legacy\n SortedBinaryDocValues binaryValues = null; // binary / string / ip fields\n SortedNumericDocValues longValues = null; // int / date fields\n SortedNumericDoubleValues doubleValues = null; // floating-point fields\n@@ -115,9 +109,7 @@ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOExcept\n int readerIndex = ReaderUtil.subIndex(hit.docId(), context.searcher().getIndexReader().leaves());\n subReaderContext = context.searcher().getIndexReader().leaves().get(readerIndex);\n data = indexFieldData.load(subReaderContext);\n- if (format == null) {\n- scriptValues = data.getLegacyFieldValues();\n- } else if (indexFieldData instanceof IndexNumericFieldData) {\n+ if (indexFieldData instanceof IndexNumericFieldData) {\n if (((IndexNumericFieldData) indexFieldData).getNumericType().isFloatingPoint()) {\n doubleValues = ((AtomicNumericFieldData) data).getDoubleValues();\n } else {\n@@ -138,10 +130,7 @@ public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOExcept\n final List<Object> values = hitField.getValues();\n \n int subDocId = hit.docId() - subReaderContext.docBase;\n- if (scriptValues != null) {\n- scriptValues.setNextDocId(subDocId);\n- values.addAll(scriptValues);\n- } else if (binaryValues != null) {\n+ if (binaryValues != null) {\n if (binaryValues.advanceExact(subDocId)) {\n for (int i = 0, count = binaryValues.docValueCount(); i < count; ++i) {\n values.add(format.format(binaryValues.nextValue()));", "filename": "server/src/main/java/org/elasticsearch/search/fetch/subphase/DocValueFieldsFetchSubPhase.java", "status": "modified" }, { "diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.script.ScriptType;\n import org.elasticsearch.search.SearchModule;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext.FieldAndFormat;\n import 
org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilderTests;\n@@ -158,8 +157,7 @@ public static InnerHitBuilder randomInnerHits() {\n innerHits.setStoredFieldNames(randomListStuff(16, () -> randomAlphaOfLengthBetween(1, 16)));\n }\n innerHits.setDocValueFields(randomListStuff(16,\n- () -> new FieldAndFormat(randomAlphaOfLengthBetween(1, 16),\n- randomBoolean() ? null : DocValueFieldsContext.USE_DEFAULT_FORMAT)));\n+ () -> new FieldAndFormat(randomAlphaOfLengthBetween(1, 16), null)));\n // Random script fields deduped on their field name.\n Map<String, SearchSourceBuilder.ScriptField> scriptFields = new HashMap<>();\n for (SearchSourceBuilder.ScriptField field: randomListStuff(16, InnerHitBuilderTests::randomScript)) {\n@@ -201,8 +199,7 @@ static InnerHitBuilder mutate(InnerHitBuilder original) throws IOException {\n modifiers.add(() -> {\n if (randomBoolean()) {\n copy.setDocValueFields(randomValueOtherThan(copy.getDocValueFields(),\n- () -> randomListStuff(16, () -> new FieldAndFormat(randomAlphaOfLengthBetween(1, 16),\n- randomBoolean() ? null : DocValueFieldsContext.USE_DEFAULT_FORMAT))));\n+ () -> randomListStuff(16, () -> new FieldAndFormat(randomAlphaOfLengthBetween(1, 16), null))));\n } else {\n copy.addDocValueField(randomAlphaOfLengthBetween(1, 16));\n }", "filename": "server/src/test/java/org/elasticsearch/index/query/InnerHitBuilderTests.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.fields;\n \n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -49,7 +48,6 @@\n import org.elasticsearch.test.InternalSettingsPlugin;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n-import org.joda.time.ReadableDateTime;\n import org.joda.time.format.DateTimeFormat;\n \n import java.time.ZoneOffset;\n@@ -804,13 +802,12 @@ public void testDocValueFields() throws Exception {\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"long_field\").getValue(), equalTo((Object) 4L));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"float_field\").getValue(), equalTo((Object) 5.0));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"double_field\").getValue(), equalTo((Object) 6.0d));\n- DateTime dateField = searchResponse.getHits().getAt(0).getFields().get(\"date_field\").getValue();\n- assertThat(dateField.getMillis(), equalTo(date.toInstant().toEpochMilli()));\n+ assertThat(searchResponse.getHits().getAt(0).getFields().get(\"date_field\").getValue(),\n+ equalTo(DateFormatter.forPattern(\"dateOptionalTime\").format(date)));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"boolean_field\").getValue(), equalTo((Object) true));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"text_field\").getValue(), equalTo(\"foo\"));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"keyword_field\").getValue(), equalTo(\"foo\"));\n- assertThat(searchResponse.getHits().getAt(0).getFields().get(\"binary_field\").getValue(),\n- equalTo(new BytesRef(new byte[] {42, 100})));\n+ assertThat(searchResponse.getHits().getAt(0).getFields().get(\"binary_field\").getValue(), equalTo(\"KmQ\"));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"ip_field\").getValue(), equalTo(\"::1\"));\n \n builder = 
client().prepareSearch().setQuery(matchAllQuery())\n@@ -830,13 +827,12 @@ public void testDocValueFields() throws Exception {\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"long_field\").getValue(), equalTo((Object) 4L));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"float_field\").getValue(), equalTo((Object) 5.0));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"double_field\").getValue(), equalTo((Object) 6.0d));\n- dateField = searchResponse.getHits().getAt(0).getFields().get(\"date_field\").getValue();\n- assertThat(dateField.getMillis(), equalTo(date.toInstant().toEpochMilli()));\n+ assertThat(searchResponse.getHits().getAt(0).getFields().get(\"date_field\").getValue(),\n+ equalTo(DateFormatter.forPattern(\"dateOptionalTime\").format(date)));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"boolean_field\").getValue(), equalTo((Object) true));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"text_field\").getValue(), equalTo(\"foo\"));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"keyword_field\").getValue(), equalTo(\"foo\"));\n- assertThat(searchResponse.getHits().getAt(0).getFields().get(\"binary_field\").getValue(),\n- equalTo(new BytesRef(new byte[] {42, 100})));\n+ assertThat(searchResponse.getHits().getAt(0).getFields().get(\"binary_field\").getValue(), equalTo(\"KmQ\"));\n assertThat(searchResponse.getHits().getAt(0).getFields().get(\"ip_field\").getValue(), equalTo(\"::1\"));\n \n builder = client().prepareSearch().setQuery(matchAllQuery())\n@@ -1001,9 +997,7 @@ public void testDocValueFieldsWithFieldAlias() throws Exception {\n \n DocumentField dateField = fields.get(\"date_field\");\n assertThat(dateField.getName(), equalTo(\"date_field\"));\n-\n- ReadableDateTime fetchedDate = dateField.getValue();\n- assertThat(fetchedDate.getMillis(), equalTo(date.toInstant().getMillis()));\n+ assertThat(dateField.getValue(), equalTo(\"1990-12-29\"));\n }\n \n public void testWildcardDocValueFieldsWithFieldAlias() throws Exception {\n@@ -1065,9 +1059,7 @@ public void testWildcardDocValueFieldsWithFieldAlias() throws Exception {\n \n DocumentField dateField = fields.get(\"date_field\");\n assertThat(dateField.getName(), equalTo(\"date_field\"));\n-\n- ReadableDateTime fetchedDate = dateField.getValue();\n- assertThat(fetchedDate.getMillis(), equalTo(date.toInstant().getMillis()));\n+ assertThat(dateField.getValue(), equalTo(\"1990-12-29\"));\n }\n \n ", "filename": "server/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java", "status": "modified" }, { "diff": "@@ -7,7 +7,6 @@\n \n import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.search.SearchHit;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n \n import java.util.List;\n import java.util.Map;\n@@ -52,7 +51,7 @@ public ExtractionMethod getExtractionMethod() {\n public abstract Object[] value(SearchHit hit);\n \n public String getDocValueFormat() {\n- return DocValueFieldsContext.USE_DEFAULT_FORMAT;\n+ return null;\n }\n \n public static ExtractedField newTimeField(String name, ExtractionMethod extractionMethod) {", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/extractor/fields/ExtractedField.java", "status": "modified" }, { "diff": "@@ -44,7 +44,6 @@\n import org.elasticsearch.index.query.WildcardQueryBuilder;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import 
org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.xpack.core.ClientHelper;\n import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n import org.elasticsearch.xpack.core.ml.datafeed.DatafeedUpdate;\n@@ -198,7 +197,7 @@ public void onFailure(Exception e) {\n public void findDatafeedsForJobIds(Collection<String> jobIds, ActionListener<Set<String>> listener) {\n SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildDatafeedJobIdsQuery(jobIds));\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), null);\n \n SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n@@ -366,7 +365,7 @@ public void expandDatafeedIds(String expression, boolean allowNoDatafeeds, Actio\n SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildDatafeedIdQuery(tokens));\n sourceBuilder.sort(DatafeedConfig.ID.getPreferredName());\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), null);\n \n SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/persistence/DatafeedConfigProvider.java", "status": "modified" }, { "diff": "@@ -52,7 +52,6 @@\n import org.elasticsearch.index.query.WildcardQueryBuilder;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n@@ -424,7 +423,7 @@ public void jobIdMatches(List<String> ids, ActionListener<List<String>> listener\n \n SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(boolQueryBuilder);\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(Job.ID.getPreferredName(), null);\n \n SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n@@ -509,8 +508,8 @@ public void expandJobsIds(String expression, boolean allowNoJobs, boolean exclud\n SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildQuery(tokens, excludeDeleting));\n sourceBuilder.sort(Job.ID.getPreferredName());\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n- sourceBuilder.docValueField(Job.GROUPS.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(Job.ID.getPreferredName(), null);\n+ sourceBuilder.docValueField(Job.GROUPS.getPreferredName(), null);\n \n SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n@@ -554,8 +553,8 @@ private SearchRequest makeExpandIdsSearchRequest(String 
expression, boolean excl\n SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildQuery(tokens, excludeDeleting));\n sourceBuilder.sort(Job.ID.getPreferredName());\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n- sourceBuilder.docValueField(Job.GROUPS.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(Job.ID.getPreferredName(), null);\n+ sourceBuilder.docValueField(Job.GROUPS.getPreferredName(), null);\n \n return client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())\n@@ -638,7 +637,7 @@ public void expandGroupIds(List<String> groupIds, ActionListener<SortedSet<Strin\n .query(new TermsQueryBuilder(Job.GROUPS.getPreferredName(), groupIds));\n sourceBuilder.sort(Job.ID.getPreferredName(), SortOrder.DESC);\n sourceBuilder.fetchSource(false);\n- sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ sourceBuilder.docValueField(Job.ID.getPreferredName(), null);\n \n SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())\n .setIndicesOptions(IndicesOptions.lenientExpandOpen())", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/JobConfigProvider.java", "status": "modified" }, { "diff": "@@ -6,7 +6,6 @@\n package org.elasticsearch.xpack.ml.datafeed.extractor.fields;\n \n import org.elasticsearch.search.SearchHit;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.ml.test.SearchHitBuilder;\n \n@@ -143,7 +142,7 @@ public void testAliasVersusName() {\n \n public void testGetDocValueFormat() {\n for (ExtractedField.ExtractionMethod method : ExtractedField.ExtractionMethod.values()) {\n- assertThat(ExtractedField.newField(\"f\", method).getDocValueFormat(), equalTo(DocValueFieldsContext.USE_DEFAULT_FORMAT));\n+ assertThat(ExtractedField.newField(\"f\", method).getDocValueFormat(), equalTo(null));\n }\n assertThat(ExtractedField.newTimeField(\"doc_value_time\", ExtractedField.ExtractionMethod.DOC_VALUE).getDocValueFormat(),\n equalTo(\"epoch_millis\"));", "filename": "x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/extractor/fields/ExtractedFieldTests.java", "status": "modified" }, { "diff": "@@ -7,7 +7,6 @@\n \n import org.elasticsearch.action.fieldcaps.FieldCapabilities;\n import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;\n@@ -63,7 +62,7 @@ public void testBuildGivenMixtureOfTypes() {\n assertThat(extractedFields.getDocValueFields().get(0).getName(), equalTo(\"time\"));\n assertThat(extractedFields.getDocValueFields().get(0).getDocValueFormat(), equalTo(\"epoch_millis\"));\n assertThat(extractedFields.getDocValueFields().get(1).getName(), equalTo(\"value\"));\n- assertThat(extractedFields.getDocValueFields().get(1).getDocValueFormat(), equalTo(DocValueFieldsContext.USE_DEFAULT_FORMAT));\n+ assertThat(extractedFields.getDocValueFields().get(1).getDocValueFormat(), equalTo(null));\n assertThat(extractedFields.getSourceFields(), equalTo(new String[] {\"airline\"}));\n 
assertThat(extractedFields.getAllFields().size(), equalTo(4));\n }", "filename": "x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/extractor/fields/ExtractedFieldsTests.java", "status": "modified" }, { "diff": "@@ -9,7 +9,6 @@\n import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;\n@@ -134,7 +133,7 @@ public void testBuildGivenMixtureOfTypes() {\n assertThat(extractedFields.getDocValueFields().get(0).getName(), equalTo(\"time\"));\n assertThat(extractedFields.getDocValueFields().get(0).getDocValueFormat(), equalTo(\"epoch_millis\"));\n assertThat(extractedFields.getDocValueFields().get(1).getName(), equalTo(\"value\"));\n- assertThat(extractedFields.getDocValueFields().get(1).getDocValueFormat(), equalTo(DocValueFieldsContext.USE_DEFAULT_FORMAT));\n+ assertThat(extractedFields.getDocValueFields().get(1).getDocValueFormat(), equalTo(null));\n assertThat(extractedFields.getSourceFields().length, equalTo(1));\n assertThat(extractedFields.getSourceFields()[0], equalTo(\"airline\"));\n assertThat(extractedFields.getAllFields().size(), equalTo(4));", "filename": "x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/extractor/fields/TimeBasedExtractedFieldsTests.java", "status": "modified" }, { "diff": "@@ -103,7 +103,6 @@ public void testExplainWithWhere() throws IOException {\n assertThat(readLine(), startsWith(\" \\\"docvalue_fields\\\" : [\"));\n assertThat(readLine(), startsWith(\" {\"));\n assertThat(readLine(), startsWith(\" \\\"field\\\" : \\\"i\\\"\"));\n- assertThat(readLine(), startsWith(\" \\\"format\\\" : \\\"use_field_mapping\\\"\"));\n assertThat(readLine(), startsWith(\" }\"));\n assertThat(readLine(), startsWith(\" ],\"));\n assertThat(readLine(), startsWith(\" \\\"sort\\\" : [\"));", "filename": "x-pack/plugin/sql/qa/single-node/src/test/java/org/elasticsearch/xpack/sql/qa/single_node/CliExplainIT.java", "status": "modified" }, { "diff": "@@ -6,7 +6,6 @@\n package org.elasticsearch.xpack.sql.action;\n \n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.test.AbstractStreamableTestCase;\n import org.elasticsearch.xpack.sql.action.SqlTranslateResponse;\n \n@@ -20,7 +19,7 @@ protected SqlTranslateResponse createTestInstance() {\n if (randomBoolean()) {\n long docValues = iterations(5, 10);\n for (int i = 0; i < docValues; i++) {\n- s.docValueField(randomAlphaOfLength(10), DocValueFieldsContext.USE_DEFAULT_FORMAT);\n+ s.docValueField(randomAlphaOfLength(10));\n }\n }\n ", "filename": "x-pack/plugin/sql/sql-action/src/test/java/org/elasticsearch/xpack/sql/action/SqlTranslateResponseTests.java", "status": "modified" }, { "diff": "@@ -8,7 +8,6 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext.FieldAndFormat;\n \n import java.util.LinkedHashMap;\n@@ -69,8 +68,7 @@ public void build(SearchSourceBuilder sourceBuilder) {\n if 
(!sourceFields.isEmpty()) {\n sourceBuilder.fetchSource(sourceFields.toArray(Strings.EMPTY_ARRAY), null);\n }\n- docFields.forEach(field -> sourceBuilder.docValueField(field.field,\n- field.format == null ? DocValueFieldsContext.USE_DEFAULT_FORMAT : field.format));\n+ docFields.forEach(field -> sourceBuilder.docValueField(field.field, field.format));\n scriptFields.forEach(sourceBuilder::scriptField);\n }\n }", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/execution/search/SqlSourceBuilder.java", "status": "modified" }, { "diff": "@@ -11,7 +11,6 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.xpack.sql.SqlIllegalArgumentException;\n import org.elasticsearch.xpack.sql.execution.search.FieldExtraction;\n import org.elasticsearch.xpack.sql.execution.search.SourceGenerator;\n@@ -183,7 +182,7 @@ private Tuple<QueryContainer, FieldExtraction> nestedHitFieldRef(FieldAttribute\n List<FieldExtraction> nestedRefs = new ArrayList<>();\n \n String name = aliasName(attr);\n- String format = attr.field().getDataType() == DataType.DATETIME ? \"epoch_millis\" : DocValueFieldsContext.USE_DEFAULT_FORMAT;\n+ String format = attr.field().getDataType() == DataType.DATETIME ? \"epoch_millis\" : null;\n Query q = rewriteToContainNestedField(query, attr.source(),\n attr.nestedParent().name(), name, format, attr.field().isAggregatable());\n ", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/querydsl/container/QueryContainer.java", "status": "modified" }, { "diff": "@@ -154,7 +154,7 @@ public void testSqlTranslateActionLicense() throws Exception {\n .query(\"SELECT * FROM test\").get();\n SearchSourceBuilder source = response.source();\n assertThat(source.docValueFields(), Matchers.contains(\n- new DocValueFieldsContext.FieldAndFormat(\"count\", DocValueFieldsContext.USE_DEFAULT_FORMAT)));\n+ new DocValueFieldsContext.FieldAndFormat(\"count\", null)));\n FetchSourceContext fetchSource = source.fetchSource();\n assertThat(fetchSource.includes(), Matchers.arrayContaining(\"data\"));\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/action/SqlLicenseIT.java", "status": "modified" }, { "diff": "@@ -35,7 +35,7 @@ public void testSqlTranslateAction() {\n assertTrue(fetch.fetchSource());\n assertArrayEquals(new String[] { \"data\" }, fetch.includes());\n assertEquals(\n- singletonList(new DocValueFieldsContext.FieldAndFormat(\"count\", DocValueFieldsContext.USE_DEFAULT_FORMAT)),\n+ singletonList(new DocValueFieldsContext.FieldAndFormat(\"count\", null)),\n source.docValueFields());\n assertEquals(singletonList(SortBuilders.fieldSort(\"count\").missing(\"_last\").unmappedType(\"long\")), source.sorts());\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/action/SqlTranslateActionIT.java", "status": "modified" }, { "diff": "@@ -5,7 +5,6 @@\n */\n package org.elasticsearch.xpack.sql.querydsl.container;\n \n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.sql.querydsl.query.BoolQuery;\n import org.elasticsearch.xpack.sql.querydsl.query.MatchAll;\n@@ -24,7 +23,7 @@ public class QueryContainerTests extends ESTestCase {\n private Source source = SourceTests.randomSource();\n private String path = 
randomAlphaOfLength(5);\n private String name = randomAlphaOfLength(5);\n- private String format = DocValueFieldsContext.USE_DEFAULT_FORMAT;\n+ private String format = null;\n private boolean hasDocValues = randomBoolean();\n \n public void testRewriteToContainNestedFieldNoQuery() {", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/querydsl/container/QueryContainerTests.java", "status": "modified" }, { "diff": "@@ -5,7 +5,6 @@\n */\n package org.elasticsearch.xpack.sql.querydsl.query;\n \n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.sort.NestedSortBuilder;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.sql.tree.Source;\n@@ -53,15 +52,14 @@ public void testContainsNestedField() {\n \n public void testAddNestedField() {\n Query q = boolQueryWithoutNestedChildren();\n- assertSame(q, q.addNestedField(randomAlphaOfLength(5), randomAlphaOfLength(5), DocValueFieldsContext.USE_DEFAULT_FORMAT,\n- randomBoolean()));\n+ assertSame(q, q.addNestedField(randomAlphaOfLength(5), randomAlphaOfLength(5), null, randomBoolean()));\n \n String path = randomAlphaOfLength(5);\n String field = randomAlphaOfLength(5);\n q = boolQueryWithNestedChildren(path, field);\n String newField = randomAlphaOfLength(5);\n boolean hasDocValues = randomBoolean();\n- Query rewritten = q.addNestedField(path, newField, DocValueFieldsContext.USE_DEFAULT_FORMAT, hasDocValues);\n+ Query rewritten = q.addNestedField(path, newField, null, hasDocValues);\n assertNotSame(q, rewritten);\n assertTrue(rewritten.containsNestedField(path, newField));\n }\n@@ -87,7 +85,7 @@ private Query boolQueryWithoutNestedChildren() {\n \n private Query boolQueryWithNestedChildren(String path, String field) {\n NestedQuery match = new NestedQuery(SourceTests.randomSource(), path,\n- singletonMap(field, new SimpleImmutableEntry<>(randomBoolean(), DocValueFieldsContext.USE_DEFAULT_FORMAT)),\n+ singletonMap(field, new SimpleImmutableEntry<>(randomBoolean(), null)),\n new MatchAll(SourceTests.randomSource()));\n Query matchAll = new MatchAll(SourceTests.randomSource());\n Query left;\n@@ -108,4 +106,4 @@ public void testToString() {\n new ExistsQuery(new Source(1, 1, StringUtils.EMPTY), \"f1\"),\n new ExistsQuery(new Source(1, 7, StringUtils.EMPTY), \"f2\")).toString());\n }\n-}\n\\ No newline at end of file\n+}", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/querydsl/query/BoolQueryTests.java", "status": "modified" }, { "diff": "@@ -6,7 +6,6 @@\n package org.elasticsearch.xpack.sql.querydsl.query;\n \n import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.sort.NestedSortBuilder;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.sql.tree.Source;\n@@ -54,8 +53,7 @@ public void testContainsNestedField() {\n public void testAddNestedField() {\n Query query = new DummyLeafQuery(SourceTests.randomSource());\n // Leaf queries don't contain nested fields.\n- assertSame(query, query.addNestedField(randomAlphaOfLength(5), randomAlphaOfLength(5), DocValueFieldsContext.USE_DEFAULT_FORMAT,\n- randomBoolean()));\n+ assertSame(query, query.addNestedField(randomAlphaOfLength(5), randomAlphaOfLength(5), null, randomBoolean()));\n }\n \n public void testEnrichNestedSort() {", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/querydsl/query/LeafQueryTests.java", "status": "modified" }, { 
"diff": "@@ -5,7 +5,6 @@\n */\n package org.elasticsearch.xpack.sql.querydsl.query;\n \n-import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;\n import org.elasticsearch.search.sort.NestedSortBuilder;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.xpack.sql.SqlIllegalArgumentException;\n@@ -45,7 +44,7 @@ private static Map<String, Map.Entry<Boolean, String>> randomFields() {\n int size = between(0, 5);\n Map<String, Map.Entry<Boolean, String>> fields = new HashMap<>(size);\n while (fields.size() < size) {\n- fields.put(randomAlphaOfLength(5), new SimpleImmutableEntry<>(randomBoolean(), DocValueFieldsContext.USE_DEFAULT_FORMAT));\n+ fields.put(randomAlphaOfLength(5), new SimpleImmutableEntry<>(randomBoolean(), null));\n }\n return fields;\n }\n@@ -80,18 +79,18 @@ public void testAddNestedField() {\n NestedQuery q = randomNestedQuery(0);\n for (String field : q.fields().keySet()) {\n // add does nothing if the field is already there\n- assertSame(q, q.addNestedField(q.path(), field, DocValueFieldsContext.USE_DEFAULT_FORMAT, randomBoolean()));\n+ assertSame(q, q.addNestedField(q.path(), field, null, randomBoolean()));\n String otherPath = randomValueOtherThan(q.path(), () -> randomAlphaOfLength(5));\n // add does nothing if the path doesn't match\n- assertSame(q, q.addNestedField(otherPath, randomAlphaOfLength(5), DocValueFieldsContext.USE_DEFAULT_FORMAT, randomBoolean()));\n+ assertSame(q, q.addNestedField(otherPath, randomAlphaOfLength(5), null, randomBoolean()));\n }\n \n // if the field isn't in the list then add rewrites to a query with all the old fields and the new one\n String newField = randomValueOtherThanMany(q.fields()::containsKey, () -> randomAlphaOfLength(5));\n boolean hasDocValues = randomBoolean();\n- NestedQuery added = (NestedQuery) q.addNestedField(q.path(), newField, DocValueFieldsContext.USE_DEFAULT_FORMAT, hasDocValues);\n+ NestedQuery added = (NestedQuery) q.addNestedField(q.path(), newField, null, hasDocValues);\n assertNotSame(q, added);\n- assertThat(added.fields(), hasEntry(newField, new SimpleImmutableEntry<>(hasDocValues, DocValueFieldsContext.USE_DEFAULT_FORMAT)));\n+ assertThat(added.fields(), hasEntry(newField, new SimpleImmutableEntry<>(hasDocValues, null)));\n assertTrue(added.containsNestedField(q.path(), newField));\n for (Map.Entry<String, Map.Entry<Boolean, String>> field : q.fields().entrySet()) {\n assertThat(added.fields(), hasEntry(field.getKey(), field.getValue()));\n@@ -133,8 +132,8 @@ public void testEnrichNestedSort() {\n \n public void testToString() {\n NestedQuery q = new NestedQuery(new Source(1, 1, StringUtils.EMPTY), \"a.b\",\n- singletonMap(\"f\", new SimpleImmutableEntry<>(true, DocValueFieldsContext.USE_DEFAULT_FORMAT)),\n+ singletonMap(\"f\", new SimpleImmutableEntry<>(true, null)),\n new MatchAll(new Source(1, 1, StringUtils.EMPTY)));\n- assertEquals(\"NestedQuery@1:2[a.b.{f=true=use_field_mapping}[MatchAll@1:2[]]]\", q.toString());\n+ assertEquals(\"NestedQuery@1:2[a.b.{f=true=null}[MatchAll@1:2[]]]\", q.toString());\n }\n-}\n\\ No newline at end of file\n+}", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/querydsl/query/NestedQueryTests.java", "status": "modified" }, { "diff": "@@ -1,9 +1,8 @@\n ---\n \"Translate SQL\":\n - skip:\n- version: \" - 6.3.99\"\n- reason: format option was added in 6.4\n- features: warnings\n+ version: \" - 6.99.99\"\n+ reason: Triggers warnings before 7.0\n \n - do:\n bulk:\n@@ -29,7 +28,6 @@\n excludes: []\n docvalue_fields:\n - field: 
int\n- format: use_field_mapping\n sort:\n - int:\n order: asc", "filename": "x-pack/plugin/src/test/resources/rest-api-spec/test/sql/translate.yml", "status": "modified" } ] }
{ "body": "It looks like I have an unusual umask on my laptop:\r\n```\r\n[manybubbles@localhost ~]$ umask\r\n0002\r\n```\r\n\r\nI'm not sure how I got it but now that I have it I notice that the packaging tests don't pass:\r\n```\r\n$ ./gradlew -p qa/vagrant/ packagingTest\r\n...\r\n# Expected privileges: 644, found 664 [/usr/share/elasticsearch/README.textile]\r\n```\r\n\r\nI think we should try and build the same packages regardless of my personal umask setting.", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-05-22T23:03:58Z" }, { "body": "It looks like this isn't actually based on the umask. There are other machines and users with the same umask that aren't getting this.", "created_at": "2018-05-23T16:59:11Z" }, { "body": "This isn't caused by my umask:\r\n```\r\n[manybubbles@localhost elasticsearch]$ ./gradlew -p distribution clean\r\n...\r\n[manybubbles@localhost elasticsearch]$ umask 0022\r\n[manybubbles@localhost elasticsearch]$ umask\r\n0022\r\n[manybubbles@localhost elasticsearch]$ ./gradlew -p qa/vagrant/ packagingTest\r\n...\r\n# Expected privileges: 644, found 664 [/usr/share/elasticsearch/modules/lang-painless/antlr4-runtime-4.5.3.jar]\r\n```", "created_at": "2018-05-23T17:48:02Z" }, { "body": "I was wrong about it not being my umask. If I `git clean -xdf` after setting my umask to `0022` then I get *further*. The tests still fail because my clone of elasticsearch has `README.textile` as `664` rather than `644`. I *think* the umask that you had when you cloned Elasticsearch influences that one.\r\n\r\n", "created_at": "2018-05-23T18:30:39Z" } ], "number": 30799, "title": "Permissions on files in packages depends on *my* umask" }
{ "body": "If you have an unusual umask (e.g., 0002) and clone the GitHub repository then files that we stick into our packages like the README.textile and the license will have a file mode of 0664 on disk yet we expect them to be 0644. Additionally, the same thing happens with compiled artifacts like JARs. We try to set a default file mode yet it does not seem to take everywhere. This commit adds explicit file modes in some places that we were relying on the defaults to ensure that the built artifacts have a consistent file mode regardless of the underlying build host.\r\n\r\nCloses #30799\r\n", "number": 30823, "review_comments": [], "title": "Force stable file modes for built packages" }
{ "commits": [ { "message": "Force stable file modes for built packages\n\nIf you have an unusual umask (e.g., 0002) and clone the GitHub\nrepository then files that we stick into our packages like the\nREADME.textile and the license will have a file mode of 0664 on disk yet\nwe expect them to be 0644. Additionally, the same thing happens with\ncompiled artifacts like JARs. We try to set a default file mode yet it\ndoes not seem to take everywhere. This commit adds explicit file modes\nin some places that we were relying on the defaults to ensure that the\nbuilt artifacts have a consistent file mode regardless of the underlying\nbuild host." }, { "message": "Also get systemd files" }, { "message": "Merge branch 'master' into stable-file-modes\n\n* master:\n [DOCS] Splits auditing.asciidoc into smaller files\n Reintroduce mandatory http pipelining support (#30820)\n Painless: Types Section Clean Up (#30283)\n Add support for indexed shape routing in geo_shape query (#30760)\n [test] java tests for archive packaging (#30734)\n Revert \"Make http pipelining support mandatory (#30695)\" (#30813)\n [DOCS] Fix more edit URLs in Stack Overview (#30704)\n Use correct cluster state version for node fault detection (#30810)\n Change serialization version of doc-value fields.\n [DOCS] Fixes broken link for native realm\n [DOCS] Clarified audit.index.client.hosts (#30797)\n [TEST] Don't expect acks when isolating nodes" } ], "files": [ { "diff": "@@ -242,6 +242,8 @@ configure(subprojects.findAll { ['archives', 'packages'].contains(it.name) }) {\n if (it.relativePath.segments[-2] == 'bin') {\n // bin files, wherever they are within modules (eg platform specific) should be executable\n it.mode = 0755\n+ } else {\n+ it.mode = 0644\n }\n }\n if (oss) {", "filename": "distribution/build.gradle", "status": "modified" }, { "diff": "@@ -122,6 +122,7 @@ Closure commonPackageConfig(String type, boolean oss) {\n }\n from(rootProject.projectDir) {\n include 'README.textile'\n+ fileMode 0644\n }\n into('modules') {\n with copySpec {\n@@ -135,6 +136,11 @@ Closure commonPackageConfig(String type, boolean oss) {\n for (int i = segments.length - 2; i > 0 && segments[i] != 'modules'; --i) {\n directory('/' + segments[0..i].join('/'), 0755)\n }\n+ if (segments[-2] == 'bin') {\n+ fcp.mode = 0755\n+ } else {\n+ fcp.mode = 0644\n+ }\n }\n }\n }\n@@ -153,6 +159,7 @@ Closure commonPackageConfig(String type, boolean oss) {\n include oss ? 'APACHE-LICENSE-2.0.txt' : 'ELASTIC-LICENSE.txt'\n rename { 'LICENSE.txt' }\n }\n+ fileMode 0644\n }\n }\n \n@@ -180,14 +187,17 @@ Closure commonPackageConfig(String type, boolean oss) {\n // ========= systemd =========\n into('/usr/lib/tmpfiles.d') {\n from \"${packagingFiles}/systemd/elasticsearch.conf\"\n+ fileMode 0644\n }\n into('/usr/lib/systemd/system') {\n fileType CONFIG | NOREPLACE\n from \"${packagingFiles}/systemd/elasticsearch.service\"\n+ fileMode 0644\n }\n into('/usr/lib/sysctl.d') {\n fileType CONFIG | NOREPLACE\n from \"${packagingFiles}/systemd/sysctl/elasticsearch.conf\"\n+ fileMode 0644\n }\n \n // ========= sysV init =========", "filename": "distribution/packages/build.gradle", "status": "modified" } ] }
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.2.3\r\n\r\n**Plugins installed**: [X-Pack 6.2.3]\r\n\r\n**JVM version** (`java -version`): OpenJDK Runtime Environment (build 1.8.0_161-b14)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux 44e7bb4d9f24 4.9.87-linuxkit-aufs #1 SMP Fri Mar 16 18:16:33 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux _(Official Docker image running on MacOS 10.13.4)_\r\n\r\n**Description of the problem including expected versus actual behavior**: When attempting to execute a force merge operation passing the accepted parameters as a JSON payload, a response is returned to the user which ambiguously suggests the requested operation was successful. However, inspection of `<index_name>/_segments` shows no merge having taken place.\r\n\r\nBy contrast, when passing the parameter as a URL parameter, the merge takes place as expected.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Trigger a force merge with a JSON payload of the desired parameters\r\n```\r\nPOST <index_name>/_forcemerge\r\n{\r\n \"max_num_segments\": 1\r\n}\r\n```\r\n\r\n 2. Receive positive looking response\r\n\r\n```\r\n{\r\n \"_shards\": {\r\n \"total\": 102,\r\n \"successful\": 102,\r\n \"failed\": 0\r\n }\r\n}\r\n```\r\n 3. Check the status\r\n\r\n```\r\nGET <index_name>/_segments\r\n```\r\n\r\nalternatively view Segment Count in X-Pack Monitoring\r\n\r\nIn order to actually trigger the merge, do the following:\r\n\r\n```\r\nPOST <index_name>/_forcemerge?max_num_segments=1\r\n```\r\n\r\nThen verify with \r\n\r\n```\r\nGET <index_name>/_segments\r\n```", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-18T14:31:18Z" } ], "number": 29584, "title": "_forcemerge API erroneously accepts JSON payloads and returns ambiguously positive response" }
{ "body": "This commit adds validation to forcemerge rest requests which contain a\r\nbody. All parameters to force merge must be part of http params.\r\n\r\ncloses #29584", "number": 30792, "review_comments": [ { "body": "These are \"query parameters\" (there are many different kinds of \"parameters\" in the HTTP specification, not only query parameters). I think we should say \"request body\" instead of \"http body\" (and let us capitalize \"HTTP\" if we keep it in the message.", "created_at": "2018-05-23T15:59:39Z" } ], "title": "RestAPI: Reject forcemerge requests with a body" }
{ "commits": [ { "message": "RestAPI: Reject forcemerge requests with a body\n\nThis commit adds validation to forcemerge rest requests which contain a\nbody. All parameters to force merge must be part of http params.\n\ncloses #29584" }, { "message": "Merge branch 'master' into forcemerge_validation" }, { "message": "tweak commit message" }, { "message": "Merge branch 'master' into forcemerge_validation" } ], "files": [ { "diff": "@@ -47,6 +47,9 @@ public String getName() {\n \n @Override\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n+ if (request.hasContent()) {\n+ throw new IllegalArgumentException(\"forcemerge takes arguments in query parameters, not in the request body\");\n+ }\n ForceMergeRequest mergeRequest = new ForceMergeRequest(Strings.splitStringByCommaToArray(request.param(\"index\")));\n mergeRequest.indicesOptions(IndicesOptions.fromRequest(request, mergeRequest.indicesOptions()));\n mergeRequest.maxNumSegments(request.paramAsInt(\"max_num_segments\", mergeRequest.maxNumSegments()));", "filename": "server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestForceMergeAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.forcemerge;\n+\n+import org.elasticsearch.client.node.NodeClient;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.rest.action.admin.indices.RestForceMergeAction;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.rest.FakeRestRequest;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.mockito.Mockito.mock;\n+\n+public class RestForceMergeActionTests extends ESTestCase {\n+\n+ public void testBodyRejection() throws Exception {\n+ final RestForceMergeAction handler = new RestForceMergeAction(Settings.EMPTY, mock(RestController.class));\n+ String json = JsonXContent.contentBuilder().startObject().field(\"max_num_segments\", 1).endObject().toString();\n+ final FakeRestRequest request = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY)\n+ .withContent(new BytesArray(json), XContentType.JSON).build();\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> handler.prepareRequest(request, mock(NodeClient.class)));\n+ assertThat(e.getMessage(), equalTo(\"forcemerge takes arguments in query parameters, not in the request body\"));\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/action/admin/indices/forcemerge/RestForceMergeActionTests.java", "status": "added" } ] }
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.2.4\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nIf a user is allowed to write to an alias, but not its concrete index, then indexing a document with new mappings can produce the following authorization exception:\r\n ```\r\n {\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"security_exception\",\r\n \"reason\" : \"action [indices:admin/mapping/put] is unauthorized for user [test_user]\"\r\n }\r\n ],\r\n \"type\" : \"security_exception\",\r\n \"reason\" : \"action [indices:admin/mapping/put] is unauthorized for user [test_user]\"\r\n },\r\n \"status\" : 403\r\n}\r\n```\r\n\r\nThis behavior is confusing, because `write` permissions should allow for both indexing and updating mappings, and the user can successfully make a put mapping request directly.\r\n\r\nNote that if the user is also given `write` privileges to the underlying index, then attempting to index the document succeeds. This fact suggests that the implicit `mappings` call during indexing is maybe being validated against the concrete index, instead of the alias.\r\n\r\nRelated to https://github.com/elastic/elasticsearch/issues/29874.\r\n\r\n**Steps to reproduce**:\r\n\r\nGist: https://gist.github.com/jtibshirani/ff8ebcd235dc8be36ecd84543b29525d\r\n", "comments": [ { "body": "Pinging @elastic/es-security", "created_at": "2018-05-15T00:19:24Z" }, { "body": "> This fact suggests that the implicit mappings call during indexing is maybe being validated against the concrete index, instead of the alias.\r\n\r\nThis is in fact the case and was an intentional change introduced in #17048. A couple of thoughts come to mind, we can add the original indices to the request or we can handle this case specifically within our authorization code. We currently already have a special case for this, so I am leaning towards expanding that with a fix.", "created_at": "2018-05-15T20:08:53Z" } ], "number": 30597, "title": "Authorization failure when indexing document with new mappings to alias." }
{ "body": "This commit fixes an issue with dynamic mapping updates when an index\r\noperation is performed against an alias and when the user only has\r\npermissions to the alias. Dynamic mapping updates resolve the concrete\r\nindex early to prevent issues so the information about the alias that\r\nthe triggering operation was being executed against is lost. When\r\nsecurity is enabled and a user only has privileges to the alias, this\r\ndynamic mapping update would be rejected as it is executing against the\r\nconcrete index and not the alias. In order to handle this situation,\r\nthe security code needs to look at the concrete index and the\r\nauthorized indices of the user; if the concrete index is not authorized\r\nthe code will attempt to find an alias that the user has permissions to\r\nupdate the mappings of.\r\n\r\nCloses #30597", "number": 30787, "review_comments": [ { "body": "Can we provide more details in this message (e.g. the name of the index/alias)", "created_at": "2018-05-23T11:46:14Z" } ], "title": "Security: fix dynamic mapping updates with aliases" }
{ "commits": [ { "message": "Security: fix dynamic mapping updates with aliases\n\nThis commit fixes an issue with dynamic mapping updates when an index\noperation is performed against an alias and when the user only has\npermissions to the alias. Dynamic mapping updates resolve the concrete\nindex early to prevent issues so the information about the alias that\nthe triggering operation was being executed against is lost. When\nsecurity is enabled and a user only has privileges to the aias, this\ndynamic mapping update would be rejected as it is executing against the\nconcrete index and not the alias. In order to handle this situation,\nthe security code needs to look at the concrete index and the\nauthorized indices of the user; if the concrete index is not authorized\nthe code will attempt to find an alias that the user has permissions to\nupdate the mappings of.\n\nCloses #30597" }, { "message": "update exception message" } ], "files": [ { "diff": "@@ -14,11 +14,14 @@\n import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.cluster.metadata.AliasOrIndex;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n@@ -35,14 +38,15 @@\n import java.util.HashSet;\n import java.util.List;\n import java.util.Map;\n+import java.util.Optional;\n import java.util.Set;\n import java.util.SortedMap;\n import java.util.concurrent.CopyOnWriteArraySet;\n import java.util.stream.Collectors;\n \n import static org.elasticsearch.xpack.core.security.authz.IndicesAndAliasesResolverField.NO_INDEX_PLACEHOLDER;\n \n-public class IndicesAndAliasesResolver {\n+class IndicesAndAliasesResolver {\n \n //`*,-*` what we replace indices with if we need Elasticsearch to return empty responses without throwing exception\n private static final String[] NO_INDICES_ARRAY = new String[] { \"*\", \"-*\" };\n@@ -51,7 +55,7 @@ public class IndicesAndAliasesResolver {\n private final IndexNameExpressionResolver nameExpressionResolver;\n private final RemoteClusterResolver remoteClusterResolver;\n \n- public IndicesAndAliasesResolver(Settings settings, ClusterService clusterService) {\n+ IndicesAndAliasesResolver(Settings settings, ClusterService clusterService) {\n this.nameExpressionResolver = new IndexNameExpressionResolver(settings);\n this.remoteClusterResolver = new RemoteClusterResolver(settings, clusterService.getClusterSettings());\n }\n@@ -85,7 +89,7 @@ public IndicesAndAliasesResolver(Settings settings, ClusterService clusterServic\n * Otherwise, <em>N</em> will be added to the <em>local</em> index list.\n */\n \n- public ResolvedIndices resolve(TransportRequest request, MetaData metaData, AuthorizedIndices authorizedIndices) {\n+ ResolvedIndices resolve(TransportRequest request, MetaData metaData, AuthorizedIndices authorizedIndices) {\n if (request instanceof IndicesAliasesRequest) {\n ResolvedIndices.Builder resolvedIndicesBuilder = new 
ResolvedIndices.Builder();\n IndicesAliasesRequest indicesAliasesRequest = (IndicesAliasesRequest) request;\n@@ -116,7 +120,7 @@ ResolvedIndices resolveIndicesAndAliases(IndicesRequest indicesRequest, MetaData\n */\n assert indicesRequest.indices() == null || indicesRequest.indices().length == 0\n : \"indices are: \" + Arrays.toString(indicesRequest.indices()); // Arrays.toString() can handle null values - all good\n- resolvedIndicesBuilder.addLocal(((PutMappingRequest) indicesRequest).getConcreteIndex().getName());\n+ resolvedIndicesBuilder.addLocal(getPutMappingIndexOrAlias((PutMappingRequest) indicesRequest, authorizedIndices, metaData));\n } else if (indicesRequest instanceof IndicesRequest.Replaceable) {\n IndicesRequest.Replaceable replaceable = (IndicesRequest.Replaceable) indicesRequest;\n final boolean replaceWildcards = indicesRequest.indicesOptions().expandWildcardsOpen()\n@@ -213,7 +217,48 @@ ResolvedIndices resolveIndicesAndAliases(IndicesRequest indicesRequest, MetaData\n return resolvedIndicesBuilder.build();\n }\n \n- public static boolean allowsRemoteIndices(IndicesRequest request) {\n+ /**\n+ * Special handling of the value to authorize for a put mapping request. Dynamic put mapping\n+ * requests use a concrete index, but we allow permissions to be defined on aliases so if the\n+ * request's concrete index is not in the list of authorized indices, then we need to look to\n+ * see if this can be authorized against an alias\n+ */\n+ static String getPutMappingIndexOrAlias(PutMappingRequest request, AuthorizedIndices authorizedIndices, MetaData metaData) {\n+ final String concreteIndexName = request.getConcreteIndex().getName();\n+ final List<String> authorizedIndicesList = authorizedIndices.get();\n+\n+ // validate that the concrete index exists, otherwise there is no remapping that we could do\n+ final AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(concreteIndexName);\n+ final String resolvedAliasOrIndex;\n+ if (aliasOrIndex == null) {\n+ resolvedAliasOrIndex = concreteIndexName;\n+ } else if (aliasOrIndex.isAlias()) {\n+ throw new IllegalStateException(\"concrete index [\" + concreteIndexName + \"] is an alias but should not be\");\n+ } else if (authorizedIndicesList.contains(concreteIndexName)) {\n+ // user is authorized to put mappings for this index\n+ resolvedAliasOrIndex = concreteIndexName;\n+ } else {\n+ // the user is not authorized to put mappings for this index, but could have been\n+ // authorized for a write using an alias that triggered a dynamic mapping update\n+ ImmutableOpenMap<String, List<AliasMetaData>> foundAliases =\n+ metaData.findAliases(Strings.EMPTY_ARRAY, new String[] { concreteIndexName });\n+ List<AliasMetaData> aliasMetaData = foundAliases.get(concreteIndexName);\n+ if (aliasMetaData != null) {\n+ Optional<String> foundAlias = aliasMetaData.stream()\n+ .map(AliasMetaData::alias)\n+ .filter(authorizedIndicesList::contains)\n+ .filter(aliasName -> metaData.getAliasAndIndexLookup().get(aliasName).getIndices().size() == 1)\n+ .findFirst();\n+ resolvedAliasOrIndex = foundAlias.orElse(concreteIndexName);\n+ } else {\n+ resolvedAliasOrIndex = concreteIndexName;\n+ }\n+ }\n+\n+ return resolvedAliasOrIndex;\n+ }\n+\n+ static boolean allowsRemoteIndices(IndicesRequest request) {\n return request instanceof SearchRequest || request instanceof FieldCapabilitiesRequest\n || request instanceof GraphExploreRequest;\n }", "filename": 
"x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolver.java", "status": "modified" }, { "diff": "@@ -39,10 +39,12 @@\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ClusterSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.test.ESTestCase;\n@@ -149,7 +151,10 @@ public void setup() {\n new IndicesPrivileges[] { IndicesPrivileges.builder().indices(authorizedIndices).privileges(\"all\").build() }, null));\n roleMap.put(\"dash\", new RoleDescriptor(\"dash\", null,\n new IndicesPrivileges[] { IndicesPrivileges.builder().indices(dashIndices).privileges(\"all\").build() }, null));\n- roleMap.put(\"test\", new RoleDescriptor(\"role\", new String[] { \"monitor\" }, null, null));\n+ roleMap.put(\"test\", new RoleDescriptor(\"test\", new String[] { \"monitor\" }, null, null));\n+ roleMap.put(\"alias_read_write\", new RoleDescriptor(\"alias_read_write\", null,\n+ new IndicesPrivileges[] { IndicesPrivileges.builder().indices(\"barbaz\", \"foofoobar\").privileges(\"read\", \"write\").build() },\n+ null));\n roleMap.put(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName(), ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR);\n final FieldPermissionsCache fieldPermissionsCache = new FieldPermissionsCache(Settings.EMPTY);\n doAnswer((i) -> {\n@@ -651,7 +656,7 @@ public void testResolveWildcardsIndicesAliasesRequestNoMatchingIndices() {\n request.addAliasAction(AliasActions.add().alias(\"alias2\").index(\"bar*\"));\n request.addAliasAction(AliasActions.add().alias(\"alias3\").index(\"non_matching_*\"));\n //if a single operation contains wildcards and ends up being resolved to no indices, it makes the whole request fail\n- expectThrows(IndexNotFoundException.class, \n+ expectThrows(IndexNotFoundException.class,\n () -> resolveIndices(request, buildAuthorizedIndices(user, IndicesAliasesAction.NAME)));\n }\n \n@@ -1180,10 +1185,10 @@ public void testIndicesExists() {\n assertNoIndices(request, resolveIndices(request,\n buildAuthorizedIndices(userNoIndices, IndicesExistsAction.NAME)));\n }\n- \n+\n {\n IndicesExistsRequest request = new IndicesExistsRequest(\"does_not_exist\");\n- \n+\n assertNoIndices(request, resolveIndices(request,\n buildAuthorizedIndices(user, IndicesExistsAction.NAME)));\n }\n@@ -1228,7 +1233,7 @@ public void testNonXPackUserAccessingSecurityIndex() {\n List<String> indices = resolveIndices(request, authorizedIndices).getLocal();\n assertThat(indices, not(hasItem(SecurityIndexManager.SECURITY_INDEX_NAME)));\n }\n- \n+\n {\n IndicesAliasesRequest aliasesRequest = new IndicesAliasesRequest();\n aliasesRequest.addAliasAction(AliasActions.add().alias(\"security_alias1\").index(\"*\"));\n@@ -1317,6 +1322,21 @@ public void testAliasDateMathExpressionNotSupported() {\n assertThat(request.aliases(), arrayContainingInAnyOrder(\"<datetime-{now/M}>\"));\n }\n \n+ public void testDynamicPutMappingRequestFromAlias() {\n+ PutMappingRequest request = new PutMappingRequest(Strings.EMPTY_ARRAY).setConcreteIndex(new Index(\"foofoo\", UUIDs.base64UUID()));\n+ User user = new 
User(\"alias-writer\", \"alias_read_write\");\n+ AuthorizedIndices authorizedIndices = buildAuthorizedIndices(user, PutMappingAction.NAME);\n+\n+ String putMappingIndexOrAlias = IndicesAndAliasesResolver.getPutMappingIndexOrAlias(request, authorizedIndices, metaData);\n+ assertEquals(\"barbaz\", putMappingIndexOrAlias);\n+\n+ // multiple indices map to an alias so we can only return the concrete index\n+ final String index = randomFrom(\"foo\", \"foobar\");\n+ request = new PutMappingRequest(Strings.EMPTY_ARRAY).setConcreteIndex(new Index(index, UUIDs.base64UUID()));\n+ putMappingIndexOrAlias = IndicesAndAliasesResolver.getPutMappingIndexOrAlias(request, authorizedIndices, metaData);\n+ assertEquals(index, putMappingIndexOrAlias);\n+ }\n+\n // TODO with the removal of DeleteByQuery is there another way to test resolving a write action?\n \n ", "filename": "x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolverTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,90 @@\n+---\n+setup:\n+ - skip:\n+ features: headers\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: yellow\n+\n+ - do:\n+ xpack.security.put_role:\n+ name: \"alias_write_role\"\n+ body: >\n+ {\n+ \"indices\": [\n+ { \"names\": [\"write_alias\"], \"privileges\": [\"write\"] }\n+ ]\n+ }\n+\n+ - do:\n+ xpack.security.put_user:\n+ username: \"test_user\"\n+ body: >\n+ {\n+ \"password\" : \"x-pack-test-password\",\n+ \"roles\" : [ \"alias_write_role\" ],\n+ \"full_name\" : \"user with privileges to write via alias\"\n+ }\n+\n+ - do:\n+ indices.create:\n+ index: write_index_1\n+ body:\n+ settings:\n+ index:\n+ number_of_shards: 1\n+ number_of_replicas: 0\n+\n+ - do:\n+ indices.put_alias:\n+ index: write_index_1\n+ name: write_alias\n+\n+---\n+teardown:\n+ - do:\n+ xpack.security.delete_user:\n+ username: \"test_user\"\n+ ignore: 404\n+\n+ - do:\n+ xpack.security.delete_role:\n+ name: \"alias_write_role\"\n+ ignore: 404\n+\n+ - do:\n+ indices.delete_alias:\n+ index: \"write_index_1\"\n+ name: [ \"write_alias\" ]\n+ ignore: 404\n+\n+ - do:\n+ indices.delete:\n+ index: [ \"write_index_1\" ]\n+ ignore: 404\n+\n+---\n+\"Test indexing documents into an alias with dynamic mappings\":\n+\n+ - do:\n+ headers: { Authorization: \"Basic dGVzdF91c2VyOngtcGFjay10ZXN0LXBhc3N3b3Jk\" } # test_user\n+ create:\n+ id: 1\n+ index: write_alias\n+ type: doc\n+ body: >\n+ {\n+ \"name\" : \"doc1\"\n+ }\n+\n+ - do:\n+ headers: { Authorization: \"Basic dGVzdF91c2VyOngtcGFjay10ZXN0LXBhc3N3b3Jk\" } # test_user\n+ create:\n+ id: 2\n+ index: write_alias\n+ type: doc\n+ body: >\n+ {\n+ \"name2\" : \"doc2\"\n+ }", "filename": "x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/30_dynamic_put_mapping.yml", "status": "added" } ] }
{ "body": "Currently, GeoShape filter and query supports to pre-indexed shape as argument[1].\nBut, there are no way to define required `_routing` value for pre-indexed shape. This causes RoutingMissingException if routing is mandatory.\n\nelasticsearch version 1.3.2\n\nFor example, mapping is:\n\n```\ncurl -XPOST 'localhost:9200/-inx/ang/_mapping' -d'{\n \"ang\" : {\n \"_parent\" : {\n \"type\" : \"someType\"\n },\n \"_routing\" : {\n \"required\" : true\n },\n \"properties\" : {\n \"to\" : {\n \"type\" : \"geo_shape\",\n \"tree_levels\" : 6\n }\n }\n }\n}'\n```\n\nAdd some data:\n\n```\ncurl -XPUT 'localhost:9200/-inx/ang/ouX6eeH5RxKQG3vJ5Ho84A?parent=111' -d '{\n \"to\" : {\n \"type\" : \"circle\",\n \"coordinates\" : [-45.0, 45.0],\n \"radius\" : \"100m\"\n }\n}'\n```\n\nLet's make a query:\n\n```\ncurl -XGET 'localhost:9200/_search' -d'{\n \"query\" : {\n \"geo_shape\" : {\n \"to\" : {\n \"indexed_shape\" : {\n \"type\" : \"ang\",\n \"index\" : \"-inx\",\n \"path\" : \"to\",\n \"id\" : \"ouX6eeH5RxKQG3vJ5Ho84A\"\n }\n }\n }\n }\n}'\n```\n\n```\norg.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query_fetch], all shards failed; shardFailures {[eryGHbZWSMa-YJiCpgVbcg][-inx][0]: \nRemoteTransportException[[Alistair Smythe][inet[/127.0.0.1:9300]][search/phase/query+fetch]]; nested: SearchParseException[[-inx][0]: from[0],size[10]:\nParse Failure [Failed to parse source [{\"query\":{\"geo_shape\":{\"to\":{\"indexed_shape\":{\"id\":\"ouX6eeH5RxKQG3vJ5Ho84A\",\"type\":\"ang\",\"index\":\"-inx\",\"path\":\"to\"}}}}}]]];\nnested: RoutingMissingException[routing is required for [-inx]/[ang]/[ouX6eeH5RxKQG3vJ5Ho84A]]; }\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:233) ~[elasticsearch-1.3.2.jar:na]\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:179) ~[elasticsearch-1.3.2.jar:na]\n at org.elasticsearch.search.action.SearchServiceTransportAction$12.handleException(SearchServiceTransportAction.java:326) ~[elasticsearch-1.3.2.jar:na]\n at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:185) ~[elasticsearch-1.3.2.jar:na]\n at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:175) ~[elasticsearch-1.3.2.jar:na]\n```\n\n[1] http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-geo-shape-filter.html#_pre_indexed_shape\n", "comments": [ { "body": "thanks for reporting @helllamer \n", "created_at": "2014-09-09T15:19:34Z" }, { "body": "I've rebased and squashed all 3 commits against current master: 4271835abb31aeec6c450ab7e1633cc59a5a07ff\n\nGithub automatically blocked my #7667, so I cannot comment it.\n", "created_at": "2015-01-16T15:24:26Z" }, { "body": "Hi @helllamer \n\nApologies - I've just seen your comment about having rebased your commit... from months ago! Sorry, please could you open it as a new PR, otherwise we'll never see it :)\n\nthanks\n", "created_at": "2015-05-29T16:22:15Z" }, { "body": "Assigning to @nknize as this is still an issue\n", "created_at": "2015-11-21T18:41:16Z" } ], "number": 7663, "title": "geo_shape query/filter: indexed_shape has no syntax to define _routing value and throws RoutingMissingException, because _routing undefined" }
{ "body": "Adds ability to specify the routing value for the indexed shape in the\r\ngeo_shape query.\r\n\r\nCloses #7663", "number": 30760, "review_comments": [ { "body": "`testIndexShape*Routing*`?", "created_at": "2018-05-21T18:18:52Z" }, { "body": "This should be 6.4.0, right?", "created_at": "2018-05-21T18:20:17Z" }, { "body": "Yes, eventually, after/during backport. If I make it 6.4.0 now, it will never pass BWC tests. ", "created_at": "2018-05-21T18:26:46Z" }, { "body": "Cool. I figure the branch overrides like `-Dtests.bwc.refspec.6.x=` would be good for that sort of thing.", "created_at": "2018-05-21T19:38:34Z" } ], "title": "Add support for indexed shape routing in geo_shape query" }
{ "commits": [ { "message": "Add support for indexed shape routing in geo_shape query\n\nAdds ability to specify the routing value for the indexed shape in the\ngeo_shape query.\n\nCloses #7663" }, { "message": "Fix the test name" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-7663-add-routing-support-to-geo-shapes" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-7663-add-routing-support-to-geo-shapes" } ], "files": [ { "diff": "@@ -93,6 +93,7 @@ to 'shapes'.\n * `type` - Index type where the pre-indexed shape is.\n * `path` - The field specified as path containing the pre-indexed shape.\n Defaults to 'shape'.\n+* `routing` - The routing of the shape document if required.\n \n The following is an example of using the Filter with a pre-indexed\n shape:", "filename": "docs/reference/query-dsl/geo-shape-query.asciidoc", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.apache.lucene.spatial.query.SpatialArgs;\n import org.apache.lucene.spatial.query.SpatialOperation;\n import org.apache.lucene.util.SetOnce;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.get.GetRequest;\n import org.elasticsearch.action.get.GetResponse;\n@@ -77,6 +78,7 @@ public class GeoShapeQueryBuilder extends AbstractQueryBuilder<GeoShapeQueryBuil\n private static final ParseField SHAPE_TYPE_FIELD = new ParseField(\"type\");\n private static final ParseField SHAPE_INDEX_FIELD = new ParseField(\"index\");\n private static final ParseField SHAPE_PATH_FIELD = new ParseField(\"path\");\n+ private static final ParseField SHAPE_ROUTING_FIELD = new ParseField(\"routing\");\n private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField(\"ignore_unmapped\");\n \n private final String fieldName;\n@@ -89,8 +91,10 @@ public class GeoShapeQueryBuilder extends AbstractQueryBuilder<GeoShapeQueryBuil\n private final String indexedShapeId;\n private final String indexedShapeType;\n \n+\n private String indexedShapeIndex = DEFAULT_SHAPE_INDEX_NAME;\n private String indexedShapePath = DEFAULT_SHAPE_FIELD_NAME;\n+ private String indexedShapeRouting;\n \n private ShapeRelation relation = DEFAULT_SHAPE_RELATION;\n \n@@ -166,6 +170,11 @@ public GeoShapeQueryBuilder(StreamInput in) throws IOException {\n indexedShapeType = in.readOptionalString();\n indexedShapeIndex = in.readOptionalString();\n indexedShapePath = in.readOptionalString();\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ indexedShapeRouting = in.readOptionalString();\n+ } else {\n+ indexedShapeRouting = null;\n+ }\n }\n relation = ShapeRelation.readFromStream(in);\n strategy = in.readOptionalWriteable(SpatialStrategy::readFromStream);\n@@ -188,6 +197,11 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeOptionalString(indexedShapeType);\n out.writeOptionalString(indexedShapeIndex);\n out.writeOptionalString(indexedShapePath);\n+ if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeOptionalString(indexedShapeRouting);\n+ } else if (indexedShapeRouting != null) {\n+ throw new IllegalStateException(\"indexed shape routing cannot be serialized to older nodes\");\n+ }\n }\n relation.writeTo(out);\n out.writeOptionalWriteable(strategy);\n@@ -285,6 +299,26 @@ public String indexedShapePath() {\n return indexedShapePath;\n }\n \n+ /**\n+ * Sets the optional routing to the indexed Shape that will be used in the query\n+ *\n+ * @param indexedShapeRouting indexed shape routing\n+ * 
@return this\n+ */\n+ public GeoShapeQueryBuilder indexedShapeRouting(String indexedShapeRouting) {\n+ this.indexedShapeRouting = indexedShapeRouting;\n+ return this;\n+ }\n+\n+\n+ /**\n+ * @return the optional routing to the indexed Shape that will be used in the\n+ * Query\n+ */\n+ public String indexedShapeRouting() {\n+ return indexedShapeRouting;\n+ }\n+\n /**\n * Sets the relation of query shape and indexed shape.\n *\n@@ -473,6 +507,9 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep\n if (indexedShapePath != null) {\n builder.field(SHAPE_PATH_FIELD.getPreferredName(), indexedShapePath);\n }\n+ if (indexedShapeRouting != null) {\n+ builder.field(SHAPE_ROUTING_FIELD.getPreferredName(), indexedShapeRouting);\n+ }\n builder.endObject();\n }\n \n@@ -498,6 +535,7 @@ public static GeoShapeQueryBuilder fromXContent(XContentParser parser) throws IO\n String type = null;\n String index = null;\n String shapePath = null;\n+ String shapeRouting = null;\n \n XContentParser.Token token;\n String currentFieldName = null;\n@@ -544,6 +582,8 @@ public static GeoShapeQueryBuilder fromXContent(XContentParser parser) throws IO\n index = parser.text();\n } else if (SHAPE_PATH_FIELD.match(currentFieldName, parser.getDeprecationHandler())) {\n shapePath = parser.text();\n+ } else if (SHAPE_ROUTING_FIELD.match(currentFieldName, parser.getDeprecationHandler())) {\n+ shapeRouting = parser.text();\n }\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"[\" + GeoShapeQueryBuilder.NAME +\n@@ -581,6 +621,9 @@ public static GeoShapeQueryBuilder fromXContent(XContentParser parser) throws IO\n if (shapePath != null) {\n builder.indexedShapePath(shapePath);\n }\n+ if (shapeRouting != null) {\n+ builder.indexedShapeRouting(shapeRouting);\n+ }\n if (shapeRelation != null) {\n builder.relation(shapeRelation);\n }\n@@ -602,6 +645,7 @@ protected boolean doEquals(GeoShapeQueryBuilder other) {\n && Objects.equals(indexedShapeIndex, other.indexedShapeIndex)\n && Objects.equals(indexedShapePath, other.indexedShapePath)\n && Objects.equals(indexedShapeType, other.indexedShapeType)\n+ && Objects.equals(indexedShapeRouting, other.indexedShapeRouting)\n && Objects.equals(relation, other.relation)\n && Objects.equals(shape, other.shape)\n && Objects.equals(supplier, other.supplier)\n@@ -612,7 +656,7 @@ protected boolean doEquals(GeoShapeQueryBuilder other) {\n @Override\n protected int doHashCode() {\n return Objects.hash(fieldName, indexedShapeId, indexedShapeIndex,\n- indexedShapePath, indexedShapeType, relation, shape, strategy, ignoreUnmapped, supplier);\n+ indexedShapePath, indexedShapeType, indexedShapeRouting, relation, shape, strategy, ignoreUnmapped, supplier);\n }\n \n @Override\n@@ -629,6 +673,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryRewriteContext) throws\n SetOnce<ShapeBuilder> supplier = new SetOnce<>();\n queryRewriteContext.registerAsyncAction((client, listener) -> {\n GetRequest getRequest = new GetRequest(indexedShapeIndex, indexedShapeType, indexedShapeId);\n+ getRequest.routing(indexedShapeRouting);\n fetch(client, getRequest, indexedShapePath, ActionListener.wrap(builder-> {\n supplier.set(builder);\n listener.onResponse(null);", "filename": "server/src/main/java/org/elasticsearch/index/query/GeoShapeQueryBuilder.java", "status": "modified" }, { "diff": "@@ -59,6 +59,7 @@ public class GeoShapeQueryBuilderTests extends AbstractQueryTestCase<GeoShapeQue\n private static String indexedShapeType;\n private static String indexedShapePath;\n 
private static String indexedShapeIndex;\n+ private static String indexedShapeRouting;\n private static ShapeBuilder indexedShapeToReturn;\n \n @Override\n@@ -85,6 +86,10 @@ private GeoShapeQueryBuilder doCreateTestQueryBuilder(boolean indexedShape) {\n indexedShapePath = randomAlphaOfLengthBetween(3, 20);\n builder.indexedShapePath(indexedShapePath);\n }\n+ if (randomBoolean()) {\n+ indexedShapeRouting = randomAlphaOfLengthBetween(3, 20);\n+ builder.indexedShapeRouting(indexedShapeRouting);\n+ }\n }\n if (randomBoolean()) {\n SpatialStrategy strategy = randomFrom(SpatialStrategy.values());\n@@ -112,6 +117,7 @@ protected GetResponse executeGet(GetRequest getRequest) {\n assertThat(indexedShapeType, notNullValue());\n assertThat(getRequest.id(), equalTo(indexedShapeId));\n assertThat(getRequest.type(), equalTo(indexedShapeType));\n+ assertThat(getRequest.routing(), equalTo(indexedShapeRouting));\n String expectedShapeIndex = indexedShapeIndex == null ? GeoShapeQueryBuilder.DEFAULT_SHAPE_INDEX_NAME : indexedShapeIndex;\n assertThat(getRequest.index(), equalTo(expectedShapeIndex));\n String expectedShapePath = indexedShapePath == null ? GeoShapeQueryBuilder.DEFAULT_SHAPE_FIELD_NAME : indexedShapePath;\n@@ -136,6 +142,7 @@ public void clearShapeFields() {\n indexedShapeType = null;\n indexedShapePath = null;\n indexedShapeIndex = null;\n+ indexedShapeRouting = null;\n }\n \n @Override", "filename": "server/src/test/java/org/elasticsearch/index/query/GeoShapeQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.test.ESIntegTestCase;\n \n+import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n@@ -121,6 +122,43 @@ public void testIgnoreMalformed() throws Exception {\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L));\n }\n \n+ /**\n+ * Test that the indexed shape routing can be provided if it is required\n+ */\n+ public void testIndexShapeRouting() throws Exception {\n+ String mapping = \"{\\n\" +\n+ \" \\\"_routing\\\": {\\n\" +\n+ \" \\\"required\\\": true\\n\" +\n+ \" },\\n\" +\n+ \" \\\"properties\\\": {\\n\" +\n+ \" \\\"shape\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"geo_shape\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\";\n+\n+\n+ // create index\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").addMapping(\"doc\", mapping, XContentType.JSON).get());\n+ ensureGreen();\n+\n+ String source = \"{\\n\" +\n+ \" \\\"shape\\\" : {\\n\" +\n+ \" \\\"type\\\" : \\\"circle\\\",\\n\" +\n+ \" \\\"coordinates\\\" : [-45.0, 45.0],\\n\" +\n+ \" \\\"radius\\\" : \\\"100m\\\"\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ indexRandom(true, client().prepareIndex(\"test\", \"doc\", \"0\").setSource(source, XContentType.JSON).setRouting(\"ABC\"));\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(\n+ geoShapeQuery(\"shape\", \"0\", \"doc\").indexedShapeIndex(\"test\").indexedShapeRouting(\"ABC\")\n+ ).get();\n+\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(1L));\n+ }\n+\n private String findNodeName(String index) {\n ClusterState state = client().admin().cluster().prepareState().get().getState();\n IndexShardRoutingTable shard = state.getRoutingTable().index(index).shard(0);", "filename": 
"server/src/test/java/org/elasticsearch/search/geo/GeoShapeIntegrationIT.java", "status": "modified" } ] }
{ "body": "*Original comment by @astefan:*\n\nA query like `SELECT SCORE(), * FROM library WHERE match(author,'dan')` does calculate and output the score of the results, while something like `SELECT SCORE(), * FROM library WHERE match(author,'dan') AND (page_count IS NULL OR page_count > 200)` doesn't.\r\n\r\nThe reason seems to be the query in the first case it's simply creating a `match` while the second one is wrongly wrapping the same `match` query in a `bool`s `filter` (which is not calculating the score by default). While ok to wrap it in a `bool`, but maybe not inside a `filter`. I _think_ the same outcome can be obtained by using `must` statements instead of `filter` ones **only when** the scoring is needed (the presence of `SCORE()` anywhere in the query)?", "comments": [], "number": 29685, "title": "SQL: score is not calculated for a `match` statement when combined with filter-like statements" }
{ "body": "Make all bool constructs use match/should (that is a query context) as\r\nthat is controlled and changed to a filter context by ES automatically\r\nbased on the sort order (_doc, field vs _sort) and trackScores.\r\n\r\nFix #29685", "number": 30730, "review_comments": [], "title": "SQL: Preserve scoring in bool queries" }
{ "commits": [ { "message": "SQL: Preserve scoring in bool queries\n\nMake all bool constructs use match/should (that is a query context) as\nthat is controlled and changed to a filter context by ES automatically\nbased on the sort order (_doc, field vs _sort) and trackScores.\n\nFix 29685" } ], "files": [ { "diff": "@@ -5,8 +5,6 @@\n */\n package org.elasticsearch.xpack.sql.execution.search;\n \n-import org.elasticsearch.index.query.BoolQueryBuilder;\n-import org.elasticsearch.index.query.ConstantScoreQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.AggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;\n@@ -28,6 +26,7 @@\n import java.util.List;\n \n import static java.util.Collections.singletonList;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.search.sort.SortBuilders.fieldSort;\n import static org.elasticsearch.search.sort.SortBuilders.scoreSort;\n import static org.elasticsearch.search.sort.SortBuilders.scriptSort;\n@@ -37,20 +36,23 @@ public abstract class SourceGenerator {\n private static final List<String> NO_STORED_FIELD = singletonList(StoredFieldsContext._NONE_);\n \n public static SearchSourceBuilder sourceBuilder(QueryContainer container, QueryBuilder filter, Integer size) {\n- final SearchSourceBuilder source = new SearchSourceBuilder();\n+ QueryBuilder finalQuery = null;\n // add the source\n- if (container.query() == null) {\n+ if (container.query() != null) {\n if (filter != null) {\n- source.query(new ConstantScoreQueryBuilder(filter));\n+ finalQuery = boolQuery().must(container.query().asBuilder()).filter(filter);\n+ } else {\n+ finalQuery = container.query().asBuilder();\n }\n } else {\n if (filter != null) {\n- source.query(new BoolQueryBuilder().must(container.query().asBuilder()).filter(filter));\n- } else {\n- source.query(container.query().asBuilder());\n+ finalQuery = boolQuery().filter(filter);\n }\n }\n \n+ final SearchSourceBuilder source = new SearchSourceBuilder();\n+ source.query(finalQuery);\n+\n SqlSourceBuilder sortBuilder = new SqlSourceBuilder();\n // Iterate through all the columns requested, collecting the fields that\n // need to be retrieved from the result documents", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/execution/search/SourceGenerator.java", "status": "modified" }, { "diff": "@@ -5,13 +5,13 @@\n */\n package org.elasticsearch.xpack.sql.querydsl.query;\n \n-import java.util.Objects;\n-\n import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.sort.NestedSortBuilder;\n import org.elasticsearch.xpack.sql.tree.Location;\n \n+import java.util.Objects;\n+\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n \n /**\n@@ -63,9 +63,8 @@ public void enrichNestedSort(NestedSortBuilder sort) {\n public QueryBuilder asBuilder() {\n BoolQueryBuilder boolQuery = boolQuery();\n if (isAnd) {\n- // TODO are we throwing out score by using filter?\n- boolQuery.filter(left.asBuilder());\n- boolQuery.filter(right.asBuilder());\n+ boolQuery.must(left.asBuilder());\n+ boolQuery.must(right.asBuilder());\n } else {\n boolQuery.should(left.asBuilder());\n boolQuery.should(right.asBuilder());", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/querydsl/query/BoolQuery.java", "status": "modified" }, { "diff": "@@ -5,9 +5,6 @@\n */\n 
package org.elasticsearch.xpack.sql.execution.search;\n \n-import org.elasticsearch.index.query.BoolQueryBuilder;\n-import org.elasticsearch.index.query.ConstantScoreQueryBuilder;\n-import org.elasticsearch.index.query.MatchQueryBuilder;\n import org.elasticsearch.index.query.Operator;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.AggregatorFactories.Builder;\n@@ -28,6 +25,8 @@\n import org.elasticsearch.xpack.sql.type.KeywordEsField;\n \n import static java.util.Collections.singletonList;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.search.sort.SortBuilders.fieldSort;\n import static org.elasticsearch.search.sort.SortBuilders.scoreSort;\n \n@@ -42,22 +41,22 @@ public void testNoQueryNoFilter() {\n public void testQueryNoFilter() {\n QueryContainer container = new QueryContainer().with(new MatchQuery(Location.EMPTY, \"foo\", \"bar\"));\n SearchSourceBuilder sourceBuilder = SourceGenerator.sourceBuilder(container, null, randomIntBetween(1, 10));\n- assertEquals(new MatchQueryBuilder(\"foo\", \"bar\").operator(Operator.OR), sourceBuilder.query());\n+ assertEquals(matchQuery(\"foo\", \"bar\").operator(Operator.OR), sourceBuilder.query());\n }\n \n public void testNoQueryFilter() {\n QueryContainer container = new QueryContainer();\n- QueryBuilder filter = new MatchQueryBuilder(\"bar\", \"baz\");\n+ QueryBuilder filter = matchQuery(\"bar\", \"baz\");\n SearchSourceBuilder sourceBuilder = SourceGenerator.sourceBuilder(container, filter, randomIntBetween(1, 10));\n- assertEquals(new ConstantScoreQueryBuilder(new MatchQueryBuilder(\"bar\", \"baz\")), sourceBuilder.query());\n+ assertEquals(boolQuery().filter(matchQuery(\"bar\", \"baz\")), sourceBuilder.query());\n }\n \n public void testQueryFilter() {\n QueryContainer container = new QueryContainer().with(new MatchQuery(Location.EMPTY, \"foo\", \"bar\"));\n- QueryBuilder filter = new MatchQueryBuilder(\"bar\", \"baz\");\n+ QueryBuilder filter = matchQuery(\"bar\", \"baz\");\n SearchSourceBuilder sourceBuilder = SourceGenerator.sourceBuilder(container, filter, randomIntBetween(1, 10));\n- assertEquals(new BoolQueryBuilder().must(new MatchQueryBuilder(\"foo\", \"bar\").operator(Operator.OR))\n- .filter(new MatchQueryBuilder(\"bar\", \"baz\")), sourceBuilder.query());\n+ assertEquals(boolQuery().must(matchQuery(\"foo\", \"bar\").operator(Operator.OR)).filter(matchQuery(\"bar\", \"baz\")),\n+ sourceBuilder.query());\n }\n \n public void testLimit() {", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/execution/search/SourceGeneratorTests.java", "status": "modified" }, { "diff": "@@ -16,7 +16,6 @@\n import org.elasticsearch.client.Response;\n import org.elasticsearch.client.ResponseException;\n import org.elasticsearch.common.CheckedSupplier;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.Streams;\n import org.elasticsearch.common.xcontent.XContentHelper;\n@@ -33,12 +32,11 @@\n import java.sql.JDBCType;\n import java.util.Arrays;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n-import java.util.TreeMap;\n \n import static java.util.Collections.emptyList;\n-import static java.util.Collections.emptyMap;\n import static java.util.Collections.singletonList;\n import static java.util.Collections.singletonMap;\n 
import static java.util.Collections.unmodifiableMap;\n@@ -396,19 +394,23 @@ public void testBasicTranslateQueryWithFilter() throws IOException {\n assertNotNull(query);\n \n @SuppressWarnings(\"unchecked\")\n- Map<String, Object> constantScore = (Map<String, Object>) query.get(\"constant_score\");\n- assertNotNull(constantScore);\n+ Map<String, Object> bool = (Map<String, Object>) query.get(\"bool\");\n+ assertNotNull(bool);\n \n @SuppressWarnings(\"unchecked\")\n- Map<String, Object> filter = (Map<String, Object>) constantScore.get(\"filter\");\n+ List<Object> filter = (List<Object>) bool.get(\"filter\");\n assertNotNull(filter);\n \n @SuppressWarnings(\"unchecked\")\n- Map<String, Object> match = (Map<String, Object>) filter.get(\"match\");\n- assertNotNull(match);\n+ Map<String, Object> map = (Map<String, Object>) filter.get(0);\n+ assertNotNull(map);\n \n @SuppressWarnings(\"unchecked\")\n- Map<String, Object> matchQuery = (Map<String, Object>) match.get(\"test\");\n+ Map<String, Object> matchQ = (Map<String, Object>) map.get(\"match\");\n+\n+ @SuppressWarnings(\"unchecked\")\n+ Map<String, Object> matchQuery = (Map<String, Object>) matchQ.get(\"test\");\n+\n assertNotNull(matchQuery);\n assertEquals(\"foo\", matchQuery.get(\"query\"));\n }", "filename": "x-pack/qa/sql/src/main/java/org/elasticsearch/xpack/qa/sql/rest/RestSqlTestCase.java", "status": "modified" } ] }
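The gist of the change is which bool clause the translated `match` ends up in. A minimal sketch, using only the stock `QueryBuilders` helpers and the field and term from the original issue, contrasts the scoring and non-scoring forms:

```java
import static org.elasticsearch.index.query.QueryBuilders.boolQuery;
import static org.elasticsearch.index.query.QueryBuilders.matchQuery;

import org.elasticsearch.index.query.BoolQueryBuilder;

public class ScoringContextExample {

    public static void main(String[] args) {
        // Query context: the match clause contributes to _score.
        // This is what the SQL translation now produces for WHERE match(...).
        BoolQueryBuilder scoring = boolQuery()
            .must(matchQuery("author", "dan"));

        // Filter context: the same clause only filters, its score is ignored.
        // This is what the old filter/constant_score based translation produced.
        BoolQueryBuilder nonScoring = boolQuery()
            .filter(matchQuery("author", "dan"));

        System.out.println(scoring);
        System.out.println(nonScoring);
    }
}
```

The PR description notes that Elasticsearch rewrites the must clause into a filter context automatically when scores are not needed (based on the sort and track_scores), so always emitting must does not penalize non-scoring queries.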
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**:\r\n5.2.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nResult is IllegalArgumentException. Expected empty strings to be ignored.\r\n\r\nIs this an issue? If not, is there a recommended resolution?\r\n\r\n**Steps to reproduce**:\r\nIn Sense:\r\n```\r\nPUT twitter\r\n{\r\n \"mappings\": {\r\n \"tweet\": {\r\n \"properties\": {\r\n \"message\": {\r\n \"type\": \"completion\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT twitter/tweet/3\r\n{\r\n \"message\" : \"\"\r\n}\r\n```\r\nEdit: Updated description to simpler example.", "comments": [ { "body": "I agree - empty string should be ignored.\r\n\r\nSimpler recreation:\r\n\r\n```\r\nPUT twitter\r\n{\r\n \"mappings\": {\r\n \"tweet\": {\r\n \"properties\": {\r\n \"message\": {\r\n \"type\": \"completion\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT twitter/tweet/3\r\n{\r\n \"message\" : \"\"\r\n}\r\n```", "created_at": "2017-02-12T12:16:24Z" }, { "body": "Have the same issue. Does anyone have a solution for that?", "created_at": "2018-01-01T15:18:10Z" }, { "body": "@elastic/es-search-aggs \r\n\r\nJim has a PR open for this #28289", "created_at": "2018-03-22T23:19:36Z" } ], "number": 23121, "title": "Empty string with completion type results in IllegalArgumentException" }
{ "body": "This change makes sure that an empty completion input does not throw an IAE when indexing.\r\nInstead the input is ignored and the completion field is added in the list of ignored fields\r\nfor the document.\r\n\r\nCloses #23121", "number": 30713, "review_comments": [], "title": "Ignore empty completion input" }
{ "commits": [ { "message": "Ignore empty completion input\n\nThis change makes sure that an empty completion input does not throw an IAE when indexing.\nInstead the input is ignored and the completion field is added in the list of ignored fields\nfor the document.\n\nCloses #23121" } ], "files": [ { "diff": "@@ -449,6 +449,10 @@ public Mapper parse(ParseContext context) throws IOException {\n // index\n for (Map.Entry<String, CompletionInputMetaData> completionInput : inputMap.entrySet()) {\n String input = completionInput.getKey();\n+ if (input.trim().isEmpty()) {\n+ context.addIgnoredField(fieldType.name());\n+ continue;\n+ }\n // truncate input\n if (input.length() > maxInputLength) {\n int len = Math.min(maxInputLength, input.length());", "filename": "server/src/main/java/org/elasticsearch/index/mapper/CompletionFieldMapper.java", "status": "modified" }, { "diff": "@@ -397,6 +397,19 @@ public void testFieldValueValidation() throws Exception {\n assertThat(cause, instanceOf(IllegalArgumentException.class));\n assertThat(cause.getMessage(), containsString(\"[0x1e]\"));\n }\n+\n+ // empty inputs are ignored\n+ ParsedDocument doc = defaultMapper.parse(SourceToParse.source(\"test\", \"type1\", \"1\", BytesReference\n+ .bytes(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .array(\"completion\", \" \", \"\")\n+ .endObject()),\n+ XContentType.JSON));\n+ assertThat(doc.docs().size(), equalTo(1));\n+ assertNull(doc.docs().get(0).get(\"completion\"));\n+ assertNotNull(doc.docs().get(0).getField(\"_ignored\"));\n+ IndexableField ignoredFields = doc.docs().get(0).getField(\"_ignored\");\n+ assertThat(ignoredFields.stringValue(), equalTo(\"completion\"));\n }\n \n public void testPrefixQueryType() throws Exception {", "filename": "server/src/test/java/org/elasticsearch/index/mapper/CompletionFieldMapperTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion: 6.2.4, Build: ccec39f/2018-04-12T20:37:28.497551Z\r\n\r\n**Plugins installed**:\r\nx-pack\r\n x-pack-core\r\n x-pack-deprecation\r\n x-pack-graph\r\n x-pack-logstash\r\n x-pack-ml\r\n x-pack-monitoring\r\n x-pack-security\r\n x-pack-upgrade\r\n x-pack-watcher\r\n\r\n**JVM version** (`java -version`):\r\njava 9\r\nJava(TM) SE Runtime Environment (build 9+181)\r\nJava HotSpot(TM) 64-Bit Server VM (build 9+181, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nWindows 10\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nRunning a query with field-level security on a subfield of a nested array field and asking for inner_hits can cause an index out of bounds exception.\r\n\r\nExpected behavior: no error. (If this is a known limitation of either inner_hits or field-level security then I couldn't see it in the documentation.)\r\n\r\n**Steps to reproduce**:\r\n```\r\n# create a new index\r\ncurl -XPUT localhost:9200/test\r\n\r\n# define a mapping\r\ncurl -XPUT localhost:9200/test/_mapping/document -d '{\r\n \"properties\": {\r\n \"document_name\": {\r\n \"type\": \"text\"\r\n },\r\n \"metadata\": {\r\n \"type\": \"nested\",\r\n \"properties\": {\r\n \"name\": {\"type\": \"text\"},\r\n \"sv\": {\"type\": \"text\"},\r\n \"nv\": {\"type\": \"double\"}\r\n }\r\n }\r\n }\r\n}'\r\n\r\n# index a document:\r\n# it has one 'metadata' value where nv is populated and another where sv is populated\r\ncurl -XPOST localhost:9200/test/document -d '{\r\n \"document_name\": \"testing\",\r\n \"metadata\": [\r\n {\r\n \"name\": \"numeric\",\r\n \"nv\": 0.2\r\n },\r\n {\r\n \"name\": \"string\",\r\n \"sv\": \"problem\"\r\n }\r\n ]\r\n}'\r\n\r\n# run this query; it should retrieve the document okay including the inner_hits\r\n# metadata[1] is the inner hit which matches; metadata[0] doesn't match\r\ncurl -XPOST localhost:9200/test/_search -d '{\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"nested\": {\r\n \"path\": \"metadata\",\r\n \"inner_hits\":{},\r\n \"score_mode\": \"avg\",\r\n \"query\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"match\": {\r\n \"metadata.sv\": \"problem\"\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n}'\r\n\r\n# create a role which can only see metadata.sv\r\ncurl -XPUT localhost:9200/_xpack/security/role/reproduce -d '{\r\n \"indices\": [\r\n {\r\n \"names\": [ \"test\" ],\r\n \"privileges\": [ \"read\" ],\r\n \"field_security\" : {\r\n \"grant\": [ \"metadata.sv\" ]\r\n }\r\n } \r\n ]\r\n}'\r\n\r\n# create a user in that role\r\ncurl -XPUT localhost:9200/_xpack/security/user/repro-user -d '{\r\n \"roles\": [\"reproduce\"],\r\n \"password\": \"whatever\",\r\n \"full_name\": \"Repro User\",\r\n \"email\": \"blah@blah.blah\"\r\n}'\r\n```\r\n\r\nNow, if you rerun the previous query but authenticating as this new user, instead of getting results, it fails:\r\n```\r\n\"failures\": [\r\n {\r\n ...\r\n \"reason\": {\r\n \"type\": \"index_out_of_bounds_exception\",\r\n \"reason\": \"Index 1 out-of-bounds for length 1\"\r\n }\r\n }\r\n]\r\n```\r\n\r\nRemoving `\"inner_hits\":{}` from the query will make it work.\r\n\r\nMy guess is that it figures out that `metadata[1]` is the inner hit which matches, then strips out the fields you aren't allowed to see, which removes `metadata[0]` since it doesn't have a `metadata.sv` field, which leaves the `metadata` array one element long instead of two, so when it then looks up 
`metadata[1]` to put it in the inner hits node, it goes bang?\r\n", "comments": [ { "body": "Pinging @elastic/es-security", "created_at": "2018-05-15T18:15:41Z" }, { "body": "There's what seems to be a related issue where you don't get an error, but you do get the wrong inner hit back.\r\n\r\nIf you repeat the above steps, but instead index the document\r\n```\r\n{\r\n\t\"document_name\": \"followup testing\",\r\n\t\"metadata\": [\r\n\t\t{\r\n\t\t\t\"name\": \"numeric\",\r\n\t\t\t\"nv\": 0.1\r\n\t\t},\r\n\t\t{\r\n\t\t\t\"name\": \"string\",\r\n\t\t\t\"sv\": \"problem\"\r\n\t\t},\r\n\t\t{\r\n\t\t\t\"name\": \"second\",\r\n\t\t\t\"sv\": \"kitten\"\r\n\t\t}\r\n\t]\r\n}\r\n```\r\n\r\nand rerun the search which throws an exception, you no longer get an exception, but the \"inner_hits\" part of the response contains the \"kitten\" metadata field instead of the \"problem\" one which you actually searched for.\r\n\r\nThis seems consistent with the guess that it locates the array index of the inner hit before doing the filtering, and looks it up properly afterwards.\r\n", "created_at": "2018-05-16T08:53:44Z" }, { "body": "@james-uffindell-granta Thanks for reporting this problem. The inner hit source filtering is not working correctly with field level security.\r\n\r\nStacktrace of the first request:\r\n\r\n```\r\n[elasticsearch] [2018-05-16T17:56:48,844][WARN ][r.suppressed ] path: /test/_search, params: {index=test}\r\n[elasticsearch] org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed\r\n[elasticsearch] at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase.access$100(InitialSearchPhase.java:48) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase$2.lambda$onFailure$1(InitialSearchPhase.java:222) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase.maybeFork(InitialSearchPhase.java:176) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:48) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.InitialSearchPhase$2.onFailure(InitialSearchPhase.java:222) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:73) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:51) 
[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:535) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1095) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1188) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1172) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:66) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.SearchTransportService$6$1.onFailure(SearchTransportService.java:393) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService$2.onFailure(SearchService.java:340) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:334) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:328) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService$3.doRun(SearchService.java:1015) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) [?:?]\r\n[elasticsearch] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]\r\n[elasticsearch] at java.lang.Thread.run(Thread.java:844) [?:?]\r\n[elasticsearch] Caused by: org.elasticsearch.ElasticsearchException$1: Index 1 out-of-bounds for length 1\r\n[elasticsearch] at org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:658) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:131) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] ... 
26 more\r\n[elasticsearch] Caused by: java.lang.IndexOutOfBoundsException: Index 1 out-of-bounds for length 1\r\n[elasticsearch] at jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64) ~[?:?]\r\n[elasticsearch] at jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70) ~[?:?]\r\n[elasticsearch] at jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248) ~[?:?]\r\n[elasticsearch] at java.util.Objects.checkIndex(Objects.java:372) ~[?:?]\r\n[elasticsearch] at java.util.ArrayList.get(ArrayList.java:440) ~[?:?]\r\n[elasticsearch] at org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:295) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:153) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:69) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:170) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:392) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:367) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n[elasticsearch] at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:332) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n```", "created_at": "2018-05-16T16:00:12Z" } ], "number": 30624, "title": "Field-level security with inner_hits can cause index out-of-bounds exception" }
{ "body": "Prior to this change an json array element with no fields would be omitted from json array.\r\nNested inner hits source filtering relies on the fact that the json array element numbering\r\nremains untouched and this causes AOOB exceptions on the ES side during the fetch phase\r\nwithout this change.\r\n\r\nCloses #30624", "number": 30709, "review_comments": [ { "body": "Maybe use the empty filteredValue directly ?", "created_at": "2018-05-18T07:12:50Z" }, { "body": "Good point. I've updated the PR.", "created_at": "2018-05-18T07:59:58Z" } ], "title": "[Security] Include an empty json object in an json array when FLS filters out all fields" }
{ "commits": [ { "message": "[Security] Include an empty json object in an json array when FLS filters out all fields\n\nPrior to this change an json array element with no fields would be omitted from json array.\nNested inner hits source filtering relies on the fact that the json array element numbering\nremains untouched and this causes AOOB exceptions in the ES side during the fetch phase\nwithout this change.\n\nCloses #30624" }, { "message": "iter" } ], "files": [ { "diff": "@@ -193,9 +193,7 @@ private static List<Object> filter(Iterable<?> iterable, CharacterRunAutomaton i\n continue;\n }\n Map<String, Object> filteredValue = filter((Map<String, ?>)value, includeAutomaton, state);\n- if (filteredValue.isEmpty() == false) {\n- filtered.add(filteredValue);\n- }\n+ filtered.add(filteredValue);\n } else if (value instanceof Iterable) {\n List<Object> filteredValue = filter((Iterable<?>) value, includeAutomaton, initialState);\n if (filteredValue.isEmpty() == false) {", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/accesscontrol/FieldSubsetReader.java", "status": "modified" }, { "diff": "@@ -716,6 +716,22 @@ public void testSourceFiltering() {\n expected.put(\"foo\", subArray);\n \n assertEquals(expected, filtered);\n+\n+ // json array objects that have no matching fields should be left empty instead of being removed:\n+ // (otherwise nested inner hit source filtering fails with AOOB)\n+ map = new HashMap<>();\n+ map.put(\"foo\", \"value\");\n+ List<Map<?, ?>> values = new ArrayList<>();\n+ values.add(Collections.singletonMap(\"foo\", \"1\"));\n+ values.add(Collections.singletonMap(\"baz\", \"2\"));\n+ map.put(\"bar\", values);\n+\n+ include = new CharacterRunAutomaton(Automatons.patterns(\"bar.baz\"));\n+ filtered = FieldSubsetReader.filter(map, include, 0);\n+\n+ expected = new HashMap<>();\n+ expected.put(\"bar\", Arrays.asList(new HashMap<>(), Collections.singletonMap(\"baz\", \"2\")));\n+ assertEquals(expected, filtered);\n }\n \n /**", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/security/authz/accesscontrol/FieldSubsetReaderTests.java", "status": "modified" } ] }
{ "body": "It appears to me that we do not have protections to ensure that multiple http requests over a single connection are handled correctly when http pipelining is not enabled.\r\n\r\nEssentially if a client sends multiple requests, we will receive those requests, parse them, and dispatch them to other threads for handling. Without the pipelining enabled, the first request to be handled will be the first response written back to the channel. This leads to out of order responses.\r\n\r\nI'm not immediately sure what the fix is. We could always enable pipelining. Or maybe the server should send a something like a 501 if it receives a requests before the initial response has been returned?", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-12T21:28:40Z" }, { "body": "Hi, can i fork and try playing with this issue???", "created_at": "2018-04-12T23:58:43Z" }, { "body": "When HTTP pipelining is disabled, a client that is sending a request before it has received the response for a previous request is misbehaving with respect to Elasticsearch (I am not saying they are misbehaving with respect to the HTTP/1.1 specification).\r\n\r\n> We could always enable pipelining.\r\n\r\nThe default is for it to be enabled, and I agree the germane question is whether or not we should remove the ability for it to be disabled.\r\n\r\nFrom the HTTP/1.1 specification:\r\n\r\n> A client that supports persistent connections MAY \"pipeline\" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.\r\n\r\nWith this, I lean towards saying that we remove the ability to disable pipelining (i.e., pipelining is always enabled).", "created_at": "2018-04-13T00:19:31Z" }, { "body": "> The default is for it to be enabled, and I agree the germane question is whether or not we should remove the ability for it to be disabled.\r\n\r\nRight. I worded it poorly, but removing the option for pipelining to be disabled is what I was suggesting.\r\n\r\n> With this, I lean towards saying that we remove the ability to disable pipelining (i.e., pipelining is always enabled).\r\n\r\nI agree. Users could still set `http.pipelining.max_events` to a lower number if they want to limit inflight requests.", "created_at": "2018-04-13T01:55:38Z" }, { "body": "@tbrooks8 Let us move forward with removing the ability to disable pipelining. We can deprecate in 6.x.", "created_at": "2018-04-13T02:16:51Z" }, { "body": "These tasks need to be completed:\r\n\r\n- [x] Remove the http.pipelining setting. All http modules must support pipelining\r\n- [x] Cleanup all the references to enabled or disabled pipelining the integration test code\r\n- [x] Deprecate in 6.x", "created_at": "2018-05-17T16:55:04Z" }, { "body": "Closing this as pipelining support is now mandator in 7.0 and the setting is deprecated in 6.x.", "created_at": "2018-06-04T15:42:52Z" } ], "number": 29500, "title": "Http server does not handle multiple requests correctly when pipeling disabled" }
{ "body": "This is related to #29500 and #28898. This commit removes the abilitiy\r\nto disable http pipelining. After this commit, any elasticsearch node\r\nwill support pipelined requests from a client. Additionally, it extracts\r\nsome of the http pipelining work to the server module. This extracted \r\nwork is used to implement pipelining for the nio plugin.", "number": 30695, "review_comments": [ { "body": "hmm why do you close this twice? and if so should we use try/finally?", "created_at": "2018-05-18T08:10:25Z" }, { "body": "can this also throw an IOException?", "created_at": "2018-05-18T08:10:55Z" }, { "body": "if you rename this method to something else, it will be run first after the test and the `super.tearDown` will be run after that without the need to specify it (unless I missed something obvious here).", "created_at": "2018-05-18T08:31:05Z" }, { "body": "Did not mean to close twice. Removing the second throw.", "created_at": "2018-05-18T15:52:37Z" }, { "body": "No. It does not throw a checked exception. ", "created_at": "2018-05-18T15:52:53Z" }, { "body": "It passes it on to the next write handler in the netty pipeline.", "created_at": "2018-05-18T15:53:11Z" } ], "title": "Make http pipelining support mandatory" }
{ "commits": [ { "message": "Require pipelining" }, { "message": "Remove need for synchronized" }, { "message": "Work on nio pipelining" }, { "message": "IMplement nio pipelining" }, { "message": "Some reworks" }, { "message": "Remove references to setting" }, { "message": "tests" }, { "message": "Rename handler" }, { "message": "Add assertions and release request" }, { "message": "Merge remote-tracking branch 'upstream/master' into add_pipelining_to_nio" }, { "message": "Fix checkstyle" }, { "message": "Merge remote-tracking branch 'upstream/master' into add_pipelining_to_nio" }, { "message": "Merge remote-tracking branch 'upstream/master' into add_pipelining_to_nio" }, { "message": "Changes for review" }, { "message": "Merge remote-tracking branch 'upstream/master' into add_pipelining_to_nio" } ], "files": [ { "diff": "@@ -29,6 +29,14 @@\n [[remove-http-enabled]]\n ==== Http enabled setting removed\n \n-The setting `http.enabled` previously allowed disabling binding to HTTP, only allowing\n+* The setting `http.enabled` previously allowed disabling binding to HTTP, only allowing\n use of the transport client. This setting has been removed, as the transport client\n will be removed in the future, thus requiring HTTP to always be enabled.\n+\n+[[remove-http-pipelining-setting]]\n+==== Http pipelining setting removed\n+\n+* The setting `http.pipelining` previously allowed disabling HTTP pipelining support.\n+This setting has been removed, as disabling http pipelining support on the server\n+provided little value. The setting `http.pipelining.max_events` can still be used to\n+limit the number of pipelined requests in-flight.", "filename": "docs/reference/migration/migrate_7_0/settings.asciidoc", "status": "modified" }, { "diff": "@@ -96,8 +96,6 @@ and stack traces in response output. Note: When set to `false` and the `error_tr\n parameter is specified, an error will be returned; when `error_trace` is not specified, a\n simple message will be returned. Defaults to `true`\n \n-|`http.pipelining` |Enable or disable HTTP pipelining, defaults to `true`.\n-\n |`http.pipelining.max_events` |The maximum number of events to be queued up in memory before a HTTP connection is closed, defaults to `10000`.\n \n |`http.max_warning_header_count` |The maximum number of warning headers in", "filename": "docs/reference/modules/http.asciidoc", "status": "modified" }, { "diff": "@@ -42,7 +42,6 @@\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n import org.elasticsearch.http.HttpHandlingSettings;\n import org.elasticsearch.http.netty4.cors.Netty4CorsHandler;\n-import org.elasticsearch.http.netty4.pipelining.HttpPipelinedRequest;\n import org.elasticsearch.rest.AbstractRestChannel;\n import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.RestStatus;\n@@ -59,29 +58,24 @@ final class Netty4HttpChannel extends AbstractRestChannel {\n private final Netty4HttpServerTransport transport;\n private final Channel channel;\n private final FullHttpRequest nettyRequest;\n- private final HttpPipelinedRequest pipelinedRequest;\n+ private final int sequence;\n private final ThreadContext threadContext;\n private final HttpHandlingSettings handlingSettings;\n \n /**\n- * @param transport The corresponding <code>NettyHttpServerTransport</code> where this channel belongs to.\n- * @param request The request that is handled by this channel.\n- * @param pipelinedRequest If HTTP pipelining is enabled provide the corresponding pipelined request. 
May be null if\n- * HTTP pipelining is disabled.\n- * @param handlingSettings true iff error messages should include stack traces.\n- * @param threadContext the thread context for the channel\n+ * @param transport The corresponding <code>NettyHttpServerTransport</code> where this channel belongs to.\n+ * @param request The request that is handled by this channel.\n+ * @param sequence The pipelining sequence number for this request\n+ * @param handlingSettings true if error messages should include stack traces.\n+ * @param threadContext the thread context for the channel\n */\n- Netty4HttpChannel(\n- final Netty4HttpServerTransport transport,\n- final Netty4HttpRequest request,\n- final HttpPipelinedRequest pipelinedRequest,\n- final HttpHandlingSettings handlingSettings,\n- final ThreadContext threadContext) {\n+ Netty4HttpChannel(Netty4HttpServerTransport transport, Netty4HttpRequest request, int sequence, HttpHandlingSettings handlingSettings,\n+ ThreadContext threadContext) {\n super(request, handlingSettings.getDetailedErrorsEnabled());\n this.transport = transport;\n this.channel = request.getChannel();\n this.nettyRequest = request.request();\n- this.pipelinedRequest = pipelinedRequest;\n+ this.sequence = sequence;\n this.threadContext = threadContext;\n this.handlingSettings = handlingSettings;\n }\n@@ -129,7 +123,7 @@ public void sendResponse(RestResponse response) {\n final ChannelPromise promise = channel.newPromise();\n \n if (releaseContent) {\n- promise.addListener(f -> ((Releasable)content).close());\n+ promise.addListener(f -> ((Releasable) content).close());\n }\n \n if (releaseBytesStreamOutput) {\n@@ -140,13 +134,9 @@ public void sendResponse(RestResponse response) {\n promise.addListener(ChannelFutureListener.CLOSE);\n }\n \n- final Object msg;\n- if (pipelinedRequest != null) {\n- msg = pipelinedRequest.createHttpResponse(resp, promise);\n- } else {\n- msg = resp;\n- }\n- channel.writeAndFlush(msg, promise);\n+ Netty4HttpResponse newResponse = new Netty4HttpResponse(sequence, resp);\n+\n+ channel.writeAndFlush(newResponse, promise);\n releaseContent = false;\n releaseBytesStreamOutput = false;\n } finally {\n@@ -156,9 +146,6 @@ public void sendResponse(RestResponse response) {\n if (releaseBytesStreamOutput) {\n bytesOutputOrNull().close();\n }\n- if (pipelinedRequest != null) {\n- pipelinedRequest.release();\n- }\n }\n }\n ", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpChannel.java", "status": "modified" }, { "diff": "@@ -0,0 +1,102 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.netty4;\n+\n+import io.netty.channel.ChannelDuplexHandler;\n+import io.netty.channel.ChannelHandlerContext;\n+import io.netty.channel.ChannelPromise;\n+import io.netty.handler.codec.http.LastHttpContent;\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n+import org.elasticsearch.http.HttpPipeliningAggregator;\n+import org.elasticsearch.transport.netty4.Netty4Utils;\n+\n+import java.nio.channels.ClosedChannelException;\n+import java.util.Collections;\n+import java.util.List;\n+\n+/**\n+ * Implements HTTP pipelining ordering, ensuring that responses are completely served in the same order as their corresponding requests.\n+ */\n+public class Netty4HttpPipeliningHandler extends ChannelDuplexHandler {\n+\n+ private final Logger logger;\n+ private final HttpPipeliningAggregator<Netty4HttpResponse, ChannelPromise> aggregator;\n+\n+ /**\n+ * Construct a new pipelining handler; this handler should be used downstream of HTTP decoding/aggregation.\n+ *\n+ * @param logger for logging unexpected errors\n+ * @param maxEventsHeld the maximum number of channel events that will be retained prior to aborting the channel connection; this is\n+ * required as events cannot queue up indefinitely\n+ */\n+ public Netty4HttpPipeliningHandler(Logger logger, final int maxEventsHeld) {\n+ this.logger = logger;\n+ this.aggregator = new HttpPipeliningAggregator<>(maxEventsHeld);\n+ }\n+\n+ @Override\n+ public void channelRead(final ChannelHandlerContext ctx, final Object msg) {\n+ if (msg instanceof LastHttpContent) {\n+ HttpPipelinedRequest<LastHttpContent> pipelinedRequest = aggregator.read(((LastHttpContent) msg).retain());\n+ ctx.fireChannelRead(pipelinedRequest);\n+ } else {\n+ ctx.fireChannelRead(msg);\n+ }\n+ }\n+\n+ @Override\n+ public void write(final ChannelHandlerContext ctx, final Object msg, final ChannelPromise promise) {\n+ assert msg instanceof Netty4HttpResponse : \"Message must be type: \" + Netty4HttpResponse.class;\n+ Netty4HttpResponse response = (Netty4HttpResponse) msg;\n+ boolean success = false;\n+ try {\n+ List<Tuple<Netty4HttpResponse, ChannelPromise>> readyResponses = aggregator.write(response, promise);\n+ for (Tuple<Netty4HttpResponse, ChannelPromise> readyResponse : readyResponses) {\n+ ctx.write(readyResponse.v1().getResponse(), readyResponse.v2());\n+ }\n+ success = true;\n+ } catch (IllegalStateException e) {\n+ ctx.channel().close();\n+ } finally {\n+ if (success == false) {\n+ promise.setFailure(new ClosedChannelException());\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public void close(ChannelHandlerContext ctx, ChannelPromise promise) {\n+ List<Tuple<Netty4HttpResponse, ChannelPromise>> inflightResponses = aggregator.removeAllInflightResponses();\n+\n+ if (inflightResponses.isEmpty() == false) {\n+ ClosedChannelException closedChannelException = new ClosedChannelException();\n+ for (Tuple<Netty4HttpResponse, ChannelPromise> inflightResponse : inflightResponses) {\n+ try {\n+ inflightResponse.v2().setFailure(closedChannelException);\n+ } catch (RuntimeException e) {\n+ logger.error(\"unexpected error while releasing pipelined http responses\", e);\n+ }\n+ }\n+ }\n+ ctx.close(promise);\n+ }\n+}", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpPipeliningHandler.java", "status": "added" }, { "diff": "@@ 
-30,41 +30,30 @@\n import io.netty.handler.codec.http.HttpHeaders;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n import org.elasticsearch.http.HttpHandlingSettings;\n-import org.elasticsearch.http.netty4.pipelining.HttpPipelinedRequest;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.transport.netty4.Netty4Utils;\n \n import java.util.Collections;\n \n @ChannelHandler.Sharable\n-class Netty4HttpRequestHandler extends SimpleChannelInboundHandler<Object> {\n+class Netty4HttpRequestHandler extends SimpleChannelInboundHandler<HttpPipelinedRequest<FullHttpRequest>> {\n \n private final Netty4HttpServerTransport serverTransport;\n private final HttpHandlingSettings handlingSettings;\n- private final boolean httpPipeliningEnabled;\n private final ThreadContext threadContext;\n \n Netty4HttpRequestHandler(Netty4HttpServerTransport serverTransport, HttpHandlingSettings handlingSettings,\n ThreadContext threadContext) {\n this.serverTransport = serverTransport;\n- this.httpPipeliningEnabled = serverTransport.pipelining;\n this.handlingSettings = handlingSettings;\n this.threadContext = threadContext;\n }\n \n @Override\n- protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {\n- final FullHttpRequest request;\n- final HttpPipelinedRequest pipelinedRequest;\n- if (this.httpPipeliningEnabled && msg instanceof HttpPipelinedRequest) {\n- pipelinedRequest = (HttpPipelinedRequest) msg;\n- request = (FullHttpRequest) pipelinedRequest.last();\n- } else {\n- pipelinedRequest = null;\n- request = (FullHttpRequest) msg;\n- }\n+ protected void channelRead0(ChannelHandlerContext ctx, HttpPipelinedRequest<FullHttpRequest> msg) throws Exception {\n+ final FullHttpRequest request = msg.getRequest();\n \n- boolean success = false;\n try {\n \n final FullHttpRequest copy =\n@@ -111,7 +100,7 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except\n Netty4HttpChannel innerChannel;\n try {\n innerChannel =\n- new Netty4HttpChannel(serverTransport, httpRequest, pipelinedRequest, handlingSettings, threadContext);\n+ new Netty4HttpChannel(serverTransport, httpRequest, msg.getSequence(), handlingSettings, threadContext);\n } catch (final IllegalArgumentException e) {\n if (badRequestCause == null) {\n badRequestCause = e;\n@@ -126,7 +115,7 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except\n copy,\n ctx.channel());\n innerChannel =\n- new Netty4HttpChannel(serverTransport, innerRequest, pipelinedRequest, handlingSettings, threadContext);\n+ new Netty4HttpChannel(serverTransport, innerRequest, msg.getSequence(), handlingSettings, threadContext);\n }\n channel = innerChannel;\n }\n@@ -138,12 +127,9 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except\n } else {\n serverTransport.dispatchRequest(httpRequest, channel);\n }\n- success = true;\n } finally {\n- // the request is otherwise released in case of dispatch\n- if (success == false && pipelinedRequest != null) {\n- pipelinedRequest.release();\n- }\n+ // As we have copied the buffer, we can release the request\n+ request.release();\n }\n }\n ", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java", "status": "modified" }, { "diff": "@@ -0,0 +1,37 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.netty4;\n+\n+import io.netty.handler.codec.http.FullHttpResponse;\n+import org.elasticsearch.http.HttpPipelinedMessage;\n+\n+public class Netty4HttpResponse extends HttpPipelinedMessage {\n+\n+ private final FullHttpResponse response;\n+\n+ public Netty4HttpResponse(int sequence, FullHttpResponse response) {\n+ super(sequence);\n+ this.response = response;\n+ }\n+\n+ public FullHttpResponse getResponse() {\n+ return response;\n+ }\n+}", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpResponse.java", "status": "added" }, { "diff": "@@ -62,7 +62,6 @@\n import org.elasticsearch.http.netty4.cors.Netty4CorsConfig;\n import org.elasticsearch.http.netty4.cors.Netty4CorsConfigBuilder;\n import org.elasticsearch.http.netty4.cors.Netty4CorsHandler;\n-import org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler;\n import org.elasticsearch.rest.RestUtils;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.netty4.Netty4OpenChannelsHandler;\n@@ -99,7 +98,6 @@\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_REUSE_ADDRESS;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_SEND_BUFFER_SIZE;\n-import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING_MAX_EVENTS;\n import static org.elasticsearch.http.netty4.cors.Netty4CorsHandler.ANY_ORIGIN;\n \n@@ -162,8 +160,6 @@ public class Netty4HttpServerTransport extends AbstractHttpServerTransport {\n \n protected final int workerCount;\n \n- protected final boolean pipelining;\n-\n protected final int pipeliningMaxEvents;\n \n /**\n@@ -204,14 +200,16 @@ public Netty4HttpServerTransport(Settings settings, NetworkService networkServic\n this.maxChunkSize = SETTING_HTTP_MAX_CHUNK_SIZE.get(settings);\n this.maxHeaderSize = SETTING_HTTP_MAX_HEADER_SIZE.get(settings);\n this.maxInitialLineLength = SETTING_HTTP_MAX_INITIAL_LINE_LENGTH.get(settings);\n+ this.pipeliningMaxEvents = SETTING_PIPELINING_MAX_EVENTS.get(settings);\n this.httpHandlingSettings = new HttpHandlingSettings(Math.toIntExact(maxContentLength.getBytes()),\n Math.toIntExact(maxChunkSize.getBytes()),\n Math.toIntExact(maxHeaderSize.getBytes()),\n Math.toIntExact(maxInitialLineLength.getBytes()),\n SETTING_HTTP_RESET_COOKIES.get(settings),\n SETTING_HTTP_COMPRESSION.get(settings),\n SETTING_HTTP_COMPRESSION_LEVEL.get(settings),\n- SETTING_HTTP_DETAILED_ERRORS_ENABLED.get(settings));\n+ SETTING_HTTP_DETAILED_ERRORS_ENABLED.get(settings),\n+ pipeliningMaxEvents);\n \n this.maxCompositeBufferComponents = 
SETTING_HTTP_NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS.get(settings);\n this.workerCount = SETTING_HTTP_WORKER_COUNT.get(settings);\n@@ -226,14 +224,12 @@ public Netty4HttpServerTransport(Settings settings, NetworkService networkServic\n ByteSizeValue receivePredictor = SETTING_HTTP_NETTY_RECEIVE_PREDICTOR_SIZE.get(settings);\n recvByteBufAllocator = new FixedRecvByteBufAllocator(receivePredictor.bytesAsInt());\n \n- this.pipelining = SETTING_PIPELINING.get(settings);\n- this.pipeliningMaxEvents = SETTING_PIPELINING_MAX_EVENTS.get(settings);\n this.corsConfig = buildCorsConfig(settings);\n \n logger.debug(\"using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], max_content_length[{}], \" +\n- \"receive_predictor[{}], max_composite_buffer_components[{}], pipelining[{}], pipelining_max_events[{}]\",\n- maxChunkSize, maxHeaderSize, maxInitialLineLength, this.maxContentLength, receivePredictor, maxCompositeBufferComponents,\n- pipelining, pipeliningMaxEvents);\n+ \"receive_predictor[{}], max_composite_buffer_components[{}], pipelining_max_events[{}]\",\n+ maxChunkSize, maxHeaderSize, maxInitialLineLength, maxContentLength, receivePredictor, maxCompositeBufferComponents,\n+ pipeliningMaxEvents);\n }\n \n public Settings settings() {\n@@ -452,9 +448,7 @@ protected void initChannel(Channel ch) throws Exception {\n if (SETTING_CORS_ENABLED.get(transport.settings())) {\n ch.pipeline().addLast(\"cors\", new Netty4CorsHandler(transport.getCorsConfig()));\n }\n- if (transport.pipelining) {\n- ch.pipeline().addLast(\"pipelining\", new HttpPipeliningHandler(transport.logger, transport.pipeliningMaxEvents));\n- }\n+ ch.pipeline().addLast(\"pipelining\", new Netty4HttpPipeliningHandler(transport.logger, transport.pipeliningMaxEvents));\n ch.pipeline().addLast(\"handler\", requestHandler);\n }\n ", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java", "status": "modified" }, { "diff": "@@ -60,7 +60,6 @@\n import org.elasticsearch.http.HttpTransportSettings;\n import org.elasticsearch.http.NullDispatcher;\n import org.elasticsearch.http.netty4.cors.Netty4CorsHandler;\n-import org.elasticsearch.http.netty4.pipelining.HttpPipelinedRequest;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.elasticsearch.rest.BytesRestResponse;\n import org.elasticsearch.rest.RestResponse;\n@@ -212,12 +211,12 @@ public void testHeadersSet() {\n final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, \"/\");\n httpRequest.headers().add(HttpHeaderNames.ORIGIN, \"remote\");\n final WriteCapturingChannel writeCapturingChannel = new WriteCapturingChannel();\n- Netty4HttpRequest request = new Netty4HttpRequest(xContentRegistry(), httpRequest, writeCapturingChannel);\n+ final Netty4HttpRequest request = new Netty4HttpRequest(xContentRegistry(), httpRequest, writeCapturingChannel);\n HttpHandlingSettings handlingSettings = httpServerTransport.httpHandlingSettings;\n \n // send a response\n Netty4HttpChannel channel =\n- new Netty4HttpChannel(httpServerTransport, request, null, handlingSettings, threadPool.getThreadContext());\n+ new Netty4HttpChannel(httpServerTransport, request, 1, handlingSettings, threadPool.getThreadContext());\n TestResponse resp = new TestResponse();\n final String customHeader = \"custom-header\";\n final String customHeaderValue = \"xyz\";\n@@ -227,7 +226,7 @@ public void testHeadersSet() {\n // inspect what was written\n List<Object> writtenObjects = 
writeCapturingChannel.getWrittenObjects();\n assertThat(writtenObjects.size(), is(1));\n- HttpResponse response = (HttpResponse) writtenObjects.get(0);\n+ HttpResponse response = ((Netty4HttpResponse) writtenObjects.get(0)).getResponse();\n assertThat(response.headers().get(\"non-existent-header\"), nullValue());\n assertThat(response.headers().get(customHeader), equalTo(customHeaderValue));\n assertThat(response.headers().get(HttpHeaderNames.CONTENT_LENGTH), equalTo(Integer.toString(resp.content().length())));\n@@ -243,10 +242,9 @@ public void testReleaseOnSendToClosedChannel() {\n final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, \"/\");\n final EmbeddedChannel embeddedChannel = new EmbeddedChannel();\n final Netty4HttpRequest request = new Netty4HttpRequest(registry, httpRequest, embeddedChannel);\n- final HttpPipelinedRequest pipelinedRequest = randomBoolean() ? new HttpPipelinedRequest(request.request(), 1) : null;\n HttpHandlingSettings handlingSettings = httpServerTransport.httpHandlingSettings;\n final Netty4HttpChannel channel =\n- new Netty4HttpChannel(httpServerTransport, request, pipelinedRequest, handlingSettings, threadPool.getThreadContext());\n+ new Netty4HttpChannel(httpServerTransport, request, 1, handlingSettings, threadPool.getThreadContext());\n final TestResponse response = new TestResponse(bigArrays);\n assertThat(response.content(), instanceOf(Releasable.class));\n embeddedChannel.close();\n@@ -263,10 +261,9 @@ public void testReleaseOnSendToChannelAfterException() throws IOException {\n final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, \"/\");\n final EmbeddedChannel embeddedChannel = new EmbeddedChannel();\n final Netty4HttpRequest request = new Netty4HttpRequest(registry, httpRequest, embeddedChannel);\n- final HttpPipelinedRequest pipelinedRequest = randomBoolean() ? 
new HttpPipelinedRequest(request.request(), 1) : null;\n HttpHandlingSettings handlingSettings = httpServerTransport.httpHandlingSettings;\n final Netty4HttpChannel channel =\n- new Netty4HttpChannel(httpServerTransport, request, pipelinedRequest, handlingSettings, threadPool.getThreadContext());\n+ new Netty4HttpChannel(httpServerTransport, request, 1, handlingSettings, threadPool.getThreadContext());\n final BytesRestResponse response = new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR,\n JsonXContent.contentBuilder().startObject().endObject());\n assertThat(response.content(), not(instanceOf(Releasable.class)));\n@@ -312,7 +309,7 @@ public void testConnectionClose() throws Exception {\n assertTrue(embeddedChannel.isOpen());\n HttpHandlingSettings handlingSettings = httpServerTransport.httpHandlingSettings;\n final Netty4HttpChannel channel =\n- new Netty4HttpChannel(httpServerTransport, request, null, handlingSettings, threadPool.getThreadContext());\n+ new Netty4HttpChannel(httpServerTransport, request, 1, handlingSettings, threadPool.getThreadContext());\n final TestResponse resp = new TestResponse();\n channel.sendResponse(resp);\n assertThat(embeddedChannel.isOpen(), equalTo(!close));\n@@ -340,13 +337,13 @@ private FullHttpResponse executeRequest(final Settings settings, final String or\n HttpHandlingSettings handlingSettings = httpServerTransport.httpHandlingSettings;\n \n Netty4HttpChannel channel =\n- new Netty4HttpChannel(httpServerTransport, request, null, handlingSettings, threadPool.getThreadContext());\n+ new Netty4HttpChannel(httpServerTransport, request, 1, handlingSettings, threadPool.getThreadContext());\n channel.sendResponse(new TestResponse());\n \n // get the response\n List<Object> writtenObjects = writeCapturingChannel.getWrittenObjects();\n assertThat(writtenObjects.size(), is(1));\n- return (FullHttpResponse) writtenObjects.get(0);\n+ return ((Netty4HttpResponse) writtenObjects.get(0)).getResponse();\n }\n }\n ", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java", "status": "modified" }, { "diff": "@@ -38,9 +38,9 @@\n import org.elasticsearch.common.util.MockBigArrays;\n import org.elasticsearch.common.util.MockPageCacheRecycler;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n import org.elasticsearch.http.HttpServerTransport;\n import org.elasticsearch.http.NullDispatcher;\n-import org.elasticsearch.http.netty4.pipelining.HttpPipelinedRequest;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.elasticsearch.test.ESTestCase;\n import org.elasticsearch.threadpool.TestThreadPool;\n@@ -52,16 +52,11 @@\n import java.util.ArrayList;\n import java.util.Collection;\n import java.util.Collections;\n-import java.util.HashSet;\n import java.util.List;\n-import java.util.Set;\n import java.util.concurrent.ExecutorService;\n import java.util.concurrent.Executors;\n \n-import static org.elasticsearch.test.hamcrest.RegexMatcher.matches;\n-import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.Matchers.contains;\n-import static org.hamcrest.Matchers.hasSize;\n \n /**\n * This test just tests, if he pipelining works in general with out any connection the Elasticsearch handler\n@@ -85,9 +80,8 @@ public void shutdown() throws Exception {\n }\n }\n \n- public void testThatHttpPipeliningWorksWhenEnabled() throws Exception {\n+ public void testThatHttpPipeliningWorks() throws Exception {\n 
final Settings settings = Settings.builder()\n- .put(\"http.pipelining\", true)\n .put(\"http.port\", \"0\")\n .build();\n try (HttpServerTransport httpServerTransport = new CustomNettyHttpServerTransport(settings)) {\n@@ -112,48 +106,6 @@ public void testThatHttpPipeliningWorksWhenEnabled() throws Exception {\n }\n }\n \n- public void testThatHttpPipeliningCanBeDisabled() throws Exception {\n- final Settings settings = Settings.builder()\n- .put(\"http.pipelining\", false)\n- .put(\"http.port\", \"0\")\n- .build();\n- try (HttpServerTransport httpServerTransport = new CustomNettyHttpServerTransport(settings)) {\n- httpServerTransport.start();\n- final TransportAddress transportAddress = randomFrom(httpServerTransport.boundAddress().boundAddresses());\n-\n- final int numberOfRequests = randomIntBetween(4, 16);\n- final Set<Integer> slowIds = new HashSet<>();\n- final List<String> requests = new ArrayList<>(numberOfRequests);\n- for (int i = 0; i < numberOfRequests; i++) {\n- if (rarely()) {\n- requests.add(\"/slow/\" + i);\n- slowIds.add(i);\n- } else {\n- requests.add(\"/\" + i);\n- }\n- }\n-\n- try (Netty4HttpClient nettyHttpClient = new Netty4HttpClient()) {\n- Collection<FullHttpResponse> responses = nettyHttpClient.get(transportAddress.address(), requests.toArray(new String[]{}));\n- List<String> responseBodies = new ArrayList<>(Netty4HttpClient.returnHttpResponseBodies(responses));\n- // we can not be sure about the order of the responses, but the slow ones should come last\n- assertThat(responseBodies, hasSize(numberOfRequests));\n- for (int i = 0; i < numberOfRequests - slowIds.size(); i++) {\n- assertThat(responseBodies.get(i), matches(\"/\\\\d+\"));\n- }\n-\n- final Set<Integer> ids = new HashSet<>();\n- for (int i = 0; i < slowIds.size(); i++) {\n- final String response = responseBodies.get(numberOfRequests - slowIds.size() + i);\n- assertThat(response, matches(\"/slow/\\\\d+\" ));\n- assertTrue(ids.add(Integer.parseInt(response.split(\"/\")[2])));\n- }\n-\n- assertThat(slowIds, equalTo(ids));\n- }\n- }\n- }\n-\n class CustomNettyHttpServerTransport extends Netty4HttpServerTransport {\n \n private final ExecutorService executorService = Executors.newCachedThreadPool();\n@@ -196,7 +148,7 @@ protected void initChannel(Channel ch) throws Exception {\n \n }\n \n- class PossiblySlowUpstreamHandler extends SimpleChannelInboundHandler<Object> {\n+ class PossiblySlowUpstreamHandler extends SimpleChannelInboundHandler<HttpPipelinedRequest<FullHttpRequest>> {\n \n private final ExecutorService executorService;\n \n@@ -205,7 +157,7 @@ class PossiblySlowUpstreamHandler extends SimpleChannelInboundHandler<Object> {\n }\n \n @Override\n- protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {\n+ protected void channelRead0(ChannelHandlerContext ctx, HttpPipelinedRequest<FullHttpRequest> msg) throws Exception {\n executorService.submit(new PossiblySlowRunnable(ctx, msg));\n }\n \n@@ -220,26 +172,18 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E\n class PossiblySlowRunnable implements Runnable {\n \n private ChannelHandlerContext ctx;\n- private HttpPipelinedRequest pipelinedRequest;\n+ private HttpPipelinedRequest<FullHttpRequest> pipelinedRequest;\n private FullHttpRequest fullHttpRequest;\n \n- PossiblySlowRunnable(ChannelHandlerContext ctx, Object msg) {\n+ PossiblySlowRunnable(ChannelHandlerContext ctx, HttpPipelinedRequest<FullHttpRequest> msg) {\n this.ctx = ctx;\n- if (msg instanceof HttpPipelinedRequest) {\n- 
this.pipelinedRequest = (HttpPipelinedRequest) msg;\n- } else if (msg instanceof FullHttpRequest) {\n- this.fullHttpRequest = (FullHttpRequest) msg;\n- }\n+ this.pipelinedRequest = msg;\n+ this.fullHttpRequest = pipelinedRequest.getRequest();\n }\n \n @Override\n public void run() {\n- final String uri;\n- if (pipelinedRequest != null && pipelinedRequest.last() instanceof FullHttpRequest) {\n- uri = ((FullHttpRequest) pipelinedRequest.last()).uri();\n- } else {\n- uri = fullHttpRequest.uri();\n- }\n+ final String uri = fullHttpRequest.uri();\n \n final ByteBuf buffer = Unpooled.copiedBuffer(uri, StandardCharsets.UTF_8);\n \n@@ -258,13 +202,7 @@ public void run() {\n }\n \n final ChannelPromise promise = ctx.newPromise();\n- final Object msg;\n- if (pipelinedRequest != null) {\n- msg = pipelinedRequest.createHttpResponse(httpResponse, promise);\n- } else {\n- msg = httpResponse;\n- }\n- ctx.writeAndFlush(msg, promise);\n+ ctx.writeAndFlush(new Netty4HttpResponse(pipelinedRequest.getSequence(), httpResponse), promise);\n }\n \n }", "filename": "modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerPipeliningTests.java", "status": "modified" }, { "diff": "@@ -25,20 +25,21 @@\n import io.netty.handler.codec.http.DefaultFullHttpRequest;\n import io.netty.handler.codec.http.DefaultHttpHeaders;\n import io.netty.handler.codec.http.FullHttpRequest;\n-import io.netty.handler.codec.http.FullHttpResponse;\n import io.netty.handler.codec.http.HttpContentCompressor;\n import io.netty.handler.codec.http.HttpContentDecompressor;\n import io.netty.handler.codec.http.HttpHeaders;\n import io.netty.handler.codec.http.HttpObjectAggregator;\n import io.netty.handler.codec.http.HttpRequestDecoder;\n import io.netty.handler.codec.http.HttpResponseEncoder;\n+import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.http.HttpHandlingSettings;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n import org.elasticsearch.nio.FlushOperation;\n import org.elasticsearch.nio.InboundChannelBuffer;\n-import org.elasticsearch.nio.ReadWriteHandler;\n import org.elasticsearch.nio.NioSocketChannel;\n+import org.elasticsearch.nio.ReadWriteHandler;\n import org.elasticsearch.nio.SocketChannelContext;\n import org.elasticsearch.nio.WriteOperation;\n import org.elasticsearch.rest.RestRequest;\n@@ -77,6 +78,7 @@ public class HttpReadWriteHandler implements ReadWriteHandler {\n if (settings.isCompression()) {\n handlers.add(new HttpContentCompressor(settings.getCompressionLevel()));\n }\n+ handlers.add(new NioHttpPipeliningHandler(transport.getLogger(), settings.getPipeliningMaxEvents()));\n \n adaptor = new NettyAdaptor(handlers.toArray(new ChannelHandler[0]));\n adaptor.addCloseListener((v, e) -> nioChannel.close());\n@@ -95,9 +97,9 @@ public int consumeReads(InboundChannelBuffer channelBuffer) throws IOException {\n \n @Override\n public WriteOperation createWriteOperation(SocketChannelContext context, Object message, BiConsumer<Void, Throwable> listener) {\n- assert message instanceof FullHttpResponse : \"This channel only supports messages that are of type: \" + FullHttpResponse.class\n- + \". 
Found type: \" + message.getClass() + \".\";\n- return new HttpWriteOperation(context, (FullHttpResponse) message, listener);\n+ assert message instanceof NioHttpResponse : \"This channel only supports messages that are of type: \"\n+ + NioHttpResponse.class + \". Found type: \" + message.getClass() + \".\";\n+ return new HttpWriteOperation(context, (NioHttpResponse) message, listener);\n }\n \n @Override\n@@ -125,76 +127,85 @@ public void close() throws IOException {\n }\n }\n \n+ @SuppressWarnings(\"unchecked\")\n private void handleRequest(Object msg) {\n- final FullHttpRequest request = (FullHttpRequest) msg;\n+ final HttpPipelinedRequest<FullHttpRequest> pipelinedRequest = (HttpPipelinedRequest<FullHttpRequest>) msg;\n+ FullHttpRequest request = pipelinedRequest.getRequest();\n \n- final FullHttpRequest copiedRequest =\n- new DefaultFullHttpRequest(\n- request.protocolVersion(),\n- request.method(),\n- request.uri(),\n- Unpooled.copiedBuffer(request.content()),\n- request.headers(),\n- request.trailingHeaders());\n-\n- Exception badRequestCause = null;\n-\n- /*\n- * We want to create a REST request from the incoming request from Netty. However, creating this request could fail if there\n- * are incorrectly encoded parameters, or the Content-Type header is invalid. If one of these specific failures occurs, we\n- * attempt to create a REST request again without the input that caused the exception (e.g., we remove the Content-Type header,\n- * or skip decoding the parameters). Once we have a request in hand, we then dispatch the request as a bad request with the\n- * underlying exception that caused us to treat the request as bad.\n- */\n- final NioHttpRequest httpRequest;\n- {\n- NioHttpRequest innerHttpRequest;\n- try {\n- innerHttpRequest = new NioHttpRequest(xContentRegistry, copiedRequest);\n- } catch (final RestRequest.ContentTypeHeaderException e) {\n- badRequestCause = e;\n- innerHttpRequest = requestWithoutContentTypeHeader(copiedRequest, badRequestCause);\n- } catch (final RestRequest.BadParameterException e) {\n- badRequestCause = e;\n- innerHttpRequest = requestWithoutParameters(copiedRequest);\n+ try {\n+ final FullHttpRequest copiedRequest =\n+ new DefaultFullHttpRequest(\n+ request.protocolVersion(),\n+ request.method(),\n+ request.uri(),\n+ Unpooled.copiedBuffer(request.content()),\n+ request.headers(),\n+ request.trailingHeaders());\n+\n+ Exception badRequestCause = null;\n+\n+ /*\n+ * We want to create a REST request from the incoming request from Netty. However, creating this request could fail if there\n+ * are incorrectly encoded parameters, or the Content-Type header is invalid. If one of these specific failures occurs, we\n+ * attempt to create a REST request again without the input that caused the exception (e.g., we remove the Content-Type header,\n+ * or skip decoding the parameters). 
Once we have a request in hand, we then dispatch the request as a bad request with the\n+ * underlying exception that caused us to treat the request as bad.\n+ */\n+ final NioHttpRequest httpRequest;\n+ {\n+ NioHttpRequest innerHttpRequest;\n+ try {\n+ innerHttpRequest = new NioHttpRequest(xContentRegistry, copiedRequest);\n+ } catch (final RestRequest.ContentTypeHeaderException e) {\n+ badRequestCause = e;\n+ innerHttpRequest = requestWithoutContentTypeHeader(copiedRequest, badRequestCause);\n+ } catch (final RestRequest.BadParameterException e) {\n+ badRequestCause = e;\n+ innerHttpRequest = requestWithoutParameters(copiedRequest);\n+ }\n+ httpRequest = innerHttpRequest;\n }\n- httpRequest = innerHttpRequest;\n- }\n \n- /*\n- * We now want to create a channel used to send the response on. However, creating this channel can fail if there are invalid\n- * parameter values for any of the filter_path, human, or pretty parameters. We detect these specific failures via an\n- * IllegalArgumentException from the channel constructor and then attempt to create a new channel that bypasses parsing of these\n- * parameter values.\n- */\n- final NioHttpChannel channel;\n- {\n- NioHttpChannel innerChannel;\n- try {\n- innerChannel = new NioHttpChannel(nioChannel, transport.getBigArrays(), httpRequest, settings, threadContext);\n- } catch (final IllegalArgumentException e) {\n- if (badRequestCause == null) {\n- badRequestCause = e;\n- } else {\n- badRequestCause.addSuppressed(e);\n+ /*\n+ * We now want to create a channel used to send the response on. However, creating this channel can fail if there are invalid\n+ * parameter values for any of the filter_path, human, or pretty parameters. We detect these specific failures via an\n+ * IllegalArgumentException from the channel constructor and then attempt to create a new channel that bypasses parsing of\n+ * these parameter values.\n+ */\n+ final NioHttpChannel channel;\n+ {\n+ NioHttpChannel innerChannel;\n+ int sequence = pipelinedRequest.getSequence();\n+ BigArrays bigArrays = transport.getBigArrays();\n+ try {\n+ innerChannel = new NioHttpChannel(nioChannel, bigArrays, httpRequest, sequence, settings, threadContext);\n+ } catch (final IllegalArgumentException e) {\n+ if (badRequestCause == null) {\n+ badRequestCause = e;\n+ } else {\n+ badRequestCause.addSuppressed(e);\n+ }\n+ final NioHttpRequest innerRequest =\n+ new NioHttpRequest(\n+ xContentRegistry,\n+ Collections.emptyMap(), // we are going to dispatch the request as a bad request, drop all parameters\n+ copiedRequest.uri(),\n+ copiedRequest);\n+ innerChannel = new NioHttpChannel(nioChannel, bigArrays, innerRequest, sequence, settings, threadContext);\n }\n- final NioHttpRequest innerRequest =\n- new NioHttpRequest(\n- xContentRegistry,\n- Collections.emptyMap(), // we are going to dispatch the request as a bad request, drop all parameters\n- copiedRequest.uri(),\n- copiedRequest);\n- innerChannel = new NioHttpChannel(nioChannel, transport.getBigArrays(), innerRequest, settings, threadContext);\n+ channel = innerChannel;\n }\n- channel = innerChannel;\n- }\n \n- if (request.decoderResult().isFailure()) {\n- transport.dispatchBadRequest(httpRequest, channel, request.decoderResult().cause());\n- } else if (badRequestCause != null) {\n- transport.dispatchBadRequest(httpRequest, channel, badRequestCause);\n- } else {\n- transport.dispatchRequest(httpRequest, channel);\n+ if (request.decoderResult().isFailure()) {\n+ transport.dispatchBadRequest(httpRequest, channel, 
request.decoderResult().cause());\n+ } else if (badRequestCause != null) {\n+ transport.dispatchBadRequest(httpRequest, channel, badRequestCause);\n+ } else {\n+ transport.dispatchRequest(httpRequest, channel);\n+ }\n+ } finally {\n+ // As we have copied the buffer, we can release the request\n+ request.release();\n }\n }\n ", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/HttpReadWriteHandler.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.http.nio;\n \n-import io.netty.handler.codec.http.FullHttpResponse;\n import org.elasticsearch.nio.SocketChannelContext;\n import org.elasticsearch.nio.WriteOperation;\n \n@@ -28,10 +27,10 @@\n public class HttpWriteOperation implements WriteOperation {\n \n private final SocketChannelContext channelContext;\n- private final FullHttpResponse response;\n+ private final NioHttpResponse response;\n private final BiConsumer<Void, Throwable> listener;\n \n- HttpWriteOperation(SocketChannelContext channelContext, FullHttpResponse response, BiConsumer<Void, Throwable> listener) {\n+ HttpWriteOperation(SocketChannelContext channelContext, NioHttpResponse response, BiConsumer<Void, Throwable> listener) {\n this.channelContext = channelContext;\n this.response = response;\n this.listener = listener;\n@@ -48,7 +47,7 @@ public SocketChannelContext getChannel() {\n }\n \n @Override\n- public FullHttpResponse getObject() {\n+ public NioHttpResponse getObject() {\n return response;\n }\n }", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/HttpWriteOperation.java", "status": "modified" }, { "diff": "@@ -53,12 +53,7 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)\n try {\n ByteBuf message = (ByteBuf) msg;\n promise.addListener((f) -> message.release());\n- NettyListener listener;\n- if (promise instanceof NettyListener) {\n- listener = (NettyListener) promise;\n- } else {\n- listener = new NettyListener(promise);\n- }\n+ NettyListener listener = NettyListener.fromChannelPromise(promise);\n flushOperations.add(new FlushOperation(message.nioBuffers(), listener));\n } catch (Exception e) {\n promise.setFailure(e);\n@@ -107,18 +102,7 @@ public Object pollInboundMessage() {\n }\n \n public void write(WriteOperation writeOperation) {\n- ChannelPromise channelPromise = nettyChannel.newPromise();\n- channelPromise.addListener(f -> {\n- BiConsumer<Void, Throwable> consumer = writeOperation.getListener();\n- if (f.cause() == null) {\n- consumer.accept(null, null);\n- } else {\n- ExceptionsHelper.dieOnError(f.cause());\n- consumer.accept(null, f.cause());\n- }\n- });\n-\n- nettyChannel.writeAndFlush(writeOperation.getObject(), new NettyListener(channelPromise));\n+ nettyChannel.writeAndFlush(writeOperation.getObject(), NettyListener.fromBiConsumer(writeOperation.getListener(), nettyChannel));\n }\n \n public FlushOperation pollOutboundOperation() {", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NettyAdaptor.java", "status": "modified" }, { "diff": "@@ -23,7 +23,7 @@\n import io.netty.channel.ChannelPromise;\n import io.netty.util.concurrent.Future;\n import io.netty.util.concurrent.GenericFutureListener;\n-import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n \n import java.util.concurrent.ExecutionException;\n@@ -40,7 +40,7 @@ public class NettyListener implements BiConsumer<Void, Throwable>, ChannelPromis\n \n 
private final ChannelPromise promise;\n \n- NettyListener(ChannelPromise promise) {\n+ private NettyListener(ChannelPromise promise) {\n this.promise = promise;\n }\n \n@@ -211,4 +211,30 @@ public boolean isVoid() {\n public ChannelPromise unvoid() {\n return promise.unvoid();\n }\n+\n+ public static NettyListener fromBiConsumer(BiConsumer<Void, Throwable> biConsumer, Channel channel) {\n+ if (biConsumer instanceof NettyListener) {\n+ return (NettyListener) biConsumer;\n+ } else {\n+ ChannelPromise channelPromise = channel.newPromise();\n+ channelPromise.addListener(f -> {\n+ if (f.cause() == null) {\n+ biConsumer.accept(null, null);\n+ } else {\n+ ExceptionsHelper.dieOnError(f.cause());\n+ biConsumer.accept(null, f.cause());\n+ }\n+ });\n+\n+ return new NettyListener(channelPromise);\n+ }\n+ }\n+\n+ public static NettyListener fromChannelPromise(ChannelPromise channelPromise) {\n+ if (channelPromise instanceof NettyListener) {\n+ return (NettyListener) channelPromise;\n+ } else {\n+ return new NettyListener(channelPromise);\n+ }\n+ }\n }", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NettyListener.java", "status": "modified" }, { "diff": "@@ -52,20 +52,23 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.function.BiConsumer;\n \n public class NioHttpChannel extends AbstractRestChannel {\n \n private final BigArrays bigArrays;\n+ private final int sequence;\n private final ThreadContext threadContext;\n private final FullHttpRequest nettyRequest;\n private final NioSocketChannel nioChannel;\n private final boolean resetCookies;\n \n- NioHttpChannel(NioSocketChannel nioChannel, BigArrays bigArrays, NioHttpRequest request,\n+ NioHttpChannel(NioSocketChannel nioChannel, BigArrays bigArrays, NioHttpRequest request, int sequence,\n HttpHandlingSettings settings, ThreadContext threadContext) {\n super(request, settings.getDetailedErrorsEnabled());\n this.nioChannel = nioChannel;\n this.bigArrays = bigArrays;\n+ this.sequence = sequence;\n this.threadContext = threadContext;\n this.nettyRequest = request.getRequest();\n this.resetCookies = settings.isResetCookies();\n@@ -117,9 +120,8 @@ public void sendResponse(RestResponse response) {\n toClose.add(nioChannel::close);\n }\n \n- nioChannel.getContext().sendMessage(resp, (aVoid, throwable) -> {\n- Releasables.close(toClose);\n- });\n+ BiConsumer<Void, Throwable> listener = (aVoid, throwable) -> Releasables.close(toClose);\n+ nioChannel.getContext().sendMessage(new NioHttpResponse(sequence, resp), listener);\n success = true;\n } finally {\n if (success == false) {", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NioHttpChannel.java", "status": "modified" }, { "diff": "@@ -0,0 +1,103 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.nio;\n+\n+import io.netty.channel.ChannelDuplexHandler;\n+import io.netty.channel.ChannelHandlerContext;\n+import io.netty.channel.ChannelPromise;\n+import io.netty.handler.codec.http.LastHttpContent;\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n+import org.elasticsearch.http.HttpPipeliningAggregator;\n+import org.elasticsearch.http.nio.NettyListener;\n+import org.elasticsearch.http.nio.NioHttpResponse;\n+\n+import java.nio.channels.ClosedChannelException;\n+import java.util.List;\n+\n+/**\n+ * Implements HTTP pipelining ordering, ensuring that responses are completely served in the same order as their corresponding requests.\n+ */\n+public class NioHttpPipeliningHandler extends ChannelDuplexHandler {\n+\n+ private final Logger logger;\n+ private final HttpPipeliningAggregator<NioHttpResponse, NettyListener> aggregator;\n+\n+ /**\n+ * Construct a new pipelining handler; this handler should be used downstream of HTTP decoding/aggregation.\n+ *\n+ * @param logger for logging unexpected errors\n+ * @param maxEventsHeld the maximum number of channel events that will be retained prior to aborting the channel connection; this is\n+ * required as events cannot queue up indefinitely\n+ */\n+ public NioHttpPipeliningHandler(Logger logger, final int maxEventsHeld) {\n+ this.logger = logger;\n+ this.aggregator = new HttpPipeliningAggregator<>(maxEventsHeld);\n+ }\n+\n+ @Override\n+ public void channelRead(final ChannelHandlerContext ctx, final Object msg) {\n+ if (msg instanceof LastHttpContent) {\n+ HttpPipelinedRequest<LastHttpContent> pipelinedRequest = aggregator.read(((LastHttpContent) msg).retain());\n+ ctx.fireChannelRead(pipelinedRequest);\n+ } else {\n+ ctx.fireChannelRead(msg);\n+ }\n+ }\n+\n+ @Override\n+ public void write(final ChannelHandlerContext ctx, final Object msg, final ChannelPromise promise) {\n+ assert msg instanceof NioHttpResponse : \"Message must be type: \" + NioHttpResponse.class;\n+ NioHttpResponse response = (NioHttpResponse) msg;\n+ boolean success = false;\n+ try {\n+ NettyListener listener = NettyListener.fromChannelPromise(promise);\n+ List<Tuple<NioHttpResponse, NettyListener>> readyResponses = aggregator.write(response, listener);\n+ success = true;\n+ for (Tuple<NioHttpResponse, NettyListener> responseToWrite : readyResponses) {\n+ ctx.write(responseToWrite.v1().getResponse(), responseToWrite.v2());\n+ }\n+ } catch (IllegalStateException e) {\n+ ctx.channel().close();\n+ } finally {\n+ if (success == false) {\n+ promise.setFailure(new ClosedChannelException());\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public void close(ChannelHandlerContext ctx, ChannelPromise promise) {\n+ List<Tuple<NioHttpResponse, NettyListener>> inflightResponses = aggregator.removeAllInflightResponses();\n+\n+ if (inflightResponses.isEmpty() == false) {\n+ ClosedChannelException closedChannelException = new ClosedChannelException();\n+ for (Tuple<NioHttpResponse, NettyListener> inflightResponse : inflightResponses) {\n+ try {\n+ inflightResponse.v2().setFailure(closedChannelException);\n+ } catch (RuntimeException e) {\n+ logger.error(\"unexpected error while releasing pipelined http responses\", e);\n+ }\n+ }\n+ }\n+ ctx.close(promise);\n+ }\n+}", "filename": 
"plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NioHttpPipeliningHandler.java", "status": "added" }, { "diff": "@@ -0,0 +1,37 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.nio;\n+\n+import io.netty.handler.codec.http.FullHttpResponse;\n+import org.elasticsearch.http.HttpPipelinedMessage;\n+\n+public class NioHttpResponse extends HttpPipelinedMessage {\n+\n+ private final FullHttpResponse response;\n+\n+ public NioHttpResponse(int sequence, FullHttpResponse response) {\n+ super(sequence);\n+ this.response = response;\n+ }\n+\n+ public FullHttpResponse getResponse() {\n+ return response;\n+ }\n+}", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NioHttpResponse.java", "status": "added" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.http.nio;\n \n import io.netty.handler.timeout.ReadTimeoutException;\n+import org.apache.logging.log4j.Logger;\n import org.apache.logging.log4j.message.ParameterizedMessage;\n import org.apache.logging.log4j.util.Supplier;\n import org.elasticsearch.ElasticsearchException;\n@@ -84,6 +85,7 @@\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_REUSE_ADDRESS;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_TCP_SEND_BUFFER_SIZE;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING_MAX_EVENTS;\n \n public class NioHttpServerTransport extends AbstractHttpServerTransport {\n \n@@ -124,14 +126,16 @@ public NioHttpServerTransport(Settings settings, NetworkService networkService,\n ByteSizeValue maxChunkSize = SETTING_HTTP_MAX_CHUNK_SIZE.get(settings);\n ByteSizeValue maxHeaderSize = SETTING_HTTP_MAX_HEADER_SIZE.get(settings);\n ByteSizeValue maxInitialLineLength = SETTING_HTTP_MAX_INITIAL_LINE_LENGTH.get(settings);\n+ int pipeliningMaxEvents = SETTING_PIPELINING_MAX_EVENTS.get(settings);\n this.httpHandlingSettings = new HttpHandlingSettings(Math.toIntExact(maxContentLength.getBytes()),\n Math.toIntExact(maxChunkSize.getBytes()),\n Math.toIntExact(maxHeaderSize.getBytes()),\n Math.toIntExact(maxInitialLineLength.getBytes()),\n SETTING_HTTP_RESET_COOKIES.get(settings),\n SETTING_HTTP_COMPRESSION.get(settings),\n SETTING_HTTP_COMPRESSION_LEVEL.get(settings),\n- SETTING_HTTP_DETAILED_ERRORS_ENABLED.get(settings));\n+ SETTING_HTTP_DETAILED_ERRORS_ENABLED.get(settings),\n+ pipeliningMaxEvents);\n \n this.tcpNoDelay = SETTING_HTTP_TCP_NO_DELAY.get(settings);\n this.tcpKeepAlive = SETTING_HTTP_TCP_KEEP_ALIVE.get(settings);\n@@ -140,14 +144,19 @@ public NioHttpServerTransport(Settings settings, NetworkService networkService,\n 
this.tcpReceiveBufferSize = Math.toIntExact(SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE.get(settings).getBytes());\n \n \n- logger.debug(\"using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], max_content_length[{}]\",\n- maxChunkSize, maxHeaderSize, maxInitialLineLength, maxContentLength);\n+ logger.debug(\"using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], max_content_length[{}],\" +\n+ \" pipelining_max_events[{}]\",\n+ maxChunkSize, maxHeaderSize, maxInitialLineLength, maxContentLength, pipeliningMaxEvents);\n }\n \n BigArrays getBigArrays() {\n return bigArrays;\n }\n \n+ public Logger getLogger() {\n+ return logger;\n+ }\n+\n @Override\n protected void doStart() {\n boolean success = false;", "filename": "plugins/transport-nio/src/main/java/org/elasticsearch/http/nio/NioHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.common.network.NetworkModule;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.http.nio.NioHttpServerTransport;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.transport.nio.NioTransport;\n@@ -43,11 +44,13 @@ protected boolean addMockTransportService() {\n @Override\n protected Settings nodeSettings(int nodeOrdinal) {\n Settings.Builder builder = Settings.builder().put(super.nodeSettings(nodeOrdinal));\n- // randomize netty settings\n+ // randomize nio settings\n if (randomBoolean()) {\n builder.put(NioTransport.NIO_WORKER_COUNT.getKey(), random().nextInt(3) + 1);\n+ builder.put(NioHttpServerTransport.NIO_HTTP_WORKER_COUNT.getKey(), random().nextInt(3) + 1);\n }\n builder.put(NetworkModule.TRANSPORT_TYPE_KEY, NioTransportPlugin.NIO_TRANSPORT_NAME);\n+ builder.put(NetworkModule.HTTP_TYPE_KEY, NioTransportPlugin.NIO_HTTP_TRANSPORT_NAME);\n return builder.build();\n }\n ", "filename": "plugins/transport-nio/src/test/java/org/elasticsearch/NioIntegTestCase.java", "status": "modified" }, { "diff": "@@ -61,11 +61,11 @@\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_MAX_HEADER_SIZE;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_MAX_INITIAL_LINE_LENGTH;\n import static org.elasticsearch.http.HttpTransportSettings.SETTING_HTTP_RESET_COOKIES;\n+import static org.elasticsearch.http.HttpTransportSettings.SETTING_PIPELINING_MAX_EVENTS;\n import static org.mockito.Matchers.any;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.times;\n import static org.mockito.Mockito.verify;\n-import static org.mockito.Mockito.verifyZeroInteractions;\n \n public class HttpReadWriteHandlerTests extends ESTestCase {\n \n@@ -91,7 +91,8 @@ public void setMocks() {\n SETTING_HTTP_RESET_COOKIES.getDefault(settings),\n SETTING_HTTP_COMPRESSION.getDefault(settings),\n SETTING_HTTP_COMPRESSION_LEVEL.getDefault(settings),\n- SETTING_HTTP_DETAILED_ERRORS_ENABLED.getDefault(settings));\n+ SETTING_HTTP_DETAILED_ERRORS_ENABLED.getDefault(settings),\n+ SETTING_PIPELINING_MAX_EVENTS.getDefault(settings));\n ThreadContext threadContext = new ThreadContext(settings);\n nioSocketChannel = mock(NioSocketChannel.class);\n handler = new HttpReadWriteHandler(nioSocketChannel, transport, httpHandlingSettings, NamedXContentRegistry.EMPTY, threadContext);\n@@ -148,7 +149,8 @@ public void testDecodeHttpRequestContentLengthToLongGeneratesOutboundMessage() t\n \n handler.consumeReads(toChannelBuffer(buf));\n \n- 
verifyZeroInteractions(transport);\n+ verify(transport, times(0)).dispatchBadRequest(any(), any(), any());\n+ verify(transport, times(0)).dispatchRequest(any(), any());\n \n List<FlushOperation> flushOperations = handler.pollFlushOperations();\n assertFalse(flushOperations.isEmpty());\n@@ -169,9 +171,10 @@ public void testEncodeHttpResponse() throws IOException {\n prepareHandlerForResponse(handler);\n \n FullHttpResponse fullHttpResponse = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);\n+ NioHttpResponse pipelinedResponse = new NioHttpResponse(0, fullHttpResponse);\n \n SocketChannelContext context = mock(SocketChannelContext.class);\n- HttpWriteOperation writeOperation = new HttpWriteOperation(context, fullHttpResponse, mock(BiConsumer.class));\n+ HttpWriteOperation writeOperation = new HttpWriteOperation(context, pipelinedResponse, mock(BiConsumer.class));\n List<FlushOperation> flushOperations = handler.writeToBytes(writeOperation);\n \n HttpResponse response = responseDecoder.decode(Unpooled.wrappedBuffer(flushOperations.get(0).getBuffersToWrite()));", "filename": "plugins/transport-nio/src/test/java/org/elasticsearch/http/nio/HttpReadWriteHandlerTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,304 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http.nio;\n+\n+import io.netty.buffer.ByteBuf;\n+import io.netty.buffer.ByteBufUtil;\n+import io.netty.buffer.Unpooled;\n+import io.netty.channel.ChannelHandlerContext;\n+import io.netty.channel.ChannelPromise;\n+import io.netty.channel.SimpleChannelInboundHandler;\n+import io.netty.channel.embedded.EmbeddedChannel;\n+import io.netty.handler.codec.http.DefaultFullHttpRequest;\n+import io.netty.handler.codec.http.DefaultFullHttpResponse;\n+import io.netty.handler.codec.http.DefaultHttpRequest;\n+import io.netty.handler.codec.http.FullHttpRequest;\n+import io.netty.handler.codec.http.FullHttpResponse;\n+import io.netty.handler.codec.http.HttpMethod;\n+import io.netty.handler.codec.http.HttpRequest;\n+import io.netty.handler.codec.http.HttpVersion;\n+import io.netty.handler.codec.http.LastHttpContent;\n+import io.netty.handler.codec.http.QueryStringDecoder;\n+import org.elasticsearch.common.Randomness;\n+import org.elasticsearch.http.HttpPipelinedRequest;\n+import org.elasticsearch.test.ESTestCase;\n+import org.junit.After;\n+\n+import java.nio.channels.ClosedChannelException;\n+import java.nio.charset.StandardCharsets;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Queue;\n+import java.util.concurrent.ConcurrentHashMap;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n+import java.util.concurrent.LinkedTransferQueue;\n+import java.util.concurrent.TimeUnit;\n+import java.util.stream.Collectors;\n+import java.util.stream.IntStream;\n+\n+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_LENGTH;\n+import static io.netty.handler.codec.http.HttpResponseStatus.OK;\n+import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;\n+import static org.hamcrest.core.Is.is;\n+\n+public class NioHttpPipeliningHandlerTests extends ESTestCase {\n+\n+ private final ExecutorService handlerService = Executors.newFixedThreadPool(randomIntBetween(4, 8));\n+ private final ExecutorService eventLoopService = Executors.newFixedThreadPool(1);\n+ private final Map<String, CountDownLatch> waitingRequests = new ConcurrentHashMap<>();\n+ private final Map<String, CountDownLatch> finishingRequests = new ConcurrentHashMap<>();\n+\n+ @After\n+ public void cleanup() throws Exception {\n+ waitingRequests.keySet().forEach(this::finishRequest);\n+ shutdownExecutorService();\n+ }\n+\n+ private CountDownLatch finishRequest(String url) {\n+ waitingRequests.get(url).countDown();\n+ return finishingRequests.get(url);\n+ }\n+\n+ private void shutdownExecutorService() throws InterruptedException {\n+ if (!handlerService.isShutdown()) {\n+ handlerService.shutdown();\n+ handlerService.awaitTermination(10, TimeUnit.SECONDS);\n+ }\n+ if (!eventLoopService.isShutdown()) {\n+ eventLoopService.shutdown();\n+ eventLoopService.awaitTermination(10, TimeUnit.SECONDS);\n+ }\n+ }\n+\n+ public void testThatPipeliningWorksWithFastSerializedRequests() throws InterruptedException {\n+ final int numberOfRequests = randomIntBetween(2, 128);\n+ final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new NioHttpPipeliningHandler(logger, numberOfRequests),\n+ new WorkEmulatorHandler());\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ embeddedChannel.writeInbound(createHttpRequest(\"/\" + String.valueOf(i)));\n+ }\n+\n+ final List<CountDownLatch> 
latches = new ArrayList<>();\n+ for (final String url : waitingRequests.keySet()) {\n+ latches.add(finishRequest(url));\n+ }\n+\n+ for (final CountDownLatch latch : latches) {\n+ latch.await();\n+ }\n+\n+ embeddedChannel.flush();\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ assertReadHttpMessageHasContent(embeddedChannel, String.valueOf(i));\n+ }\n+\n+ assertTrue(embeddedChannel.isOpen());\n+ }\n+\n+ public void testThatPipeliningWorksWhenSlowRequestsInDifferentOrder() throws InterruptedException {\n+ final int numberOfRequests = randomIntBetween(2, 128);\n+ final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new NioHttpPipeliningHandler(logger, numberOfRequests),\n+ new WorkEmulatorHandler());\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ embeddedChannel.writeInbound(createHttpRequest(\"/\" + String.valueOf(i)));\n+ }\n+\n+ // random order execution\n+ final List<String> urls = new ArrayList<>(waitingRequests.keySet());\n+ Randomness.shuffle(urls);\n+ final List<CountDownLatch> latches = new ArrayList<>();\n+ for (final String url : urls) {\n+ latches.add(finishRequest(url));\n+ }\n+\n+ for (final CountDownLatch latch : latches) {\n+ latch.await();\n+ }\n+\n+ embeddedChannel.flush();\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ assertReadHttpMessageHasContent(embeddedChannel, String.valueOf(i));\n+ }\n+\n+ assertTrue(embeddedChannel.isOpen());\n+ }\n+\n+ public void testThatPipeliningWorksWithChunkedRequests() throws InterruptedException {\n+ final int numberOfRequests = randomIntBetween(2, 128);\n+ final EmbeddedChannel embeddedChannel =\n+ new EmbeddedChannel(\n+ new AggregateUrisAndHeadersHandler(),\n+ new NioHttpPipeliningHandler(logger, numberOfRequests),\n+ new WorkEmulatorHandler());\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ final DefaultHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, \"/\" + i);\n+ embeddedChannel.writeInbound(request);\n+ embeddedChannel.writeInbound(LastHttpContent.EMPTY_LAST_CONTENT);\n+ }\n+\n+ final List<CountDownLatch> latches = new ArrayList<>();\n+ for (int i = numberOfRequests - 1; i >= 0; i--) {\n+ latches.add(finishRequest(Integer.toString(i)));\n+ }\n+\n+ for (final CountDownLatch latch : latches) {\n+ latch.await();\n+ }\n+\n+ embeddedChannel.flush();\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ assertReadHttpMessageHasContent(embeddedChannel, Integer.toString(i));\n+ }\n+\n+ assertTrue(embeddedChannel.isOpen());\n+ }\n+\n+ public void testThatPipeliningClosesConnectionWithTooManyEvents() throws InterruptedException {\n+ final int numberOfRequests = randomIntBetween(2, 128);\n+ final EmbeddedChannel embeddedChannel = new EmbeddedChannel(new NioHttpPipeliningHandler(logger, numberOfRequests),\n+ new WorkEmulatorHandler());\n+\n+ for (int i = 0; i < 1 + numberOfRequests + 1; i++) {\n+ embeddedChannel.writeInbound(createHttpRequest(\"/\" + Integer.toString(i)));\n+ }\n+\n+ final List<CountDownLatch> latches = new ArrayList<>();\n+ final List<Integer> requests = IntStream.range(1, numberOfRequests + 1).boxed().collect(Collectors.toList());\n+ Randomness.shuffle(requests);\n+\n+ for (final Integer request : requests) {\n+ latches.add(finishRequest(request.toString()));\n+ }\n+\n+ for (final CountDownLatch latch : latches) {\n+ latch.await();\n+ }\n+\n+ finishRequest(Integer.toString(numberOfRequests + 1)).await();\n+\n+ embeddedChannel.flush();\n+\n+ assertFalse(embeddedChannel.isOpen());\n+ }\n+\n+ public void testPipeliningRequestsAreReleased() throws 
InterruptedException {\n+ final int numberOfRequests = 10;\n+ final EmbeddedChannel embeddedChannel =\n+ new EmbeddedChannel(new NioHttpPipeliningHandler(logger, numberOfRequests + 1));\n+\n+ for (int i = 0; i < numberOfRequests; i++) {\n+ embeddedChannel.writeInbound(createHttpRequest(\"/\" + i));\n+ }\n+\n+ HttpPipelinedRequest<FullHttpRequest> inbound;\n+ ArrayList<HttpPipelinedRequest<FullHttpRequest>> requests = new ArrayList<>();\n+ while ((inbound = embeddedChannel.readInbound()) != null) {\n+ requests.add(inbound);\n+ }\n+\n+ ArrayList<ChannelPromise> promises = new ArrayList<>();\n+ for (int i = 1; i < requests.size(); ++i) {\n+ final FullHttpResponse httpResponse = new DefaultFullHttpResponse(HTTP_1_1, OK);\n+ ChannelPromise promise = embeddedChannel.newPromise();\n+ promises.add(promise);\n+ int sequence = requests.get(i).getSequence();\n+ NioHttpResponse resp = new NioHttpResponse(sequence, httpResponse);\n+ embeddedChannel.writeAndFlush(resp, promise);\n+ }\n+\n+ for (ChannelPromise promise : promises) {\n+ assertFalse(promise.isDone());\n+ }\n+ embeddedChannel.close().syncUninterruptibly();\n+ for (ChannelPromise promise : promises) {\n+ assertTrue(promise.isDone());\n+ assertTrue(promise.cause() instanceof ClosedChannelException);\n+ }\n+ }\n+\n+ private void assertReadHttpMessageHasContent(EmbeddedChannel embeddedChannel, String expectedContent) {\n+ FullHttpResponse response = (FullHttpResponse) embeddedChannel.outboundMessages().poll();\n+ assertNotNull(\"Expected response to exist, maybe you did not wait long enough?\", response);\n+ assertNotNull(\"Expected response to have content \" + expectedContent, response.content());\n+ String data = new String(ByteBufUtil.getBytes(response.content()), StandardCharsets.UTF_8);\n+ assertThat(data, is(expectedContent));\n+ }\n+\n+ private FullHttpRequest createHttpRequest(String uri) {\n+ return new DefaultFullHttpRequest(HTTP_1_1, HttpMethod.GET, uri);\n+ }\n+\n+ private static class AggregateUrisAndHeadersHandler extends SimpleChannelInboundHandler<HttpRequest> {\n+\n+ static final Queue<String> QUEUE_URI = new LinkedTransferQueue<>();\n+\n+ @Override\n+ protected void channelRead0(ChannelHandlerContext ctx, HttpRequest request) throws Exception {\n+ QUEUE_URI.add(request.uri());\n+ }\n+\n+ }\n+\n+ private class WorkEmulatorHandler extends SimpleChannelInboundHandler<HttpPipelinedRequest<LastHttpContent>> {\n+\n+ @Override\n+ protected void channelRead0(final ChannelHandlerContext ctx, HttpPipelinedRequest<LastHttpContent> pipelinedRequest) {\n+ LastHttpContent request = pipelinedRequest.getRequest();\n+ final QueryStringDecoder decoder;\n+ if (request instanceof FullHttpRequest) {\n+ decoder = new QueryStringDecoder(((FullHttpRequest)request).uri());\n+ } else {\n+ decoder = new QueryStringDecoder(AggregateUrisAndHeadersHandler.QUEUE_URI.poll());\n+ }\n+\n+ final String uri = decoder.path().replace(\"/\", \"\");\n+ final ByteBuf content = Unpooled.copiedBuffer(uri, StandardCharsets.UTF_8);\n+ final DefaultFullHttpResponse httpResponse = new DefaultFullHttpResponse(HTTP_1_1, OK, content);\n+ httpResponse.headers().add(CONTENT_LENGTH, content.readableBytes());\n+\n+ final CountDownLatch waitingLatch = new CountDownLatch(1);\n+ waitingRequests.put(uri, waitingLatch);\n+ final CountDownLatch finishingLatch = new CountDownLatch(1);\n+ finishingRequests.put(uri, finishingLatch);\n+\n+ handlerService.submit(() -> {\n+ try {\n+ waitingLatch.await(1000, TimeUnit.SECONDS);\n+ final ChannelPromise promise = ctx.newPromise();\n+ 
eventLoopService.submit(() -> {\n+ ctx.write(new NioHttpResponse(pipelinedRequest.getSequence(), httpResponse), promise);\n+ finishingLatch.countDown();\n+ });\n+ } catch (InterruptedException e) {\n+ fail(e.toString());\n+ }\n+ });\n+ }\n+ }\n+}", "filename": "plugins/transport-nio/src/test/java/org/elasticsearch/http/nio/NioHttpPipeliningHandlerTests.java", "status": "added" }, { "diff": "@@ -227,7 +227,6 @@ public void apply(Settings value, Settings current, Settings previous) {\n HttpTransportSettings.SETTING_CORS_ENABLED,\n HttpTransportSettings.SETTING_CORS_MAX_AGE,\n HttpTransportSettings.SETTING_HTTP_DETAILED_ERRORS_ENABLED,\n- HttpTransportSettings.SETTING_PIPELINING,\n HttpTransportSettings.SETTING_CORS_ALLOW_ORIGIN,\n HttpTransportSettings.SETTING_HTTP_HOST,\n HttpTransportSettings.SETTING_HTTP_PUBLISH_HOST,", "filename": "server/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java", "status": "modified" }, { "diff": "@@ -29,9 +29,11 @@ public class HttpHandlingSettings {\n private final boolean compression;\n private final int compressionLevel;\n private final boolean detailedErrorsEnabled;\n+ private final int pipeliningMaxEvents;\n \n public HttpHandlingSettings(int maxContentLength, int maxChunkSize, int maxHeaderSize, int maxInitialLineLength,\n- boolean resetCookies, boolean compression, int compressionLevel, boolean detailedErrorsEnabled) {\n+ boolean resetCookies, boolean compression, int compressionLevel, boolean detailedErrorsEnabled,\n+ int pipeliningMaxEvents) {\n this.maxContentLength = maxContentLength;\n this.maxChunkSize = maxChunkSize;\n this.maxHeaderSize = maxHeaderSize;\n@@ -40,6 +42,7 @@ public HttpHandlingSettings(int maxContentLength, int maxChunkSize, int maxHeade\n this.compression = compression;\n this.compressionLevel = compressionLevel;\n this.detailedErrorsEnabled = detailedErrorsEnabled;\n+ this.pipeliningMaxEvents = pipeliningMaxEvents;\n }\n \n public int getMaxContentLength() {\n@@ -73,4 +76,8 @@ public int getCompressionLevel() {\n public boolean getDetailedErrorsEnabled() {\n return detailedErrorsEnabled;\n }\n+\n+ public int getPipeliningMaxEvents() {\n+ return pipeliningMaxEvents;\n+ }\n }", "filename": "server/src/main/java/org/elasticsearch/http/HttpHandlingSettings.java", "status": "modified" }, { "diff": "@@ -0,0 +1,37 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http;\n+\n+public class HttpPipelinedMessage implements Comparable<HttpPipelinedMessage> {\n+\n+ private final int sequence;\n+\n+ public HttpPipelinedMessage(int sequence) {\n+ this.sequence = sequence;\n+ }\n+\n+ public int getSequence() {\n+ return sequence;\n+ }\n+\n+ @Override\n+ public int compareTo(HttpPipelinedMessage o) {\n+ return Integer.compare(sequence, o.sequence);\n+ }\n+}", "filename": "server/src/main/java/org/elasticsearch/http/HttpPipelinedMessage.java", "status": "added" }, { "diff": "@@ -0,0 +1,33 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http;\n+\n+public class HttpPipelinedRequest<R> extends HttpPipelinedMessage {\n+\n+ private final R request;\n+\n+ HttpPipelinedRequest(int sequence, R request) {\n+ super(sequence);\n+ this.request = request;\n+ }\n+\n+ public R getRequest() {\n+ return request;\n+ }\n+}", "filename": "server/src/main/java/org/elasticsearch/http/HttpPipelinedRequest.java", "status": "added" } ] }
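The pipelining record above centers on responses carrying a sequence number (`HttpPipelinedMessage implements Comparable`) so they can be written back in request order even when handlers finish out of order. Below is a minimal standalone sketch of that ordering idea, assuming a priority-queue holding area; the class and method names are illustrative and are not the actual `NioHttpPipeliningHandler` internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: responses tagged with a sequence number are held back until all
// lower-numbered responses have been emitted, which is the ordering guarantee
// the pipelining tests above assert on.
public class PipelineSequencerSketch {

    static final class SequencedResponse implements Comparable<SequencedResponse> {
        final int sequence;
        final String body;

        SequencedResponse(int sequence, String body) {
            this.sequence = sequence;
            this.body = body;
        }

        @Override
        public int compareTo(SequencedResponse other) {
            return Integer.compare(sequence, other.sequence);
        }
    }

    private final PriorityQueue<SequencedResponse> holdingQueue = new PriorityQueue<>();
    private int nextExpectedSequence = 0;

    /** Accepts a possibly out-of-order response, returns everything now flushable in order. */
    List<String> write(SequencedResponse response) {
        holdingQueue.add(response);
        List<String> flushable = new ArrayList<>();
        while (holdingQueue.isEmpty() == false && holdingQueue.peek().sequence == nextExpectedSequence) {
            flushable.add(holdingQueue.poll().body);
            nextExpectedSequence++;
        }
        return flushable;
    }

    public static void main(String[] args) {
        PipelineSequencerSketch sequencer = new PipelineSequencerSketch();
        System.out.println(sequencer.write(new SequencedResponse(1, "second"))); // [] - held back
        System.out.println(sequencer.write(new SequencedResponse(0, "first")));  // [first, second]
    }
}
```

A real handler would additionally bound the holding queue (the `pipeliningMaxEvents` setting in the record above) and close the channel when that limit is exceeded.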
{ "body": "Supplied example doesn't even compile", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-24T09:03:22Z" }, { "body": "This PR can be closes, the documentation has been fixed on master by fb5b2dff579d, which has been backported to all the 6.x branches. To check the question by @rjernst, I ran the docs:check for the \"old\" version and it doesn't fail indeed. When moving the test setup that happens in the second snippet in this file up, the docs test fails. I think I will open a PR to do this and discuss the why/how there.", "created_at": "2018-05-17T16:31:00Z" } ], "number": 27618, "title": "[DOCS] Fix script_field example script" }
{ "body": "Currently the first snippet in the documentation test in script-fields.asciidoc\r\nisn't executed, although it has the CONSOLE annotation. Adding a test setup\r\nannotation to it seems to fix the problem.\r\n\r\nRelates to #27618", "number": 30693, "review_comments": [ { "body": "I'd prefer to change the example so the test lines up a little closer. it isn't a big deal though.", "created_at": "2018-05-18T21:08:35Z" } ], "title": "[Docs] Fix script-fields snippet execution" }
{ "commits": [ { "message": "[Docs] Fix script-fields snippet execution\n\nCurrently the first snippet in the documentation test in script-fields.asciidoc\nisn't executed, although it has the CONSOLE annotation. Adding a test setup\nannotation to it seems to fix the problem.\n\nRelates to #27618" }, { "message": "Slightly adapt example" }, { "message": "Merge branch 'master' into fix-scriptField-docstest" } ], "files": [ { "diff": "@@ -15,13 +15,13 @@ GET /_search\n \"test1\" : {\n \"script\" : {\n \"lang\": \"painless\",\n- \"source\": \"doc['my_field_name'].value * 2\"\n+ \"source\": \"doc['price'].value * 2\"\n }\n },\n \"test2\" : {\n \"script\" : {\n \"lang\": \"painless\",\n- \"source\": \"doc['my_field_name'].value * params.factor\",\n+ \"source\": \"doc['price'].value * params.factor\",\n \"params\" : {\n \"factor\" : 2.0\n }\n@@ -31,7 +31,7 @@ GET /_search\n }\n --------------------------------------------------\n // CONSOLE\n-\n+// TEST[setup:sales]\n \n Script fields can work on fields that are not stored (`my_field_name` in\n the above case), and allow to return custom values to be returned (the", "filename": "docs/reference/search/request/script-fields.asciidoc", "status": "modified" } ] }
{ "body": "*Original comment by @spinscale:*\n\nPutting a watch with invalid JSON does not return an error and puts a broken watch into the system. The Watch parser seems to have an issue here - which should throw an exception but does not.\r\n\r\nPutting the below watch results in a watch stored without condition and actions.. you can use the execution watch API to see that an `always` condition is used and the actions array is empty.\r\n\r\n```\r\ncurl -XPUT localhost:9200/_xpack/watcher/watch/1 -d '\r\n{\r\n \"trigger\": {\r\n \"schedule\": {\r\n \"interval\": \"10s\"\r\n }\r\n },\r\n \"input\": {\r\n \"simple\": {}\r\n }},\r\n \"condition\": {\r\n \"script\": {\r\n \"inline\": \"return false\"\r\n }\r\n },\r\n \"actions\": {\r\n \"logging\": {\r\n \"logging\": {\r\n \"text\": \"{{ctx.payload}}\"\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\nThis issue happens on 2.x as well as 5.x\r\n\r\nOriginating issue was in the forum https://discuss.elastic.co/t/mistake-on-configuring-watches/70943/5\n\n", "comments": [ { "body": "Can an API endpoint be provided that validates a given watch configuration as well? We have a CI process for ingesting new watches into Elastic, and there's no linting tools, JSONschema, or anything available to validate common issues. ", "created_at": "2018-07-25T19:19:14Z" }, { "body": "can you be more specific what you mean with *watch configuration* in this context? The watch gets parsed, scripts get compiled to see if everything works, what validation are you missing? Would your problems be solved, when this issue is fixed or is there more to it?\r\n\r\nThanks for your input, much appreciated!", "created_at": "2018-07-26T11:07:38Z" }, { "body": "If I can supply the watch to the execute API without saving it to the index, and get back a detailed account of what's wrong, then that would solve the issue for me", "created_at": "2018-07-26T17:37:59Z" }, { "body": "@spinscale What's the actual contract with the watch API? How do I determine programmatically that there's a problem? Looking at the Elastic documentation, I can't seem to find any examples or concrete description (other than eyeballing the results -- a Jenkins job doesn't have eyes) of how to do this. ", "created_at": "2018-08-02T21:14:30Z" }, { "body": "This has been open for quite a while, and hasn't had a lot of interest. For now I'm going to close this as something we aren't planning on implementing. We can re-open it later if needed.", "created_at": "2024-05-08T20:55:52Z" } ], "number": 29746, "title": "Watcher: Putting invalid JSON as a watch is allowed" }
{ "body": "Currently, watches may be submitted and accepted by the cluster\r\nthat do not consist of valid JSON. This is because in certain\r\ncases, WatchParser.java will not consume enough tokens from\r\nXContentParser to be able to encounter an underlying JSON parse\r\nerror. This patch fixes this behavior and adds a test.\r\n\r\nCloses #29746\r\n", "number": 30692, "review_comments": [ { "body": "Typically we use XContentBuilder to build the json (see the test below for an example).", "created_at": "2018-05-17T17:03:40Z" }, { "body": "We don't catch JsonParseException in the loop condition when calling `parser.nextToken()`, do we really need to here?", "created_at": "2018-05-17T17:04:32Z" }, { "body": "Can we set up this exception with all the information it needs, so we do not need to catch it and wrap below? It needlessly creates extra stack frames that must be read (by a human).", "created_at": "2018-05-17T17:05:07Z" }, { "body": "This can use the `containsString` matcher instead of a boolean assert (the output on failure is much nicer).", "created_at": "2018-05-17T17:07:25Z" }, { "body": "Use `expectThrows` instead of try/fail/catch", "created_at": "2018-05-17T17:07:44Z" }, { "body": "WatcherXContentParser is not different -- the real issue is that we do not consume enough tokens to encounter an actual error. The real crux of this PR is line 183 -- consuming an additional token beyond what we \"think\" is the end of the watch definition to ensure that we have looked at the whole thing.\r\n\r\nIn the example test, we have an extra closing curly brace. Up to and including that curly brace, the json is invalid -- we have to consume beyond the erroneous end brace to get the error.", "created_at": "2018-05-17T17:21:08Z" }, { "body": "Fair enough -- I interpreted a desire to bubble up ElasticParseExceptions where possible, but if it is fine to just let that go up that makes sense.", "created_at": "2018-05-17T17:22:26Z" }, { "body": "Re: XContentBuilder -- XContentBuilder won't let me build an invalid JSON.", "created_at": "2018-05-17T17:22:46Z" }, { "body": "You can use XContentBuilder to build valid json, and then append an extra token (eg comma as you have). This will ensure there isn't malformed json in the middle which tricks the test.", "created_at": "2018-05-17T17:50:27Z" }, { "body": "minor nit: can you use a `//` comment like the other comments in the methods as well and add new lines before and after?", "created_at": "2018-05-18T07:58:53Z" }, { "body": "minor nit: maybe we can come up with a more explaining error message here, as `unexpected data` can mean anything - however I don't have a smarter idea as well to be honest. Is it `unparseable`/`unknown` or just `too much data`/`unneeded additional data`?", "created_at": "2018-05-18T08:03:50Z" }, { "body": "@spinscale Actually it is this code that enables the JsonParseException for the original test case. Ryan asked that the code for the test case be changed, which resulted in a different error path. However, the key factor is that we now attempt to grab the next token beyond what we believe to be the last token of the watch definition. A previous revision of this code made that fact clear, at the expense of some more complex code.", "created_at": "2018-05-18T14:17:55Z" }, { "body": "yea I think what @spinscale is saying is that he would like the error message changed. 
Maybe `expected end of payload/data/json but received additional data`", "created_at": "2018-05-18T17:01:24Z" }, { "body": "Any reason you did not opt for the syntax (yea i know, its a builder, lets chain it!!!) used in other spots in this class? Imho its harder to read the nested thing than blocks of code with spaces that define start/endings. This is also how we do it in most of our toXContent functions as well.\r\n\r\n```\r\nb = jsonBuilder();\r\nb.startObject();\r\n\r\nb.startObject(\"trigger\");\r\nb.startObject(\"schedule\");\r\nb.field(\"interval\", \"10s\");\r\nb.endObject();\r\nb.endObject();\r\n\r\n...\r\n\r\nb.endObject();\r\n```", "created_at": "2018-05-18T17:10:36Z" }, { "body": "We prefer using braces like this:\r\n\r\n```diff\r\ndiff --git a/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/watch/WatchTests.java b/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/watch/WatchTests.java\r\nindex 1ff59034831..dcb48fbd3c1 100644\r\n--- a/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/watch/WatchTests.java\r\n+++ b/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/watch/WatchTests.java\r\n@@ -305,27 +305,44 @@ public class WatchTests extends ESTestCase {\r\n \r\n public void testParserConsumesEntireDefinition() throws Exception {\r\n WatchParser wp = createWatchparser();\r\n- XContentBuilder jsonBuilder = jsonBuilder()\r\n- .startObject()\r\n- .startObject(\"trigger\")\r\n- .startObject(\"schedule\")\r\n- .field(\"interval\", \"10s\")\r\n- .endObject()\r\n- .endObject()\r\n- .startObject(\"input\")\r\n- .startObject(\"simple\")\r\n- .endObject()\r\n- .endObject()\r\n- .startObject(\"condition\")\r\n- .startObject(\"script\")\r\n- .field(\"source\", \"return false\")\r\n- .endObject()\r\n- .endObject()\r\n- .endObject();\r\n- jsonBuilder.generator().writeBinary(\",\".getBytes(StandardCharsets.UTF_8));\r\n- ElasticsearchParseException e = expectThrows(ElasticsearchParseException.class, () ->\r\n- wp.parseWithSecrets(\"failure\", false, BytesReference.bytes(jsonBuilder), new DateTime(), XContentType.JSON, true));\r\n+ try (XContentBuilder builder = jsonBuilder()) {\r\n+ builder.startObject();\r\n+ {\r\n+ builder.startObject(\"trigger\");\r\n+ {\r\n+ builder.startObject(\"schedule\");\r\n+ {\r\n+ builder.field(\"interval\", \"10s\");\r\n+ }\r\n+ builder.endObject();\r\n+ }\r\n+ builder.endObject();\r\n+ builder.startObject(\"input\");\r\n+ {\r\n+ builder.startObject(\"simple\");\r\n+ {\r\n+ }\r\n+ builder.endObject();\r\n+ }\r\n+ builder.endObject();\r\n+ builder.startObject(\"condition\");\r\n+ {\r\n+ builder.startObject(\"script\");\r\n+ {\r\n+ builder.field(\"source\", \"return false\");\r\n+ }\r\n+ builder.endObject();\r\n+ }\r\n+ builder.endObject();\r\n+ }\r\n+ builder.endObject();\r\n+ builder.generator().writeBinary(\",\".getBytes(StandardCharsets.UTF_8));\r\n+ ElasticsearchParseException e = expectThrows(\r\n+ ElasticsearchParseException.class,\r\n+ () -> wp.parseWithSecrets(\"failure\", false, BytesReference.bytes(builder), new DateTime(), XContentType.JSON, true));\r\n assertThat(e.getMessage(), containsString(\"unexpected data beyond\"));\r\n+ }\r\n+\r\n }\r\n \r\n public void testParserDefaults() throws Exception {\r\n```\r\n\r\nAlso note that the builder should be wrapped in a try-with-resources; it's good hygiene.", "created_at": "2018-05-18T17:59:09Z" }, { "body": "Oh, ive not see this before but i like it. 
I almost removed a `builder.endObject()` on one of my first XContent forays, and I dont think i would have even considered it if this syntax was used.\r\n\r\nAlso, WRT the try-with-resources and hygiene, this is specifically for tests? or should we be doing this in the XContent methods as well? I assumed the caller was responsible for it in our request/response objects.", "created_at": "2018-05-18T18:13:12Z" }, { "body": "Here we own the lifecycle of the builder so we are responsible for closing it.", "created_at": "2018-05-18T18:19:39Z" }, { "body": "@hub-cap I was trying to respond to @spinscale's comment above in the LGTM -- I'll rework the error message", "created_at": "2018-05-18T19:10:51Z" }, { "body": "@hub-cap The way the class does it is mixed, when i started this PR i based style on testParserBadActions(), as that seemed to be the closest example to what I wanted to do. I am happy to conform to whatever is preferred.", "created_at": "2018-05-18T19:12:20Z" }, { "body": "That's the old style before we introduced using braces for this.", "created_at": "2018-05-18T19:18:28Z" } ], "title": "Ensures watch definitions are valid json" }
{ "commits": [ { "message": "Ensures watch definitions are valid json\nCurrently, watches may be submitted and accepted by the cluster\nthat do not consist of valid JSON. This is because in certain\ncases, WatchParser.java will not consume enough tokens from\nXContentParser to be able to encounter an underlying JSON parse\nerror. This patch fixes this behavior and adds a test.\n\nCloses #29746" }, { "message": "address reviewer feedback" }, { "message": "address reviewer feedback" }, { "message": "address reviewer feedback" }, { "message": "Merge remote-tracking branch 'origin/master' into ensure-watch-valid-json" } ], "files": [ { "diff": "@@ -5,6 +5,7 @@\n */\n package org.elasticsearch.xpack.watcher.watch;\n \n+import com.fasterxml.jackson.core.JsonParseException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -174,6 +175,14 @@ public Watch parse(String id, boolean includeStatus, WatcherXContentParser parse\n throw new ElasticsearchParseException(\"could not parse watch [{}]. unexpected field [{}]\", id, currentFieldName);\n }\n }\n+\n+ // Make sure we are at the end of the available input data -- certain types of JSON errors will not manifest\n+ // until we try to consume additional tokens.\n+\n+ if (parser.nextToken() != null) {\n+ throw new ElasticsearchParseException(\"could not parse watch [{}]. expected end of payload, but received additional \" +\n+ \"data at [line: {}, column: {}]\", id, parser.getTokenLocation().lineNumber, parser.getTokenLocation().columnNumber);\n+ }\n if (trigger == null) {\n throw new ElasticsearchParseException(\"could not parse watch [{}]. missing required field [{}]\", id,\n WatchField.TRIGGER.getPreferredName());", "filename": "x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/watch/WatchParser.java", "status": "modified" }, { "diff": "@@ -11,14 +11,18 @@\n import org.elasticsearch.action.support.WriteRequest;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.DeprecationHandler;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.common.xcontent.json.JsonXContentParser;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.ScriptQueryBuilder;\n@@ -121,6 +125,7 @@\n import org.junit.Before;\n \n import java.io.IOException;\n+import java.nio.charset.StandardCharsets;\n import java.time.Clock;\n import java.time.Instant;\n import java.time.ZoneOffset;\n@@ -143,6 +148,8 @@\n import static org.elasticsearch.xpack.watcher.input.InputBuilders.searchInput;\n import static org.elasticsearch.xpack.watcher.test.WatcherTestUtils.templateRequest;\n import static org.elasticsearch.xpack.watcher.trigger.TriggerBuilders.schedule;\n+\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n 
import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -296,6 +303,47 @@ public void testParserBadActions() throws Exception {\n }\n }\n \n+ public void testParserConsumesEntireDefinition() throws Exception {\n+ WatchParser wp = createWatchparser();\n+ try (XContentBuilder builder = jsonBuilder()) {\n+ builder.startObject();\n+ {\n+ builder.startObject(\"trigger\");\n+ {\n+ builder.startObject(\"schedule\");\n+ {\n+ builder.field(\"interval\", \"10s\");\n+ }\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ builder.startObject(\"input\");\n+ {\n+ builder.startObject(\"simple\");\n+ {\n+ }\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ builder.startObject(\"condition\");\n+ {\n+ builder.startObject(\"script\");\n+ {\n+ builder.field(\"source\", \"return false\");\n+ }\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+ builder.generator().writeBinary(\",\".getBytes(StandardCharsets.UTF_8));\n+ ElasticsearchParseException e = expectThrows(\n+ ElasticsearchParseException.class,\n+ () -> wp.parseWithSecrets(\"failure\", false, BytesReference.bytes(builder), new DateTime(), XContentType.JSON, true));\n+ assertThat(e.getMessage(), containsString(\"expected end of payload\"));\n+ }\n+ }\n+\n public void testParserDefaults() throws Exception {\n Schedule schedule = randomSchedule();\n ScheduleRegistry scheduleRegistry = registry(schedule);", "filename": "x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/watch/WatchTests.java", "status": "modified" } ] }
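The fix in the record above is essentially one extra read: after the parsing loop believes it has consumed the whole watch, it calls `parser.nextToken()` once more and rejects any non-null result, which also surfaces JSON errors that only appear beyond the assumed end. Below is a standalone sketch of that same trailing-token pattern using plain Jackson streaming (the real code uses `XContentParser`/`WatcherXContentParser`; the exception type and messages here are illustrative).

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

// Sketch: consume the top-level object, then ask for one more token.
// A non-null answer means trailing data, so the payload is rejected
// instead of being silently accepted as a truncated definition.
public class TrailingTokenCheckSketch {

    static void parseStrict(String json) throws Exception {
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            if (parser.nextToken() != JsonToken.START_OBJECT) {
                throw new IllegalArgumentException("expected a JSON object");
            }
            parser.skipChildren();            // consume the whole top-level object
            if (parser.nextToken() != null) { // the extra read that exposes trailing data
                throw new IllegalArgumentException("expected end of payload at "
                    + parser.getTokenLocation() + " but found additional data");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        parseStrict("{\"trigger\":{\"schedule\":{\"interval\":\"10s\"}}}"); // accepted
        try {
            parseStrict("{\"trigger\":{}} {\"unexpected\":true}");          // trailing object rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```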
{ "body": "First reported on the discuss forum: https://discuss.elastic.co/t/ml-state-is-too-big/131561\r\n\r\nI happened to look at my own setup and did also notice a few model states that belonged to jobs that no longer existed in my system.\r\n\r\n![image](https://user-images.githubusercontent.com/7461287/39957956-940c4e9e-55c9-11e8-84cf-3536dcd89681.png)\r\n\r\nThere is no current job in my system called `test_kpi` - although I'm sure at one time there was and it was deleted.\r\n\r\n![image](https://user-images.githubusercontent.com/7461287/39957960-a3195a58-55c9-11e8-89af-75eafa48c65f.png)\r\n\r\nNot sure what version is being used on the user on Discuss, but I'm currently using v6.2.0", "comments": [ { "body": "Pinging @elastic/ml-core", "created_at": "2018-05-12T13:52:41Z" }, { "body": "@richcollier Can you see any `model_snapshot` documents in the results index with `job_id` set to `test_kpi`?", "created_at": "2018-05-14T08:45:10Z" }, { "body": "@dimitris-athanasiou - there are no documents in `.ml-anomalies` for `job_id:test_kpi`", "created_at": "2018-05-15T12:33:15Z" }, { "body": "We found the cause of this. It was a bug that was introduced in version `6.1`. When we persist the model state, we persist the state documents in `.ml-state` index and a `model_snapshot` document in the results index. Later, in order to delete the state documents, we need to have the model snapshot doc. Due to the bug, during background periodic persistence, the state documents were persisted but the model snapshot document was put in a buffer. If the job was deleted from the UI before the buffer was flushed, the snapshot documents would never be indexed, meaning the state docs would be left behind after the job was deleted.\r\n\r\nThe above bug is resolved in `6.3.0` (and `6.2.5` if that version is ever released). However, in order to ensure those documents are deleted and to prevent such cases in the future, I will work on enhancing the daily maintenance service to look for left-behind state docs and clean them up.", "created_at": "2018-05-15T14:48:26Z" } ], "number": 30551, "title": "[ML] Model state docs are orphaned in .ml-state after job is deleted" }
{ "body": "It is possible for model state documents to be\r\nleft behind in the state index. This may be\r\nbecause of bugs or uncontrollable scenarios.\r\nIn any case, those documents may take up quite\r\nsome disk space when they add up. This commit\r\nadds a step in the expired data deletion that\r\nis part of the daily maintenance service. The\r\nnew step searches for state documents that\r\ndo not belong to any of the current jobs and\r\ndeletes them.\r\n\r\nCloses #30551", "number": 30659, "review_comments": [ { "body": "It would be nice to encapsulate this in the `ModelState` class - say have a method `jobIdFromDocId` or something like that.", "created_at": "2018-05-16T15:42:27Z" }, { "body": "Theoretically we could also get orphaned quantiles and categorizer state documents (due to bugs we don't know about yet or people meddling with the contents of other indices).\r\n\r\nSo it would be good to add `jobIdFromDocId` methods to the `Quantiles` and `CategorizerState` classes too. The pattern could be they return `null` if they don't understand the format. Then only `continue` if none of the classes we store in the state index can find the job ID from the document ID.", "created_at": "2018-05-16T15:45:54Z" }, { "body": "By convention the `{}` should be in square brackets.\r\n\r\nAlso, if we delete all types of orphaned documents then `model state` -> just `state` (and in the other message below too).", "created_at": "2018-05-16T15:47:47Z" } ], "title": "[ML] Clean left behind model state docs" }
{ "commits": [ { "message": "[ML] Clean left behind model state docs\n\nIt is possible for model state documents to be\nleft behind in the state index. This may be\nbecause of bugs or uncontrollable scenarios.\nIn any case, those documents may take up quite\nsome disk space when they add up. This commit\nadds a step in the expired data deletion that\nis part of the daily maintenance service. The\nnew step searches for state documents that\ndo not belong to any of the current jobs and\ndeletes them.\n\nCloses #30551" }, { "message": "Also remove all other state docs and other review comments" }, { "message": "Uset lastIndexOf" } ], "files": [ { "diff": "@@ -37,6 +37,16 @@ public static final String v54DocumentPrefix(String jobId) {\n return jobId + \"#\";\n }\n \n+ /**\n+ * Given the id of a categorizer state document it extracts the job id\n+ * @param docId the categorizer state document id\n+ * @return the job id or {@code null} if the id is not valid\n+ */\n+ public static final String extractJobId(String docId) {\n+ int suffixIndex = docId.lastIndexOf(\"_\" + TYPE);\n+ return suffixIndex <= 0 ? null : docId.substring(0, suffixIndex);\n+ }\n+\n private CategorizerState() {\n }\n }", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/CategorizerState.java", "status": "modified" }, { "diff": "@@ -29,6 +29,16 @@ public static final String v54DocumentId(String jobId, String snapshotId, int do\n return jobId + \"-\" + snapshotId + \"#\" + docNum;\n }\n \n+ /**\n+ * Given the id of a state document it extracts the job id\n+ * @param docId the state document id\n+ * @return the job id or {@code null} if the id is not valid\n+ */\n+ public static final String extractJobId(String docId) {\n+ int suffixIndex = docId.lastIndexOf(\"_\" + TYPE + \"_\");\n+ return suffixIndex <= 0 ? null : docId.substring(0, suffixIndex);\n+ }\n+\n private ModelState() {\n }\n }", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/ModelState.java", "status": "modified" }, { "diff": "@@ -60,6 +60,16 @@ public static String v54DocumentId(String jobId) {\n return jobId + \"-\" + TYPE;\n }\n \n+ /**\n+ * Given the id of a quantiles document it extracts the job id\n+ * @param docId the quantiles document id\n+ * @return the job id or {@code null} if the id is not valid\n+ */\n+ public static final String extractJobId(String docId) {\n+ int suffixIndex = docId.lastIndexOf(\"_\" + TYPE);\n+ return suffixIndex <= 0 ? null : docId.substring(0, suffixIndex);\n+ }\n+\n private final String jobId;\n private final Date timestamp;\n private final String quantileState;", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/Quantiles.java", "status": "modified" }, { "diff": "@@ -0,0 +1,29 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. 
Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.core.ml.job.process.autodetect.state;\n+\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.core.Is.is;\n+\n+public class CategorizerStateTests extends ESTestCase {\n+\n+ public void testExtractJobId_GivenValidDocId() {\n+ assertThat(CategorizerState.extractJobId(\"foo_categorizer_state#1\"), equalTo(\"foo\"));\n+ assertThat(CategorizerState.extractJobId(\"bar_categorizer_state#2\"), equalTo(\"bar\"));\n+ assertThat(CategorizerState.extractJobId(\"foo_bar_categorizer_state#3\"), equalTo(\"foo_bar\"));\n+ assertThat(CategorizerState.extractJobId(\"_categorizer_state_categorizer_state#3\"), equalTo(\"_categorizer_state\"));\n+ }\n+\n+ public void testExtractJobId_GivenInvalidDocId() {\n+ assertThat(CategorizerState.extractJobId(\"\"), is(nullValue()));\n+ assertThat(CategorizerState.extractJobId(\"foo\"), is(nullValue()));\n+ assertThat(CategorizerState.extractJobId(\"_categorizer_state\"), is(nullValue()));\n+ assertThat(CategorizerState.extractJobId(\"foo_model_state_3141341341\"), is(nullValue()));\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/CategorizerStateTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,31 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.core.ml.job.process.autodetect.state;\n+\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.core.Is.is;\n+\n+public class ModelStateTests extends ESTestCase {\n+\n+ public void testExtractJobId_GivenValidDocId() {\n+ assertThat(ModelState.extractJobId(\"foo_model_state_3151373783#1\"), equalTo(\"foo\"));\n+ assertThat(ModelState.extractJobId(\"bar_model_state_451515#3\"), equalTo(\"bar\"));\n+ assertThat(ModelState.extractJobId(\"foo_bar_model_state_blah_blah\"), equalTo(\"foo_bar\"));\n+ assertThat(ModelState.extractJobId(\"_model_state_model_state_11111\"), equalTo(\"_model_state\"));\n+ }\n+\n+ public void testExtractJobId_GivenInvalidDocId() {\n+ assertThat(ModelState.extractJobId(\"\"), is(nullValue()));\n+ assertThat(ModelState.extractJobId(\"foo\"), is(nullValue()));\n+ assertThat(ModelState.extractJobId(\"_model_3141341341\"), is(nullValue()));\n+ assertThat(ModelState.extractJobId(\"_state_3141341341\"), is(nullValue()));\n+ assertThat(ModelState.extractJobId(\"_model_state_3141341341\"), is(nullValue()));\n+ assertThat(ModelState.extractJobId(\"foo_quantiles\"), is(nullValue()));\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/ModelStateTests.java", "status": "added" }, { "diff": "@@ -15,9 +15,26 @@\n import java.util.Date;\n \n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+import static org.hamcrest.core.Is.is;\n \n public class QuantilesTests extends AbstractSerializingTestCase<Quantiles> {\n 
\n+ public void testExtractJobId_GivenValidDocId() {\n+ assertThat(Quantiles.extractJobId(\"foo_quantiles\"), equalTo(\"foo\"));\n+ assertThat(Quantiles.extractJobId(\"bar_quantiles\"), equalTo(\"bar\"));\n+ assertThat(Quantiles.extractJobId(\"foo_bar_quantiles\"), equalTo(\"foo_bar\"));\n+ assertThat(Quantiles.extractJobId(\"_quantiles_quantiles\"), equalTo(\"_quantiles\"));\n+ }\n+\n+ public void testExtractJobId_GivenInvalidDocId() {\n+ assertThat(Quantiles.extractJobId(\"\"), is(nullValue()));\n+ assertThat(Quantiles.extractJobId(\"foo\"), is(nullValue()));\n+ assertThat(Quantiles.extractJobId(\"_quantiles\"), is(nullValue()));\n+ assertThat(Quantiles.extractJobId(\"foo_model_state_3141341341\"), is(nullValue()));\n+ }\n+\n public void testEquals_GivenSameObject() {\n Quantiles quantiles = new Quantiles(\"foo\", new Date(0L), \"foo\");\n assertTrue(quantiles.equals(quantiles));", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/job/process/autodetect/state/QuantilesTests.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.xpack.ml.job.retention.ExpiredModelSnapshotsRemover;\n import org.elasticsearch.xpack.ml.job.retention.ExpiredResultsRemover;\n import org.elasticsearch.xpack.ml.job.retention.MlDataRemover;\n+import org.elasticsearch.xpack.ml.job.retention.UnusedStateRemover;\n import org.elasticsearch.xpack.ml.notifications.Auditor;\n import org.elasticsearch.xpack.ml.utils.VolatileCursorIterator;\n \n@@ -56,7 +57,8 @@ private void deleteExpiredData(ActionListener<DeleteExpiredDataAction.Response>\n List<MlDataRemover> dataRemovers = Arrays.asList(\n new ExpiredResultsRemover(client, clusterService, auditor),\n new ExpiredForecastsRemover(client),\n- new ExpiredModelSnapshotsRemover(client, clusterService)\n+ new ExpiredModelSnapshotsRemover(client, clusterService),\n+ new UnusedStateRemover(client, clusterService)\n );\n Iterator<MlDataRemover> dataRemoversIterator = new VolatileCursorIterator<>(dataRemovers);\n deleteExpiredData(dataRemoversIterator, listener);", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/action/TransportDeleteExpiredDataAction.java", "status": "modified" }, { "diff": "@@ -97,6 +97,7 @@ private SearchResponse initScroll() {\n searchRequest.source(new SearchSourceBuilder()\n .size(BATCH_SIZE)\n .query(getQuery())\n+ .fetchSource(shouldFetchSource())\n .sort(SortBuilders.fieldSort(ElasticsearchMappings.ES_DOC)));\n \n SearchResponse searchResponse = client.search(searchRequest).actionGet();\n@@ -123,6 +124,14 @@ private Deque<T> mapHits(SearchResponse searchResponse) {\n return results;\n }\n \n+ /**\n+ * Should fetch source? Defaults to {@code true}\n+ * @return whether the source should be fetched\n+ */\n+ protected boolean shouldFetchSource() {\n+ return true;\n+ }\n+\n /**\n * Get the query to use for the search\n * @return the search query", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/BatchedDocumentsIterator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,36 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. 
Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.job.persistence;\n+\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.index.query.QueryBuilder;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.search.SearchHit;\n+\n+/**\n+ * Iterates through the state doc ids\n+ */\n+public class BatchedStateDocIdsIterator extends BatchedDocumentsIterator<String> {\n+\n+ public BatchedStateDocIdsIterator(Client client, String index) {\n+ super(client, index);\n+ }\n+\n+ @Override\n+ protected boolean shouldFetchSource() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected QueryBuilder getQuery() {\n+ return QueryBuilders.matchAllQuery();\n+ }\n+\n+ @Override\n+ protected String map(SearchHit hit) {\n+ return hit.getId();\n+ }\n+}", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/BatchedStateDocIdsIterator.java", "status": "added" }, { "diff": "@@ -0,0 +1,134 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.job.retention;\n+\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.delete.DeleteRequest;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.service.ClusterService;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.xpack.core.ml.MLMetadataField;\n+import org.elasticsearch.xpack.core.ml.MlMetadata;\n+import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;\n+import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.CategorizerState;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelState;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.Quantiles;\n+import org.elasticsearch.xpack.ml.job.persistence.BatchedStateDocIdsIterator;\n+\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.Deque;\n+import java.util.List;\n+import java.util.Objects;\n+import java.util.Set;\n+import java.util.function.Function;\n+\n+/**\n+ * If for any reason a job is deleted by some of its state documents\n+ * are left behind, this class deletes any unused documents stored\n+ * in the .ml-state index.\n+ */\n+public class UnusedStateRemover implements MlDataRemover {\n+\n+ private static final Logger LOGGER = Loggers.getLogger(UnusedStateRemover.class);\n+\n+ private final Client client;\n+ private final ClusterService clusterService;\n+\n+ public UnusedStateRemover(Client client, ClusterService clusterService) {\n+ this.client = Objects.requireNonNull(client);\n+ this.clusterService = Objects.requireNonNull(clusterService);\n+ }\n+\n+ @Override\n+ public void remove(ActionListener<Boolean> listener) {\n+ try {\n+ BulkRequestBuilder deleteUnusedStateRequestBuilder = findUnusedStateDocs();\n+ if (deleteUnusedStateRequestBuilder.numberOfActions() > 0) {\n+ executeDeleteUnusedStateDocs(deleteUnusedStateRequestBuilder, listener);\n+ } 
else {\n+ listener.onResponse(true);\n+ }\n+ } catch (Exception e) {\n+ listener.onFailure(e);\n+ }\n+ }\n+\n+ private BulkRequestBuilder findUnusedStateDocs() {\n+ Set<String> jobIds = getJobIds();\n+ BulkRequestBuilder deleteUnusedStateRequestBuilder = client.prepareBulk();\n+ BatchedStateDocIdsIterator stateDocIdsIterator = new BatchedStateDocIdsIterator(client, AnomalyDetectorsIndex.jobStateIndexName());\n+ while (stateDocIdsIterator.hasNext()) {\n+ Deque<String> stateDocIds = stateDocIdsIterator.next();\n+ for (String stateDocId : stateDocIds) {\n+ String jobId = JobIdExtractor.extractJobId(stateDocId);\n+ if (jobId == null) {\n+ // not a managed state document id\n+ continue;\n+ }\n+ if (jobIds.contains(jobId) == false) {\n+ deleteUnusedStateRequestBuilder.add(new DeleteRequest(\n+ AnomalyDetectorsIndex.jobStateIndexName(), ElasticsearchMappings.DOC_TYPE, stateDocId));\n+ }\n+ }\n+ }\n+ return deleteUnusedStateRequestBuilder;\n+ }\n+\n+ private Set<String> getJobIds() {\n+ ClusterState clusterState = clusterService.state();\n+ MlMetadata mlMetadata = clusterState.getMetaData().custom(MLMetadataField.TYPE);\n+ if (mlMetadata != null) {\n+ return mlMetadata.getJobs().keySet();\n+ }\n+ return Collections.emptySet();\n+ }\n+\n+ private void executeDeleteUnusedStateDocs(BulkRequestBuilder deleteUnusedStateRequestBuilder, ActionListener<Boolean> listener) {\n+ LOGGER.info(\"Found [{}] unused state documents; attempting to delete\",\n+ deleteUnusedStateRequestBuilder.numberOfActions());\n+ deleteUnusedStateRequestBuilder.execute(new ActionListener<BulkResponse>() {\n+ @Override\n+ public void onResponse(BulkResponse bulkItemResponses) {\n+ if (bulkItemResponses.hasFailures()) {\n+ LOGGER.error(\"Some unused state documents could not be deleted due to failures: {}\",\n+ bulkItemResponses.buildFailureMessage());\n+ } else {\n+ LOGGER.info(\"Successfully deleted all unused state documents\");\n+ }\n+ listener.onResponse(true);\n+ }\n+\n+ @Override\n+ public void onFailure(Exception e) {\n+ LOGGER.error(\"Error deleting unused model state documents: \", e);\n+ listener.onFailure(e);\n+ }\n+ });\n+ }\n+\n+ private static class JobIdExtractor {\n+\n+ private static List<Function<String, String>> extractors = Arrays.asList(\n+ ModelState::extractJobId, Quantiles::extractJobId, CategorizerState::extractJobId);\n+\n+ private static String extractJobId(String docId) {\n+ String jobId;\n+ for (Function<String, String> extractor : extractors) {\n+ jobId = extractor.apply(docId);\n+ if (jobId != null) {\n+ return jobId;\n+ }\n+ }\n+ return null;\n+ }\n+ }\n+}", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/retention/UnusedStateRemover.java", "status": "added" }, { "diff": "@@ -8,19 +8,23 @@\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.WriteRequest;\n import org.elasticsearch.action.update.UpdateAction;\n import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.xpack.core.ml.action.DeleteExpiredDataAction;\n import org.elasticsearch.xpack.core.ml.action.UpdateModelSnapshotAction;\n import 
org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;\n import org.elasticsearch.xpack.core.ml.job.config.DataDescription;\n import org.elasticsearch.xpack.core.ml.job.config.Detector;\n import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;\n import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;\n import org.elasticsearch.xpack.core.ml.job.results.Bucket;\n import org.elasticsearch.xpack.core.ml.job.results.ForecastRequestStats;\n@@ -31,13 +35,16 @@\n import java.nio.charset.StandardCharsets;\n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.lessThan;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n \n public class DeleteExpiredDataIT extends MlNativeAutodetectIntegTestCase {\n@@ -78,11 +85,16 @@ public void setUpData() throws IOException {\n }\n \n @After\n- public void tearDownData() throws Exception {\n+ public void tearDownData() {\n client().admin().indices().prepareDelete(DATA_INDEX).get();\n cleanUp();\n }\n \n+ public void testDeleteExpiredDataGivenNothingToDelete() throws Exception {\n+ // Tests that nothing goes wrong when there's nothing to delete\n+ client().execute(DeleteExpiredDataAction.INSTANCE, new DeleteExpiredDataAction.Request()).get();\n+ }\n+\n public void testDeleteExpiredData() throws Exception {\n registerJob(newJobBuilder(\"no-retention\").setResultsRetentionDays(null).setModelSnapshotRetentionDays(null));\n registerJob(newJobBuilder(\"results-retention\").setResultsRetentionDays(1L).setModelSnapshotRetentionDays(null));\n@@ -166,6 +178,18 @@ public void testDeleteExpiredData() throws Exception {\n assertThat(countForecastDocs(forecastStat.getJobId(), forecastStat.getForecastId()), equalTo(forecastStat.getRecordCount()));\n }\n \n+ // Index some unused state documents (more than 10K to test scrolling works)\n+ BulkRequestBuilder bulkRequestBuilder = client().prepareBulk();\n+ bulkRequestBuilder.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);\n+ for (int i = 0; i < 10010; i++) {\n+ String docId = \"non_existing_job_\" + randomFrom(\"model_state_1234567#\" + i, \"quantiles\", \"categorizer_state#\" + i);\n+ IndexRequest indexRequest = new IndexRequest(AnomalyDetectorsIndex.jobStateIndexName(), \"doc\", docId);\n+ indexRequest.source(Collections.emptyMap());\n+ bulkRequestBuilder.add(indexRequest);\n+ }\n+ assertThat(bulkRequestBuilder.get().status(), equalTo(RestStatus.OK));\n+\n+ // Now call the action under test\n client().execute(DeleteExpiredDataAction.INSTANCE, new DeleteExpiredDataAction.Request()).get();\n \n // We need to refresh to ensure the deletion is visible\n@@ -216,6 +240,16 @@ public void testDeleteExpiredData() throws Exception {\n assertThat(countForecastDocs(job.getId(), forecastId), equalTo(0L));\n }\n }\n+\n+ // Verify .ml-state doesn't contain unused state documents\n+ SearchResponse stateDocsResponse = client().prepareSearch(AnomalyDetectorsIndex.jobStateIndexName())\n+ .setFetchSource(false)\n+ .setSize(10000)\n+ .get();\n+ 
assertThat(stateDocsResponse.getHits().getTotalHits(), lessThan(10000L));\n+ for (SearchHit hit : stateDocsResponse.getHits().getHits()) {\n+ assertThat(hit.getId().startsWith(\"non_existing_job\"), is(false));\n+ }\n }\n \n private static Job.Builder newJobBuilder(String id) {", "filename": "x-pack/qa/ml-native-tests/src/test/java/org/elasticsearch/xpack/ml/integration/DeleteExpiredDataIT.java", "status": "modified" } ] }
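The clean-up in the record above hinges on recovering a job id from a state document id by stripping a type suffix (`lastIndexOf`), then deleting every state doc whose job no longer exists. The sketch below models that filtering step in isolation, assuming the same doc-id formats the PR's tests use; it is not the actual `UnusedStateRemover`, which additionally scrolls the `.ml-state` index in batches and issues bulk deletes.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch: extract the job id from a state doc id by stripping a known suffix,
// then flag every doc whose job id is not among the currently configured jobs.
public class OrphanedStateSketch {

    static String extractJobId(String docId, String suffix) {
        int suffixIndex = docId.lastIndexOf(suffix);
        return suffixIndex <= 0 ? null : docId.substring(0, suffixIndex);
    }

    // One extractor per state document type, tried in order.
    static final List<Function<String, String>> EXTRACTORS = Arrays.asList(
        docId -> extractJobId(docId, "_model_state_"),
        docId -> extractJobId(docId, "_quantiles"),
        docId -> extractJobId(docId, "_categorizer_state")
    );

    static String jobIdOf(String docId) {
        return EXTRACTORS.stream().map(f -> f.apply(docId))
            .filter(id -> id != null).findFirst().orElse(null);
    }

    public static void main(String[] args) {
        Set<String> existingJobs = new HashSet<>(Arrays.asList("foo"));
        List<String> stateDocIds = Arrays.asList(
            "foo_model_state_3151373783#1", "foo_quantiles",
            "deleted-job_categorizer_state#1", "unrelated-doc");

        List<String> orphaned = stateDocIds.stream()
            .filter(id -> jobIdOf(id) != null)                         // only recognised state docs
            .filter(id -> existingJobs.contains(jobIdOf(id)) == false) // whose job is gone
            .collect(Collectors.toList());

        System.out.println(orphaned); // [deleted-job_categorizer_state#1]
    }
}
```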
{ "body": "\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.2.0 and 6.2.1\r\n\r\n**Plugins installed**: [ x-pack ]\r\n\r\n**JVM version** (`java -version`): Any\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Any\r\n\r\n**Description of the problem including expected versus actual behavior**: \r\n\r\nTo be able to use the field capabilities without warnings.\r\n\r\n**Steps to reproduce**:\r\n\r\nIn certain cases, Kibana will trigger warnings in the elasticsearch log through its use of the Field Capabilities API. This happens in particular any time an index pattern is added or refreshed in Kibana. Kibana makes this API call in particular: \r\n\r\n[1] Run the following command\r\n\r\n```\r\nGET /*/_field_caps?fields=*&ignore_unavailable=true&allow_no_indices=false\r\n```\r\n\r\n[2] Which will yield the following warnings in the log:\r\n\r\n```\r\n[2018-05-15T16:06:05,154][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n```\r\n\r\nThis is because in MappedFieldType.java we call fielddataBuilder() to determine whether or not the field can be aggregated on:\r\n\r\n```\r\n public boolean isAggregatable() {\r\n try {\r\n fielddataBuilder(\"\");\r\n return true;\r\n } catch (IllegalArgumentException e) {\r\n return false;\r\n }\r\n }\r\n```\r\nThis is likely spewing warnings for a bunch of customers. Kudos to @gmoskovicz for hunting this down.", "comments": [ { "body": "What version of Elasticsearch is this? Please fill out the template when posting an issue.", "created_at": "2018-05-15T18:22:49Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-05-16T07:31:20Z" }, { "body": "I do confirm that this issue is reproducible on ES & Kibana versions 6.2.4.", "created_at": "2018-05-16T09:26:22Z" }, { "body": "If I run a `Refresh field list` in Kibana under Management>Index Patterns I'm also getting a lot of below warning messages on all production nodes despite _uid is not used at all.\r\n\r\n```\r\n[2018-05-16T10:55:28,220][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,220][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,220][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,221][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,222][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,222][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T10:55:28,224][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T11:02:02,818][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n[2018-05-16T11:02:02,818][WARN ][o.e.d.i.m.UidFieldMapper ] Fielddata access on the _uid field is deprecated, use _id instead\r\n```", "created_at": "2018-05-16T11:07:02Z" }, { "body": "@Constantin07 Thanks for the heads up. We are aware of this and to give you some context, the `Refresh Index Pattern` action is going to end up calling the `_field_caps` endpoint, which causes this Warning message to be shown. 
\r\n\r\nKeep an eye on this issue as it should be the fix for that as well.", "created_at": "2018-05-16T12:22:15Z" }, { "body": "Thanks for reporting @gmoskovicz @tomcallahan \r\nThis is fixed by https://github.com/elastic/elasticsearch/pull/30651", "created_at": "2018-05-16T17:34:59Z" } ], "number": 30625, "title": "Use of field capabilities API can print warnings related to _uid field" }
{ "body": "A deprecation warning is printed when creating the fieldddata builder for the `_uid` field.\r\nThis change moves the deprecation logging to the building of the fielddata since otherwise\r\nAPIs like `_field_caps` can emit deprecation warning when they just test the capabilities\r\nof the `_uid` field.\r\n\r\nCloses #30625", "number": 30651, "review_comments": [], "title": "Delay _uid field data deprecation warning" }
{ "commits": [ { "message": "Delay _uid field data deprecation warning\n\nA deprecation warning is printed when creating the fieldddata builder for the `_uid` field.\nThis change moves the deprecation logging to the building of the fielddata since otherwise\nAPIs like `_field_caps` can emit deprecation warning when they just test the capabilities\nof the `_uid` field.\n\nCloses #30625" } ], "files": [ { "diff": "@@ -113,11 +113,11 @@ public String typeName() {\n @Override\n public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {\n if (indexOptions() == IndexOptions.NONE) {\n- DEPRECATION_LOGGER.deprecated(\"Fielddata access on the _uid field is deprecated, use _id instead\");\n return new IndexFieldData.Builder() {\n @Override\n public IndexFieldData<?> build(IndexSettings indexSettings, MappedFieldType fieldType, IndexFieldDataCache cache,\n CircuitBreakerService breakerService, MapperService mapperService) {\n+ DEPRECATION_LOGGER.deprecated(\"Fielddata access on the _uid field is deprecated, use _id instead\");\n MappedFieldType idFieldType = mapperService.fullName(IdFieldMapper.NAME);\n IndexFieldData<?> idFieldData = idFieldType.fielddataBuilder(fullyQualifiedIndexName)\n .build(indexSettings, idFieldType, cache, breakerService, mapperService);", "filename": "server/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java", "status": "modified" }, { "diff": "@@ -25,16 +25,25 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.UUIDs;\n+import org.elasticsearch.common.breaker.CircuitBreaker;\n+import org.elasticsearch.common.breaker.NoopCircuitBreaker;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.fielddata.IndexFieldData;\n+import org.elasticsearch.index.fielddata.IndexFieldDataCache;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.UidFieldMapper;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.indices.breaker.CircuitBreakerService;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.mockito.Mockito;\n \n+import java.io.IOException;\n import java.util.Collection;\n import java.util.Collections;\n \n+import static org.mockito.Matchers.any;\n+\n public class UidFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n@@ -132,4 +141,35 @@ public void testTermsQuery() throws Exception {\n query = ft.termQuery(\"type2#id\", context);\n assertEquals(new TermInSetQuery(\"_id\"), query);\n }\n+\n+ public void testIsAggregatable() {\n+ Settings indexSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID())\n+ .build();\n+ IndexMetaData indexMetaData = IndexMetaData.builder(IndexMetaData.INDEX_UUID_NA_VALUE).settings(indexSettings).build();\n+ IndexSettings mockSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n+ MappedFieldType ft = UidFieldMapper.defaultFieldType(mockSettings);\n+ assertTrue(ft.isAggregatable());\n+ }\n+\n+ public void testFieldDataDeprecation() {\n+ Settings indexSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n+ 
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID())\n+ .build();\n+ IndexMetaData indexMetaData = IndexMetaData.builder(IndexMetaData.INDEX_UUID_NA_VALUE).settings(indexSettings).build();\n+ IndexSettings mockSettings = new IndexSettings(indexMetaData, Settings.EMPTY);\n+ MappedFieldType ft = UidFieldMapper.defaultFieldType(mockSettings);\n+ IndexFieldData.Builder builder = ft.fielddataBuilder(\"\");\n+ MapperService mockMapper = Mockito.mock(MapperService.class);\n+ Mockito.when(mockMapper.fullName(any())).thenReturn(new IdFieldMapper.IdFieldType());\n+ Mockito.when(mockMapper.types()).thenReturn(Collections.singleton(\"doc\"));\n+ builder.build(mockSettings, ft, null, new NoneCircuitBreakerService(), mockMapper);\n+ assertWarnings(\"Fielddata access on the _uid field is deprecated, use _id instead\");\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/index/mapper/UidFieldTypeTests.java", "status": "modified" } ] }
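The diff above only relocates where the deprecation message is emitted: the warning moves from the creation of the fielddata builder to the actual building of the fielddata, so capability probes such as `_field_caps` that merely obtain the builder stay silent. A minimal, self-contained sketch of that deferral pattern in plain Java (the `DeprecationLog` and `FieldDataBuilder` names are hypothetical stand-ins, not Elasticsearch APIs):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class DelayedDeprecationSketch {

    /** Hypothetical stand-in for a deprecation logger: counts how often it fires. */
    static final class DeprecationLog {
        static final AtomicInteger WARNINGS = new AtomicInteger();
        static void deprecated(String msg) {
            WARNINGS.incrementAndGet();
            System.out.println("DEPRECATION: " + msg);
        }
    }

    /** The builder can be handed out freely (e.g. by a capabilities check) without logging. */
    interface FieldDataBuilder {
        Object build();
    }

    static FieldDataBuilder fielddataBuilder() {
        // Deliberately NOT logging here: logging at creation time would also fire for
        // _field_caps-style probes that never build any fielddata.
        return () -> {
            DeprecationLog.deprecated("Fielddata access on the _uid field is deprecated, use _id instead");
            return new Object(); // pretend this is the fielddata
        };
    }

    public static void main(String[] args) {
        FieldDataBuilder builder = fielddataBuilder();                            // capability probe: no warning
        System.out.println("warnings so far: " + DeprecationLog.WARNINGS.get());  // 0
        builder.build();                                                          // real fielddata access
        System.out.println("warnings so far: " + DeprecationLog.WARNINGS.get());  // 1
    }
}
```

The design choice is simply to move the side effect onto the code path that only real fielddata consumers hit, which is what the accompanying `testFieldDataDeprecation` test asserts.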
{ "body": "A wrong check in the activate watch API could lead to watches triggering\r\non wrong nodes. The check was supposed to check if watch execution\r\nwas distributed already in 6.x and only if not, then trigger locally.\r\n\r\nThe check however was broken and triggered the watch only when\r\ndistributed watch execution was actually enabled.\r\n\r\nDue to another check in the trigger schedule engine, this problem is already solved on 6.3 on all non-data nodes (like client nodes), as no trigger is started there. However this needs to be fixed when you connect to a different data node (different as in where the watcher shard that maintains this watch is not).", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-05-15T12:31:23Z" } ], "number": 30613, "title": "Watcher: Prevent triggering watch when using activate API" }
{ "body": "If a user is putting a watch, while upgrading from 5.x to 6.x, this can\r\nlead to the watch being triggered on the node receiving the put watch\r\nrequest.\r\n\r\nNote, that this can only happen when watcher is not running in its\r\ndistributed fashion. The condition for this is, that there are still\r\nnodes running on version 5 in a 6.x cluster.\r\n\r\nNote^2, that this issue is not as significant as #30613 due to the above reason.", "number": 30643, "review_comments": [], "title": "Watcher: Prevent duplicate watch triggering during upgrade" }
{ "commits": [ { "message": "Watcher: Prevent duplicate watch triggering during upgrade\n\nIf a user is putting a watch, while upgrading from 5.x to 6.x, this can\nlead to the watch being triggered on the node receiving the put watch\nrequest.\n\nNote, that this can only happen when watcher is not running in its\ndistributed fashion. The condition for this is, that there are still\nnodes running on version 5 in a 6.x cluster." }, { "message": "Merge branch '6.x' into 1805-fix-duplicate-watch-triggering-during-upgrade" } ], "files": [ { "diff": "@@ -63,6 +63,7 @@ public class TransportPutWatchAction extends WatcherTransportAction<PutWatchRequ\n private final WatchParser parser;\n private final TriggerService triggerService;\n private final Client client;\n+ private final ClusterService clusterService;\n private static final ToXContent.Params DEFAULT_PARAMS =\n WatcherParams.builder().hideSecrets(false).hideHeaders(false).includeStatus(true).build();\n \n@@ -76,6 +77,7 @@ public TransportPutWatchAction(Settings settings, TransportService transportServ\n this.clock = clock;\n this.parser = parser;\n this.client = client;\n+ this.clusterService = clusterService;\n this.triggerService = triggerService;\n }\n \n@@ -106,7 +108,10 @@ protected void masterOperation(PutWatchRequest request, ClusterState state,\n executeAsyncWithOrigin(client.threadPool().getThreadContext(), WATCHER_ORIGIN, updateRequest,\n ActionListener.<UpdateResponse>wrap(response -> {\n boolean created = response.getResult() == DocWriteResponse.Result.CREATED;\n- if (localExecute(request) == false && watch.status().state().isActive()) {\n+ // if not yet in distributed mode (mixed 5/6 version in cluster), only trigger on the master node\n+ if (localExecute(request) == false &&\n+ this.clusterService.state().nodes().isLocalNodeElectedMaster() &&\n+ watch.status().state().isActive()) {\n triggerService.add(watch);\n }\n listener.onResponse(new PutWatchResponse(response.getId(), response.getVersion(), created));", "filename": "x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/transport/actions/put/TransportPutWatchAction.java", "status": "modified" }, { "diff": "@@ -5,13 +5,18 @@\n */\n package org.elasticsearch.xpack.watcher.transport.actions.put;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.index.IndexRequest;\n-import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.action.DocWriteResponse;\n import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.update.UpdateRequest;\n+import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.ThreadContext;\n@@ -23,17 +28,23 @@\n import org.elasticsearch.transport.TransportService;\n import org.elasticsearch.xpack.core.ClientHelper;\n import org.elasticsearch.xpack.core.watcher.transport.actions.put.PutWatchRequest;\n+import org.elasticsearch.xpack.core.watcher.transport.actions.put.PutWatchResponse;\n import org.elasticsearch.xpack.core.watcher.watch.ClockMock;\n import 
org.elasticsearch.xpack.core.watcher.watch.Watch;\n+import org.elasticsearch.xpack.core.watcher.watch.WatchStatus;\n import org.elasticsearch.xpack.watcher.test.WatchExecutionContextMockBuilder;\n import org.elasticsearch.xpack.watcher.trigger.TriggerService;\n import org.elasticsearch.xpack.watcher.watch.WatchParser;\n+import org.joda.time.DateTime;\n+import org.joda.time.DateTimeZone;\n import org.junit.Before;\n import org.mockito.ArgumentCaptor;\n \n import java.util.Collections;\n+import java.util.HashSet;\n import java.util.Map;\n \n+import static java.util.Arrays.asList;\n import static org.hamcrest.Matchers.hasKey;\n import static org.hamcrest.Matchers.hasSize;\n import static org.hamcrest.Matchers.is;\n@@ -45,59 +56,134 @@\n import static org.mockito.Mockito.doAnswer;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.verifyZeroInteractions;\n import static org.mockito.Mockito.when;\n \n public class TransportPutWatchActionTests extends ESTestCase {\n \n private TransportPutWatchAction action;\n- private Watch watch = new WatchExecutionContextMockBuilder(\"_id\").buildMock().watch();\n- private ThreadContext threadContext = new ThreadContext(Settings.EMPTY);\n+ private final Watch watch = new WatchExecutionContextMockBuilder(\"_id\").buildMock().watch();\n+ private final ThreadContext threadContext = new ThreadContext(Settings.EMPTY);\n+ private final ClusterService clusterService = mock(ClusterService.class);\n+ private final TriggerService triggerService = mock(TriggerService.class);\n+ private final ActionListener<PutWatchResponse> listener = ActionListener.wrap(r -> {}, e -> assertThat(e, is(nullValue())));\n \n @Before\n public void setupAction() throws Exception {\n- TriggerService triggerService = mock(TriggerService.class);\n- ClusterService clusterService = mock(ClusterService.class);\n ThreadPool threadPool = mock(ThreadPool.class);\n when(threadPool.getThreadContext()).thenReturn(threadContext);\n \n TransportService transportService = mock(TransportService.class);\n \n WatchParser parser = mock(WatchParser.class);\n when(parser.parseWithSecrets(eq(\"_id\"), eq(false), anyObject(), anyObject(), anyObject(), anyBoolean())).thenReturn(watch);\n+ WatchStatus status = mock(WatchStatus.class);\n+ WatchStatus.State state = new WatchStatus.State(true, DateTime.now(DateTimeZone.UTC));\n+ when(status.state()).thenReturn(state);\n+ when(watch.status()).thenReturn(status);\n \n Client client = mock(Client.class);\n when(client.threadPool()).thenReturn(threadPool);\n // mock an index response that calls the listener\n doAnswer(invocation -> {\n- IndexRequest request = (IndexRequest) invocation.getArguments()[1];\n- ActionListener<IndexResponse> listener = (ActionListener) invocation.getArguments()[2];\n+ UpdateRequest request = (UpdateRequest) invocation.getArguments()[0];\n+ ActionListener<UpdateResponse> listener = (ActionListener) invocation.getArguments()[1];\n \n ShardId shardId = new ShardId(new Index(Watch.INDEX, \"uuid\"), 0);\n- listener.onResponse(new IndexResponse(shardId, request.type(), request.id(), 1, 1, 1, true));\n+ listener.onResponse(new UpdateResponse(shardId, request.type(), request.id(), request.version(),\n+ DocWriteResponse.Result.UPDATED));\n \n return null;\n- }).when(client).execute(any(), any(), any());\n+ }).when(client).update(any(), any());\n \n action = new TransportPutWatchAction(Settings.EMPTY, transportService, threadPool,\n new ActionFilters(Collections.emptySet()), new 
IndexNameExpressionResolver(Settings.EMPTY), new ClockMock(),\n new XPackLicenseState(Settings.EMPTY), parser, client, clusterService, triggerService);\n }\n \n public void testHeadersAreFilteredWhenPuttingWatches() throws Exception {\n- ClusterState state = mock(ClusterState.class);\n // set up threadcontext with some arbitrary info\n String headerName = randomFrom(ClientHelper.SECURITY_HEADER_FILTERS);\n threadContext.putHeader(headerName, randomAlphaOfLength(10));\n threadContext.putHeader(randomAlphaOfLength(10), \"doesntmatter\");\n \n PutWatchRequest putWatchRequest = new PutWatchRequest();\n putWatchRequest.setId(\"_id\");\n- action.masterOperation(putWatchRequest, state, ActionListener.wrap(r -> {}, e -> assertThat(e, is(nullValue()))));\n+\n+ ClusterState state = ClusterState.builder(new ClusterName(\"my_cluster\"))\n+ .nodes(DiscoveryNodes.builder()\n+ .masterNodeId(\"node_1\")\n+ .localNodeId(randomFrom(\"node_1\", \"node_2\"))\n+ .add(newNode(\"node_1\", Version.CURRENT))\n+ .add(newNode(\"node_2\", Version.CURRENT)))\n+ .build();\n+ when(clusterService.state()).thenReturn(state);\n+\n+ action.masterOperation(putWatchRequest, state, listener);\n \n ArgumentCaptor<Map> captor = ArgumentCaptor.forClass(Map.class);\n verify(watch.status()).setHeaders(captor.capture());\n Map<String, String> capturedHeaders = captor.getValue();\n assertThat(capturedHeaders.keySet(), hasSize(1));\n assertThat(capturedHeaders, hasKey(headerName));\n }\n-}\n\\ No newline at end of file\n+\n+ public void testWatchesAreNeverTriggeredWhenDistributed() throws Exception {\n+ PutWatchRequest putWatchRequest = new PutWatchRequest();\n+ putWatchRequest.setId(\"_id\");\n+\n+ ClusterState clusterState = ClusterState.builder(new ClusterName(\"my_cluster\"))\n+ .nodes(DiscoveryNodes.builder()\n+ .masterNodeId(\"node_1\")\n+ .localNodeId(randomFrom(\"node_1\", \"node_2\"))\n+ .add(newNode(\"node_1\", Version.CURRENT))\n+ .add(newNode(\"node_2\", Version.CURRENT)))\n+ .build();\n+ when(clusterService.state()).thenReturn(clusterState);\n+\n+ action.masterOperation(putWatchRequest, clusterState, listener);\n+\n+ verifyZeroInteractions(triggerService);\n+ }\n+\n+ public void testWatchesAreNotTriggeredOnNonMasterWhenNotDistributed() throws Exception {\n+ PutWatchRequest putWatchRequest = new PutWatchRequest();\n+ putWatchRequest.setId(\"_id\");\n+\n+ ClusterState clusterState = ClusterState.builder(new ClusterName(\"my_cluster\"))\n+ .nodes(DiscoveryNodes.builder()\n+ .masterNodeId(\"node_2\")\n+ .localNodeId(\"node_1\")\n+ .add(newNode(\"node_1\", Version.CURRENT))\n+ .add(newNode(\"node_2\", Version.V_5_6_10)))\n+ .build();\n+ when(clusterService.state()).thenReturn(clusterState);\n+\n+ action.masterOperation(putWatchRequest, clusterState, listener);\n+\n+ verifyZeroInteractions(triggerService);\n+ }\n+\n+ public void testWatchesAreTriggeredOnMasterWhenNotDistributed() throws Exception {\n+ PutWatchRequest putWatchRequest = new PutWatchRequest();\n+ putWatchRequest.setId(\"_id\");\n+\n+ ClusterState clusterState = ClusterState.builder(new ClusterName(\"my_cluster\"))\n+ .nodes(DiscoveryNodes.builder()\n+ .masterNodeId(\"node_1\")\n+ .localNodeId(\"node_1\")\n+ .add(newNode(\"node_1\", Version.CURRENT))\n+ .add(newNode(\"node_2\", Version.V_5_6_10)))\n+ .build();\n+ when(clusterService.state()).thenReturn(clusterState);\n+\n+ action.masterOperation(putWatchRequest, clusterState, listener);\n+\n+ verify(triggerService).add(eq(watch));\n+ }\n+\n+ private static DiscoveryNode newNode(String nodeId, Version version) {\n+ 
return new DiscoveryNode(nodeId, ESTestCase.buildNewFakeTransportAddress(), Collections.emptyMap(),\n+ new HashSet<>(asList(DiscoveryNode.Role.values())), version);\n+ }\n+}", "filename": "x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/transport/actions/put/TransportPutWatchActionTests.java", "status": "modified" } ] }
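The important line in the diff above is the added master check: while the cluster still contains 5.x nodes (so watch execution is not yet distributed), only the elected master should hand the watch to the trigger service. A hedged sketch of that guard in plain Java (class, method, and parameter names are illustrative simplifications, not the real Watcher API):

```java
public class TriggerGuardSketch {

    /**
     * Illustrative-only version of the decision made when a watch is put.
     *
     * @param distributedExecution true once every node understands distributed watch execution
     *                             (i.e. no 5.x nodes are left in the cluster)
     * @param localNodeIsMaster    true when this node is the elected master
     * @param watchIsActive        true when the watch status reports it as active
     */
    static boolean shouldTriggerLocally(boolean distributedExecution,
                                        boolean localNodeIsMaster,
                                        boolean watchIsActive) {
        if (distributedExecution) {
            // Distributed mode: the node owning the watcher shard triggers the watch,
            // never the node that happened to receive the put-watch request.
            return false;
        }
        // Mixed 5.x/6.x cluster: only the elected master triggers, and only for active watches.
        return localNodeIsMaster && watchIsActive;
    }

    public static void main(String[] args) {
        System.out.println(shouldTriggerLocally(true,  true,  true));  // false: distributed handles it
        System.out.println(shouldTriggerLocally(false, false, true));  // false: not the master
        System.out.println(shouldTriggerLocally(false, true,  true));  // true
    }
}
```

This mirrors the three new unit tests in the diff: never trigger via the put-watch path when distributed, never trigger on a non-master node in a mixed cluster, and trigger on the master otherwise.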
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.2.4\r\n\r\n**Plugins installed**: No plugins installed\r\n\r\n**JVM version** (`java -version`): java version \"1.8.0_25\"\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Darwin Kernel Version 17.5.0\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen trying to use a pipeline aggregation on a date histogram, bucket_path cannot properly be resolved if it is pointing to a child aggregation of the date histogram if a sibling aggregation (pipeline aggregation's sibling) has the same name as the child aggregation of the date histogram. (see example). \r\n\r\nI would expect that elasticsearch could properly resolve the path, as it doesn't seem to exist an ambiguation. Maybe I'm wrong with this and it should behave as it is behaving... \r\n\r\n**Steps to reproduce**:\r\n\r\nHere you have a simple query example that allows to reproduce the problem\r\n\r\n```\r\n{\r\n \"query\": { \"match_all\": {} },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"sessionsCount\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"terms\": {\r\n \"status\": [\r\n \"FINISHED\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n },\r\n \"monthlyAverageSessions\": {\r\n \"avg_bucket\": {\r\n \"buckets_path\": \"monthBuckets>sessionsCount>_count\",\r\n \"gap_policy\": \"insert_zeros\"\r\n }\r\n },\r\n \"monthBuckets\": {\r\n \"date_histogram\": {\r\n \"field\": \"startTimestamp\",\r\n \"interval\": \"month\"\r\n },\r\n \"aggs\": {\r\n \"sessionsCount\": {\r\n \"filter\": {\r\n \"bool\": {\r\n \"must\": [\r\n {\r\n \"terms\": {\r\n \"status\": [\r\n \"FINISHED\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nAnd this is the error that I receive. \r\n\r\n`\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"\\\",\\\"phase\\\":\\\"fetch\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[],\\\"caused_by\\\":{\\\"type\\\":\\\"class_cast_exception\\\",\\\"reason\\\":\\\"org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\\\"}},\\\"status\\\":503}\"`\r\n\r\nAny of both aggregations works fine if they are not together (sessionsCount and monthlyAverageSessions). And it also works well if I change the name of the first aggregation (sessionsCount) to a different one (sessions for example). So it looks like a naming problem. \r\n\r\n", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-05-15T12:20:13Z" }, { "body": "@colings86 do you have an idea whether this is something that should be supported or a usage problem?", "created_at": "2018-05-15T12:22:04Z" }, { "body": "@ruizmarc This does indeed looks like a bug at first glance. 
Do you have the full stack trace for the ClassCastException from your server logs?", "created_at": "2018-05-15T12:33:27Z" }, { "body": "Thanks for your quick response :) Sure, here you have the stack trace:\r\n\r\n```\r\n org.elasticsearch.action.search.SearchPhaseExecutionException:\r\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:657) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n Caused by: java.lang.ClassCastException: org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\r\n at org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregator.doReduce(BucketMetricsPipelineAggregator.java:83) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:533) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:504) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:421) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:740) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n ... 
5 more\r\n [2018-05-15T10:42:12,952][WARN ][r.suppressed ] path: /charges/_search, params: {index=charges}\r\n org.elasticsearch.action.search.SearchPhaseExecutionException:\r\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:657) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n Caused by: java.lang.ClassCastException: org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\r\n at org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregator.doReduce(BucketMetricsPipelineAggregator.java:83) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:533) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:504) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:421) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:740) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n ... 
5 more\r\n [2018-05-15T10:42:28,853][WARN ][r.suppressed ] path: /charges/_search, params: {index=charges}\r\n org.elasticsearch.action.search.SearchPhaseExecutionException:\r\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:657) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n Caused by: java.lang.ClassCastException: org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\r\n at org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregator.doReduce(BucketMetricsPipelineAggregator.java:83) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:533) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:504) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:421) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:740) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n ... 
5 more\r\n [2018-05-15T13:31:07,538][WARN ][r.suppressed ] path: /charges/_search, params: {index=charges}\r\n org.elasticsearch.action.search.SearchPhaseExecutionException:\r\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:657) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n Caused by: java.lang.ClassCastException: org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\r\n at org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregator.doReduce(BucketMetricsPipelineAggregator.java:83) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:533) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:504) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:421) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:740) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n ... 5 more\r\n```", "created_at": "2018-05-15T13:32:39Z" }, { "body": "@ruizmarc I hope you don't mind but I re-formatted your stack trace a bit to make it a bit easier to read", "created_at": "2018-05-15T14:29:25Z" }, { "body": "I see the bug. It's not just when the aggs are the same name, but when they line up in just the correct manner:\r\n\r\n- A. Multi-bucket agg in the first entry of our internal list\r\n- B. Regular agg as the immediate child of the multi-bucket in A\r\n- C. Regular agg with the same name as B at the top level, listed as the second entry in our internal list\r\n- D. 
Finally, a pipeline agg with the path down to B\r\n\r\nIt blows up because we [overwrite the bucket path with the sublist](https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java#L82). So when we start iterating, we match the agg in A, sublist the path and recurse down. But when the loop comes back around to check agg C, the sublisted path now matches because it went from `A>B` to just `B`, and then it throws the cast exception.\r\n\r\nThe fix should be pretty straightforward. I'll work something up. Thanks for the bug report @ruizmarc! ", "created_at": "2018-05-15T21:58:34Z" }, { "body": "I'm glad it helped! Thanks for your quick fix, it will be very helpful! 💯 ☺️", "created_at": "2018-05-16T00:06:20Z" }, { "body": "It seems like I stumbled across the same bug, so I backported the fix to 5.6.8 to check. But unfortunatly the fix isn't working for me:\r\n\r\n```\r\norg.elasticsearch.action.search.SearchPhaseExecutionException: \r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:272) [elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:659) [elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.lang.ClassCastException: org.elasticsearch.search.aggregations.bucket.filter.InternalFilter cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation\r\n\tat org.elasticsearch.search.aggregations.pipeline.bucketmetrics.BucketMetricsPipelineAggregator.doReduce(BucketMetricsPipelineAggregator.java:83) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:519) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:490) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:408) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:725) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:674) 
~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.8-SNAPSHOT.jar:5.6.8-SNAPSHOT]\r\n\t... 3 more\r\n```\r\n\r\nThe query is: \r\n```\r\n{\r\n \"size\" : 13,\r\n \"query\" : {\r\n \"match_all\": {}\r\n },\r\n \"aggregations\" : { \r\n \"filteredPrices\" : {\r\n \"filter\" : {\r\n \"match_all\": {}\r\n },\r\n \"aggregations\" : {\r\n \"groupById\" : {\r\n \"terms\" : {\r\n \"field\" : \"itemId\"\r\n },\r\n \"aggregations\" : {\r\n \"nestedPrices\" : {\r\n \"nested\" : {\r\n \"path\" : \"prices\"\r\n },\r\n \"aggregations\" : {\r\n \"catalogFiltered\" : {\r\n \"filter\" : {\r\n \"match_all\": {}\r\n },\r\n \"aggregations\" : {\r\n \"minPrice\" : {\r\n \"min\" : {\r\n \"field\" : \"prices.price\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"minPriceOfAllItems\" : {\r\n \"min_bucket\" : {\r\n \"buckets_path\" : [\r\n \"filteredPrices>groupById>nestedPrices>catalogFiltered>minPrice\"\r\n ],\r\n \"gap_policy\" : \"skip\"\r\n }\r\n },\r\n \"maxPriceOfAllItems\" : {\r\n \"max_bucket\" : {\r\n \"buckets_path\" : [\r\n \"filteredPrices>groupById>nestedPrices>catalogFiltered>minPrice\"\r\n ],\r\n \"gap_policy\" : \"skip\"\r\n }\r\n }\r\n }\r\n}\r\n```", "created_at": "2018-05-18T12:32:10Z" }, { "body": "Hey @jmuscireum, I believe your running into a triplet of unrelated issues that constrain pipeline aggs right now.\r\n\r\nFirst, `filter` aggs don't play nicely with pipelines because they only emit a single bucket, whereas the various sibling pipeline aggs (like `min_bucket`) can only use multi-bucket aggs. Issue https://github.com/elastic/elasticsearch/issues/14600 deals with this (it talks about `bucket_script`, but generally applicable to other sibling aggs).\r\n\r\nYou can \"workaround\" it by using `filters` agg, but then you'll run into this issue: https://github.com/elastic/elasticsearch/issues/29287. Which is actually the same thing: nested aggs emit a single bucket instead of multi-buckets\r\n\r\nFinally, and I'm not sure there's an issue for this, but pipeline aggs can't aggregate across multiple levels of terms aggregations easily. You can sometimes work around it by \"proxy'ing\" the value out of the terms agg with an intermediate pipeline agg (e.g. a min_bucket on the same level as the terms, to \"roll up\" the value at that level, then another min_bucket at a higher level to roll up all the previous min_buckets). But you can't normally use one pipeline agg to aggregate across multiple levels of terms aggs.\r\n\r\nSorry for all the bad news... pipeline aggs have some fundamental limitations based on how the framework works. :(", "created_at": "2018-05-18T13:17:17Z" }, { "body": "Hey @polyfractal, thank you for the detailed answer! As a workaround, we can manually fetch the min and max price from the aggregated bucket. But I have the feeling, that there is a way that is more suited for our use case, but I can't think of it. Maybe you have an idea how we can achieve this, without having so many nested aggregations.\r\n\r\nOur products can have different prices in multiple catalogs. Depending on the user, different catalogs are accessible. So the prices differ from user to user. On our search page every product filter is showing the active filter values and the values that are additionally possible. For this we are using the post filter to filter the products after the filter aggregations were done. That's why `filteredPrices` exists. 
It filters the products that should be taken into consideration for the min and max values of our price filter.\r\n\r\nThank you for doing awesome work and have a nice weekend!", "created_at": "2018-05-18T14:00:47Z" } ], "number": 30608, "title": "Bucket path name resolution fails with siblings and child aggregations with the same name" }
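The root cause described in the comments above can be reproduced with nothing more than plain `java.util.List` semantics: reassigning the path variable to its own sublist makes a later top-level agg, whose name equals the multi-bucket's child, appear to match the path. A minimal sketch under those assumptions (the `Agg` class and the agg names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SublistBugSketch {

    /** Minimal stand-in for an aggregation: just a name and whether it is multi-bucket. */
    static final class Agg {
        final String name;
        final boolean multiBucket;
        Agg(String name, boolean multiBucket) { this.name = name; this.multiBucket = multiBucket; }
    }

    public static void main(String[] args) {
        // Shape of the aggs in issue #30608: a multi-bucket agg first, then a
        // single-bucket sibling whose name equals the multi-bucket's child.
        List<Agg> topLevel = Arrays.asList(
            new Agg("monthBuckets", true),    // A: multi-bucket, child is "sessionsCount"
            new Agg("sessionsCount", false)   // C: single-bucket sibling with the child's name
        );
        List<String> bucketsPath = new ArrayList<>(Arrays.asList("monthBuckets", "sessionsCount"));

        for (Agg agg : topLevel) {
            if (agg.name.equals(bucketsPath.get(0))) {
                // BUGGY: destructive reassignment. After the first match the path shrinks to
                // ["sessionsCount"], so the single-bucket sibling also "matches"; the real code
                // then casts it to InternalMultiBucketAggregation and throws ClassCastException.
                bucketsPath = bucketsPath.subList(1, bucketsPath.size());
                System.out.println("matched " + agg.name + " (multiBucket=" + agg.multiBucket + ")");
            }
        }
        // The fix in PR #30632 keeps bucketsPath intact and stores the sublist in a new
        // variable, so only the agg named in the first path element can ever match.
    }
}
```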
{ "body": "When processing a top-level sibling pipeline, we destructively sublist the path by assigning back onto the same variable. But if aggs are specified such:\r\n\r\nA. Multi-bucket agg in the first entry of our internal list\r\nB. Regular agg as the immediate child of the multi-bucket in A\r\nC. Regular agg with the same name as B at the top level, listed as the second entry in our internal list\r\nD. Finally, a pipeline agg with the path down to B\r\n\r\nWe'll get class cast exception. The first agg will sublist the path from [A,B] to [B], and then when we loop around to check agg C, the sublisted path [B] matches the name of C and it fails.\r\n\r\nThe fix is simple: we just need to store the sublist in a new object so that the old path remains valid for the rest of the aggs in the loop\r\n\r\nCloses #30608\r\n\r\n/cc @colings86 ", "number": 30632, "review_comments": [], "title": "Fix class cast exception in BucketMetricsPipeline path traversal" }
{ "commits": [ { "message": "Fix bug in BucketMetrics path traversal\n\nWhen processing a top-level sibling pipeline, we destructively sublist\nthe path by assigning back onto the same variable. But if aggs are\nspecified such:\n\nA. Multi-bucket agg in the first entry of our internal list\nB. Regular agg as the immediate child of the multi-bucket in A\nC. Regular agg with the same name as B at the top level, listed as the\n second entry in our internal list\nD. Finally, a pipeline agg with the path down to B\n\nWe'll get class cast exception. The first agg will sublist the path\nfrom [A,B] to [B], and then when we loop around to check agg C,\nthe sublisted path [B] matches the name of C and it fails.\n\nThe fix is simple: we just need to store the sublist in a new object\nso that the old path remains valid for the rest of the aggs in the loop\n\nCloses #30608" } ], "files": [ { "diff": "@@ -79,11 +79,11 @@ public final InternalAggregation doReduce(Aggregations aggregations, ReduceConte\n List<String> bucketsPath = AggregationPath.parse(bucketsPaths()[0]).getPathElementsAsStringList();\n for (Aggregation aggregation : aggregations) {\n if (aggregation.getName().equals(bucketsPath.get(0))) {\n- bucketsPath = bucketsPath.subList(1, bucketsPath.size());\n+ List<String> sublistedPath = bucketsPath.subList(1, bucketsPath.size());\n InternalMultiBucketAggregation<?, ?> multiBucketsAgg = (InternalMultiBucketAggregation<?, ?>) aggregation;\n List<? extends InternalMultiBucketAggregation.InternalBucket> buckets = multiBucketsAgg.getBuckets();\n for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) {\n- Double bucketValue = BucketHelpers.resolveBucketValue(multiBucketsAgg, bucket, bucketsPath, gapPolicy);\n+ Double bucketValue = BucketHelpers.resolveBucketValue(multiBucketsAgg, bucket, sublistedPath, gapPolicy);\n if (bucketValue != null && !Double.isNaN(bucketValue)) {\n collectBucketValue(bucket.getKeyAsString(), bucketValue);\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/BucketMetricsPipelineAggregator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,144 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.pipeline.bucketmetrics.avg;\n+\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.store.Directory;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.Aggregations;\n+import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n+import org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram;\n+import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.avg.InternalAvg;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+\n+\n+public class AvgBucketAggregatorTests extends AggregatorTestCase {\n+ private static final String DATE_FIELD = \"date\";\n+ private static final String VALUE_FIELD = \"value\";\n+\n+ private static final List<String> dataset = Arrays.asList(\n+ \"2010-03-12T01:07:45\",\n+ \"2010-04-27T03:43:34\",\n+ \"2012-05-18T04:11:00\",\n+ \"2013-05-29T05:11:31\",\n+ \"2013-10-31T08:24:05\",\n+ \"2015-02-13T13:09:32\",\n+ \"2015-06-24T13:47:43\",\n+ \"2015-11-13T16:14:34\",\n+ \"2016-03-04T17:09:50\",\n+ \"2017-12-12T22:55:46\");\n+\n+ /**\n+ * Test for issue #30608. Under the following circumstances:\n+ *\n+ * A. Multi-bucket agg in the first entry of our internal list\n+ * B. Regular agg as the immediate child of the multi-bucket in A\n+ * C. Regular agg with the same name as B at the top level, listed as the second entry in our internal list\n+ * D. Finally, a pipeline agg with the path down to B\n+ *\n+ * BucketMetrics reduction would throw a class cast exception due to bad subpathing. 
This test ensures\n+ * it is fixed.\n+ *\n+ * Note: we have this test inside of the `avg_bucket` package so that we can get access to the package-private\n+ * `doReduce()` needed for testing this\n+ */\n+ public void testSameAggNames() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ AvgAggregationBuilder avgBuilder = new AvgAggregationBuilder(\"foo\").field(VALUE_FIELD);\n+ DateHistogramAggregationBuilder histo = new DateHistogramAggregationBuilder(\"histo\")\n+ .dateHistogramInterval(DateHistogramInterval.YEAR)\n+ .field(DATE_FIELD)\n+ .subAggregation(new AvgAggregationBuilder(\"foo\").field(VALUE_FIELD));\n+\n+ AvgBucketPipelineAggregationBuilder avgBucketBuilder\n+ = new AvgBucketPipelineAggregationBuilder(\"the_avg_bucket\", \"histo>foo\");\n+\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ Document document = new Document();\n+ for (String date : dataset) {\n+ if (frequently()) {\n+ indexWriter.commit();\n+ }\n+\n+ document.add(new SortedNumericDocValuesField(DATE_FIELD, asLong(date)));\n+ document.add(new SortedNumericDocValuesField(VALUE_FIELD, randomInt()));\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ }\n+ }\n+\n+ InternalAvg avgResult;\n+ InternalDateHistogram histogramResult;\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ IndexSearcher indexSearcher = newSearcher(indexReader, true, true);\n+\n+ DateFieldMapper.Builder builder = new DateFieldMapper.Builder(\"histo\");\n+ DateFieldMapper.DateFieldType fieldType = builder.fieldType();\n+ fieldType.setHasDocValues(true);\n+ fieldType.setName(DATE_FIELD);\n+\n+ MappedFieldType valueFieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ valueFieldType.setName(VALUE_FIELD);\n+ valueFieldType.setHasDocValues(true);\n+\n+ avgResult = searchAndReduce(indexSearcher, query, avgBuilder, 10000, new MappedFieldType[]{fieldType, valueFieldType});\n+ histogramResult = searchAndReduce(indexSearcher, query, histo, 10000, new MappedFieldType[]{fieldType, valueFieldType});\n+ }\n+\n+ // Finally, reduce the pipeline agg\n+ PipelineAggregator avgBucketAgg = avgBucketBuilder.createInternal(Collections.emptyMap());\n+ List<Aggregation> reducedAggs = new ArrayList<>(2);\n+\n+ // Histo has to go first to exercise the bug\n+ reducedAggs.add(histogramResult);\n+ reducedAggs.add(avgResult);\n+ Aggregations aggregations = new Aggregations(reducedAggs);\n+ InternalAggregation pipelineResult = ((AvgBucketPipelineAggregator)avgBucketAgg).doReduce(aggregations, null);\n+ assertNotNull(pipelineResult);\n+ }\n+ }\n+\n+\n+ private static long asLong(String dateTime) {\n+ return DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.parser().parseDateTime(dateTime).getMillis();\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/pipeline/bucketmetrics/avg/AvgBucketAggregatorTests.java", "status": "added" } ] }
{ "body": "Hi,\r\n\r\nI did an explain request and get the following response:\r\n\r\n```\r\n \"_score\": 2,\r\n \"fields\": {\r\n \"title\": [\r\n \"Title\"\r\n ],\r\n \"id\": [\r\n \"vbce77e7t9pe\"\r\n ]\r\n },\r\n \"highlight\": {\r\n \"content.search\": [\r\n \"... some content ...\"\r\n ]\r\n },\r\n \"_explanation\": {\r\n \"value\": 196.23717,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {...\r\n },\r\n {\r\n \"value\": 4.737492,\r\n \"description\": \"product of:\",\r\n \"details\": [...\r\n ]\r\n }\r\n ]\r\n },\r\n```\r\nAs you can see the score value is 2 and the value in the explanation is 196.23717.\r\n\r\nWhy is this different?", "comments": [ { "body": "This is a bug, they should always be the same. Can you share your query?\r\n\r\ncc @elastic/es-search-aggs ", "created_at": "2018-02-19T15:24:09Z" }, { "body": "This is the query \r\n\r\n```\r\n{\r\n \"from\" : 0,\r\n \"size\" : 67,\r\n \"query\" : {\r\n \"function_score\" : {\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"bool\" : {\r\n \"should\" : [\r\n {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"constant_score\" : {\r\n \"filter\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"terms\" : {\r\n \"elements.aclgroups\" : [\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"DUMMY_GROUP_ID\"\r\n ],\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"term\" : {\r\n \"elements.hidden\" : {\r\n \"value\" : false,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"should\" : [\r\n {\r\n \"term\" : {\r\n \"elements.approval_status\" : {\r\n \"value\" : \"APPROVED\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must_not\" : [\r\n {\r\n \"exists\" : {\r\n \"field\" : \"elements.approval_status\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"range\" : {\r\n \"elements.valid_from\" : {\r\n \"from\" : null,\r\n \"to\" : \"now\",\r\n \"include_lower\" : true,\r\n \"include_upper\" : true,\r\n \"time_zone\" : \"+01:00\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n {\r\n \"range\" : {\r\n \"elements.valid_to\" : {\r\n \"from\" : \"now\",\r\n \"to\" : null,\r\n \"include_lower\" : true,\r\n \"include_upper\" : true,\r\n \"time_zone\" : \"+01:00\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"multi_match\" : {\r\n \"query\" : \"<query>\",\r\n \"fields\" : [\r\n \"elements.content.search^1.0\",\r\n \"elements.tags^1.0\",\r\n \"elements.title.search^1.0\"\r\n ],\r\n \"type\" : \"best_fields\",\r\n \"operator\" : \"AND\",\r\n \"slop\" : 0,\r\n \"prefix_length\" : 0,\r\n \"max_expansions\" : 50,\r\n \"lenient\" : false,\r\n \"cutoff_frequency\" : 0.0,\r\n \"zero_terms_query\" : \"NONE\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n 
],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"path\" : \"elements\",\r\n \"ignore_unmapped\" : true,\r\n \"score_mode\" : \"none\",\r\n \"boost\" : 1.0,\r\n \"inner_hits\" : {\r\n \"name\" : \"elements\",\r\n \"ignore_unmapped\" : true,\r\n \"from\" : 0,\r\n \"size\" : 3,\r\n \"version\" : false,\r\n \"explain\" : false,\r\n \"track_scores\" : false,\r\n \"highlight\" : {\r\n \"pre_tags\" : [\r\n \"<span class=\\\"highlighted\\\">\"\r\n ],\r\n \"post_tags\" : [\r\n \"</span>\"\r\n ],\r\n \"fragment_size\" : 80,\r\n \"number_of_fragments\" : 3,\r\n \"type\" : \"fvh\",\r\n \"fields\" : {\r\n \"elements.content\" : {\r\n \"type\" : \"fvh\",\r\n \"matched_fields\" : [\r\n \"elements.content\",\r\n \"elements.content.search\"\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"constant_score\" : {\r\n \"filter\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"bool\" : {\r\n \"should\" : [\r\n {\r\n \"term\" : {\r\n \"aclusers\" : {\r\n \"value\" : \"<some id>\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"terms\" : {\r\n \"resource\" : [\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\"\r\n ],\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"terms\" : {\r\n \"aclwritegroups\" : [\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"<some id>\",\r\n \"DUMMY_GROUP_ID\"\r\n ],\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"nested\" : {\r\n \"query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"multi_match\" : {\r\n \"query\" : \"<query>\",\r\n \"fields\" : [\r\n \"elements.content.search^1.0\",\r\n \"elements.tags^1.0\",\r\n \"elements.title.search^1.0\"\r\n ],\r\n \"type\" : \"best_fields\",\r\n \"operator\" : \"AND\",\r\n \"slop\" : 0,\r\n \"prefix_length\" : 0,\r\n \"max_expansions\" : 50,\r\n \"lenient\" : false,\r\n \"cutoff_frequency\" : 0.0,\r\n \"zero_terms_query\" : \"NONE\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"path\" : \"elements\",\r\n \"ignore_unmapped\" : true,\r\n \"score_mode\" : \"none\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n 
\"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"bool\" : {\r\n \"must_not\" : [\r\n {\r\n \"terms\" : {\r\n \"resource\" : [\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\",\r\n \"<some resource>\"\r\n ],\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"functions\" : [\r\n {\r\n \"filter\" : {\r\n \"match_all\" : {\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"field_value_factor\" : {\r\n \"field\" : \"boost_factor\",\r\n \"factor\" : 1.0,\r\n \"modifier\" : \"none\"\r\n }\r\n }\r\n ],\r\n \"score_mode\" : \"multiply\",\r\n \"max_boost\" : 3.4028235E38,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"explain\" : false,\r\n \"stored_fields\" : [\r\n \"id\",\r\n \"title\"\r\n ],\r\n \"highlight\" : {\r\n \"pre_tags\" : [\r\n \"<span class=\\\"highlighted\\\">\"\r\n ],\r\n \"post_tags\" : [\r\n \"</span>\"\r\n ],\r\n \"type\" : \"fvh\",\r\n \"fields\" : {\r\n \"content.search\" : {\r\n \"fragment_size\" : 80,\r\n \"number_of_fragments\" : 3,\r\n \"type\" : \"fvh\",\r\n \"highlight_query\" : {\r\n \"bool\" : {\r\n \"must\" : [\r\n {\r\n \"multi_match\" : {\r\n \"query\" : \"<query>\",\r\n \"fields\" : [\r\n \"content.search^1.0\"\r\n ],\r\n \"type\" : \"best_fields\",\r\n \"operator\" : \"AND\",\r\n \"slop\" : 0,\r\n \"prefix_length\" : 0,\r\n \"max_expansions\" : 50,\r\n \"lenient\" : false,\r\n \"cutoff_frequency\" : 0.0,\r\n \"zero_terms_query\" : \"NONE\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n ],\r\n \"disable_coord\" : false,\r\n \"adjust_pure_negative\" : true,\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"matched_fields\" : [\r\n \"content.search\"\r\n ]\r\n }\r\n }\r\n },\r\n \"suggest\" : {\r\n \"autocomplete\" : {\r\n \"text\" : \"<query>\",\r\n \"term\" : {\r\n \"field\" : \"autocomplete\",\r\n \"suggest_mode\" : \"MISSING\",\r\n \"accuracy\" : 0.5,\r\n \"sort\" : \"SCORE\",\r\n \"string_distance\" : \"INTERNAL\",\r\n \"max_edits\" : 2,\r\n \"max_inspections\" : 5,\r\n \"max_term_freq\" : 0.01,\r\n \"prefix_length\" : 1,\r\n \"min_word_length\" : 4,\r\n \"min_doc_freq\" : 0.0\r\n }\r\n }\r\n },\r\n \"rescore\" : [\r\n {\r\n \"query\" : {\r\n \"rescore_query\" : {\r\n \"multi_match\" : {\r\n \"query\" : \"<query>\",\r\n \"fields\" : [\r\n \"elements.content.stem^1.0\",\r\n \"elements.tags^1.0\",\r\n \"elements.title.stem^1.0\"\r\n ],\r\n \"type\" : \"phrase\",\r\n \"operator\" : \"OR\",\r\n \"slop\" : 0,\r\n \"prefix_length\" : 0,\r\n \"max_expansions\" : 50,\r\n \"lenient\" : false,\r\n \"zero_terms_query\" : \"NONE\",\r\n \"boost\" : 1.0\r\n }\r\n },\r\n \"query_weight\" : 1.0,\r\n \"rescore_query_weight\" : 80.0,\r\n \"score_mode\" : \"total\"\r\n }\r\n },\r\n {\r\n \"query\" : {\r\n \"rescore_query\" : {\r\n \"match\" : {\r\n \"elements.content.stem\" : {\r\n \"query\" : \"<query>\",\r\n \"operator\" : \"AND\",\r\n \"prefix_length\" : 0,\r\n \"max_expansions\" : 50,\r\n \"fuzzy_transpositions\" : true,\r\n \"lenient\" : false,\r\n \"zero_terms_query\" : \"NONE\",\r\n \"boost\" : 1.0\r\n }\r\n }\r\n },\r\n \"query_weight\" : 1.0,\r\n 
\"rescore_query_weight\" : 2.0,\r\n \"score_mode\" : \"total\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nWhich value is used for sorting the search results? The value from _score or the value from _explanation? Is the result sorted correctly?\r\n", "created_at": "2018-02-19T17:14:25Z" }, { "body": "> Which value is used for sorting the search results? The value from _score or the value from _explanation? Is the result sorted correctly?\r\n\r\nResults are always sorted based on `_score`, `_explanation` is just extra debugging information. I suspect results are still sorted correctly, but it is hard to say for sure without knowing what the exact bug is.", "created_at": "2018-02-19T17:51:09Z" }, { "body": "This is the complete explanation part:\r\n\r\n```\r\n\"_explanation\": {\r\n \"value\": 196.23717,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 191.49968,\r\n \"description\": \"product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 191.49968,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"function score, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"Score based on 1 child docs in range from 2714 to 2714, best match:\",\r\n \"details\": [\r\n {\r\n \"value\": 7.6046343,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 7.6046343,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"ConstantScore(+ConstantScore(elements.aclgroups:<id>\r\n elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:<id> elements.aclgroups:DUMMY_GROUP_ID elements.aclgroups:ubguhplma88o) +elements.hidden:F +((elements.approval_status:APPROVED (-ConstantScore(_field_names:elements.approval_status) +*:*))~1) +(+elements.valid_from:[-9223372036854775808 TO 9223372036854775807] +elements.valid_to:[1519050328256 TO 9223372036854775807])), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"weight(elements.content.search:term in 16) [PerFieldSimilarity], result of:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"score(doc=16,freq=1.0 = termFreq=1.0\\n), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 3.9169664,\r\n \"description\": \"idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 56,\r\n \"description\": \"docFreq\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 2838,\r\n \"description\": \"docCount\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1.6861606,\r\n \"description\": \"tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n 
\"description\": \"termFreq=1.0\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1.2,\r\n \"description\": \"parameter k1\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 0.75,\r\n \"description\": \"parameter b\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1944.5743,\r\n \"description\": \"avgFieldLength\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 10.24,\r\n \"description\": \"fieldLength\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:__elements, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"ConstantScore((aclusers:4374ef463c8cb7f0013c967ea7cc4701 (+resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> resource:<resource> +ConstantScore(aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:<id> aclwritegroups:DUMMY_GROUP_ID aclwritegroups:<id>)))~1), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"Score based on 1 child docs in range from 2714 to 2714, best match:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"weight(elements.content.search:term in 16) [PerFieldSimilarity], result of:\",\r\n \"details\": [\r\n {\r\n \"value\": 6.6046343,\r\n \"description\": \"score(doc=16,freq=1.0 = termFreq=1.0\\n), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 3.9169664,\r\n \"description\": \"idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 56,\r\n \"description\": \"docFreq\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 2838,\r\n \"description\": \"docCount\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1.6861606,\r\n \"description\": \"tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"termFreq=1.0\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1.2,\r\n \"description\": \"parameter k1\",\r\n \"details\": []\r\n 
},\r\n {\r\n \"value\": 0.75,\r\n \"description\": \"parameter b\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1944.5743,\r\n \"description\": \"avgFieldLength\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 10.24,\r\n \"description\": \"fieldLength\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"_type:__elements, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"sum of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"*:*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"min of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"field value function: none(doc['boost_factor'].value * factor=1.0)\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 3.4028235e+38,\r\n \"description\": \"maxBoost\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0,\r\n \"description\": \"match on required clause, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 0,\r\n \"description\": \"# clause\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"#*:* -_type:__*, product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 1,\r\n \"description\": \"boost\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"queryNorm\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"primaryWeight\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 189.49968,\r\n \"description\": \"product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2.368746,\r\n \"description\": \"max of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2.368746,\r\n \"description\": \"weight(elements.content.stem:term in 17) [PerFieldSimilarity], result of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2.368746,\r\n \"description\": \"score(doc=17,freq=2.0 = termFreq=2.0\\n), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 3.9530065,\r\n \"description\": \"idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 54,\r\n \"description\": \"docFreq\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 2838,\r\n \"description\": \"docCount\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0.5992265,\r\n \"description\": \"tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"termFreq=2.0\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1.2,\r\n \"description\": \"parameter k1\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 0.75,\r\n \"description\": \"parameter b\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 238.7012,\r\n \"description\": \"avgFieldLength\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 
1337.4694,\r\n \"description\": \"fieldLength\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 80,\r\n \"description\": \"secondaryWeight\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 1,\r\n \"description\": \"primaryWeight\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 4.737492,\r\n \"description\": \"product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2.368746,\r\n \"description\": \"weight(elements.content.stem:term in 17) [PerFieldSimilarity], result of:\",\r\n \"details\": [\r\n {\r\n \"value\": 2.368746,\r\n \"description\": \"score(doc=17,freq=2.0 = termFreq=2.0\\n), product of:\",\r\n \"details\": [\r\n {\r\n \"value\": 3.9530065,\r\n \"description\": \"idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 54,\r\n \"description\": \"docFreq\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 2838,\r\n \"description\": \"docCount\",\r\n \"details\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 0.5992265,\r\n \"description\": \"tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:\",\r\n \"details\": [\r\n {\r\n \"value\": 2,\r\n \"description\": \"termFreq=2.0\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1.2,\r\n \"description\": \"parameter k1\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 0.75,\r\n \"description\": \"parameter b\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 238.7012,\r\n \"description\": \"avgFieldLength\",\r\n \"details\": []\r\n },\r\n {\r\n \"value\": 1337.4694,\r\n \"description\": \"fieldLength\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"value\": 2,\r\n \"description\": \"secondaryWeight\",\r\n \"details\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n```", "created_at": "2018-02-19T18:42:43Z" }, { "body": "I think it's just a known issue with `rescorer`. If the `window_size` (which default to `10`) of the `rescorer` is smaller than the requested size (`from`+`size`, in the example query `0+67`) then all documents that appear after the `window_size` will skip the rescore and they will just apply the `query_weight` to the original score. Though we don't check if the document was part of the window when `explain` is called which is why the score within the explanation is not correct. It shows the score as if it were rescored even though the document might not have been rescored.\r\nYou should have the same score if you change the `window_size` to match the `size+from` of you query (`67` ;) ).\r\nI'll leave this issue open for now because there is a bug in the rescorer `explain` output but the fix is not trivial and I am not sure that it is worth fixing.", "created_at": "2018-02-19T18:45:48Z" }, { "body": "I think we should fix it: scores are explanations should always match.", "created_at": "2018-02-21T15:28:49Z" }, { "body": "Fair enough, I restored the adoptme tag. Also note that the score will not always match the explanations since we can have more than one rescorer so only the explanation of the last rescorer should match the final score of the hit. ", "created_at": "2018-02-21T20:15:26Z" }, { "body": "To implement the correct `explain` in `QueryRescorer` `explain` function, we need to check if a document was within the `window_size` - `N`. 
This is difficult to check, as documents can be resorted, and the top N documents are not necessarily the ones for which rescoring was applied (in case for example `rescore_query_weight` is negative, they may be at the bottom). We can potentially modify `RescoreContext` to save all docIds for which rescore was applied, and use this info during `ExplainFetchSubPhase`, but I am not sure if we want to go this route.\r\n\r\nThe simpler solution would be to remove `window_size` parameter from `rescore` API, and apply rescore for all docs between [from, from+size).\r\n\r\nWhat do you think @jimczi @jpountz ?\r\n\r\n", "created_at": "2018-04-30T21:27:32Z" }, { "body": "> The simpler solution would be to remove window_size parameter from rescore API, and apply rescore for all docs between [from, from+size).\r\n\r\nThe `window_size` is important because it ensures that pagination queries always rescore the same hits. If we remove it, the pagination would rescore more documents on each page and this would yield different results (or duplicates) on each page.\r\n\r\n> We can potentially modify RescoreContext to save all docIds for which rescore was applied, and use this info during ExplainFetchSubPhase, but I am not sure if we want to go this route.\r\n\r\nThe window_size should be small (100-1000 or something) so I think that's ok and I don't see how we could do it differently.\r\n", "created_at": "2018-05-03T08:40:37Z" } ], "number": 28725, "title": "Why is score different from value in explanation?" }
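A quick arithmetic sketch of the behaviour jimczi describes above: with the default `total` score mode, a hit inside the rescore window combines the weighted first-pass and rescore scores, while a hit beyond `window_size` only keeps the weighted first-pass score. The class and method names below are made up for illustration; the weights come from the YAML test added by the fix further down (`query_weight: 5`, `rescore_query_weight: 10`, both queries `match_all`, so every raw score is 1.0).

```java
// Standalone sketch, not Elasticsearch code.
public class RescoreWindowSketch {

    // Hit inside the rescore window (score_mode = total).
    static float insideWindow(float firstPassScore, float rescoreScore,
                              float queryWeight, float rescoreQueryWeight) {
        return queryWeight * firstPassScore + rescoreQueryWeight * rescoreScore;
    }

    // Hit beyond window_size: the rescore query is never run for it.
    static float outsideWindow(float firstPassScore, float queryWeight) {
        return queryWeight * firstPassScore;
    }

    public static void main(String[] args) {
        System.out.println(insideWindow(1.0f, 1.0f, 5f, 10f)); // 15.0 -> hits 0 and 1
        System.out.println(outsideWindow(1.0f, 5f));           // 5.0  -> hit 2, outside window_size=2
    }
}
```

Before the fix, `explain` built the combined explanation for every hit, so a hit that actually took the `outsideWindow` path could report the `insideWindow` value in `_explanation`, which is exactly the mismatch reported in this issue.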
{ "body": "Currently in a rescore request if window_size is smaller than\r\nthe top N documents returned (N=size), explanation of scores could be incorrect\r\nfor documents that were a part of topN and not part of rescoring.\r\nThis PR corrects this by saving in RescoreContext docIDs of documents\r\nfor which rescoring was applied, and adding rescoring explanation\r\nonly for these docIDs.\r\n\r\nCloses #28725", "number": 30629, "review_comments": [ { "body": "We always build the rescore explanation even when `rescoreContext.isRescored(topLevelDocId)` is false. We could avoid this by building the primary explanation first and return it directly if `rescoreContext.isRescored(topLevelDocId)` is false ?", "created_at": "2018-05-16T10:05:43Z" }, { "body": "s/recrored/rescored/", "created_at": "2018-05-16T14:47:34Z" } ], "title": "Improve explanation in rescore" }
{ "commits": [ { "message": "Improve explanation in rescore\n\nCurrently in a rescore request if window_size is smaller than\nthe top N documents returned (N=size), explanation of scores could be incorrect\nfor documents that were a part of topN and not part of rescoring.\nThis PR corrects this, but saving in RescoreContext docIDs of documents\nfor which rescoring was applied, and adding rescoring explanation\nonly for these docIDs.\n\nCloses #28725" }, { "message": "Don't build rescore explanation if not needed" }, { "message": "Merge remote-tracking branch 'upstream/master' into improve-explanation-in-rescore" }, { "message": "Merge remote-tracking branch 'upstream/master' into improve-explanation-in-rescore" }, { "message": "Merge remote-tracking branch 'upstream/master' into improve-explanation-in-rescore" } ], "files": [ { "diff": "@@ -0,0 +1,39 @@\n+---\n+\"Score should match explanation in rescore\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: Explanation for rescoring was corrected after these versions\n+ - do:\n+ bulk:\n+ refresh: true\n+ body:\n+ - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"_doc\", \"_id\": \"1\"}}'\n+ - '{\"f1\": \"1\"}'\n+ - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"_doc\", \"_id\": \"2\"}}'\n+ - '{\"f1\": \"2\"}'\n+ - '{\"index\": {\"_index\": \"test_index\", \"_type\": \"_doc\", \"_id\": \"3\"}}'\n+ - '{\"f1\": \"3\"}'\n+\n+ - do:\n+ search:\n+ index: test_index\n+ body:\n+ explain: true\n+ query:\n+ match_all: {}\n+ rescore:\n+ window_size: 2\n+ query:\n+ rescore_query:\n+ match_all: {}\n+ query_weight: 5\n+ rescore_query_weight: 10\n+\n+ - match: { hits.hits.0._score: 15 }\n+ - match: { hits.hits.0._explanation.value: 15 }\n+\n+ - match: { hits.hits.1._score: 15 }\n+ - match: { hits.hits.1._explanation.value: 15 }\n+\n+ - match: { hits.hits.2._score: 5 }\n+ - match: { hits.hits.2._explanation.value: 5 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/210_rescore_explain.yml", "status": "added" }, { "diff": "@@ -30,6 +30,8 @@\n import java.util.Arrays;\n import java.util.Comparator;\n import java.util.Set;\n+import java.util.Collections;\n+import static java.util.stream.Collectors.toSet;\n \n public final class QueryRescorer implements Rescorer {\n \n@@ -61,6 +63,11 @@ protected float combine(float firstPassScore, boolean secondPassMatches, float s\n // First take top slice of incoming docs, to be rescored:\n TopDocs topNFirstPass = topN(topDocs, rescoreContext.getWindowSize());\n \n+ // Save doc IDs for which rescoring was applied to be used in score explanation\n+ Set<Integer> topNDocIDs = Collections.unmodifiableSet(\n+ Arrays.stream(topNFirstPass.scoreDocs).map(scoreDoc -> scoreDoc.doc).collect(toSet()));\n+ rescoreContext.setRescoredDocs(topNDocIDs);\n+\n // Rescore them:\n TopDocs rescored = rescorer.rescore(searcher, topNFirstPass, rescoreContext.getWindowSize());\n \n@@ -71,16 +78,12 @@ protected float combine(float firstPassScore, boolean secondPassMatches, float s\n @Override\n public Explanation explain(int topLevelDocId, IndexSearcher searcher, RescoreContext rescoreContext,\n Explanation sourceExplanation) throws IOException {\n- QueryRescoreContext rescore = (QueryRescoreContext) rescoreContext;\n if (sourceExplanation == null) {\n // this should not happen but just in case\n return Explanation.noMatch(\"nothing matched\");\n }\n- // TODO: this isn't right? I.e., we are incorrectly pretending all first pass hits were rescored? 
If the requested docID was\n- // beyond the top rescoreContext.window() in the first pass hits, we don't rescore it now?\n- Explanation rescoreExplain = searcher.explain(rescore.query(), topLevelDocId);\n+ QueryRescoreContext rescore = (QueryRescoreContext) rescoreContext;\n float primaryWeight = rescore.queryWeight();\n-\n Explanation prim;\n if (sourceExplanation.isMatch()) {\n prim = Explanation.match(\n@@ -89,23 +92,24 @@ public Explanation explain(int topLevelDocId, IndexSearcher searcher, RescoreCon\n } else {\n prim = Explanation.noMatch(\"First pass did not match\", sourceExplanation);\n }\n-\n- // NOTE: we don't use Lucene's Rescorer.explain because we want to insert our own description with which ScoreMode was used. Maybe\n- // we should add QueryRescorer.explainCombine to Lucene?\n- if (rescoreExplain != null && rescoreExplain.isMatch()) {\n- float secondaryWeight = rescore.rescoreQueryWeight();\n- Explanation sec = Explanation.match(\n+ if (rescoreContext.isRescored(topLevelDocId)){\n+ Explanation rescoreExplain = searcher.explain(rescore.query(), topLevelDocId);\n+ // NOTE: we don't use Lucene's Rescorer.explain because we want to insert our own description with which ScoreMode was used.\n+ // Maybe we should add QueryRescorer.explainCombine to Lucene?\n+ if (rescoreExplain != null && rescoreExplain.isMatch()) {\n+ float secondaryWeight = rescore.rescoreQueryWeight();\n+ Explanation sec = Explanation.match(\n rescoreExplain.getValue() * secondaryWeight,\n \"product of:\",\n rescoreExplain, Explanation.match(secondaryWeight, \"secondaryWeight\"));\n- QueryRescoreMode scoreMode = rescore.scoreMode();\n- return Explanation.match(\n+ QueryRescoreMode scoreMode = rescore.scoreMode();\n+ return Explanation.match(\n scoreMode.combine(prim.getValue(), sec.getValue()),\n scoreMode + \" of:\",\n prim, sec);\n- } else {\n- return prim;\n+ }\n }\n+ return prim;\n }\n \n private static final Comparator<ScoreDoc> SCORE_DOC_COMPARATOR = new Comparator<ScoreDoc>() {", "filename": "server/src/main/java/org/elasticsearch/search/rescore/QueryRescorer.java", "status": "modified" }, { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.search.rescore;\n \n+import java.util.Set;\n+\n /**\n * Context available to the rescore while it is running. Rescore\n * implementations should extend this with any additional resources that\n@@ -27,6 +29,7 @@\n public class RescoreContext {\n private final int windowSize;\n private final Rescorer rescorer;\n+ private Set<Integer> recroredDocs; //doc Ids for which rescoring was applied\n \n /**\n * Build the context.\n@@ -50,4 +53,12 @@ public Rescorer rescorer() {\n public int getWindowSize() {\n return windowSize;\n }\n+\n+ public void setRescoredDocs(Set<Integer> docIds) {\n+ recroredDocs = docIds;\n+ }\n+\n+ public boolean isRescored(int docId) {\n+ return recroredDocs.contains(docId);\n+ }\n }", "filename": "server/src/main/java/org/elasticsearch/search/rescore/RescoreContext.java", "status": "modified" } ] }
{ "body": "Not sure if this is a bug or a \"work as designed\", but I see a lot of students run into issues with this in training, so I figured more folks may have problems with this. We may want to make this a bit more user friendly.\r\n\r\nWhen configuring `cluster.routing.allocation.awareness.attributes` via `_cluster/settings`, the setting does not accept an array of values. If you do provide an array of values, the cluster will end up in a red status when creating a new index.\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\n\r\nVersion: 6.2.4, Build: ccec39f/2018-04-12T20:37:28.497551Z, JVM: 1.8.0_77\r\n\r\n**Plugins installed**: []\r\n\r\n_none_\r\n\r\n**JVM version** (`java -version`):\r\n\r\njava version \"1.8.0_77\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_77-b03)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\nDarwin MacBook-1265.local 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nBecause the setting is named \"attributes\", folks expect to be able to provide an array of values for this setting (for example `[\"bar\",\"baz\"]`). However, if you do so, the cluster ends up in a red status when creating a new index. Even when configuring multiple values, these values have to be provided comma-separated as a single string (for example `\"bar,baz\"`).\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Start a one-node cluster and configure the node with: `node.attr.foo: bar`\r\n 2. Get the cluster's health and check the status is green or yellow: `GET _cluster/health`\r\n 3. Apply the following cluster setting:\r\n```\r\nPUT _cluster/settings\r\n{\r\n \"persistent\": {\r\n \"cluster.routing.allocation.awareness.attributes\": [\r\n \"foo\"\r\n ]\r\n }\r\n}\r\n```\r\n 4. Create a new index: `PUT my_index`\r\n 5. The cluster status is now red: `GET _cluster/health`\r\n 6. Use `GET _cluster/allocation/explain` to find out why: \r\n```\r\n\"explanation\": \"node does not contain the awareness attribute [[foo]]; required attributes cluster setting [cluster.routing.allocation.awareness.attributes=[foo]]\"\r\n```\r\n 7. Change the cluster setting and provide a string value instead of an array:\r\n```\r\nPUT _cluster/settings\r\n{\r\n \"persistent\": {\r\n \"cluster.routing.allocation.awareness.attributes\": \"foo\"\r\n }\r\n}\r\n```\r\n 8. 
The cluster is now yellow: `GET _cluster/health`\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n```\r\n[2018-05-15T15:07:10,668][INFO ][o.e.c.m.MetaDataCreateIndexService] [rGKXCKp] [my_index] creating index, cause [api], templates [], shards [5]/[1], mappings []\r\n[2018-05-15T15:07:10,672][INFO ][o.e.c.r.a.AllocationService] [rGKXCKp] Cluster health status changed from [YELLOW] to [RED] (reason: [index [my_index] created]).\r\n[2018-05-15T15:08:21,596][DEBUG][o.e.a.a.c.a.TransportClusterAllocationExplainAction] [rGKXCKp] explaining the allocation for [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false], found shard [[allblogs_us-en][2], node[null], [R], recovery_source[peer recovery], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2018-05-15T13:05:26.878Z], delayed=false, allocation_status[no_attempt]]]\r\n[2018-05-15T15:09:51,178][INFO ][o.e.c.s.ClusterSettings ] [rGKXCKp] updating [cluster.routing.allocation.awareness.attributes] from [[foo]] to [foo]\r\n[2018-05-15T15:09:51,178][INFO ][o.e.c.s.ClusterSettings ] [rGKXCKp] updating [cluster.routing.allocation.awareness.attributes] from [[foo]] to [foo]\r\n[2018-05-15T15:09:51,246][INFO ][o.e.c.r.a.AllocationService] [rGKXCKp] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[my_index][0]] ...]).\r\n```", "comments": [ { "body": "indeed, I'll fix this", "created_at": "2018-05-15T16:34:53Z" }, { "body": "Pinging @elastic/es-distributed", "created_at": "2018-05-15T16:35:13Z" } ], "number": 30617, "title": "cluster.routing.allocation.awareness.attributes setting does not accept an array of values" }
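A sketch of why the array form misbehaved before the fix below: the setting was a plain string setting whose value was split on commas, so an array supplied via the REST layer ended up flattened into a single bracketed string that no node attribute could ever match. `Strings.tokenizeToStringArray` is the helper the old setting definition used; the `"[foo]"` input is an assumption based on the `[[foo]]` shown in the allocation explanation and log lines above.

```java
import org.elasticsearch.common.Strings;

// Illustrative sketch of the pre-fix parsing path, not actual Elasticsearch code.
public class AwarenessParsingSketch {
    public static void main(String[] args) {
        String fromArraySyntax = "[foo]";     // array value flattened to a single string (assumed rendering)
        String fromStringSyntax = "foo,bar";  // documented comma-separated form

        String[] broken = Strings.tokenizeToStringArray(fromArraySyntax, ",");
        String[] expected = Strings.tokenizeToStringArray(fromStringSyntax, ",");

        // broken   -> ["[foo]"] : no node carries an attribute literally named "[foo]",
        //             so the awareness decider can never allocate the new shards.
        // expected -> ["foo", "bar"]
        System.out.println(String.join("|", broken) + " vs " + String.join("|", expected));
    }
}
```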
{ "body": "Allows the setting to be specified using proper array syntax, for example:\r\n\r\n```\r\n\"cluster.routing.allocation.awareness.attributes\": [ \"foo\", \"bar\", \"baz\" ]\r\n```\r\n\r\nCloses #30617", "number": 30626, "review_comments": [], "title": "Move allocation awareness attributes to list setting" }
{ "commits": [ { "message": "Move awareness to list setting" } ], "files": [ { "diff": "@@ -544,20 +544,20 @@ public Set<String> getAllAllocationIds() {\n \n static class AttributesKey {\n \n- final String[] attributes;\n+ final List<String> attributes;\n \n- AttributesKey(String[] attributes) {\n+ AttributesKey(List<String> attributes) {\n this.attributes = attributes;\n }\n \n @Override\n public int hashCode() {\n- return Arrays.hashCode(attributes);\n+ return attributes.hashCode();\n }\n \n @Override\n public boolean equals(Object obj) {\n- return obj instanceof AttributesKey && Arrays.equals(attributes, ((AttributesKey) obj).attributes);\n+ return obj instanceof AttributesKey && attributes.equals(((AttributesKey) obj).attributes);\n }\n }\n \n@@ -621,11 +621,11 @@ private static List<ShardRouting> collectAttributeShards(AttributesKey key, Disc\n return Collections.unmodifiableList(to);\n }\n \n- public ShardIterator preferAttributesActiveInitializingShardsIt(String[] attributes, DiscoveryNodes nodes) {\n+ public ShardIterator preferAttributesActiveInitializingShardsIt(List<String> attributes, DiscoveryNodes nodes) {\n return preferAttributesActiveInitializingShardsIt(attributes, nodes, shuffler.nextSeed());\n }\n \n- public ShardIterator preferAttributesActiveInitializingShardsIt(String[] attributes, DiscoveryNodes nodes, int seed) {\n+ public ShardIterator preferAttributesActiveInitializingShardsIt(List<String> attributes, DiscoveryNodes nodes, int seed) {\n AttributesKey key = new AttributesKey(attributes);\n AttributesRoutings activeRoutings = getActiveAttribute(key, nodes);\n AttributesRoutings initializingRoutings = getInitializingAttribute(key, nodes);", "filename": "server/src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java", "status": "modified" }, { "diff": "@@ -39,6 +39,7 @@\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashSet;\n+import java.util.List;\n import java.util.Map;\n import java.util.Set;\n import java.util.stream.Collectors;\n@@ -49,7 +50,7 @@ public class OperationRouting extends AbstractComponent {\n Setting.boolSetting(\"cluster.routing.use_adaptive_replica_selection\", true,\n Setting.Property.Dynamic, Setting.Property.NodeScope);\n \n- private String[] awarenessAttributes;\n+ private List<String> awarenessAttributes;\n private boolean useAdaptiveReplicaSelection;\n \n public OperationRouting(Settings settings, ClusterSettings clusterSettings) {\n@@ -65,7 +66,7 @@ void setUseAdaptiveReplicaSelection(boolean useAdaptiveReplicaSelection) {\n this.useAdaptiveReplicaSelection = useAdaptiveReplicaSelection;\n }\n \n- private void setAwarenessAttributes(String[] awarenessAttributes) {\n+ private void setAwarenessAttributes(List<String> awarenessAttributes) {\n this.awarenessAttributes = awarenessAttributes;\n }\n \n@@ -139,7 +140,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n @Nullable ResponseCollectorService collectorService,\n @Nullable Map<String, Long> nodeCounts) {\n if (preference == null || preference.isEmpty()) {\n- if (awarenessAttributes.length == 0) {\n+ if (awarenessAttributes.isEmpty()) {\n if (useAdaptiveReplicaSelection) {\n return indexShard.activeInitializingShardsRankedIt(collectorService, nodeCounts);\n } else {\n@@ -174,7 +175,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n }\n // no more preference\n if (index == -1 || index == preference.length() - 1) {\n- if (awarenessAttributes.length == 0) {\n+ if 
(awarenessAttributes.isEmpty()) {\n if (useAdaptiveReplicaSelection) {\n return indexShard.activeInitializingShardsRankedIt(collectorService, nodeCounts);\n } else {\n@@ -218,7 +219,7 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index\n // shard ID into the hash of the user-supplied preference key.\n routingHash = 31 * routingHash + indexShard.shardId.hashCode();\n }\n- if (awarenessAttributes.length == 0) {\n+ if (awarenessAttributes.isEmpty()) {\n return indexShard.activeInitializingShardsIt(routingHash);\n } else {\n return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes, routingHash);", "filename": "server/src/main/java/org/elasticsearch/cluster/routing/OperationRouting.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n+import java.util.function.Function;\n \n import com.carrotsearch.hppc.ObjectIntHashMap;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -34,6 +35,8 @@\n import org.elasticsearch.common.settings.Setting.Property;\n import org.elasticsearch.common.settings.Settings;\n \n+import static java.util.Collections.emptyList;\n+\n /**\n * This {@link AllocationDecider} controls shard allocation based on\n * {@code awareness} key-value pairs defined in the node configuration.\n@@ -78,13 +81,13 @@ public class AwarenessAllocationDecider extends AllocationDecider {\n \n public static final String NAME = \"awareness\";\n \n- public static final Setting<String[]> CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING =\n- new Setting<>(\"cluster.routing.allocation.awareness.attributes\", \"\", s -> Strings.tokenizeToStringArray(s, \",\"), Property.Dynamic,\n+ public static final Setting<List<String>> CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING =\n+ Setting.listSetting(\"cluster.routing.allocation.awareness.attributes\", emptyList(), Function.identity(), Property.Dynamic,\n Property.NodeScope);\n public static final Setting<Settings> CLUSTER_ROUTING_ALLOCATION_AWARENESS_FORCE_GROUP_SETTING =\n Setting.groupSetting(\"cluster.routing.allocation.awareness.force.\", Property.Dynamic, Property.NodeScope);\n \n- private volatile String[] awarenessAttributes;\n+ private volatile List<String> awarenessAttributes;\n \n private volatile Map<String, List<String>> forcedAwarenessAttributes;\n \n@@ -109,7 +112,7 @@ private void setForcedAwarenessAttributes(Settings forceSettings) {\n this.forcedAwarenessAttributes = forcedAwarenessAttributes;\n }\n \n- private void setAwarenessAttributes(String[] awarenessAttributes) {\n+ private void setAwarenessAttributes(List<String> awarenessAttributes) {\n this.awarenessAttributes = awarenessAttributes;\n }\n \n@@ -124,7 +127,7 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl\n }\n \n private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation, boolean moveToNode) {\n- if (awarenessAttributes.length == 0) {\n+ if (awarenessAttributes.isEmpty()) {\n return allocation.decision(Decision.YES, NAME,\n \"allocation awareness is not enabled, set cluster setting [%s] to enable it\",\n CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey());\n@@ -138,7 +141,7 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout\n return allocation.decision(Decision.NO, NAME,\n \"node does not contain the awareness attribute [%s]; required attributes cluster setting [%s=%s]\",\n 
awarenessAttribute, CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.getKey(),\n- allocation.debugDecision() ? Strings.arrayToCommaDelimitedString(awarenessAttributes) : null);\n+ allocation.debugDecision() ? Strings.collectionToCommaDelimitedString(awarenessAttributes) : null);\n }\n \n // build attr_value -> nodes map", "filename": "server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java", "status": "modified" }, { "diff": "@@ -24,11 +24,13 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.List;\n \n public class IndexShardRoutingTableTests extends ESTestCase {\n public void testEqualsAttributesKey() {\n- String[] attr1 = {\"a\"};\n- String[] attr2 = {\"b\"};\n+ List<String> attr1 = Arrays.asList(\"a\");\n+ List<String> attr2 = Arrays.asList(\"b\");\n IndexShardRoutingTable.AttributesKey attributesKey1 = new IndexShardRoutingTable.AttributesKey(attr1);\n IndexShardRoutingTable.AttributesKey attributesKey2 = new IndexShardRoutingTable.AttributesKey(attr1);\n IndexShardRoutingTable.AttributesKey attributesKey3 = new IndexShardRoutingTable.AttributesKey(attr2);", "filename": "server/src/test/java/org/elasticsearch/cluster/routing/IndexShardRoutingTableTests.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.shard.ShardId;\n \n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n import java.util.Iterator;\n@@ -50,7 +51,6 @@\n import static java.util.Collections.unmodifiableMap;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n import static org.hamcrest.Matchers.anyOf;\n-import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n@@ -224,11 +224,16 @@ public void testRandomRouting() {\n }\n \n public void testAttributePreferenceRouting() {\n- AllocationService strategy = createAllocationService(Settings.builder()\n- .put(\"cluster.routing.allocation.node_concurrent_recoveries\", 10)\n- .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING.getKey(), \"always\")\n- .put(\"cluster.routing.allocation.awareness.attributes\", \"rack_id,zone\")\n- .build());\n+ Settings.Builder settings = Settings.builder()\n+ .put(\"cluster.routing.allocation.node_concurrent_recoveries\", 10)\n+ .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE_SETTING.getKey(), \"always\");\n+ if (randomBoolean()) {\n+ settings.put(\"cluster.routing.allocation.awareness.attributes\", \" rack_id, zone \");\n+ } else {\n+ settings.putList(\"cluster.routing.allocation.awareness.attributes\", \"rack_id\", \"zone\");\n+ }\n+\n+ AllocationService strategy = createAllocationService(settings.build());\n \n MetaData metaData = MetaData.builder()\n .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(1))\n@@ -258,15 +263,15 @@ public void testAttributePreferenceRouting() {\n clusterState = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING));\n \n // after all are started, check routing iteration\n- ShardIterator shardIterator = clusterState.routingTable().index(\"test\").shard(0).preferAttributesActiveInitializingShardsIt(new 
String[]{\"rack_id\"}, clusterState.nodes());\n+ ShardIterator shardIterator = clusterState.routingTable().index(\"test\").shard(0).preferAttributesActiveInitializingShardsIt(Arrays.asList(\"rack_id\"), clusterState.nodes());\n ShardRouting shardRouting = shardIterator.nextOrNull();\n assertThat(shardRouting, notNullValue());\n assertThat(shardRouting.currentNodeId(), equalTo(\"node1\"));\n shardRouting = shardIterator.nextOrNull();\n assertThat(shardRouting, notNullValue());\n assertThat(shardRouting.currentNodeId(), equalTo(\"node2\"));\n \n- shardIterator = clusterState.routingTable().index(\"test\").shard(0).preferAttributesActiveInitializingShardsIt(new String[]{\"rack_id\"}, clusterState.nodes());\n+ shardIterator = clusterState.routingTable().index(\"test\").shard(0).preferAttributesActiveInitializingShardsIt(Arrays.asList(\"rack_id\"), clusterState.nodes());\n shardRouting = shardIterator.nextOrNull();\n assertThat(shardRouting, notNullValue());\n assertThat(shardRouting.currentNodeId(), equalTo(\"node1\"));", "filename": "server/src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java", "status": "modified" } ] }
{ "body": "*Original comment by @astefan:*\n\nExample:\r\n```\r\nPOST /_xpack/sql?format=txt\r\n{\r\n \"query\": \"SELECT author,COUNT(name) FROM library WHERE match(author,'dan') GROUP BY author HAVING COUNT(name)>0 ORDER BY COUNT(name) DESC\"\r\n}\r\n```\r\n\r\nResult:\r\n```\r\n author | COUNT(name) \r\n---------------+---------------\r\nDan Andrei |1 \r\nDan Bla |3 \r\nDan Simmons |2 \r\n```\r\n\r\nThe query being created it's using `composite` aggregation which doesn't actually allow for \"custom\" sorting of buckets, only by their values. Accepting an `ORDER BY COUNT` should, at least, reject the query as not being supported.", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-25T07:22:51Z" } ], "number": 29900, "title": "SQL: ORDER BY a function doesn't actually order nor complains about not being able to" }
{ "body": "Due to the way composite aggregation works, ordering in GROUP BY can be\r\napplied only through grouped columns which now the analyzer verifier\r\nenforces.\r\n\r\nFix #29900", "number": 30585, "review_comments": [], "title": "SQL: Verify GROUP BY ordering on grouped columns" }
{ "commits": [ { "message": "SQL: Verify GROUP BY ordering on grouped columns\n\nDue to the way composite aggregation works, ordering in GROUP BY can be\napplied only through grouped columns which now the analyzer verifier\nenforces.\n\nFix 29900" }, { "message": "Eliminate SCORE() from supported ORDER BYs" }, { "message": "Fix comparison between expressions and attributes" } ], "files": [ { "diff": "@@ -211,12 +211,13 @@ static Collection<Failure> verify(LogicalPlan plan) {\n \n /**\n * Check validity of Aggregate/GroupBy.\n- * This rule is needed for two reasons:\n+ * This rule is needed for multiple reasons:\n * 1. a user might specify an invalid aggregate (SELECT foo GROUP BY bar)\n * 2. the order/having might contain a non-grouped attribute. This is typically\n * caught by the Analyzer however if wrapped in a function (ABS()) it gets resolved\n * (because the expression gets resolved little by little without being pushed down,\n * without the Analyzer modifying anything.\n+ * 3. composite agg (used for GROUP BY) allows ordering only on the group keys\n */\n private static boolean checkGroupBy(LogicalPlan p, Set<Failure> localFailures,\n Map<String, Function> resolvedFunctions, Set<LogicalPlan> groupingFailures) {\n@@ -225,7 +226,7 @@ && checkGroupByOrder(p, localFailures, groupingFailures, resolvedFunctions)\n && checkGroupByHaving(p, localFailures, groupingFailures, resolvedFunctions);\n }\n \n- // check whether an orderBy failed\n+ // check whether an orderBy failed or if it occurs on a non-key\n private static boolean checkGroupByOrder(LogicalPlan p, Set<Failure> localFailures,\n Set<LogicalPlan> groupingFailures, Map<String, Function> functions) {\n if (p instanceof OrderBy) {\n@@ -234,7 +235,23 @@ private static boolean checkGroupByOrder(LogicalPlan p, Set<Failure> localFailur\n Aggregate a = (Aggregate) o.child();\n \n Map<Expression, Node<?>> missing = new LinkedHashMap<>();\n- o.order().forEach(oe -> oe.collectFirstChildren(c -> checkGroupMatch(c, oe, a.groupings(), missing, functions)));\n+ o.order().forEach(oe -> {\n+ Expression e = oe.child();\n+ // cannot order by aggregates (not supported by composite)\n+ if (Functions.isAggregate(e)) {\n+ missing.put(e, oe);\n+ return;\n+ }\n+\n+ // make sure to compare attributes directly\n+ if (Expressions.anyMatch(a.groupings(), \n+ g -> e.semanticEquals(e instanceof Attribute ? Expressions.attribute(g) : g))) {\n+ return;\n+ }\n+\n+ // nothing matched, cannot group by it\n+ missing.put(e, oe);\n+ });\n \n if (!missing.isEmpty()) {\n String plural = missing.size() > 1 ? 
\"s\" : StringUtils.EMPTY;", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/analysis/analyzer/Verifier.java", "status": "modified" }, { "diff": "@@ -111,7 +111,7 @@ public void testGroupByOrderByNonGrouped() {\n }\n \n public void testGroupByOrderByScalarOverNonGrouped() {\n- assertEquals(\"1:50: Cannot order by non-grouped column [date], expected [text]\",\n+ assertEquals(\"1:50: Cannot order by non-grouped column [YEAR(date [UTC])], expected [text]\",\n verify(\"SELECT MAX(int) FROM test GROUP BY text ORDER BY YEAR(date)\"));\n }\n \n@@ -144,4 +144,19 @@ public void testUnsupportedType() {\n assertEquals(\"1:8: Cannot use field [unsupported] type [ip_range] as is unsupported\",\n verify(\"SELECT unsupported FROM test\"));\n }\n-}\n+\n+ public void testGroupByOrderByNonKey() {\n+ assertEquals(\"1:52: Cannot order by non-grouped column [a], expected [bool]\",\n+ verify(\"SELECT AVG(int) a FROM test GROUP BY bool ORDER BY a\"));\n+ }\n+\n+ public void testGroupByOrderByFunctionOverKey() {\n+ assertEquals(\"1:44: Cannot order by non-grouped column [MAX(int)], expected [int]\",\n+ verify(\"SELECT int FROM test GROUP BY int ORDER BY MAX(int)\"));\n+ }\n+\n+ public void testGroupByOrderByScore() {\n+ assertEquals(\"1:44: Cannot order by non-grouped column [SCORE()], expected [int]\",\n+ verify(\"SELECT int FROM test GROUP BY int ORDER BY SCORE()\"));\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/analysis/analyzer/VerifierErrorMessagesTests.java", "status": "modified" }, { "diff": "@@ -49,4 +49,18 @@ public void testMultiGroupBy() {\n assertEquals(\"1:32: Currently, only a single expression can be used with GROUP BY; please select one of [bool, keyword]\",\n verify(\"SELECT bool FROM test GROUP BY bool, keyword\"));\n }\n+\n+ //\n+ // TODO potential improvements\n+ //\n+ // regarding resolution\n+ // public void testGroupByOrderByKeyAlias() {\n+ // assertEquals(\"1:8: Cannot use field [unsupported] type [ip_range] as is unsupported\",\n+ // verify(\"SELECT int i FROM test GROUP BY int ORDER BY i\"));\n+ // }\n+ //\n+ // public void testGroupByAlias() {\n+ // assertEquals(\"1:8: Cannot use field [unsupported] type [ip_range] as is unsupported\",\n+ // verify(\"SELECT int i FROM test GROUP BY i ORDER BY int\"));\n+ // }\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/planner/VerifierErrorMessagesTests.java", "status": "modified" } ] }
{ "body": "Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed. \r\n\r\nFixes #1873", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-05-07T13:05:21Z" }, { "body": "Thanks @bleskes ", "created_at": "2018-05-07T20:26:42Z" }, { "body": "@ywelsch I was quickly looking at the test in this PR. This issue resulted in dropping the data and resyncing it from other nodes right after that. If I read the test correctly it works only with cluster state a bit, it does not verify that the data resyncing issue does not occur explicitly. So the assumption is that changes in how the cluster state is represented now should not lead to the replication data issue, is that correct?", "created_at": "2018-05-08T05:21:29Z" }, { "body": "@lukas-vlcek The issue resulted from how two distinct components in the system interacted: the auto-expand replica logic, and the shard deletion logic. The way the shard deletion logic works is as follows: When a data node receives a cluster state where a shard is fully allocated (i.e. primary + all replicas are active) and that node has some unallocated data for that shard on disk, it proceeds to delete said data as there is no need to keep this extra copy of the data around. The way the auto-expand replica logic used to work was as follows: This component hooked into the cluster lifecycle, waiting for cluster state changes, comparing the number of nodes in the cluster with the number of auto-expanded replicas and then submitting a settings update to the cluster to adjust the number_of_replicas if the two got out of sync. The way these two components now interacted resulted in the following behavior: Assume a 5 node cluster and an index with auto-expand-replicas set to `0-all`. In that case, `number_of_replicas` is expanded to 4 (i.e. there are 4 replica copies for each primary). Now assume that one node drops out only to later on rejoin the cluster. When the node drops from the cluster, it is removed from the cluster state. In a follow-up settings update initiated by the auto-expand-logic, `number_of_replicas` for the index is adjusted from 4 to 3. When the node rejoins the cluster, it is added to the cluster state by the master and receives this cluster state. The cluster state now has 5 nodes again, but the `number_of_replicas` is not expanded yet from 3 to 4, as this is only triggered shortly after by the auto-expand component (running on the active master). The shard deletion logic on the data node now sees that the shard is fully allocated (`number_of_replicas` set to 3 with all shard copies active). It then proceeds to delete the extra local shard copy, just before it receives the updated cluster state where `number_of_replicas` is now adjusted to 4 again. It therefore has to resync the full data from the primary again as the local copy was completely wiped.\r\n\r\nThe two main options to fix this were:\r\n- make the shard deletion logic auto-expand replicas aware, accounting for the inconsistencies in the cluster state.\r\n- ensure that cluster states are always consistent w.r.t the auto-expansion of replicas.\r\n\r\nFor better separation of concerns, and to have stronger consistency guarantees on cluster states, I opted for the second kind of fix. With this PR, the auto-expansion of replicas is done in the same cluster state update where nodes are added or removed from the cluster state. 
This means that when the newly joining node receives its first cluster state, `number_of_replicas` is already correctly expanded to 4, and the node is aware that the shard is not fully allocated, and therefore does not proceed to delete its local shard copy. The tests in this PR check that the consistency guarantees in the cluster state (`number_of_replicas` is properly expanded at all times) cannot be violated. The shard deletion logic is already separately tested, working on the assumption of a consistent cluster state.", "created_at": "2018-05-08T06:48:37Z" } ], "number": 30423, "title": "Auto-expand replicas when adding or removing nodes" }
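The numbers in the walkthrough above follow directly from the auto-expand rule. A tiny sketch of that rule, assuming the documented `min-max` semantics (the replica count tracks the number of eligible data nodes minus one, clamped to the configured bounds); the method is illustrative, not the actual `AutoExpandReplicas` implementation.

```java
// Sketch of the "auto_expand_replicas: min-max" rule; "0-all" behaves like
// min=0 and an effectively unbounded max.
public class AutoExpandSketch {

    static int autoExpandedReplicas(int min, int max, int dataNodes) {
        return Math.max(min, Math.min(max, dataNodes - 1));
    }

    public static void main(String[] args) {
        int min = 0, max = Integer.MAX_VALUE; // "0-all"
        System.out.println(autoExpandedReplicas(min, max, 5)); // 4 -> fully expanded on 5 data nodes
        System.out.println(autoExpandedReplicas(min, max, 4)); // 3 -> after one node drops out
        System.out.println(autoExpandedReplicas(min, max, 5)); // 4 -> after the node rejoins
    }
}
```

With the change in this PR, the step from 3 back to 4 happens in the same cluster state update that re-adds the node, so the rejoining data node never observes the intermediate "fully allocated with 3 replicas" state that used to trigger the shard deletion described above.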
{ "body": "#30423 combined auto-expansion in the same cluster state update where nodes are removed. As the auto-expansion step would run before deassociating the dead nodes from the routing table, the auto-expansion would possibly remove replicas from live nodes instead of dead ones. This PR reverses the order to ensure that when nodes leave the cluster that the auto-expand-replica functionality only triggers after failing the shards on the removed nodes. This ensures that active shards on other live nodes are not failed if the primary resided on a now dead node.\r\nInstead, one of the replicas on the live nodes first gets promoted to primary, and the auto-expansion (removing replicas) only triggers in a follow-up step (but still same cluster state update).\r\n\r\nRelates to https://github.com/elastic/elasticsearch/issues/30456#issuecomment-388553530\r\nand follow-up of #30423\r\n", "number": 30553, "review_comments": [], "title": "Auto-expand replicas only after failing nodes" }
{ "commits": [ { "message": "Auto-expand replicas only after failing nodes" }, { "message": "simplify" }, { "message": "move assertion" }, { "message": "Merge remote-tracking branch 'elastic/master' into auto-expand-replicas-on-node-removal" } ], "files": [ { "diff": "@@ -114,11 +114,24 @@ public ClusterState applyStartedShards(ClusterState clusterState, List<ShardRout\n }\n \n protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, RoutingAllocation allocation, String reason) {\n- RoutingTable oldRoutingTable = oldState.routingTable();\n- RoutingNodes newRoutingNodes = allocation.routingNodes();\n+ ClusterState newState = buildResult(oldState, allocation);\n+\n+ logClusterHealthStateChange(\n+ new ClusterStateHealth(oldState),\n+ new ClusterStateHealth(newState),\n+ reason\n+ );\n+\n+ return newState;\n+ }\n+\n+ private ClusterState buildResult(ClusterState oldState, RoutingAllocation allocation) {\n+ final RoutingTable oldRoutingTable = oldState.routingTable();\n+ final RoutingNodes newRoutingNodes = allocation.routingNodes();\n final RoutingTable newRoutingTable = new RoutingTable.Builder().updateNodes(oldRoutingTable.version(), newRoutingNodes).build();\n- MetaData newMetaData = allocation.updateMetaDataWithRoutingChanges(newRoutingTable);\n+ final MetaData newMetaData = allocation.updateMetaDataWithRoutingChanges(newRoutingTable);\n assert newRoutingTable.validate(newMetaData); // validates the routing table is coherent with the cluster state metadata\n+\n final ClusterState.Builder newStateBuilder = ClusterState.builder(oldState)\n .routingTable(newRoutingTable)\n .metaData(newMetaData);\n@@ -131,13 +144,7 @@ protected ClusterState buildResultAndLogHealthChange(ClusterState oldState, Rout\n newStateBuilder.customs(customsBuilder.build());\n }\n }\n- final ClusterState newState = newStateBuilder.build();\n- logClusterHealthStateChange(\n- new ClusterStateHealth(oldState),\n- new ClusterStateHealth(newState),\n- reason\n- );\n- return newState;\n+ return newStateBuilder.build();\n }\n \n // Used for testing\n@@ -209,24 +216,23 @@ public ClusterState applyFailedShards(final ClusterState clusterState, final Lis\n * if needed.\n */\n public ClusterState deassociateDeadNodes(ClusterState clusterState, boolean reroute, String reason) {\n- ClusterState fixedClusterState = adaptAutoExpandReplicas(clusterState);\n- RoutingNodes routingNodes = getMutableRoutingNodes(fixedClusterState);\n+ RoutingNodes routingNodes = getMutableRoutingNodes(clusterState);\n // shuffle the unassigned nodes, just so we won't have things like poison failed shards\n routingNodes.unassigned().shuffle();\n- RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, fixedClusterState,\n+ RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState,\n clusterInfoService.getClusterInfo(), currentNanoTime());\n \n // first, clear from the shards any node id they used to belong to that is now dead\n deassociateDeadNodes(allocation);\n \n- if (reroute) {\n- reroute(allocation);\n+ if (allocation.routingNodesChanged()) {\n+ clusterState = buildResult(clusterState, allocation);\n }\n-\n- if (fixedClusterState == clusterState && allocation.routingNodesChanged() == false) {\n+ if (reroute) {\n+ return reroute(clusterState, reason);\n+ } else {\n return clusterState;\n }\n- return buildResultAndLogHealthChange(clusterState, allocation, reason);\n }\n \n /**", "filename": 
"server/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java", "status": "modified" }, { "diff": "@@ -380,7 +380,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n /**\n * a task indicated that the current node should become master, if no current master is known\n */\n- private static final DiscoveryNode BECOME_MASTER_TASK = new DiscoveryNode(\"_BECOME_MASTER_TASK_\",\n+ public static final DiscoveryNode BECOME_MASTER_TASK = new DiscoveryNode(\"_BECOME_MASTER_TASK_\",\n new TransportAddress(TransportAddress.META_ADDRESS, 0),\n Collections.emptyMap(), Collections.emptySet(), Version.CURRENT) {\n @Override\n@@ -393,7 +393,7 @@ public String toString() {\n * a task that is used to signal the election is stopped and we should process pending joins.\n * it may be use in combination with {@link #BECOME_MASTER_TASK}\n */\n- private static final DiscoveryNode FINISH_ELECTION_TASK = new DiscoveryNode(\"_FINISH_ELECTION_\",\n+ public static final DiscoveryNode FINISH_ELECTION_TASK = new DiscoveryNode(\"_FINISH_ELECTION_\",\n new TransportAddress(TransportAddress.META_ADDRESS, 0), Collections.emptyMap(), Collections.emptySet(), Version.CURRENT) {\n @Override\n public String toString() {", "filename": "server/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java", "status": "modified" }, { "diff": "@@ -18,8 +18,36 @@\n */\n package org.elasticsearch.cluster.metadata;\n \n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n+import org.elasticsearch.action.support.ActiveShardCount;\n+import org.elasticsearch.action.support.replication.ClusterStateCreationUtils;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.set.Sets;\n+import org.elasticsearch.indices.cluster.ClusterStateChanges;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.threadpool.TestThreadPool;\n+import org.elasticsearch.threadpool.ThreadPool;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Set;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.stream.Collectors;\n+\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n+import static org.hamcrest.Matchers.everyItem;\n+import static org.hamcrest.Matchers.isIn;\n \n public class AutoExpandReplicasTests extends ESTestCase {\n \n@@ -72,4 +100,104 @@ public void testInvalidValues() {\n }\n \n }\n+\n+ private static final AtomicInteger nodeIdGenerator = new AtomicInteger();\n+\n+ protected DiscoveryNode createNode(DiscoveryNode.Role... 
mustHaveRoles) {\n+ Set<DiscoveryNode.Role> roles = new HashSet<>(randomSubsetOf(Sets.newHashSet(DiscoveryNode.Role.values())));\n+ for (DiscoveryNode.Role mustHaveRole : mustHaveRoles) {\n+ roles.add(mustHaveRole);\n+ }\n+ final String id = String.format(Locale.ROOT, \"node_%03d\", nodeIdGenerator.incrementAndGet());\n+ return new DiscoveryNode(id, id, buildNewFakeTransportAddress(), Collections.emptyMap(), roles,\n+ Version.CURRENT);\n+ }\n+\n+ /**\n+ * Checks that when nodes leave the cluster that the auto-expand-replica functionality only triggers after failing the shards on\n+ * the removed nodes. This ensures that active shards on other live nodes are not failed if the primary resided on a now dead node.\n+ * Instead, one of the replicas on the live nodes first gets promoted to primary, and the auto-expansion (removing replicas) only\n+ * triggers in a follow-up step.\n+ */\n+ public void testAutoExpandWhenNodeLeavesAndPossiblyRejoins() throws InterruptedException {\n+ final ThreadPool threadPool = new TestThreadPool(getClass().getName());\n+ final ClusterStateChanges cluster = new ClusterStateChanges(xContentRegistry(), threadPool);\n+\n+ try {\n+ List<DiscoveryNode> allNodes = new ArrayList<>();\n+ DiscoveryNode localNode = createNode(DiscoveryNode.Role.MASTER); // local node is the master\n+ allNodes.add(localNode);\n+ int numDataNodes = randomIntBetween(3, 5);\n+ List<DiscoveryNode> dataNodes = new ArrayList<>(numDataNodes);\n+ for (int i = 0; i < numDataNodes; i++) {\n+ dataNodes.add(createNode(DiscoveryNode.Role.DATA));\n+ }\n+ allNodes.addAll(dataNodes);\n+ ClusterState state = ClusterStateCreationUtils.state(localNode, localNode, allNodes.toArray(new DiscoveryNode[allNodes.size()]));\n+\n+ CreateIndexRequest request = new CreateIndexRequest(\"index\",\n+ Settings.builder()\n+ .put(SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(SETTING_AUTO_EXPAND_REPLICAS, \"0-all\").build())\n+ .waitForActiveShards(ActiveShardCount.NONE);\n+ state = cluster.createIndex(state, request);\n+ assertTrue(state.metaData().hasIndex(\"index\"));\n+ while (state.routingTable().index(\"index\").shard(0).allShardsStarted() == false) {\n+ logger.info(state);\n+ state = cluster.applyStartedShards(state,\n+ state.routingTable().index(\"index\").shard(0).shardsWithState(ShardRoutingState.INITIALIZING));\n+ state = cluster.reroute(state, new ClusterRerouteRequest());\n+ }\n+\n+ IndexShardRoutingTable preTable = state.routingTable().index(\"index\").shard(0);\n+ final Set<String> unchangedNodeIds;\n+ final IndexShardRoutingTable postTable;\n+\n+ if (randomBoolean()) {\n+ // simulate node removal\n+ List<DiscoveryNode> nodesToRemove = randomSubsetOf(2, dataNodes);\n+ unchangedNodeIds = dataNodes.stream().filter(n -> nodesToRemove.contains(n) == false)\n+ .map(DiscoveryNode::getId).collect(Collectors.toSet());\n+\n+ state = cluster.removeNodes(state, nodesToRemove);\n+ postTable = state.routingTable().index(\"index\").shard(0);\n+\n+ assertTrue(\"not all shards started in \" + state.toString(), postTable.allShardsStarted());\n+ assertThat(postTable.toString(), postTable.getAllAllocationIds(), everyItem(isIn(preTable.getAllAllocationIds())));\n+ } else {\n+ // fake an election where conflicting nodes are removed and readded\n+ state = ClusterState.builder(state).nodes(DiscoveryNodes.builder(state.nodes()).masterNodeId(null).build()).build();\n+\n+ List<DiscoveryNode> conflictingNodes = randomSubsetOf(2, dataNodes);\n+ unchangedNodeIds = dataNodes.stream().filter(n -> conflictingNodes.contains(n) == false)\n+ 
.map(DiscoveryNode::getId).collect(Collectors.toSet());\n+\n+ List<DiscoveryNode> nodesToAdd = conflictingNodes.stream()\n+ .map(n -> new DiscoveryNode(n.getName(), n.getId(), buildNewFakeTransportAddress(), n.getAttributes(), n.getRoles(), n.getVersion()))\n+ .collect(Collectors.toList());\n+\n+ if (randomBoolean()) {\n+ nodesToAdd.add(createNode(DiscoveryNode.Role.DATA));\n+ }\n+\n+ state = cluster.joinNodesAndBecomeMaster(state, nodesToAdd);\n+ postTable = state.routingTable().index(\"index\").shard(0);\n+ }\n+\n+ Set<String> unchangedAllocationIds = preTable.getShards().stream().filter(shr -> unchangedNodeIds.contains(shr.currentNodeId()))\n+ .map(shr -> shr.allocationId().getId()).collect(Collectors.toSet());\n+\n+ assertThat(postTable.toString(), unchangedAllocationIds, everyItem(isIn(postTable.getAllAllocationIds())));\n+\n+ postTable.getShards().forEach(\n+ shardRouting -> {\n+ if (shardRouting.assignedToNode() && unchangedAllocationIds.contains(shardRouting.allocationId().getId())) {\n+ assertTrue(\"Shard should be active: \" + shardRouting, shardRouting.active());\n+ }\n+ }\n+ );\n+ } finally {\n+ terminate(threadPool);\n+ }\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/cluster/metadata/AutoExpandReplicasTests.java", "status": "modified" }, { "diff": "@@ -87,6 +87,7 @@\n import org.elasticsearch.transport.TransportService;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashSet;\n@@ -232,6 +233,15 @@ public ClusterState addNodes(ClusterState clusterState, List<DiscoveryNode> node\n return runTasks(joinTaskExecutor, clusterState, nodes);\n }\n \n+ public ClusterState joinNodesAndBecomeMaster(ClusterState clusterState, List<DiscoveryNode> nodes) {\n+ List<DiscoveryNode> joinNodes = new ArrayList<>();\n+ joinNodes.add(NodeJoinController.BECOME_MASTER_TASK);\n+ joinNodes.add(NodeJoinController.FINISH_ELECTION_TASK);\n+ joinNodes.addAll(nodes);\n+\n+ return runTasks(joinTaskExecutor, clusterState, joinNodes);\n+ }\n+\n public ClusterState removeNodes(ClusterState clusterState, List<DiscoveryNode> nodes) {\n return runTasks(nodeRemovalExecutor, clusterState, nodes.stream()\n .map(n -> new ZenDiscovery.NodeRemovalClusterStateTaskExecutor.Task(n, \"dummy reason\")).collect(Collectors.toList()));", "filename": "server/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java", "status": "modified" } ] }
{ "body": "Fixes wire compatibility streaming JobUpdates to versions before 7.0.\r\n\r\nTowards #30456\r\n\r\n`establishedModelMemory` was introduced in 6.1.0\r\n`jobVersion` in 6.3.0\r\n`modelSnapshotMinVersion` in 7.0.0-alpha1\r\n\r\nfields must be streamed in that order. \r\n\r\nAdditionally I made a change to prevent the job update fields that should only be set internally being set by a request to the `anomaly_detectors/JOB_ID/_update`\r\n\r\n\r\n", "comments": [ { "body": "Pinging @elastic/ml-core", "created_at": "2018-05-10T09:32:08Z" }, { "body": "Oh, I also think this should go into `6.3.0`, right?", "created_at": "2018-05-10T11:20:17Z" } ], "number": 30512, "title": "[ML] Fix wire BWC for JobUpdate" }
{ "body": "Internal fields `Model Snapshot ID`, `Established Model Memory` and `Job Version` should not be settable via a request to `anomaly_detectors/JOB_ID/_update`\r\n\r\nPartial backport of #30512", "number": 30537, "review_comments": [], "title": "[ML] Hide internal Job update options from the REST API" }
{ "commits": [ { "message": "Add JobVersion to update\n\nHide internal fields from the REST request parser" }, { "message": "Change yml tests to not use secret job update settings\n\nUse the revert model snapshot API instead" } ], "files": [ { "diff": "@@ -45,7 +45,7 @@ public PutJobAction.Response newResponse() {\n public static class Request extends AcknowledgedRequest<UpdateJobAction.Request> implements ToXContentObject {\n \n public static UpdateJobAction.Request parseRequest(String jobId, XContentParser parser) {\n- JobUpdate update = JobUpdate.PARSER.apply(parser, null).setJobId(jobId).build();\n+ JobUpdate update = JobUpdate.EXTERNAL_PARSER.apply(parser, null).setJobId(jobId).build();\n return new UpdateJobAction.Request(jobId, update);\n }\n ", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/action/UpdateJobAction.java", "status": "modified" }, { "diff": "@@ -30,26 +30,34 @@\n public class JobUpdate implements Writeable, ToXContentObject {\n public static final ParseField DETECTORS = new ParseField(\"detectors\");\n \n- public static final ConstructingObjectParser<Builder, Void> PARSER = new ConstructingObjectParser<>(\n+ // For internal updates\n+ static final ConstructingObjectParser<Builder, Void> INTERNAL_PARSER = new ConstructingObjectParser<>(\n+ \"job_update\", args -> new Builder((String) args[0]));\n+\n+ // For parsing REST requests\n+ public static final ConstructingObjectParser<Builder, Void> EXTERNAL_PARSER = new ConstructingObjectParser<>(\n \"job_update\", args -> new Builder((String) args[0]));\n \n static {\n- PARSER.declareString(ConstructingObjectParser.optionalConstructorArg(), Job.ID);\n- PARSER.declareStringArray(Builder::setGroups, Job.GROUPS);\n- PARSER.declareStringOrNull(Builder::setDescription, Job.DESCRIPTION);\n- PARSER.declareObjectArray(Builder::setDetectorUpdates, DetectorUpdate.PARSER, DETECTORS);\n- PARSER.declareObject(Builder::setModelPlotConfig, ModelPlotConfig.CONFIG_PARSER, Job.MODEL_PLOT_CONFIG);\n- PARSER.declareObject(Builder::setAnalysisLimits, AnalysisLimits.CONFIG_PARSER, Job.ANALYSIS_LIMITS);\n- PARSER.declareString((builder, val) -> builder.setBackgroundPersistInterval(\n- TimeValue.parseTimeValue(val, Job.BACKGROUND_PERSIST_INTERVAL.getPreferredName())), Job.BACKGROUND_PERSIST_INTERVAL);\n- PARSER.declareLong(Builder::setRenormalizationWindowDays, Job.RENORMALIZATION_WINDOW_DAYS);\n- PARSER.declareLong(Builder::setResultsRetentionDays, Job.RESULTS_RETENTION_DAYS);\n- PARSER.declareLong(Builder::setModelSnapshotRetentionDays, Job.MODEL_SNAPSHOT_RETENTION_DAYS);\n- PARSER.declareStringArray(Builder::setCategorizationFilters, AnalysisConfig.CATEGORIZATION_FILTERS);\n- PARSER.declareField(Builder::setCustomSettings, (p, c) -> p.map(), Job.CUSTOM_SETTINGS, ObjectParser.ValueType.OBJECT);\n- PARSER.declareString(Builder::setModelSnapshotId, Job.MODEL_SNAPSHOT_ID);\n- PARSER.declareLong(Builder::setEstablishedModelMemory, Job.ESTABLISHED_MODEL_MEMORY);\n- PARSER.declareString(Builder::setJobVersion, Job.JOB_VERSION);\n+ for (ConstructingObjectParser<Builder, Void> parser : Arrays.asList(INTERNAL_PARSER, EXTERNAL_PARSER)) {\n+ parser.declareString(ConstructingObjectParser.optionalConstructorArg(), Job.ID);\n+ parser.declareStringArray(Builder::setGroups, Job.GROUPS);\n+ parser.declareStringOrNull(Builder::setDescription, Job.DESCRIPTION);\n+ parser.declareObjectArray(Builder::setDetectorUpdates, DetectorUpdate.PARSER, DETECTORS);\n+ parser.declareObject(Builder::setModelPlotConfig, ModelPlotConfig.CONFIG_PARSER, 
Job.MODEL_PLOT_CONFIG);\n+ parser.declareObject(Builder::setAnalysisLimits, AnalysisLimits.CONFIG_PARSER, Job.ANALYSIS_LIMITS);\n+ parser.declareString((builder, val) -> builder.setBackgroundPersistInterval(\n+ TimeValue.parseTimeValue(val, Job.BACKGROUND_PERSIST_INTERVAL.getPreferredName())), Job.BACKGROUND_PERSIST_INTERVAL);\n+ parser.declareLong(Builder::setRenormalizationWindowDays, Job.RENORMALIZATION_WINDOW_DAYS);\n+ parser.declareLong(Builder::setResultsRetentionDays, Job.RESULTS_RETENTION_DAYS);\n+ parser.declareLong(Builder::setModelSnapshotRetentionDays, Job.MODEL_SNAPSHOT_RETENTION_DAYS);\n+ parser.declareStringArray(Builder::setCategorizationFilters, AnalysisConfig.CATEGORIZATION_FILTERS);\n+ parser.declareField(Builder::setCustomSettings, (p, c) -> p.map(), Job.CUSTOM_SETTINGS, ObjectParser.ValueType.OBJECT);\n+ }\n+ // These fields should not be set by a REST request\n+ INTERNAL_PARSER.declareString(Builder::setModelSnapshotId, Job.MODEL_SNAPSHOT_ID);\n+ INTERNAL_PARSER.declareLong(Builder::setEstablishedModelMemory, Job.ESTABLISHED_MODEL_MEMORY);\n+ INTERNAL_PARSER.declareString(Builder::setJobVersion, Job.JOB_VERSION);\n }\n \n private final String jobId;\n@@ -224,14 +232,14 @@ public Long getEstablishedModelMemory() {\n return establishedModelMemory;\n }\n \n- public boolean isAutodetectProcessUpdate() {\n- return modelPlotConfig != null || detectorUpdates != null;\n- }\n-\n public Version getJobVersion() {\n return jobVersion;\n }\n \n+ public boolean isAutodetectProcessUpdate() {\n+ return modelPlotConfig != null || detectorUpdates != null;\n+ }\n+\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject();\n@@ -332,7 +340,7 @@ public Set<String> getUpdateFields() {\n /**\n * Updates {@code source} with the new values in this object returning a new {@link Job}.\n *\n- * @param source Source job to be updated\n+ * @param source Source job to be updated\n * @param maxModelMemoryLimit The maximum model memory allowed\n * @return A new job equivalent to {@code source} updated.\n */", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/config/JobUpdate.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n \n public class JobUpdateTests extends AbstractSerializingTestCase<JobUpdate> {\n \n+ private boolean useInternalParser = randomBoolean();\n+\n @Override\n protected JobUpdate createTestInstance() {\n JobUpdate.Builder update = new JobUpdate.Builder(randomAlphaOfLength(4));\n@@ -84,13 +86,13 @@ protected JobUpdate createTestInstance() {\n if (randomBoolean()) {\n update.setCustomSettings(Collections.singletonMap(randomAlphaOfLength(10), randomAlphaOfLength(10)));\n }\n- if (randomBoolean()) {\n+ if (useInternalParser && randomBoolean()) {\n update.setModelSnapshotId(randomAlphaOfLength(10));\n }\n- if (randomBoolean()) {\n+ if (useInternalParser && randomBoolean()) {\n update.setEstablishedModelMemory(randomNonNegativeLong());\n }\n- if (randomBoolean()) {\n+ if (useInternalParser && randomBoolean()) {\n update.setJobVersion(randomFrom(Version.CURRENT, Version.V_6_2_0, Version.V_6_1_0));\n }\n \n@@ -104,7 +106,11 @@ protected Writeable.Reader<JobUpdate> instanceReader() {\n \n @Override\n protected JobUpdate doParseInstance(XContentParser parser) {\n- return JobUpdate.PARSER.apply(parser, null).build();\n+ if (useInternalParser) {\n+ return JobUpdate.INTERNAL_PARSER.apply(parser, null).build();\n+ } else {\n+ return JobUpdate.EXTERNAL_PARSER.apply(parser, 
null).build();\n+ }\n }\n \n public void testMergeWithJob() {\n@@ -141,7 +147,7 @@ public void testMergeWithJob() {\n JobUpdate update = updateBuilder.build();\n \n Job.Builder jobBuilder = new Job.Builder(\"foo\");\n- jobBuilder.setGroups(Arrays.asList(\"group-1\"));\n+ jobBuilder.setGroups(Collections.singletonList(\"group-1\"));\n Detector.Builder d1 = new Detector.Builder(\"info_content\", \"domain\");\n d1.setOverFieldName(\"mlcategory\");\n Detector.Builder d2 = new Detector.Builder(\"min\", \"field\");", "filename": "x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/job/config/JobUpdateTests.java", "status": "modified" }, { "diff": "@@ -88,7 +88,24 @@ setup:\n \"description\": \"second\",\n \"latest_record_time_stamp\": \"2016-06-01T00:00:00Z\",\n \"latest_result_time_stamp\": \"2016-06-01T00:00:00Z\",\n- \"snapshot_doc_count\": 3\n+ \"snapshot_doc_count\": 3,\n+ \"model_size_stats\": {\n+ \"job_id\" : \"delete-model-snapshot\",\n+ \"result_type\" : \"model_size_stats\",\n+ \"model_bytes\" : 0,\n+ \"total_by_field_count\" : 101,\n+ \"total_over_field_count\" : 0,\n+ \"total_partition_field_count\" : 0,\n+ \"bucket_allocation_failures_count\" : 0,\n+ \"memory_status\" : \"ok\",\n+ \"log_time\" : 1495808248662,\n+ \"timestamp\" : 1495808248662\n+ },\n+ \"quantiles\": {\n+ \"job_id\": \"delete-model-snapshot\",\n+ \"timestamp\": 1495808248662,\n+ \"quantile_state\": \"quantiles-1\"\n+ }\n }\n \n - do:\n@@ -106,12 +123,10 @@ setup:\n - do:\n headers:\n Authorization: \"Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==\" # run as x_pack_rest_user, i.e. the test setup superuser\n- xpack.ml.update_job:\n+ xpack.ml.revert_model_snapshot:\n job_id: delete-model-snapshot\n- body: >\n- {\n- \"model_snapshot_id\": \"active-snapshot\"\n- }\n+ snapshot_id: \"active-snapshot\"\n+\n \n ---\n \"Test delete snapshot missing snapshotId\":", "filename": "x-pack/plugin/src/test/resources/rest-api-spec/test/ml/delete_model_snapshot.yml", "status": "modified" } ] }
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): `5.6.3` (bug also appears to be on master)\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): 1.8.0_161\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Darwin\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen using a `geo_polygon` query using points, I would expect any lon value `< -180` or `> 180` to be invalid and return an error similar to the one returned when any lat value is `< -90` or `> 90`. Looking at the code [here](https://github.com/elastic/elasticsearch/blob/v5.6.3/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java#L191) the check does exist but is using the lat value instead of the lon value. \r\n\r\nThis also appears to exist in the current master version (link to code [here](https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java#L180))\r\n\r\n**Steps to reproduce**:\r\n\r\nYou can reproduce by running a `geo_polygon` query using lat/long points in elasticsearch `5.6.3`. Note that you will need a `geo_point` field to query against. Using a lat of `< -90` or `> 90` will return an error about an `illegal latitude value`, but using a lon of `< -180` or `> 180` will not return an error.\r\n\r\nBased on the code link above, I would also assume you could do the same on any version `> 5.6.3`.\r\n", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-05-09T20:40:36Z" }, { "body": "Tested it in 6.x and it looks like the query with invalid longitude actually fails but with a quite convoluted error that is generated during polygon validation. ", "created_at": "2018-05-09T21:45:10Z" } ], "number": 30488, "title": "geo_polygon query using point does not properly validate longitude values" }
{ "body": "Fixes longitude validation in geo_polygon_query builder. The queries\r\nwith wrong longitude currently fail but only later during polygon\r\nwith quite complicated error message.\r\n\r\nFixes #30488\r\n", "number": 30497, "review_comments": [], "title": "Add proper longitude validation in geo_polygon_query" }
{ "commits": [ { "message": "Add proper longitude validation in geo_polygon_query\n\nFixes longitude validation in geo_polygon_query builder. The queries\nwith wrong longitude currently fail but only later during polygon\nwith quite complicated error message.\n\nFixes #30488" } ], "files": [ { "diff": "@@ -177,7 +177,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n throw new QueryShardException(context, \"illegal latitude value [{}] for [{}]\", point.lat(),\n GeoPolygonQueryBuilder.NAME);\n }\n- if (!GeoUtils.isValidLongitude(point.lat())) {\n+ if (!GeoUtils.isValidLongitude(point.lon())) {\n throw new QueryShardException(context, \"illegal longitude value [{}] for [{}]\", point.lon(),\n GeoPolygonQueryBuilder.NAME);\n }", "filename": "server/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java", "status": "modified" }, { "diff": "@@ -254,4 +254,38 @@ public void testIgnoreUnmapped() throws IOException {\n QueryShardException e = expectThrows(QueryShardException.class, () -> failingQueryBuilder.toQuery(createShardContext()));\n assertThat(e.getMessage(), containsString(\"failed to find geo_point field [unmapped]\"));\n }\n+\n+ public void testPointValidation() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ QueryShardContext context = createShardContext();\n+ String queryInvalidLat = \"{\\n\" +\n+ \" \\\"geo_polygon\\\":{\\n\" +\n+ \" \\\"\" + GEO_POINT_FIELD_NAME + \"\\\":{\\n\" +\n+ \" \\\"points\\\":[\\n\" +\n+ \" [-70, 140],\\n\" +\n+ \" [-80, 30],\\n\" +\n+ \" [-90, 20]\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\\n\";\n+\n+ QueryShardException e1 = expectThrows(QueryShardException.class, () -> parseQuery(queryInvalidLat).toQuery(context));\n+ assertThat(e1.getMessage(), containsString(\"illegal latitude value [140.0] for [geo_polygon]\"));\n+\n+ String queryInvalidLon = \"{\\n\" +\n+ \" \\\"geo_polygon\\\":{\\n\" +\n+ \" \\\"\" + GEO_POINT_FIELD_NAME + \"\\\":{\\n\" +\n+ \" \\\"points\\\":[\\n\" +\n+ \" [-70, 40],\\n\" +\n+ \" [-80, 30],\\n\" +\n+ \" [-190, 20]\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\\n\";\n+\n+ QueryShardException e2 = expectThrows(QueryShardException.class, () -> parseQuery(queryInvalidLon).toQuery(context));\n+ assertThat(e2.getMessage(), containsString(\"illegal longitude value [-190.0] for [geo_polygon]\"));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/index/query/GeoPolygonQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -1,19 +1,12 @@\n {\n- \"filtered\": {\n- \"query\": {\n- \"match_all\": {}\n- },\n- \"filter\": {\n- \"geo_polygon\": {\n- \"location\": {\n- \"points\": {\n- \"points\": [\n- [-70, 40],\n- [-80, 30],\n- [-90, 20]\n- ]\n- }\n- }\n+ \"geo_polygon\": {\n+ \"location\": {\n+ \"points\": {\n+ \"points\": [\n+ [-70, 40],\n+ [-80, 30],\n+ [-90, 20]\n+ ]\n }\n }\n }", "filename": "server/src/test/resources/org/elasticsearch/index/query/geo_polygon_exception_1.json", "status": "modified" }, { "diff": "@@ -1,21 +1,13 @@\n {\n- \"filtered\": {\n- \"query\": {\n- \"match_all\": {}\n- },\n- \"filter\": {\n- \"geo_polygon\": {\n- \"location\": {\n- \"points\": [\n- [-70, 40],\n- [-80, 30],\n- [-90, 20]\n- ],\n- \"something_else\": {\n+ \"geo_polygon\": {\n+ \"location\": {\n+ \"points\": [\n+ [-70, 40],\n+ [-80, 30],\n+ [-90, 20]\n+ ],\n+ \"something_else\": {\n \n- }\n-\n- }\n }\n }\n }", "filename": 
"server/src/test/resources/org/elasticsearch/index/query/geo_polygon_exception_2.json", "status": "modified" }, { "diff": "@@ -1,12 +1,5 @@\n {\n- \"filtered\": {\n- \"query\": {\n- \"match_all\": {}\n- },\n- \"filter\": {\n- \"geo_polygon\": {\n- \"location\": [\"WRONG\"]\n- }\n- }\n+ \"geo_polygon\": {\n+ \"location\": [\"WRONG\"]\n }\n }", "filename": "server/src/test/resources/org/elasticsearch/index/query/geo_polygon_exception_3.json", "status": "modified" }, { "diff": "@@ -1,19 +1,12 @@\n {\n- \"filtered\": {\n- \"query\": {\n- \"match_all\": {}\n+ \"geo_polygon\": {\n+ \"location\": {\n+ \"points\": [\n+ [-70, 40],\n+ [-80, 30],\n+ [-90, 20]\n+ ]\n },\n- \"filter\": {\n- \"geo_polygon\": {\n- \"location\": {\n- \"points\": [\n- [-70, 40],\n- [-80, 30],\n- [-90, 20]\n- ]\n- },\n- \"bla\": true\n- }\n- }\n+ \"bla\": true\n }\n }", "filename": "server/src/test/resources/org/elasticsearch/index/query/geo_polygon_exception_4.json", "status": "modified" }, { "diff": "@@ -1,19 +1,12 @@\n {\n- \"filtered\": {\n- \"query\": {\n- \"match_all\": {}\n+ \"geo_polygon\": {\n+ \"location\": {\n+ \"points\": [\n+ [-70, 40],\n+ [-80, 30],\n+ [-90, 20]\n+ ]\n },\n- \"filter\": {\n- \"geo_polygon\": {\n- \"location\": {\n- \"points\": [\n- [-70, 40],\n- [-80, 30],\n- [-90, 20]\n- ]\n- },\n- \"bla\": [\"array\"]\n- }\n- }\n+ \"bla\": [\"array\"]\n }\n }", "filename": "server/src/test/resources/org/elasticsearch/index/query/geo_polygon_exception_5.json", "status": "modified" } ] }
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`): Docs issue with\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nThe **Repositories** section includes a few examples referencing `my_backup` like\r\n`PUT /_snapshot/my_backup`\r\nand\r\n`GET /_snapshot/my_backup`\r\n\r\nBut in the **Shared File System Repository** section it keeps referring to `my_backup` in the text but isn't using that in the examples so it makes it rather confusing;\r\n\r\n\"... This location (or one of its parent directories) must be registered in the path.repo setting on all master and data nodes.\r\nAssuming that the shared filesystem is mounted to /mount/backups/my_backup, the following setting should be added to elasticsearch.yml file:\"\r\n\r\n`path.repo: [\"/mount/backups\", \"/mount/longterm_backups\"]`\r\n\r\nAnd again here. It looks like maybe `my_backup` was changed to `my_fs_backup` but I don't know snapshots very well so I could me misinterpreting it.\r\n\r\n\"After all nodes are restarted, the following command can be used to register the shared file system repository with the name my_backup:\"\r\n\r\n```\r\nPUT /_snapshot/my_fs_backup\r\n{\r\n```\r\n\r\nAnd then a few more examples alternate between `my_backup` and `my_fs_backup`;\r\n\r\nGET /_snapshot/my_backup/_current\r\nDELETE /_snapshot/my_backup/snapshot_2\r\nDELETE /_snapshot/`my_fs_backup`\r\nPOST /_snapshot/my_backup/snapshot_1/_restore\r\n", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-05-08T08:11:27Z" }, { "body": "Thanks @LeeDr, I created #30480", "created_at": "2018-05-09T08:18:48Z" } ], "number": 30444, "title": "[DOC] inconsistency with my_backup examples" }
{ "body": "Closes #30444 and also fix headers level in repository-azure doc.", "number": 30480, "review_comments": [], "title": "[Docs] Fix inconsistencies in snapshot/restore doc" }
{ "commits": [ { "message": "[Docs] Fix inconsistencies in snapshot/restore doc\n\nCloses #30444" } ], "files": [ { "diff": "@@ -84,7 +84,7 @@ When `proxy.type` is set to `http` or `socks`, `proxy.host` and `proxy.port` mus\n \n \n [[repository-azure-repository-settings]]\n-===== Repository settings\n+==== Repository settings\n \n The Azure repository supports following settings:\n \n@@ -178,7 +178,7 @@ client.admin().cluster().preparePutRepository(\"my_backup_java1\")\n ----\n \n [[repository-azure-validation]]\n-===== Repository validation rules\n+==== Repository validation rules\n \n According to the http://msdn.microsoft.com/en-us/library/dd135715.aspx[containers naming guide], a container name must\n be a valid DNS name, conforming to the following naming rules:", "filename": "docs/plugins/repository-azure.asciidoc", "status": "modified" }, { "diff": "@@ -124,8 +124,8 @@ the shared file system repository it is necessary to mount the same shared files\n master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo`\n setting on all master and data nodes.\n \n-Assuming that the shared filesystem is mounted to `/mount/backups/my_backup`, the following setting should be added to\n-`elasticsearch.yml` file:\n+Assuming that the shared filesystem is mounted to `/mount/backups/my_fs_backup_location`, the following setting should\n+be added to `elasticsearch.yml` file:\n \n [source,yaml]\n --------------\n@@ -141,7 +141,7 @@ path.repo: [\"\\\\\\\\MY_SERVER\\\\Snapshots\"]\n --------------\n \n After all nodes are restarted, the following command can be used to register the shared file system repository with\n-the name `my_backup`:\n+the name `my_fs_backup`:\n \n [source,js]\n -----------------------------------\n@@ -419,7 +419,7 @@ A repository can be unregistered using the following command:\n \n [source,sh]\n -----------------------------------\n-DELETE /_snapshot/my_fs_backup\n+DELETE /_snapshot/my_backup\n -----------------------------------\n // CONSOLE\n // TEST[continued]", "filename": "docs/reference/modules/snapshots.asciidoc", "status": "modified" } ] }
{ "body": "Here is a stack dump from a failing test:\r\n\r\n```\r\n\"Thread-1\" #19 prio=5 os_prio=31 tid=0x00007f906eb14800 nid=0x7803 waiting on condition [0x000070000618c000]\r\n java.lang.Thread.State: WAITING (parking)\r\n at jdk.internal.misc.Unsafe.park(java.base@10/Native Method)\r\n - parking to wait for <0x00000006cecb5790> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)\r\n at java.util.concurrent.locks.LockSupport.park(java.base@10/LockSupport.java:194)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@10/AbstractQueuedSynchronizer.java:883)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.base@10/AbstractQueuedSynchronizer.java:915)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@10/AbstractQueuedSynchronizer.java:1238)\r\n at java.util.concurrent.locks.ReentrantLock.lock(java.base@10/ReentrantLock.java:267)\r\n at org.elasticsearch.common.util.concurrent.ReleasableLock.acquire(ReleasableLock.java:55)\r\n at org.elasticsearch.common.cache.Cache.lambda$computeIfAbsent$5(Cache.java:391)\r\n at org.elasticsearch.common.cache.Cache$$Lambda$207/1567223321.apply(Unknown Source)\r\n at java.util.concurrent.CompletableFuture.uniHandle(java.base@10/CompletableFuture.java:930)\r\n at java.util.concurrent.CompletableFuture$UniHandle.tryFire(java.base@10/CompletableFuture.java:907)\r\n at java.util.concurrent.CompletableFuture.postComplete(java.base@10/CompletableFuture.java:506)\r\n at java.util.concurrent.CompletableFuture.complete(java.base@10/CompletableFuture.java:2073)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:422)\r\n at org.elasticsearch.common.cache.CacheTests.lambda$testComputeIfAbsentDeadlock$9(CacheTests.java:358)\r\n at org.elasticsearch.common.cache.CacheTests$$Lambda$200/1841769401.run(Unknown Source)\r\n at java.lang.Thread.run(java.base@10/Thread.java:844)\r\n\r\n\"Thread-2\" #20 prio=5 os_prio=31 tid=0x00007f906eb17000 nid=0x8d03 waiting on condition [0x000070000628e000]\r\n java.lang.Thread.State: WAITING (parking)\r\n at jdk.internal.misc.Unsafe.park(java.base@10/Native Method)\r\n - parking to wait for <0x00000006ce8b7498> (a java.util.concurrent.CompletableFuture$Signaller)\r\n at java.util.concurrent.locks.LockSupport.park(java.base@10/LockSupport.java:194)\r\n at java.util.concurrent.CompletableFuture$Signaller.block(java.base@10/CompletableFuture.java:1796)\r\n at java.util.concurrent.ForkJoinPool.managedBlock(java.base@10/ForkJoinPool.java:3156)\r\n at java.util.concurrent.CompletableFuture.waitingGet(java.base@10/CompletableFuture.java:1823)\r\n at java.util.concurrent.CompletableFuture.get(java.base@10/CompletableFuture.java:1998)\r\n at org.elasticsearch.common.cache.Cache$CacheSegment.remove(Cache.java:290)\r\n at org.elasticsearch.common.cache.Cache.evictEntry(Cache.java:720)\r\n at org.elasticsearch.common.cache.Cache.evict(Cache.java:711)\r\n at org.elasticsearch.common.cache.Cache.promote(Cache.java:701)\r\n at org.elasticsearch.common.cache.Cache.lambda$computeIfAbsent$5(Cache.java:392)\r\n at org.elasticsearch.common.cache.Cache$$Lambda$207/1567223321.apply(Unknown Source)\r\n at java.util.concurrent.CompletableFuture.uniHandle(java.base@10/CompletableFuture.java:930)\r\n at java.util.concurrent.CompletableFuture$UniHandle.tryFire(java.base@10/CompletableFuture.java:907)\r\n at java.util.concurrent.CompletableFuture.postComplete(java.base@10/CompletableFuture.java:506)\r\n at 
java.util.concurrent.CompletableFuture.complete(java.base@10/CompletableFuture.java:2073)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:422)\r\n at org.elasticsearch.common.cache.CacheTests.lambda$testComputeIfAbsentDeadlock$9(CacheTests.java:358)\r\n at org.elasticsearch.common.cache.CacheTests$$Lambda$200/1841769401.run(Unknown Source)\r\n at java.lang.Thread.run(java.base@10/Thread.java:844)\r\n```\r\n\r\nI will push the failing test and mark it as awaiting fix by this issue.", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-05-07T16:55:10Z" } ], "number": 30428, "title": "Core cache implementation can deadlock" }
{ "body": "This commit avoids deadlocks in the cache by removing dangerous places where we try to take the LRU lock while completing a future. Instead, we block for the future to complete, and then execute the handling code under the LRU lock (for example, eviction).\r\n\r\nCloses #30428\r\n", "number": 30461, "review_comments": [ { "body": "why does this follow a different pattern than invalidate? as far as I can tell if we don't wait for the future to be completed, it may be re-inserted into the LRU by the future completion logic. I would also like to understand why this isn't a race condition even if you do complete the future (i.e., aren't we susceptible to race conditions in between the execution of the handler and the `get()` returning), which will cause the LRU to go out of sync (similar issue in put)", "created_at": "2018-05-09T10:03:17Z" }, { "body": "@jasontedor and I discussed this on another channel. The reason for a different execution paths on the call backs has to do with whether we already hold a reference to the relevant entry or not. I personally prefer to not have two paths here but not enough to request a change. \r\n\r\n> I would also like to understand why this isn't a race condition even if you do complete the future (i.e., aren't we susceptible to race conditions in between the execution of the handler and the get() returning), which will cause the LRU to go out of sync (similar issue in put)\r\n\r\nThis one is guarded against by the state in the entry. Deleting an entry also changes the state to deleted and thus it will not be re-added by the handler in computeIfAbsent. That said we found another issue there where delete doesn't mark the entry as deleted if it's in the new state. This will be dealt with in a followup.", "created_at": "2018-05-09T15:42:19Z" } ], "title": "Avoid deadlocks in cache" }
{ "commits": [ { "message": "Avoid deadlocks in cache\n\nThis commit avoids deadlocks in the cache by removing dangerous places\nwhere we try to take the LRU lock while completing a future. Instead, we\nblock for the future to complete, and then execute the handling code\nunder the LRU lock (for example, eviction)." }, { "message": "Final" }, { "message": "Merge branch 'master' into cache-deadlock\n\n* master:\n [Docs] Fix typo in cardinality-aggregation.asciidoc (#30434)\n Avoid NPE in `more_like_this` when field has zero tokens (#30365)\n Build: Switch to building javadoc with html5 (#30440)" }, { "message": "Variable names" }, { "message": "Merge remote-tracking branch 'elastic/master' into cache-deadlock\n\n* elastic/master:\n Mute ML upgrade test (#30458)\n Stop forking javac (#30462)\n Client: Deprecate many argument performRequest (#30315)\n Docs: Use task_id in examples of tasks (#30436)\n Security: Rename IndexLifecycleManager to SecurityIndexManager (#30442)" } ], "files": [ { "diff": "@@ -206,34 +206,33 @@ private static class CacheSegment<K, V> {\n */\n Entry<K, V> get(K key, long now, Predicate<Entry<K, V>> isExpired, Consumer<Entry<K, V>> onExpiration) {\n CompletableFuture<Entry<K, V>> future;\n- Entry<K, V> entry = null;\n try (ReleasableLock ignored = readLock.acquire()) {\n future = map.get(key);\n }\n if (future != null) {\n+ Entry<K, V> entry;\n try {\n- entry = future.handle((ok, ex) -> {\n- if (ok != null && !isExpired.test(ok)) {\n- segmentStats.hit();\n- ok.accessTime = now;\n- return ok;\n- } else {\n- segmentStats.miss();\n- if (ok != null) {\n- assert isExpired.test(ok);\n- onExpiration.accept(ok);\n- }\n- return null;\n- }\n- }).get();\n- } catch (ExecutionException | InterruptedException e) {\n+ entry = future.get();\n+ } catch (ExecutionException e) {\n+ assert future.isCompletedExceptionally();\n+ segmentStats.miss();\n+ return null;\n+ } catch (InterruptedException e) {\n throw new IllegalStateException(e);\n }\n- }\n- else {\n+ if (isExpired.test(entry)) {\n+ segmentStats.miss();\n+ onExpiration.accept(entry);\n+ return null;\n+ } else {\n+ segmentStats.hit();\n+ entry.accessTime = now;\n+ return entry;\n+ }\n+ } else {\n segmentStats.miss();\n+ return null;\n }\n- return entry;\n }\n \n /**\n@@ -269,30 +268,18 @@ Tuple<Entry<K, V>, Entry<K, V>> put(K key, V value, long now) {\n /**\n * remove an entry from the segment\n *\n- * @param key the key of the entry to remove from the cache\n- * @return the removed entry if there was one, otherwise null\n+ * @param key the key of the entry to remove from the cache\n+ * @param onRemoval a callback for the removed entry\n */\n- Entry<K, V> remove(K key) {\n+ void remove(K key, Consumer<CompletableFuture<Entry<K, V>>> onRemoval) {\n CompletableFuture<Entry<K, V>> future;\n- Entry<K, V> entry = null;\n try (ReleasableLock ignored = writeLock.acquire()) {\n future = map.remove(key);\n }\n if (future != null) {\n- try {\n- entry = future.handle((ok, ex) -> {\n- if (ok != null) {\n- segmentStats.eviction();\n- return ok;\n- } else {\n- return null;\n- }\n- }).get();\n- } catch (ExecutionException | InterruptedException e) {\n- throw new IllegalStateException(e);\n- }\n+ segmentStats.eviction();\n+ onRemoval.accept(future);\n }\n- return entry;\n }\n \n private static class SegmentStats {\n@@ -476,12 +463,18 @@ private void put(K key, V value, long now) {\n */\n public void invalidate(K key) {\n CacheSegment<K, V> segment = getCacheSegment(key);\n- Entry<K, V> entry = segment.remove(key);\n- if (entry != null) {\n- try 
(ReleasableLock ignored = lruLock.acquire()) {\n- delete(entry, RemovalNotification.RemovalReason.INVALIDATED);\n+ segment.remove(key, f -> {\n+ try {\n+ Entry<K, V> entry = f.get();\n+ try (ReleasableLock ignored = lruLock.acquire()) {\n+ delete(entry, RemovalNotification.RemovalReason.INVALIDATED);\n+ }\n+ } catch (ExecutionException e) {\n+ // ok\n+ } catch (InterruptedException e) {\n+ throw new IllegalStateException(e);\n }\n- }\n+ });\n }\n \n /**\n@@ -632,7 +625,7 @@ public void remove() {\n Entry<K, V> entry = current;\n if (entry != null) {\n CacheSegment<K, V> segment = getCacheSegment(entry.key);\n- segment.remove(entry.key);\n+ segment.remove(entry.key, f -> {});\n try (ReleasableLock ignored = lruLock.acquire()) {\n current = null;\n delete(entry, RemovalNotification.RemovalReason.INVALIDATED);\n@@ -717,7 +710,7 @@ private void evictEntry(Entry<K, V> entry) {\n \n CacheSegment<K, V> segment = getCacheSegment(entry.key);\n if (segment != null) {\n- segment.remove(entry.key);\n+ segment.remove(entry.key, f -> {});\n }\n delete(entry, RemovalNotification.RemovalReason.EVICTED);\n }", "filename": "server/src/main/java/org/elasticsearch/common/cache/Cache.java", "status": "modified" }, { "diff": "@@ -344,7 +344,6 @@ protected long now() {\n assertEquals(numberOfEntries, cache.stats().getEvictions());\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/30428\")\n public void testComputeIfAbsentDeadlock() throws BrokenBarrierException, InterruptedException {\n final int numberOfThreads = randomIntBetween(2, 32);\n final Cache<Integer, String> cache =", "filename": "server/src/test/java/org/elasticsearch/common/cache/CacheTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`):\r\nVersion 6.2.2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nDifferent metric aggregations are returning different values when none of the documents in the bucket contain the field used in the aggregation. `avg`, `min` and `max` for example return `null`, whereas the `percentiles` agg returns `NaN`. I would expect the return values to be consistent across aggregations, whether it be `null` or `NaN`.\r\n\r\nRan the following aggregations, where some buckets contained docs without the `test.sslTime` field.\r\n\r\navg agg:\r\n```\r\n\"aggs\": {\r\n \"2\": {\r\n \"date_histogram\": {\r\n \"field\": \"createdDate\",\r\n \"interval\": \"15m\",\r\n \"time_zone\": \"Europe/London\",\r\n \"min_doc_count\": 1\r\n },\r\n \"aggs\": {\r\n \"3\": {\r\n \"terms\": {\r\n \"field\": \"test.testId.keyword\",\r\n \"size\": 5,\r\n \"order\": {\r\n \"_term\": \"desc\"\r\n }\r\n },\r\n \"aggs\": {\r\n \"1\": {\r\n \"avg\": {\r\n \"field\": \"test.sslTime\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nPercentiles agg, to obtain the median:\r\n```\r\n\"aggs\": {\r\n \"2\": {\r\n \"date_histogram\": {\r\n \"field\": \"createdDate\",\r\n \"interval\": \"15m\",\r\n \"time_zone\": \"Europe/London\",\r\n \"min_doc_count\": 1\r\n },\r\n \"aggs\": {\r\n \"3\": {\r\n \"terms\": {\r\n \"field\": \"test.testId.keyword\",\r\n \"size\": 5,\r\n \"order\": {\r\n \"_term\": \"desc\"\r\n }\r\n },\r\n \"aggs\": {\r\n \"1\": {\r\n \"percentiles\": {\r\n \"field\": \"test.sslTime\",\r\n \"percents\": [\r\n 50\r\n ],\r\n \"keyed\": false\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\nWIth example of the responses:\r\n\r\nFrom the avg agg:\r\n```\r\n {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"1\": {\r\n \"value\": null\r\n },\r\n \"key\": \"VAL1\",\r\n \"doc_count\": 6\r\n }\r\n ]\r\n },\r\n \"key_as_string\": \"2018-02-03T11:45:00.000Z\",\r\n \"key\": 1517658300000,\r\n \"doc_count\": 6\r\n }\r\n```\r\n\r\nand from the percentiles agg:\r\n```\r\n {\r\n \"3\": {\r\n \"doc_count_error_upper_bound\": 0,\r\n \"sum_other_doc_count\": 0,\r\n \"buckets\": [\r\n {\r\n \"1\": {\r\n \"values\": [\r\n {\r\n \"key\": 50,\r\n \"value\": \"NaN\"\r\n }\r\n ]\r\n },\r\n \"key\": \"VAL1\",\r\n \"doc_count\": 6\r\n }\r\n ]\r\n },\r\n \"key_as_string\": \"2018-02-03T11:45:00.000Z\",\r\n \"key\": 1517658300000,\r\n \"doc_count\": 6\r\n }\r\n```\r\n\r\n\r\n", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-14T17:42:39Z" } ], "number": 29066, "title": "Inconsistent return value across metric aggregations when no docs in bucket contain field" }
{ "body": "The other metric aggregations (min/max/etc) return `null` as their XContent value and string when nothing was computed (due to empty/missing fields). Percentiles and Percentile Ranks, however, return `NaN `which is inconsistent and confusing for the user. This fixes the inconsistency by making the aggs return `null`. This applies to both the numeric value and the \"as string\" value. \r\n\r\nNote: like the metric aggs, this does not change the value if fetched directly from the percentiles object, which will return as `NaN`/`\"NaN\"`. This only changes the XContent output.\r\n\r\nI looked through all the other metric aggs and they appear to return null (or 0.0, in the case of cardinality/value_count/sum). So percentiles were the only outliers.\r\n\r\nThis is sorta a bwc break, but could also be seen as a bugfix. I'm not sure what we want to do with regards to backporting.\r\n\r\nCloses #29066\r\n", "number": 30460, "review_comments": [ { "body": "I'm just looking how similar the xContent output of InternalHDRPercentilesRanksTests and InternalTDigestPercentilesRanksTest, maybe these two test could be pushed up one level to InternalPercentilesRanksTestCase by calling the sub-tests createTestInstance() method with the appropriate values? I haven't really checked if the outputs are exactly the same, maybe I'm missing something, but it would be great to reduce the number of rather identical test cases.\r\nMaybe pushing all four cases up to AbstractPercentilesTestCase would work as well? Not sure though. ", "created_at": "2018-05-09T15:43:03Z" }, { "body": "++ this combined nicely into two tests at the InternalPercentile(Ranks)TestCase level. Couldn't move fully to the Abstract class as the API between percentile and ranks is slightly different.", "created_at": "2018-05-14T21:23:59Z" }, { "body": "You mentioned earlier you cannot push this test up into AbstractPercentilesTestCase because of some subtle difference, but I cannot spot it. Do you remember what it was? Otherwise I'd give it another try to push it up.", "created_at": "2018-06-07T10:42:00Z" }, { "body": "It's super tiny: Percentiles uses `percent() / percentAsString()` while PercentileRanks uses `percentile() / percentileAsString()`.\r\n\r\nI could collapse them into a single test and then do an `instanceOf` or `getType()` and switch on that if you think it'd be cleaner. Less test code duplication, but a bit more fragile.\r\n\r\n", "created_at": "2018-06-07T13:57:16Z" }, { "body": "Ah, I see it now. What about pulling the test up and just doing the two lines of assertions that are different in their own little helper method that you overwrite differently in both cases? I'm usually also not a fan of doing so much code acrobatics in tests but in this case I think the gain in non-duplicated lines of code would justify it. I don't think its super important though, thanks for pointing out the difference.", "created_at": "2018-06-07T14:18:01Z" }, { "body": "++ this cleaned up nicely. Thanks for the suggestion!", "created_at": "2018-06-07T15:29:15Z" }, { "body": "Is this only the case for certain locales? I would bne suprised if some JDKs would return a weird UTF8 character in all cases. To make this comment more readable it would probably also make sense to put in the bad utf8 value as octal or hex codepoint and to clarify under which circumstances this happens.", "created_at": "2018-06-14T15:23:55Z" }, { "body": "I'm not actually sure how this behaves across Locales, but I don't think it matters for us. 
We seem to always initialize the Decimal `DocValueFormat` [with `Locale.Root`](https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/search/DocValueFormat.java#L366) which I believe uses the JRE's default symbol table. \r\n\r\nSo for JDK8 the root locale will use `JRELocaleProviderAdapter` to get the symbols, which loads `sun.text.resources.FormatData`, and you can see the [`NaN` symbol is `\\uFFFD`](http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/text/resources/FormatData.java#l269)\r\n\r\nFor JDK 9+, the root locale will use `CLDRLocaleProviderAdapter`, which loads `sun.text.resources.cldr.FormatData`. And in that resource file you can see the `NaN` symbol is `\"NaN\"` (Can't find a link to the code, but you can see it in your IDE).\r\n\r\n++ to making the comment more descriptive. I'll try to distill this thread into a sane comment, and probably leave a reference to the comments here in case anyone wants to see more info.\r\n\r\n", "created_at": "2018-06-14T16:58:17Z" }, { "body": "As an aside, I really wonder why Oracle thought � would be a good default representation of \"NaN\"... :(", "created_at": "2018-06-14T17:02:58Z" }, { "body": "Hm, I don't like how this was implemented, looking at it. Going to move it over to the DocValueFormat itself, so that it only applies to the Decimal formatter when looking at doubles... otherwise it'll be checked against all formatters (geo, IP, etc). Harmless I think, but no need.", "created_at": "2018-06-14T18:54:18Z" }, { "body": "+1\r\nGreat comment, my future self will be glad its here ;-)", "created_at": "2018-06-15T08:00:23Z" }, { "body": ":) Me too!", "created_at": "2018-06-15T14:33:40Z" } ], "title": "Percentile/Ranks should return null instead of NaN when empty" }
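The review thread above attributes the "�" versus "NaN" difference to which locale data provider backs Locale.ROOT (JRE resources on JDK 8, CLDR on JDK 9+). The snippet below simply formats Double.NaN through a DecimalFormat built for Locale.ROOT, so its output varies with the JDK it runs on; that platform dependence is why the final fix checks isNaN() before handing the value to the formatter. The pattern string here is an arbitrary example.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Shows that the rendering of Double.NaN comes from the locale's symbol table,
// so it can differ between JDKs (JRE locale data vs. CLDR). Checking isNaN() first
// and returning the literal string "NaN" sidesteps the platform dependence.
public class NaNFormattingSketch {
    public static void main(String[] args) {
        DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ROOT);
        DecimalFormat format = new DecimalFormat("###.##", symbols);

        // Whatever the platform's symbol table says NaN looks like:
        System.out.println("symbol table NaN: " + symbols.getNaN());
        System.out.println("formatted NaN:    " + format.format(Double.NaN));

        // The explicit guard used by the fix: stable output on every JDK.
        double value = Double.NaN;
        String rendered = Double.isNaN(value) ? String.valueOf(Double.NaN) : format.format(value);
        System.out.println("guarded output:   " + rendered);
    }
}
```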
{ "commits": [ { "message": "Percentile/Ranks should return null instead of NaN when empty\n\nThe other metric aggregations (min/max/etc) return `null` as their\nXContent value and string when nothing was computed (due to empty/missing\nfields). Percentiles and Percentile Ranks, however, return NaN\nwhich is inconsistent and confusing for the user.\n\nThis fixes the inconsistency by making the aggs return `null`. This\napplies to both the value and the string getters.\n\nNote: like the metric aggs, this does not change the value if fetched\ndirectly from the percentiles object it will return as `NaN`/`\"NaN\"`.\nThis only changes the XContent output." }, { "message": "Merge remote-tracking branch 'origin/master' into consistent_missing_values_aggs" }, { "message": "Centralize empty test, test xcontent roundtrip with empty occasionally" }, { "message": "Merge remote-tracking branch 'origin/master' into consistent_missing_values_aggs" }, { "message": "Merge remote-tracking branch 'origin/master' into consistent_missing_values_aggs" }, { "message": "Review cleanup: centralize tests more" }, { "message": "Review cleanup: add note to 7.0 breaking changes" }, { "message": "Explicitly check for NaN and manually return value" }, { "message": "Merge remote-tracking branch 'origin/master' into consistent_missing_values_aggs" }, { "message": "Move NaN check to Decimal#DocValueFormat, better comments" } ], "files": [ { "diff": "@@ -16,3 +16,9 @@ Cross-Cluster-Search::\n \n Rest API::\n * The Clear Cache API only supports `POST` as HTTP method\n+\n+Aggregations::\n+* The Percentiles and PercentileRanks aggregations now return `null` in the REST response,\n+ instead of `NaN`. This makes it consistent with the rest of the aggregations. Note:\n+ this only applies to the REST response, the java objects continue to return `NaN` (also\n+ consistent with other aggregations)\n\\ No newline at end of file", "filename": "docs/reference/release-notes/7.0.0-alpha1.asciidoc", "status": "modified" }, { "diff": "@@ -394,6 +394,22 @@ public String format(long value) {\n \n @Override\n public String format(double value) {\n+ /**\n+ * Explicitly check for NaN, since it formats to \"�\" or \"NaN\" depending on JDK version.\n+ *\n+ * Decimal formatter uses the JRE's default symbol list (via Locale.ROOT above). In JDK8,\n+ * this translates into using {@link sun.util.locale.provider.JRELocaleProviderAdapter}, which loads\n+ * {@link sun.text.resources.FormatData} for symbols. There, `NaN` is defined as `\\ufffd` (�)\n+ *\n+ * In JDK9+, {@link sun.util.cldr.CLDRLocaleProviderAdapter} is used instead, which loads\n+ * {@link sun.text.resources.cldr.FormatData}. 
There, `NaN` is defined as `\"NaN\"`\n+ *\n+ * Since the character � isn't very useful, and makes the output change depending on JDK version,\n+ * we manually check to see if the value is NaN and return the string directly.\n+ */\n+ if (Double.isNaN(value)) {\n+ return String.valueOf(Double.NaN);\n+ }\n return format.format(value);\n }\n ", "filename": "server/src/main/java/org/elasticsearch/search/DocValueFormat.java", "status": "modified" }, { "diff": "@@ -92,9 +92,9 @@ protected XContentBuilder doXContentBody(XContentBuilder builder, Params params)\n builder.startObject(CommonFields.VALUES.getPreferredName());\n for (Map.Entry<Double, Double> percentile : percentiles.entrySet()) {\n Double key = percentile.getKey();\n- builder.field(String.valueOf(key), percentile.getValue());\n-\n- if (valuesAsString) {\n+ Double value = percentile.getValue();\n+ builder.field(String.valueOf(key), value.isNaN() ? null : value);\n+ if (valuesAsString && value.isNaN() == false) {\n builder.field(key + \"_as_string\", getPercentileAsString(key));\n }\n }\n@@ -106,8 +106,9 @@ protected XContentBuilder doXContentBody(XContentBuilder builder, Params params)\n builder.startObject();\n {\n builder.field(CommonFields.KEY.getPreferredName(), key);\n- builder.field(CommonFields.VALUE.getPreferredName(), percentile.getValue());\n- if (valuesAsString) {\n+ Double value = percentile.getValue();\n+ builder.field(CommonFields.VALUE.getPreferredName(), value.isNaN() ? null : value);\n+ if (valuesAsString && value.isNaN() == false) {\n builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), getPercentileAsString(key));\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/ParsedPercentiles.java", "status": "modified" }, { "diff": "@@ -123,9 +123,9 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n for(int i = 0; i < keys.length; ++i) {\n String key = String.valueOf(keys[i]);\n double value = value(keys[i]);\n- builder.field(key, value);\n- if (format != DocValueFormat.RAW) {\n- builder.field(key + \"_as_string\", format.format(value));\n+ builder.field(key, state.getTotalCount() == 0 ? null : value);\n+ if (format != DocValueFormat.RAW && state.getTotalCount() > 0) {\n+ builder.field(key + \"_as_string\", format.format(value).toString());\n }\n }\n builder.endObject();\n@@ -135,8 +135,8 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n double value = value(keys[i]);\n builder.startObject();\n builder.field(CommonFields.KEY.getPreferredName(), keys[i]);\n- builder.field(CommonFields.VALUE.getPreferredName(), value);\n- if (format != DocValueFormat.RAW) {\n+ builder.field(CommonFields.VALUE.getPreferredName(), state.getTotalCount() == 0 ? null : value);\n+ if (format != DocValueFormat.RAW && state.getTotalCount() > 0) {\n builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value).toString());\n }\n builder.endObject();", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/AbstractInternalHDRPercentiles.java", "status": "modified" }, { "diff": "@@ -106,9 +106,9 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n for(int i = 0; i < keys.length; ++i) {\n String key = String.valueOf(keys[i]);\n double value = value(keys[i]);\n- builder.field(key, value);\n- if (format != DocValueFormat.RAW) {\n- builder.field(key + \"_as_string\", format.format(value));\n+ builder.field(key, state.size() == 0 ? 
null : value);\n+ if (format != DocValueFormat.RAW && state.size() > 0) {\n+ builder.field(key + \"_as_string\", format.format(value).toString());\n }\n }\n builder.endObject();\n@@ -118,8 +118,8 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n double value = value(keys[i]);\n builder.startObject();\n builder.field(CommonFields.KEY.getPreferredName(), keys[i]);\n- builder.field(CommonFields.VALUE.getPreferredName(), value);\n- if (format != DocValueFormat.RAW) {\n+ builder.field(CommonFields.VALUE.getPreferredName(), state.size() == 0 ? null : value);\n+ if (format != DocValueFormat.RAW && state.size() > 0) {\n builder.field(CommonFields.VALUE_AS_STRING.getPreferredName(), format.format(value).toString());\n }\n builder.endObject();", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractInternalTDigestPercentiles.java", "status": "modified" }, { "diff": "@@ -19,6 +19,10 @@\n \n package org.elasticsearch.search.aggregations.metrics.percentiles;\n \n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.Aggregation.CommonFields;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n@@ -27,11 +31,14 @@\n \n import java.io.IOException;\n import java.util.Arrays;\n+import java.util.Collections;\n import java.util.Iterator;\n import java.util.List;\n import java.util.Map;\n import java.util.function.Predicate;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n public abstract class AbstractPercentilesTestCase<T extends InternalAggregation & Iterable<Percentile>>\n extends InternalAggregationTestCase<T> {\n \n@@ -49,7 +56,7 @@ public void setUp() throws Exception {\n \n @Override\n protected T createTestInstance(String name, List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) {\n- int numValues = randomInt(100);\n+ int numValues = frequently() ? 
randomInt(100) : 0;\n double[] values = new double[numValues];\n for (int i = 0; i < numValues; ++i) {\n values[i] = randomDouble();\n@@ -89,4 +96,53 @@ public static double[] randomPercents(boolean sorted) {\n protected Predicate<String> excludePathsFromXContentInsertion() {\n return path -> path.endsWith(CommonFields.VALUES.getPreferredName());\n }\n+\n+ protected abstract void assertPercentile(T agg, Double value);\n+\n+ public void testEmptyRanksXContent() throws IOException {\n+ double[] percents = new double[]{1,2,3};\n+ boolean keyed = randomBoolean();\n+ DocValueFormat docValueFormat = randomNumericDocValueFormat();\n+\n+ T agg = createTestInstance(\"test\", Collections.emptyList(), Collections.emptyMap(), keyed, docValueFormat, percents, new double[0]);\n+\n+ for (Percentile percentile : agg) {\n+ Double value = percentile.getValue();\n+ assertPercentile(agg, value);\n+ }\n+\n+ XContentBuilder builder = JsonXContent.contentBuilder().prettyPrint();\n+ builder.startObject();\n+ agg.doXContentBody(builder, ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+ String expected;\n+ if (keyed) {\n+ expected = \"{\\n\" +\n+ \" \\\"values\\\" : {\\n\" +\n+ \" \\\"1.0\\\" : null,\\n\" +\n+ \" \\\"2.0\\\" : null,\\n\" +\n+ \" \\\"3.0\\\" : null\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ } else {\n+ expected = \"{\\n\" +\n+ \" \\\"values\\\" : [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"key\\\" : 1.0,\\n\" +\n+ \" \\\"value\\\" : null\\n\" +\n+ \" },\\n\" +\n+ \" {\\n\" +\n+ \" \\\"key\\\" : 2.0,\\n\" +\n+ \" \\\"value\\\" : null\\n\" +\n+ \" },\\n\" +\n+ \" {\\n\" +\n+ \" \\\"key\\\" : 3.0,\\n\" +\n+ \" \\\"value\\\" : null\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \"}\";\n+ }\n+\n+ assertThat(Strings.toString(builder), equalTo(expected));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractPercentilesTestCase.java", "status": "modified" }, { "diff": "@@ -22,6 +22,8 @@\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.ParsedAggregation;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n public abstract class InternalPercentilesRanksTestCase<T extends InternalAggregation & PercentileRanks>\n extends AbstractPercentilesTestCase<T> {\n \n@@ -39,4 +41,10 @@ protected final void assertFromXContent(T aggregation, ParsedAggregation parsedA\n Class<? 
extends ParsedPercentiles> parsedClass = implementationClass();\n assertTrue(parsedClass != null && parsedClass.isInstance(parsedAggregation));\n }\n+\n+ @Override\n+ protected void assertPercentile(T agg, Double value) {\n+ assertThat(agg.percent(value), equalTo(Double.NaN));\n+ assertThat(agg.percentAsString(value), equalTo(\"NaN\"));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentilesRanksTestCase.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n \n import java.util.List;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n public abstract class InternalPercentilesTestCase<T extends InternalAggregation & Percentiles> extends AbstractPercentilesTestCase<T> {\n \n @Override\n@@ -49,4 +51,10 @@ public static double[] randomPercents() {\n }\n return percents;\n }\n+\n+ @Override\n+ protected void assertPercentile(T agg, Double value) {\n+ assertThat(agg.percentile(value), equalTo(Double.NaN));\n+ assertThat(agg.percentileAsString(value), equalTo(\"NaN\"));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/percentiles/InternalPercentilesTestCase.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import java.util.List;\n import java.util.Map;\n \n+\n public class InternalHDRPercentilesRanksTests extends InternalPercentilesRanksTestCase<InternalHDRPercentileRanks> {\n \n @Override", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/percentiles/hdr/InternalHDRPercentilesRanksTests.java", "status": "modified" } ] }
{ "body": "*Original comment by @bpintea:*\n\nWhen selecting a column of type DATE, a ISO8601 DATETIME string is returned.\r\nWhen trying to cast the returned string as DATE, an exception is thrown, the trouble seeming to be related to the milliseconds field: `Invalid format: \"1953-09-02T00:00:00.000Z\" is malformed at \".000Z\"`. \r\n\r\n```\r\nsql> select birth_date from test_emp limit 1;\r\n birth_date\r\n------------------------\r\n1953-09-02T00:00:00.000Z\r\nsql>\r\nsql> SELECT CAST('1953-09-02T00:00:00.000Z' AS DATE);\r\nServer error [Server encountered an error [cannot cast [1953-09-02T00:00:00.000Z] to [Date]:Invalid format: \"1953-09-02T00:00:00.000Z\" is malformed at \".000Z\"]. [SqlIllegalArgumentException[cannot cast [1953-09-02T00:00:00.000Z] to [Date]:Invalid format: \"1953-09-02T00:00:00.000Z\" is malformed at \".000Z\"]; nested: IllegalArgumentException[Invalid format: \"1953-09-02T00:00:00.000Z\" is malformed at \".000Z\"];\r\n\tat org.elasticsearch.xpack.sql.type.DataTypeConversion$Conversion.lambda$fromString$15(DataTypeConversion.java:372)\r\n\tat org.elasticsearch.xpack.sql.type.DataTypeConversion$Conversion.convert(DataTypeConversion.java:385)\r\n\tat org.elasticsearch.xpack.sql.type.DataTypeConversion.convert(DataTypeConversion.java:313)\r\n\tat org.elasticsearch.xpack.sql.expression.function.scalar.Cast.fold(Cast.java:73)\r\n\tat org.elasticsearch.xpack.sql.optimizer.Optimizer$ConstantFolding.fold(Optimizer.java:1114)\r\n\tat org.elasticsearch.xpack.sql.optimizer.Optimizer$ConstantFolding.rule(Optimizer.java:1099)\r\n\tat org.elasticsearch.xpack.sql.tree.Node.transformDown(Node.java:173)\r\n\tat org.elasticsearch.xpack.sql.plan.QueryPlan.lambda$transformExpressionsDown$2(QueryPlan.java:72)\r\n\tat org.elasticsearch.xpack.sql.plan.QueryPlan.doTransformExpression(QueryPlan.java:81)\r\n\tat org.elasticsearch.xpack.sql.plan.QueryPlan.doTransformExpression(QueryPlan.java:97)\r\n\tat org.elasticsearch.xpack.sql.plan.QueryPlan.lambda$transformExpressionsDown$3(QueryPlan.java:72)\r\n\tat org.elasticsearch.xpack.sql.tree.NodeInfo.lambda$transform$0(NodeInfo.java:67)\r\n\tat org.elasticsearch.xpack.sql.tree.NodeInfo$3.innerTransform(NodeInfo.java:128)\r\n\tat org.elasticsearch.xpack.sql.tree.NodeInfo.transform(NodeInfo.java:71)\r\n\tat org.elasticsearch.xpack.sql.tree.Node.transformNodeProps(Node.java:252)\r\n\tat org.elasticsearch.xpack.sql.tree.Node.lambda$transformPropertiesDown$12(Node.java:236)\r\n\tat org.elasticsearch.xpack.sql.tree.Node.transformDown(Node.java:173)\r\n\tat org.elasticsearch.xpack.sql.tree.Node.transformPropertiesDown(Node.java:236)\r\n\tat org.elasticsearch.xpack.sql.plan.QueryPlan.transformExpressionsDown(QueryPlan.java:72)\r\n\tat org.elasticsearch.xpack.sql.optimizer.Optimizer$OptimizerExpressionRule.apply(Optimizer.java:1435)\r\n\tat org.elasticsearch.xpack.sql.optimizer.Optimizer$OptimizerExpressionRule.apply(Optimizer.java:1425)\r\n\tat org.elasticsearch.xpack.sql.rule.RuleExecutor$Transformation.<init>(RuleExecutor.java:94)\r\n\tat org.elasticsearch.xpack.sql.rule.RuleExecutor.executeWithInfo(RuleExecutor.java:167)\r\n\tat org.elasticsearch.xpack.sql.rule.RuleExecutor.execute(RuleExecutor.java:142)\r\n\tat org.elasticsearch.xpack.sql.optimizer.Optimizer.optimize(Optimizer.java:107)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.lambda$optimizedPlan$3(SqlSession.java:155)\r\n\tat org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60)\r\n\tat 
org.elasticsearch.xpack.sql.session.SqlSession.preAnalyze(SqlSession.java:147)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.analyzedPlan(SqlSession.java:108)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.optimizedPlan(SqlSession.java:155)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.physicalPlan(SqlSession.java:159)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.sqlExecutable(SqlSession.java:168)\r\n\tat org.elasticsearch.xpack.sql.session.SqlSession.sql(SqlSession.java:163)\r\n\tat org.elasticsearch.xpack.sql.execution.PlanExecutor.sql(PlanExecutor.java:76)\r\n\tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.operation(TransportSqlQueryAction.java:76)\r\n\tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:63)\r\n\tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:43)\r\n\tat org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:143)\r\n\tat org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81)\r\n\tat org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83)\r\n\tat org.elasticsearch.xpack.sql.plugin.RestSqlQueryAction.lambda$prepareRequest$0(RestSqlQueryAction.java:60)\r\n\tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:97)\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:240)\r\n\tat org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:336)\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174)\r\n\tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:467)\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137)\r\n\tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499)\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\r\n\tat java.base/java.lang.Thread.run(Thread.java:844)\r\nCaused by: java.lang.IllegalArgumentException: Invalid format: \"1953-09-02T00:00:00.000Z\" is malformed at \".000Z\"\r\n\tat org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187)\r\n\tat 
org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826)\r\n\tat org.elasticsearch.xpack.sql.type.DataTypeConversion$Conversion.lambda$fromString$15(DataTypeConversion.java:368)\r\n\t... 93 more\r\n]]\r\nsql>\r\nsql> SELECT CAST('1953-09-02T00:00:00Z' AS DATE);\r\nCAST(1953-09-02T00:00:00Z)\r\n--------------------------\r\n-515376000000\r\n```", "comments": [ { "body": "*Original comment by @costin:*\n\nFixed by LINK REDACTED ", "created_at": "2018-04-16T17:41:58Z" }, { "body": "*Original comment by @bpintea:*\n\nThis seems to still remain an issue: if milliseconds are specified (as returned when SELECTing a DATE column) to a CAST, an exception is thrown.", "created_at": "2018-04-20T15:30:57Z" } ], "number": 30002, "title": "SQL: inconsistent date(time) format handling." }
{ "body": "Dates internally contain milliseconds (which appear when converting them\r\nto Strings) however parsing does not accept them (and is being strict).\r\nThe parser has been changed so that Date is mandatory but the time\r\n(including its fractions such as millis) are optional.\r\n\r\nFix #30002", "number": 30419, "review_comments": [], "title": "SQL: Fix parsing of dates with milliseconds" }
{ "commits": [ { "message": "SQL: Fix parsing of dates with milliseconds\n\nDates internally contain milliseconds (which appear when converting them\nto Strings) however parsing does not accept them (and is being strict).\nThe parser has been changed so that Date is mandatory but the time\n(including its fractions such as millis) are optional.\n\nFix #30002" }, { "message": "Merge remote-tracking branch 'remotes/upstream/master' into fix-30002" }, { "message": "Try to fix changelog conflict" } ], "files": [ { "diff": "@@ -115,6 +115,9 @@ Rollup::\n * Validate timezone in range queries to ensure they match the selected job when\n searching ({pull}30338[#30338])\n \n+SQL::\n+* Fix parsing of Dates containing milliseconds ({pull}30419[#30419])\n+\n [float]\n === Regressions\n Fail snapshot operations early when creating or deleting a snapshot on a repository that has been\n@@ -201,6 +204,8 @@ Rollup::\n * Validate timezone in range queries to ensure they match the selected job when\n searching ({pull}30338[#30338])\n \n+SQL::\n+* Fix parsing of Dates containing milliseconds ({pull}30419[#30419])\n \n Allocation::\n \n@@ -241,6 +246,9 @@ Reduce the number of object allocations made by {security} when resolving the in\n \n Respect accept header on requests with no handler ({pull}30383[#30383])\n \n+SQL::\n+* Fix parsing of Dates containing milliseconds ({pull}30419[#30419])\n+\n //[float]\n //=== Regressions\n ", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -31,7 +31,7 @@\n */\n public abstract class DataTypeConversion {\n \n- private static final DateTimeFormatter UTC_DATE_FORMATTER = ISODateTimeFormat.dateTimeNoMillis().withZoneUTC();\n+ private static final DateTimeFormatter UTC_DATE_FORMATTER = ISODateTimeFormat.dateOptionalTimeParser().withZoneUTC();\n \n /**\n * Returns the type compatible with both left and right types", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/type/DataTypeConversion.java", "status": "modified" }, { "diff": "@@ -82,10 +82,15 @@ public void testConversionToDate() {\n Conversion conversion = DataTypeConversion.conversionFor(DataType.KEYWORD, to);\n assertNull(conversion.convert(null));\n \n- // TODO we'd like to be able to optionally parse millis here I think....\n assertEquals(new DateTime(1000L, DateTimeZone.UTC), conversion.convert(\"1970-01-01T00:00:01Z\"));\n assertEquals(new DateTime(1483228800000L, DateTimeZone.UTC), conversion.convert(\"2017-01-01T00:00:00Z\"));\n assertEquals(new DateTime(18000000L, DateTimeZone.UTC), conversion.convert(\"1970-01-01T00:00:00-05:00\"));\n+ \n+ // double check back and forth conversion\n+ DateTime dt = DateTime.now(DateTimeZone.UTC);\n+ Conversion forward = DataTypeConversion.conversionFor(DataType.DATE, DataType.KEYWORD);\n+ Conversion back = DataTypeConversion.conversionFor(DataType.KEYWORD, DataType.DATE);\n+ assertEquals(dt, back.convert(forward.convert(dt)));\n Exception e = expectThrows(SqlIllegalArgumentException.class, () -> conversion.convert(\"0xff\"));\n assertEquals(\"cannot cast [0xff] to [Date]:Invalid format: \\\"0xff\\\" is malformed at \\\"xff\\\"\", e.getMessage());\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/type/DataTypeConversionTests.java", "status": "modified" } ] }
{ "body": "Opening a PR for this backport of #29483 just to be on the safe side, since the backport involves a versioning change in addition to a cherry pick from master. Would appreciate a quick sanity check!", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-05-03T02:22:28Z" } ], "number": 30319, "title": "6.x Backport: Terms query validate bug " }
{ "body": "3rd and final step in making versioning changes for #29033: I made `QueryExplanation.index` an optional field, so here's what I did in order to avoid any CI failures related to this versioning change:\r\n\r\n1. in the initial PR, make the version change at v7 since PR is going to master #29483 \r\n2. backport to 6.x, changing the version to 6.4.0 #30319 \r\n3. Now that 6.x is backported, change master to support from 6.4.0: this PR\r\n\r\nI understand that usually steps 2 and 3 wouldn't require PRs, but I wanted to make sure to get this right, so I'd appreciate a quick review!", "number": 30390, "review_comments": [], "title": "add version compatibility from 6.4.0 after backport, see #30319" }
{ "commits": [ { "message": "add version compatibility from 6.4.0 after backport, see #30319" }, { "message": "Merge branch 'master' into forward-port-query-explanation" }, { "message": "Merge branch 'master' into forward-port-query-explanation" } ], "files": [ { "diff": "@@ -75,7 +75,7 @@ public String getExplanation() {\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n- if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ if (in.getVersion().onOrAfter(Version.V_6_4_0)) {\n index = in.readOptionalString();\n } else {\n index = in.readString();\n@@ -92,7 +92,7 @@ public void readFrom(StreamInput in) throws IOException {\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n- if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ if (out.getVersion().onOrAfter(Version.V_6_4_0)) {\n out.writeOptionalString(index);\n } else {\n out.writeString(index);", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java", "status": "modified" } ] }
{ "body": "Fixes #29033, currently WIP as I need to fix a few integration tests. As @colings86 described on the issue, we're missing a step for terms queries, we need to rewrite the query on the coordinating node.\r\n\r\nSteps to reproduce (and confirm fix)\r\n\r\n1. Run a local server: `./gradlew server`\r\n\r\n2. create index `curl -XPUT 'localhost:9200/twitter' -H 'Content-Type: application/json' -d'{}'`\r\n\r\n3. add mapping `curl -XPUT 'localhost:9200/twitter/_mapping/_doc' -H 'Content-Type: application/json' -d'{ \"properties\": { \"user\": { \"type\": \"integer\" }, \"followers\": { \"type\": \"integer\" } } }'`\r\n\r\n4. run a validate request on a terms query: `curl -XPOST 'localhost:9200/twitter/_validate/query?explain=true' -H 'Content-Type: application/json' -d'{ \"query\": { \"terms\": { \"user\": { \"index\": \"twitter\", \"type\": \"_doc\", \"id\": \"2\", \"path\": \"followers\" } } } }'`\r\n", "comments": [ { "body": "Added some integration test fixes, and my own comments about those fixes; I could use some advice on the best way to handle this error in the right way at the point in execution where it's occurring (on the coordinating node, so there are no shards involved yet in the query).", "created_at": "2018-04-11T21:03:19Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-12T08:21:05Z" }, { "body": "@jpountz happy to wait", "created_at": "2018-04-19T13:56:10Z" }, { "body": "@elasticmachine please test this", "created_at": "2018-04-30T14:12:27Z" }, { "body": "@elasticmachine test this please", "created_at": "2018-04-30T20:50:54Z" } ], "number": 29483, "title": "Fix failure for validate API on a terms query" }
{ "body": "3rd and final step in making versioning changes for #29033: I made `QueryExplanation.index` an optional field, so here's what I did in order to avoid any CI failures related to this versioning change:\r\n\r\n1. in the initial PR, make the version change at v7 since PR is going to master #29483 \r\n2. backport to 6.x, changing the version to 6.4.0 #30319 \r\n3. Now that 6.x is backported, change master to support from 6.4.0: this PR\r\n\r\nI understand that usually steps 2 and 3 wouldn't require PRs, but I wanted to make sure to get this right, so I'd appreciate a quick review!", "number": 30390, "review_comments": [], "title": "add version compatibility from 6.4.0 after backport, see #30319" }
{ "commits": [ { "message": "add version compatibility from 6.4.0 after backport, see #30319" }, { "message": "Merge branch 'master' into forward-port-query-explanation" }, { "message": "Merge branch 'master' into forward-port-query-explanation" } ], "files": [ { "diff": "@@ -75,7 +75,7 @@ public String getExplanation() {\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n- if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ if (in.getVersion().onOrAfter(Version.V_6_4_0)) {\n index = in.readOptionalString();\n } else {\n index = in.readString();\n@@ -92,7 +92,7 @@ public void readFrom(StreamInput in) throws IOException {\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n- if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ if (out.getVersion().onOrAfter(Version.V_6_4_0)) {\n out.writeOptionalString(index);\n } else {\n out.writeString(index);", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java", "status": "modified" } ] }
{ "body": "On a cluster with no `x-pack` and version `6.2.4`:\r\n\r\n```\r\nPOST http://127.0.0.1:9200/_xpack/migration/upgrade/.watches HTTP/1.1\r\nConnection: Keep-Alive\r\nAccept: application/json\r\nContent-Length: 0\r\nHost: 127.0.0.1:9200\r\n```\r\n\r\n```\r\nHTTP/1.1 400 Bad Request\r\ncontent-type: text/plain; charset=UTF-8\r\ncontent-length: 79\r\n\r\nNo handler found for uri [/_xpack/migration/upgrade/.watches] and method [POST]\r\n```\r\n\r\n\r\n\r\nIdeally it returns `application/json` as requested by the `Accept` header. This is how we currently return `405` and `406`\r\n\r\n```\r\nPOST http://127.0.0.1:9200/_blah HTTP/1.1\r\nConnection: Keep-Alive\r\nAccept: application/json\r\nContent-Length: 0\r\nHost: 127.0.0.1:9200\r\n```\r\n\r\n```\r\n{\"error\":\"Incorrect HTTP method for uri [/_blah] and method [POST], allowed: [HEAD, DELETE, PUT, GET]\",\"status\":405}\r\n```\r\n\r\nOne could also argue it should not be a `400` but a `404` if the resource does not exist:\r\n\r\n```\r\nGET http://127.0.0.1:9200/_nodes/safdasd/asdasd/sadasd HTTP/1.1\r\nConnection: Keep-Alive\r\nAccept: application/json\r\nContent-Length: 0\r\nHost: 127.0.0.1:9200\r\n```\r\n\r\n```\r\nNo handler found for uri [/_nodes/safdasd/asdasd/sadasd] and method [GET]\r\n```\r\n\r\n\r\n\r\n\r\n", "comments": [ { "body": "Can I pick up this bug? Any inputs will be appreciated.", "created_at": "2018-05-03T02:13:38Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-05-03T02:17:29Z" } ], "number": 30329, "title": "400 Bad Request always returns plain text" }
{ "body": "Today when processing a request for a URL path for which we can not find a handler we send back a plain-text response. Yet, we have the accept header in our hand and can respect the accepted media type of the request. This commit addresses this.\r\n\r\nCloses #30329\r\n", "number": 30383, "review_comments": [ { "body": "super minor nit, you could use `newErrorBuilder()` to disable filtering ", "created_at": "2018-05-04T09:56:50Z" }, { "body": "Good one.", "created_at": "2018-05-04T10:15:46Z" } ], "title": "Respect accept header on no handler" }
{ "commits": [ { "message": "Respect accept header on no handler\n\nToday when processing a request for a URL path for which we can not find\na handler we send back a plain-text response. Yet, we have the accept\nheader in our hand and can respect the accepted media type of the\nrequest. This commit addresses this." }, { "message": "Add changelog entry" }, { "message": "Use error builder" }, { "message": "Merge remote-tracking branch 'origin/master' into no-handler-accept\n\n* origin/master:\n Set the new lucene version for 6.4.0" }, { "message": "Merge branch 'master' into no-handler-accept\n\n* master:\n Docs: remove transport_client from CCS role example (#30263)\n [Rollup] Validate timezone in range queries (#30338)\n Use readFully() to read bytes from CipherInputStream (#28515)\n Fix docs Recently merged #29229 had a doc bug that broke the doc build. This commit fixes.\n Test: remove cluster permission from CCS user (#30262)\n Add Get Settings API support to java high-level rest client (#29229)\n Watcher: Remove unneeded index deletion in tests" } ], "files": [ { "diff": "@@ -107,6 +107,7 @@ ones that the user is authorized to access in case field level security is enabl\n Fixed prerelease version of elasticsearch in the `deb` package to sort before GA versions\n ({pull}29000[#29000])\n \n+Respect accept header on requests with no handler ({pull}30383[#30383])\n Rollup::\n * Validate timezone in range queries to ensure they match the selected job when\n searching ({pull}30338[#30338])", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -0,0 +1,59 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.http;\n+\n+import org.apache.http.message.BasicHeader;\n+import org.apache.http.util.EntityUtils;\n+import org.elasticsearch.client.Response;\n+import org.elasticsearch.client.ResponseException;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+\n+public class NoHandlerIT extends HttpSmokeTestCase {\n+\n+ public void testNoHandlerRespectsAcceptHeader() throws IOException {\n+ runTestNoHandlerRespectsAcceptHeader(\n+ \"application/json\",\n+ \"application/json; charset=UTF-8\",\n+ \"\\\"error\\\":\\\"no handler found for uri [/foo/bar/baz/qux/quux] and method [GET]\\\"\");\n+ runTestNoHandlerRespectsAcceptHeader(\n+ \"application/yaml\",\n+ \"application/yaml\",\n+ \"error: \\\"no handler found for uri [/foo/bar/baz/qux/quux] and method [GET]\\\"\");\n+ }\n+\n+ private void runTestNoHandlerRespectsAcceptHeader(\n+ final String accept, final String contentType, final String expect) throws IOException {\n+ final ResponseException e =\n+ expectThrows(\n+ ResponseException.class,\n+ () -> getRestClient().performRequest(\"GET\", \"/foo/bar/baz/qux/quux\", new BasicHeader(\"Accept\", accept)));\n+\n+ final Response response = e.getResponse();\n+ assertThat(response.getHeader(\"Content-Type\"), equalTo(contentType));\n+ assertThat(EntityUtils.toString(e.getResponse().getEntity()), containsString(expect));\n+ assertThat(response.getStatusLine().getStatusCode(), is(400));\n+ }\n+\n+}", "filename": "qa/smoke-test-http/src/test/java/org/elasticsearch/http/NoHandlerIT.java", "status": "added" }, { "diff": "@@ -401,9 +401,15 @@ private void handleOptionsRequest(RestRequest request, RestChannel channel, Set<\n * Handle a requests with no candidate handlers (return a 400 Bad Request\n * error).\n */\n- private void handleBadRequest(RestRequest request, RestChannel channel) {\n- channel.sendResponse(new BytesRestResponse(BAD_REQUEST,\n- \"No handler found for uri [\" + request.uri() + \"] and method [\" + request.method() + \"]\"));\n+ private void handleBadRequest(RestRequest request, RestChannel channel) throws IOException {\n+ try (XContentBuilder builder = channel.newErrorBuilder()) {\n+ builder.startObject();\n+ {\n+ builder.field(\"error\", \"no handler found for uri [\" + request.uri() + \"] and method [\" + request.method() + \"]\");\n+ }\n+ builder.endObject();\n+ channel.sendResponse(new BytesRestResponse(BAD_REQUEST, builder));\n+ }\n }\n \n /**", "filename": "server/src/main/java/org/elasticsearch/rest/RestController.java", "status": "modified" }, { "diff": "@@ -51,6 +51,6 @@ public void testActionsFail() throws Exception {\n ResponseException exception = expectThrows(ResponseException.class, () -> client().performRequest(\"put\",\n MachineLearning.BASE_PATH + \"anomaly_detectors/foo\", Collections.emptyMap(),\n new StringEntity(Strings.toString(xContentBuilder), ContentType.APPLICATION_JSON)));\n- assertThat(exception.getMessage(), containsString(\"No handler found for uri [/_xpack/ml/anomaly_detectors/foo] and method [PUT]\"));\n+ assertThat(exception.getMessage(), containsString(\"no handler found for uri [/_xpack/ml/anomaly_detectors/foo] and method [PUT]\"));\n }\n }", "filename": "x-pack/qa/ml-disabled/src/test/java/org/elasticsearch/xpack/ml/integration/MlPluginDisabledIT.java", "status": "modified" } ] }
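A hedged standalone variant of the `NoHandlerIT` check above, using the low-level REST client directly; the host, port, and printed fields are assumptions, while the `performRequest(method, endpoint, headers...)` signature is the same 6.x-era one the test itself uses.

```java
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

// Sketch only: with the fix, the 400 "no handler" body honors the Accept header,
// so Content-Type below comes back as application/yaml instead of text/plain.
public class NoHandlerSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            client.performRequest("GET", "/foo/bar/baz/qux/quux",
                    new BasicHeader("Accept", "application/yaml"));
        } catch (ResponseException e) {
            Response response = e.getResponse();
            System.out.println(response.getStatusLine().getStatusCode());   // 400
            System.out.println(response.getHeader("Content-Type"));         // application/yaml
        }
    }
}
```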
{ "body": "Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3) we may fail to properly replicate operation when a mapping update on master fails. If a bulk operations needs a mapping update half way, it will send a request to the master before continuing to index the operations. If that request times out or isn't acked (i.e., even one node in the cluster didn't process it within 30s), we end up throwing the exception and aborting the entire bulk. This is a problem because all operations that were processed so far are not replicated any more to the replicas. Although these operations were never \"acked\" to the user (we threw an error) it cause the local checkpoint on the replicas to lag (on 6.x) and the primary and replica to diverge. \r\n\r\nThis PR does a couple of things:\r\n1) Most importantly, treat *any* mapping update failure as a document level failure, meaning only the relevant indexing operation will fail.\r\n2) Removes the mapping update callbacks from `IndexShard.applyIndexOperationOnPrimary` and similar methods for simpler execution. We don't use exceptions any more when a mapping update was successful.\r\n\r\nI think we need to do more work here (the fact that a single slow node can prevent those mappings updates from being acked and thus fail operations is bad), but I want to keep this as small as I can (it is already too big). \r\n\r\nNote that this needs to go to 5.x but I'm not sure how cleanly it will back port. I'll evaluate once this has been reviewed and put into 7.0 & 6.x", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-04-30T06:44:14Z" }, { "body": "@ywelsch thanks. I addressed your comments. Care to take another look?", "created_at": "2018-04-30T12:48:29Z" }, { "body": " sample packaging tests", "created_at": "2018-04-30T18:35:09Z" }, { "body": "run sample packaging tests", "created_at": "2018-04-30T18:35:40Z" }, { "body": "run sample packaging tests", "created_at": "2018-04-30T20:36:49Z" }, { "body": "this is backported to 6.3 . 5.6 still pending evaluation", "created_at": "2018-05-01T09:55:21Z" }, { "body": "We have encountered the same issue with elasticsearch v6.2.4, can you merge it to the 6.2 branch also?", "created_at": "2018-05-03T10:20:27Z" }, { "body": "Hi\r\nI've referenced this issue in #30351 (now closed, as the first part was caused by this issue exactly). However, this is a critical issue in our production elasticsearch cluster and we must find a solution soon. Could you backport this pull request to elasticsearch v6.2 as well?\r\n\r\nThis would be extremely helpful.\r\n\r\nThanks!", "created_at": "2018-05-03T14:07:36Z" }, { "body": "@bleskes any plans to backport it to 6.2?\r\nI know it is in 6.3, however, as it is a minor release I'm worried about using it as a snapshot in production.\r\nAny other way to get it as bugfix on top of 6.2? maybe on a temp branch :)\r\nIf not, what is the readiness of 6.3?\r\n\r\nThanks for your amazing help.", "created_at": "2018-05-05T05:15:47Z" }, { "body": "We are actively working on releasing 6.3.0, which will contain this fix. Once the 6.3.0 is out, we will no longer release a new patch release to the 6.2.x series. The reason is that 6.3 is minor release and is fully compatible with previous 6.x releases. 
You just as easily upgrade to it as you would to a 6.2.x using the standard rolling upgrade mechansim.", "created_at": "2018-05-05T13:52:51Z" }, { "body": "Thanks @bleskes, is it safe to assume that 6.3.0 will be released in the next few days?", "created_at": "2018-05-06T09:33:04Z" }, { "body": "> Thanks @bleskes, is it safe to assume that 6.3.0 will be released in the next few days?\r\n\r\nWe can't say anything concrete. I don't think in the next few days is realistic. \r\n\r\n\r\n\r\n", "created_at": "2018-05-06T09:55:59Z" }, { "body": "Got it. Thaks for responding. ", "created_at": "2018-05-07T07:54:04Z" }, { "body": "@bleskes, do you have any ideia about the 6.3 launch? I mean, the plan is to launch in a few days or few months or several months?", "created_at": "2018-05-30T13:32:09Z" }, { "body": "@candreoliveira I can't say for sure but several months is extremely unlikely.", "created_at": "2018-05-30T14:44:44Z" }, { "body": "@bleskes any more updates on when 6.3 will be released? This specific issue is causing us a lot of problems.", "created_at": "2018-06-10T01:46:07Z" }, { "body": "@jt6211 still working on stabilizing it.. soon is all I can say at this point.", "created_at": "2018-06-10T19:37:37Z" } ], "number": 30244, "title": "Bulk operation fail to replicate operations when a mapping update times out" }
{ "body": "Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3)\r\n\r\nwe may fail to properly replicate operation when a mapping update on master fails.\r\nIf a bulk operations needs a mapping update half way, it will send a request to the\r\nmaster before continuing to index the operations. If that request times out or isn't\r\nacked (i.e., even one node in the cluster didn't process it within 30s), we end up\r\nthrowing the exception and aborting the entire bulk. This is a problem because all\r\noperations that were processed so far are not replicated any more to the replicas.\r\nAlthough these operations were never \"acked\" to the user (we threw an error) it\r\ncause the local checkpoint on the replicas to lag (on 6.x) and the primary and replica\r\nto diverge.\r\n\r\nThis PR changes the logic to treat *any* mapping update failure as a document level\r\nfailure, meaning only the relevant indexing operation will fail.\r\n\r\nBack port of #30244\r\n", "number": 30379, "review_comments": [ { "body": "5.x does not have a separate MasterService. Maybe just use `BlockClusterStateProcessing` instead of introducing this class?", "created_at": "2018-05-04T11:19:21Z" }, { "body": "Good suggestion. Will do.", "created_at": "2018-05-04T12:03:03Z" } ], "title": "Bulk operation fail to replicate operations when a mapping update times out" }
{ "commits": [ { "message": "Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3)\nwe may fail to properly replicate operation when a mapping update on master fails.\nIf a bulk operations needs a mapping update half way, it will send a request to the\nmaster before continuing to index the operations. If that request times out or isn't\nacked (i.e., even one node in the cluster didn't process it within 30s), we end up\nthrowing the exception and aborting the entire bulk. This is a problem because all\noperations that were processed so far are not replicated any more to the replicas.\nAlthough these operations were never \"acked\" to the user (we threw an error) it\ncause the local checkpoint on the replicas to lag (on 6.x) and the primary and replica\nto diverge.\n\nThis PR changes the logic to treat *any* mapping update failure as a document level\nfailure, meaning only the relevant indexing operation will fail.\n\nBackport of #30244" }, { "message": "rename" }, { "message": "remove" } ], "files": [ { "diff": "@@ -52,7 +52,6 @@\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.index.get.GetResult;\n import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -475,8 +474,9 @@ public static Engine.IndexResult executeIndexRequestOnPrimary(IndexRequest reque\n // which are bubbled up\n try {\n mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update);\n- } catch (IllegalArgumentException e) {\n- // throws IAE on conflicts merging dynamic mappings\n+ } catch (Exception e) {\n+ // failure to update the mapping should translate to a failure of specific requests. Other requests\n+ // still need to be executed and replicated.\n return new Engine.IndexResult(e, request.version());\n }\n try {\n@@ -511,7 +511,9 @@ public static Engine.DeleteResult executeDeleteRequestOnPrimary(DeleteRequest re\n mappingUpdateNeeded = true;\n mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update);\n }\n- } catch (MapperParsingException | IllegalArgumentException e) {\n+ } catch (Exception e) {\n+ // failure to update the mapping should translate to a failure of specific requests. 
Other requests\n+ // still need to be executed and replicated.\n return new Engine.DeleteResult(e, request.version(), false);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -27,6 +27,8 @@\n import org.elasticsearch.action.DocWriteResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n@@ -74,6 +76,7 @@\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.discovery.ClusterDiscoveryConfiguration;\n import org.elasticsearch.test.discovery.TestZenDiscovery;\n+import org.elasticsearch.test.disruption.BlockClusterStateProcessing;\n import org.elasticsearch.test.disruption.IntermittentLongGCDisruption;\n import org.elasticsearch.test.disruption.LongGCDisruption;\n import org.elasticsearch.test.disruption.NetworkDisruption;\n@@ -121,6 +124,7 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -1373,6 +1377,51 @@ public void clusterChanged(ClusterChangedEvent event) {\n }\n }\n \n+ @TestLogging(\n+ \"_root:DEBUG,\"\n+ + \"org.elasticsearch.action.bulk:TRACE,\"\n+ + \"org.elasticsearch.action.get:TRACE,\"\n+ + \"org.elasticsearch.cluster.service:TRACE,\"\n+ + \"org.elasticsearch.discovery:TRACE,\"\n+ + \"org.elasticsearch.indices.cluster:TRACE,\"\n+ + \"org.elasticsearch.indices.recovery:TRACE,\"\n+ + \"org.elasticsearch.index.seqno:TRACE,\"\n+ + \"org.elasticsearch.index.shard:TRACE\")\n+ public void testMappingTimeout() throws Exception {\n+ startCluster(3);\n+ assertAcked(prepareCreate(\"test\").setSettings(Settings.builder()\n+ .put(\"index.number_of_shards\", 1)\n+ .put(\"index.number_of_replicas\", 1)\n+ .put(\"index.routing.allocation.exclude._name\", internalCluster().getMasterName())\n+ .build()));\n+\n+ // create one field\n+ index(\"test\", \"doc\", \"1\", \"{ \\\"f\\\": 1 }\");\n+\n+ ensureGreen();\n+\n+ assertAcked(client().admin().cluster().prepareUpdateSettings().setTransientSettings(\n+ Settings.builder().put(\"indices.mapping.dynamic_timeout\", \"1ms\")));\n+\n+ ServiceDisruptionScheme disruption = new BlockClusterStateProcessing(internalCluster().getMasterName(), random());\n+ setDisruptionScheme(disruption);\n+\n+ disruption.startDisrupting();\n+\n+ BulkRequestBuilder bulk = client().prepareBulk();\n+ bulk.add(client().prepareIndex(\"test\", \"doc\", \"2\").setSource(\"{ \\\"f\\\": 1 }\", XContentType.JSON));\n+ bulk.add(client().prepareIndex(\"test\", \"doc\", \"3\").setSource(\"{ \\\"g\\\": 1 }\", XContentType.JSON));\n+ bulk.add(client().prepareIndex(\"test\", \"doc\", \"4\").setSource(\"{ \\\"f\\\": 1 }\", XContentType.JSON));\n+ BulkResponse bulkResponse = bulk.get();\n+ 
assertTrue(bulkResponse.hasFailures());\n+\n+ disruption.stopDisrupting();\n+\n+ refresh(\"test\");\n+ assertSearchHits(client().prepareSearch(\"test\").setPreference(\"_primary\").get(), \"1\", \"2\", \"4\");\n+ assertSearchHits(client().prepareSearch(\"test\").setPreference(\"_replica\").get(), \"1\", \"2\", \"4\");\n+ }\n+\n private void createRandomIndex(String idxName) throws ExecutionException, InterruptedException {\n assertAcked(prepareCreate(idxName, 0, Settings.builder().put(\"number_of_shards\", between(1, 20))\n .put(\"number_of_replicas\", 0)));", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
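Not part of the backport: a small client-side sketch of what the change above means for callers. Since a timed-out mapping update now surfaces as a per-document failure rather than aborting (and silently under-replicating) the whole bulk, the remaining items are still indexed and replicated, and the failed ones can be inspected and retried. The helper name is hypothetical; the BulkResponse calls are the standard ones used in the test above.

```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkResponse;

// Hypothetical helper: report per-item failures (e.g. the mapping-update timeout
// provoked in testMappingTimeout above) without assuming the whole bulk failed.
public class BulkFailureSketch {
    static void reportFailures(BulkResponse bulkResponse) {
        if (bulkResponse.hasFailures() == false) {
            return;
        }
        for (BulkItemResponse item : bulkResponse) {
            if (item.isFailed()) {
                System.out.println("item " + item.getItemId() + " ["
                        + item.getId() + "] failed: " + item.getFailureMessage());
            }
        }
    }
}
```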
{ "body": "Hi,\r\n\r\nI'm migrating an ES cluster from v1.7.5 to v5.1.2 and noticed that in v5.1.2 empy geo_point fields are being treated as if they were valid points with coordinates [-180, -90] within geo_xxx queries (used to work fine in v1.7.5). I'm trying to simply get the geo_bounds of my geo_point field to determine the area it covers. Previously I used \"max\" and \"min\" aggs on each coordinate (lat and lon) to accomplish this (this is just for our own ease of implementation) and now this is not even working (returns \"null\" for every coordinates max and min).\r\nHas anyone experienced this? How do I get the geo_bounds without the empty points being considered? I've tried \"bool\" > \"must\" > \"exists\" without success.\r\nPlease see below some examples of what I just reported.\r\n### Mapping\r\n```json\r\n \"Location\": {\r\n \"type\": \"geo_point\"\r\n },\r\n```\r\n### Data Samples\r\n```json\r\n\"Location\": \"-0.533218, 34.5575\", << non-empty valid point\r\n\"Location\": \"\", << empty invalid point\r\n```\r\n### Query\r\n```json\r\nGET _search\r\n{\r\n \"query\": {\r\n \"bool\" : {\r\n \"must\" : {\r\n \"exists\": { \"field\": \"Location\" }\r\n }\r\n }\r\n },\r\n \"size\": 0,\r\n \"aggs\": {\r\n \"Location_bounds\": {\r\n \"geo_bounds\": {\r\n \"field\": \"Location\"\r\n }\r\n },\r\n \"Location_lon_min\": {\r\n \"min\": {\r\n \"field\": \"Location.lon\"\r\n }\r\n },\r\n \"Location_lon_max\": {\r\n \"max\": {\r\n \"field\": \"Location.lon\"\r\n }\r\n },\r\n \"Location_lat_min\": {\r\n \"min\": {\r\n \"field\": \"Location.lat\"\r\n }\r\n },\r\n \"Location_lat_max\": {\r\n \"max\": {\r\n \"field\": \"Location.lat\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n### Query result\r\n```json\r\n{\r\n \"took\": 3,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 66000,\r\n \"max_score\": 0,\r\n \"hits\": []\r\n },\r\n \"aggregations\": {\r\n \"Location_lat_min\": {\r\n \"value\": null\r\n },\r\n \"Location_lat_max\": {\r\n \"value\": null\r\n },\r\n \"Location_lon_min\": {\r\n \"value\": null\r\n },\r\n \"Location_lon_max\": {\r\n \"value\": null\r\n },\r\n \"Location_bounds\": {\r\n \"bounds\": {\r\n \"top_left\": {\r\n \"lat\": 0.5858219834044576,\r\n \"lon\": 34.09869999624789\r\n },\r\n \"bottom_right\": {\r\n \"lat\": -90,\r\n \"lon\": -180\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\nAnd I can confirm I have no geo_point in the [-180, -90] coordinate.\r\n\r\nThank you very much.", "comments": [ { "body": "I believe this is a bug in `GeoPoint.resetFromString`. We should be checking for empty value at the beginning, so that we do not fall back to trying to decode a geohash with an empty string (which looks like it will have a morton encoding of 0, which I believe maps to [-180, -90]). @nknize can you confirm?", "created_at": "2017-03-15T20:06:08Z" } ], "number": 23579, "title": "Empty geo_point being treated as [-180, -90]?" }
{ "body": "Adds verification that geohashes are not empty and contain only\r\nvalid characters. It fixes the issue when en empty geohash is \r\ntreated as [-180, -90] and geohashes with non-geohash character\r\nare getting resolved into invalid coordinates.\r\n\r\nCloses #23579\r\n", "number": 30376, "review_comments": [ { "body": "why don't we just use the IAE?", "created_at": "2018-05-04T12:45:56Z" }, { "body": "All other formatting issues are throwing `ElasticsearchParseException` at the moment, so I have decided to stick with it for consistency sake. It also makes it easier to ignore in `parseGeoPointIgnoringMalformed` and `parseGeoPointStringIgnoringMalformed` later on if mapping has `ignore_malformed` set to `true`.", "created_at": "2018-05-04T15:17:49Z" } ], "title": "Add stricter geohash parsing" }
{ "commits": [ { "message": "Add stricter geohash parsing\n\nAdds verification that geohashes are not empty and contain only\nvalid characters.\n\nCloses #23579" }, { "message": "Add changelog entry" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-23579-empty-geohashes" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-23579-empty-geohashes" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-23579-empty-geohashes" } ], "files": [ { "diff": "@@ -177,6 +177,21 @@ Machine Learning::\n \n * Account for gaps in data counts after job is reopened ({pull}30294[#30294])\n \n+Add validation that geohashes are not empty and don't contain unsupported characters ({pull}30376[#30376])\n+\n+[[release-notes-6.3.1]]\n+== Elasticsearch version 6.3.1\n+\n+//[float]\n+//=== New Features\n+\n+//[float]\n+//=== Enhancements\n+\n+[float]\n+=== Bug Fixes\n+\n+Reduce the number of object allocations made by {security} when resolving the indices and aliases for a request ({pull}30180[#30180])\n Rollup::\n * Validate timezone in range queries to ensure they match the selected job when\n searching ({pull}30338[#30338])", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -171,11 +171,17 @@ public static final String stringEncodeFromMortonLong(long hashedVal, final int\n * Encode to a morton long value from a given geohash string\n */\n public static final long mortonEncode(final String hash) {\n+ if (hash.isEmpty()) {\n+ throw new IllegalArgumentException(\"empty geohash\");\n+ }\n int level = 11;\n long b;\n long l = 0L;\n for(char c : hash.toCharArray()) {\n b = (long)(BASE_32_STRING.indexOf(c));\n+ if (b < 0) {\n+ throw new IllegalArgumentException(\"unsupported symbol [\" + c + \"] in geohash [\" + hash + \"]\");\n+ }\n l |= (b<<((level--*5) + MORTON_OFFSET));\n if (level < 0) {\n // We cannot handle more than 12 levels", "filename": "server/src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java", "status": "modified" }, { "diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.common.xcontent.ToXContentFragment;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.Strings;\n \n import java.io.IOException;\n import java.util.Arrays;\n@@ -126,7 +125,12 @@ public GeoPoint resetFromIndexableField(IndexableField field) {\n }\n \n public GeoPoint resetFromGeoHash(String geohash) {\n- final long hash = mortonEncode(geohash);\n+ final long hash;\n+ try {\n+ hash = mortonEncode(geohash);\n+ } catch (IllegalArgumentException ex) {\n+ throw new ElasticsearchParseException(ex.getMessage(), ex);\n+ }\n return this.reset(GeoHashUtils.decodeLatitude(hash), GeoHashUtils.decodeLongitude(hash));\n }\n ", "filename": "server/src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -299,14 +299,7 @@ public Mapper parse(ParseContext context) throws IOException {\n if (token == XContentParser.Token.START_ARRAY) {\n // its an array of array of lon/lat [ [1.2, 1.3], [1.4, 1.5] ]\n while (token != XContentParser.Token.END_ARRAY) {\n- try {\n- parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));\n- } catch (ElasticsearchParseException e) {\n- if (ignoreMalformed.value() == false) {\n- throw e;\n- }\n- context.addIgnoredField(fieldType.name());\n- }\n+ parseGeoPointIgnoringMalformed(context, sparse);\n token = context.parser().nextToken();\n }\n } else {\n@@ -326,42 +319,57 @@ public Mapper 
parse(ParseContext context) throws IOException {\n } else {\n while (token != XContentParser.Token.END_ARRAY) {\n if (token == XContentParser.Token.VALUE_STRING) {\n- parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));\n+ parseGeoPointStringIgnoringMalformed(context, sparse);\n } else {\n- try {\n- parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));\n- } catch (ElasticsearchParseException e) {\n- if (ignoreMalformed.value() == false) {\n- throw e;\n- }\n- }\n+ parseGeoPointIgnoringMalformed(context, sparse);\n }\n token = context.parser().nextToken();\n }\n }\n }\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));\n+ parseGeoPointStringIgnoringMalformed(context, sparse);\n } else if (token == XContentParser.Token.VALUE_NULL) {\n if (fieldType.nullValue() != null) {\n parse(context, (GeoPoint) fieldType.nullValue());\n }\n } else {\n- try {\n- parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));\n- } catch (ElasticsearchParseException e) {\n- if (ignoreMalformed.value() == false) {\n- throw e;\n- }\n- context.addIgnoredField(fieldType.name());\n- }\n+ parseGeoPointIgnoringMalformed(context, sparse);\n }\n }\n \n context.path().remove();\n return null;\n }\n \n+ /**\n+ * Parses geopoint represented as an object or an array, ignores malformed geopoints if needed\n+ */\n+ private void parseGeoPointIgnoringMalformed(ParseContext context, GeoPoint sparse) throws IOException {\n+ try {\n+ parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));\n+ } catch (ElasticsearchParseException e) {\n+ if (ignoreMalformed.value() == false) {\n+ throw e;\n+ }\n+ context.addIgnoredField(fieldType.name());\n+ }\n+ }\n+\n+ /**\n+ * Parses geopoint represented as a string and ignores malformed geopoints if needed\n+ */\n+ private void parseGeoPointStringIgnoringMalformed(ParseContext context, GeoPoint sparse) throws IOException {\n+ try {\n+ parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));\n+ } catch (ElasticsearchParseException e) {\n+ if (ignoreMalformed.value() == false) {\n+ throw e;\n+ }\n+ context.addIgnoredField(fieldType.name());\n+ }\n+ }\n+\n @Override\n protected void doXContentBody(XContentBuilder builder, boolean includeDefaults, Params params) throws IOException {\n super.doXContentBody(builder, includeDefaults, params);", "filename": "server/src/main/java/org/elasticsearch/index/mapper/GeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.common.geo;\n \n import org.apache.lucene.geo.Rectangle;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.test.ESTestCase;\n \n /**\n@@ -95,7 +96,17 @@ public void testLongGeohashes() {\n Rectangle expectedBbox = GeoHashUtils.bbox(geohash);\n Rectangle actualBbox = GeoHashUtils.bbox(extendedGeohash);\n assertEquals(\"Additional data points above 12 should be ignored [\" + extendedGeohash + \"]\" , expectedBbox, actualBbox);\n-\n }\n }\n+\n+ public void testInvalidGeohashes() {\n+ IllegalArgumentException ex;\n+\n+ ex = expectThrows(IllegalArgumentException.class, () -> GeoHashUtils.mortonEncode(\"55.5\"));\n+ assertEquals(\"unsupported symbol [.] 
in geohash [55.5]\", ex.getMessage());\n+\n+ ex = expectThrows(IllegalArgumentException.class, () -> GeoHashUtils.mortonEncode(\"\"));\n+ assertEquals(\"empty geohash\", ex.getMessage());\n+ }\n+\n }", "filename": "server/src/test/java/org/elasticsearch/common/geo/GeoHashTests.java", "status": "modified" }, { "diff": "@@ -49,6 +49,7 @@\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.nullValue;\n \n public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {\n \n@@ -398,4 +399,50 @@ public void testNullValue() throws Exception {\n assertThat(defaultValue, not(equalTo(doc.rootDoc().getField(\"location\").binaryValue())));\n }\n \n+ public void testInvalidGeohashIgnored() throws Exception {\n+ String mapping = Strings.toString(XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo_point\")\n+ .field(\"ignore_malformed\", \"true\")\n+ .endObject()\n+ .endObject().endObject().endObject());\n+\n+ DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser()\n+ .parse(\"type\", new CompressedXContent(mapping));\n+\n+ ParsedDocument doc = defaultMapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", BytesReference\n+ .bytes(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"location\", \"1234.333\")\n+ .endObject()),\n+ XContentType.JSON));\n+\n+ assertThat(doc.rootDoc().getField(\"location\"), nullValue());\n+ }\n+\n+\n+ public void testInvalidGeohashNotIgnored() throws Exception {\n+ String mapping = Strings.toString(XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"location\")\n+ .field(\"type\", \"geo_point\")\n+ .endObject()\n+ .endObject().endObject().endObject());\n+\n+ DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser()\n+ .parse(\"type\", new CompressedXContent(mapping));\n+\n+ MapperParsingException ex = expectThrows(MapperParsingException.class,\n+ () -> defaultMapper.parse(SourceToParse.source(\"test\", \"type\", \"1\", BytesReference\n+ .bytes(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"location\", \"1234.333\")\n+ .endObject()),\n+ XContentType.JSON)));\n+\n+ assertThat(ex.getMessage(), equalTo(\"failed to parse\"));\n+ assertThat(ex.getRootCause().getMessage(), equalTo(\"unsupported symbol [.] in geohash [1234.333]\"));\n+ }\n+\n }", "filename": "server/src/test/java/org/elasticsearch/index/mapper/GeoPointFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -175,6 +175,19 @@ public void testInvalidField() throws IOException {\n assertThat(e.getMessage(), is(\"field must be either [lat], [lon] or [geohash]\"));\n }\n \n+ public void testInvalidGeoHash() throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.field(\"geohash\", \"!!!!\");\n+ content.endObject();\n+\n+ XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(content));\n+ parser.nextToken();\n+\n+ Exception e = expectThrows(ElasticsearchParseException.class, () -> GeoUtils.parseGeoPoint(parser));\n+ assertThat(e.getMessage(), is(\"unsupported symbol [!] 
in geohash [!!!!]\"));\n+ }\n+\n private XContentParser objectLatLon(double lat, double lon) throws IOException {\n XContentBuilder content = JsonXContent.contentBuilder();\n content.startObject();", "filename": "server/src/test/java/org/elasticsearch/index/search/geo/GeoPointParsingTests.java", "status": "modified" } ] }
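A standalone sketch (class name and sample geohash are illustrative) of the behavior change in GeoPoint/GeoHashUtils above: an empty or non-base32 geohash now raises a parse error instead of silently decoding to [-180, -90].

```java
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.geo.GeoPoint;

// Sketch only: valid geohashes still decode; empty/invalid ones fail loudly.
public class GeohashValidationSketch {
    public static void main(String[] args) {
        GeoPoint point = new GeoPoint();

        point.resetFromGeoHash("u173zq37x014");            // a well-formed geohash
        System.out.println(point.lat() + ", " + point.lon());

        try {
            point.resetFromGeoHash("");                    // used to become [-180, -90]
        } catch (ElasticsearchParseException e) {
            System.out.println(e.getMessage());            // "empty geohash" after the fix
        }

        try {
            point.resetFromGeoHash("1234.333");            // '.' is not a geohash symbol
        } catch (ElasticsearchParseException e) {
            System.out.println(e.getMessage());
        }
    }
}
```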
{ "body": "**Elasticsearch version**: 6.2.4\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_101\r\n\r\n**OS version**: MacOS (Darwin Kernel Version 15.6.0)\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\"More Like This\" queries do not return any results when a field on the source document produces no tokens at index time. Using a keyword field and manually specifying the analyzer at query time works as expected.\r\n\r\n**Steps to reproduce**:\r\n```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"myField\": {\r\n \"type\": \"text\"\r\n },\r\n \"empty\": {\r\n \"type\": \"text\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n```\r\nPOST /_bulk\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"type\",\"_id\":1}}\r\n{\"myField\":\"and_foo\", \"empty\":\"\"}\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"type\",\"_id\":2}}\r\n{\"myField\":\"and_foo\", \"empty\":\"\"}\r\n```\r\n\r\nThis query correctly returns 1 result:\r\n```\r\nGET /_search\r\n{\r\n \"query\": {\r\n \"more_like_this\": {\r\n \"fields\": [\r\n \"myField\"\r\n ],\r\n \"like\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"1\"\r\n }\r\n ],\r\n \"min_term_freq\": 1,\r\n \"min_doc_freq\": 1\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis query returns no results when using both fields:\r\n```\r\nGET /_search\r\n{\r\n \"query\": {\r\n \"more_like_this\": {\r\n \"fields\": [\r\n \"myField\", \"empty\"\r\n ],\r\n \"like\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"type\",\r\n \"_id\": \"1\"\r\n }\r\n ],\r\n \"min_term_freq\": 1,\r\n \"min_doc_freq\": 1\r\n }\r\n }\r\n}\r\n```\r\n\r\nIf you update the \"empty\" field in document 1 to contain non-analyzable characters (like punctuation), the first query still gives 0 results. Changing the \"empty\" field to be a keyword field works as expected.", "comments": [ { "body": "This reproduces on master. 
Another way of looking at it is that a MLT document with an empty field will produce other documents with an empty field if it's a keyword type, but not if it's a text type.", "created_at": "2018-04-25T22:40:47Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-25T22:42:28Z" }, { "body": "I'm not sure if this would be considered a bug or just a difference in handling analyzed vs non analyzed fields, but either way it seems unintuitive", "created_at": "2018-04-25T22:43:15Z" }, { "body": "It's probably also worth noting the MLT query produces an exception when the documents only have the empty field\r\n\r\n```\r\nPUT test\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"number_of_replicas\": 0\r\n }, \r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"nothing\": {\r\n \"type\": \"text\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST /_bulk\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"_doc\",\"_id\":1}}\r\n{\"nothing\": \"\"}\r\n{ \"index\": { \"_index\": \"test\", \"_type\": \"_doc\",\"_id\":2}}\r\n{\"nothing\": \"\"}\r\n\r\nGET /_search\r\n{\r\n \"query\": {\r\n \"more_like_this\": {\r\n \"fields\": [\r\n \"nothing\"\r\n ],\r\n \"like\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"_doc\",\r\n \"_id\": \"1\"\r\n }\r\n ],\r\n \"min_term_freq\": 1,\r\n \"min_doc_freq\": 1\r\n }\r\n }\r\n}\r\n```\r\n\r\n```\r\n[2018-04-25T15:57:19,682][DEBUG][o.e.a.t.TransportShardMultiTermsVectorAction] [DFW1LSW] [test][0] failed to execute multi term vectors for [_doc]/[1]\r\norg.elasticsearch.ElasticsearchException: failed to execute term vector request\r\n at org.elasticsearch.index.termvectors.TermVectorsService.getTermVectors(TermVectorsService.java:150) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.index.termvectors.TermVectorsService.getTermVectors(TermVectorsService.java:77) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.termvectors.TransportShardMultiTermsVectorAction.shardOperation(TransportShardMultiTermsVectorAction.java:85) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.termvectors.TransportShardMultiTermsVectorAction.shardOperation(TransportShardMultiTermsVectorAction.java:41) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$1.doRun(TransportSingleShardAction.java:112) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) [?:?]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]\r\n at java.lang.Thread.run(Thread.java:844) [?:?]\r\nCaused by: java.lang.NullPointerException\r\n at org.elasticsearch.action.termvectors.TermVectorsWriter.setFields(TermVectorsWriter.java:82) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.termvectors.TermVectorsResponse.setFields(TermVectorsResponse.java:361) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at 
org.elasticsearch.index.termvectors.TermVectorsService.getTermVectors(TermVectorsService.java:146) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n ... 9 more\r\n```", "created_at": "2018-04-25T23:03:06Z" }, { "body": "Hello, can I take up this bug? Any pointers are much appreciated.", "created_at": "2018-05-02T12:43:19Z" }, { "body": "At this point -> \r\nhttps://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java#L69\r\n\r\nfieldTermVector is getting null. I initialized fieldTermVector with EMPTY_TERMS in case of null. This solved both of the issues described above. \r\nShould I go ahead and make a PR for it?\r\n", "created_at": "2018-05-02T14:20:42Z" } ], "number": 30148, "title": "More Like This queries return 0 results if field in source document has 0 tokens in analyzed field" }
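The guard proposed in the last comment above amounts to substituting an empty `Terms` placeholder whenever a field produced no tokens, so that a null per-field term vector never reaches `TermVectorsWriter.setFields()`. A minimal, hypothetical sketch of that idea follows; the helper name and the empty-placeholder parameter are illustrative, not the actual Elasticsearch code (the real change lands in the pull request below).

```java
import org.apache.lucene.index.Terms;

// Illustrative guard: a field with zero indexed tokens yields a null per-field
// term vector, so fall back to whatever empty Terms placeholder the caller keeps
// (Elasticsearch's term vectors code holds one as EMPTY_TERMS) instead of
// propagating the null and triggering the NullPointerException shown above.
final class TermVectorGuard {
    private TermVectorGuard() {}

    static Terms orEmpty(Terms fieldTermVector, Terms emptyPlaceholder) {
        return fieldTermVector == null ? emptyPlaceholder : fieldTermVector;
    }
}
```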
{ "body": "Closes #30148 \r\n \r\nAt this point ->\r\nhttps://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java#L69\r\nfieldTermVector is getting null in case of empty field. I initialized fieldTermVector with EMPTY_TERMS in case of null. This solves both the given issue given in the ticket.\r\n", "number": 30365, "review_comments": [ { "body": "nit: could you remove the second empty line (or maybe even both?), I think a few empty lines to separate blocks don't hurt but too many are also just bloating the code.", "created_at": "2018-05-07T18:38:17Z" } ], "title": "Avoid NPE in `more_like_this` when field has zero tokens" }
{ "commits": [ { "message": "assign field term vector with empty terms when null." }, { "message": "Add test for more like this in case of zero tokens in one of the field." }, { "message": "Update changelog and remove empty lines." }, { "message": "Merge branch 'master' into bug-fix/mlt-empty-field" }, { "message": "Update CHANGELOG.asciidoc" } ], "files": [ { "diff": "@@ -104,6 +104,8 @@ ones that the user is authorized to access in case field level security is enabl\n [float]\n === Bug Fixes\n \n+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])\n+\n Fixed prerelease version of elasticsearch in the `deb` package to sort before GA versions\n ({pull}29000[#29000])\n \n@@ -169,6 +171,8 @@ Added put index template API to the high level rest client ({pull}30400[#30400])\n [float]\n === Bug Fixes\n \n+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])\n+\n Do not ignore request analysis/similarity settings on index resize operations when the source index already contains such settings ({pull}30216[#30216])\n \n Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -70,6 +70,10 @@ void setFields(Fields termVectorsByField, Set<String> selectedFields, EnumSet<Fl\n Terms topLevelTerms = topLevelFields.terms(field);\n \n // if no terms found, take the retrieved term vector fields for stats\n+ if (fieldTermVector == null) {\n+ fieldTermVector = EMPTY_TERMS;\n+ }\n+\n if (topLevelTerms == null) {\n topLevelTerms = EMPTY_TERMS;\n }", "filename": "server/src/main/java/org/elasticsearch/action/termvectors/TermVectorsWriter.java", "status": "modified" }, { "diff": "@@ -91,6 +91,36 @@ public void testSimpleMoreLikeThis() throws Exception {\n assertHitCount(response, 1L);\n }\n \n+ //Issue #30148\n+ public void testMoreLikeThisForZeroTokensInOneOfTheAnalyzedFields() throws Exception {\n+ CreateIndexRequestBuilder createIndexRequestBuilder = prepareCreate(\"test\")\n+ .addMapping(\"type\", jsonBuilder()\n+ .startObject().startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"myField\").field(\"type\", \"text\").endObject()\n+ .startObject(\"empty\").field(\"type\", \"text\").endObject()\n+ .endObject()\n+ .endObject().endObject());\n+\n+ assertAcked(createIndexRequestBuilder);\n+\n+ ensureGreen();\n+\n+ client().index(indexRequest(\"test\").type(\"type\").id(\"1\").source(jsonBuilder().startObject()\n+ .field(\"myField\", \"and_foo\").field(\"empty\", \"\").endObject())).actionGet();\n+ client().index(indexRequest(\"test\").type(\"type\").id(\"2\").source(jsonBuilder().startObject()\n+ .field(\"myField\", \"and_foo\").field(\"empty\", \"\").endObject())).actionGet();\n+\n+ client().admin().indices().refresh(refreshRequest()).actionGet();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(\n+ moreLikeThisQuery(new String[]{\"myField\", \"empty\"}, null, new Item[]{new Item(\"test\", \"type\", \"1\")})\n+ .minTermFreq(1).minDocFreq(1)\n+ ).get();\n+\n+ assertHitCount(searchResponse, 1L);\n+ }\n+\n public void testSimpleMoreLikeOnLongField() throws Exception {\n logger.info(\"Creating index test\");\n assertAcked(prepareCreate(\"test\")", "filename": "server/src/test/java/org/elasticsearch/search/morelikethis/MoreLikeThisIT.java", "status": "modified" } ] }
{ "body": "*Original comment by @sophiec20:*\n\nFound in 6.3.0 ` \"build\" : {\r\n \"hash\" : \"a47564f\",\r\n \"date\" : \"2018-04-09T21:45:51.347364Z\"\r\n },`\r\n\r\nWhen a job is configured to use model plot `terms`, then model plot results should only be written for the specified terms. This is not happening.\r\n\r\nfarequote job was created for `mean(responsetime) by airline` with the following model plot:\r\n```\r\n \"model_plot_config\": {\r\n \"enabled\": true,\r\n \"terms\": \"ASA\"\r\n },\r\n```\r\n\r\nResults were written for all airlines\r\n![image](https://user-images.githubusercontent.com/4185750/38672199-61fb882a-3e45-11e8-97ec-ba3f87320b57.png)\r\n\r\nHowever, when using `mean(responsetime) partition airline` then only one airline was written, working as expected.\r\n\r\nThis is related to one of the job validation tests which checks for potentially large cardinality jobs with model plot enabled. LINK REDACTED", "comments": [ { "body": "*Original comment by @dimitris-athanasiou:*\n\nI suggest postponing working on this as the rules implementation may change radically.", "created_at": "2018-04-13T13:50:57Z" } ], "number": 30004, "title": "[ML] Model plot terms are not honoured for by-fields" }
{ "body": "Relates #30004\r\n", "number": 30359, "review_comments": [], "title": "[ML] Add integration test for model plots" }
{ "commits": [ { "message": "[ML] Add integration test for model plots\n\nRelates #30004" } ], "files": [ { "diff": "@@ -53,10 +53,6 @@ public ModelPlotConfig() {\n this(true, null);\n }\n \n- public ModelPlotConfig(boolean enabled) {\n- this(false, null);\n- }\n-\n public ModelPlotConfig(boolean enabled, String terms) {\n this.enabled = enabled;\n this.terms = terms;", "filename": "x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/config/ModelPlotConfig.java", "status": "modified" }, { "diff": "@@ -0,0 +1,164 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.integration;\n+\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.support.WriteRequest;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.search.aggregations.AggregationBuilders;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;\n+import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;\n+import org.elasticsearch.xpack.core.ml.job.config.DataDescription;\n+import org.elasticsearch.xpack.core.ml.job.config.Detector;\n+import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.config.ModelPlotConfig;\n+import org.junit.After;\n+import org.junit.Before;\n+\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.stream.Collectors;\n+\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+\n+public class ModelPlotsIT extends MlNativeAutodetectIntegTestCase {\n+\n+ private static final String DATA_INDEX = \"model-plots-test-data\";\n+ private static final String DATA_TYPE = \"doc\";\n+\n+ @Before\n+ public void setUpData() {\n+ client().admin().indices().prepareCreate(DATA_INDEX)\n+ .addMapping(DATA_TYPE, \"time\", \"type=date,format=epoch_millis\", \"user\", \"type=keyword\")\n+ .get();\n+\n+ List<String> users = Arrays.asList(\"user_1\", \"user_2\", \"user_3\");\n+\n+ // We are going to create data for last day\n+ long nowMillis = System.currentTimeMillis();\n+ int totalBuckets = 24;\n+ BulkRequestBuilder bulkRequestBuilder = client().prepareBulk();\n+ for (int bucket = 0; bucket < totalBuckets; bucket++) {\n+ long timestamp = nowMillis - TimeValue.timeValueHours(totalBuckets - bucket).getMillis();\n+ for (String user : users) {\n+ IndexRequest indexRequest = new IndexRequest(DATA_INDEX, DATA_TYPE);\n+ indexRequest.source(\"time\", timestamp, \"user\", user);\n+ bulkRequestBuilder.add(indexRequest);\n+ }\n+ }\n+\n+ BulkResponse bulkResponse = bulkRequestBuilder\n+ .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)\n+ .get();\n+ assertThat(bulkResponse.hasFailures(), is(false));\n+ }\n+\n+ @After\n+ public void tearDownData() {\n+ client().admin().indices().prepareDelete(DATA_INDEX).get();\n+ cleanUp();\n+ }\n+\n+ public void testPartitionFieldWithoutTerms() throws Exception {\n+ Job.Builder job = 
jobWithPartitionUser(\"model-plots-it-test-partition-field-without-terms\");\n+ job.setModelPlotConfig(new ModelPlotConfig());\n+ putJob(job);\n+ String datafeedId = job.getId() + \"-feed\";\n+ DatafeedConfig datafeed = newDatafeed(datafeedId, job.getId());\n+ registerDatafeed(datafeed);\n+ putDatafeed(datafeed);\n+ openJob(job.getId());\n+ startDatafeed(datafeedId, 0, System.currentTimeMillis());\n+ waitUntilJobIsClosed(job.getId());\n+\n+ assertThat(getBuckets(job.getId()).size(), equalTo(23));\n+ Set<String> modelPlotTerms = modelPlotTerms(job.getId(), \"partition_field_value\");\n+ assertThat(modelPlotTerms, containsInAnyOrder(\"user_1\", \"user_2\", \"user_3\"));\n+ }\n+\n+ public void testPartitionFieldWithTerms() throws Exception {\n+ Job.Builder job = jobWithPartitionUser(\"model-plots-it-test-partition-field-with-terms\");\n+ job.setModelPlotConfig(new ModelPlotConfig(true, \"user_2,user_3\"));\n+ putJob(job);\n+ String datafeedId = job.getId() + \"-feed\";\n+ DatafeedConfig datafeed = newDatafeed(datafeedId, job.getId());\n+ registerDatafeed(datafeed);\n+ putDatafeed(datafeed);\n+ openJob(job.getId());\n+ startDatafeed(datafeedId, 0, System.currentTimeMillis());\n+ waitUntilJobIsClosed(job.getId());\n+\n+ assertThat(getBuckets(job.getId()).size(), equalTo(23));\n+ Set<String> modelPlotTerms = modelPlotTerms(job.getId(), \"partition_field_value\");\n+ assertThat(modelPlotTerms, containsInAnyOrder(\"user_2\", \"user_3\"));\n+ }\n+\n+ public void testByFieldWithTerms() throws Exception {\n+ Job.Builder job = jobWithByUser(\"model-plots-it-test-by-field-with-terms\");\n+ job.setModelPlotConfig(new ModelPlotConfig(true, \"user_2,user_3\"));\n+ putJob(job);\n+ String datafeedId = job.getId() + \"-feed\";\n+ DatafeedConfig datafeed = newDatafeed(datafeedId, job.getId());\n+ registerDatafeed(datafeed);\n+ putDatafeed(datafeed);\n+ openJob(job.getId());\n+ startDatafeed(datafeedId, 0, System.currentTimeMillis());\n+ waitUntilJobIsClosed(job.getId());\n+\n+ assertThat(getBuckets(job.getId()).size(), equalTo(23));\n+ Set<String> modelPlotTerms = modelPlotTerms(job.getId(), \"by_field_value\");\n+ assertThat(modelPlotTerms, containsInAnyOrder(\"user_2\", \"user_3\"));\n+ }\n+\n+ private static Job.Builder jobWithPartitionUser(String id) {\n+ Detector.Builder detector = new Detector.Builder();\n+ detector.setFunction(\"count\");\n+ detector.setPartitionFieldName(\"user\");\n+ return newJobBuilder(id, detector.build());\n+ }\n+\n+ private static Job.Builder jobWithByUser(String id) {\n+ Detector.Builder detector = new Detector.Builder();\n+ detector.setFunction(\"count\");\n+ detector.setByFieldName(\"user\");\n+ return newJobBuilder(id, detector.build());\n+ }\n+\n+ private static Job.Builder newJobBuilder(String id, Detector detector) {\n+ AnalysisConfig.Builder analysisConfig = new AnalysisConfig.Builder(Arrays.asList(detector));\n+ analysisConfig.setBucketSpan(TimeValue.timeValueHours(1));\n+ DataDescription.Builder dataDescription = new DataDescription.Builder();\n+ dataDescription.setTimeField(\"time\");\n+ Job.Builder jobBuilder = new Job.Builder(id);\n+ jobBuilder.setAnalysisConfig(analysisConfig);\n+ jobBuilder.setDataDescription(dataDescription);\n+ return jobBuilder;\n+ }\n+\n+ private static DatafeedConfig newDatafeed(String datafeedId, String jobId) {\n+ DatafeedConfig.Builder datafeedConfig = new DatafeedConfig.Builder(datafeedId, jobId);\n+ datafeedConfig.setIndices(Arrays.asList(DATA_INDEX));\n+ return datafeedConfig.build();\n+ }\n+\n+ private Set<String> modelPlotTerms(String 
jobId, String fieldName) {\n+ SearchResponse searchResponse = client().prepareSearch(\".ml-anomalies-\" + jobId)\n+ .setQuery(QueryBuilders.termQuery(\"result_type\", \"model_plot\"))\n+ .addAggregation(AggregationBuilders.terms(\"model_plot_terms\").field(fieldName))\n+ .get();\n+\n+ Terms aggregation = searchResponse.getAggregations().get(\"model_plot_terms\");\n+ return aggregation.getBuckets().stream().map(agg -> agg.getKeyAsString()).collect(Collectors.toSet());\n+ }\n+}", "filename": "x-pack/qa/ml-native-tests/src/test/java/org/elasticsearch/xpack/ml/integration/ModelPlotsIT.java", "status": "added" } ] }
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.6.5 and 6.1.2\r\n\r\n**Plugins installed**: [discovery-ec2, repository-s3]\r\n\r\n**JVM version** : 1.8\r\n\r\n**OS version** : Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nThe s3 repository plugin occasionally produces partial snapshots due to throwing an exception while deleting a file from AWS s3 storage. This is a result of s3 only providing eventual consistency for read-after-delete operations. The full documentation on that is here:\r\nhttps://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel\r\n\r\nbut the key section says:\r\n`A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is fully propagated, Amazon S3 might list the deleted object.`\r\n\r\nIn more concrete terms, I'm using the s3 repository plugin in circumstances where the deleted snapshot files can still show up in the bucket for seconds/minutes after they've been deleted. When the s3 plugin then tries to delete one of these files for the second time, s3 then correctly shows it doesn't exist, and the s3 plugin considers that enough of an error condition to abort that particular part of the snapshot.\r\n\r\nHere is a log of such an error (with some redactions). This log shows the problem happens to multiple indices and shards in the snapshot operation, and some of the failures involve a common file in s3.\r\n```\r\n[2018-01-21T18:01:25,809][WARN ][o.e.s.SnapshotShardsService] [q8fsCAc] [[INDEX_REDACTED-2017.01.17][1]] [my_snapshot:2018-01-21-18:00:02/bEaUAdWaQ9ip3os5ghGvfQ] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: error deleting index file [pending-index-62] during cleanup\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1149) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1409) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:976) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.nio.file.NoSuchFileException: Blob [pending-index-62] does not exist\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:122) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1145) 
~[elasticsearch-5.6.5.jar:5.6.5]\r\n\t... 10 more\r\n[2018-01-21T18:02:35,691][WARN ][o.e.s.SnapshotShardsService] [q8fsCAc] [[INDEX_REDACTED-2017.01.24][0]] [my_snapshot:2018-01-21-18:00:02/bEaUAdWaQ9ip3os5ghGvfQ] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: error deleting index file [index-62] during cleanup\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1149) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1409) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:976) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.nio.file.NoSuchFileException: Blob [index-62] does not exist\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:122) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1145) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\t... 
10 more\r\n[2018-01-21T18:02:36,148][WARN ][o.e.s.SnapshotShardsService] [q8fsCAc] [[INDEX_REDACTED-2017.01.24][1]] [my_snapshot:2018-01-21-18:00:02/bEaUAdWaQ9ip3os5ghGvfQ] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: error deleting index file [index-62] during cleanup\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1149) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1409) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:976) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.nio.file.NoSuchFileException: Blob [index-62] does not exist\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:122) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1145) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\t... 
10 more\r\n[2018-01-21T18:02:39,338][WARN ][o.e.s.SnapshotShardsService] [q8fsCAc] [[INDEX_REDACTED-2016.12.21][0]] [my_snapshot:2018-01-21-18:00:02/bEaUAdWaQ9ip3os5ghGvfQ] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: error deleting index file [pending-index-63] during cleanup\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1149) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1409) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:976) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:382) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:88) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:335) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.5.jar:5.6.5]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.nio.file.NoSuchFileException: Blob [pending-index-63] does not exist\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:122) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1145) ~[elasticsearch-5.6.5.jar:5.6.5]\r\n\t... 10 more\r\n[2018-01-21T18:03:40,905][INFO ][o.e.s.SnapshotShardsService] [q8fsCAc] snapshot [my_snapshot:2018-01-21-18:00:02/bEaUAdWaQ9ip3os5ghGvfQ] is done\r\n```\r\n\r\nThis log was produced with elasticsearch and repository-s3 version 5.6.5, but I have recreated the problem in 6.1.2 as well. The logic involved has not changed, so this makes sense.\r\nWhen listBlobs() is called at the start of a snapshot operation,\r\nhttps://github.com/elastic/elasticsearch/blob/v6.1.2/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L1133\r\ns3 can return the files which have already been deleted.\r\n\r\nLater, during the finalize phase, it will now attempt to delete all of the blobs that were listed above. The deleteBlob() method will verify that each blob exists, but the blobs that were previously deleted will not individually show as existent. So deleteBlob() will throw an exception.\r\nhttps://github.com/elastic/elasticsearch/blob/v6.1.2/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L1238\r\n\r\n**Steps to reproduce**:\r\nThis depends on unpredictable s3 consistency behaviors, and so may not be easy to reproduce naturally. I will include a patch against 6.1.2 which will allow you to simulate this scenario by sleeping at the key points in the snapshot operation. 
This will allow you to add and delete the files from s3, to simulate its eventual consistency.\r\n\r\nAll that's needed to reproduce is to take a snapshot and have an s3 deleted file reappear in the bucket contents at the right time. It will be a partial snapshot, and the above errors will be logged.\r\n\r\nIf using the patch which will be provided:\r\n1. Take a snapshot and watch the logs\r\n2. When directed in the log message, add a file starting with `index-` to the s3 path noted in the logs. For instance, `index-DUMMY.txt` (This simulates a file which was previously deleted by the plugin showing up again in the bucket contents.)\r\n3. Keep watching the logs, and when the listBlobs() operation is complete, they will direct you to remove the file you uploaded. (This simulates that the file, though it was in the bucket contents, will fail the blobExists() check.)\r\n4. The snapshot will then fail shortly after.\r\n\r\n\r\n**Suggested fix**: \r\nSince this is unlikely to occur for the majority of s3 users, a fix which would not affect the majority of s3 users also seems appropriate. An optional configuration property which lets me filter the listBlobs() result for any blobs that I know I have already deleted occurs to me. Keeping track of that state may present a challenge, so being able to configure more lenient deleteBlob() checks (and thus accepting the risk of the s3 repo being concurrently modified by other processes) would also make sense to me.\r\n\r\n\r\n\r\n**Patch to simulate the issue**:\r\n```\r\ndiff --git a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java\r\nindex e3ee189..e1398d2 100644\r\n--- a/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java\r\n+++ b/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java\r\n@@ -167,6 +167,8 @@ import static java.util.Collections.unmodifiableMap;\r\n */\r\n public abstract class BlobStoreRepository extends AbstractLifecycleComponent implements Repository {\r\n \r\n+ private static Map<SnapshotId, Boolean> slept = new HashMap<>();\r\n+\r\n private BlobContainer snapshotsBlobContainer;\r\n \r\n protected final RepositoryMetaData metadata;\r\n@@ -1130,10 +1132,24 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp\r\n try {\r\n final Map<String, BlobMetaData> blobs;\r\n try {\r\n+ long sleepMinutes = 3;\r\n+ if(!slept.getOrDefault(snapshotId, false)) {\r\n+ logger.info(\"Sleeping for {} minutes. Add file starting with 'index-' to path {} now so listBlobs() will see it.\", sleepMinutes, blobContainer.path().buildAsString());\r\n+ Thread.sleep(sleepMinutes * 60 * 1000);\r\n+\t\t }\r\n+\r\n blobs = blobContainer.listBlobs();\r\n+\r\n+ if(!slept.getOrDefault(snapshotId, false)) {\r\n+ logger.info(\"listBlobs() complete. Sleeping for {} minutes. 
Delete the file from path {} now so it won't be present when the delete is attempted.\", sleepMinutes, blobContainer.path().buildAsString());\r\n+ Thread.sleep(sleepMinutes * 60 * 1000);\r\n+ slept.put(snapshotId, true);\r\n+\t\t }\r\n } catch (IOException e) {\r\n throw new IndexShardSnapshotFailedException(shardId, \"failed to list blobs\", e);\r\n- }\r\n+ } catch (InterruptedException e) {\r\n+\t\t throw new RuntimeException(e);\r\n+\t\t}\r\n \r\n long generation = findLatestFileNameGeneration(blobs);\r\n Tuple<BlobStoreIndexShardSnapshots, Integer> tuple = buildBlobStoreIndexShardSnapshots(blobs);\r\n\r\n```\r\n", "comments": [ { "body": "Thanks for this very detailed report @TruthyBoolish ! I've seen a similar behavior from time to time too on repositories with a lot of creation/deletion per hour.\r\n\r\nI created #30332 that relaxes the constraint when deleting files during the finalization of the snapshot. It should help to reduce these errors.", "created_at": "2018-05-02T12:21:40Z" }, { "body": "@tlrx Will the fix from #30332 also be applied to ES v5.6.x?", "created_at": "2018-05-03T21:09:23Z" }, { "body": "As far as I can find in release notes this is fixed in 6.3.0 and 6.4.0 version, correct my if I am wrong.\r\nCan/will this be backported to 5.x lines?", "created_at": "2019-07-11T14:02:15Z" } ], "number": 28322, "title": "Partial snapshots when deleted s3 files reappear in bucket" }
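The "more lenient deleteBlob() checks" suggested at the end of the report boil down to treating an already-missing blob as successfully deleted, since with S3's read-after-delete consistency a blob that listBlobs() still reports may already be gone. A rough sketch of that idea against the `BlobContainer` interface is shown below; the helper name is illustrative, while the pull request that follows adopts a similar approach inside the repository code itself.

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

import org.elasticsearch.common.blobstore.BlobContainer;

final class LenientDeletes {
    private LenientDeletes() {}

    // Treat "blob not found" as success: retrying the delete of a blob that was
    // already removed fails with NoSuchFileException on S3, as in the logs above,
    // even though the desired end state (blob absent) has been reached.
    static void deleteIfPresent(BlobContainer container, String blobName) throws IOException {
        try {
            container.deleteBlob(blobName);
        } catch (NoSuchFileException ignored) {
            // already gone; nothing left to clean up
        }
    }
}
```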
{ "body": "When deleting or creating a snapshot for a given shard, elasticsearch usually starts by listing all the existing snapshotted files in the repository. Then it computes a diff and deletes the snapshotted files that are not needed anymore. During this deletion, an exception is thrown if the file to be deleted does not exist anymore. \r\n\r\nThis behavior is challenging with cloud based repository implementations like S3 where a file that has been deleted can still appear in the bucket for few seconds/minutes (because the deletion can take some time to be fully replicated on S3). If the deleted file appears in the listing of files, then the following deletion will fail with a NoSuchFileException and the snapshot will be partially created/deleted.\r\n\r\nThis pull request makes the deletion of these files a bit less strict, ie not failing if the file we want to delete does not exist anymore. It introduces a new `BlobContainer.deleteIgnoringIfNotExists()` method that can be used at some specific places where not failing when deleting a file is considered harmless.\r\n\r\nCloses #28322", "number": 30332, "review_comments": [ { "body": "this does not change anything here? We are already catching the `NoSuchFileException` in the line below, which is an `IOException`.", "created_at": "2018-05-03T06:57:55Z" }, { "body": "this does not change anything here? We are already catching the NoSuchFileException in the line below, which is an IOException.", "created_at": "2018-05-03T07:00:32Z" }, { "body": "what about the comment below this:\r\n> // We cannot delete index file - this is fatal, we cannot continue, otherwise we might end up with references to non-existing files with references to non-existing files\r\n\r\nThis sounds very dangerous, and I wonder if we need special precaution here. I wonder for example if we should first write the new index file before starting to delete old ones. It sounds to me as if finalization bears the risk of losing all index files if there is a process crash between deleting old ones and writing new one. We currently compensate for this in `buildBlobStoreIndexShardSnapshots` by doing a file listing and loading individual snapshots, but I find it odd to rely on such a fallback logic.", "created_at": "2018-05-03T07:21:46Z" }, { "body": "this does not change anything here? We are already catching the NoSuchFileException in the line below, which is an IOException.", "created_at": "2018-05-03T07:23:16Z" }, { "body": "If something goes wrong and an IOException is thrown when writing the temporary blob in \r\n```\r\nsnapshotsBlobContainer.writeBlob(tempBlobName,..)\r\n```\r\nIt means that in the catch block the deletion of the temp blob will fail, and adding this NoSuchFileException as a suppressed exception here looks like noise to me.", "created_at": "2018-05-03T07:39:24Z" }, { "body": "Yes, the behavior is the same but it just make more explicit that it is ok to try to delete a missing file here.", "created_at": "2018-05-03T07:40:21Z" }, { "body": "Right - in this case it must not swallow the NoSuchFileException", "created_at": "2018-05-03T07:41:11Z" }, { "body": "Yes, I wanted to look at this too. I agree we should write the shard index file first, fail hard if something goes wrong, and then delete the files that are not needed anymore. 
We can still keep the individual file listing in `buildBlobStoreIndexShardSnapshots`.", "created_at": "2018-05-03T07:56:43Z" }, { "body": "ok", "created_at": "2018-05-04T07:27:54Z" }, { "body": "`Writes a new index file for the shard and removes`...", "created_at": "2018-05-04T07:28:49Z" }, { "body": "why would the current index gen be part of the blobs?", "created_at": "2018-05-04T07:34:48Z" }, { "body": "I think it's still nice to keep the debug logging as we had before.", "created_at": "2018-05-04T07:40:29Z" }, { "body": "before, we would ignore any IOException, now we only ignore NoSuchFileException. I think, for optional clean-up, the previous one is better.", "created_at": "2018-05-04T07:42:50Z" }, { "body": "I preferred the exception handling how it was before, i.e. local to each action. This allows better debug messages.", "created_at": "2018-05-04T07:46:34Z" }, { "body": "Yes, it's an unnecessary extra safety, I'll remove this.", "created_at": "2018-05-04T07:57:53Z" }, { "body": "After debugging many snapshot/restore issues I've never found them useful as the IOException almost always contains the blobname and is logged at a higher level.", "created_at": "2018-05-04T08:00:00Z" }, { "body": "I don't see why we should blindly ignore IOException here (except NoSuchFileException which is harmless because we want to delete the file anyway). It could hide a permission issue, and let unnecessary files in the repository that could be deleted if it was noticed?", "created_at": "2018-05-04T08:01:59Z" }, { "body": "I don't agree but OK, I'll add them back.", "created_at": "2018-05-04T08:02:30Z" }, { "body": "> I don't see why we should blindly ignore IOException here\r\n\r\nI'm not a fan of blindly ignoring IOException here either. But I don't think it should the fail the snapshot which successfully completed. We can log the IOExceptions which are not NoSuchFileException as warning to make them more prominent?", "created_at": "2018-05-04T08:09:31Z" }, { "body": "That would be a good compromise, yes. Thanks!", "created_at": "2018-05-04T08:27:43Z" } ], "title": "Do not fail snapshot when deleting a missing snapshotted file" }
{ "commits": [ { "message": "Make blob deletions less strict" }, { "message": "Apply feedback" }, { "message": "Bring logging back" }, { "message": "apply feedback" }, { "message": "Add changelog entry, fix log usage" } ], "files": [ { "diff": "@@ -113,6 +113,7 @@ Fail snapshot operations early when creating or deleting a snapshot on a reposit\n written to by an older Elasticsearch after writing to it with a newer Elasticsearch version. ({pull}30140[#30140])\n \n Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])\n+Do not fail snapshot when deleting a missing snapshotted file ({pull}30332[#30332])\n \n //[float]\n //=== Regressions", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -75,7 +75,8 @@ public interface BlobContainer {\n void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException;\n \n /**\n- * Deletes a blob with giving name, if the blob exists. If the blob does not exist, this method throws an IOException.\n+ * Deletes a blob with giving name, if the blob exists. If the blob does not exist,\n+ * this method throws a NoSuchFileException.\n *\n * @param blobName\n * The name of the blob to delete.\n@@ -84,6 +85,21 @@ public interface BlobContainer {\n */\n void deleteBlob(String blobName) throws IOException;\n \n+ /**\n+ * Deletes a blob with giving name, ignoring if the blob does not exist.\n+ *\n+ * @param blobName\n+ * The name of the blob to delete.\n+ * @throws IOException if the blob exists but could not be deleted.\n+ */\n+ default void deleteBlobIgnoringIfNotExists(String blobName) throws IOException {\n+ try {\n+ deleteBlob(blobName);\n+ } catch (final NoSuchFileException ignored) {\n+ // This exception is ignored\n+ }\n+ }\n+\n /**\n * Lists all blobs in the container.\n *", "filename": "server/src/main/java/org/elasticsearch/common/blobstore/BlobContainer.java", "status": "modified" }, { "diff": "@@ -102,13 +102,6 @@ private BlobStoreIndexShardSnapshots(Map<String, FileInfo> files, List<SnapshotF\n this.physicalFiles = unmodifiableMap(mapBuilder);\n }\n \n- private BlobStoreIndexShardSnapshots() {\n- shardSnapshots = Collections.emptyList();\n- files = Collections.emptyMap();\n- physicalFiles = Collections.emptyMap();\n- }\n-\n-\n /**\n * Returns list of snapshots\n *", "filename": "server/src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java", "status": "modified" }, { "diff": "@@ -90,7 +90,6 @@ public T read(BlobContainer blobContainer, String name) throws IOException {\n return readBlob(blobContainer, blobName);\n }\n \n-\n /**\n * Deletes obj in the blob container\n */", "filename": "server/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreFormat.java", "status": "modified" }, { "diff": "@@ -120,6 +120,7 @@\n \n import static java.util.Collections.emptyMap;\n import static java.util.Collections.unmodifiableMap;\n+import static org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo.canonicalName;\n \n /**\n * BlobStore - based implementation of Snapshot Repository\n@@ -780,7 +781,7 @@ private void writeAtomic(final String blobName, final BytesReference bytesRef) t\n } catch (IOException ex) {\n // temporary blob creation or move failed - try cleaning up\n try {\n- snapshotsBlobContainer.deleteBlob(tempBlobName);\n+ snapshotsBlobContainer.deleteBlobIgnoringIfNotExists(tempBlobName);\n } catch (IOException e) {\n ex.addSuppressed(e);\n }\n@@ -915,13 +916,13 @@ public void delete() {\n }\n }\n // 
finalize the snapshot and rewrite the snapshot index with the next sequential snapshot index\n- finalize(newSnapshotsList, fileListGeneration + 1, blobs);\n+ finalize(newSnapshotsList, fileListGeneration + 1, blobs, \"snapshot deletion [\" + snapshotId + \"]\");\n }\n \n /**\n * Loads information about shard snapshot\n */\n- public BlobStoreIndexShardSnapshot loadSnapshot() {\n+ BlobStoreIndexShardSnapshot loadSnapshot() {\n try {\n return indexShardSnapshotFormat(version).read(blobContainer, snapshotId.getUUID());\n } catch (IOException ex) {\n@@ -930,54 +931,57 @@ public BlobStoreIndexShardSnapshot loadSnapshot() {\n }\n \n /**\n- * Removes all unreferenced files from the repository and writes new index file\n+ * Writes a new index file for the shard and removes all unreferenced files from the repository.\n *\n- * We need to be really careful in handling index files in case of failures to make sure we have index file that\n- * points to files that were deleted.\n+ * We need to be really careful in handling index files in case of failures to make sure we don't\n+ * have index file that points to files that were deleted.\n *\n- *\n- * @param snapshots list of active snapshots in the container\n+ * @param snapshots list of active snapshots in the container\n * @param fileListGeneration the generation number of the snapshot index file\n- * @param blobs list of blobs in the container\n+ * @param blobs list of blobs in the container\n+ * @param reason a reason explaining why the shard index file is written\n */\n- protected void finalize(List<SnapshotFiles> snapshots, int fileListGeneration, Map<String, BlobMetaData> blobs) {\n- BlobStoreIndexShardSnapshots newSnapshots = new BlobStoreIndexShardSnapshots(snapshots);\n- // delete old index files first\n- for (String blobName : blobs.keySet()) {\n- if (indexShardSnapshotsFormat.isTempBlobName(blobName) || blobName.startsWith(SNAPSHOT_INDEX_PREFIX)) {\n- try {\n- blobContainer.deleteBlob(blobName);\n- } catch (IOException e) {\n- // We cannot delete index file - this is fatal, we cannot continue, otherwise we might end up\n- // with references to non-existing files\n- throw new IndexShardSnapshotFailedException(shardId, \"error deleting index file [\"\n- + blobName + \"] during cleanup\", e);\n- }\n+ protected void finalize(final List<SnapshotFiles> snapshots,\n+ final int fileListGeneration,\n+ final Map<String, BlobMetaData> blobs,\n+ final String reason) {\n+ final String indexGeneration = Integer.toString(fileListGeneration);\n+ final String currentIndexGen = indexShardSnapshotsFormat.blobName(indexGeneration);\n+\n+ final BlobStoreIndexShardSnapshots updatedSnapshots = new BlobStoreIndexShardSnapshots(snapshots);\n+ try {\n+ // If we deleted all snapshots, we don't need to create a new index file\n+ if (snapshots.size() > 0) {\n+ indexShardSnapshotsFormat.writeAtomic(updatedSnapshots, blobContainer, indexGeneration);\n }\n- }\n \n- // now go over all the blobs, and if they don't exist in a snapshot, delete them\n- for (String blobName : blobs.keySet()) {\n- // delete unused files\n- if (blobName.startsWith(DATA_BLOB_PREFIX)) {\n- if (newSnapshots.findNameFile(BlobStoreIndexShardSnapshot.FileInfo.canonicalName(blobName)) == null) {\n+ // Delete old index files\n+ for (final String blobName : blobs.keySet()) {\n+ if (indexShardSnapshotsFormat.isTempBlobName(blobName) || blobName.startsWith(SNAPSHOT_INDEX_PREFIX)) {\n try {\n- blobContainer.deleteBlob(blobName);\n+ blobContainer.deleteBlobIgnoringIfNotExists(blobName);\n } catch (IOException e) 
{\n- // TODO: don't catch and let the user handle it?\n- logger.debug(() -> new ParameterizedMessage(\"[{}] [{}] error deleting blob [{}] during cleanup\", snapshotId, shardId, blobName), e);\n+ logger.warn(() -> new ParameterizedMessage(\"[{}][{}] failed to delete index blob [{}] during finalization\",\n+ snapshotId, shardId, blobName), e);\n+ throw e;\n }\n }\n }\n- }\n \n- // If we deleted all snapshots - we don't need to create the index file\n- if (snapshots.size() > 0) {\n- try {\n- indexShardSnapshotsFormat.writeAtomic(newSnapshots, blobContainer, Integer.toString(fileListGeneration));\n- } catch (IOException e) {\n- throw new IndexShardSnapshotFailedException(shardId, \"Failed to write file list\", e);\n+ // Delete all blobs that don't exist in a snapshot\n+ for (final String blobName : blobs.keySet()) {\n+ if (blobName.startsWith(DATA_BLOB_PREFIX) && (updatedSnapshots.findNameFile(canonicalName(blobName)) == null)) {\n+ try {\n+ blobContainer.deleteBlobIgnoringIfNotExists(blobName);\n+ } catch (IOException e) {\n+ logger.warn(() -> new ParameterizedMessage(\"[{}][{}] failed to delete data blob [{}] during finalization\",\n+ snapshotId, shardId, blobName), e);\n+ }\n+ }\n }\n+ } catch (IOException e) {\n+ String message = \"Failed to finalize \" + reason + \" with shard index [\" + currentIndexGen + \"]\";\n+ throw new IndexShardSnapshotFailedException(shardId, message, e);\n }\n }\n \n@@ -1003,7 +1007,7 @@ protected long findLatestFileNameGeneration(Map<String, BlobMetaData> blobs) {\n if (!name.startsWith(DATA_BLOB_PREFIX)) {\n continue;\n }\n- name = BlobStoreIndexShardSnapshot.FileInfo.canonicalName(name);\n+ name = canonicalName(name);\n try {\n long currentGen = Long.parseLong(name.substring(DATA_BLOB_PREFIX.length()), Character.MAX_RADIX);\n if (currentGen > generation) {\n@@ -1217,7 +1221,7 @@ public void snapshot(final IndexCommit snapshotIndexCommit) {\n newSnapshotsList.add(point);\n }\n // finalize the snapshot and rewrite the snapshot index with the next sequential snapshot index\n- finalize(newSnapshotsList, fileListGeneration + 1, blobs);\n+ finalize(newSnapshotsList, fileListGeneration + 1, blobs, \"snapshot creation [\" + snapshotId + \"]\");\n snapshotStatus.moveToDone(System.currentTimeMillis());\n \n }", "filename": "server/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" }, { "diff": "@@ -29,13 +29,14 @@\n \n import java.io.IOException;\n import java.io.InputStream;\n+import java.nio.file.NoSuchFileException;\n import java.util.Arrays;\n import java.util.HashMap;\n import java.util.Map;\n \n-import static org.elasticsearch.repositories.ESBlobStoreTestCase.writeRandomBlob;\n import static org.elasticsearch.repositories.ESBlobStoreTestCase.randomBytes;\n import static org.elasticsearch.repositories.ESBlobStoreTestCase.readBlobFully;\n+import static org.elasticsearch.repositories.ESBlobStoreTestCase.writeRandomBlob;\n import static org.hamcrest.CoreMatchers.equalTo;\n import static org.hamcrest.CoreMatchers.notNullValue;\n \n@@ -116,15 +117,27 @@ public void testDeleteBlob() throws IOException {\n try (BlobStore store = newBlobStore()) {\n final String blobName = \"foobar\";\n final BlobContainer container = store.blobContainer(new BlobPath());\n- expectThrows(IOException.class, () -> container.deleteBlob(blobName));\n+ expectThrows(NoSuchFileException.class, () -> container.deleteBlob(blobName));\n \n byte[] data = randomBytes(randomIntBetween(10, scaledRandomIntBetween(1024, 1 << 16)));\n final BytesArray 
bytesArray = new BytesArray(data);\n writeBlob(container, blobName, bytesArray);\n container.deleteBlob(blobName); // should not raise\n \n // blob deleted, so should raise again\n- expectThrows(IOException.class, () -> container.deleteBlob(blobName));\n+ expectThrows(NoSuchFileException.class, () -> container.deleteBlob(blobName));\n+ }\n+ }\n+\n+ public void testDeleteBlobIgnoringIfNotExists() throws IOException {\n+ try (BlobStore store = newBlobStore()) {\n+ BlobPath blobPath = new BlobPath();\n+ if (randomBoolean()) {\n+ blobPath = blobPath.add(randomAlphaOfLengthBetween(1, 10));\n+ }\n+\n+ final BlobContainer container = store.blobContainer(blobPath);\n+ container.deleteBlobIgnoringIfNotExists(\"does_not_exist\");\n }\n }\n ", "filename": "test/framework/src/main/java/org/elasticsearch/repositories/ESBlobStoreContainerTestCase.java", "status": "modified" } ] }
{ "body": "\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 6.2.4\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: 1.8.0_101\r\n\r\n**OS version**: MacOS (Darwin Kernel Version 15.6.0)\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen specifying a wildcard index source filter, empty source fields are not returned in queries.\r\n\r\nThis actually seems very similar to https://github.com/elastic/elasticsearch/issues/23796 .\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"foo\": {\r\n \"_source\": {\r\n \"excludes\": [\r\n \"*bar\"\r\n ]\r\n },\r\n \"properties\": {\r\n \"myField\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n```\r\nPUT test/foo/1\r\n{\r\n \"myField\": []\r\n}\r\n```\r\n\r\n```\r\nPUT test/foo/2\r\n{\r\n \"myField\": [\"bar\"]\r\n}\r\n```\r\n\r\n`GET test/foo/_search`\r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"4\",\r\n \"_score\": 1,\r\n \"_source\": {}\r\n },\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"2\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"myField\": [\r\n \"bar\"\r\n ]\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nYou'll see that you get no source for document 1 even though the empty field should be returned. If you update the index to not have a wildcard source filter then everything is returned as expected:\r\n\r\n```\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"foo\": {\r\n \"_source\": {\r\n \"excludes\": [\r\n \"bar\"\r\n ]\r\n },\r\n \"properties\": {\r\n \"myField\": {\r\n \"type\": \"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n`GET test/foo/_search`\r\n```\r\n{\r\n \"took\": 0,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 2,\r\n \"max_score\": 1,\r\n \"hits\": [\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"2\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"myField\": [\r\n \"bar\"\r\n ]\r\n }\r\n },\r\n {\r\n \"_index\": \"test\",\r\n \"_type\": \"foo\",\r\n \"_id\": \"1\",\r\n \"_score\": 1,\r\n \"_source\": {\r\n \"myField\": []\r\n }\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-20T07:26:04Z" }, { "body": "Thanks for reporting. This is indeed similar to #23796 (which is about nested documents). \r\n\r\nI think that @bleskes [comment](https://github.com/elastic/elasticsearch/issues/23796#issuecomment-308058728) is still relevant. We need to make source filtering more uniform and better test it and document it. I don't mean that we should return empty objects but at least the behavior should be clearly tested and documented.\r\n\r\nI'm labelling this issue as Search as it concerns indexing/searching docs but filtering should be coherent with response filtering too (aka filter_path)", "created_at": "2018-04-20T07:30:53Z" } ], "number": 29622, "title": "Empty source fields are not returned in queries when specifying unrelated wildcard index source filters" }
{ "body": "- fixed reported bug of empty lists not being index within _source, and absent\r\n when fetched later.\r\n- fixed filtering so maps and lists that are made empty (not initially empty)\r\n are removed from their parent.\r\n- added substantial numbers of tests for all combos of filtering maps.\r\n- introduced helpers in XContentMapValuesTest, and cleaned up most tests.\r\n- #29622\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\nYES\r\n\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\nYES\r\n\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\nYES - passed\r\n\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\nrebased against master\r\n\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\nOSX - shouldnt matter\r\n\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\n\r\nNA", "number": 30327, "review_comments": [ { "body": "this was part of the bug causing empty lists to disappear from _source. The same would have been true of map (see L300).\r\n\r\nThe filter map and filter list methods now return null to tell the parent to remove them.", "created_at": "2018-05-02T10:57:39Z" }, { "body": "lots of tests, for all combos. The previous tests were unreadable, every test basically asserts against an expected map, making it easy to see exactly what is missing/extra.", "created_at": "2018-05-02T10:58:44Z" }, { "body": "need a List so code can test isEmpty at the end. L332.", "created_at": "2018-05-02T10:59:28Z" } ], "title": "Fix source field mapping wildcard deletes empty fields #29622" }
{ "commits": [ { "message": "Fix source field mapping wildcard deletes empty fields #29622\n\n- fixed reported bug of empty lists not being index within _source, and absent\n when fetched later.\n- fixed filtering so maps and lists that are made empty (not initially empty)\n are removed from their parent.\n- added substantial numbers of tests for all combos of filtering maps.\n- introduced helpers in XContentMapValuesTest, and cleaned up most tests.\n- #29622" } ], "files": [ { "diff": "@@ -32,6 +32,8 @@\n \n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.Collections;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n@@ -187,8 +189,8 @@ public static Map<String, Object> filter(Map<String, ?> map, String[] includes,\n // we want all sub properties to match as soon as an object matches\n \n return (map) -> filter(map,\n- include, 0,\n- exclude, 0,\n+ include,\n+ exclude,\n matchAllAutomaton);\n }\n \n@@ -208,6 +210,16 @@ private static int step(CharacterRunAutomaton automaton, String key, int state)\n return state;\n }\n \n+ private static Map<String, Object> filter(Map<String, ?> map,\n+ CharacterRunAutomaton includeAutomaton,\n+ CharacterRunAutomaton excludeAutomaton,\n+ CharacterRunAutomaton matchAllAutomaton) {\n+ final Map<String, Object> result = filter(map,\n+ includeAutomaton, 0, excludeAutomaton, 0, matchAllAutomaton);\n+ // $result will be null if all its properties were not included/excluded etc.\n+ return null != result ? result : Collections.emptyMap();\n+ }\n+\n private static Map<String, Object> filter(Map<String, ?> map,\n CharacterRunAutomaton includeAutomaton, int initialIncludeState,\n CharacterRunAutomaton excludeAutomaton, int initialExcludeState,\n@@ -256,15 +268,16 @@ private static Map<String, Object> filter(Map<String, ?> map,\n Map<String, Object> valueAsMap = (Map<String, Object>) value;\n Map<String, Object> filteredValue = filter(valueAsMap,\n subIncludeAutomaton, subIncludeState, excludeAutomaton, excludeState, matchAllAutomaton);\n- if (includeAutomaton.isAccept(includeState) || filteredValue.isEmpty() == false) {\n- filtered.put(key, filteredValue);\n+ if(null!=filteredValue) {\n+ if (includeAutomaton.isAccept(includeState) || filteredValue.isEmpty() == false) {\n+ filtered.put(key, filteredValue);\n+ }\n }\n \n- } else if (value instanceof Iterable) {\n-\n- List<Object> filteredValue = filter((Iterable<?>) value,\n+ } else if (value instanceof List) {\n+ List<Object> filteredValue = filter((List<?>) value,\n subIncludeAutomaton, subIncludeState, excludeAutomaton, excludeState, matchAllAutomaton);\n- if (filteredValue.isEmpty() == false) {\n+ if (null!=filteredValue) {\n filtered.put(key, filteredValue);\n }\n \n@@ -277,18 +290,23 @@ private static Map<String, Object> filter(Map<String, ?> map,\n }\n \n }\n+ }\n \n+ // if we have filtered away (deleted all fields) of the input map return null.\n+ if(filtered.isEmpty() && !map.isEmpty()) {\n+ filtered = null;\n }\n+\n return filtered;\n }\n \n- private static List<Object> filter(Iterable<?> iterable,\n+ private static List<Object> filter(List<?> from,\n CharacterRunAutomaton includeAutomaton, int initialIncludeState,\n CharacterRunAutomaton excludeAutomaton, int initialExcludeState,\n CharacterRunAutomaton matchAllAutomaton) {\n List<Object> filtered = new ArrayList<>();\n boolean isInclude = includeAutomaton.isAccept(initialIncludeState);\n- for (Object value : iterable) {\n+ for (Object value : from) {\n if (value instanceof 
Map) {\n int includeState = includeAutomaton.step(initialIncludeState, '.');\n int excludeState = initialExcludeState;\n@@ -297,20 +315,26 @@ private static List<Object> filter(Iterable<?> iterable,\n }\n Map<String, Object> filteredValue = filter((Map<String, ?>)value,\n includeAutomaton, includeState, excludeAutomaton, excludeState, matchAllAutomaton);\n- if (filteredValue.isEmpty() == false) {\n+ if (null != filteredValue) {\n filtered.add(filteredValue);\n }\n- } else if (value instanceof Iterable) {\n- List<Object> filteredValue = filter((Iterable<?>) value,\n+ } else if (value instanceof List) {\n+ List<Object> filteredValue = filter((List<?>) value,\n includeAutomaton, initialIncludeState, excludeAutomaton, initialExcludeState, matchAllAutomaton);\n- if (filteredValue.isEmpty() == false) {\n+ if (null!=filteredValue) {\n filtered.add(filteredValue);\n }\n } else if (isInclude) {\n // #22557: only accept this array value if the key we are on is accepted:\n filtered.add(value);\n }\n }\n+\n+ // if list was initially not empty and everything was filtered away delete it also.\n+ if(filtered.isEmpty() && !from.isEmpty()) {\n+ filtered = null;\n+ }\n+\n return filtered;\n }\n ", "filename": "server/src/main/java/org/elasticsearch/common/xcontent/support/XContentMapValues.java", "status": "modified" }, { "diff": "@@ -28,14 +28,22 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.junit.Assert;\n \n+import java.io.ByteArrayInputStream;\n+import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n+import java.io.ObjectInputStream;\n+import java.io.ObjectOutputStream;\n+import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n import java.util.HashMap;\n+import java.util.LinkedHashMap;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n+import java.util.stream.Collectors;\n \n import static org.elasticsearch.common.xcontent.XContentHelper.convertToMap;\n import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;\n@@ -48,10 +56,235 @@\n \n public class XContentMapValuesTests extends AbstractFilteringTestCase {\n \n+ private static final String NO_INCLUDE = null;\n+ private static final String NO_EXCLUDE = null;\n+ private static final Map<String, Object> EMPTY = Collections.emptyMap();\n+\n+ public void testFieldWithEmptyObject() {\n+ Map<String, Object> root = map(\"abc\", new HashMap<>());\n+\n+ filterAndCheck(root, NO_INCLUDE, NO_EXCLUDE, root);\n+ }\n+\n+ public void testEmptyObjectInclude() {\n+ Map<String, Object> root = new HashMap<>();\n+\n+ filterAndCheck(root, \"qqq\", NO_EXCLUDE, root);\n+ }\n+\n+ public void testEmptyObjectExclude() {\n+ Map<String, Object> root = new HashMap<>();\n+\n+ filterAndCheck(root, NO_INCLUDE, \"qqq\", root);\n+ }\n+\n+ public void testFieldIncludeExact() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"abc\",\n+ NO_EXCLUDE,\n+ map(\"abc\", 1));\n+ }\n+\n+ public void testFieldIncludeExactIgnoresPartialPrefix() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"a\",\n+ NO_EXCLUDE,\n+ EMPTY);\n+ }\n+\n+ public void testFieldIncludeExactUnmatched() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"qqq\",\n+ NO_EXCLUDE,\n+ EMPTY);\n+ }\n+\n+ public void testFieldIncludePrefixWildcard() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"a*\",\n+ NO_EXCLUDE,\n+ map(\"abc\", 1));\n+ }\n+\n+ public void 
testFieldIncludeSuffixWildcard() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"*c\",\n+ NO_EXCLUDE,\n+ map(\"abc\", 1));\n+ }\n+\n+ public void testFieldIncludeWildcardUnmatched() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ \"zz*\",\n+ NO_EXCLUDE,\n+ EMPTY);\n+ }\n+\n+ public void testFieldExcludeExact() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"xyz\",\n+ map(\"abc\", 1));\n+ }\n+\n+ public void testFieldExcludeExactIgnoresPartialPrefix() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"a\");\n+ }\n+\n+ public void testFieldExcludeExactUmmatched() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"qqq\");\n+ }\n+\n+ public void testFieldExcludeSuffixWildcard() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"*z\",\n+ map(\"abc\", 1));\n+ }\n+\n+ public void testFieldExcludePrefixWildcard() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"x*\",\n+ map(\"abc\", 1));\n+ }\n+\n+ public void testFieldExcludeWildcardUnmatched() {\n+ filterAndCheck(map(\"abc\", 1, \"xyz\", 2),\n+ NO_INCLUDE,\n+ \"*zzz\");\n+ }\n+\n+ public void testIncludeNestedExact() {\n+ filterAndCheck( map(\"root\", map(\"leaf\", \"value\")),\n+ \"root.leaf\",\n+ NO_EXCLUDE);\n+ }\n+\n+ public void testIncludeNestedExact2() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\"), \"lost\", 123),\n+ \"root.leaf\",\n+ NO_EXCLUDE,\n+ map(\"root\", map(\"leaf\", \"value\")));\n+ }\n+\n+ public void testIncludeNestedExactWildcard() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\"), \"lost\", 123),\n+ \"root.*\",\n+ NO_EXCLUDE,\n+ map(\"root\", map(\"leaf\", \"value\")));\n+ }\n+\n+ public void testNestedExcludeExact() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\")),\n+ NO_INCLUDE,\n+ \"root.leaf\",\n+ EMPTY);\n+ }\n+\n+ public void testNestedExcludeExact2() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\", \"kept\", 123)),\n+ NO_INCLUDE,\n+ \"root.leaf\",\n+ map(\"root\", map(\"kept\", 123)));\n+ }\n+\n+ public void testNestedExcludeWildcard() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\")),\n+ NO_INCLUDE,\n+ \"root.*\",\n+ EMPTY);\n+ }\n+\n+ public void testNestedExcludeWildcard2() {\n+ filterAndCheck(map(\"root\", map(\"leaf\", \"value\"), \"kept\", 123),\n+ NO_INCLUDE,\n+ \"root.*\",\n+ map(\"kept\", 123));\n+ }\n+\n+ public void testArrayIncludeExact() {\n+ filterAndCheck(map(\"root\", list(\"leaf\")), \"root\", NO_EXCLUDE);\n+ }\n+\n+ public void testArrayIncludeExact2() {\n+ filterAndCheck(map(\"root\", list(\"leaf\"), \"lost\", 123),\n+ \"root\",\n+ NO_EXCLUDE,\n+ map(\"root\", list(\"leaf\")));\n+ }\n+\n+ public void testArrayNestedIncludeExact() {\n+ filterAndCheck(map(\"root\", list(map(\"leaf\", \"value\"))),\n+ \"root.leaf\",\n+ NO_EXCLUDE);\n+ }\n+\n+ public void testArrayNestedIncludeExact2() {\n+ filterAndCheck(map(\"root\", list(map(\"leaf\", \"value\")), \"lost\", 123),\n+ \"root.leaf\",\n+ NO_EXCLUDE,\n+ map(\"root\", list(map(\"leaf\", \"value\"))));\n+ }\n+\n+ public void testArrayIncludeExactWildcard() {\n+ filterAndCheck(map(\"root\", list(\"leaf\", \"2leaf2\"),\n+ \"lost\", 123),\n+ \"root*\",\n+ NO_EXCLUDE,\n+ map(\"root\", list(\"leaf\", \"2leaf2\")));\n+ }\n+\n+ public void testArrayExcludeExact() {\n+ filterAndCheck(map(\"root\", list(\"leaf\")),\n+ NO_INCLUDE,\n+ \"root\",\n+ EMPTY);\n+ }\n+\n+ public void testArrayExcludeExact2() {\n+ filterAndCheck(map(\"root\", list(\"leaf\"),\n+ \"lost\", 123),\n+ NO_INCLUDE,\n+ \"root\",\n+ 
map(\"lost\", 123));\n+ }\n+\n+ public void testArrayExcludeExact3() {\n+ filterAndCheck(map(\"root\", list(\"leaf\", \"leaf2\"),\n+ \"lost\", 123),\n+ NO_INCLUDE,\n+ \"root\",\n+ map(\"lost\", 123));\n+ }\n+\n+ public void testArrayExcludeWildcard() {\n+ filterAndCheck(map(\"root\", list(\"leaf\")),\n+ NO_INCLUDE,\n+ \"root.*\");\n+ }\n+\n+ public void testArrayExcludeWildcard2() {\n+ filterAndCheck(map(\"root\", list(\"leaf\"), \"lost\", 123),\n+ NO_INCLUDE,\n+ \"ro*\",\n+ map(\"lost\", 123));\n+ }\n+\n+ public void testArrayExcludeWildcard3() {\n+ filterAndCheck(map(\"root\", list(\"leaf\", \"2leaf2\"),\n+ \"lost\", 123),\n+ NO_INCLUDE,\n+ \"root.*\");\n+ }\n+\n @Override\n protected void testFilter(Builder expected, Builder actual, Set<String> includes, Set<String> excludes) throws IOException {\n final XContentType xContentType = randomFrom(XContentType.values());\n- final boolean humanReadable = randomBoolean();\n \n String[] sourceIncludes;\n if (includes == null) {\n@@ -65,10 +298,10 @@ protected void testFilter(Builder expected, Builder actual, Set<String> includes\n } else {\n sourceExcludes = excludes.toArray(new String[excludes.size()]);\n }\n-\n- assertEquals(\"Filtered map must be equal to the expected map\",\n- toMap(expected, xContentType, humanReadable),\n- XContentMapValues.filter(toMap(actual, xContentType, humanReadable), sourceIncludes, sourceExcludes));\n+ filterAndCheck(toMap(actual, xContentType, true),\n+ sourceIncludes,\n+ sourceExcludes,\n+ toMap(expected, xContentType, true));\n }\n \n @SuppressWarnings({\"unchecked\"})\n@@ -200,190 +433,186 @@ public void testExtractRawValue() throws Exception {\n assertThat(XContentMapValues.extractRawValues(\"path1.xxx.path2.yyy.test\", map).get(0).toString(), equalTo(\"value\"));\n }\n \n- public void testPrefixedNamesFilteringTest() {\n+ public void testPrefixedNames() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"obj\", \"value\");\n map.put(\"obj_name\", \"value_name\");\n- Map<String, Object> filteredMap = XContentMapValues.filter(map, new String[]{\"obj_name\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat((String) filteredMap.get(\"obj_name\"), equalTo(\"value_name\"));\n+ filterAndCheck(map, \"obj_name\", NO_EXCLUDE, map(\"obj_name\", \"value_name\"));\n }\n \n-\n- @SuppressWarnings(\"unchecked\")\n- public void testNestedFiltering() {\n+ public void testIncludeNested1() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"field\", \"value\");\n map.put(\"array\",\n- Arrays.asList(\n- 1,\n- new HashMap<String, Object>() {{\n- put(\"nested\", 2);\n- put(\"nested_2\", 3);\n- }}));\n- Map<String, Object> filteredMap = XContentMapValues.filter(map, new String[]{\"array.nested\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n-\n- assertThat(((List<?>) filteredMap.get(\"array\")), hasSize(1));\n- assertThat(((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(0)).size(), equalTo(1));\n- assertThat((Integer) ((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(0)).get(\"nested\"), equalTo(2));\n-\n- filteredMap = XContentMapValues.filter(map, new String[]{\"array.*\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((List<?>) filteredMap.get(\"array\")), hasSize(1));\n- assertThat(((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(0)).size(), equalTo(2));\n-\n- map.clear();\n+ Arrays.asList(\n+ 1,\n+ new HashMap<String, Object>() {{\n+ put(\"nested\", 2);\n+ put(\"nested_2\", 
3);\n+ }}));\n+ filterAndCheck(map,\n+ \"array.nested\",\n+ NO_EXCLUDE,\n+ map(\"array\", list(map(\"nested\", 2))));\n+ }\n+\n+ public void testIncludeNested2() {\n+ Map<String, Object> map = new HashMap<>();\n map.put(\"field\", \"value\");\n- map.put(\"obj\",\n+ map.put(\"array\",\n+ Arrays.asList(\n+ 1,\n new HashMap<String, Object>() {{\n- put(\"field\", \"value\");\n- put(\"field2\", \"value2\");\n- }});\n- filteredMap = XContentMapValues.filter(map, new String[]{\"obj.field\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(1));\n- assertThat((String) ((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field\"), equalTo(\"value\"));\n+ put(\"nested\", 2);\n+ put(\"nested_2\", 3);\n+ }}));\n+ filterAndCheck(map,\n+ \"array.*\",\n+ NO_EXCLUDE,\n+ map(\"array\", list(map(\"nested\", 2, \"nested_2\", 3))));\n+ }\n \n- filteredMap = XContentMapValues.filter(map, new String[]{\"obj.*\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(2));\n- assertThat((String) ((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field\"), equalTo(\"value\"));\n- assertThat((String) ((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field2\"), equalTo(\"value2\"));\n+ public void testIncludeNested3() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n \n+ filterAndCheck(map,\n+ \"obj.field\",\n+ NO_EXCLUDE,\n+ map(\"obj\", map(\"field\", \"value\")));\n }\n \n- @SuppressWarnings(\"unchecked\")\n- public void testCompleteObjectFiltering() {\n+ public void testIncludeNested4() {\n+ filterAndCheck(map(\"field\", \"value\",\n+ \"obj\", map(\"field\", \"value\", \"field2\", \"value2\")),\n+ \"obj.*\",\n+ NO_EXCLUDE,\n+ map(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\")));\n+ }\n+\n+ public void testIncludeAndExclude() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"field\", \"value\");\n- map.put(\"obj\",\n- new HashMap<String, Object>() {{\n- put(\"field\", \"value\");\n- put(\"field2\", \"value2\");\n- }});\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n map.put(\"array\",\n- Arrays.asList(\n- 1,\n- new HashMap<String, Object>() {{\n- put(\"field\", \"value\");\n- put(\"field2\", \"value2\");\n- }}));\n+ list(1,\n+ map(\"field\", \"value\", \"field2\", \"value2\")));\n \n- Map<String, Object> filteredMap = XContentMapValues.filter(map, new String[]{\"obj\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(2));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field\").toString(), equalTo(\"value\"));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field2\").toString(), equalTo(\"value2\"));\n-\n-\n- filteredMap = XContentMapValues.filter(map, new String[]{\"obj\"}, new String[]{\"*.field2\"});\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).get(\"field\").toString(), equalTo(\"value\"));\n+ filterAndCheck(map,\n+ \"obj\",\n+ NO_EXCLUDE,\n+ map(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\")));\n+ }\n \n+ public void testIncludeAndExclude2() {\n+ Map<String, 
Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"array\",\n+ list(1,\n+ map(\"field\", \"value\", \"field2\", \"value2\")));\n+\n+ filterAndCheck(map,\n+ \"obj\",\n+ \"*.field2\",\n+ map(\"obj\", map(\"field\", \"value\")));\n+ }\n+\n+ public void testIncludeAndExclude3() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"array\",\n+ list(1,\n+ map(\"field\", \"value\", \"field2\", \"value2\")));\n+\n+ filterAndCheck(map,\n+ \"array\",\n+ NO_EXCLUDE,\n+ map(\"array\",\n+ list(1,\n+ map(\"field\", \"value\", \"field2\", \"value2\"))));\n+ }\n \n- filteredMap = XContentMapValues.filter(map, new String[]{\"array\"}, new String[]{});\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((List) filteredMap.get(\"array\")).size(), equalTo(2));\n- assertThat((Integer) ((List) filteredMap.get(\"array\")).get(0), equalTo(1));\n- assertThat(((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(1)).size(), equalTo(2));\n+ public void testIncludeAndExclude4() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"array\",\n+ list(1,\n+ map(\"field\", \"value\", \"field2\", \"value2\")));\n \n- filteredMap = XContentMapValues.filter(map, new String[]{\"array\"}, new String[]{\"*.field2\"});\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(((List<?>) filteredMap.get(\"array\")), hasSize(2));\n- assertThat((Integer) ((List) filteredMap.get(\"array\")).get(0), equalTo(1));\n- assertThat(((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(1)).size(), equalTo(1));\n- assertThat(((Map<String, Object>) ((List) filteredMap.get(\"array\")).get(1)).get(\"field\").toString(), equalTo(\"value\"));\n+ filterAndCheck(map,\n+ \"array\",\n+ \"*.field2\",\n+ map(\"array\", list(1, map(\"field\", \"value\"))));\n }\n \n @SuppressWarnings(\"unchecked\")\n public void testFilterIncludesUsingStarPrefix() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"field\", \"value\");\n- map.put(\"obj\",\n- new HashMap<String, Object>() {{\n- put(\"field\", \"value\");\n- put(\"field2\", \"value2\");\n- }});\n- map.put(\"n_obj\",\n- new HashMap<String, Object>() {{\n- put(\"n_field\", \"value\");\n- put(\"n_field2\", \"value2\");\n- }});\n-\n- Map<String, Object> filteredMap = XContentMapValues.filter(map, new String[]{\"*.field2\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(filteredMap, hasKey(\"obj\"));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")), hasKey(\"field2\"));\n-\n- // only objects\n- filteredMap = XContentMapValues.filter(map, new String[]{\"*.*\"}, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(2));\n- assertThat(filteredMap, hasKey(\"obj\"));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")).size(), equalTo(2));\n- assertThat(filteredMap, hasKey(\"n_obj\"));\n- assertThat(((Map<String, Object>) filteredMap.get(\"n_obj\")).size(), equalTo(2));\n-\n-\n- filteredMap = XContentMapValues.filter(map, new String[]{\"*\"}, new String[]{\"*.*2\"});\n- assertThat(filteredMap.size(), equalTo(3));\n- assertThat(filteredMap, hasKey(\"field\"));\n- assertThat(filteredMap, hasKey(\"obj\"));\n- 
assertThat(((Map) filteredMap.get(\"obj\")).size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"obj\")), hasKey(\"field\"));\n- assertThat(filteredMap, hasKey(\"n_obj\"));\n- assertThat(((Map<String, Object>) filteredMap.get(\"n_obj\")).size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredMap.get(\"n_obj\")), hasKey(\"n_field\"));\n-\n- }\n-\n- public void testFilterWithEmptyIncludesExcludes() {\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"n_obj\", map(\"n_field\", \"value\", \"n_field2\", \"value2\"));\n+\n+ filterAndCheck(map, \"*.field2\", NO_EXCLUDE, map(\"obj\", map(\"field2\", \"value2\")));\n+ }\n+\n+ public void testFilterIncludesUsingStarPrefix2() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"field\", \"value\");\n- Map<String, Object> filteredMap = XContentMapValues.filter(map, Strings.EMPTY_ARRAY, Strings.EMPTY_ARRAY);\n- assertThat(filteredMap.size(), equalTo(1));\n- assertThat(filteredMap.get(\"field\").toString(), equalTo(\"value\"));\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"n_obj\", map(\"n_field\", \"value\", \"n_field2\", \"value2\"));\n+\n+ Map<String, Object> expected = new HashMap<>();\n+ expected.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ expected.put(\"n_obj\", map(\"n_field\", \"value\", \"n_field2\", \"value2\"));\n+\n+ filterAndCheck(map, \"*.*\", NO_EXCLUDE, expected);\n }\n \n- public void testThatFilterIncludesEmptyObjectWhenUsingIncludes() throws Exception {\n- XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n- .startObject(\"obj\")\n- .endObject()\n- .endObject();\n+ public void testFilterIncludesUsingStarPrefix3() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ map.put(\"obj\", map(\"field\", \"value\", \"field2\", \"value2\"));\n+ map.put(\"n_obj\", map(\"n_field\", \"value\", \"n_field2\", \"value2\"));\n \n- Tuple<XContentType, Map<String, Object>> mapTuple = convertToMap(BytesReference.bytes(builder), true, builder.contentType());\n- Map<String, Object> filteredSource = XContentMapValues.filter(mapTuple.v2(), new String[]{\"obj\"}, Strings.EMPTY_ARRAY);\n+ Map<String, Object> expected = new HashMap<>();\n+ expected.put(\"field\", \"value\");\n+ expected.put(\"obj\", map(\"field\", \"value\"));\n+ expected.put(\"n_obj\", map(\"n_field\", \"value\"));\n \n- assertThat(mapTuple.v2(), equalTo(filteredSource));\n+ filterAndCheck(map, \"*\", \"*.*2\", expected);\n }\n \n- public void testThatFilterIncludesEmptyObjectWhenUsingExcludes() throws Exception {\n+ public void testFilterWithEmptyIncludesEmptyExcludes() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"value\");\n+ filterAndCheck(map, NO_INCLUDE, NO_EXCLUDE, map);\n+ }\n+\n+ public void testFilterWithEmptyIncludesEmptyExcludes_EmptyString() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"field\", \"\");\n+ filterAndCheck(map, NO_INCLUDE, NO_EXCLUDE, map);\n+ }\n+\n+ public void testThatFilterIncludesEmptyObjectWhenUsingIncludes() throws Exception {\n XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n .startObject(\"obj\")\n .endObject()\n .endObject();\n \n- Tuple<XContentType, Map<String, Object>> mapTuple = convertToMap(BytesReference.bytes(builder), true, builder.contentType());\n- Map<String, Object> filteredSource = XContentMapValues.filter(mapTuple.v2(), Strings.EMPTY_ARRAY, new String[]{\"nonExistingField\"});\n-\n- 
assertThat(mapTuple.v2(), equalTo(filteredSource));\n+ filterAndCheck(builder, \"obj\", NO_EXCLUDE);\n }\n \n- public void testNotOmittingObjectsWithExcludedProperties() throws Exception {\n+ public void testThatFilterIncludesEmptyObjectWhenUsingExcludes() throws Exception {\n XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n .startObject(\"obj\")\n- .field(\"f1\", \"v1\")\n .endObject()\n .endObject();\n-\n- Tuple<XContentType, Map<String, Object>> mapTuple = convertToMap(BytesReference.bytes(builder), true, builder.contentType());\n- Map<String, Object> filteredSource = XContentMapValues.filter(mapTuple.v2(), Strings.EMPTY_ARRAY, new String[]{\"obj.f1\"});\n-\n- assertThat(filteredSource.size(), equalTo(1));\n- assertThat(filteredSource, hasKey(\"obj\"));\n- assertThat(((Map) filteredSource.get(\"obj\")).size(), equalTo(0));\n+ filterAndCheck(builder, NO_INCLUDE, \"nonExistingField\");\n }\n \n @SuppressWarnings({\"unchecked\"})\n@@ -398,25 +627,7 @@ public void testNotOmittingObjectWithNestedExcludedObject() throws Exception {\n .endObject();\n \n // implicit include\n- Tuple<XContentType, Map<String, Object>> mapTuple = convertToMap(BytesReference.bytes(builder), true, builder.contentType());\n- Map<String, Object> filteredSource = XContentMapValues.filter(mapTuple.v2(), Strings.EMPTY_ARRAY, new String[]{\"*.obj2\"});\n-\n- assertThat(filteredSource.size(), equalTo(1));\n- assertThat(filteredSource, hasKey(\"obj1\"));\n- assertThat(((Map) filteredSource.get(\"obj1\")).size(), equalTo(0));\n-\n- // explicit include\n- filteredSource = XContentMapValues.filter(mapTuple.v2(), new String[]{\"obj1\"}, new String[]{\"*.obj2\"});\n- assertThat(filteredSource.size(), equalTo(1));\n- assertThat(filteredSource, hasKey(\"obj1\"));\n- assertThat(((Map) filteredSource.get(\"obj1\")).size(), equalTo(0));\n-\n- // wild card include\n- filteredSource = XContentMapValues.filter(mapTuple.v2(), new String[]{\"*.obj2\"}, new String[]{\"*.obj3\"});\n- assertThat(filteredSource.size(), equalTo(1));\n- assertThat(filteredSource, hasKey(\"obj1\"));\n- assertThat(((Map<String, Object>) filteredSource.get(\"obj1\")), hasKey(\"obj2\"));\n- assertThat(((Map) ((Map) filteredSource.get(\"obj1\")).get(\"obj2\")).size(), equalTo(0));\n+ filterAndCheck(builder, NO_INCLUDE, \"*.obj2\", EMPTY);\n }\n \n @SuppressWarnings({\"unchecked\"})\n@@ -428,17 +639,9 @@ public void testIncludingObjectWithNestedIncludedObject() throws Exception {\n .endObject()\n .endObject();\n \n- Tuple<XContentType, Map<String, Object>> mapTuple = convertToMap(BytesReference.bytes(builder), true, builder.contentType());\n- Map<String, Object> filteredSource = XContentMapValues.filter(mapTuple.v2(), new String[]{\"*.obj2\"}, Strings.EMPTY_ARRAY);\n-\n- assertThat(filteredSource.size(), equalTo(1));\n- assertThat(filteredSource, hasKey(\"obj1\"));\n- assertThat(((Map) filteredSource.get(\"obj1\")).size(), equalTo(1));\n- assertThat(((Map<String, Object>) filteredSource.get(\"obj1\")), hasKey(\"obj2\"));\n- assertThat(((Map) ((Map) filteredSource.get(\"obj1\")).get(\"obj2\")).size(), equalTo(0));\n+ filterAndCheck(builder, \"*.obj2\", NO_EXCLUDE);\n }\n \n-\n public void testDotsInFieldNames() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"foo.bar\", 2);\n@@ -448,49 +651,188 @@ public void testDotsInFieldNames() {\n map.put(\"quux\", 5);\n \n // dots in field names in includes\n- Map<String, Object> filtered = XContentMapValues.filter(map, new String[] {\"foo\"}, new String[0]);\n Map<String, Object> expected = new 
HashMap<>(map);\n expected.remove(\"quux\");\n- assertEquals(expected, filtered);\n+ filterAndCheck(map, \"foo\", NO_EXCLUDE, expected);\n \n // dots in field names in excludes\n- filtered = XContentMapValues.filter(map, new String[0], new String[] {\"foo\"});\n expected = new HashMap<>(map);\n expected.keySet().retainAll(Collections.singleton(\"quux\"));\n- assertEquals(expected, filtered);\n+ filterAndCheck(map, NO_INCLUDE, \"foo\", expected);\n }\n \n public void testSupplementaryCharactersInPaths() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"搜索\", 2);\n map.put(\"指数\", 3);\n \n- assertEquals(Collections.singletonMap(\"搜索\", 2), XContentMapValues.filter(map, new String[] {\"搜索\"}, new String[0]));\n- assertEquals(Collections.singletonMap(\"指数\", 3), XContentMapValues.filter(map, new String[0], new String[] {\"搜索\"}));\n+ filterAndCheck(map, \"搜索\", NO_EXCLUDE, map(\"搜索\", 2));\n+ }\n+\n+ public void testSupplementaryCharactersInPaths2() {\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"搜索\", 2);\n+ map.put(\"指数\", 3);\n+\n+ filterAndCheck(map, NO_INCLUDE, \"搜索\", map(\"指数\", 3));\n }\n \n public void testSharedPrefixes() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"foobar\", 2);\n map.put(\"foobaz\", 3);\n \n- assertEquals(Collections.singletonMap(\"foobar\", 2), XContentMapValues.filter(map, new String[] {\"foobar\"}, new String[0]));\n- assertEquals(Collections.singletonMap(\"foobaz\", 3), XContentMapValues.filter(map, new String[0], new String[] {\"foobar\"}));\n+ filterAndCheck(map, \"foobar\", NO_EXCLUDE, map(\"foobar\", 2));\n+ }\n+\n+ public void testSharedPrefixes2() {\n+ filterAndCheck(map(\"foobar\", 2, \"foobaz\", 3),\n+ NO_INCLUDE,\n+ \"foobar\",\n+ map(\"foobaz\", 3));\n }\n \n public void testPrefix() {\n Map<String, Object> map = new HashMap<>();\n map.put(\"photos\", Arrays.asList(new String[] {\"foo\", \"bar\"}));\n map.put(\"photosCount\", 2);\n \n- Map<String, Object> filtered = XContentMapValues.filter(map, new String[] {\"photosCount\"}, new String[0]);\n- Map<String, Object> expected = new HashMap<>();\n- expected.put(\"photosCount\", 2);\n- assertEquals(expected, filtered);\n+ filterAndCheck(map, \"photosCount\", NO_EXCLUDE, map(\"photosCount\", 2));\n+ }\n+\n+ public void testEmptyArray() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"arrayField\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, NO_EXCLUDE);\n+ }\n+\n+ public void testEmptyArray_UnmatchedExclude() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"myField\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, \"bar\");\n+ }\n+\n+ public void testEmptyArray_UnmatchedExcludeWildcard() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"myField\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, \"*bar\");\n+ }\n+\n+ public void testOneElementArray() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"arrayField\", \"value1\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, NO_EXCLUDE);\n+ }\n+\n+ public void testOneElementArray_UnmatchedExclude() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"arrayField\", \"value1\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, \"different\");\n+ }\n+\n+ public void 
testOneElementArray_UnmatchedExcludeWildcard() throws Exception {\n+ XContentBuilder builder = XContentFactory.jsonBuilder().startObject()\n+ .array(\"arrayField\", \"\")\n+ .endObject();\n+\n+ filterAndCheck(builder, NO_INCLUDE, \"different*\");\n }\n \n private static Map<String, Object> toMap(Builder test, XContentType xContentType, boolean humanReadable) throws IOException {\n ToXContentObject toXContent = (builder, params) -> test.apply(builder);\n return convertToMap(toXContent(toXContent, xContentType, humanReadable), true, xContentType).v2();\n }\n+\n+ private Map<String, Object> toMap(final XContentBuilder builder) {\n+ return convertToMap(BytesReference.bytes(builder), true, builder.contentType()).v2();\n+ }\n+\n+ private Map<String, Object> filter(final Map<String, Object> input, final String include, final String exclude) {\n+ return filter(input, array(include), array(exclude));\n+ }\n+\n+ private Map<String, Object> filter(final XContentBuilder input, final String include, final String exclude) {\n+ return filter(input, array(include), array(exclude));\n+ }\n+\n+ private String[] array(final String...elements) {\n+ return elements.length == 1 && elements[0] == null ? Strings.EMPTY_ARRAY : elements;\n+ }\n+\n+ private Map<String, Object> filter(final Map<String, Object> input, final String[] includes, final String[] excludes) {\n+ return XContentMapValues.filter(input, includes, excludes);\n+ }\n+\n+ private void filterAndCheck(final Map<String, Object> input, final String include, final String exclude) {\n+ filterAndCheck(input, array(include), array(exclude), Collections.unmodifiableMap(input));\n+ }\n+\n+ private void filterAndCheck(final Map<String, Object> input,\n+ final String include,\n+ final String exclude,\n+ final Map<String, Object> expected) {\n+ filterAndCheck(input, array(include), array(exclude), expected);\n+ }\n+\n+ private void filterAndCheck(final Map<String, Object> input,\n+ final String[] includes,\n+ final String[] excludes,\n+ final Map<String, Object> expected) {\n+ Map<String, Object> filteredSource = filter(input, includes, excludes);\n+\n+ final StringBuilder b = new StringBuilder();\n+ if(null!=includes && includes.length > 0) {\n+ b.append(\"includes=\")\n+ .append(Arrays.stream(includes).collect(Collectors.joining(\", \")))\n+ .append(' ');\n+ }\n+ if(null!=excludes && excludes.length > 0) {\n+ b.append(\" excludes=\")\n+ .append(Arrays.stream(excludes).collect(Collectors.joining(\", \")))\n+ .append(' ');\n+ }\n+ b.append(\"input: \").append(input);\n+\n+ Assert.assertEquals(b.toString(), expected, filteredSource);\n+ }\n+\n+ private Map<String, Object> filter(final XContentBuilder input, final String[] includes, final String[] excludes) {\n+ return filter(toMap(input), includes, excludes);\n+ }\n+\n+ private void filterAndCheck(final XContentBuilder input, final String include, final String exclude) {\n+ filterAndCheck(toMap(input), include, exclude);\n+ }\n+\n+ private void filterAndCheck(final XContentBuilder input,\n+ final String include,\n+ final String exclude,\n+ final Map<String, Object> expected) {\n+ filterAndCheck(toMap(input), include, exclude, expected);\n+ }\n+\n+ private <T> List<T> list(final T...elements) {\n+ return new ArrayList<>(Arrays.asList(elements));\n+ }\n+\n+ private <K, V> Map<K, V> map(final K key, final V value) {\n+ return Collections.singletonMap(key, value);\n+ }\n+\n+ private <K, V> Map<K, V> map(final K key1, final V value1, final K key2, final V value2) {\n+ final Map<K, V> map = new LinkedHashMap<>();\n+ 
map.put(key1, value1);\n+ map.put(key2, value2);\n+ return map;\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/common/xcontent/support/XContentMapValuesTests.java", "status": "modified" } ] }
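A minimal usage sketch may help readers see the behavioral change without tracing the whole diff. The class name and sample values below are made up; the call goes through the public `XContentMapValues.filter(map, includes, excludes)` entry point exercised in the tests, and the expected outcomes mirror `testFieldWithEmptyObject` and `testNestedExcludeExact` above: an object that is empty in the original `_source` is kept, while an object that only becomes empty because its contents were filtered out is now removed from its parent.

```java
import org.elasticsearch.common.xcontent.support.XContentMapValues;

import java.util.HashMap;
import java.util.Map;

public class SourceFilteringSketch {
    public static void main(String[] args) {
        // Case 1: an object that is already empty in the source survives filtering untouched.
        Map<String, Object> source = new HashMap<>();
        source.put("obj", new HashMap<>());
        Map<String, Object> unchanged = XContentMapValues.filter(source, new String[0], new String[0]);
        System.out.println(unchanged); // {obj={}} -- the originally empty object is preserved

        // Case 2: an object that only becomes empty because its single field was excluded
        // is removed from its parent instead of lingering as a stray empty map.
        Map<String, Object> root = new HashMap<>();
        root.put("leaf", "value");
        Map<String, Object> source2 = new HashMap<>();
        source2.put("root", root);
        Map<String, Object> filtered = XContentMapValues.filter(source2, new String[0], new String[]{"root.leaf"});
        System.out.println(filtered); // {} -- with this change the emptied "root" object is dropped
    }
}
```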
{ "body": "Document that we are using by default `ec2.us-east-1.amazonaws.com` as\r\nthe EC2 endpoint if not explicitly set.\r\n\r\nAlso switch to non deprecated method when building the EC2 Client.\r\n\r\nCloses #27464.\r\n", "comments": [ { "body": "Thanks @rjernst! I pushed a new commit to address your points.", "created_at": "2017-12-22T16:22:20Z" }, { "body": "@rjernst Could you tell me what you think of the PR now?", "created_at": "2018-02-21T14:32:27Z" }, { "body": "@rjernst I pushed a new change. LMK. Thanks!", "created_at": "2018-02-22T10:25:07Z" }, { "body": "jenkins test this please", "created_at": "2018-02-22T11:23:24Z" }, { "body": "@dadoonet I think you need to sync with master, it looks like CI failed because it can't find gradlew. ", "created_at": "2018-02-25T22:58:11Z" }, { "body": "Thanks @rjernst for letting me know. I was confused by the failure.", "created_at": "2018-02-26T08:20:59Z" }, { "body": "So. It's still failing in `Ec2DiscoveryClusterFormationTests`\r\n\r\n```java\r\n return Settings.builder().put(super.nodeSettings(nodeOrdinal))\r\n .put(DiscoveryModule.DISCOVERY_HOSTS_PROVIDER_SETTING.getKey(), \"ec2\")\r\n .put(\"path.logs\", resolve)\r\n .put(\"transport.tcp.port\", 0)\r\n .put(\"node.portsfile\", \"true\")\r\n .put(AwsEc2Service.ENDPOINT_SETTING.getKey(), \"http://\" + httpServer.getAddress().getHostName() + \":\" +\r\n httpServer.getAddress().getPort())\r\n .setSecureSettings(secureSettings)\r\n .build();\r\n```\r\n\r\nIndeed `http://localhost:60235` is not a valid endpoint\r\n\r\n```\r\njava.lang.IllegalArgumentException: Can not guess a region from endpoint [http://localhost:60235].\r\n\r\n\tat org.elasticsearch.discovery.ec2.AwsEc2ServiceImpl.buildRegion(AwsEc2ServiceImpl.java:87)\r\n\tat org.elasticsearch.discovery.ec2.AwsEc2ServiceImpl.client(AwsEc2ServiceImpl.java:68)\r\n\tat org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider.<init>(AwsEc2UnicastHostsProvider.java:79)\r\n```\r\n\r\nAnd because I can't set anymore a region, I can't \"force\" skipping guessing the region from the endpoint.\r\n\r\n@rjernst Do you have any idea to workaround this?", "created_at": "2018-02-27T00:23:38Z" }, { "body": "@elastic/es-distributed Any idea on how to solve the test issue I mentioned earlier on? https://github.com/elastic/elasticsearch/pull/27925#issuecomment-368700618 \r\n\r\n", "created_at": "2018-04-01T13:01:01Z" }, { "body": "The documentation is the most important part of this pull request (took me hours to discover this behavior). \r\n\r\nWhat do you think about merging only the docs?", "created_at": "2018-04-13T19:51:34Z" }, { "body": "Yeah. I probably should have start only with that.", "created_at": "2018-04-13T19:54:26Z" }, { "body": "I opened #30323 to fix only the documentation issue which will leave time to merge this PR.", "created_at": "2018-05-02T06:23:17Z" }, { "body": "any updates here @dadoonet ? do we still want to merge this one?", "created_at": "2018-08-16T14:00:04Z" }, { "body": "@javanna I'm still waiting for an answer about https://github.com/elastic/elasticsearch/pull/27925#issuecomment-377785299", "created_at": "2018-08-16T20:40:12Z" }, { "body": "thanks @dadoonet . ping @elastic/es-distributed your input is needed here. Can we move this forward?", "created_at": "2018-08-17T06:37:59Z" }, { "body": "@tlrx can you maybe help?", "created_at": "2018-09-09T17:47:51Z" }, { "body": "@dadoonet \r\nSorry for the so long delay in response. 
To your https://github.com/elastic/elasticsearch/pull/27925#issuecomment-368700618 : \r\n\r\nIt looks you need some kind of fixture that mimics ec2 like it is done for repository-s3:\r\n\r\nAmazonS3Fixture is started before integration test and passing a system property `com.amazonaws.sdk.ec2MetadataServiceEndpointOverride` (it is like `127.0.0.1:58245`) to integration test:\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/7c0fc209bf78e4824ca1f232b84a1dab22bc2dfa/plugins/repository-s3/build.gradle#L373\r\n\r\nAmazonS3Fixture handles some EC2 requests (auth for the case of repository-s3) :\r\nhttps://github.com/elastic/elasticsearch/blob/7c0fc209bf78e4824ca1f232b84a1dab22bc2dfa/plugins/repository-s3/src/test/java/org/elasticsearch/repositories/s3/AmazonS3Fixture.java#L392 \r\n\r\nAt least [/latest/dynamic/instance-identity/document](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html) has to be handled, so `region` could be specified in that fixture respone.\r\n\r\n_minor_: within `buildRegion` the call to InstanceMetadataRegionProvider has to be wrapped with `SocketAccess.doPrivileged(() -> {...})`.\r\n\r\nHope it helps you to proceed further.", "created_at": "2018-09-14T12:33:40Z" }, { "body": "I don't know if this PR is still useful or not. I'd assume it might be useless now. Thoughts?\r\nIf not, I'll need to change `testClientSettingsReInit` test as `ec2_endpoint_1` and `ec2_endpoint_2` are not valid endpoints so we can not guess a region from those values.\r\n\r\nNote that documentation was updated with #30323.\r\n\r\nThere is something which bugs me though. The fact that we are still using deprecated methods to build the `AmazonEc2` client. \r\n\r\n```java\r\nnew AmazonEC2Client(credentials, configuration);\r\n```\r\n\r\nWe should now use instead a builder as proposed with the current PR:\r\n\r\n```java\r\nAmazonEC2ClientBuilder builder = AmazonEC2ClientBuilder.standard()\r\n .withCredentials(credentials)\r\n .withClientConfiguration(configuration)\r\n .withEndpointConfiguration(endpointConfiguration);\r\nfinal AmazonEC2 client = builder.build();\r\n```\r\n\r\nWould you like another PR for this? \r\n", "created_at": "2019-02-07T15:02:22Z" }, { "body": "> Only do this in 8.0\r\n\r\nNote that we can add the new magical value, `_current_region_endpoint_`, in 7.x. We can then inform people that the default will change to `_current_region_endpoint_` in 8.0 via a deprecation warning.\r\n\r\n", "created_at": "2019-03-13T17:13:21Z" }, { "body": "The documentation change has already been addressed in #30323 and the change about guessing the region is redundant since the AWS SDK already does it implicitly if the endpoint isn't provided. \r\n\r\nSee #27924, #30723", "created_at": "2022-07-29T12:44:09Z" } ], "number": 27925, "title": "Default ec2 endpoint is ec2.us-east-1.amazonaws.com" }
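For context on the "non deprecated method" mentioned in the PR description and the 2019 follow-up comment, the sketch below shows one way the builder-based construction could look. It is not the plugin's actual code: the class name, the static-credentials wiring, and the example endpoint are assumptions for illustration, and leaving the signing region null relies on the AWS SDK deriving the region from the endpoint host name.

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

public class Ec2ClientBuilderSketch {
    // Builds an EC2 client against an explicit endpoint using the non-deprecated builder API.
    public static AmazonEC2 buildClient(String accessKey, String secretKey, String endpoint) {
        // endpoint is e.g. "ec2.us-east-1.amazonaws.com"; the signing region is left null so the
        // SDK derives it from the endpoint host name (an assumption in this sketch).
        AwsClientBuilder.EndpointConfiguration endpointConfiguration =
                new AwsClientBuilder.EndpointConfiguration(endpoint, null);
        return AmazonEC2ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .withClientConfiguration(new ClientConfiguration())
                .withEndpointConfiguration(endpointConfiguration)
                .build();
    }
}
```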
{ "body": "I split #27925 in two parts:\r\n\r\n* The documentation fix (this PR)\r\n* The code fix (still in #27925)\r\n\r\nCloses #27464.\r\n", "number": 30323, "review_comments": [], "title": "Default ec2 endpoint is ec2.us-east-1.amazonaws.com" }
{ "commits": [ { "message": "Default ec2 endpoint is ec2.us-east-1.amazonaws.com\n\nI split #27925 in two parts:\n\n* The documentation fix (this PR)\n* The code fix (still in #27925)\n\nCloses #27464." }, { "message": "Replace \"The only necessary configuration change\" sentence" }, { "message": "Merge branch 'master' into doc/27925-fix-default-ec2-endpoint" }, { "message": "Reformat to 80 columns" } ], "files": [ { "diff": "@@ -11,11 +11,12 @@ include::install_remove.asciidoc[]\n [[discovery-ec2-usage]]\n ==== Getting started with AWS\n \n-The plugin provides a hosts provider for zen discovery named `ec2`. This hosts provider\n-finds other Elasticsearch instances in EC2 through AWS metadata. Authentication is done using \n-http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[IAM Role]\n-credentials by default. The only necessary configuration change to enable the plugin\n-is setting the unicast host provider for zen discovery:\n+The plugin provides a hosts provider for zen discovery named `ec2`. This hosts\n+provider finds other Elasticsearch instances in EC2 through AWS metadata.\n+Authentication is done using\n+http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[IAM\n+Role] credentials by default. To enable the plugin, set the unicast host\n+provider for Zen discovery to `ec2`:\n \n [source,yaml]\n ----\n@@ -51,9 +52,9 @@ Those that must be stored in the keystore are marked as `Secure`.\n \n `endpoint`::\n \n- The ec2 service endpoint to connect to. This will be automatically\n- figured out by the ec2 client based on the instance location, but\n- can be specified explicitly. See http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region.\n+ The ec2 service endpoint to connect to. See\n+ http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region. This\n+ defaults to `ec2.us-east-1.amazonaws.com`.\n \n `protocol`::\n ", "filename": "docs/plugins/discovery-ec2.asciidoc", "status": "modified" } ] }
{ "body": "Fixes #29033, currently WIP as I need to fix a few integration tests. As @colings86 described on the issue, we're missing a step for terms queries, we need to rewrite the query on the coordinating node.\r\n\r\nSteps to reproduce (and confirm fix)\r\n\r\n1. Run a local server: `./gradlew server`\r\n\r\n2. create index `curl -XPUT 'localhost:9200/twitter' -H 'Content-Type: application/json' -d'{}'`\r\n\r\n3. add mapping `curl -XPUT 'localhost:9200/twitter/_mapping/_doc' -H 'Content-Type: application/json' -d'{ \"properties\": { \"user\": { \"type\": \"integer\" }, \"followers\": { \"type\": \"integer\" } } }'`\r\n\r\n4. run a validate request on a terms query: `curl -XPOST 'localhost:9200/twitter/_validate/query?explain=true' -H 'Content-Type: application/json' -d'{ \"query\": { \"terms\": { \"user\": { \"index\": \"twitter\", \"type\": \"_doc\", \"id\": \"2\", \"path\": \"followers\" } } } }'`\r\n", "comments": [ { "body": "Added some integration test fixes, and my own comments about those fixes; I could use some advice on the best way to handle this error in the right way at the point in execution where it's occurring (on the coordinating node, so there are no shards involved yet in the query).", "created_at": "2018-04-11T21:03:19Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-12T08:21:05Z" }, { "body": "@jpountz happy to wait", "created_at": "2018-04-19T13:56:10Z" }, { "body": "@elasticmachine please test this", "created_at": "2018-04-30T14:12:27Z" }, { "body": "@elasticmachine test this please", "created_at": "2018-04-30T20:50:54Z" } ], "number": 29483, "title": "Fix failure for validate API on a terms query" }
{ "body": "Opening a PR for this backport of #29483 just to be on the safe side, since the backport involves a versioning change in addition to a cherry pick from master. Would appreciate a quick sanity check!", "number": 30319, "review_comments": [], "title": "6.x Backport: Terms query validate bug " }
{ "commits": [ { "message": "Fix failure for validate API on a terms query (#29483)\n\n* WIP commit to try calling rewrite on coordinating node during TransportSearchAction\r\n\r\n* Use re-written query instead of using the original query\r\n\r\n* fix incorrect/unused imports and wildcarding\r\n\r\n* add error handling for cases where an exception is thrown\r\n\r\n* correct exception handling such that integration tests pass successfully\r\n\r\n* fix additional case covered by IndicesOptionsIntegrationIT.\r\n\r\n* add integration test case that verifies queries are now valid\r\n\r\n* add optional value for index\r\n\r\n* address review comments: catch superclass of XContentParseException\r\n\r\nfixes #29483" }, { "message": "Backport terms-query-validate-bug changes to 6.x" } ], "files": [ { "diff": "@@ -75,7 +75,11 @@ public String getExplanation() {\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n- index = in.readString();\n+ if (in.getVersion().onOrAfter(Version.V_6_4_0)) {\n+ index = in.readOptionalString();\n+ } else {\n+ index = in.readString();\n+ }\n if (in.getVersion().onOrAfter(Version.V_5_4_0)) {\n shard = in.readInt();\n } else {\n@@ -88,7 +92,11 @@ public void readFrom(StreamInput in) throws IOException {\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n- out.writeString(index);\n+ if (out.getVersion().onOrAfter(Version.V_6_4_0)) {\n+ out.writeOptionalString(index);\n+ } else {\n+ out.writeString(index);\n+ }\n if (out.getVersion().onOrAfter(Version.V_5_4_0)) {\n out.writeInt(shard);\n }", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java", "status": "modified" }, { "diff": "@@ -38,8 +38,11 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryShardException;\n+import org.elasticsearch.index.query.Rewriteable;\n+import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.search.internal.AliasFilter;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -54,6 +57,7 @@\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.atomic.AtomicReferenceArray;\n+import java.util.function.LongSupplier;\n \n public class TransportValidateQueryAction extends TransportBroadcastAction<ValidateQueryRequest, ValidateQueryResponse, ShardValidateQueryRequest, ShardValidateQueryResponse> {\n \n@@ -71,7 +75,39 @@ public TransportValidateQueryAction(Settings settings, ThreadPool threadPool, Cl\n @Override\n protected void doExecute(Task task, ValidateQueryRequest request, ActionListener<ValidateQueryResponse> listener) {\n request.nowInMillis = System.currentTimeMillis();\n- super.doExecute(task, request, listener);\n+ LongSupplier timeProvider = () -> request.nowInMillis;\n+ ActionListener<org.elasticsearch.index.query.QueryBuilder> rewriteListener = ActionListener.wrap(rewrittenQuery -> {\n+ request.query(rewrittenQuery);\n+ super.doExecute(task, request, listener);\n+ },\n+ ex -> {\n+ if (ex instanceof IndexNotFoundException ||\n+ ex instanceof IndexClosedException) {\n+ listener.onFailure(ex);\n+ }\n+ List<QueryExplanation> explanations = new ArrayList<>();\n+ explanations.add(new QueryExplanation(null,\n+ 
QueryExplanation.RANDOM_SHARD,\n+ false,\n+ null,\n+ ex.getMessage()));\n+ listener.onResponse(\n+ new ValidateQueryResponse(\n+ false,\n+ explanations,\n+ // totalShards is documented as \"the total shards this request ran against\",\n+ // which is 0 since the failure is happening on the coordinating node.\n+ 0,\n+ 0 ,\n+ 0,\n+ null));\n+ });\n+ if (request.query() == null) {\n+ rewriteListener.onResponse(request.query());\n+ } else {\n+ Rewriteable.rewriteAndFetch(request.query(), searchService.getRewriteContext(timeProvider),\n+ rewriteListener);\n+ }\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.query;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.common.ParsingException;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -111,7 +112,7 @@ static <T extends Rewriteable<T>> void rewriteAndFetch(T original, QueryRewriteC\n }\n }\n rewriteResponse.onResponse(builder);\n- } catch (IOException ex) {\n+ } catch (IOException|IllegalArgumentException|ParsingException ex) {\n rewriteResponse.onFailure(ex);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/index/query/Rewriteable.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.index.query.TermsQueryBuilder;\n+import org.elasticsearch.indices.TermsLookup;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n@@ -330,4 +332,21 @@ private static void assertExplanations(QueryBuilder queryBuilder,\n assertThat(response.isValid(), equalTo(true));\n }\n }\n+\n+ public void testExplainTermsQueryWithLookup() throws Exception {\n+ client().admin().indices().prepareCreate(\"twitter\")\n+ .addMapping(\"_doc\", \"user\", \"type=integer\", \"followers\", \"type=integer\")\n+ .setSettings(Settings.builder().put(SETTING_NUMBER_OF_SHARDS, 2).put(\"index.number_of_routing_shards\", 2)).get();\n+ client().prepareIndex(\"twitter\", \"_doc\", \"1\")\n+ .setSource(\"followers\", new int[] {1, 2, 3}).get();\n+ refresh();\n+\n+ TermsQueryBuilder termsLookupQuery = QueryBuilders.termsLookupQuery(\"user\", new TermsLookup(\"twitter\", \"_doc\", \"1\", \"followers\"));\n+ ValidateQueryResponse response = client().admin().indices().prepareValidateQuery(\"twitter\")\n+ .setTypes(\"_doc\")\n+ .setQuery(termsLookupQuery)\n+ .setExplain(true)\n+ .execute().actionGet();\n+ assertThat(response.isValid(), is(true));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/validate/SimpleValidateQueryIT.java", "status": "modified" } ] }
{ "body": "*Original comment by @davidkyle:*\n\nOpen a job send some data and close the job then reopen the job and send some data timestamped a week later than the previous batch. Autodetect will create empty bucket results for the intervening period but `DataCounts::bucket_count` will not reflect that. \r\n\r\nThe test`MlBasicMultiNodeIT::testMiniFarequoteReopen` does exactly this but the test was asserting that `bucket_count == 2` rather than `bucket_count = 7 days of buckets`. `bucket_count` should equal to the number of buckets written by autodetect, with the caveat that old results are sometimes pruned. ", "comments": [], "number": 30080, "title": "[ML] bucket_count is inaccurate when there are gaps in the data" }
{ "body": "This commit fixes an issue with the data diagnostics were\r\nempty buckets are not reported even though they should. Once\r\na job is reopened, the diagnostics do not get initialized from\r\nthe current data counts (especially the latest record timestamp).\r\nThe result is that if the data that is sent have a time gap compared\r\nto the previous ones, that gap is not accounted for in the empty bucket\r\ncount.\r\n\r\nThis commit fixes that by initializing the diagnostics with the current\r\ndata counts.\r\n\r\nCloses #30080", "number": 30294, "review_comments": [], "title": "[ML] Account for gaps in data counts after job is reopened" }
{ "commits": [ { "message": "[ML] Account for gaps in data counts after job is reopened\n\nThis commit fixes an issue with the data diagnostics were\nempty buckets are not reported even though they should. Once\na job is reopened, the diagnostics do not get initialized from\nthe current data counts (especially the latest record timestamp).\nThe result is that if the data that is sent have a time gap compared\nto the previous ones, that gap is not accounted for in the empty bucket\ncount.\n\nThis commit fixes that by initializing the diagnostics with the current\ndata counts.\n\nCloses #30080" }, { "message": "Fix MlBasicMultiNodeIT" }, { "message": "Add to changelog" } ], "files": [ { "diff": "@@ -104,6 +104,10 @@ Do not ignore request analysis/similarity settings on index resize operations wh\n \n Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])\n \n+Machine Learning::\n+\n+* Account for gaps in data counts after job is reopened ({pull}30294[#30294])\n+\n //[float]\n //=== Regressions\n ", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -82,7 +82,7 @@ public DataCountsReporter(Settings settings, Job job, DataCounts counts, JobData\n \n totalRecordStats = counts;\n incrementalRecordStats = new DataCounts(job.getId());\n- diagnostics = new DataStreamDiagnostics(job);\n+ diagnostics = new DataStreamDiagnostics(job, counts);\n \n acceptablePercentDateParseErrors = ACCEPTABLE_PERCENTAGE_DATE_PARSE_ERRORS_SETTING.get(settings);\n acceptablePercentOutOfOrderErrors = ACCEPTABLE_PERCENTAGE_OUT_OF_ORDER_ERRORS_SETTING.get(settings);", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/DataCountsReporter.java", "status": "modified" }, { "diff": "@@ -6,8 +6,11 @@\n package org.elasticsearch.xpack.ml.job.process.diagnostics;\n \n import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;\n import org.elasticsearch.xpack.core.ml.utils.Intervals;\n \n+import java.util.Date;\n+\n /**\n * A moving window of buckets that allow keeping\n * track of some statistics like the bucket count,\n@@ -33,12 +36,17 @@ class BucketDiagnostics {\n private long latestFlushedBucketStartMs = -1;\n private final BucketFlushListener bucketFlushListener;\n \n- BucketDiagnostics(Job job, BucketFlushListener bucketFlushListener) {\n+ BucketDiagnostics(Job job, DataCounts dataCounts, BucketFlushListener bucketFlushListener) {\n bucketSpanMs = job.getAnalysisConfig().getBucketSpan().millis();\n latencyMs = job.getAnalysisConfig().getLatency() == null ? 
0 : job.getAnalysisConfig().getLatency().millis();\n maxSize = Math.max((int) (Intervals.alignToCeil(latencyMs, bucketSpanMs) / bucketSpanMs), MIN_BUCKETS);\n buckets = new long[maxSize];\n this.bucketFlushListener = bucketFlushListener;\n+\n+ Date latestRecordTimestamp = dataCounts.getLatestRecordTimeStamp();\n+ if (latestRecordTimestamp != null) {\n+ addRecord(latestRecordTimestamp.getTime());\n+ }\n }\n \n void addRecord(long recordTimestampMs) {", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/diagnostics/BucketDiagnostics.java", "status": "modified" }, { "diff": "@@ -8,6 +8,7 @@\n import org.apache.logging.log4j.Logger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;\n \n import java.util.Date;\n \n@@ -32,8 +33,8 @@ public class DataStreamDiagnostics {\n private long sparseBucketCount = 0;\n private long latestSparseBucketTime = -1;\n \n- public DataStreamDiagnostics(Job job) {\n- bucketDiagnostics = new BucketDiagnostics(job, createBucketFlushListener());\n+ public DataStreamDiagnostics(Job job, DataCounts dataCounts) {\n+ bucketDiagnostics = new BucketDiagnostics(job, dataCounts, createBucketFlushListener());\n }\n \n private BucketDiagnostics.BucketFlushListener createBucketFlushListener() {", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/diagnostics/DataStreamDiagnostics.java", "status": "modified" }, { "diff": "@@ -11,6 +11,7 @@\n import org.elasticsearch.xpack.core.ml.job.config.DataDescription;\n import org.elasticsearch.xpack.core.ml.job.config.Detector;\n import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;\n import org.junit.Before;\n \n import java.util.Arrays;\n@@ -20,6 +21,7 @@ public class DataStreamDiagnosticsTests extends ESTestCase {\n \n private static final long BUCKET_SPAN = 60000;\n private Job job;\n+ private DataCounts dataCounts;\n \n @Before\n public void setUpMocks() {\n@@ -32,10 +34,11 @@ public void setUpMocks() {\n builder.setAnalysisConfig(acBuilder);\n builder.setDataDescription(new DataDescription.Builder());\n job = createJob(TimeValue.timeValueMillis(BUCKET_SPAN), null);\n+ dataCounts = new DataCounts(job.getId());\n }\n \n public void testIncompleteBuckets() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(1000);\n d.checkRecord(2000);\n@@ -81,7 +84,7 @@ public void testIncompleteBuckets() {\n }\n \n public void testSimple() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(70000);\n d.checkRecord(130000);\n@@ -103,7 +106,7 @@ public void testSimple() {\n }\n \n public void testSimpleReverse() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(610000);\n d.checkRecord(550000);\n@@ -126,7 +129,7 @@ public void testSimpleReverse() {\n \n public void testWithLatencyLessThanTenBuckets() {\n job = createJob(TimeValue.timeValueMillis(BUCKET_SPAN), TimeValue.timeValueMillis(3 * BUCKET_SPAN));\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n long timestamp = 70000;\n 
while (timestamp < 70000 + 20 * BUCKET_SPAN) {\n@@ -141,7 +144,7 @@ public void testWithLatencyLessThanTenBuckets() {\n \n public void testWithLatencyGreaterThanTenBuckets() {\n job = createJob(TimeValue.timeValueMillis(BUCKET_SPAN), TimeValue.timeValueMillis(13 * BUCKET_SPAN + 10000));\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n long timestamp = 70000;\n while (timestamp < 70000 + 20 * BUCKET_SPAN) {\n@@ -155,7 +158,7 @@ public void testWithLatencyGreaterThanTenBuckets() {\n }\n \n public void testEmptyBuckets() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(10000);\n d.checkRecord(70000);\n@@ -177,7 +180,7 @@ public void testEmptyBuckets() {\n }\n \n public void testEmptyBucketsStartLater() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(1110000);\n d.checkRecord(1170000);\n@@ -199,7 +202,7 @@ public void testEmptyBucketsStartLater() {\n }\n \n public void testSparseBuckets() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n sendManyDataPoints(d, 10000, 69000, 1000);\n sendManyDataPoints(d, 70000, 129000, 1200);\n@@ -227,7 +230,7 @@ public void testSparseBuckets() {\n * signal\n */\n public void testSparseBucketsLast() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n sendManyDataPoints(d, 10000, 69000, 1000);\n sendManyDataPoints(d, 70000, 129000, 1200);\n@@ -255,7 +258,7 @@ public void testSparseBucketsLast() {\n * signal on the 2nd to last\n */\n public void testSparseBucketsLastTwo() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n sendManyDataPoints(d, 10000, 69000, 1000);\n sendManyDataPoints(d, 70000, 129000, 1200);\n@@ -280,7 +283,7 @@ public void testSparseBucketsLastTwo() {\n }\n \n public void testMixedEmptyAndSparseBuckets() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n sendManyDataPoints(d, 10000, 69000, 1000);\n sendManyDataPoints(d, 70000, 129000, 1200);\n@@ -308,7 +311,7 @@ public void testMixedEmptyAndSparseBuckets() {\n * whether counts are right.\n */\n public void testEmptyBucketsLongerOutage() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n d.checkRecord(10000);\n d.checkRecord(70000);\n@@ -336,7 +339,7 @@ public void testEmptyBucketsLongerOutage() {\n * The number of sparse buckets should not be to much, it could be normal.\n */\n public void testSparseBucketsLongerPeriod() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n \n sendManyDataPoints(d, 10000, 69000, 1000);\n sendManyDataPoints(d, 70000, 129000, 1200);\n@@ -374,7 +377,7 @@ private static Job createJob(TimeValue bucketSpan, TimeValue latency) {\n }\n \n public void testFlushAfterZeroRecords() {\n- DataStreamDiagnostics d = new DataStreamDiagnostics(job);\n+ DataStreamDiagnostics d = new DataStreamDiagnostics(job, dataCounts);\n d.flush();\n assertEquals(0, 
d.getBucketCount());\n }", "filename": "x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/process/diagnostics/DataStreamDiagnosticsTests.java", "status": "modified" }, { "diff": "@@ -241,7 +241,7 @@ public void testMiniFarequoteReopen() throws Exception {\n assertEquals(0, responseBody.get(\"invalid_date_count\"));\n assertEquals(0, responseBody.get(\"missing_field_count\"));\n assertEquals(0, responseBody.get(\"out_of_order_timestamp_count\"));\n- assertEquals(0, responseBody.get(\"bucket_count\"));\n+ assertEquals(1000, responseBody.get(\"bucket_count\"));\n \n // unintuitive: should return the earliest record timestamp of this feed???\n assertEquals(null, responseBody.get(\"earliest_record_timestamp\"));\n@@ -266,7 +266,7 @@ public void testMiniFarequoteReopen() throws Exception {\n assertEquals(0, dataCountsDoc.get(\"invalid_date_count\"));\n assertEquals(0, dataCountsDoc.get(\"missing_field_count\"));\n assertEquals(0, dataCountsDoc.get(\"out_of_order_timestamp_count\"));\n- assertEquals(0, dataCountsDoc.get(\"bucket_count\"));\n+ assertEquals(1000, dataCountsDoc.get(\"bucket_count\"));\n assertEquals(1403481600000L, dataCountsDoc.get(\"earliest_record_timestamp\"));\n assertEquals(1407082000000L, dataCountsDoc.get(\"latest_record_timestamp\"));\n ", "filename": "x-pack/qa/ml-basic-multi-node/src/test/java/org/elasticsearch/xpack/ml/integration/MlBasicMultiNodeIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,93 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.integration;\n+\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.xpack.core.ml.action.GetBucketsAction;\n+import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;\n+import org.elasticsearch.xpack.core.ml.job.config.DataDescription;\n+import org.elasticsearch.xpack.core.ml.job.config.Detector;\n+import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;\n+import org.junit.After;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.stream.Collectors;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests that after reopening a job and sending more\n+ * data after a gap, data counts are reported correctly.\n+ */\n+public class ReopenJobWithGapIT extends MlNativeAutodetectIntegTestCase {\n+\n+ private static final String JOB_ID = \"reopen-job-with-gap-test\";\n+ private static final long BUCKET_SPAN_SECONDS = 3600;\n+\n+ @After\n+ public void cleanUpTest() {\n+ cleanUp();\n+ }\n+\n+ public void test() throws Exception {\n+ AnalysisConfig.Builder analysisConfig = new AnalysisConfig.Builder(\n+ Collections.singletonList(new Detector.Builder(\"count\", null).build()));\n+ analysisConfig.setBucketSpan(TimeValue.timeValueSeconds(BUCKET_SPAN_SECONDS));\n+ DataDescription.Builder dataDescription = new DataDescription.Builder();\n+ dataDescription.setTimeFormat(\"epoch\");\n+ Job.Builder job = new Job.Builder(JOB_ID);\n+ job.setAnalysisConfig(analysisConfig);\n+ job.setDataDescription(dataDescription);\n+\n+ registerJob(job);\n+ putJob(job);\n+ openJob(job.getId());\n+\n+ long timestamp = 1483228800L; // 2017-01-01T00:00:00Z\n+ List<String> data = new 
ArrayList<>();\n+ for (int i = 0; i < 10; i++) {\n+ data.add(createJsonRecord(createRecord(timestamp)));\n+ timestamp += BUCKET_SPAN_SECONDS;\n+ }\n+\n+ postData(job.getId(), data.stream().collect(Collectors.joining()));\n+ flushJob(job.getId(), true);\n+ closeJob(job.getId());\n+\n+ GetBucketsAction.Request request = new GetBucketsAction.Request(job.getId());\n+ request.setExcludeInterim(true);\n+ assertThat(client().execute(GetBucketsAction.INSTANCE, request).actionGet().getBuckets().count(), equalTo(9L));\n+ assertThat(getJobStats(job.getId()).get(0).getDataCounts().getBucketCount(), equalTo(9L));\n+\n+ timestamp += 10 * BUCKET_SPAN_SECONDS;\n+ data = new ArrayList<>();\n+ for (int i = 0; i < 10; i++) {\n+ data.add(createJsonRecord(createRecord(timestamp)));\n+ timestamp += BUCKET_SPAN_SECONDS;\n+ }\n+\n+ openJob(job.getId());\n+ postData(job.getId(), data.stream().collect(Collectors.joining()));\n+ flushJob(job.getId(), true);\n+ closeJob(job.getId());\n+\n+ assertThat(client().execute(GetBucketsAction.INSTANCE, request).actionGet().getBuckets().count(), equalTo(29L));\n+ DataCounts dataCounts = getJobStats(job.getId()).get(0).getDataCounts();\n+ assertThat(dataCounts.getBucketCount(), equalTo(29L));\n+ assertThat(dataCounts.getEmptyBucketCount(), equalTo(10L));\n+ }\n+\n+ private static Map<String, Object> createRecord(long timestamp) {\n+ Map<String, Object> record = new HashMap<>();\n+ record.put(\"time\", timestamp);\n+ return record;\n+ }\n+}", "filename": "x-pack/qa/ml-native-tests/src/test/java/org/elasticsearch/xpack/ml/integration/ReopenJobWithGapIT.java", "status": "added" } ] }
{ "body": "*Original comment by @LeeDr:*\n\nRunning 5.0.0 alpha2;\n\n```\nEMAIL REDACTED /usr/share/elasticsearch/bin/x-pack/users useradd -h\nAdds a native user\n\nAdds a file based user to elasticsearch (via internal realm). The user will...\n```\n\nFrom here (I think);\nLINK REDACTED\n\n\n", "comments": [], "number": 29729, "title": "useradd help says it \"Adds a native user\" and a file based user" }
{ "body": "The elasticsearch-users utility had various messages that were\r\noutdated or incorrect. This commit updates the output from this\r\ncommand to reflect current terminology and configuration.\r\n\r\nFixes: #29729 ", "number": 30293, "review_comments": [], "title": "Fix message content in users tool" }
{ "commits": [ { "message": "Fix message content in users tool\n\nThe elasticsearch-users utility had various messages that were\noutdated or incorrect. This commit updates the output from this\ncommand to reflect current terminology and configuration." } ], "files": [ { "diff": "@@ -17,15 +17,14 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.env.Environment;\n-import org.elasticsearch.xpack.core.XPackField;\n import org.elasticsearch.xpack.core.XPackSettings;\n import org.elasticsearch.xpack.core.security.authc.support.Hasher;\n-import org.elasticsearch.xpack.security.authz.store.FileRolesStore;\n import org.elasticsearch.xpack.core.security.authz.store.ReservedRolesStore;\n import org.elasticsearch.xpack.core.security.support.Validation;\n import org.elasticsearch.xpack.core.security.support.Validation.Users;\n import org.elasticsearch.xpack.security.authc.file.FileUserPasswdStore;\n import org.elasticsearch.xpack.security.authc.file.FileUserRolesStore;\n+import org.elasticsearch.xpack.security.authz.store.FileRolesStore;\n import org.elasticsearch.xpack.security.support.FileAttributesChecker;\n \n import java.nio.file.Files;\n@@ -47,7 +46,7 @@ public static void main(String[] args) throws Exception {\n }\n \n UsersTool() {\n- super(\"Manages elasticsearch native users\");\n+ super(\"Manages elasticsearch file users\");\n subcommands.put(\"useradd\", newAddUserCommand());\n subcommands.put(\"userdel\", newDeleteUserCommand());\n subcommands.put(\"passwd\", newPasswordCommand());\n@@ -82,7 +81,7 @@ static class AddUserCommand extends EnvironmentAwareCommand {\n private final OptionSpec<String> arguments;\n \n AddUserCommand() {\n- super(\"Adds a native user\");\n+ super(\"Adds a file user\");\n \n this.passwordOption = parser.acceptsAll(Arrays.asList(\"p\", \"password\"),\n \"The user password\")\n@@ -96,11 +95,8 @@ static class AddUserCommand extends EnvironmentAwareCommand {\n @Override\n protected void printAdditionalHelp(Terminal terminal) {\n terminal.println(\"Adds a file based user to elasticsearch (via internal realm). The user will\");\n- terminal.println(\"be added to the users file and its roles will be added to the\");\n- terminal.println(\"users_roles file. 
If non-default files are used (different file\");\n- terminal.println(\"locations are configured in elasticsearch.yml) the appropriate files\");\n- terminal.println(\"will be resolved from the settings and the user and its roles will be\");\n- terminal.println(\"added to them.\");\n+ terminal.println(\"be added to the \\\"users\\\" file and its roles will be added to the\");\n+ terminal.println(\"\\\"users_roles\\\" file in the elasticsearch config directory.\");\n terminal.println(\"\");\n }\n \n@@ -123,7 +119,7 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th\n \n Map<String, char[]> users = FileUserPasswdStore.parseFile(passwordFile, null, env.settings());\n if (users == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [users] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + passwordFile + \"] is missing\");\n }\n if (users.containsKey(username)) {\n throw new UserException(ExitCodes.CODE_ERROR, \"User [\" + username + \"] already exists\");\n@@ -155,11 +151,8 @@ static class DeleteUserCommand extends EnvironmentAwareCommand {\n @Override\n protected void printAdditionalHelp(Terminal terminal) {\n terminal.println(\"Removes an existing file based user from elasticsearch. The user will be\");\n- terminal.println(\"removed from the users file and its roles will be removed from the\");\n- terminal.println(\"users_roles file. If non-default files are used (different file\");\n- terminal.println(\"locations are configured in elasticsearch.yml) the appropriate files\");\n- terminal.println(\"will be resolved from the settings and the user and its roles will be\");\n- terminal.println(\"removed from them.\");\n+ terminal.println(\"removed from the \\\"users\\\" file and its roles will be removed from the\");\n+ terminal.println(\"\\\"users_roles\\\" file in the elasticsearch config directory.\");\n terminal.println(\"\");\n }\n \n@@ -173,7 +166,7 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th\n \n Map<String, char[]> users = FileUserPasswdStore.parseFile(passwordFile, null, env.settings());\n if (users == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [users] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + passwordFile + \"] is missing\");\n }\n if (users.containsKey(username) == false) {\n throw new UserException(ExitCodes.NO_USER, \"User [\" + username + \"] doesn't exist\");\n@@ -213,12 +206,10 @@ static class PasswordCommand extends EnvironmentAwareCommand {\n \n @Override\n protected void printAdditionalHelp(Terminal terminal) {\n- terminal.println(\"The passwd command changes passwords for files based users. The tool\");\n+ terminal.println(\"The passwd command changes passwords for file based users. The tool\");\n terminal.println(\"prompts twice for a replacement password. The second entry is compared\");\n terminal.println(\"against the first and both are required to match in order for the\");\n- terminal.println(\"password to be changed. 
If non-default users file is used (a different\");\n- terminal.println(\"file location is configured in elasticsearch.yml) the appropriate file\");\n- terminal.println(\"will be resolved from the settings.\");\n+ terminal.println(\"password to be changed.\");\n terminal.println(\"\");\n }\n \n@@ -232,7 +223,7 @@ protected void execute(Terminal terminal, OptionSet options, Environment env) th\n FileAttributesChecker attributesChecker = new FileAttributesChecker(file);\n Map<String, char[]> users = new HashMap<>(FileUserPasswdStore.parseFile(file, null, env.settings()));\n if (users == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [users] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + file + \"] is missing\");\n }\n if (users.containsKey(username) == false) {\n throw new UserException(ExitCodes.NO_USER, \"User [\" + username + \"] doesn't exist\");\n@@ -345,19 +336,19 @@ static void listUsersAndRoles(Terminal terminal, Environment env, String usernam\n Path userRolesFilePath = FileUserRolesStore.resolveFile(env);\n Map<String, String[]> userRoles = FileUserRolesStore.parseFile(userRolesFilePath, null);\n if (userRoles == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [users_roles] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + userRolesFilePath + \"] is missing\");\n }\n \n Path userFilePath = FileUserPasswdStore.resolveFile(env);\n Map<String, char[]> users = FileUserPasswdStore.parseFile(userFilePath, null, env.settings());\n if (users == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [users] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + userFilePath + \"] is missing\");\n }\n \n Path rolesFilePath = FileRolesStore.resolveFile(env);\n Set<String> knownRoles = Sets.union(FileRolesStore.parseFileForRoleNames(rolesFilePath, null), ReservedRolesStore.names());\n if (knownRoles == null) {\n- throw new UserException(ExitCodes.CONFIG, \"Configuration file [roles.xml] is missing\");\n+ throw new UserException(ExitCodes.CONFIG, \"Configuration file [\" + rolesFilePath + \"] is missing\");\n }\n \n if (username != null) {", "filename": "x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/file/tool/UsersTool.java", "status": "modified" }, { "diff": "@@ -499,7 +499,7 @@ public void testUserAddNoConfig() throws Exception {\n execute(\"useradd\", pathHomeParameter, fileTypeParameter, \"username\", \"-p\", SecuritySettingsSourceField.TEST_PASSWORD);\n });\n assertEquals(ExitCodes.CONFIG, e.exitCode);\n- assertThat(e.getMessage(), containsString(\"Configuration file [users] is missing\"));\n+ assertThat(e.getMessage(), containsString(\"Configuration file [eshome/config/users] is missing\"));\n }\n \n public void testUserListNoConfig() throws Exception {\n@@ -511,7 +511,7 @@ public void testUserListNoConfig() throws Exception {\n execute(\"list\", pathHomeParameter, fileTypeParameter);\n });\n assertEquals(ExitCodes.CONFIG, e.exitCode);\n- assertThat(e.getMessage(), containsString(\"Configuration file [users] is missing\"));\n+ assertThat(e.getMessage(), containsString(\"Configuration file [eshome/config/users] is missing\"));\n }\n \n public void testUserDelNoConfig() throws Exception {\n@@ -523,7 +523,7 @@ public void testUserDelNoConfig() throws Exception {\n execute(\"userdel\", pathHomeParameter, fileTypeParameter, \"username\");\n });\n assertEquals(ExitCodes.CONFIG, 
e.exitCode);\n- assertThat(e.getMessage(), containsString(\"Configuration file [users] is missing\"));\n+ assertThat(e.getMessage(), containsString(\"Configuration file [eshome/config/users] is missing\"));\n }\n \n public void testListUserRolesNoConfig() throws Exception {\n@@ -535,6 +535,6 @@ public void testListUserRolesNoConfig() throws Exception {\n execute(\"roles\", pathHomeParameter, fileTypeParameter, \"username\");\n });\n assertEquals(ExitCodes.CONFIG, e.exitCode);\n- assertThat(e.getMessage(), containsString(\"Configuration file [users_roles] is missing\"));\n+ assertThat(e.getMessage(), containsString(\"Configuration file [eshome/config/users_roles] is missing\"));\n }\n }", "filename": "x-pack/qa/security-tools-tests/src/test/java/org/elasticsearch/xpack/security/authc/file/tool/UsersToolTests.java", "status": "modified" } ] }
{ "body": "Following #29373 parse exceptions are now `XContentParseException` rather than `ParsingException`. This has a major effect on the determination of the `root_cause` of an exception that is found during parsing due to the definition of `root_cause`.\r\n\r\nThe `root_cause` is determined by starting at the outermost exception and recursing through the causes until an exception is found that is _not_ an `ElasticsearchException`. Then:\r\n\r\n* If the outermost exception was not an `ElasticsearchException` then it is the `root_cause`\r\n* Otherwise the most deeply nested `ElasticsearchException` is the `root_cause`\r\n\r\n`ParsingException` extends `ElasticsearchException` whereas `XContentParseException` does not.\r\n\r\nThis means that if a validation error occurs during parsing then following #29373 the most general parsing exception is reported as the root cause. Prior to #29373 the most specific parsing exception, i.e. most likely the one detailing the validation error, was considered the `root_cause`.\r\n\r\nAs a concrete example, this is a validation error after #29373:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [{\r\n \"type\": \"x_content_parse_exception\",\r\n \"reason\": \"[1:144] [job_details] failed to parse field [analysis_limits]\"\r\n }],\r\n \"type\": \"x_content_parse_exception\",\r\n \"reason\": \"[1:144] [job_details] failed to parse field [analysis_limits]\",\r\n \"caused_by\": {\r\n \"type\": \"x_content_parse_exception\",\r\n \"reason\": \"Failed to build [analysis_limits] after last required field arrived\",\r\n \"caused_by\": {\r\n \"type\": \"status_exception\",\r\n \"reason\": \"categorization_examples_limit cannot be less than 0. Value = -1\"\r\n }\r\n }\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nAnd this is the equivalent error before #29373:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"status_exception\",\r\n \"reason\": \"categorization_examples_limit cannot be less than 0. Value = -1\"\r\n }\r\n ],\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[job_details] failed to parse field [analysis_limits]\",\r\n \"line\": 11,\r\n \"col\": 3,\r\n \"caused_by\": {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"Failed to build [analysis_limits] after last required field arrived\",\r\n \"caused_by\": {\r\n \"type\": \"status_exception\",\r\n \"reason\": \"categorization_examples_limit cannot be less than 0. Value = -1\"\r\n }\r\n }\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nIf all you have to go on is the `root_cause` then `[1:144] [job_details] failed to parse field [analysis_limits]` is nowhere near as useful as `categorization_examples_limit cannot be less than 0. Value = -1`. The red error bars that Kibana shows when an error occurs only show the `root_cause`, so Kibana users suffer from this.\r\n\r\nI think the solution that will keep the `root_cause` functionality as it was previously would be to consider both `ElasticsearchException` and `XContentParseException` when recursing through causes.", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-30T13:31:40Z" }, { "body": "This seems like a legit thing to me. @dakrone do you want this or should I have a look?", "created_at": "2018-04-30T13:33:19Z" }, { "body": "This is a regression with respect to user experience. It's the sort of issue that results in more support calls which, once raised, will be difficult to diagnose. 
What are the chances of a fix for 6.3?", "created_at": "2018-04-30T15:24:57Z" }, { "body": "@nik9000 Would you pick this one up please?", "created_at": "2018-04-30T15:34:12Z" }, { "body": ">This is a regression with respect to user experience. It's the sort of issue that results in more support calls which, once raised, will be difficult to diagnose. What are the chances of a fix for 6.3?\r\n\r\nWe will assess and let you know.", "created_at": "2018-04-30T15:34:53Z" }, { "body": "> @nik9000 Would you pick this one up please?\r\n\r\nSure. I'll start right now.", "created_at": "2018-04-30T16:27:04Z" }, { "body": ">>This is a regression with respect to user experience. It's the sort of issue that results in more support calls which, once raised, will be difficult to diagnose. What are the chances of a fix for 6.3?\r\n\r\n> We will assess and let you know.\r\n\r\nWe will get this fixed in 6.3.0.", "created_at": "2018-04-30T20:01:40Z" } ], "number": 30261, "title": "Causes of XContentParseException should be included in search for root_cause" }
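A minimal sketch of the `root_cause` determination rule described in the issue above, assuming a single failure and ignoring the wrapper-exception and remote-transport unwrapping that the real `ElasticsearchException.guessRootCauses` also performs (and the fact that it returns an array of causes); the class and method names are invented for illustration:

```java
import org.elasticsearch.ElasticsearchException;

// Illustrative restatement of the root_cause rule described in issue #30261,
// not the code that ships in Elasticsearch.
final class RootCauseRuleSketch {

    static Throwable rootCauseOf(Throwable outermost) {
        if (outermost instanceof ElasticsearchException == false) {
            // A non-ElasticsearchException outermost failure is reported as-is. After #29373
            // parse failures are XContentParseExceptions, so the outer, least specific parse
            // message becomes the root_cause - the behavior this issue complains about.
            return outermost;
        }
        // Otherwise recurse through the causes while they are still ElasticsearchExceptions;
        // the most deeply nested one reached this way is the root_cause.
        Throwable current = outermost;
        while (current.getCause() instanceof ElasticsearchException) {
            current = current.getCause();
        }
        return current;
    }
}
```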
{ "body": "Just like `ElasticsearchException`, the inner most\r\n`XContentParseException` tends to contain the root cause of the\r\nexception and show be show to the user in the `root_cause` field.\r\n\r\nThe effectively undoes most of the changes that #29373 made to the\r\n`root_cause` for parsing exceptions. The `type` field still changes from\r\n`parse_exception` to `x_content_parse_exception`, but this seems like a\r\nfairly safe change.\r\n\r\n`ElasticsearchWrapperException` *looks* tempting to implement this but\r\nthe behavior isn't quite right. `ElasticsearchWrapperExceptions` are\r\nentirely unwrapped until the cause no longer\r\n`implements ElasticsearchWrapperException` but `XContentParseException`\r\nshould be unwrapped until its cause is no longer an\r\n`XContentParseException` but no further. In other words,\r\n`ElasticsearchWrapperException` are unwrapped one step too far.\r\n\r\nCloses #30261\r\n\r\n", "number": 30270, "review_comments": [ { "body": "I feel like if we get *another* exception like this one we should try to build something a little more generic and less \"heuristics that make the exception look good\". But `XContentParseException isn't really the same thing as `ElasticsearchException`. It is similar, but different enough that I don't think I could boil out the generic bits without this getting wonky.", "created_at": "2018-04-30T19:15:32Z" }, { "body": "This one turned out not to be useful but I researched it so I figured I'd add the javadocs.", "created_at": "2018-04-30T19:15:55Z" }, { "body": "These were all backwards....", "created_at": "2018-04-30T19:16:07Z" } ], "title": "Core: Pick inner most parse exception as root cause" }
{ "commits": [ { "message": "Core: Pick inner most parse exception as root cause\n\nJust like `ElasticsearchException`, the inner most\n`XContentParseException` tends to contain the root cause of the\nexception and show be show to the user in the `root_cause` field.\n\nThe effectively undoes most of the changes that #29373 made to the\n`root_cause` for parsing exceptions. The `type` field still changes from\n`parse_exception` to `x_content_parse_exception`, but this seems like a\nfairly safe change.\n\n`ElasticsearchWrapperException` *looks* tempting to implement this but\nthe behavior isn't quite right. `ElasticsearchWrapperExceptions` are\nentirely unwrapped until the cause no longer\n`implements ElasticsearchWrapperException` but `XContentParseException`\nshould be unwrapped until its cause is no longer an\n`XContentParseException` but no further. In other words,\n`ElasticsearchWrapperException` are unwrapped one step too far.\n\nCloses #30261" } ], "files": [ { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.logging.LoggerMessageFormat;\n import org.elasticsearch.common.xcontent.ToXContentFragment;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParseException;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n@@ -635,8 +636,25 @@ public ElasticsearchException[] guessRootCauses() {\n public static ElasticsearchException[] guessRootCauses(Throwable t) {\n Throwable ex = ExceptionsHelper.unwrapCause(t);\n if (ex instanceof ElasticsearchException) {\n+ // ElasticsearchException knows how to guess its own root cause\n return ((ElasticsearchException) ex).guessRootCauses();\n }\n+ if (ex instanceof XContentParseException) {\n+ /*\n+ * We'd like to unwrap parsing exceptions to the inner-most\n+ * parsing exception because that is generally the most interesting\n+ * exception to return to the user. 
If that exception is caused by\n+ * an ElasticsearchException we'd like to keep unwrapping because\n+ * ElasticserachExceptions tend to contain useful information for\n+ * the user.\n+ */\n+ Throwable cause = ex.getCause();\n+ if (cause != null) {\n+ if (cause instanceof XContentParseException || cause instanceof ElasticsearchException) {\n+ return guessRootCauses(ex.getCause());\n+ }\n+ }\n+ }\n return new ElasticsearchException[]{new ElasticsearchException(t.getMessage(), t) {\n @Override\n protected String getExceptionName() {", "filename": "server/src/main/java/org/elasticsearch/ElasticsearchException.java", "status": "modified" }, { "diff": "@@ -19,7 +19,11 @@\n \n package org.elasticsearch;\n \n+/**\n+ * An exception that is meant to be \"unwrapped\" when sent back to the user\n+ * as an error because its is {@link #getCause() cause}, if non-null is\n+ * <strong>always</strong> more useful to the user than the exception itself.\n+ */\n public interface ElasticsearchWrapperException {\n-\n Throwable getCause();\n }", "filename": "server/src/main/java/org/elasticsearch/ElasticsearchWrapperException.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentLocation;\n+import org.elasticsearch.common.xcontent.XContentParseException;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.discovery.DiscoverySettings;\n@@ -78,6 +79,7 @@\n import static org.hamcrest.CoreMatchers.hasItems;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.hasSize;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.startsWith;\n \n public class ElasticsearchExceptionTests extends ESTestCase {\n@@ -124,13 +126,13 @@ public void testGuessRootCause() {\n } else {\n rootCauses = ElasticsearchException.guessRootCauses(randomBoolean() ? 
new RemoteTransportException(\"remoteboom\", ex) : ex);\n }\n- assertEquals(ElasticsearchException.getExceptionName(rootCauses[0]), \"parsing_exception\");\n- assertEquals(rootCauses[0].getMessage(), \"foobar\");\n+ assertEquals(\"parsing_exception\", ElasticsearchException.getExceptionName(rootCauses[0]));\n+ assertEquals(\"foobar\", rootCauses[0].getMessage());\n \n ElasticsearchException oneLevel = new ElasticsearchException(\"foo\", new RuntimeException(\"foobar\"));\n rootCauses = oneLevel.guessRootCauses();\n- assertEquals(ElasticsearchException.getExceptionName(rootCauses[0]), \"exception\");\n- assertEquals(rootCauses[0].getMessage(), \"foo\");\n+ assertEquals(\"exception\", ElasticsearchException.getExceptionName(rootCauses[0]));\n+ assertEquals(\"foo\", rootCauses[0].getMessage());\n }\n {\n ShardSearchFailure failure = new ShardSearchFailure(\n@@ -146,20 +148,40 @@ public void testGuessRootCause() {\n assertEquals(rootCauses.length, 2);\n assertEquals(ElasticsearchException.getExceptionName(rootCauses[0]), \"parsing_exception\");\n assertEquals(rootCauses[0].getMessage(), \"foobar\");\n- assertEquals(((ParsingException) rootCauses[0]).getLineNumber(), 1);\n- assertEquals(((ParsingException) rootCauses[0]).getColumnNumber(), 2);\n- assertEquals(ElasticsearchException.getExceptionName(rootCauses[1]), \"query_shard_exception\");\n- assertEquals((rootCauses[1]).getIndex().getName(), \"foo1\");\n- assertEquals(rootCauses[1].getMessage(), \"foobar\");\n+ assertEquals(1, ((ParsingException) rootCauses[0]).getLineNumber());\n+ assertEquals(2, ((ParsingException) rootCauses[0]).getColumnNumber());\n+ assertEquals(\"query_shard_exception\", ElasticsearchException.getExceptionName(rootCauses[1]));\n+ assertEquals(\"foo1\", rootCauses[1].getIndex().getName());\n+ assertEquals(\"foobar\", rootCauses[1].getMessage());\n }\n \n {\n final ElasticsearchException[] foobars = ElasticsearchException.guessRootCauses(new IllegalArgumentException(\"foobar\"));\n assertEquals(foobars.length, 1);\n- assertTrue(foobars[0] instanceof ElasticsearchException);\n- assertEquals(foobars[0].getMessage(), \"foobar\");\n- assertEquals(foobars[0].getCause().getClass(), IllegalArgumentException.class);\n- assertEquals(foobars[0].getExceptionName(), \"illegal_argument_exception\");\n+ assertThat(foobars[0], instanceOf(ElasticsearchException.class));\n+ assertEquals(\"foobar\", foobars[0].getMessage());\n+ assertEquals(IllegalArgumentException.class, foobars[0].getCause().getClass());\n+ assertEquals(\"illegal_argument_exception\", foobars[0].getExceptionName());\n+ }\n+\n+ {\n+ XContentParseException inner = new XContentParseException(null, \"inner\");\n+ XContentParseException outer = new XContentParseException(null, \"outer\", inner);\n+ final ElasticsearchException[] causes = ElasticsearchException.guessRootCauses(outer);\n+ assertEquals(causes.length, 1);\n+ assertThat(causes[0], instanceOf(ElasticsearchException.class));\n+ assertEquals(\"inner\", causes[0].getMessage());\n+ assertEquals(\"x_content_parse_exception\", causes[0].getExceptionName());\n+ }\n+\n+ {\n+ ElasticsearchException inner = new ElasticsearchException(\"inner\");\n+ XContentParseException outer = new XContentParseException(null, \"outer\", inner);\n+ final ElasticsearchException[] causes = ElasticsearchException.guessRootCauses(outer);\n+ assertEquals(causes.length, 1);\n+ assertThat(causes[0], instanceOf(ElasticsearchException.class));\n+ assertEquals(\"inner\", causes[0].getMessage());\n+ assertEquals(\"exception\", 
causes[0].getExceptionName());\n }\n }\n ", "filename": "server/src/test/java/org/elasticsearch/ElasticsearchExceptionTests.java", "status": "modified" } ] }
{ "body": "*Original comment by @astefan:*\n\nFor a query like:\r\n\r\n```\r\nPOST /_xpack/sql/translate\r\n{\r\n \"query\":\"SELECT name.keyword FROM library WHERE name!='NULL' AND release_date >= '2011-06-02' AND release_date <= '2011-06-02' AND match(author,'dan')\"\r\n}\r\n```\r\n\r\nor (with numerics):\r\n\r\n```\r\nPOST /_xpack/sql/translate\r\n{\r\n \"query\":\"SELECT name.keyword FROM library WHERE name!='NULL' AND price >= 10 AND price <= 200 AND match(author,'dan')\"\r\n}\r\n```\r\n\r\nThe translated query uses individual `range` queries for the lower and upper limits even though the field being used in the query is the same and an optimization like `{\"range\":{\"price\":{\"from\":10,\"to\":200,\"include_lower\":true,\"include_upper\":true,\"boost\":1}}}` can be used instead. For the reference, this is the translated query at the moment (irrelevant parts not provided):\r\n\r\n```\r\n...\r\n \"query\": {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"bool\": {\r\n \"filter\": [\r\n {\r\n \"bool\": {\r\n \"must_not\": [\r\n {\r\n \"term\": {\r\n \"name.keyword\": {\r\n \"value\": \"NULL\",\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"price\": {\r\n \"from\": 10,\r\n \"to\": null,\r\n \"include_lower\": true,\r\n \"include_upper\": false,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n {\r\n \"range\": {\r\n \"price\": {\r\n \"from\": null,\r\n \"to\": 200,\r\n \"include_lower\": false,\r\n \"include_upper\": true,\r\n \"boost\": 1\r\n }\r\n }\r\n }\r\n ],\r\n \"adjust_pure_negative\": true,\r\n \"boost\": 1\r\n }\r\n },\r\n...\r\n```", "comments": [], "number": 30017, "title": "SQL: date and numeric comparison are translated to separate range queries" }
{ "body": "Rewrote optimization rule for combining ranges by improving the\r\ndetection of binary comparisons in a tree to better combine\r\nthem in a range, regardless of their place inside an expression.\r\nAdditionally, improve the comparisons of Numbers of different types\r\nAlso, improve reassembly of conjunction/disjunction into balanced\r\ntrees.\r\n\r\nFix #30017\r\nCloses #30019", "number": 30267, "review_comments": [ { "body": "What are the side effects on not catching this and letting it bubble out? Or even wrapping it? I think it'd be nice to have a big comment here about why returning `null` is ok.\r\n\r\nAlso, can you move the `try` into the `if` statement so it looks smaller? That'd just make me a bit more comfortable. If returning `null` is right here then I think you should return it from the `catch` rather than fall out. That feels a little safer even if it isn't.", "created_at": "2018-04-30T15:55:28Z" }, { "body": "Maybe \r\n```\r\nBuilds a \"pyramid\" out of all clauses by combining them pairwise. So\r\n{@code combine(List(a, b, c, d, e), AND)` becomes\r\n<pre><code>\r\n AND\r\n |---/ \\---|\r\n AND AND\r\n / \\ / \\\r\ne AND b c\r\n / \\ \r\n a b \r\n</pre></code>\r\n```\r\n\r\nThe picture would make me feel better.", "created_at": "2018-04-30T16:04:56Z" }, { "body": "I wonder if this should be `Queue<Expression> work = new ArrayDeque<>(exps);` and then you do `work.addLast(combiner.apply(work.removeFirst(), work.removeFirst()));` The `for` loop and `remove` together scare me.", "created_at": "2018-04-30T16:10:54Z" }, { "body": "Can you remove the extra `(` and `)`? I don't think they help here.", "created_at": "2018-04-30T16:11:43Z" }, { "body": "Can you move the explanation to javadoc? I do like that you catch this case. I might rename this method. I'd call it `isDegenerate` but I think that isn't a great name either. `matchesNothing`?", "created_at": "2018-04-30T16:13:48Z" }, { "body": "Can you give an example, maybe in terms of SQL? Like, \r\n`converts {@code SELECT * FROM a < 10 OR a == 10} into {@code SELECT * FROM a <= 10}?`?", "created_at": "2018-04-30T16:15:29Z" }, { "body": "Can you name `equ` and `eq` a little more differently? They sort of blur together to me when I read this.", "created_at": "2018-04-30T16:18:26Z" }, { "body": "I see. This one `converts {@code SELECT * FROM a < 10 AND a = 11} into {@SELECT * FROM FALSE}`.", "created_at": "2018-04-30T16:20:38Z" }, { "body": "Maybe in a follow up move each of these chunks into separate tests? 
`OptimizerConstantFoldingTests` and `OptimizerLogicalSimplificationsTests` and `OptimizerRangeOptimizationTests` or something.", "created_at": "2018-04-30T16:23:04Z" }, { "body": "I'd like to extract the optimizer and the tests and have the rules for `BinaryComparisons` separate.", "created_at": "2018-04-30T16:28:44Z" }, { "body": "CCE is thrown when the types are not compatible; I've expanded the comment and returns the null directly from catch.", "created_at": "2018-04-30T17:30:04Z" }, { "body": "Done.", "created_at": "2018-04-30T17:33:04Z" }, { "body": "Done.", "created_at": "2018-04-30T18:49:39Z" }, { "body": "Done.", "created_at": "2018-04-30T18:51:59Z" }, { "body": "I talked with @costin earlier about this - he wants to keep the order the same and my proposal doesn't.\r\n\r\nWhat about this?\r\n```\r\nwhile (result.size() > 1) {\r\n ListIterator<Expression> itr = result.iterator();\r\n while (itr.hasNext()) {\r\n itr.add(combiner.apply(itr.remove(), itr.remove()));\r\n }\r\n}\r\n```\r\n\r\n\r\nYour version works but `for (int i = 0; i < result.size() - 1; i++) {` make me think it'll be a normal loop and then you remove and add and I'm confused. Using the `ListIterator` forces the reader to think.", "created_at": "2018-04-30T21:56:10Z" }, { "body": "Done", "created_at": "2018-05-02T16:38:32Z" }, { "body": "If you'd like we can follow this up in a different PR.\r\nThe approach above doesn't work since `remove` can only be called _once_ per _next_/_previous_.\r\n\r\nI've added a comment to the loop explaining that it updates the list (and thus why it uses the temporary variable).\r\nConsidering the loops are fairly contained I think that conveys the message without the gymnastics of iterator.\r\n\r\nP.S. The list is created just above as a copy for this specific reason.", "created_at": "2018-05-02T16:41:51Z" }, { "body": "I'm fine with leaving it, yeah. I did want a prettier one but if this is what we can do, it'll do.", "created_at": "2018-05-02T19:01:00Z" } ], "title": "SQL: Reduce number of ranges generated for comparisons" }
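One part of this change that is easy to miss in the diff below is the "balanced tree" reassembly mentioned in the description and debated in the review comments. The generic sketch below mirrors the pairwise loop that `Predicates.combine` uses in this PR, but runs it on plain strings so the shape of the result is visible; the class name and the `main` driver are only for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BinaryOperator;

// Mirrors the pairwise combination loop used by Predicates.combine in this PR,
// applied to strings instead of SQL Expression nodes. Illustration only.
final class BalancedCombineSketch {

    static <T> T combine(List<T> items, BinaryOperator<T> combiner) {
        if (items.isEmpty()) {
            return null;
        }
        List<T> result = new ArrayList<>(items); // copy, because the loop edits the list in place
        while (result.size() > 1) {
            // combine neighbours pairwise; an odd trailing element is carried into the next pass
            for (int i = 0; i < result.size() - 1; i++) {
                T left = result.remove(i);
                T right = result.remove(i);
                result.add(i, combiner.apply(left, right));
            }
        }
        return result.get(0);
    }

    public static void main(String[] args) {
        String tree = combine(Arrays.asList("a", "b", "c", "d", "e"),
                (l, r) -> "(" + l + " AND " + r + ")");
        // prints (((a AND b) AND (c AND d)) AND e), a shallower tree than the left-deep
        // ((((a AND b) AND c) AND d) AND e) that a plain left-to-right reduce would build
        System.out.println(tree);
    }
}
```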
{ "commits": [ { "message": "SQL: Reduce number of ranges generated for comparisons\n\nRewrote optimization rule for combining ranges by improving the\ndetection of binary comparisons in a tree to better combine\nthem in a range, regardless of their place inside an expression.\nAdditionally, improve the comparisons of Numbers of different types\nAlso, improve reassembly of conjunction/disjunction into balanced\ntrees.\n\nFix #30017" }, { "message": "Address feedback" }, { "message": "Improve rule for optimizing binary comparison\n\nDo not promote BinaryComparisons to Ranges since it introduces NULL\nboundaries and thus a corner-case that needs too much handling\nCompare BinaryComparisons directly between themselves and to Ranges" }, { "message": "Merge remote-tracking branch 'remotes/upstream/master' into better-binary-comparison-combination" }, { "message": "Fix tests" }, { "message": "Add extra comments" } ], "files": [ { "diff": "@@ -9,7 +9,6 @@\n import org.elasticsearch.xpack.sql.expression.Expression;\n import org.elasticsearch.xpack.sql.tree.Location;\n import org.elasticsearch.xpack.sql.type.DataType;\n-import org.elasticsearch.xpack.sql.type.DataTypes;\n \n // marker class to indicate operations that rely on values\n public abstract class BinaryComparison extends BinaryOperator {\n@@ -33,11 +32,42 @@ public DataType dataType() {\n return DataType.BOOLEAN;\n }\n \n+ /**\n+ * Compares two expression arguments (typically Numbers), if possible.\n+ * Otherwise returns null (the arguments are not comparable or at least\n+ * one of them is null).\n+ */\n @SuppressWarnings({ \"rawtypes\", \"unchecked\" })\n- static Integer compare(Object left, Object right) {\n- if (left instanceof Comparable && right instanceof Comparable) {\n- return Integer.valueOf(((Comparable) left).compareTo(right));\n+ public static Integer compare(Object l, Object r) {\n+ // typical number comparison\n+ if (l instanceof Number && r instanceof Number) {\n+ return compare((Number) l, (Number) r);\n }\n+\n+ if (l instanceof Comparable && r instanceof Comparable) {\n+ try {\n+ return Integer.valueOf(((Comparable) l).compareTo(r));\n+ } catch (ClassCastException cce) {\n+ // when types are not compatible, cce is thrown\n+ // fall back to null\n+ return null;\n+ }\n+ }\n+\n return null;\n }\n+\n+ static Integer compare(Number l, Number r) {\n+ if (l instanceof Double || r instanceof Double) {\n+ return Double.compare(l.doubleValue(), r.doubleValue());\n+ }\n+ if (l instanceof Float || r instanceof Float) {\n+ return Float.compare(l.floatValue(), r.floatValue());\n+ }\n+ if (l instanceof Long || r instanceof Long) {\n+ return Long.compare(l.longValue(), r.longValue());\n+ }\n+\n+ return Integer.valueOf(Integer.compare(l.intValue(), r.intValue()));\n+ }\n }", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/expression/predicate/BinaryComparison.java", "status": "modified" }, { "diff": "@@ -9,8 +9,11 @@\n import org.elasticsearch.xpack.sql.plan.logical.LogicalPlan;\n \n import java.util.ArrayList;\n-import java.util.Collections;\n import java.util.List;\n+import java.util.function.BiFunction;\n+\n+import static java.util.Collections.emptyList;\n+import static java.util.Collections.singletonList;\n \n public abstract class Predicates {\n \n@@ -22,7 +25,7 @@ public static List<Expression> splitAnd(Expression exp) {\n list.addAll(splitAnd(and.right()));\n return list;\n }\n- return Collections.singletonList(exp);\n+ return singletonList(exp);\n }\n \n public static List<Expression> splitOr(Expression 
exp) {\n@@ -33,15 +36,51 @@ public static List<Expression> splitOr(Expression exp) {\n list.addAll(splitOr(or.right()));\n return list;\n }\n- return Collections.singletonList(exp);\n+ return singletonList(exp);\n }\n \n public static Expression combineOr(List<Expression> exps) {\n- return exps.stream().reduce((l, r) -> new Or(l.location(), l, r)).orElse(null);\n+ return combine(exps, (l, r) -> new Or(l.location(), l, r));\n }\n \n public static Expression combineAnd(List<Expression> exps) {\n- return exps.stream().reduce((l, r) -> new And(l.location(), l, r)).orElse(null);\n+ return combine(exps, (l, r) -> new And(l.location(), l, r));\n+ }\n+\n+ /**\n+ * Build a binary 'pyramid' from the given list:\n+ * <pre>\n+ * AND\n+ * / \\\n+ * AND AND\n+ * / \\ / \\\n+ * A B C D\n+ * </pre>\n+ * \n+ * using the given combiner.\n+ * \n+ * While a bit longer, this method creates a balanced tree as oppose to a plain\n+ * recursive approach which creates an unbalanced one (either to the left or right).\n+ */\n+ private static Expression combine(List<Expression> exps, BiFunction<Expression, Expression, Expression> combiner) {\n+ if (exps.isEmpty()) {\n+ return null;\n+ }\n+\n+ // clone the list (to modify it)\n+ List<Expression> result = new ArrayList<>(exps);\n+\n+ while (result.size() > 1) {\n+ // combine (in place) expressions in pairs\n+ // NB: this loop modifies the list (just like an array)\n+ for (int i = 0; i < result.size() - 1; i++) {\n+ Expression l = result.remove(i);\n+ Expression r = result.remove(i);\n+ result.add(i, combiner.apply(l, r));\n+ }\n+ }\n+\n+ return result.get(0);\n }\n \n public static List<Expression> inCommon(List<Expression> l, List<Expression> r) {\n@@ -53,7 +92,7 @@ public static List<Expression> inCommon(List<Expression> l, List<Expression> r)\n }\n }\n }\n- return common.isEmpty() ? Collections.emptyList() : common;\n+ return common.isEmpty() ? emptyList() : common;\n }\n \n public static List<Expression> subtract(List<Expression> from, List<Expression> r) {\n@@ -65,7 +104,7 @@ public static List<Expression> subtract(List<Expression> from, List<Expression>\n }\n }\n }\n- return diff.isEmpty() ? Collections.emptyList() : diff;\n+ return diff.isEmpty() ? 
emptyList() : diff;\n }\n \n ", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/expression/predicate/Predicates.java", "status": "modified" }, { "diff": "@@ -9,7 +9,6 @@\n import org.elasticsearch.xpack.sql.tree.Location;\n import org.elasticsearch.xpack.sql.tree.NodeInfo;\n import org.elasticsearch.xpack.sql.type.DataType;\n-import org.elasticsearch.xpack.sql.type.DataTypes;\n \n import java.util.Arrays;\n import java.util.List;\n@@ -66,11 +65,19 @@ public boolean includeUpper() {\n \n @Override\n public boolean foldable() {\n- return value.foldable() && lower.foldable() && upper.foldable();\n+ if (lower.foldable() && upper.foldable()) {\n+ return areBoundariesInvalid() || value.foldable();\n+ }\n+\n+ return false;\n }\n \n @Override\n public Object fold() {\n+ if (areBoundariesInvalid()) {\n+ return Boolean.FALSE;\n+ }\n+\n Object val = value.fold();\n Integer lowerCompare = BinaryComparison.compare(lower.fold(), val);\n Integer upperCompare = BinaryComparison.compare(val, upper().fold());\n@@ -79,6 +86,16 @@ public Object fold() {\n return lowerComparsion && upperComparsion;\n }\n \n+ /**\n+ * Check whether the boundaries are invalid ( upper < lower) or not.\n+ * If they do, the value does not have to be evaluate.\n+ */\n+ private boolean areBoundariesInvalid() {\n+ Integer compare = BinaryComparison.compare(lower.fold(), upper.fold());\n+ // upper < lower OR upper == lower and the range doesn't contain any equals\n+ return compare != null && (compare > 0 || (compare == 0 && (!includeLower || !includeUpper)));\n+ }\n+\n @Override\n public boolean nullable() {\n return value.nullable() && lower.nullable() && upper.nullable();\n@@ -122,4 +139,4 @@ public String toString() {\n sb.append(upper);\n return sb.toString();\n }\n-}\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/expression/predicate/Range.java", "status": "modified" }, { "diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.xpack.sql.expression.predicate.LessThanOrEqual;\n import org.elasticsearch.xpack.sql.expression.predicate.Not;\n import org.elasticsearch.xpack.sql.expression.predicate.Or;\n+import org.elasticsearch.xpack.sql.expression.predicate.Predicates;\n import org.elasticsearch.xpack.sql.expression.predicate.Range;\n import org.elasticsearch.xpack.sql.plan.logical.Aggregate;\n import org.elasticsearch.xpack.sql.plan.logical.Filter;\n@@ -60,6 +61,7 @@\n import org.elasticsearch.xpack.sql.rule.RuleExecutor;\n import org.elasticsearch.xpack.sql.session.EmptyExecutable;\n import org.elasticsearch.xpack.sql.session.SingletonExecutable;\n+import org.elasticsearch.xpack.sql.util.CollectionUtils;\n \n import java.util.ArrayList;\n import java.util.Arrays;\n@@ -121,9 +123,11 @@ protected Iterable<RuleExecutor<LogicalPlan>.Batch> batches() {\n new ConstantFolding(),\n // boolean\n new BooleanSimplification(),\n- new BinaryComparisonSimplification(),\n new BooleanLiteralsOnTheRight(),\n- new CombineComparisonsIntoRange(),\n+ new BinaryComparisonSimplification(),\n+ // needs to occur before BinaryComparison combinations (see class)\n+ new PropagateEquals(),\n+ new CombineBinaryComparisons(),\n // prune/elimination\n new PruneFilters(),\n new PruneOrderBy(),\n@@ -1231,7 +1235,7 @@ private Expression simplifyNot(Not n) {\n static class BinaryComparisonSimplification extends OptimizerExpressionRule {\n \n BinaryComparisonSimplification() {\n- super(TransformDirection.UP);\n+ super(TransformDirection.DOWN);\n }\n \n @Override\n@@ -1277,47 +1281,483 
@@ private Expression literalToTheRight(BinaryExpression be) {\n }\n }\n \n- static class CombineComparisonsIntoRange extends OptimizerExpressionRule {\n-\n- CombineComparisonsIntoRange() {\n- super(TransformDirection.UP);\n+ /**\n+ * Propagate Equals to eliminate conjuncted Ranges.\n+ * When encountering a different Equals or non-containing {@link Range}, the conjunction becomes false.\n+ * When encountering a containing {@link Range}, the range gets eliminated by the equality.\n+ * \n+ * This rule doesn't perform any promotion of {@link BinaryComparison}s, that is handled by\n+ * {@link CombineBinaryComparisons} on purpose as the resulting Range might be foldable\n+ * (which is picked by the folding rule on the next run).\n+ */\n+ static class PropagateEquals extends OptimizerExpressionRule {\n+\n+ PropagateEquals() {\n+ super(TransformDirection.DOWN);\n }\n \n @Override\n protected Expression rule(Expression e) {\n- return e instanceof And ? combine((And) e) : e;\n+ if (e instanceof And) {\n+ return propagate((And) e);\n+ }\n+ return e;\n+ }\n+\n+ // combine conjunction\n+ private Expression propagate(And and) {\n+ List<Range> ranges = new ArrayList<>();\n+ List<Equals> equals = new ArrayList<>();\n+ List<Expression> exps = new ArrayList<>();\n+\n+ boolean changed = false;\n+\n+ for (Expression ex : Predicates.splitAnd(and)) {\n+ if (ex instanceof Range) {\n+ ranges.add((Range) ex);\n+ } else if (ex instanceof Equals) {\n+ Equals otherEq = (Equals) ex;\n+ // equals on different values evaluate to FALSE\n+ if (otherEq.right().foldable()) {\n+ for (Equals eq : equals) {\n+ // cannot evaluate equals so skip it\n+ if (!eq.right().foldable()) {\n+ continue;\n+ }\n+ if (otherEq.left().semanticEquals(eq.left())) {\n+ if (eq.right().foldable() && otherEq.right().foldable()) {\n+ Integer comp = BinaryComparison.compare(eq.right().fold(), otherEq.right().fold());\n+ if (comp != null) {\n+ // var cannot be equal to two different values at the same time\n+ if (comp != 0) {\n+ return FALSE;\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ equals.add(otherEq);\n+ } else {\n+ exps.add(ex);\n+ }\n+ }\n+\n+ // check\n+ for (Equals eq : equals) {\n+ // cannot evaluate equals so skip it\n+ if (!eq.right().foldable()) {\n+ continue;\n+ }\n+ Object eqValue = eq.right().fold();\n+ \n+ for (int i = 0; i < ranges.size(); i++) {\n+ Range range = ranges.get(i);\n+\n+ if (range.value().semanticEquals(eq.left())) {\n+ // if equals is outside the interval, evaluate the whole expression to FALSE\n+ if (range.lower().foldable()) {\n+ Integer compare = BinaryComparison.compare(range.lower().fold(), eqValue);\n+ if (compare != null && (\n+ // eq outside the lower boundary\n+ compare > 0 ||\n+ // eq matches the boundary but should not be included\n+ (compare == 0 && !range.includeLower()))\n+ ) {\n+ return FALSE;\n+ }\n+ }\n+ if (range.upper().foldable()) {\n+ Integer compare = BinaryComparison.compare(range.upper().fold(), eqValue);\n+ if (compare != null && (\n+ // eq outside the upper boundary\n+ compare < 0 ||\n+ // eq matches the boundary but should not be included\n+ (compare == 0 && !range.includeUpper()))\n+ ) {\n+ return FALSE;\n+ }\n+ }\n+ \n+ // it's in the range and thus, remove it\n+ ranges.remove(i);\n+ changed = true;\n+ }\n+ }\n+ }\n+ \n+ return changed ? 
Predicates.combineAnd(CollectionUtils.combine(exps, equals, ranges)) : and;\n+ }\n+ }\n+\n+ static class CombineBinaryComparisons extends OptimizerExpressionRule {\n+\n+ CombineBinaryComparisons() {\n+ super(TransformDirection.DOWN);\n }\n \n+ @Override\n+ protected Expression rule(Expression e) {\n+ if (e instanceof And) {\n+ return combine((And) e);\n+ } else if (e instanceof Or) {\n+ return combine((Or) e);\n+ }\n+ return e;\n+ }\n+ \n+ // combine conjunction\n private Expression combine(And and) {\n- Expression l = and.left();\n- Expression r = and.right();\n+ List<Range> ranges = new ArrayList<>();\n+ List<BinaryComparison> bcs = new ArrayList<>();\n+ List<Expression> exps = new ArrayList<>();\n+\n+ boolean changed = false;\n+\n+ for (Expression ex : Predicates.splitAnd(and)) {\n+ if (ex instanceof Range) {\n+ Range r = (Range) ex;\n+ if (findExistingRange(r, ranges, true)) {\n+ changed = true;\n+ } else {\n+ ranges.add(r);\n+ }\n+ } else if (ex instanceof BinaryComparison && !(ex instanceof Equals)) {\n+ BinaryComparison bc = (BinaryComparison) ex;\n \n- if (l instanceof BinaryComparison && r instanceof BinaryComparison) {\n- // if the same operator is used\n- BinaryComparison lb = (BinaryComparison) l;\n- BinaryComparison rb = (BinaryComparison) r;\n+ if (bc.right().foldable() && (findConjunctiveComparisonInRange(bc, ranges) || findExistingComparison(bc, bcs, true))) {\n+ changed = true;\n+ } else {\n+ bcs.add(bc);\n+ }\n+ } else {\n+ exps.add(ex);\n+ }\n+ }\n+ \n+ // finally try combining any left BinaryComparisons into possible Ranges\n+ // this could be a different rule but it's clearer here wrt the order of comparisons\n+\n+ for (int i = 0; i < bcs.size() - 1; i++) {\n+ BinaryComparison main = bcs.get(i);\n+\n+ for (int j = i + 1; j < bcs.size(); j++) {\n+ BinaryComparison other = bcs.get(j);\n+ \n+ if (main.left().semanticEquals(other.left())) {\n+ // >/>= AND </<=\n+ if ((main instanceof GreaterThan || main instanceof GreaterThanOrEqual)\n+ && (other instanceof LessThan || other instanceof LessThanOrEqual)) {\n+ bcs.remove(j);\n+ bcs.remove(i);\n+ \n+ ranges.add(new Range(and.location(), main.left(),\n+ main.right(), main instanceof GreaterThanOrEqual,\n+ other.right(), other instanceof LessThanOrEqual));\n \n+ changed = true;\n+ }\n+ // </<= AND >/>=\n+ else if ((other instanceof GreaterThan || other instanceof GreaterThanOrEqual)\n+ && (main instanceof LessThan || main instanceof LessThanOrEqual)) {\n+ bcs.remove(j);\n+ bcs.remove(i);\n+ \n+ ranges.add(new Range(and.location(), main.left(),\n+ other.right(), other instanceof GreaterThanOrEqual,\n+ main.right(), main instanceof LessThanOrEqual));\n \n- if (lb.left().equals(((BinaryComparison) r).left()) && lb.right() instanceof Literal && rb.right() instanceof Literal) {\n- // >/>= AND </<=\n- if ((l instanceof GreaterThan || l instanceof GreaterThanOrEqual)\n- && (r instanceof LessThan || r instanceof LessThanOrEqual)) {\n- return new Range(and.location(), lb.left(), lb.right(), l instanceof GreaterThanOrEqual, rb.right(),\n- r instanceof LessThanOrEqual);\n+ changed = true;\n+ }\n }\n- // </<= AND >/>=\n- else if ((r instanceof GreaterThan || r instanceof GreaterThanOrEqual)\n- && (l instanceof LessThan || l instanceof LessThanOrEqual)) {\n- return new Range(and.location(), rb.left(), rb.right(), r instanceof GreaterThanOrEqual, lb.right(),\n- l instanceof LessThanOrEqual);\n+ }\n+ }\n+ \n+ \n+ return changed ? 
Predicates.combineAnd(CollectionUtils.combine(exps, bcs, ranges)) : and;\n+ }\n+\n+ // combine disjunction\n+ private Expression combine(Or or) {\n+ List<BinaryComparison> bcs = new ArrayList<>();\n+ List<Range> ranges = new ArrayList<>();\n+ List<Expression> exps = new ArrayList<>();\n+\n+ boolean changed = false;\n+\n+ for (Expression ex : Predicates.splitOr(or)) {\n+ if (ex instanceof Range) {\n+ Range r = (Range) ex;\n+ if (findExistingRange(r, ranges, false)) {\n+ changed = true;\n+ } else {\n+ ranges.add(r);\n }\n+ } else if (ex instanceof BinaryComparison) {\n+ BinaryComparison bc = (BinaryComparison) ex;\n+ if (bc.right().foldable() && findExistingComparison(bc, bcs, false)) {\n+ changed = true;\n+ } else {\n+ bcs.add(bc);\n+ }\n+ } else {\n+ exps.add(ex);\n }\n }\n \n- return and;\n+ return changed ? Predicates.combineOr(CollectionUtils.combine(exps, bcs, ranges)) : or;\n }\n- }\n \n+ private static boolean findExistingRange(Range main, List<Range> ranges, boolean conjunctive) {\n+ if (!main.lower().foldable() && !main.upper().foldable()) {\n+ return false;\n+ }\n+ // NB: the loop modifies the list (hence why the int is used)\n+ for (int i = 0; i < ranges.size(); i++) {\n+ Range other = ranges.get(i);\n+\n+ if (main.value().semanticEquals(other.value())) {\n+\n+ // make sure the comparison was done\n+ boolean compared = false;\n+\n+ boolean lower = false;\n+ boolean upper = false;\n+ // boundary equality (useful to differentiate whether a range is included or not)\n+ // and thus whether it should be preserved or ignored\n+ boolean lowerEq = false;\n+ boolean upperEq = false;\n+\n+ // evaluate lower\n+ if (main.lower().foldable() && other.lower().foldable()) {\n+ compared = true;\n+\n+ Integer comp = BinaryComparison.compare(main.lower().fold(), other.lower().fold());\n+ // values are comparable\n+ if (comp != null) {\n+ // boundary equality\n+ lowerEq = comp == 0 && main.includeLower() == other.includeLower();\n+ // AND\n+ if (conjunctive) {\n+ // (2 < a < 3) AND (1 < a < 3) -> (1 < a < 3)\n+ lower = comp > 0 ||\n+ // (2 < a < 3) AND (2 < a <= 3) -> (2 < a < 3)\n+ (comp == 0 && !main.includeLower() && other.includeLower());\n+ }\n+ // OR\n+ else {\n+ // (1 < a < 3) OR (2 < a < 3) -> (1 < a < 3)\n+ lower = comp < 0 ||\n+ // (2 <= a < 3) OR (2 < a < 3) -> (2 <= a < 3)\n+ (comp == 0 && main.includeLower() && !other.includeLower()) || lowerEq;\n+ }\n+ }\n+ }\n+ // evaluate upper\n+ if (main.upper().foldable() && other.upper().foldable()) {\n+ compared = true;\n+\n+ Integer comp = BinaryComparison.compare(main.upper().fold(), other.upper().fold());\n+ // values are comparable\n+ if (comp != null) {\n+ // boundary equality\n+ upperEq = comp == 0 && main.includeUpper() == other.includeUpper();\n+\n+ // AND\n+ if (conjunctive) {\n+ // (1 < a < 2) AND (1 < a < 3) -> (1 < a < 2)\n+ upper = comp < 0 ||\n+ // (1 < a < 2) AND (1 < a <= 2) -> (1 < a < 2)\n+ (comp == 0 && !main.includeUpper() && other.includeUpper());\n+ }\n+ // OR\n+ else {\n+ // (1 < a < 3) OR (1 < a < 2) -> (1 < a < 3)\n+ upper = comp > 0 ||\n+ // (1 < a <= 3) OR (1 < a < 3) -> (2 < a < 3)\n+ (comp == 0 && main.includeUpper() && !other.includeUpper()) || upperEq;\n+ }\n+ }\n+ }\n+\n+ // AND - at least one of lower or upper\n+ if (conjunctive) {\n+ // can tighten range\n+ if (lower || upper) {\n+ ranges.remove(i);\n+ ranges.add(i,\n+ new Range(main.location(), main.value(),\n+ lower ? main.lower() : other.lower(),\n+ lower ? main.includeLower() : other.includeLower(),\n+ upper ? main.upper() : other.upper(),\n+ upper ? 
main.includeUpper() : other.includeUpper()));\n+ }\n+\n+ // range was comparable\n+ return compared;\n+ }\n+ // OR - needs both upper and lower to loosen range\n+ else {\n+ // can loosen range\n+ if (lower && upper) {\n+ ranges.remove(i);\n+ ranges.add(i,\n+ new Range(main.location(), main.value(),\n+ lower ? main.lower() : other.lower(),\n+ lower ? main.includeLower() : other.includeLower(),\n+ upper ? main.upper() : other.upper(),\n+ upper ? main.includeUpper() : other.includeUpper()));\n+ return true;\n+ }\n+\n+ // if the range in included, no need to add it\n+ return compared && (!((lower && !lowerEq) || (upper && !upperEq)));\n+ }\n+ }\n+ }\n+ return false;\n+ }\n+\n+ private boolean findConjunctiveComparisonInRange(BinaryComparison main, List<Range> ranges) {\n+ Object value = main.right().fold();\n+ \n+ // NB: the loop modifies the list (hence why the int is used)\n+ for (int i = 0; i < ranges.size(); i++) {\n+ Range other = ranges.get(i);\n+ \n+ if (main.left().semanticEquals(other.value())) {\n+ \n+ if (main instanceof GreaterThan || main instanceof GreaterThanOrEqual) {\n+ if (other.lower().foldable()) {\n+ Integer comp = BinaryComparison.compare(value, other.lower().fold());\n+ if (comp != null) {\n+ // 2 < a AND (2 <= a < 3) -> 2 < a < 3\n+ boolean lowerEq = comp == 0 && other.includeLower() && main instanceof GreaterThan;\n+ // 2 < a AND (1 < a < 3) -> 2 < a < 3\n+ boolean lower = comp > 0 || lowerEq;\n+ \n+ if (lower) {\n+ ranges.remove(i);\n+ ranges.add(i,\n+ new Range(other.location(), other.value(),\n+ main.right(), lowerEq ? true : other.includeLower(),\n+ other.upper(), other.includeUpper()));\n+ }\n+\n+ // found a match\n+ return true;\n+ }\n+ }\n+ } else if (main instanceof LessThan || main instanceof LessThanOrEqual) {\n+ if (other.lower().foldable()) {\n+ Integer comp = BinaryComparison.compare(value, other.lower().fold());\n+ if (comp != null) {\n+ // a < 2 AND (1 < a <= 2) -> 1 < a < 2\n+ boolean upperEq = comp == 0 && other.includeUpper() && main instanceof LessThan;\n+ // a < 2 AND (1 < a < 3) -> 1 < a < 2\n+ boolean upper = comp > 0 || upperEq;\n+\n+ if (upper) {\n+ ranges.remove(i);\n+ ranges.add(i, new Range(other.location(), other.value(),\n+ other.lower(), other.includeLower(),\n+ main.right(), upperEq ? 
true : other.includeUpper()));\n+ }\n+\n+ // found a match\n+ return true;\n+ }\n+ }\n+ }\n+\n+ return false;\n+ }\n+ }\n+ return false;\n+ }\n+ \n+ /**\n+ * Find commonalities between the given comparison in the given list.\n+ * The method can be applied both for conjunctive (AND) or disjunctive purposes (OR).\n+ */\n+ private static boolean findExistingComparison(BinaryComparison main, List<BinaryComparison> bcs, boolean conjunctive) {\n+ Object value = main.right().fold();\n+ \n+ // NB: the loop modifies the list (hence why the int is used)\n+ for (int i = 0; i < bcs.size(); i++) {\n+ BinaryComparison other = bcs.get(i);\n+ // skip if cannot evaluate\n+ if (!other.right().foldable()) {\n+ continue;\n+ }\n+ // if bc is a higher/lower value or gte vs gt, use it instead\n+ if ((other instanceof GreaterThan || other instanceof GreaterThanOrEqual) &&\n+ (main instanceof GreaterThan || main instanceof GreaterThanOrEqual)) {\n+ \n+ if (main.left().semanticEquals(other.left())) {\n+ Integer compare = BinaryComparison.compare(value, other.right().fold());\n+ \n+ if (compare != null) {\n+ // AND\n+ if ((conjunctive &&\n+ // a > 3 AND a > 2 -> a > 3\n+ (compare > 0 ||\n+ // a > 2 AND a >= 2 -> a > 2\n+ (compare == 0 && main instanceof GreaterThan && other instanceof GreaterThanOrEqual)))\n+ ||\n+ // OR\n+ (!conjunctive &&\n+ // a > 2 OR a > 3 -> a > 2\n+ (compare < 0 ||\n+ // a >= 2 OR a > 2 -> a >= 2\n+ (compare == 0 && main instanceof GreaterThanOrEqual && other instanceof GreaterThan)))) {\n+ bcs.remove(i);\n+ bcs.add(i, main);\n+ }\n+ // found a match\n+ return true;\n+ }\n+\n+ return false;\n+ }\n+ }\n+ // if bc is a lower/higher value or lte vs lt, use it instead\n+ else if ((other instanceof LessThan || other instanceof LessThanOrEqual) &&\n+ (main instanceof LessThan || main instanceof LessThanOrEqual)) {\n+ \n+ if (main.left().semanticEquals(other.left())) {\n+ Integer compare = BinaryComparison.compare(value, other.right().fold());\n+ \n+ if (compare != null) {\n+ // AND\n+ if ((conjunctive &&\n+ // a < 2 AND a < 3 -> a < 2\n+ (compare < 0 ||\n+ // a < 2 AND a <= 2 -> a < 2\n+ (compare == 0 && main instanceof LessThan && other instanceof LessThanOrEqual)))\n+ ||\n+ // OR\n+ (!conjunctive &&\n+ // a < 2 OR a < 3 -> a < 3\n+ (compare > 0 ||\n+ // a <= 2 OR a < 2 -> a <= 2\n+ (compare == 0 && main instanceof LessThanOrEqual && other instanceof LessThan)))) {\n+ bcs.remove(i);\n+ bcs.add(i, main);\n+ \n+ }\n+ // found a match\n+ return true;\n+ }\n+\n+ return false;\n+ }\n+ }\n+ }\n+ \n+ return false;\n+ }\n+ }\n \n static class SkipQueryOnLimitZero extends OptimizerRule<Limit> {\n @Override\n@@ -1435,4 +1875,4 @@ protected LogicalPlan rule(LogicalPlan plan) {\n enum TransformDirection {\n UP, DOWN\n };\n-}\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/optimizer/Optimizer.java", "status": "modified" }, { "diff": "@@ -5,6 +5,8 @@\n */\n package org.elasticsearch.xpack.sql.tree;\n \n+import org.elasticsearch.xpack.sql.SqlIllegalArgumentException;\n+\n import java.util.ArrayList;\n import java.util.BitSet;\n import java.util.List;\n@@ -37,6 +39,9 @@ public abstract class Node<T extends Node<T>> {\n \n public Node(Location location, List<T> children) {\n this.location = (location != null ? 
location : Location.EMPTY);\n+ if (children.contains(null)) {\n+ throw new SqlIllegalArgumentException(\"Null children are not allowed\");\n+ }\n this.children = children;\n }\n ", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/tree/Node.java", "status": "modified" }, { "diff": "@@ -0,0 +1,49 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.sql.optimizer;\n+\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.xpack.sql.analysis.analyzer.Analyzer;\n+import org.elasticsearch.xpack.sql.analysis.index.EsIndex;\n+import org.elasticsearch.xpack.sql.analysis.index.IndexResolution;\n+import org.elasticsearch.xpack.sql.expression.function.FunctionRegistry;\n+import org.elasticsearch.xpack.sql.parser.SqlParser;\n+import org.elasticsearch.xpack.sql.plan.logical.LogicalPlan;\n+import org.elasticsearch.xpack.sql.type.EsField;\n+import org.elasticsearch.xpack.sql.type.TypesTests;\n+\n+import java.util.Map;\n+import java.util.TimeZone;\n+\n+public class OptimizerRunTests extends ESTestCase {\n+\n+ private final SqlParser parser;\n+ private final IndexResolution getIndexResult;\n+ private final FunctionRegistry functionRegistry;\n+ private final Analyzer analyzer;\n+ private final Optimizer optimizer;\n+\n+ public OptimizerRunTests() {\n+ parser = new SqlParser();\n+ functionRegistry = new FunctionRegistry();\n+\n+ Map<String, EsField> mapping = TypesTests.loadMapping(\"mapping-multi-field-variation.json\");\n+\n+ EsIndex test = new EsIndex(\"test\", mapping);\n+ getIndexResult = IndexResolution.valid(test);\n+ analyzer = new Analyzer(functionRegistry, getIndexResult, TimeZone.getTimeZone(\"UTC\"));\n+ optimizer = new Optimizer();\n+ }\n+\n+ private LogicalPlan plan(String sql) {\n+ return optimizer.optimize(analyzer.analyze(parser.createStatement(sql)));\n+ }\n+\n+ public void testWhereClause() {\n+ LogicalPlan p = plan(\"SELECT some.string l FROM test WHERE int IS NOT NULL AND int < 10005 ORDER BY int\");\n+ assertNotNull(p);\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/optimizer/OptimizerRunTests.java", "status": "added" }, { "diff": "@@ -9,6 +9,7 @@\n import org.elasticsearch.xpack.sql.expression.Alias;\n import org.elasticsearch.xpack.sql.expression.Expression;\n import org.elasticsearch.xpack.sql.expression.Expressions;\n+import org.elasticsearch.xpack.sql.expression.FieldAttribute;\n import org.elasticsearch.xpack.sql.expression.Literal;\n import org.elasticsearch.xpack.sql.expression.NamedExpression;\n import org.elasticsearch.xpack.sql.expression.Order;\n@@ -49,8 +50,10 @@\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.BinaryComparisonSimplification;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.BooleanLiteralsOnTheRight;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.BooleanSimplification;\n+import org.elasticsearch.xpack.sql.optimizer.Optimizer.CombineBinaryComparisons;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.CombineProjections;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.ConstantFolding;\n+import org.elasticsearch.xpack.sql.optimizer.Optimizer.PropagateEquals;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.PruneDuplicateFunctions;\n import 
org.elasticsearch.xpack.sql.optimizer.Optimizer.PruneSubqueryAliases;\n import org.elasticsearch.xpack.sql.optimizer.Optimizer.ReplaceFoldableAttributes;\n@@ -65,7 +68,7 @@\n import org.elasticsearch.xpack.sql.tree.Location;\n import org.elasticsearch.xpack.sql.tree.NodeInfo;\n import org.elasticsearch.xpack.sql.type.DataType;\n-import org.joda.time.DateTimeZone;\n+import org.elasticsearch.xpack.sql.type.EsField;\n \n import java.util.Arrays;\n import java.util.Collections;\n@@ -74,6 +77,7 @@\n \n import static java.util.Arrays.asList;\n import static java.util.Collections.emptyList;\n+import static java.util.Collections.emptyMap;\n import static java.util.Collections.singletonList;\n import static org.elasticsearch.xpack.sql.tree.Location.EMPTY;\n \n@@ -217,6 +221,10 @@ public void testReplaceFoldableAttributes() {\n assertEquals(10, lt.right().fold());\n }\n \n+ //\n+ // Constant folding\n+ //\n+\n public void testConstantFolding() {\n Expression exp = new Add(EMPTY, L(2), L(3));\n \n@@ -314,6 +322,10 @@ private static Object unwrapAlias(Expression e) {\n return l.value();\n }\n \n+ //\n+ // Logical simplifications\n+ //\n+\n public void testBinaryComparisonSimplification() {\n assertEquals(Literal.TRUE, new BinaryComparisonSimplification().rule(new Equals(EMPTY, L(5), L(5))));\n assertEquals(Literal.TRUE, new BinaryComparisonSimplification().rule(new GreaterThanOrEqual(EMPTY, L(5), L(5))));\n@@ -369,4 +381,512 @@ public void testBoolCommonFactorExtraction() {\n \n assertEquals(expected, simplification.rule(actual));\n }\n-}\n+\n+ //\n+ // Range optimization\n+ //\n+\n+ // 6 < a <= 5 -> FALSE\n+ public void testFoldExcludingRangeToFalse() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r = new Range(EMPTY, fa, L(6), false, L(5), true);\n+ assertTrue(r.foldable());\n+ assertEquals(Boolean.FALSE, r.fold());\n+ }\n+\n+ // 6 < a <= 5.5 -> FALSE\n+ public void testFoldExcludingRangeWithDifferentTypesToFalse() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r = new Range(EMPTY, fa, L(6), false, L(5.5d), true);\n+ assertTrue(r.foldable());\n+ assertEquals(Boolean.FALSE, r.fold());\n+ }\n+\n+ // Conjunction\n+\n+ public void testCombineBinaryComparisonsNotComparable() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ LessThanOrEqual lte = new LessThanOrEqual(EMPTY, fa, L(6));\n+ LessThan lt = new LessThan(EMPTY, fa, Literal.FALSE);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ And and = new And(EMPTY, lte, lt);\n+ Expression exp = rule.rule(and);\n+ assertEquals(exp, and);\n+ }\n+\n+ // a <= 6 AND a < 5 -> a < 5\n+ public void testCombineBinaryComparisonsUpper() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ LessThanOrEqual lte = new LessThanOrEqual(EMPTY, fa, L(6));\n+ LessThan lt = new LessThan(EMPTY, fa, L(5));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+\n+ Expression exp = rule.rule(new And(EMPTY, lte, lt));\n+ assertEquals(LessThan.class, exp.getClass());\n+ LessThan r = (LessThan) exp;\n+ assertEquals(L(5), r.right());\n+ }\n+\n+ // 6 <= a AND 5 < a -> 6 <= a\n+ public void testCombineBinaryComparisonsLower() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ 
GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(6));\n+ GreaterThan gt = new GreaterThan(EMPTY, fa, L(5));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+\n+ Expression exp = rule.rule(new And(EMPTY, gte, gt));\n+ assertEquals(GreaterThanOrEqual.class, exp.getClass());\n+ GreaterThanOrEqual r = (GreaterThanOrEqual) exp;\n+ assertEquals(L(6), r.right());\n+ }\n+\n+ // 5 <= a AND 5 < a -> 5 < a\n+ public void testCombineBinaryComparisonsInclude() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(5));\n+ GreaterThan gt = new GreaterThan(EMPTY, fa, L(5));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+\n+ Expression exp = rule.rule(new And(EMPTY, gte, gt));\n+ assertEquals(GreaterThan.class, exp.getClass());\n+ GreaterThan r = (GreaterThan) exp;\n+ assertEquals(L(5), r.right());\n+ }\n+\n+ // 3 <= a AND 4 < a AND a <= 7 AND a < 6 -> 4 < a < 6\n+ public void testCombineMultipleBinaryComparisons() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(3));\n+ GreaterThan gt = new GreaterThan(EMPTY, fa, L(4));\n+ LessThanOrEqual lte = new LessThanOrEqual(EMPTY, fa, L(7));\n+ LessThan lt = new LessThan(EMPTY, fa, L(6));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+\n+ Expression exp = rule.rule(new And(EMPTY, gte, new And(EMPTY, gt, new And(EMPTY, lt, lte))));\n+ assertEquals(Range.class, exp.getClass());\n+ Range r = (Range) exp;\n+ assertEquals(L(4), r.lower());\n+ assertFalse(r.includeLower());\n+ assertEquals(L(6), r.upper());\n+ assertFalse(r.includeUpper());\n+ }\n+\n+ // 3 <= a AND TRUE AND 4 < a AND a != 5 AND a <= 7 -> 4 < a <= 7 AND a != 5 AND TRUE\n+ public void testCombineMixedMultipleBinaryComparisons() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(3));\n+ GreaterThan gt = new GreaterThan(EMPTY, fa, L(4));\n+ LessThanOrEqual lte = new LessThanOrEqual(EMPTY, fa, L(7));\n+ Expression ne = new Not(EMPTY, new Equals(EMPTY, fa, L(5)));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+\n+ // TRUE AND a != 5 AND 4 < a <= 7\n+ Expression exp = rule.rule(new And(EMPTY, gte, new And(EMPTY, Literal.TRUE, new And(EMPTY, gt, new And(EMPTY, ne, lte)))));\n+ assertEquals(And.class, exp.getClass());\n+ And and = ((And) exp);\n+ assertEquals(Range.class, and.right().getClass());\n+ Range r = (Range) and.right();\n+ assertEquals(L(4), r.lower());\n+ assertFalse(r.includeLower());\n+ assertEquals(L(7), r.upper());\n+ assertTrue(r.includeUpper());\n+ }\n+\n+ // 1 <= a AND a < 5 -> 1 <= a < 5\n+ public void testCombineComparisonsIntoRange() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(1));\n+ LessThan lt = new LessThan(EMPTY, fa, L(5));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(new And(EMPTY, gte, lt));\n+ assertEquals(Range.class, rule.rule(exp).getClass());\n+\n+ Range r = (Range) exp;\n+ assertEquals(L(1), r.lower());\n+ assertTrue(r.includeLower());\n+ assertEquals(L(5), r.upper());\n+ assertFalse(r.includeUpper());\n+ 
}\n+\n+ // a != NULL AND a > 1 AND a < 5 AND a == 10 -> (a != NULL AND a == 10) AND 1 <= a < 5\n+ public void testCombineUnbalancedComparisonsMixedWithEqualsIntoRange() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ IsNotNull isn = new IsNotNull(EMPTY, fa);\n+ GreaterThanOrEqual gte = new GreaterThanOrEqual(EMPTY, fa, L(1));\n+\n+ Equals eq = new Equals(EMPTY, fa, L(10));\n+ LessThan lt = new LessThan(EMPTY, fa, L(5));\n+\n+ And and = new And(EMPTY, new And(EMPTY, isn, gte), new And(EMPTY, lt, eq));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(And.class, exp.getClass());\n+ And a = (And) exp;\n+ assertEquals(Range.class, a.right().getClass());\n+\n+ Range r = (Range) a.right();\n+ assertEquals(L(1), r.lower());\n+ assertTrue(r.includeLower());\n+ assertEquals(L(5), r.upper());\n+ assertFalse(r.includeUpper());\n+ }\n+\n+ // (2 < a < 3) AND (1 < a < 4) -> (2 < a < 3)\n+ public void testCombineBinaryComparisonsConjunctionOfIncludedRange() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(1), false, L(4), false);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(r1, exp);\n+ }\n+\n+ // (2 < a < 3) AND a < 2 -> 2 < a < 2\n+ public void testCombineBinaryComparisonsConjunctionOfNonOverlappingBoundaries() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(1), false, L(2), false);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(Range.class, exp.getClass());\n+ Range r = (Range) exp;\n+ assertEquals(L(2), r.lower());\n+ assertFalse(r.includeLower());\n+ assertEquals(L(2), r.upper());\n+ assertFalse(r.includeUpper());\n+ assertEquals(Boolean.FALSE, r.fold());\n+ }\n+\n+ // (2 < a < 3) AND (2 < a <= 3) -> 2 < a < 3\n+ public void testCombineBinaryComparisonsConjunctionOfUpperEqualsOverlappingBoundaries() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), true);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(r1, exp);\n+ }\n+\n+ // (2 < a < 3) AND (1 < a < 3) -> 2 < a < 3\n+ public void testCombineBinaryComparisonsConjunctionOverlappingUpperBoundary() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r1 = new Range(EMPTY, fa, L(1), false, L(3), false);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(r2, exp);\n+ }\n+\n+ // (2 < a <= 3) AND (1 < a < 3) -> 2 < a < 3\n+ public void testCombineBinaryComparisonsConjunctionWithDifferentUpperLimitInclusion() {\n+ 
FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(1), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), true);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(Range.class, exp.getClass());\n+ Range r = (Range) exp;\n+ assertEquals(L(2), r.lower());\n+ assertFalse(r.includeLower());\n+ assertEquals(L(3), r.upper());\n+ assertFalse(r.includeUpper());\n+ }\n+\n+ // (0 < a <= 1) AND (0 <= a < 2) -> 0 < a <= 1\n+ public void testRangesOverlappingConjunctionNoLowerBoundary() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(0), false, L(1), true);\n+ Range r2 = new Range(EMPTY, fa, L(0), true, L(2), false);\n+\n+ And and = new And(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(and);\n+ assertEquals(r1, exp);\n+ }\n+\n+ // Disjunction\n+\n+ public void testCombineBinaryComparisonsDisjunctionNotComparable() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ GreaterThan gt1 = new GreaterThan(EMPTY, fa, L(1));\n+ GreaterThan gt2 = new GreaterThan(EMPTY, fa, Literal.FALSE);\n+\n+ Or or = new Or(EMPTY, gt1, gt2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(exp, or);\n+ }\n+\n+\n+ // 2 < a OR 1 < a OR 3 < a -> 1 < a\n+ public void testCombineBinaryComparisonsDisjunctionLowerBound() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ GreaterThan gt1 = new GreaterThan(EMPTY, fa, L(1));\n+ GreaterThan gt2 = new GreaterThan(EMPTY, fa, L(2));\n+ GreaterThan gt3 = new GreaterThan(EMPTY, fa, L(3));\n+\n+ Or or = new Or(EMPTY, gt1, new Or(EMPTY, gt2, gt3));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(GreaterThan.class, exp.getClass());\n+\n+ GreaterThan gt = (GreaterThan) exp;\n+ assertEquals(L(1), gt.right());\n+ }\n+\n+ // 2 < a OR 1 < a OR 3 <= a -> 1 < a\n+ public void testCombineBinaryComparisonsDisjunctionIncludeLowerBounds() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ GreaterThan gt1 = new GreaterThan(EMPTY, fa, L(1));\n+ GreaterThan gt2 = new GreaterThan(EMPTY, fa, L(2));\n+ GreaterThanOrEqual gte3 = new GreaterThanOrEqual(EMPTY, fa, L(3));\n+\n+ Or or = new Or(EMPTY, new Or(EMPTY, gt1, gt2), gte3);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(GreaterThan.class, exp.getClass());\n+\n+ GreaterThan gt = (GreaterThan) exp;\n+ assertEquals(L(1), gt.right());\n+ }\n+\n+ // a < 1 OR a < 2 OR a < 3 -> a < 3\n+ public void testCombineBinaryComparisonsDisjunctionUpperBound() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ LessThan lt1 = new LessThan(EMPTY, fa, L(1));\n+ LessThan lt2 = new LessThan(EMPTY, fa, L(2));\n+ LessThan lt3 = new LessThan(EMPTY, fa, L(3));\n+\n+ Or or = new Or(EMPTY, new Or(EMPTY, lt1, lt2), lt3);\n+\n+ CombineBinaryComparisons rule = new 
CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(LessThan.class, exp.getClass());\n+\n+ LessThan lt = (LessThan) exp;\n+ assertEquals(L(3), lt.right());\n+ }\n+\n+ // a < 2 OR a <= 2 OR a < 1 -> a <= 2\n+ public void testCombineBinaryComparisonsDisjunctionIncludeUpperBounds() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ LessThan lt1 = new LessThan(EMPTY, fa, L(1));\n+ LessThan lt2 = new LessThan(EMPTY, fa, L(2));\n+ LessThanOrEqual lte2 = new LessThanOrEqual(EMPTY, fa, L(2));\n+\n+ Or or = new Or(EMPTY, lt2, new Or(EMPTY, lte2, lt1));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(LessThanOrEqual.class, exp.getClass());\n+\n+ LessThanOrEqual lte = (LessThanOrEqual) exp;\n+ assertEquals(L(2), lte.right());\n+ }\n+\n+ // a < 2 OR 3 < a OR a < 1 OR 4 < a -> a < 2 OR 3 < a\n+ public void testCombineBinaryComparisonsDisjunctionOfLowerAndUpperBounds() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ LessThan lt1 = new LessThan(EMPTY, fa, L(1));\n+ LessThan lt2 = new LessThan(EMPTY, fa, L(2));\n+\n+ GreaterThan gt3 = new GreaterThan(EMPTY, fa, L(3));\n+ GreaterThan gt4 = new GreaterThan(EMPTY, fa, L(4));\n+\n+ Or or = new Or(EMPTY, new Or(EMPTY, lt2, gt3), new Or(EMPTY, lt1, gt4));\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(Or.class, exp.getClass());\n+\n+ Or ro = (Or) exp;\n+\n+ assertEquals(LessThan.class, ro.left().getClass());\n+ LessThan lt = (LessThan) ro.left();\n+ assertEquals(L(2), lt.right());\n+ assertEquals(GreaterThan.class, ro.right().getClass());\n+ GreaterThan gt = (GreaterThan) ro.right();\n+ assertEquals(L(3), gt.right());\n+ }\n+\n+ // (2 < a < 3) OR (1 < a < 4) -> (1 < a < 4)\n+ public void testCombineBinaryComparisonsDisjunctionOfIncludedRangeNotComparable() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(1), false, Literal.FALSE, false);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(or, exp);\n+ }\n+\n+\n+ // (2 < a < 3) OR (1 < a < 4) -> (1 < a < 4)\n+ public void testCombineBinaryComparisonsDisjunctionOfIncludedRange() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(1), false, L(4), false);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(Range.class, exp.getClass());\n+\n+ Range r = (Range) exp;\n+ assertEquals(L(1), r.lower());\n+ assertFalse(r.includeLower());\n+ assertEquals(L(4), r.upper());\n+ assertFalse(r.includeUpper());\n+ }\n+\n+ // (2 < a < 3) OR (1 < a < 2) -> same\n+ public void testCombineBinaryComparisonsDisjunctionOfNonOverlappingBoundaries() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(1), false, L(2), 
false);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(or, exp);\n+ }\n+\n+ // (2 < a < 3) OR (2 < a <= 3) -> 2 < a <= 3\n+ public void testCombineBinaryComparisonsDisjunctionOfUpperEqualsOverlappingBoundaries() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), true);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(r2, exp);\n+ }\n+\n+ // (2 < a < 3) OR (1 < a < 3) -> 1 < a < 3\n+ public void testCombineBinaryComparisonsOverlappingUpperBoundary() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), false);\n+ Range r1 = new Range(EMPTY, fa, L(1), false, L(3), false);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(r1, exp);\n+ }\n+\n+ // (2 < a <= 3) OR (1 < a < 3) -> same (the <= prevents the ranges from being combined)\n+ public void testCombineBinaryComparisonsWithDifferentUpperLimitInclusion() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r1 = new Range(EMPTY, fa, L(1), false, L(3), false);\n+ Range r2 = new Range(EMPTY, fa, L(2), false, L(3), true);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(or, exp);\n+ }\n+\n+ // (0 < a <= 1) OR (0 < a < 2) -> 0 < a < 2\n+ public void testRangesOverlappingNoLowerBoundary() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+\n+ Range r2 = new Range(EMPTY, fa, L(0), false, L(2), false);\n+ Range r1 = new Range(EMPTY, fa, L(0), false, L(1), true);\n+\n+ Or or = new Or(EMPTY, r1, r2);\n+\n+ CombineBinaryComparisons rule = new CombineBinaryComparisons();\n+ Expression exp = rule.rule(or);\n+ assertEquals(r2, exp);\n+ }\n+\n+ // Equals\n+\n+ // a == 1 AND a == 2 -> FALSE\n+ public void testDualEqualsConjunction() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ Equals eq1 = new Equals(EMPTY, fa, L(1));\n+ Equals eq2 = new Equals(EMPTY, fa, L(2));\n+\n+ PropagateEquals rule = new PropagateEquals();\n+ Expression exp = rule.rule(new And(EMPTY, eq1, eq2));\n+ assertEquals(Literal.FALSE, rule.rule(exp));\n+ }\n+\n+ // 1 <= a < 10 AND a == 1 -> a == 1\n+ public void testEliminateRangeByEqualsInInterval() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ Equals eq1 = new Equals(EMPTY, fa, L(1));\n+ Range r = new Range(EMPTY, fa, L(1), true, L(10), false);\n+\n+ PropagateEquals rule = new PropagateEquals();\n+ Expression exp = rule.rule(new And(EMPTY, eq1, r));\n+ assertEquals(eq1, rule.rule(exp));\n+ }\n+\n+ // 1 < a < 10 AND a == 10 -> FALSE\n+ public void testEliminateRangeByEqualsOutsideInterval() {\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.INTEGER, emptyMap(), true));\n+ Equals eq1 = new 
Equals(EMPTY, fa, L(10));\n+ Range r = new Range(EMPTY, fa, L(1), false, L(10), false);\n+\n+ PropagateEquals rule = new PropagateEquals();\n+ Expression exp = rule.rule(new And(EMPTY, eq1, r));\n+ assertEquals(Literal.FALSE, rule.rule(exp));\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/optimizer/OptimizerTests.java", "status": "modified" }, { "diff": "@@ -8,16 +8,21 @@\n import org.elasticsearch.index.query.MatchQueryBuilder;\n import org.elasticsearch.index.query.Operator;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.xpack.sql.expression.FieldAttribute;\n import org.elasticsearch.xpack.sql.expression.predicate.fulltext.MatchQueryPredicate;\n import org.elasticsearch.xpack.sql.tree.Location;\n import org.elasticsearch.xpack.sql.tree.LocationTests;\n+import org.elasticsearch.xpack.sql.type.DataType;\n+import org.elasticsearch.xpack.sql.type.EsField;\n \n import java.util.Arrays;\n import java.util.List;\n import java.util.function.Function;\n \n-import static org.hamcrest.Matchers.equalTo;\n+import static java.util.Collections.emptyMap;\n import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode;\n+import static org.elasticsearch.xpack.sql.tree.Location.EMPTY;\n+import static org.hamcrest.Matchers.equalTo;\n \n public class MatchQueryTests extends ESTestCase {\n static MatchQuery randomMatchQuery() {\n@@ -62,14 +67,16 @@ public void testQueryBuilding() {\n \n private static MatchQueryBuilder getBuilder(String options) {\n final Location location = new Location(1, 1);\n- final MatchQueryPredicate mmqp = new MatchQueryPredicate(location, null, \"eggplant\", options);\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.KEYWORD, emptyMap(), true));\n+ final MatchQueryPredicate mmqp = new MatchQueryPredicate(location, fa, \"eggplant\", options);\n final MatchQuery mmq = new MatchQuery(location, \"eggplant\", \"foo\", mmqp);\n return (MatchQueryBuilder) mmq.asBuilder();\n }\n \n public void testToString() {\n final Location location = new Location(1, 1);\n- final MatchQueryPredicate mmqp = new MatchQueryPredicate(location, null, \"eggplant\", \"\");\n+ FieldAttribute fa = new FieldAttribute(EMPTY, \"a\", new EsField(\"af\", DataType.KEYWORD, emptyMap(), true));\n+ final MatchQueryPredicate mmqp = new MatchQueryPredicate(location, fa, \"eggplant\", \"\");\n final MatchQuery mmq = new MatchQuery(location, \"eggplant\", \"foo\", mmqp);\n assertEquals(\"MatchQuery@1:2[eggplant:foo]\", mmq.toString());\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/querydsl/query/MatchQueryTests.java", "status": "modified" } ] }
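The optimizer tests in the record above exercise a rule that folds conjunctions and disjunctions of comparisons on the same field into a single range (for example `3 <= a AND 4 < a AND a <= 7 AND a < 6` becomes `4 < a < 6`). The following stand-alone Java sketch reproduces just the conjunctive part of that idea under stated assumptions: the `Interval` class and its methods are invented for illustration and are not part of the SQL plugin's `Optimizer`.

```java
// Minimal sketch of conjunctive range merging; not the Elasticsearch SQL Optimizer API.
final class Interval {
    final double lower, upper;
    final boolean includeLower, includeUpper;

    Interval(double lower, boolean includeLower, double upper, boolean includeUpper) {
        this.lower = lower;
        this.includeLower = includeLower;
        this.upper = upper;
        this.includeUpper = includeUpper;
    }

    /** AND of two ranges on the same field: keep the tighter bound on each side. */
    Interval intersect(Interval other) {
        double lo = Math.max(lower, other.lower);
        // on a tie, the exclusive bound wins (2 <= a AND 2 < a -> 2 < a)
        boolean loInc = (lower == other.lower) ? (includeLower && other.includeLower)
                : (lo == lower ? includeLower : other.includeLower);
        double hi = Math.min(upper, other.upper);
        boolean hiInc = (upper == other.upper) ? (includeUpper && other.includeUpper)
                : (hi == upper ? includeUpper : other.includeUpper);
        return new Interval(lo, loInc, hi, hiInc);
    }

    /** An empty range (e.g. 6 < a <= 5) folds to FALSE. */
    boolean isEmpty() {
        return lower > upper || (lower == upper && !(includeLower && includeUpper));
    }

    @Override
    public String toString() {
        return lower + (includeLower ? " <= a " : " < a ") + (includeUpper ? "<= " : "< ") + upper;
    }

    public static void main(String[] args) {
        // 3 <= a AND 4 < a AND a <= 7 AND a < 6  ->  4 < a < 6
        Interval r = new Interval(3, true, 7, true)
                .intersect(new Interval(4, false, Double.POSITIVE_INFINITY, false))
                .intersect(new Interval(Double.NEGATIVE_INFINITY, false, 6, false));
        System.out.println(r + "  empty=" + r.isEmpty());
    }
}
```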
{ "body": "*Original comment by @bpintea:*\n\nProviding no statement, but ending the line with semicolon will result in an exception.\r\nEx.: ; + CR\r\n\r\n```\r\nsql> ;\r\nServer error [Server sent bad type [action_request_validation_exception]. Original type was [Validation Failed: 1: one of [query] or [cursor] is required;]. [org.elasticsearch.action.ActionRequestValidationException: Validation Failed: 1: one of [query] or [cursor] is required;\r\n\tat org.elasticsearch.action.ValidateActions.addValidationError(ValidateActions.java:26)\r\n\tat org.elasticsearch.xpack.sql.plugin.SqlQueryRequest.validate(SqlQueryRequest.java:70)\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:128)\r\n\tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81)\r\n\tat org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83)\r\n\tat org.elasticsearch.xpack.sql.plugin.RestSqlQueryAction.lambda$prepareRequest$0(RestSqlQueryAction.java:60)\r\n\tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:97)\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:240)\r\n\tat org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:336)\r\n\tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174)\r\n\tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:467)\r\n\tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137)\r\n\tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499)\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)\r\n\tat java.base/java.lang.Thread.run(Thread.java:844)\r\n]]\r\nsql>\r\n```", "comments": [], "number": 30000, "title": "SQL: CLI: empty statement generates an exception" }
{ "body": "Cause the CLI to ignore commands that are empty or consist only of\r\nnewlines. This is a fairly standard thing for SQL CLIs to do.\r\n\r\nIt looks like:\r\n```\r\nsql> ;\r\nsql>\r\n |\r\n | ;\r\nsql> exit;\r\nBye!\r\n```\r\n\r\nI think I *could* have implemented this with a `CliCommand` that throws\r\nout empty string but it felt simpler to bake it in to the `CliRepl`.\r\n\r\nCloses #30000\r\n\r\n", "number": 30265, "review_comments": [], "title": "SQL: Teach the CLI to ignore empty commands" }
{ "commits": [ { "message": "SQL: Teach the CLI to ignore empty commands\n\nCause the CLI to ignore commands that are empty or consist only of\nnewlines. This is a fairly standard thing for SQL CLIs to do.\n\nIt looks like:\n```\nsql> ;\nsql>\n |\n | ;\nsql> exit;\nBye!\n```\n\nI think I *could* have implemented this with a `CliCommand` that throws\nout empty string but it felt simpler to bake it in to the `CliRepl`.\n\nCloses #30000" }, { "message": "Merge branch 'master' into sql_cli_ignore_empty" } ], "files": [ { "diff": "@@ -56,6 +56,11 @@ public void execute() {\n multiLine.setLength(0);\n }\n \n+ // Skip empty commands\n+ if (line.isEmpty()) {\n+ continue;\n+ }\n+\n // special case to handle exit\n if (isExit(line)) {\n cliTerminal.line().em(\"Bye!\").ln();", "filename": "x-pack/plugin/sql/sql-cli/src/main/java/org/elasticsearch/xpack/sql/cli/CliRepl.java", "status": "modified" }, { "diff": "@@ -38,6 +38,28 @@ public void testBasicCliFunctionality() throws Exception {\n verifyNoMoreInteractions(mockCommand, mockSession);\n }\n \n+ /**\n+ * Test that empty commands are skipped. This includes commands that are\n+ * just new lines.\n+ */\n+ public void testEmptyNotSent() {\n+ CliTerminal cliTerminal = new TestTerminal(\n+ \";\",\n+ \"\",\n+ \"\",\n+ \";\",\n+ \"exit;\"\n+ );\n+\n+ CliSession mockSession = mock(CliSession.class);\n+ CliCommand mockCommand = mock(CliCommand.class);\n+\n+ CliRepl cli = new CliRepl(cliTerminal, mockSession, mockCommand);\n+ cli.execute();\n+\n+ verify(mockCommand, times(1)).handle(cliTerminal, mockSession, \"logo\");\n+ verifyNoMoreInteractions(mockSession, mockCommand);\n+ }\n \n public void testFatalCliExceptionHandling() throws Exception {\n CliTerminal cliTerminal = new TestTerminal(", "filename": "x-pack/plugin/sql/sql-cli/src/test/java/org/elasticsearch/xpack/sql/cli/CliReplTests.java", "status": "modified" } ] }
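The change in the record above boils down to a guard in the read-eval-print loop that drops a command once it turns out to be empty after buffering. A minimal, stand-alone sketch of that loop shape follows; `MiniRepl` is an invented class and does not mirror the real `CliRepl`/`CliCommand` interfaces.

```java
import java.util.Scanner;

// Stand-alone sketch of a REPL loop that skips blank commands before dispatching them;
// not the Elasticsearch CLI implementation.
public class MiniRepl {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        StringBuilder multiLine = new StringBuilder();
        while (in.hasNextLine()) {
            multiLine.append(in.nextLine().trim()).append(' ');
            String buffered = multiLine.toString().trim();
            if (!buffered.endsWith(";")) {        // keep buffering until the statement is terminated
                continue;
            }
            multiLine.setLength(0);
            String command = buffered.substring(0, buffered.length() - 1).trim();
            if (command.isEmpty()) {              // a bare ";" or blank input is silently ignored
                continue;
            }
            if (command.equalsIgnoreCase("exit")) {
                System.out.println("Bye!");
                return;
            }
            System.out.println("would execute: " + command);
        }
    }
}
```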
{ "body": "Tested on 6.2.3:\r\n\r\nThis is a Range query on a date field:\r\n\r\n```\r\nDELETE test\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"mydate\": {\r\n \"type\": \"date\", \r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\nPUT test/_doc/1\r\n{\r\n \"mydate\" : \"2015-10-31 12:00:00\"\r\n}\r\nGET test/_search\r\n{\r\n \"query\": {\r\n \"range\": {\r\n \"mydate\": {\r\n \"gte\": \"2015-10-31 12:00:00\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis works well.\r\n\r\nThis is a Range query on a date_range field:\r\n\r\n```\r\nDELETE range_index\r\nPUT range_index\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"time_frame\": {\r\n \"type\": \"date_range\", \r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\nPUT range_index/_doc/1\r\n{\r\n \"time_frame\" : { \r\n \"gte\" : \"2015-10-31 12:00:00\", \r\n \"lte\" : \"2015-11-01\"\r\n }\r\n}\r\nGET range_index/_search\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"time_frame\" : {\r\n \"gte\" : \"2015-10-31 12:00:00\",\r\n \"lte\" : \"2015-11-01\",\r\n \"relation\" : \"within\" \r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis is failing with:\r\n\r\n```\r\n {\r\n \"type\": \"parse_exception\",\r\n \"reason\": \"failed to parse date field [2015-10-31 12:00:00] with format [strict_date_optional_time||epoch_millis]\"\r\n }\r\n```\r\n\r\nWe can see that the format defined in the mapping is not used here. It should use `yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis` but it uses `strict_date_optional_time||epoch_millis`.\r\n\r\nIf we manually force the `format` to be the same as the `mapping`, then it works:\r\n\r\n```\r\nGET range_index/_search\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"time_frame\" : {\r\n \"gte\" : \"2015-10-31 12:00:00\",\r\n \"lte\" : \"2015-11-01\",\r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||yyyy\", \r\n \"relation\" : \"within\" \r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI suggest that we try first to use the mapping defined for the field and fallback to the default one if needed.\r\n\r\ncc @melvynator ", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-29T02:46:33Z" } ], "number": 29282, "title": "range filter on date_range datatype should use the format defined in the mapping" }
{ "body": "- previously the format given in the mapping was ignored while parsing\r\n date in responses, fixed.\r\n- #29282\r\n- added more tests for some combinations of range queries in RangeFieldTypeTests.\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\nYES\r\n\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\nYES\r\n\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\nYES - passed\r\n\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\nrebased against master 2018/4/30\r\n\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\nOSX (n/a)\r\n\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\nN/A", "number": 30252, "review_comments": [], "title": "RangeFieldType.rangeQuery now defaults to mappings format(parser)" }
{ "commits": [ { "message": "RangeFieldType.rangeQuery now defaults to mappings format(parser)\n\n- previously the format given in the mapping was ignored while parsing\n date in responses, fixed.\n- #29282\n- added more tests for some combinations of range queries in RangeFieldTypeTests." } ], "files": [ { "diff": "@@ -287,8 +287,16 @@ public Query termQuery(Object value, QueryShardContext context) {\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, QueryShardContext context) {\n failIfNotIndexed();\n- return rangeType.rangeQuery(name(), hasDocValues(), lowerTerm, upperTerm, includeLower, includeUpper, relation,\n- timeZone, parser, context);\n+ return rangeType.rangeQuery(name(),\n+ hasDocValues(),\n+ lowerTerm,\n+ upperTerm,\n+ includeLower,\n+ includeUpper,\n+ relation,\n+ timeZone,\n+ parser != null ? parser : this.dateMathParser, // try and use the parser with the mapping format.\n+ context);\n }\n }\n \n@@ -586,6 +594,8 @@ public Query rangeQuery(String field, boolean hasDocValues, Object lowerTerm, Ob\n boolean includeUpper, ShapeRelation relation, @Nullable DateTimeZone timeZone,\n @Nullable DateMathParser parser, QueryShardContext context) {\n DateTimeZone zone = (timeZone == null) ? DateTimeZone.UTC : timeZone;\n+\n+ // if no parser (no format in mapping or query) use default.\n DateMathParser dateMathParser = (parser == null) ?\n new DateMathParser(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER) : parser;\n Long low = lowerTerm == null ? Long.MIN_VALUE :", "filename": "server/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java", "status": "modified" }, { "diff": "@@ -31,9 +31,12 @@\n import org.apache.lucene.search.IndexOrDocValuesQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.ShapeRelation;\n+import org.elasticsearch.common.joda.DateMathParser;\n+import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n@@ -42,6 +45,9 @@\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.test.IndexSettingsModule;\n import org.joda.time.DateTime;\n+import org.joda.time.DateTimeZone;\n+import org.joda.time.format.DateTimeFormatter;\n+import org.junit.Assert;\n import org.junit.Before;\n \n import java.net.InetAddress;\n@@ -52,6 +58,7 @@ public class RangeFieldTypeTests extends FieldTypeTestCase {\n protected static String FIELDNAME = \"field\";\n protected static int DISTANCE = 10;\n private static long nowInMillis;\n+ private static final DateMathParser WITHOUT_DATE_MATH_PARSER = null;\n \n @Before\n public void setupProperties() {\n@@ -79,14 +86,7 @@ protected RangeFieldMapper.RangeFieldType createDefaultFieldType() {\n }\n \n public void testRangeQuery() throws Exception {\n- Settings indexSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n- IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n- QueryShardContext context = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n- 
writableRegistry(), null, null, () -> nowInMillis, null);\n- RangeFieldMapper.RangeFieldType ft = new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n- ft.setName(FIELDNAME);\n- ft.setIndexOptions(IndexOptions.DOCS);\n+ RangeFieldMapper.RangeFieldType ft = this.rangeFieldType(type);\n \n ShapeRelation relation = RandomPicks.randomFrom(random(), ShapeRelation.values());\n boolean includeLower = random().nextBoolean();\n@@ -95,7 +95,171 @@ public void testRangeQuery() throws Exception {\n Object to = nextTo(from);\n \n assertEquals(getExpectedRangeQuery(relation, from, to, includeLower, includeUpper),\n- ft.rangeQuery(from, to, includeLower, includeUpper, relation, null, null, context));\n+ ft.rangeQuery(from, to, includeLower, includeUpper, relation, null, null, this.context()));\n+ }\n+\n+ public void testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat_includeLowerIncludeUpperTimeZone() {\n+ testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat0(true, true, DateTimeZone.UTC);\n+ }\n+\n+ public void testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat() {\n+ testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat0(false, false, null);\n+ }\n+\n+ private void testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat0(final boolean includeLower,\n+ final boolean includeUpper,\n+ final DateTimeZone timeZone) {\n+ final String pattern = \"yyyy-MM-dd\";\n+ final String lower = \"2015-10-31\";\n+ final String higher = \"2016-11-01\";\n+\n+ final Query query = this.rangeQuery(RangeType.DATE,\n+ pattern,\n+ lower,\n+ higher,\n+ includeLower,\n+ includeUpper,\n+ ShapeRelation.WITHIN,\n+ timeZone,\n+ WITHOUT_DATE_MATH_PARSER);\n+\n+ final DateTimeFormatter parser = this.parser(pattern, timeZone);\n+\n+ Assert.assertEquals(this.getDateRangeQuery(\n+ ShapeRelation.WITHIN,\n+ parser.parseDateTime(lower),\n+ parser.parseDateTime(higher),\n+ includeLower,\n+ includeUpper),\n+ query);\n+ }\n+\n+ // fails because lowerTerm/upperTerm cannt be parsed with dd-MM-yyyy\n+ public void testRangeQueryDate_WithoutFormat_usesRangeFieldTypeFormat_incompatFails() {\n+ this.rangeQueryFails(RangeType.DATE,\n+ \"dd-yyyy-MM\", //deliberate must cause failure\n+ \"2015-10-31\",\n+ \"2016-11-01\",\n+ true,\n+ true,\n+ ShapeRelation.WITHIN,\n+ DateTimeZone.UTC,\n+ WITHOUT_DATE_MATH_PARSER,\n+ ElasticsearchParseException.class);\n+ }\n+\n+ public void testRangeQueryDate_WithFormat() {\n+ testRangeQueryDate_WithFormat0(false, false, null);\n+ }\n+\n+ public void testRangeQueryDate_WithFormat_includeLowerIncludeUpperTimezone() {\n+ testRangeQueryDate_WithFormat0(true, true, DateTimeZone.UTC);\n+ }\n+\n+ private void testRangeQueryDate_WithFormat0(final boolean includeLower,\n+ final boolean includeUpper,\n+ final DateTimeZone timeZone) {\n+ final String pattern = \"yyyy-MM-dd\";\n+ final String lower = \"2015-10-31\";\n+ final String higher = \"2016-11-01\";\n+\n+ final Query query = this.rangeQuery(RangeType.DATE,\n+ pattern,\n+ lower,\n+ higher,\n+ includeLower,\n+ includeUpper,\n+ ShapeRelation.WITHIN,\n+ timeZone,\n+ dateMathParser(pattern));\n+\n+ final DateTimeFormatter parser = this.parser(pattern, timeZone);\n+\n+ Assert.assertEquals(this.getDateRangeQuery(\n+ ShapeRelation.WITHIN,\n+ parser.parseDateTime(lower),\n+ parser.parseDateTime(higher),\n+ includeLower,\n+ includeUpper),\n+ query);\n+ }\n+\n+ private void rangeQueryFails(final RangeType rangeType,\n+ final String dateTimeFormatterFormat,\n+ final String lowerTerm,\n+ final String upperTerm,\n+ final boolean includeLower,\n+ final 
boolean includeUpper,\n+ final ShapeRelation relation,\n+ final DateTimeZone dateTimeZone,\n+ final DateMathParser dateMathParser,\n+ final Class<? extends Throwable> thrown)\n+ {\n+ expectThrows(thrown,\n+ () -> rangeQuery(rangeType,\n+ dateTimeFormatterFormat,\n+ lowerTerm,\n+ upperTerm,\n+ includeLower,\n+ includeUpper,\n+ relation,\n+ dateTimeZone,\n+ dateMathParser));\n+ }\n+\n+ private Query rangeQuery(final RangeType rangeType,\n+ final String dateTimeFormatterFormat,\n+ final String lowerTerm,\n+ final String upperTerm,\n+ final boolean includeLower,\n+ final boolean includeUpper,\n+ final ShapeRelation relation,\n+ final DateTimeZone dateTimeZone,\n+ final DateMathParser dateMathParser)\n+ {\n+ final RangeFieldMapper.RangeFieldType type = rangeFieldType(rangeType, dateTimeFormatterFormat);\n+\n+ return type.rangeQuery(\n+ lowerTerm,\n+ upperTerm,\n+ includeLower,\n+ includeUpper,\n+ relation,\n+ dateTimeZone,\n+ dateMathParser,\n+ this.context());\n+ }\n+\n+ private RangeFieldMapper.RangeFieldType rangeFieldType(final RangeType rangeType) {\n+ return this.rangeFieldType(rangeType, null);\n+ }\n+\n+ private RangeFieldMapper.RangeFieldType rangeFieldType(final RangeType rangeType,\n+ final String dateTimeFormatterFormat) {\n+ final RangeFieldMapper.RangeFieldType rangeFieldType = new RangeFieldMapper.RangeFieldType(rangeType, Version.CURRENT);\n+ rangeFieldType.setName(FIELDNAME);\n+ rangeFieldType.setIndexOptions(IndexOptions.DOCS);\n+\n+ if(null!=dateTimeFormatterFormat){\n+ rangeFieldType.setDateTimeFormatter(Joda.forPattern(dateTimeFormatterFormat));\n+ }\n+ return rangeFieldType;\n+ }\n+\n+ private DateMathParser dateMathParser(final String pattern) {\n+ return new DateMathParser(formatter(pattern));\n+ }\n+\n+ private FormatDateTimeFormatter formatter(final String pattern) {\n+ return Joda.forPattern(pattern, Locale.ROOT);\n+ }\n+\n+ private DateTimeFormatter parser(final String pattern, final DateTimeZone timeZone) {\n+ DateTimeFormatter parser = formatter(pattern).parser();\n+ if(null !=timeZone){\n+ parser = parser.withZone(timeZone);\n+ }\n+ return parser;\n }\n \n private Query getExpectedRangeQuery(ShapeRelation relation, Object from, Object to, boolean includeLower, boolean includeUpper) {\n@@ -279,20 +443,33 @@ public void testParseIp() {\n \n public void testTermQuery() throws Exception, IllegalArgumentException {\n // See https://github.com/elastic/elasticsearch/issues/25950\n- Settings indexSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n- IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n- QueryShardContext context = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n- writableRegistry(), null, null, () -> nowInMillis, null);\n- RangeFieldMapper.RangeFieldType ft = new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n- ft.setName(FIELDNAME);\n- ft.setIndexOptions(IndexOptions.DOCS);\n+ RangeFieldMapper.RangeFieldType ft = this.rangeFieldType(type);\n \n Object value = nextFrom();\n ShapeRelation relation = ShapeRelation.INTERSECTS;\n boolean includeLower = true;\n boolean includeUpper = true;\n assertEquals(getExpectedRangeQuery(relation, value, value, includeLower, includeUpper),\n- ft.termQuery(value, context));\n+ ft.termQuery(value, this.context()));\n+ }\n+\n+ private QueryShardContext context() {\n+ Settings indexSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, 
Version.CURRENT)\n+ .build();\n+ IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n+ return new QueryShardContext(0,\n+ idxSettings,\n+ null,\n+ null,\n+ null,\n+ null,\n+ null,\n+ xContentRegistry(),\n+ writableRegistry(),\n+ null,\n+ null,\n+ () -> nowInMillis,\n+ null);\n }\n }", "filename": "server/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java", "status": "modified" } ] }
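The fix in this record is a parser-selection order: a range query on a `date_range` field should use the query's explicit `format` if present, otherwise the format declared in the mapping, and only then the built-in default. Here is a stand-alone sketch of that precedence using `java.time`; the class and method names are illustrative assumptions and not the `RangeFieldMapper` API.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Stand-alone sketch of the parser fallback order: query format, then mapping format, then default.
public class DateParserFallback {
    // stand-in for the built-in strict_date_optional_time||epoch_millis default
    static final DateTimeFormatter DEFAULT = DateTimeFormatter.ISO_LOCAL_DATE;

    static DateTimeFormatter pick(DateTimeFormatter queryFormat, DateTimeFormatter mappingFormat) {
        if (queryFormat != null) {
            return queryFormat;       // explicit "format" in the range query always wins
        }
        if (mappingFormat != null) {
            return mappingFormat;     // otherwise honour the format declared in the mapping
        }
        return DEFAULT;               // last resort: the built-in default
    }

    public static void main(String[] args) {
        DateTimeFormatter mapping = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        DateTimeFormatter parser = pick(null, mapping);
        LocalDateTime lower = LocalDateTime.parse("2015-10-31 12:00:00", parser);
        System.out.println(lower); // parsed with the mapping format, not the default
    }
}
```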
{ "body": "elasticsearch 6.1.3\r\n\r\nIn GlobalOrdinalsStringTermsAggregator\r\nWhen there are levels of aggregation, parent agg and valueCount both more than 100 thousands bucket\r\n\r\nthe loop may be explode\r\n`for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) `\r\n\r\nMy temporary resolution is, loop by bucketOrds\r\n```\r\n private static Field keysField;\r\n static {\r\n try {\r\n keysField = LongHash.class.getDeclaredField(\"keys\");\r\n keysField.setAccessible(true);\r\n } catch (Exception e) {\r\n LOGGER.error(e.getMessage(), e);\r\n }\r\n \r\n if (bucketOrds != null && bucketOrds.size() < valueCount && bucketCountThresholds.getMinDocCount() > 0 ) {\r\n try {\r\n loop = false;\r\n LongArray keys = ((LongArray) keysField.get(bucketOrds));\r\n for (long i = 0; i < keys.size(); i++) {\r\n //i: bucketOrd\r\n int bucketDocCount = bucketDocCount(i);\r\n if ( bucketDocCount == 0) {\r\n continue;\r\n } \r\n long globalTermOrd = keys.get(i);\r\n if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) {\r\n continue;\r\n }\r\n otherDocCount += bucketDocCount;\r\n spare.globalOrd = globalTermOrd;\r\n spare.bucketOrd = i;\r\n spare.docCount = bucketDocCount;\r\n if (bucketCountThresholds.getShardMinDocCount() <= spare.docCount) {\r\n spare = ordered.insertWithOverflow(spare);\r\n if (spare == null) {\r\n spare = new OrdBucket(-1, 0, null, showTermDocCountError, 0);\r\n }\r\n }\r\n\r\n }\r\n\r\n } catch (IllegalAccessException e) {\r\n LOGGER.error(e.getMessage(), e);\r\n loop = true;\r\n }\r\n }\r\n```", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-25T09:34:27Z" }, { "body": "I agree this is a performance bug and defeats the purpose of using a hash table to collect matching ordinals.", "created_at": "2018-04-25T13:06:16Z" }, { "body": "Great catch @pakein ! I modified your change a bit and opened #30166. ", "created_at": "2018-04-26T07:56:23Z" } ], "number": 30117, "title": "string terms is very slow when there are millions of buckets" }
{ "body": "The global ordinals terms aggregator has an option to remap global ordinals to\r\ndense ordinal that match the request. This mode is automatically picked when the terms\r\naggregator is a child of another bucket aggregator or when it needs to defer buckets to an\r\naggregation that is used in the ordering of the terms.\r\nThough when building the final buckets, this aggregator loops over all possible global ordinals\r\nrather than using the hash map that was built to remap the ordinals.\r\nFor fields with high cardinality this is highly inefficient and can lead to slow responses even\r\nwhen the number of terms that match the query is low.\r\nThis change fixes this performance issue by using the hash table of matching ordinals to perform\r\nthe pruning of the final buckets for the terms and significant_terms aggregation.\r\nI ran a simple benchmark with 1M documents containing 0 to 10 keywords randomly selected among 1M unique terms.\r\nThis field is used to perform a multi-level terms aggregation using rally to collect the response times.\r\nThe aggregation below is an example of a two-level terms aggregation that was used to perform the benchmark:\r\n\r\n```\r\n\"aggregations\":{\r\n \"1\":{\r\n \"terms\":{\r\n \"field\":\"keyword\"\r\n },\r\n \"aggregations\":{\r\n \"2\":{\r\n \"terms\":{\r\n \"field\":\"keyword\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n| Levels of aggregation | 50th percentile ms (master) | 50th percentile ms (patch) |\r\n| --- | --- | --- |\r\n| 2 | 640.41 | 577.499 |\r\n| 3 | 2239.66 | 600.154 |\r\n| 4 | 14141.2 | 703.512 |\r\n\r\nCloses #30117\r\n", "number": 30166, "review_comments": [ { "body": "Typo - \"needsFullScan\"", "created_at": "2018-04-26T08:26:32Z" }, { "body": "let's make both final", "created_at": "2018-04-26T09:51:05Z" }, { "body": "let's make them final?", "created_at": "2018-04-26T09:51:32Z" } ], "title": "Build global ordinals terms bucket from matching ordinals" }
{ "commits": [ { "message": "Build terms bucket from matching ordinals\n\nThe global ordinals terms aggregator has an option to remap global ordinals to\ndense ordinal that match the request. This mode is automatically picked when the terms\naggregator is a child of another bucket aggregator or when it needs to defer buckets to an\naggregation that is used in the ordering of the terms.\nThough when building the final buckets, this aggregator loops over all possible global ordinals\nrather than using the hash map that was built to remap the ordinals.\nFor fields with high cardinality this is highly inefficient and can lead to slow responses even\nwhen the number of terms that match the query is low.\nThis change fixes this performance issue by using the hash table of matching ordinals to perform\nthe pruning of the final buckets for the terms and significant_terms aggregation.\nI ran a simple benchmark with 1M documents containing 0 to 10 keywords randomly selected among 1M unique terms.\nThis field is used to perform a multi-level terms aggregation using rally to collect the response times.\nThe aggregation below is an example of a two-level terms aggregation that was used to perform the benchmark:\n\n```\n\"aggregations\":{\n \"1\":{\n \"terms\":{\n \"field\":\"keyword\"\n },\n \"aggregations\":{\n \"2\":{\n \"terms\":{\n \"field\":\"keyword\"\n }\n }\n }\n }\n}\n```\n\n| Levels of aggregation | 50th percentile ms (master) | 50th percentile ms (patch) |\n| --- | --- | --- |\n| 2 | 640.41ms | 577.499ms |\n| 3 | 2239.66ms | 600.154ms |\n| 4 | 14141.2ms | 703.512ms |\n\nCloses #30117" }, { "message": "unused import" }, { "message": "typos" }, { "message": "address review" }, { "message": "Merge branch 'master' into global_ordinal_loop" }, { "message": "Merge branch 'master' into global_ordinal_loop" } ], "files": [ { "diff": "@@ -20,10 +20,8 @@\n \n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.index.SortedSetDocValues;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.lease.Releasables;\n-import org.elasticsearch.common.util.LongHash;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactories;\n@@ -103,11 +101,22 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n \n BucketSignificancePriorityQueue<SignificantStringTerms.Bucket> ordered = new BucketSignificancePriorityQueue<>(size);\n SignificantStringTerms.Bucket spare = null;\n- for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) {\n- if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) {\n+ final boolean needsFullScan = bucketOrds == null || bucketCountThresholds.getMinDocCount() == 0;\n+ final long maxId = needsFullScan ? valueCount : bucketOrds.size();\n+ for (long ord = 0; ord < maxId; ord++) {\n+ final long globalOrd;\n+ final long bucketOrd;\n+ if (needsFullScan) {\n+ bucketOrd = bucketOrds == null ? ord : bucketOrds.find(ord);\n+ globalOrd = ord;\n+ } else {\n+ assert bucketOrds != null;\n+ bucketOrd = ord;\n+ globalOrd = bucketOrds.get(ord);\n+ }\n+ if (includeExclude != null && !acceptedGlobalOrdinals.get(globalOrd)) {\n continue;\n }\n- final long bucketOrd = getBucketOrd(globalTermOrd);\n final int bucketDocCount = bucketOrd < 0 ? 
0 : bucketDocCount(bucketOrd);\n if (bucketCountThresholds.getMinDocCount() > 0 && bucketDocCount == 0) {\n continue;\n@@ -120,7 +129,7 @@ public SignificantStringTerms buildAggregation(long owningBucketOrdinal) throws\n spare = new SignificantStringTerms.Bucket(new BytesRef(), 0, 0, 0, 0, null, format);\n }\n spare.bucketOrd = bucketOrd;\n- copy(lookupGlobalOrd.apply(globalTermOrd), spare.termBytes);\n+ copy(lookupGlobalOrd.apply(globalOrd), spare.termBytes);\n spare.subsetDf = bucketDocCount;\n spare.subsetSize = subsetSize;\n spare.supersetDf = termsAggFactory.getBackgroundFrequency(spare.termBytes);", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/GlobalOrdinalsSignificantTermsAggregator.java", "status": "modified" }, { "diff": "@@ -71,7 +71,7 @@ public class GlobalOrdinalsStringTermsAggregator extends AbstractStringTermsAggr\n protected final long valueCount;\n protected final GlobalOrdLookupFunction lookupGlobalOrd;\n \n- private final LongHash bucketOrds;\n+ protected final LongHash bucketOrds;\n \n public interface GlobalOrdLookupFunction {\n BytesRef apply(long ord) throws IOException;\n@@ -107,10 +107,6 @@ boolean remapGlobalOrds() {\n return bucketOrds != null;\n }\n \n- protected final long getBucketOrd(long globalOrd) {\n- return bucketOrds == null ? globalOrd : bucketOrds.find(globalOrd);\n- }\n-\n private void collectGlobalOrd(int doc, long globalOrd, LeafBucketCollector sub) throws IOException {\n if (bucketOrds == null) {\n collectExistingBucket(sub, doc, globalOrd);\n@@ -188,17 +184,28 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) throws IOE\n long otherDocCount = 0;\n BucketPriorityQueue<OrdBucket> ordered = new BucketPriorityQueue<>(size, order.comparator(this));\n OrdBucket spare = new OrdBucket(-1, 0, null, showTermDocCountError, 0);\n- for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) {\n- if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) {\n+ final boolean needsFullScan = bucketOrds == null || bucketCountThresholds.getMinDocCount() == 0;\n+ final long maxId = needsFullScan ? valueCount : bucketOrds.size();\n+ for (long ord = 0; ord < maxId; ord++) {\n+ final long globalOrd;\n+ final long bucketOrd;\n+ if (needsFullScan) {\n+ bucketOrd = bucketOrds == null ? ord : bucketOrds.find(ord);\n+ globalOrd = ord;\n+ } else {\n+ assert bucketOrds != null;\n+ bucketOrd = ord;\n+ globalOrd = bucketOrds.get(ord);\n+ }\n+ if (includeExclude != null && !acceptedGlobalOrdinals.get(globalOrd)) {\n continue;\n }\n- final long bucketOrd = getBucketOrd(globalTermOrd);\n final int bucketDocCount = bucketOrd < 0 ? 0 : bucketDocCount(bucketOrd);\n if (bucketCountThresholds.getMinDocCount() > 0 && bucketDocCount == 0) {\n continue;\n }\n otherDocCount += bucketDocCount;\n- spare.globalOrd = globalTermOrd;\n+ spare.globalOrd = globalOrd;\n spare.bucketOrd = bucketOrd;\n spare.docCount = bucketDocCount;\n if (bucketCountThresholds.getShardMinDocCount() <= spare.docCount) {\n@@ -378,7 +385,7 @@ private void mapSegmentCountsToGlobalCounts(LongUnaryOperator mapping) throws IO\n }\n final long ord = i - 1; // remember we do +1 when counting\n final long globalOrd = mapping.applyAsLong(ord);\n- long bucketOrd = getBucketOrd(globalOrd);\n+ long bucketOrd = bucketOrds == null ? 
globalOrd : bucketOrds.find(globalOrd);\n incrementBucketDocCount(bucketOrd, inc);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java", "status": "modified" } ] }
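As an aside for readers of the two diffs above, here is a small, self-contained Java sketch of the pruning they introduce: instead of scanning every possible global ordinal, iterate only the hash of ordinals that actually collected documents. A plain `Map` stands in for Elasticsearch's `LongHash`, and the class name, flags, and ordinal values are invented for illustration; this is not the aggregator code itself.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone illustration of the pruning above; a plain Map stands in for
// Elasticsearch's LongHash (globalOrd -> dense bucketOrd). All values are invented.
public class OrdinalPruningSketch {

    public static void main(String[] args) {
        final long valueCount = 1_000_000L;           // unique terms in the field = all global ordinals
        final Map<Long, Long> bucketOrds = new LinkedHashMap<>();
        bucketOrds.put(42L, 0L);                      // only three terms actually collected documents
        bucketOrds.put(9_001L, 1L);
        bucketOrds.put(777_777L, 2L);

        final boolean ordsRemapped = true;            // bucketOrds != null in the real aggregator
        final boolean minDocCountZero = false;        // min_doc_count == 0 still requires a full scan
        final boolean needsFullScan = ordsRemapped == false || minDocCountZero;

        long visited = 0;
        if (needsFullScan) {
            // old behaviour for the remapped case: O(valueCount) regardless of how many terms matched
            for (long globalOrd = 0; globalOrd < valueCount; globalOrd++) {
                visited++;
            }
        } else {
            // patched behaviour: iterate only the collected ordinals, O(matching terms)
            for (Map.Entry<Long, Long> entry : bucketOrds.entrySet()) {
                System.out.println("globalOrd=" + entry.getKey() + " -> bucketOrd=" + entry.getValue());
                visited++;
            }
        }
        System.out.println("ordinals visited: " + visited);
    }
}
```

With only a handful of matching terms out of a million unique values, the pruned loop visits three entries where the full scan visited a million, which is the effect reflected in the benchmark table above.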
{ "body": "If the cumulative sum agg encounters a null value, it's because the value is missing (like the first value from a derivative agg), the path is not valid, or the bucket in the path was empty.\r\n\r\nPreviously cusum would just explode on the null, but this changes it so we only increment the sum if the value is non-null and finite. This is safe because even if the cusum encounters all null or empty buckets, the cumulative sum is still zero (like how the sum agg returns zero even if all the docs were missing values)\r\n\r\nI went ahead and tweaked AggregatorTestCase to allow testing pipelines, so that I could delete the IT test and reimplement it as AggTests.\r\n\r\nI think this closes #27544", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-20T19:11:48Z" }, { "body": "Jenkins, retest this please.", "created_at": "2018-04-25T11:11:18Z" }, { "body": "Jenkins, retest this please", "created_at": "2018-04-27T11:00:18Z" }, { "body": "Jenkins, run gradle build tests", "created_at": "2018-04-27T15:27:10Z" }, { "body": "run sample packaging tests", "created_at": "2018-05-01T20:21:39Z" } ], "number": 29641, "title": "Fix NPE when CumulativeSum agg encounters null value/empty bucket" }
{ "body": "Instead of throwing a technical error when the user specifies a pipeline agg path that includes a multi-bucket, we can check during the property resolution and throw a friendlier exception\r\n\r\nThe old exception:\r\n\r\n>buckets_path must reference either a number value or a single value numeric metric aggregation, got: java.lang.Object[]\r\n\r\n\r\nCompared to the new exceptions:\r\n\r\n> [histo2] is a [date_histogram], but only number value or a single value numeric metric aggregation are allowed.\r\n\r\n>[the_percentiles] is a [tdigest_percentiles] which contains multiple values, but only number value or a single value numeric metric aggregation are allowed. Please specify which value to return.\r\n\r\n\r\nI'm not super pleased with how this was implemented, adding a boolean to `getProperty()`. But it seemed the least invasive way. In particular because we have to deal with both multi-buckets and numeric aggs that have multiple values. \r\n\r\n\r\n~Note: this uses the same changes to `AggregatorTestCase` as #29641 so we can test pipelines easier.~ Not relevant anymore\r\n\r\nCloses #25273", "number": 30152, "review_comments": [], "title": " Better error message when buckets_path refers to multi-bucket agg " }
{ "commits": [ { "message": "Better error message when buckets_path refers to multi-bucket agg\n\nInstead of throwing a technical error (`found Object[]`), we can check\nto see if any of the referenced aggs are a multi-bucket agg and throw\na friendlier exception.\n\nCloses #25273" }, { "message": "Tweak error message, and adjust how multi-numeric values are handled" }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" }, { "message": "Fix method signatures" }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" }, { "message": "Tweak to make sure _bucket_count works correctly\n\n_bucket_count is a special case, since it returns a single value\nfor a multi-bucket which is allowable." }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" } ], "files": [ { "diff": "@@ -201,7 +201,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "modules/aggs-matrix-stats/src/main/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStats.java", "status": "modified" }, { "diff": "@@ -160,10 +160,16 @@ public boolean isMapped() {\n */\n public Object getProperty(String path) {\n AggregationPath aggPath = AggregationPath.parse(path);\n- return getProperty(aggPath.getPathElementsAsStringList());\n+ return getProperty(aggPath.getPathElementsAsStringList(), true);\n }\n \n- public abstract Object getProperty(List<String> path);\n+ /**\n+ *\n+ * @param path the path to the property in the aggregation tree\n+ * @param allowMultiBucket true if multi-bucket aggregations are allowed to be processed, false if exception should be thrown\n+ * @return the value of the property\n+ */\n+ public abstract Object getProperty(List<String> path, boolean allowMultiBucket);\n \n /**\n * Read a size under the assumption that a value of 0 means unlimited.", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java", "status": "modified" }, { "diff": "@@ -68,7 +68,13 @@ protected InternalMultiBucketAggregation(StreamInput in) throws IOException {\n public abstract List<? extends InternalBucket> getBuckets();\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n+ boolean needsBucketCount = path.isEmpty() == false && path.get(0).equals(\"_bucket_count\") ? 
true : false;\n+ if (allowMultiBucket == false && needsBucketCount == false) {\n+ throw new AggregationExecutionException(\"[\" + getName() + \"] is a [\" + getType()\n+ + \"], but only number value or a single value numeric metric aggregations are allowed.\");\n+ }\n+\n if (path.isEmpty()) {\n return this;\n } else if (path.get(0).equals(\"_bucket_count\")) {\n@@ -119,6 +125,10 @@ public static int countInnerBucket(Aggregation agg) {\n public abstract static class InternalBucket implements Bucket, Writeable {\n \n public Object getProperty(String containingAggName, List<String> path) {\n+ return getProperty(containingAggName, path, true);\n+ }\n+\n+ public Object getProperty(String containingAggName, List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n }\n@@ -140,7 +150,8 @@ public Object getProperty(String containingAggName, List<String> path) {\n throw new InvalidAggregationPathException(\"Cannot find an aggregation named [\" + aggName + \"] in [\" + containingAggName\n + \"]\");\n }\n- return aggregation.getProperty(path.subList(1, path.size()));\n+\n+ return aggregation.getProperty(path.subList(1, path.size()), allowMultiBucket);\n }\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n+import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n \n import java.io.IOException;\n@@ -110,7 +112,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else {\n@@ -125,7 +127,11 @@ public Object getProperty(List<String> path) {\n if (aggregation == null) {\n throw new IllegalArgumentException(\"Cannot find an aggregation named [\" + aggName + \"] in [\" + getName() + \"]\");\n }\n- return aggregation.getProperty(path.subList(1, path.size()));\n+ if (allowMultiBucket == false && aggregation instanceof InternalMultiBucketAggregation) {\n+ throw new AggregationExecutionException(\"[\" + aggName + \"] is a [\" + aggregation.getType()\n+ + \"], but only number value or a single value numeric metric aggregations are allowed.\");\n+ }\n+ return aggregation.getProperty(path.subList(1, path.size()), allowMultiBucket);\n }\n }\n ", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java", "status": "modified" }, { "diff": "@@ -128,7 +128,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalGeoBounds.java", "status": "modified" }, { "diff": "@@ -126,7 +126,7 @@ 
public InternalGeoCentroid doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalGeoCentroid.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n \n@@ -52,7 +53,7 @@ public String getValueAsString() {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {\n@@ -83,8 +84,13 @@ public String valueAsString(String name) {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n+ if (allowMultiBucket == false) {\n+ throw new AggregationExecutionException(\"[\" + getName() + \"] is a [\" + getType()\n+ + \"] which contains multiple values, but only number value or a single value numeric metric aggregation. Please \" +\n+ \"specify which value to return.\");\n+ }\n return this;\n } else if (path.size() == 1) {\n return value(path.get(0));", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java", "status": "modified" }, { "diff": "@@ -115,7 +115,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalScriptedMetric.java", "status": "modified" }, { "diff": "@@ -165,7 +165,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalTopHits.java", "status": "modified" }, { "diff": "@@ -155,7 +155,7 @@ public static Double resolveBucketValue(MultiBucketsAggregation agg,\n public static Double resolveBucketValue(MultiBucketsAggregation agg,\n InternalMultiBucketAggregation.InternalBucket bucket, List<String> aggPathAsList, GapPolicy gapPolicy) {\n try {\n- Object propertyValue = bucket.getProperty(agg.getName(), aggPathAsList);\n+ Object propertyValue = bucket.getProperty(agg.getName(), aggPathAsList, false);\n if (propertyValue == null) {\n throw new AggregationExecutionException(AbstractPipelineAggregationBuilder.BUCKETS_PATH_FIELD.getPreferredName()\n + \" must reference either a number value or a single value numeric metric aggregation\");", "filename": 
"server/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java", "status": "modified" }, { "diff": "@@ -90,7 +90,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalBucketMetricValue.java", "status": "modified" }, { "diff": "@@ -71,7 +71,7 @@ DocValueFormat formatter() {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalDerivative.java", "status": "modified" }, { "diff": "@@ -0,0 +1,234 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.pipeline;\n+\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.NumericDocValuesField;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.store.Directory;\n+import org.elasticsearch.common.CheckedConsumer;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n+import org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram;\n+import org.elasticsearch.search.aggregations.metrics.AvgAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.PercentilesAggregationBuilder;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.function.Consumer;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class BucketHelperTests extends AggregatorTestCase {\n+\n+ private static final String HISTO_FIELD = \"histo\";\n+ private static final String VALUE_FIELD = \"value_field\";\n+\n+ private static final List<String> datasetTimes = Arrays.asList(\n+ \"2017-01-01T01:07:45\",\n+ \"2017-01-02T03:43:34\",\n+ \"2017-01-03T04:11:00\",\n+ \"2017-01-04T05:11:31\",\n+ \"2017-01-05T08:24:05\",\n+ \"2017-01-06T13:09:32\",\n+ \"2017-01-07T13:47:43\",\n+ \"2017-01-08T16:14:34\",\n+ \"2017-01-09T17:09:50\",\n+ \"2017-01-10T22:55:46\");\n+\n+ private static final List<Integer> datasetValues = Arrays.asList(1,2,3,4,5,6,7,8,9,10);\n+\n+ public void testPathingThroughMultiBucket() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2>the_avg\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only number value or a single value numeric metric aggregations are 
allowed.\"));\n+ }\n+\n+ public void testhMultiBucketBucketCount() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2._bucket_count\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10)));\n+ }\n+\n+ public void testCountOnMultiBucket() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2._count\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only number value or a single value numeric metric aggregations are allowed.\"));\n+ }\n+\n+ public void testPathingThroughSingleBuckets() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new FilterAggregationBuilder(\"the_filter\", new MatchAllQueryBuilder())\n+ .subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_filter>the_avg\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10));\n+ });\n+ }\n+\n+ public void testPathingThroughSingleThenMulti() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new FilterAggregationBuilder(\"the_filter\", new MatchAllQueryBuilder())\n+ .subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD))));\n+\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_filter>histo2>the_avg\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only 
number value or a single value numeric metric aggregations are allowed.\"));\n+ }\n+\n+ public void testPercentilesWithoutSpecificValue() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new PercentilesAggregationBuilder(\"the_percentiles\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_percentiles\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[the_percentiles] is a [tdigest_percentiles] which contains multiple values, but only number value or a \" +\n+ \"single value numeric metric aggregation. Please specify which value to return.\"));\n+ }\n+\n+ public void testPercentilesWithSpecificValue() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new PercentilesAggregationBuilder(\"the_percentiles\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_percentiles.99\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10));\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify) throws IOException {\n+ executeTestCase(query, aggBuilder, verify, indexWriter -> {\n+ Document document = new Document();\n+ int counter = 0;\n+ for (String date : datasetTimes) {\n+ if (frequently()) {\n+ indexWriter.commit();\n+ }\n+\n+ long instant = asLong(date);\n+ document.add(new SortedNumericDocValuesField(HISTO_FIELD, instant));\n+ document.add(new NumericDocValuesField(VALUE_FIELD, datasetValues.get(counter)));\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ counter += 1;\n+ }\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify,\n+ CheckedConsumer<RandomIndexWriter, IOException> setup) throws IOException {\n+\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ setup.accept(indexWriter);\n+ }\n+\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ IndexSearcher indexSearcher = newSearcher(indexReader, true, true);\n+\n+ DateFieldMapper.Builder builder = new DateFieldMapper.Builder(\"_name\");\n+ DateFieldMapper.DateFieldType fieldType = builder.fieldType();\n+ fieldType.setHasDocValues(true);\n+ fieldType.setName(HISTO_FIELD);\n+\n+ MappedFieldType valueFieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ valueFieldType.setHasDocValues(true);\n+ valueFieldType.setName(\"value_field\");\n+\n+ InternalAggregation histogram;\n+ histogram = searchAndReduce(indexSearcher, query, aggBuilder, new MappedFieldType[]{fieldType, valueFieldType});\n+ verify.accept(histogram);\n+ }\n+ }\n+ 
}\n+\n+ private static long asLong(String dateTime) {\n+ return DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.parser().parseDateTime(dateTime).getMillis();\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/pipeline/BucketHelperTests.java", "status": "added" }, { "diff": "@@ -42,7 +42,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (this.path.equals(path)) {\n return value;\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/execution/search/extractor/TestSingleValueAggregation.java", "status": "modified" } ] }
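A toy sketch of the friendlier validation added in the diffs above: while walking a `buckets_path`, report the offending aggregation's name and type instead of the old `got: java.lang.Object[]` message. The name-to-type registry, the simplified path walk, and the class name are invented for illustration; the real check lives inside the `getProperty()` overrides shown in the diff and also handles special keys such as `_bucket_count`.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Toy model of the check above; the name -> type registry and the simplified
// path walk are invented. The real check lives in the getProperty() overrides.
public class BucketsPathCheckSketch {

    private static final Map<String, String> AGG_TYPES = Map.of(
        "histo2", "date_histogram",
        "the_filter", "filter",
        "the_avg", "avg");

    private static final List<String> MULTI_BUCKET_TYPES =
        List.of("terms", "histogram", "date_histogram");

    static void resolve(List<String> path) {
        for (String name : path) {
            String type = AGG_TYPES.get(name);
            if (type != null && MULTI_BUCKET_TYPES.contains(type)) {
                throw new IllegalArgumentException("[" + name + "] is a [" + type
                    + "], but only number value or a single value numeric metric aggregations are allowed.");
            }
        }
        System.out.println("path " + path + " resolves to a single value");
    }

    public static void main(String[] args) {
        resolve(Arrays.asList("the_filter", "the_avg"));     // single-bucket agg then metric: fine
        try {
            resolve(Arrays.asList("histo2", "the_avg"));     // multi-bucket agg in the path
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());              // the friendlier message
        }
    }
}
```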
{ "body": "Executing a sum_bucket aggregation with a bucket path multiple levels deep, like `bucket_1>bucket_2>metric` will fail with:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"\",\r\n \"phase\": \"fetch\",\r\n \"grouped\": true,\r\n \"failed_shards\": [],\r\n \"caused_by\": {\r\n \"type\": \"aggregation_execution_exception\",\r\n \"reason\": \"buckets_path must reference either a number value or a single value numeric metric aggregation, got: java.lang.Object[]\"\r\n }\r\n },\r\n \"status\": 503\r\n}\r\n```\r\n\r\nIt would be preferable that this work. You can work around this by having one `sum_bucket` per aggregation depth level, but it would be nice to either have this return a more relevant error or just work.", "comments": [ { "body": "After discussing this with @polyfractal it seems that this is most likely only the case with nested terms aggregations.", "created_at": "2017-06-20T13:04:55Z" }, { "body": "This will be true for any nested aggregations (e.g. terms>histogram) and was made like this by design. I agree that we should improve the error message here but I am hesitant to support summing over multiple levels as its possible to do (like you say) with multiple `sum_bucket` aggregations and we don't currently have support int eh `bucket_path` syntax for traversing multi bucket aggregations", "created_at": "2017-06-20T13:08:46Z" }, { "body": "@colings86 I think that makes sense. I do think it'd be better to return a `422` response code than a `503`, along with a more helpful error.", "created_at": "2017-06-20T13:12:52Z" }, { "body": "I'll just say that I was confused by this for a while till I did some experiments and determined the workaround.", "created_at": "2017-06-20T13:13:23Z" }, { "body": "@elastic/es-search-aggs \r\n\r\nNote: keeping this open because we should fix the error message to be more helpful (display the name of the path component that was not compatible), not the underlying issue of pathing through a multi-bucket.", "created_at": "2018-03-22T16:44:33Z" }, { "body": "After a lot of digging I came here as well, to find it's not a bug but a feature. The error message threw me off as well. 
\r\n\r\nI was trying to do a `extended_stats_bucket` on a result that was first bucketed by `modelId`, and then another layer that bucketed the results in 24 hour chunks with `date_histogram`, which had an `agg` that summed up a specific value in that 24 hour window.\r\n\r\nThis produced a result like this, that I was unable to unleash the stats on:\r\n\r\n```\r\n \"aggregations\" : {\r\n \"models\" : {\r\n \"doc_count_error_upper_bound\" : 0,\r\n \"sum_other_doc_count\" : 0,\r\n \"buckets\" : [\r\n {\r\n \"key\" : 1602,\r\n \"doc_count\" : 116,\r\n \"24_hour_windows\" : {\r\n \"buckets\" : [\r\n {\r\n \"key_as_string\" : \"2019-03-01T00:00:00.000Z\",\r\n \"key\" : 1551398400000,\r\n \"doc_count\" : 11,\r\n \"value\" : {\r\n \"value\" : 39.0\r\n }\r\n },\r\n {\r\n \"key_as_string\" : \"2019-03-02T00:00:00.000Z\",\r\n \"key\" : 1551484800000,\r\n \"doc_count\" : 18,\r\n \"value\" : {\r\n \"value\" : 59.0\r\n }\r\n },\r\n```\r\n\r\nErroring out with:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [ ],\r\n \"type\" : \"search_phase_execution_exception\",\r\n \"reason\" : \"\",\r\n \"phase\" : \"fetch\",\r\n \"grouped\" : true,\r\n \"failed_shards\" : [ ],\r\n \"caused_by\" : {\r\n \"type\" : \"aggregation_execution_exception\",\r\n \"reason\" : \"buckets_path must reference either a number value or a single value numeric metric aggregation, got: [Object[]] at aggregation [24_hour_windows]\"\r\n }\r\n },\r\n \"status\" : 500\r\n}\r\n```\r\n\r\nI would have expected `models>24_hour_windows>value` to work.", "created_at": "2020-11-05T16:47:13Z" }, { "body": "@ssmulders so how did you solve it? i don't get what feature you are talking about.", "created_at": "2021-08-25T14:32:29Z" } ], "number": 25273, "title": "Sum bucket aggregation cannot sum more than one level deep" }
{ "body": "Instead of throwing a technical error when the user specifies a pipeline agg path that includes a multi-bucket, we can check during the property resolution and throw a friendlier exception\r\n\r\nThe old exception:\r\n\r\n>buckets_path must reference either a number value or a single value numeric metric aggregation, got: java.lang.Object[]\r\n\r\n\r\nCompared to the new exceptions:\r\n\r\n> [histo2] is a [date_histogram], but only number value or a single value numeric metric aggregation are allowed.\r\n\r\n>[the_percentiles] is a [tdigest_percentiles] which contains multiple values, but only number value or a single value numeric metric aggregation are allowed. Please specify which value to return.\r\n\r\n\r\nI'm not super pleased with how this was implemented, adding a boolean to `getProperty()`. But it seemed the least invasive way. In particular because we have to deal with both multi-buckets and numeric aggs that have multiple values. \r\n\r\n\r\n~Note: this uses the same changes to `AggregatorTestCase` as #29641 so we can test pipelines easier.~ Not relevant anymore\r\n\r\nCloses #25273", "number": 30152, "review_comments": [], "title": " Better error message when buckets_path refers to multi-bucket agg " }
{ "commits": [ { "message": "Better error message when buckets_path refers to multi-bucket agg\n\nInstead of throwing a technical error (`found Object[]`), we can check\nto see if any of the referenced aggs are a multi-bucket agg and throw\na friendlier exception.\n\nCloses #25273" }, { "message": "Tweak error message, and adjust how multi-numeric values are handled" }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" }, { "message": "Fix method signatures" }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" }, { "message": "Tweak to make sure _bucket_count works correctly\n\n_bucket_count is a special case, since it returns a single value\nfor a multi-bucket which is allowable." }, { "message": "Merge remote-tracking branch 'origin/master' into better_error_pipeline_path" } ], "files": [ { "diff": "@@ -201,7 +201,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "modules/aggs-matrix-stats/src/main/java/org/elasticsearch/search/aggregations/matrix/stats/InternalMatrixStats.java", "status": "modified" }, { "diff": "@@ -160,10 +160,16 @@ public boolean isMapped() {\n */\n public Object getProperty(String path) {\n AggregationPath aggPath = AggregationPath.parse(path);\n- return getProperty(aggPath.getPathElementsAsStringList());\n+ return getProperty(aggPath.getPathElementsAsStringList(), true);\n }\n \n- public abstract Object getProperty(List<String> path);\n+ /**\n+ *\n+ * @param path the path to the property in the aggregation tree\n+ * @param allowMultiBucket true if multi-bucket aggregations are allowed to be processed, false if exception should be thrown\n+ * @return the value of the property\n+ */\n+ public abstract Object getProperty(List<String> path, boolean allowMultiBucket);\n \n /**\n * Read a size under the assumption that a value of 0 means unlimited.", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java", "status": "modified" }, { "diff": "@@ -68,7 +68,13 @@ protected InternalMultiBucketAggregation(StreamInput in) throws IOException {\n public abstract List<? extends InternalBucket> getBuckets();\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n+ boolean needsBucketCount = path.isEmpty() == false && path.get(0).equals(\"_bucket_count\") ? 
true : false;\n+ if (allowMultiBucket == false && needsBucketCount == false) {\n+ throw new AggregationExecutionException(\"[\" + getName() + \"] is a [\" + getType()\n+ + \"], but only number value or a single value numeric metric aggregations are allowed.\");\n+ }\n+\n if (path.isEmpty()) {\n return this;\n } else if (path.get(0).equals(\"_bucket_count\")) {\n@@ -119,6 +125,10 @@ public static int countInnerBucket(Aggregation agg) {\n public abstract static class InternalBucket implements Bucket, Writeable {\n \n public Object getProperty(String containingAggName, List<String> path) {\n+ return getProperty(containingAggName, path, true);\n+ }\n+\n+ public Object getProperty(String containingAggName, List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n }\n@@ -140,7 +150,8 @@ public Object getProperty(String containingAggName, List<String> path) {\n throw new InvalidAggregationPathException(\"Cannot find an aggregation named [\" + aggName + \"] in [\" + containingAggName\n + \"]\");\n }\n- return aggregation.getProperty(path.subList(1, path.size()));\n+\n+ return aggregation.getProperty(path.subList(1, path.size()), allowMultiBucket);\n }\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/InternalMultiBucketAggregation.java", "status": "modified" }, { "diff": "@@ -22,8 +22,10 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n+import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n \n import java.io.IOException;\n@@ -110,7 +112,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else {\n@@ -125,7 +127,11 @@ public Object getProperty(List<String> path) {\n if (aggregation == null) {\n throw new IllegalArgumentException(\"Cannot find an aggregation named [\" + aggName + \"] in [\" + getName() + \"]\");\n }\n- return aggregation.getProperty(path.subList(1, path.size()));\n+ if (allowMultiBucket == false && aggregation instanceof InternalMultiBucketAggregation) {\n+ throw new AggregationExecutionException(\"[\" + aggName + \"] is a [\" + aggregation.getType()\n+ + \"], but only number value or a single value numeric metric aggregations are allowed.\");\n+ }\n+ return aggregation.getProperty(path.subList(1, path.size()), allowMultiBucket);\n }\n }\n ", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java", "status": "modified" }, { "diff": "@@ -128,7 +128,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalGeoBounds.java", "status": "modified" }, { "diff": "@@ -126,7 +126,7 @@ 
public InternalGeoCentroid doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalGeoCentroid.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.search.DocValueFormat;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n \n@@ -52,7 +53,7 @@ public String getValueAsString() {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {\n@@ -83,8 +84,13 @@ public String valueAsString(String name) {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n+ if (allowMultiBucket == false) {\n+ throw new AggregationExecutionException(\"[\" + getName() + \"] is a [\" + getType()\n+ + \"] which contains multiple values, but only number value or a single value numeric metric aggregation. Please \" +\n+ \"specify which value to return.\");\n+ }\n return this;\n } else if (path.size() == 1) {\n return value(path.get(0));", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalNumericMetricsAggregation.java", "status": "modified" }, { "diff": "@@ -115,7 +115,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalScriptedMetric.java", "status": "modified" }, { "diff": "@@ -165,7 +165,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/InternalTopHits.java", "status": "modified" }, { "diff": "@@ -155,7 +155,7 @@ public static Double resolveBucketValue(MultiBucketsAggregation agg,\n public static Double resolveBucketValue(MultiBucketsAggregation agg,\n InternalMultiBucketAggregation.InternalBucket bucket, List<String> aggPathAsList, GapPolicy gapPolicy) {\n try {\n- Object propertyValue = bucket.getProperty(agg.getName(), aggPathAsList);\n+ Object propertyValue = bucket.getProperty(agg.getName(), aggPathAsList, false);\n if (propertyValue == null) {\n throw new AggregationExecutionException(AbstractPipelineAggregationBuilder.BUCKETS_PATH_FIELD.getPreferredName()\n + \" must reference either a number value or a single value numeric metric aggregation\");", "filename": 
"server/src/main/java/org/elasticsearch/search/aggregations/pipeline/BucketHelpers.java", "status": "modified" }, { "diff": "@@ -90,7 +90,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalBucketMetricValue.java", "status": "modified" }, { "diff": "@@ -71,7 +71,7 @@ DocValueFormat formatter() {\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (path.isEmpty()) {\n return this;\n } else if (path.size() == 1 && \"value\".equals(path.get(0))) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/InternalDerivative.java", "status": "modified" }, { "diff": "@@ -0,0 +1,234 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.pipeline;\n+\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.NumericDocValuesField;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.store.Directory;\n+import org.elasticsearch.common.CheckedConsumer;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.index.query.MatchAllQueryBuilder;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregationExecutionException;\n+import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n+import org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram;\n+import org.elasticsearch.search.aggregations.metrics.AvgAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.PercentilesAggregationBuilder;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.function.Consumer;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class BucketHelperTests extends AggregatorTestCase {\n+\n+ private static final String HISTO_FIELD = \"histo\";\n+ private static final String VALUE_FIELD = \"value_field\";\n+\n+ private static final List<String> datasetTimes = Arrays.asList(\n+ \"2017-01-01T01:07:45\",\n+ \"2017-01-02T03:43:34\",\n+ \"2017-01-03T04:11:00\",\n+ \"2017-01-04T05:11:31\",\n+ \"2017-01-05T08:24:05\",\n+ \"2017-01-06T13:09:32\",\n+ \"2017-01-07T13:47:43\",\n+ \"2017-01-08T16:14:34\",\n+ \"2017-01-09T17:09:50\",\n+ \"2017-01-10T22:55:46\");\n+\n+ private static final List<Integer> datasetValues = Arrays.asList(1,2,3,4,5,6,7,8,9,10);\n+\n+ public void testPathingThroughMultiBucket() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2>the_avg\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only number value or a single value numeric metric aggregations are 
allowed.\"));\n+ }\n+\n+ public void testhMultiBucketBucketCount() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2._bucket_count\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10)));\n+ }\n+\n+ public void testCountOnMultiBucket() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"histo2._count\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only number value or a single value numeric metric aggregations are allowed.\"));\n+ }\n+\n+ public void testPathingThroughSingleBuckets() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new FilterAggregationBuilder(\"the_filter\", new MatchAllQueryBuilder())\n+ .subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD)));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_filter>the_avg\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10));\n+ });\n+ }\n+\n+ public void testPathingThroughSingleThenMulti() {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new FilterAggregationBuilder(\"the_filter\", new MatchAllQueryBuilder())\n+ .subAggregation(new DateHistogramAggregationBuilder(\"histo2\")\n+ .dateHistogramInterval(DateHistogramInterval.HOUR)\n+ .field(HISTO_FIELD).subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD))));\n+\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_filter>histo2>the_avg\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[histo2] is a [date_histogram], but only 
number value or a single value numeric metric aggregations are allowed.\"));\n+ }\n+\n+ public void testPercentilesWithoutSpecificValue() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new PercentilesAggregationBuilder(\"the_percentiles\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_percentiles\"));\n+\n+ AggregationExecutionException e = expectThrows(AggregationExecutionException.class,\n+ () -> executeTestCase(query, aggBuilder, histogram -> fail(\"Should have thrown exception because of multi-bucket agg\")));\n+\n+ assertThat(e.getMessage(),\n+ equalTo(\"[the_percentiles] is a [tdigest_percentiles] which contains multiple values, but only number value or a \" +\n+ \"single value numeric metric aggregation. Please specify which value to return.\"));\n+ }\n+\n+ public void testPercentilesWithSpecificValue() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new PercentilesAggregationBuilder(\"the_percentiles\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_percentiles.99\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertThat(((InternalDateHistogram)histogram).getBuckets().size(), equalTo(10));\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify) throws IOException {\n+ executeTestCase(query, aggBuilder, verify, indexWriter -> {\n+ Document document = new Document();\n+ int counter = 0;\n+ for (String date : datasetTimes) {\n+ if (frequently()) {\n+ indexWriter.commit();\n+ }\n+\n+ long instant = asLong(date);\n+ document.add(new SortedNumericDocValuesField(HISTO_FIELD, instant));\n+ document.add(new NumericDocValuesField(VALUE_FIELD, datasetValues.get(counter)));\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ counter += 1;\n+ }\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify,\n+ CheckedConsumer<RandomIndexWriter, IOException> setup) throws IOException {\n+\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ setup.accept(indexWriter);\n+ }\n+\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ IndexSearcher indexSearcher = newSearcher(indexReader, true, true);\n+\n+ DateFieldMapper.Builder builder = new DateFieldMapper.Builder(\"_name\");\n+ DateFieldMapper.DateFieldType fieldType = builder.fieldType();\n+ fieldType.setHasDocValues(true);\n+ fieldType.setName(HISTO_FIELD);\n+\n+ MappedFieldType valueFieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ valueFieldType.setHasDocValues(true);\n+ valueFieldType.setName(\"value_field\");\n+\n+ InternalAggregation histogram;\n+ histogram = searchAndReduce(indexSearcher, query, aggBuilder, new MappedFieldType[]{fieldType, valueFieldType});\n+ verify.accept(histogram);\n+ }\n+ }\n+ 
}\n+\n+ private static long asLong(String dateTime) {\n+ return DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.parser().parseDateTime(dateTime).getMillis();\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/pipeline/BucketHelperTests.java", "status": "added" }, { "diff": "@@ -42,7 +42,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n }\n \n @Override\n- public Object getProperty(List<String> path) {\n+ public Object getProperty(List<String> path, boolean allowMultiBucket) {\n if (this.path.equals(path)) {\n return value;\n }", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/execution/search/extractor/TestSingleValueAggregation.java", "status": "modified" } ] }
{ "body": "*Original comment by @astefan:*\n\nFor a test case with dynamic mapping created following (which creates a `date` type field for `release_date`):\r\n\r\n```\r\nPUT /library/book/_bulk?refresh\r\n{\"index\":{\"_id\": \"1\"}}\r\n{\"name\": \"Leviathan Wakes\", \"author\": \"James S.A. Corey\", \"release_date\": \"2011-06-02\", \"page_count\": 561,\"price\":33.456}\r\n```\r\n\r\na (wrong) sql query \r\n\r\n```\r\nPOST /_xpack/sql\r\n{\r\n \"query\": \"SELECT name.keyword FROM library WHERE release_date >= 2011-06-02\"\r\n}\r\n```\r\n\r\ngives an error message (expected) but one that is incomplete:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"sql_illegal_argument_exception\",\r\n \"reason\": \"Line %d:%d - Comparisons against variables are not (currently) supported; offender %s in %s\"\r\n }\r\n ],\r\n \"type\": \"sql_illegal_argument_exception\",\r\n \"reason\": \"Line %d:%d - Comparisons against variables are not (currently) supported; offender %s in %s\"\r\n },\r\n \"status\": 500\r\n}\r\n```", "comments": [ { "body": "The error message is indeed incorrect however I'm unable to reproduce that the query above throws an exception (it simply translates to a range query).\r\nOne thing to note (and this needs to be sorted out in the docs) is that the expression above `2011-06-02` is not a literal and thus evaluates to 2001 minus 06 minus 02 so 1993.\r\nOne can use `'` to specify it as a literal or a `CAST` - this is likely to be improved in a future release by extending the date-time support.", "created_at": "2018-04-25T15:27:16Z" } ], "number": 30016, "title": "SQL: incomplete error message for the wrong date comparison" }
{ "body": "Error messages had placeholders that were not replaced; this PR fixes\r\nthat\r\n\r\nFIX #30016", "number": 30138, "review_comments": [], "title": "SQL: Correct error message" }
{ "commits": [ { "message": "SQL: Correct error message\n\nError messages had placeholders that were not replaced; this PR fixes\nthat\n\nFIX #30016" }, { "message": "Merge remote-tracking branch 'remotes/upstream/master' into fix-for-30016" }, { "message": "Update tests" }, { "message": "Merge branch 'master' into fix-for-30016" } ], "files": [ { "diff": "@@ -10,6 +10,7 @@\n import org.elasticsearch.xpack.sql.expression.BinaryExpression;\n import org.elasticsearch.xpack.sql.expression.Expression;\n import org.elasticsearch.xpack.sql.expression.ExpressionId;\n+import org.elasticsearch.xpack.sql.expression.Expressions;\n import org.elasticsearch.xpack.sql.expression.FieldAttribute;\n import org.elasticsearch.xpack.sql.expression.Literal;\n import org.elasticsearch.xpack.sql.expression.NamedExpression;\n@@ -159,7 +160,7 @@ static QueryTranslation toQuery(Expression e, boolean onAggs) {\n }\n }\n \n- throw new UnsupportedOperationException(format(Locale.ROOT, \"Don't know how to translate %s %s\", e.nodeName(), e));\n+ throw new SqlIllegalArgumentException(\"Don't know how to translate {} {}\", e.nodeName(), e);\n }\n \n static LeafAgg toAgg(String id, Function f) {\n@@ -171,7 +172,7 @@ static LeafAgg toAgg(String id, Function f) {\n }\n }\n \n- throw new UnsupportedOperationException(format(Locale.ROOT, \"Don't know how to translate %s %s\", f.nodeName(), f));\n+ throw new SqlIllegalArgumentException(\"Don't know how to translate {} {}\", f.nodeName(), f);\n }\n \n static class GroupingContext {\n@@ -395,8 +396,8 @@ static String field(AggregateFunction af) {\n if (arg instanceof Literal) {\n return String.valueOf(((Literal) arg).value());\n }\n- throw new SqlIllegalArgumentException(\"Does not know how to convert argument \" + arg.nodeString()\n- + \" for function \" + af.nodeString());\n+ throw new SqlIllegalArgumentException(\"Does not know how to convert argument {} for function {}\", arg.nodeString(),\n+ af.nodeString());\n }\n \n // TODO: need to optimize on ngram\n@@ -505,9 +506,9 @@ static class BinaryComparisons extends ExpressionTranslator<BinaryComparison> {\n @Override\n protected QueryTranslation asQuery(BinaryComparison bc, boolean onAggs) {\n Check.isTrue(bc.right().foldable(),\n- \"Line %d:%d - Comparisons against variables are not (currently) supported; offender %s in %s\",\n+ \"Line {}:{}: Comparisons against variables are not (currently) supported; offender [{}] in [{}]\",\n bc.right().location().getLineNumber(), bc.right().location().getColumnNumber(),\n- bc.right().nodeName(), bc.nodeName());\n+ Expressions.name(bc.right()), bc.symbol());\n \n if (bc.left() instanceof NamedExpression) {\n NamedExpression ne = (NamedExpression) bc.left();\n@@ -605,8 +606,8 @@ private static Query translateQuery(BinaryComparison bc) {\n return new TermQuery(loc, name, value);\n }\n \n- Check.isTrue(false, \"don't know how to translate binary comparison [{}] in [{}]\", bc.right().nodeString(), bc);\n- return null;\n+ throw new SqlIllegalArgumentException(\"Don't know how to translate binary comparison [{}] in [{}]\", bc.right().nodeString(),\n+ bc);\n }\n }\n \n@@ -700,9 +701,8 @@ else if (onAggs) {\n return new QueryTranslation(query, aggFilter);\n }\n else {\n- throw new UnsupportedOperationException(\"No idea how to translate \" + e);\n+ throw new SqlIllegalArgumentException(\"No idea how to translate \" + e);\n }\n-\n }\n }\n ", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/planner/QueryTranslator.java", "status": "modified" }, { "diff": "@@ -153,7 +153,7 @@ 
public void testDottedFieldPathTypo() {\n public void testStarExpansionExcludesObjectAndUnsupportedTypes() {\n LogicalPlan plan = plan(\"SELECT * FROM test\");\n List<? extends NamedExpression> list = ((Project) plan).projections();\n- assertThat(list, hasSize(7));\n+ assertThat(list, hasSize(8));\n List<String> names = Expressions.names(list);\n assertThat(names, not(hasItem(\"some\")));\n assertThat(names, not(hasItem(\"some.dotted\")));", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/analysis/analyzer/FieldAttributeTests.java", "status": "modified" }, { "diff": "@@ -17,7 +17,7 @@ public class SysColumnsTests extends ESTestCase {\n public void testSysColumns() {\n List<List<?>> rows = new ArrayList<>();\n SysColumns.fillInRows(\"test\", \"index\", TypesTests.loadMapping(\"mapping-multi-field-variation.json\", true), null, rows, null);\n- assertEquals(15, rows.size());\n+ assertEquals(16, rows.size());\n assertEquals(24, rows.get(0).size());\n \n List<?> row = rows.get(0);\n@@ -38,13 +38,13 @@ public void testSysColumns() {\n assertEquals(null, radix(row));\n assertEquals(Integer.MAX_VALUE, bufferLength(row));\n \n- row = rows.get(6);\n+ row = rows.get(7);\n assertEquals(\"some.dotted\", name(row));\n assertEquals(Types.STRUCT, sqlType(row));\n assertEquals(null, radix(row));\n assertEquals(-1, bufferLength(row));\n \n- row = rows.get(14);\n+ row = rows.get(15);\n assertEquals(\"some.ambiguous.normalized\", name(row));\n assertEquals(Types.VARCHAR, sqlType(row));\n assertEquals(null, radix(row));", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/plan/logical/command/sys/SysColumnsTests.java", "status": "modified" }, { "diff": "@@ -6,6 +6,7 @@\n package org.elasticsearch.xpack.sql.planner;\n \n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.xpack.sql.SqlIllegalArgumentException;\n import org.elasticsearch.xpack.sql.analysis.analyzer.Analyzer;\n import org.elasticsearch.xpack.sql.analysis.index.EsIndex;\n import org.elasticsearch.xpack.sql.analysis.index.IndexResolution;\n@@ -18,9 +19,11 @@\n import org.elasticsearch.xpack.sql.plan.logical.Project;\n import org.elasticsearch.xpack.sql.planner.QueryTranslator.QueryTranslation;\n import org.elasticsearch.xpack.sql.querydsl.query.Query;\n+import org.elasticsearch.xpack.sql.querydsl.query.RangeQuery;\n import org.elasticsearch.xpack.sql.querydsl.query.TermQuery;\n import org.elasticsearch.xpack.sql.type.EsField;\n import org.elasticsearch.xpack.sql.type.TypesTests;\n+import org.joda.time.DateTime;\n \n import java.util.Map;\n import java.util.TimeZone;\n@@ -84,4 +87,56 @@ public void testTermEqualityNotAnalyzed() {\n assertEquals(\"int\", tq.term());\n assertEquals(5, tq.value());\n }\n+\n+ public void testComparisonAgainstColumns() {\n+ LogicalPlan p = plan(\"SELECT some.string FROM test WHERE date > int\");\n+ assertTrue(p instanceof Project);\n+ p = ((Project) p).child();\n+ assertTrue(p instanceof Filter);\n+ Expression condition = ((Filter) p).condition();\n+ SqlIllegalArgumentException ex = expectThrows(SqlIllegalArgumentException.class, () -> QueryTranslator.toQuery(condition, false));\n+ assertEquals(\"Line 1:43: Comparisons against variables are not (currently) supported; offender [int] in [>]\", ex.getMessage());\n+ }\n+\n+ public void testDateRange() {\n+ LogicalPlan p = plan(\"SELECT some.string FROM test WHERE date > 1969-05-13\");\n+ assertTrue(p instanceof Project);\n+ p = ((Project) p).child();\n+ assertTrue(p instanceof Filter);\n+ Expression condition = 
((Filter) p).condition();\n+ QueryTranslation translation = QueryTranslator.toQuery(condition, false);\n+ Query query = translation.query;\n+ assertTrue(query instanceof RangeQuery);\n+ RangeQuery rq = (RangeQuery) query;\n+ assertEquals(\"date\", rq.field());\n+ assertEquals(1951, rq.lower());\n+ }\n+\n+ public void testDateRangeLiteral() {\n+ LogicalPlan p = plan(\"SELECT some.string FROM test WHERE date > '1969-05-13'\");\n+ assertTrue(p instanceof Project);\n+ p = ((Project) p).child();\n+ assertTrue(p instanceof Filter);\n+ Expression condition = ((Filter) p).condition();\n+ QueryTranslation translation = QueryTranslator.toQuery(condition, false);\n+ Query query = translation.query;\n+ assertTrue(query instanceof RangeQuery);\n+ RangeQuery rq = (RangeQuery) query;\n+ assertEquals(\"date\", rq.field());\n+ assertEquals(\"1969-05-13\", rq.lower());\n+ }\n+\n+ public void testDateRangeCast() {\n+ LogicalPlan p = plan(\"SELECT some.string FROM test WHERE date > CAST('1969-05-13T12:34:56Z' AS DATE)\");\n+ assertTrue(p instanceof Project);\n+ p = ((Project) p).child();\n+ assertTrue(p instanceof Filter);\n+ Expression condition = ((Filter) p).condition();\n+ QueryTranslation translation = QueryTranslator.toQuery(condition, false);\n+ Query query = translation.query;\n+ assertTrue(query instanceof RangeQuery);\n+ RangeQuery rq = (RangeQuery) query;\n+ assertEquals(\"date\", rq.field());\n+ assertEquals(DateTime.parse(\"1969-05-13T12:34:56Z\"), rq.lower());\n+ }\n }\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/planner/QueryTranslatorTests.java", "status": "modified" }, { "diff": "@@ -4,6 +4,7 @@\n \"int\" : { \"type\" : \"integer\" },\n \"text\" : { \"type\" : \"text\" },\n \"keyword\" : { \"type\" : \"keyword\" },\n+ \"date\" : { \"type\" : \"date\" },\n \"unsupported\" : { \"type\" : \"ip_range\" },\n \"some\" : {\n \"properties\" : {", "filename": "x-pack/plugin/sql/src/test/resources/mapping-multi-field-variation.json", "status": "modified" } ] }
{ "body": "*Original comment by @davidkyle:*\n\nOpen a job send some data and close the job then reopen the job and send some data timestamped a week later than the previous batch. Autodetect will create empty bucket results for the intervening period but `DataCounts::bucket_count` will not reflect that. \r\n\r\nThe test`MlBasicMultiNodeIT::testMiniFarequoteReopen` does exactly this but the test was asserting that `bucket_count == 2` rather than `bucket_count = 7 days of buckets`. `bucket_count` should equal to the number of buckets written by autodetect, with the caveat that old results are sometimes pruned. ", "comments": [], "number": 30080, "title": "[ML] bucket_count is inaccurate when there are gaps in the data" }
{ "body": "This commit refactors the DataStreamDiagnostics class\r\nachieving the following advantages:\r\n\r\n- simpler code; by encapsulating the moving bucket histogram\r\ninto its own class\r\n- better performance; by using an array to store the buckets\r\ninstead of a map\r\n- explicit handling of gap buckets; in preparation of fixing #30080", "number": 30129, "review_comments": [ { "body": "I suspect an off by 1:\r\n\r\n```\r\nlatency = 11\r\nbucketSpanMs=4\r\n```\r\nwould (ignoring `MIN_BUCKETS`) result in `mazSize` = 2,\r\n\r\nbucket[0] => (0, 4]\r\nbucket[1] => (4, 8]\r\n\r\nif you get a results for t = 9, there is no bucket[2] to place it, and because latency == 11 you can not finalize bucket[0] yet. So I think you must round up, not down.", "created_at": "2018-04-25T13:43:35Z" }, { "body": "Ah, yes. I forgot to test how it works with latency and there aren't any unit tests. I'll fix this and add some tests as well.", "created_at": "2018-04-25T14:00:17Z" }, { "body": "`(latencyMs + bucketSpanMs -1) / bucketSpanMs` will return the right value.", "created_at": "2018-04-25T16:15:38Z" }, { "body": "`bucketSpanMs` and `latencyMs` are needed the job isn't you can break that dependency. Also the name `BucketHistogram` is very generic perhaps `BucketDiagnostics` ", "created_at": "2018-04-25T16:17:36Z" }, { "body": "Pushed the fix.", "created_at": "2018-04-25T16:17:45Z" }, { "body": "I like the rename to `BucketDiagnostics`. Pushed a commit that fixes that.\r\n\r\nAs for removing the dependency to job, I can see the point but on the other hand it increases argument lists and it is a dependency that does not seem completely out of place. I could easily imagine more job parameters being used somehow in the diagnostics.", "created_at": "2018-04-25T17:34:53Z" } ], "title": "[ML] Refactor DataStreamDiagnostics to use array" }
{ "commits": [ { "message": "[ML] Refactor DataStreamDiagnostics to use array\n\nThis commit refactors the DataStreamDiagnostics class\nachieving the following advantages:\n\n- simpler code; by encapsulating the moving bucket histogram\ninto its own class\n- better performance; by using an array to store the buckets\ninstead of a map\n- explicit handling of gap buckets; in preparation of fixing #30080" }, { "message": "Fix latency" }, { "message": "Rename BucketHistogram to BucketDiagnostics" }, { "message": "Remove public qualifiers from package private class" } ], "files": [ { "diff": "@@ -12,8 +12,9 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.xpack.core.ml.job.config.Job;\n-import org.elasticsearch.xpack.ml.job.persistence.JobDataCountsPersister;\n import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;\n+import org.elasticsearch.xpack.ml.job.persistence.JobDataCountsPersister;\n+import org.elasticsearch.xpack.ml.job.process.diagnostics.DataStreamDiagnostics;\n \n import java.util.Date;\n import java.util.Locale;", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/DataCountsReporter.java", "status": "modified" }, { "diff": "@@ -0,0 +1,132 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.job.process.diagnostics;\n+\n+import org.elasticsearch.xpack.core.ml.job.config.Job;\n+import org.elasticsearch.xpack.core.ml.utils.Intervals;\n+\n+/**\n+ * A moving window of buckets that allow keeping\n+ * track of some statistics like the bucket count,\n+ * empty or sparse buckets, etc.\n+ *\n+ * The counts are stored in an array that functions as a\n+ * circular buffer. When time is advanced, all buckets\n+ * out of the window are flushed.\n+ */\n+class BucketDiagnostics {\n+\n+ private static final int MIN_BUCKETS = 10;\n+\n+ private final long bucketSpanMs;\n+ private final long latencyMs;\n+ private final int maxSize;\n+ private final long[] buckets;\n+ private long movingBucketCount = 0;\n+ private long latestBucketStartMs = -1;\n+ private int latestBucketIndex;\n+ private long earliestBucketStartMs = -1;\n+ private int earliestBucketIndex;\n+ private long latestFlushedBucketStartMs = -1;\n+ private final BucketFlushListener bucketFlushListener;\n+\n+ BucketDiagnostics(Job job, BucketFlushListener bucketFlushListener) {\n+ bucketSpanMs = job.getAnalysisConfig().getBucketSpan().millis();\n+ latencyMs = job.getAnalysisConfig().getLatency() == null ? 
0 : job.getAnalysisConfig().getLatency().millis();\n+ maxSize = Math.max((int) (Intervals.alignToCeil(latencyMs, bucketSpanMs) / bucketSpanMs), MIN_BUCKETS);\n+ buckets = new long[maxSize];\n+ this.bucketFlushListener = bucketFlushListener;\n+ }\n+\n+ void addRecord(long recordTimestampMs) {\n+ long bucketStartMs = Intervals.alignToFloor(recordTimestampMs, bucketSpanMs);\n+\n+ // Initialize earliest/latest times\n+ if (latestBucketStartMs < 0) {\n+ latestBucketStartMs = bucketStartMs;\n+ earliestBucketStartMs = bucketStartMs;\n+ }\n+\n+ advanceTime(bucketStartMs);\n+ addToBucket(bucketStartMs);\n+ }\n+\n+ private void advanceTime(long bucketStartMs) {\n+ while (bucketStartMs > latestBucketStartMs) {\n+ int flushBucketIndex = (latestBucketIndex + 1) % maxSize;\n+\n+ if (flushBucketIndex == earliestBucketIndex) {\n+ flush(flushBucketIndex);\n+ movingBucketCount -= buckets[flushBucketIndex];\n+ earliestBucketStartMs += bucketSpanMs;\n+ earliestBucketIndex = (earliestBucketIndex + 1) % maxSize;\n+ }\n+ buckets[flushBucketIndex] = 0L;\n+\n+ latestBucketStartMs += bucketSpanMs;\n+ latestBucketIndex = flushBucketIndex;\n+ }\n+ }\n+\n+ private void addToBucket(long bucketStartMs) {\n+ int offsetToLatest = (int) ((bucketStartMs - latestBucketStartMs) / bucketSpanMs);\n+ int bucketIndex = (latestBucketIndex + offsetToLatest) % maxSize;\n+ if (bucketIndex < 0) {\n+ bucketIndex = maxSize + bucketIndex;\n+ }\n+\n+ ++buckets[bucketIndex];\n+ ++movingBucketCount;\n+\n+ if (bucketStartMs < earliestBucketStartMs) {\n+ earliestBucketStartMs = bucketStartMs;\n+ earliestBucketIndex = bucketIndex;\n+ }\n+ }\n+\n+ private void flush(int bucketIndex) {\n+ long bucketStartMs = getTimestampMs(bucketIndex);\n+ if (bucketStartMs > latestFlushedBucketStartMs) {\n+ bucketFlushListener.onBucketFlush(bucketStartMs, buckets[bucketIndex]);\n+ latestFlushedBucketStartMs = bucketStartMs;\n+ }\n+ }\n+\n+ private long getTimestampMs(int bucketIndex) {\n+ int offsetToLatest = latestBucketIndex - bucketIndex;\n+ if (offsetToLatest < 0) {\n+ offsetToLatest = maxSize + offsetToLatest;\n+ }\n+ return latestBucketStartMs - offsetToLatest * bucketSpanMs;\n+ }\n+\n+ void flush() {\n+ if (latestBucketStartMs < 0) {\n+ return;\n+ }\n+\n+ int bucketIndex = earliestBucketIndex;\n+ while (bucketIndex != latestBucketIndex) {\n+ flush(bucketIndex);\n+ bucketIndex = (bucketIndex + 1) % maxSize;\n+ }\n+ }\n+\n+ double averageBucketCount() {\n+ return (double) movingBucketCount / size();\n+ }\n+\n+ private int size() {\n+ if (latestBucketStartMs < 0) {\n+ return 0;\n+ }\n+ return (int) ((latestBucketStartMs - earliestBucketStartMs) / bucketSpanMs) + 1;\n+ }\n+\n+ interface BucketFlushListener {\n+ void onBucketFlush(long bucketStartMs, long bucketCounts);\n+ }\n+}", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/diagnostics/BucketDiagnostics.java", "status": "added" }, { "diff": "@@ -0,0 +1,113 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. 
Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.ml.job.process.diagnostics;\n+\n+import org.apache.logging.log4j.Logger;\n+import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.xpack.core.ml.job.config.Job;\n+\n+import java.util.Date;\n+\n+public class DataStreamDiagnostics {\n+\n+ /**\n+ * Threshold to report potential sparsity problems.\n+ *\n+ * Sparsity score is calculated: log(average) - log(current)\n+ *\n+ * If score is above the threshold, bucket is reported as sparse bucket.\n+ */\n+ private static final int DATA_SPARSITY_THRESHOLD = 2;\n+\n+ private static final Logger LOGGER = Loggers.getLogger(DataStreamDiagnostics.class);\n+\n+ private final BucketDiagnostics bucketDiagnostics;\n+\n+ private long bucketCount = 0;\n+ private long emptyBucketCount = 0;\n+ private long latestEmptyBucketTime = -1;\n+ private long sparseBucketCount = 0;\n+ private long latestSparseBucketTime = -1;\n+\n+ public DataStreamDiagnostics(Job job) {\n+ bucketDiagnostics = new BucketDiagnostics(job, createBucketFlushListener());\n+ }\n+\n+ private BucketDiagnostics.BucketFlushListener createBucketFlushListener() {\n+ return (flushedBucketStartMs, flushedBucketCount) -> {\n+ ++bucketCount;\n+ if (flushedBucketCount == 0) {\n+ ++emptyBucketCount;\n+ latestEmptyBucketTime = flushedBucketStartMs;\n+ } else {\n+ // simplistic way to calculate data sparsity, just take the log and\n+ // check the difference\n+ double averageBucketSize = bucketDiagnostics.averageBucketCount();\n+ double logAverageBucketSize = Math.log(averageBucketSize);\n+ double logBucketSize = Math.log(flushedBucketCount);\n+ double sparsityScore = logAverageBucketSize - logBucketSize;\n+\n+ if (sparsityScore > DATA_SPARSITY_THRESHOLD) {\n+ LOGGER.debug(\"Sparse bucket {}, this bucket: {} average: {}, sparsity score: {}\", flushedBucketStartMs,\n+ flushedBucketCount, averageBucketSize, sparsityScore);\n+ ++sparseBucketCount;\n+ latestSparseBucketTime = flushedBucketStartMs;\n+ }\n+ }\n+ };\n+ }\n+\n+ /**\n+ * Check record\n+ *\n+ * @param recordTimestampInMs\n+ * The record timestamp in milliseconds since epoch\n+ */\n+ public void checkRecord(long recordTimestampInMs) {\n+ bucketDiagnostics.addRecord(recordTimestampInMs);\n+ }\n+\n+ /**\n+ * Flush all counters, should be called at the end of the data stream\n+ */\n+ public void flush() {\n+ // flush all we know\n+ bucketDiagnostics.flush();\n+ }\n+\n+ public long getBucketCount() {\n+ return bucketCount;\n+ }\n+\n+ public long getEmptyBucketCount() {\n+ return emptyBucketCount;\n+ }\n+\n+ public Date getLatestEmptyBucketTime() {\n+ return latestEmptyBucketTime > 0 ? new Date(latestEmptyBucketTime) : null;\n+ }\n+\n+ public long getSparseBucketCount() {\n+ return sparseBucketCount;\n+ }\n+\n+ public Date getLatestSparseBucketTime() {\n+ return latestSparseBucketTime > 0 ? new Date(latestSparseBucketTime) : null;\n+ }\n+\n+ /**\n+ * Resets counts,\n+ *\n+ * Note: This does not reset the inner state for e.g. sparse bucket\n+ * detection.\n+ *\n+ */\n+ public void resetCounts() {\n+ bucketCount = 0;\n+ emptyBucketCount = 0;\n+ sparseBucketCount = 0;\n+ }\n+}", "filename": "x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/process/diagnostics/DataStreamDiagnostics.java", "status": "added" } ] }
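The sparsity check in the DataStreamDiagnostics code above is a log-ratio threshold: a bucket is flagged when `log(average) - log(current)` exceeds `DATA_SPARSITY_THRESHOLD` (2). A worked example of when that trips (the counts are made up for illustration; class name hypothetical):

```java
// Illustrative only: the log-based sparsity score used by DataStreamDiagnostics.
public class SparsityScoreExample {
    public static void main(String[] args) {
        double averageBucketCount = 1000.0; // hypothetical moving average
        double thisBucketCount = 100.0;     // hypothetical sparse-looking bucket

        // score = log(average) - log(current) = log(average / current)
        double score = Math.log(averageBucketCount) - Math.log(thisBucketCount);

        // ~2.30 here, so with a threshold of 2 any bucket below roughly
        // average / e^2 (about 1/7.4 of the average) is reported as sparse.
        System.out.println("sparsity score: " + score + ", sparse: " + (score > 2));
    }
}
```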
{ "body": "*Original comment by @tlrx:*\n\nThe test `JdbcSqlSpecIT` failed today on CI on master branch:\r\nLINK REDACTED\r\n\r\nIt reproduces locally:\r\n\r\n```\r\n./gradlew :x-pack-elasticsearch:qa:sql:security:no-ssl:integTestRunner -Dtests.seed=C56C85CB596BD3AA -Dtests.class=org.elasticsearch.xpack.qa.sql.security.JdbcSqlSpecIT -Dtests.method=\"test {math.testMathATan2}\" -Dtests.security.manager=true -Dtests.locale=sq -Dtests.timezone=Indian/Mayotte\r\n```\r\n\r\nMany tests methods failed but they seem to have the same original root cause:\r\n```\r\nThrowable LINK REDACTED: java.sql.SQLException: Server encountered an error [Unexpected failure decoding cursor]. [SqlIllegalArgumentException[Unexpected failure decoding cursor]; nested: IllegalArgumentException[Unknown NamedWriteable [org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.Processor][mb]];\r\n10:56:30 > \tat org.elasticsearch.xpack.sql.session.Cursors.decodeFromString(Cursors.java:108)\r\n10:56:30 > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.operation(TransportSqlQueryAction.java:79)\r\n10:56:30 > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:63)\r\n10:56:30 > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:43)\r\n```\r\n\r\ncc @elastic/es-sql ", "comments": [ { "body": "*Original comment by @nik9000:*\n\nLooks like a thing for @costin that probably came from the switch to composite aggs.", "created_at": "2018-04-23T13:06:50Z" }, { "body": "*Original comment by @costin:*\n\nIt's from the new math functions. The question is how I can fix it? The code is frozen so it will have to wait until after the move, right?", "created_at": "2018-04-23T13:32:59Z" }, { "body": "*Original comment by @nik9000:*\n\n> It's from the new math functions. The question is how I can fix it? The code is frozen so it will have to wait until after the move, right?\r\n\r\nYou can open a PR but then you wait, yeah.", "created_at": "2018-04-23T14:07:38Z" }, { "body": "More recent failures from today:\r\n\r\nhttps://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.3+periodic/5/console\r\n", "created_at": "2018-04-25T20:16:28Z" }, { "body": "Other failures:\r\n* https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=fedora/2374/consoleFull\r\n* https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/1611/consoleText\r\n\r\n```\r\nJdbcSqlSpecIT.test {math.testMathPower} <<< FAILURES!\r\n > Throwable #1: java.sql.SQLException: Server encountered an error [Unexpected failure decoding cursor]. 
[SqlIllegalArgumentException[Unexpected failure decoding cursor]; nested: IllegalArgumentException[Unknown NamedWriteable [org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.Processor][mb]];\r\n > \tat org.elasticsearch.xpack.sql.session.Cursors.decodeFromString(Cursors.java:96)\r\n > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.operation(TransportSqlQueryAction.java:67)\r\n > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:51)\r\n > \tat org.elasticsearch.xpack.sql.plugin.TransportSqlQueryAction.doExecute(TransportSqlQueryAction.java:31)\r\n > \tat org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:143)\r\n > \tat org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)\r\n > \tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)\r\n > \tat org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81)\r\n > \tat org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87)\r\n > \tat org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76)\r\n > \tat org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)\r\n > \tat org.elasticsearch.xpack.sql.plugin.RestSqlQueryAction.lambda$prepareRequest$0(RestSqlQueryAction.java:79)\r\n > \tat org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:97)\r\n > \tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:239)\r\n > \tat org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:335)\r\n > \tat org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:173)\r\n > \tat org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:467)\r\n > \tat org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137)\r\n > \tat io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n > \tat io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)\r\n > \tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n > \tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n > \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n > \tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)\r\n > \tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)\r\n > \tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)\r\n > \tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545)\r\n > \tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499)\r\n > \tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)\r\n```", "created_at": "2018-04-26T10:42:56Z" } ], "number": 30014, "title": "[CI] JdbcSqlSpecIT fails because of unknown NamedWriteable Processor" }
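The "Unknown NamedWriteable" failure above is the generic symptom of forgetting to register a reader for a name that later shows up on the wire, in this case while decoding a scroll cursor. The following is a simplified stand-in for the lookup, not the real `NamedWriteableRegistry`; names and types are toy versions chosen only to show the shape of the failure:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative only: a toy registry showing why a missing entry surfaces as
// "Unknown NamedWriteable [...]" at deserialization time rather than at write time.
public class ToyRegistryExample {
    interface Processor {}

    private final Map<String, Supplier<Processor>> readers = new HashMap<>();

    void register(String name, Supplier<Processor> reader) {
        readers.put(name, reader);
    }

    Processor read(String name) {
        Supplier<Processor> reader = readers.get(name);
        if (reader == null) {
            throw new IllegalArgumentException("Unknown NamedWriteable [Processor][" + name + "]");
        }
        return reader.get();
    }

    public static void main(String[] args) {
        ToyRegistryExample registry = new ToyRegistryExample();
        registry.register("math", () -> new Processor() {}); // other processors registered...
        registry.read("mb"); // ...but the reader for "mb" never was, so decoding throws
    }
}
```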
{ "body": "BinaryMathProcessor was missing from the list of register named\r\nwriteables causing deserialization errors\r\n\r\nFix #30014", "number": 30127, "review_comments": [], "title": "SQL: Add BinaryMathProcessor to named writeables list" }
{ "commits": [ { "message": "SQL: Add BinaryMathProcessor to named writeables list\n\nBinaryMathProcessor was missing from the list of register named\nwriteables causing deserialization errors\n\nFix #30014" }, { "message": "Merge remote-tracking branch 'remotes/upstream/master' into fix-for-30014" } ], "files": [ { "diff": "@@ -10,6 +10,7 @@\n import org.elasticsearch.xpack.sql.expression.function.scalar.arithmetic.BinaryArithmeticProcessor;\n import org.elasticsearch.xpack.sql.expression.function.scalar.arithmetic.UnaryArithmeticProcessor;\n import org.elasticsearch.xpack.sql.expression.function.scalar.datetime.DateTimeProcessor;\n+import org.elasticsearch.xpack.sql.expression.function.scalar.math.BinaryMathProcessor;\n import org.elasticsearch.xpack.sql.expression.function.scalar.math.MathProcessor;\n import org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.BucketExtractorProcessor;\n import org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.ChainingProcessor;\n@@ -40,6 +41,7 @@ public static List<NamedWriteableRegistry.Entry> getNamedWriteables() {\n // arithmetic\n entries.add(new Entry(Processor.class, BinaryArithmeticProcessor.NAME, BinaryArithmeticProcessor::new));\n entries.add(new Entry(Processor.class, UnaryArithmeticProcessor.NAME, UnaryArithmeticProcessor::new));\n+ entries.add(new Entry(Processor.class, BinaryMathProcessor.NAME, BinaryMathProcessor::new));\n // datetime\n entries.add(new Entry(Processor.class, DateTimeProcessor.NAME, DateTimeProcessor::new));\n // math", "filename": "x-pack/plugin/sql/src/main/java/org/elasticsearch/xpack/sql/expression/function/scalar/Processors.java", "status": "modified" }, { "diff": "@@ -0,0 +1,59 @@\n+/*\n+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one\n+ * or more contributor license agreements. 
Licensed under the Elastic License;\n+ * you may not use this file except in compliance with the Elastic License.\n+ */\n+package org.elasticsearch.xpack.sql.expression.function.scalar.math;\n+\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n+import org.elasticsearch.common.io.stream.Writeable.Reader;\n+import org.elasticsearch.test.AbstractWireSerializingTestCase;\n+import org.elasticsearch.xpack.sql.expression.Literal;\n+import org.elasticsearch.xpack.sql.expression.function.scalar.Processors;\n+import org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.ConstantProcessor;\n+import org.elasticsearch.xpack.sql.expression.function.scalar.processor.runtime.Processor;\n+\n+import static org.elasticsearch.xpack.sql.tree.Location.EMPTY;\n+\n+public class BinaryMathProcessorTests extends AbstractWireSerializingTestCase<BinaryMathProcessor> {\n+ public static BinaryMathProcessor randomProcessor() {\n+ return new BinaryMathProcessor(\n+ new ConstantProcessor(randomLong()),\n+ new ConstantProcessor(randomLong()),\n+ randomFrom(BinaryMathProcessor.BinaryMathOperation.values()));\n+ }\n+\n+ @Override\n+ protected BinaryMathProcessor createTestInstance() {\n+ return randomProcessor();\n+ }\n+\n+ @Override\n+ protected Reader<BinaryMathProcessor> instanceReader() {\n+ return BinaryMathProcessor::new;\n+ }\n+\n+ @Override\n+ protected NamedWriteableRegistry getNamedWriteableRegistry() {\n+ return new NamedWriteableRegistry(Processors.getNamedWriteables());\n+ }\n+\n+ public void testAtan2() {\n+ Processor ba = new ATan2(EMPTY, l(1), l(1)).makeProcessorDefinition().asProcessor();\n+ assertEquals(0.7853981633974483d, ba.process(null));\n+ }\n+\n+ public void testPower() {\n+ Processor ba = new Power(EMPTY, l(2), l(2)).makeProcessorDefinition().asProcessor();\n+ assertEquals(4d, ba.process(null));\n+ }\n+\n+ public void testHandleNull() {\n+ assertNull(new ATan2(EMPTY, l(null), l(3)).makeProcessorDefinition().asProcessor().process(null));\n+ assertNull(new Power(EMPTY, l(null), l(null)).makeProcessorDefinition().asProcessor().process(null));\n+ }\n+ \n+ private static Literal l(Object value) {\n+ return Literal.of(EMPTY, value);\n+ }\n+}\n\\ No newline at end of file", "filename": "x-pack/plugin/sql/src/test/java/org/elasticsearch/xpack/sql/expression/function/scalar/math/BinaryMathProcessorTests.java", "status": "added" } ] }
{ "body": "Cumulative sum aggregation returns an error when using it with other pipeline aggregations. This happens consistently with both bucket_scripts and derivatives.\r\n\r\nSteps to Reproduce:\r\n\r\n1. Index some Metricbeat data\r\n2. Run the following query:\r\n\r\n```\r\nGET metricbeat-*/_search\r\n{\r\n \"size\": 0,\r\n \"query\": {\r\n \"range\": {\r\n \"@timestamp\": {\r\n \"gte\": \"now-15m/m\",\r\n \"lte\": \"now\"\r\n }\r\n }\r\n },\r\n \"aggs\": {\r\n \"timeseries\": {\r\n \"date_histogram\": {\r\n \"field\": \"@timestamp\",\r\n \"interval\": \"10s\"\r\n },\r\n \"aggs\": {\r\n \"maxTX\": {\r\n \"max\": {\r\n \"field\": \"system.network.out.bytes\"\r\n }\r\n },\r\n \"calculation\": {\r\n \"bucket_script\": {\r\n \"buckets_path\": {\r\n \"maxTX\": \"maxTX\"\r\n }, \r\n \"script\": { \r\n \"source\": \"params.maxTX\" \r\n }\r\n }\r\n },\r\n \"cumsum\": {\r\n \"cumulative_sum\": {\r\n \"buckets_path\": \"calculation\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n3. Receive following response:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"\",\r\n \"phase\": \"fetch\",\r\n \"grouped\": true,\r\n \"failed_shards\": [],\r\n \"caused_by\": {\r\n \"type\": \"null_pointer_exception\",\r\n \"reason\": null\r\n }\r\n },\r\n \"status\": 503\r\n}\r\n```\r\n\r\n", "comments": [ { "body": "@simianhacker The server logs should contain a stack trace for that null pointer exception. Do you have it?", "created_at": "2017-11-27T22:26:34Z" }, { "body": "Woops... forgot to add that... here you go:\r\n\r\n```\r\n[2017-11-27T16:06:27,676][WARN ][r.suppressed ] path: /infra-*/_search, params: {index=infra-*}\r\norg.elasticsearch.action.search.SearchPhaseExecutionException:\r\n at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:272) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:92) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:622) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.TimedRunnable.run(TimedRunnable.java:41) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]\r\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]\r\nCaused by: java.lang.NullPointerException\r\n at org.elasticsearch.search.aggregations.pipeline.cumulativesum.CumulativeSumPipelineAggregator.reduce(CumulativeSumPipelineAggregator.java:82) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:123) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:77) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at 
org.elasticsearch.action.search.SearchPhaseController.reduceAggs(SearchPhaseController.java:523) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:500) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:417) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:736) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:102) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:45) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:87) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]\r\n ... 4 more\r\n\r\n```", "created_at": "2017-11-27T23:08:03Z" }, { "body": "Thanks. It looks like an NPE from unboxing a null, although that appears to only arise if an invalid aggregation path exception was swallowed previously.\r\n\r\nCan you take a look @colings86?", "created_at": "2017-11-27T23:18:44Z" }, { "body": "I think this occurs because for an empty bucket (a bucket where the doc count is 0), the bucket_script is skipped and doesn't output anything for the bucket. This means that when the cumulative sum aggregator try to access the bucket script value for that bucket it can't find it (hence the invalid aggregation path) and null is returned. This issue relates to https://github.com/elastic/elasticsearch/issues/27377 which is also caused because the value is not output in the bucket when the doc count is 0.", "created_at": "2017-11-28T09:01:59Z" }, { "body": "@elastic/es-search-aggs ", "created_at": "2018-03-20T15:50:20Z" } ], "number": 27544, "title": "Cumilative Sum doesn't work with other pipeline aggregations" }
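The NPE pinpointed in the thread is plain auto-unboxing: the resolved bucket value is a boxed `Double` that can be null when a bucket was empty or the buckets path did not resolve, and adding it to a primitive accumulator blows up. A minimal reproduction of just that language-level trap (class name illustrative):

```java
// Illustrative only: the auto-unboxing NPE behind the stack trace above.
public class UnboxingNpeExample {
    public static void main(String[] args) {
        double sum = 0;
        Double thisBucketValue = null; // e.g. an empty bucket skipped by bucket_script

        try {
            sum += thisBucketValue; // unboxes null -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("unboxing a null Double throws: " + e);
        }
        System.out.println("sum is still " + sum);
    }
}
```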
{ "body": "If the cumulative sum agg encounters a null value, it's because the value is missing (like the first value from a derivative agg), the path is not valid, or the bucket in the path was empty.\r\n\r\nPreviously cusum would just explode on the null, but this changes it so we only increment the sum if the value is non-null and finite. This is safe because even if the cusum encounters all null or empty buckets, the cumulative sum is still zero (like how the sum agg returns zero even if all the docs were missing values)\r\n\r\nI went ahead and tweaked AggregatorTestCase to allow testing pipelines, so that I could delete the IT test and reimplement it as AggTests.\r\n\r\nI think this closes #27544", "number": 29641, "review_comments": [], "title": "Fix NPE when CumulativeSum agg encounters null value/empty bucket" }
{ "commits": [ { "message": "Fix NPE when CumulativeSum agg encounters null value/empty bucket\n\nIf the cusum agg encounters a null value, it's because the value is\nmissing (like the first value from a derivative agg), the path is\nnot valid, or the bucket in the path was empty.\n\nPreviously cusum would just explode on the null, but this changes it\nso we only increment the sum if the value is non-null and finite.\nThis is safe because even if the cusum encounters all null or empty\nbuckets, the cumulative sum is still zero (like how the sum agg returns\nzero even if all the docs were missing values)\n\nI went ahead and tweaked AggregatorTestCase to allow testing pipelines,\nso that I could delete the IT test and reimplement it as AggTests.\n\nCloses #27544" }, { "message": "Merge remote-tracking branch 'origin/master' into cusum_npe" }, { "message": "Merge remote-tracking branch 'origin/master' into cusum_npe" }, { "message": "[Docs] Add to changelog" }, { "message": "Merge remote-tracking branch 'origin/master' into cusum_npe" }, { "message": "Merge remote-tracking branch 'origin/master' into cusum_npe" }, { "message": "Fix changelog formatting" } ], "files": [ { "diff": "@@ -62,6 +62,8 @@ ones that the user is authorized to access in case field level security is enabl\n Fail snapshot operations early when creating or deleting a snapshot on a repository that has been\n written to by an older Elasticsearch after writing to it with a newer Elasticsearch version. ({pull}30140[#30140])\n \n+Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])\n+\n //[float]\n //=== Regressions\n \n@@ -92,6 +94,7 @@ multi-argument versions. ({pull}29623[#29623])\n \n Do not ignore request analysis/similarity settings on index resize operations when the source index already contains such settings ({pull}30216[#30216])\n \n+Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])\n \n //[float]\n //=== Regressions", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -79,11 +79,17 @@ public InternalAggregation reduce(InternalAggregation aggregation, ReduceContext\n double sum = 0;\n for (InternalMultiBucketAggregation.InternalBucket bucket : buckets) {\n Double thisBucketValue = resolveBucketValue(histo, bucket, bucketsPaths()[0], GapPolicy.INSERT_ZEROS);\n- sum += thisBucketValue;\n- List<InternalAggregation> aggs = StreamSupport.stream(bucket.getAggregations().spliterator(), false).map((p) -> {\n- return (InternalAggregation) p;\n- }).collect(Collectors.toList());\n- aggs.add(new InternalSimpleValue(name(), sum, formatter, new ArrayList<PipelineAggregator>(), metaData()));\n+\n+ // Only increment the sum if it's a finite value, otherwise \"increment by zero\" is correct\n+ if (thisBucketValue != null && thisBucketValue.isInfinite() == false && thisBucketValue.isNaN() == false) {\n+ sum += thisBucketValue;\n+ }\n+\n+ List<InternalAggregation> aggs = StreamSupport\n+ .stream(bucket.getAggregations().spliterator(), false)\n+ .map((p) -> (InternalAggregation) p)\n+ .collect(Collectors.toList());\n+ aggs.add(new InternalSimpleValue(name(), sum, formatter, new ArrayList<>(), metaData()));\n Bucket newBucket = factory.createBucket(factory.getKey(bucket), bucket.getDocCount(), new InternalAggregations(aggs));\n newBuckets.add(newBucket);\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/pipeline/cumulativesum/CumulativeSumPipelineAggregator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,316 
@@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.pipeline;\n+\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.NumericDocValuesField;\n+import org.apache.lucene.document.SortedNumericDocValuesField;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.RandomIndexWriter;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.MatchNoDocsQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.store.Directory;\n+import org.elasticsearch.common.CheckedConsumer;\n+import org.elasticsearch.index.mapper.DateFieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.NumberFieldMapper;\n+import org.elasticsearch.search.aggregations.AggregationBuilder;\n+import org.elasticsearch.search.aggregations.AggregatorTestCase;\n+import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.avg.InternalAvg;\n+import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n+import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.cumulativesum.CumulativeSumPipelineAggregationBuilder;\n+import org.elasticsearch.search.aggregations.pipeline.derivative.DerivativePipelineAggregationBuilder;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.function.Consumer;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.core.IsNull.notNullValue;\n+\n+public class CumulativeSumAggregatorTests extends AggregatorTestCase {\n+\n+ private static final String HISTO_FIELD = \"histo\";\n+ private static final String VALUE_FIELD = \"value_field\";\n+\n+ private static final List<String> datasetTimes = Arrays.asList(\n+ \"2017-01-01T01:07:45\",\n+ \"2017-01-02T03:43:34\",\n+ \"2017-01-03T04:11:00\",\n+ \"2017-01-04T05:11:31\",\n+ \"2017-01-05T08:24:05\",\n+ \"2017-01-06T13:09:32\",\n+ \"2017-01-07T13:47:43\",\n+ \"2017-01-08T16:14:34\",\n+ \"2017-01-09T17:09:50\",\n+ \"2017-01-10T22:55:46\");\n+\n+ private static final 
List<Integer> datasetValues = Arrays.asList(1,2,3,4,5,6,7,8,9,10);\n+\n+ public void testSimple() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_avg\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertEquals(10, ((Histogram)histogram).getBuckets().size());\n+ List<? extends Histogram.Bucket> buckets = ((Histogram)histogram).getBuckets();\n+ double sum = 0.0;\n+ for (Histogram.Bucket bucket : buckets) {\n+ sum += ((InternalAvg) (bucket.getAggregations().get(\"the_avg\"))).value();\n+ assertThat(((InternalSimpleValue) (bucket.getAggregations().get(\"cusum\"))).value(), equalTo(sum));\n+ }\n+ });\n+ }\n+\n+ /**\n+ * First value from a derivative is null, so this makes sure the cusum can handle that\n+ */\n+ public void testDerivative() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ DateHistogramAggregationBuilder aggBuilder = new DateHistogramAggregationBuilder(\"histo\");\n+ aggBuilder.dateHistogramInterval(DateHistogramInterval.DAY).field(HISTO_FIELD);\n+ aggBuilder.subAggregation(new AvgAggregationBuilder(\"the_avg\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new DerivativePipelineAggregationBuilder(\"the_deriv\", \"the_avg\"));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"the_deriv\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ assertEquals(10, ((Histogram)histogram).getBuckets().size());\n+ List<? extends Histogram.Bucket> buckets = ((Histogram)histogram).getBuckets();\n+ double sum = 0.0;\n+ for (int i = 0; i < buckets.size(); i++) {\n+ if (i == 0) {\n+ assertThat(((InternalSimpleValue)(buckets.get(i).getAggregations().get(\"cusum\"))).value(), equalTo(0.0));\n+ } else {\n+ sum += 1.0;\n+ assertThat(((InternalSimpleValue)(buckets.get(i).getAggregations().get(\"cusum\"))).value(), equalTo(sum));\n+ }\n+ }\n+ });\n+ }\n+\n+ public void testDocCount() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ int numDocs = randomIntBetween(6, 20);\n+ int interval = randomIntBetween(2, 5);\n+\n+ int minRandomValue = 0;\n+ int maxRandomValue = 20;\n+\n+ int numValueBuckets = ((maxRandomValue - minRandomValue) / interval) + 1;\n+ long[] valueCounts = new long[numValueBuckets];\n+\n+ HistogramAggregationBuilder aggBuilder = new HistogramAggregationBuilder(\"histo\")\n+ .field(VALUE_FIELD)\n+ .interval(interval)\n+ .extendedBounds(minRandomValue, maxRandomValue);\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"_count\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ List<? 
extends Histogram.Bucket> buckets = ((Histogram)histogram).getBuckets();\n+\n+ assertThat(buckets.size(), equalTo(numValueBuckets));\n+\n+ double sum = 0;\n+ for (int i = 0; i < numValueBuckets; ++i) {\n+ Histogram.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) i * interval));\n+ assertThat(bucket.getDocCount(), equalTo(valueCounts[i]));\n+ sum += bucket.getDocCount();\n+ InternalSimpleValue cumulativeSumValue = bucket.getAggregations().get(\"cusum\");\n+ assertThat(cumulativeSumValue, notNullValue());\n+ assertThat(cumulativeSumValue.getName(), equalTo(\"cusum\"));\n+ assertThat(cumulativeSumValue.value(), equalTo(sum));\n+ }\n+ }, indexWriter -> {\n+ Document document = new Document();\n+\n+ for (int i = 0; i < numDocs; i++) {\n+ int fieldValue = randomIntBetween(minRandomValue, maxRandomValue);\n+ document.add(new NumericDocValuesField(VALUE_FIELD, fieldValue));\n+ final int bucket = (fieldValue / interval);\n+ valueCounts[bucket]++;\n+\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ }\n+ });\n+ }\n+\n+ public void testMetric() throws IOException {\n+ Query query = new MatchAllDocsQuery();\n+\n+ int numDocs = randomIntBetween(6, 20);\n+ int interval = randomIntBetween(2, 5);\n+\n+ int minRandomValue = 0;\n+ int maxRandomValue = 20;\n+\n+ int numValueBuckets = ((maxRandomValue - minRandomValue) / interval) + 1;\n+ long[] valueCounts = new long[numValueBuckets];\n+\n+ HistogramAggregationBuilder aggBuilder = new HistogramAggregationBuilder(\"histo\")\n+ .field(VALUE_FIELD)\n+ .interval(interval)\n+ .extendedBounds(minRandomValue, maxRandomValue);\n+ aggBuilder.subAggregation(new SumAggregationBuilder(\"sum\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"sum\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ List<? 
extends Histogram.Bucket> buckets = ((Histogram)histogram).getBuckets();\n+\n+ assertThat(buckets.size(), equalTo(numValueBuckets));\n+\n+ double bucketSum = 0;\n+ for (int i = 0; i < numValueBuckets; ++i) {\n+ Histogram.Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) i * interval));\n+ Sum sum = bucket.getAggregations().get(\"sum\");\n+ assertThat(sum, notNullValue());\n+ bucketSum += sum.value();\n+\n+ InternalSimpleValue sumBucketValue = bucket.getAggregations().get(\"cusum\");\n+ assertThat(sumBucketValue, notNullValue());\n+ assertThat(sumBucketValue.getName(), equalTo(\"cusum\"));\n+ assertThat(sumBucketValue.value(), equalTo(bucketSum));\n+ }\n+ }, indexWriter -> {\n+ Document document = new Document();\n+\n+ for (int i = 0; i < numDocs; i++) {\n+ int fieldValue = randomIntBetween(minRandomValue, maxRandomValue);\n+ document.add(new NumericDocValuesField(VALUE_FIELD, fieldValue));\n+ final int bucket = (fieldValue / interval);\n+ valueCounts[bucket]++;\n+\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ }\n+ });\n+ }\n+\n+ public void testNoBuckets() throws IOException {\n+ int numDocs = randomIntBetween(6, 20);\n+ int interval = randomIntBetween(2, 5);\n+\n+ int minRandomValue = 0;\n+ int maxRandomValue = 20;\n+\n+ int numValueBuckets = ((maxRandomValue - minRandomValue) / interval) + 1;\n+ long[] valueCounts = new long[numValueBuckets];\n+\n+ Query query = new MatchNoDocsQuery();\n+\n+ HistogramAggregationBuilder aggBuilder = new HistogramAggregationBuilder(\"histo\")\n+ .field(VALUE_FIELD)\n+ .interval(interval);\n+ aggBuilder.subAggregation(new SumAggregationBuilder(\"sum\").field(VALUE_FIELD));\n+ aggBuilder.subAggregation(new CumulativeSumPipelineAggregationBuilder(\"cusum\", \"sum\"));\n+\n+ executeTestCase(query, aggBuilder, histogram -> {\n+ List<? 
extends Histogram.Bucket> buckets = ((Histogram)histogram).getBuckets();\n+\n+ assertThat(buckets.size(), equalTo(0));\n+\n+ }, indexWriter -> {\n+ Document document = new Document();\n+\n+ for (int i = 0; i < numDocs; i++) {\n+ int fieldValue = randomIntBetween(minRandomValue, maxRandomValue);\n+ document.add(new NumericDocValuesField(VALUE_FIELD, fieldValue));\n+ final int bucket = (fieldValue / interval);\n+ valueCounts[bucket]++;\n+\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ }\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify) throws IOException {\n+ executeTestCase(query, aggBuilder, verify, indexWriter -> {\n+ Document document = new Document();\n+ int counter = 0;\n+ for (String date : datasetTimes) {\n+ if (frequently()) {\n+ indexWriter.commit();\n+ }\n+\n+ long instant = asLong(date);\n+ document.add(new SortedNumericDocValuesField(HISTO_FIELD, instant));\n+ document.add(new NumericDocValuesField(VALUE_FIELD, datasetValues.get(counter)));\n+ indexWriter.addDocument(document);\n+ document.clear();\n+ counter += 1;\n+ }\n+ });\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void executeTestCase(Query query, AggregationBuilder aggBuilder, Consumer<InternalAggregation> verify,\n+ CheckedConsumer<RandomIndexWriter, IOException> setup) throws IOException {\n+\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ setup.accept(indexWriter);\n+ }\n+\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ IndexSearcher indexSearcher = newSearcher(indexReader, true, true);\n+\n+ DateFieldMapper.Builder builder = new DateFieldMapper.Builder(\"_name\");\n+ DateFieldMapper.DateFieldType fieldType = builder.fieldType();\n+ fieldType.setHasDocValues(true);\n+ fieldType.setName(HISTO_FIELD);\n+\n+ MappedFieldType valueFieldType = new NumberFieldMapper.NumberFieldType(NumberFieldMapper.NumberType.LONG);\n+ valueFieldType.setHasDocValues(true);\n+ valueFieldType.setName(\"value_field\");\n+\n+ InternalAggregation histogram;\n+ histogram = searchAndReduce(indexSearcher, query, aggBuilder, new MappedFieldType[]{fieldType, valueFieldType});\n+ verify.accept(histogram);\n+ }\n+ }\n+ }\n+\n+ private static long asLong(String dateTime) {\n+ return DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.parser().parseDateTime(dateTime).getMillis();\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/pipeline/CumulativeSumAggregatorTests.java", "status": "added" }, { "diff": "@@ -61,6 +61,8 @@\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache;\n import org.elasticsearch.mock.orig.Mockito;\n+import org.elasticsearch.search.aggregations.MultiBucketConsumerService.MultiBucketConsumer;\n+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.fetch.FetchPhase;\n import org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase;\n import org.elasticsearch.search.fetch.subphase.FetchSourceSubPhase;\n@@ -79,6 +81,7 @@\n import java.util.Collections;\n import java.util.List;\n \n+import static org.elasticsearch.test.InternalAggregationTestCase.DEFAULT_MAX_BUCKETS;\n import static org.mockito.Matchers.anyObject;\n import static org.mockito.Matchers.anyString;\n import static org.mockito.Mockito.doAnswer;\n@@ 
-369,6 +372,11 @@ protected <A extends InternalAggregation, C extends Aggregator> A searchAndReduc\n \n @SuppressWarnings(\"unchecked\")\n A internalAgg = (A) aggs.get(0).doReduce(aggs, context);\n+ if (internalAgg.pipelineAggregators().size() > 0) {\n+ for (PipelineAggregator pipelineAggregator : internalAgg.pipelineAggregators()) {\n+ internalAgg = (A) pipelineAggregator.reduce(internalAgg, context);\n+ }\n+ }\n InternalAggregationTestCase.assertMultiBucketConsumer(internalAgg, reduceBucketConsumer);\n return internalAgg;\n }", "filename": "test/framework/src/main/java/org/elasticsearch/search/aggregations/AggregatorTestCase.java", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --5.6.4`):\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`1.8`):\r\n\r\n**OS version** (Ubuntu):\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nSnifferBuilder.setSniffAfterFailureDelayMillis is not honored and not called based on the supplied interval. Instead the interval sniffer is using is SnifferBuilder.setSniffIntervalMillis\r\n\r\n**Steps to reproduce**:\r\n\r\nUse following code which was taken from elastic doc [:](https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/_usage.html)\r\n\r\n```\r\nSniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();\r\nRestClient restClient = RestClient.builder(new HttpHost(\"localhost\", 9200))\r\n .setFailureListener(sniffOnFailureListener) \r\n .build();\r\nSniffer sniffer = Sniffer.builder(restClient)\r\n .setSniffAfterFailureDelayMillis(30000) \r\n .build();\r\nsniffOnFailureListener.setSniffer(sniffer); \r\n```\r\n\r\nI think the reason of the bug is in method `Sniffer.Task.sniff()`.\r\nThe first call to `Sniffer.Task.sniff()` is done for `setSniffIntervalMillis` timeout which is blocked in `hostsSniffer.sniffHosts()`.\r\nDuring that time, for each failed node, `Sniffer.Task.sniff()` is called with `setSniffAfterFailureDelayMillis` timeout. Since the first call is still blocking in `Sniffer.Task.sniff()` it cannot call `scheduleNextRun` with `setSniffAfterFailureDelayMillis` because `if (running.compareAndSet(false, true))` condition prevents it from. When all failed nodes are sniffed, `hostsSniffer.sniffHosts()` returns and set the time again to `setSniffIntervalMillis`.\r\n\r\nI think the in the Exception statement, you should set `nextSniffDelayMillis` to be equal to `sniffAfterFailureDelayMillis` and in case of success set `nextSniffDelayMillis` to `setSniffIntervalMillis`.\r\n\r\n", "comments": [ { "body": "@javanna can you please have a look at this one?", "created_at": "2017-12-07T09:14:00Z" }, { "body": "After reading this multiple times, I think what it boils down to is that when sniff on failure is configured, and a failure happens (the failure listener is notified of a failure) while another sniff round is already running, the sniff on failure round will not run (which is ok) but the following sniff round will be scheduled with the ordinary delay (`sniffInterval`) instead of `sniffAfterFailureDelay`, which is a bug. I've been working on adding tests for this and fixing it. Stay tuned.", "created_at": "2018-03-30T19:55:23Z" } ], "number": 27697, "title": "SnifferBuilder.sniffOnFailureListener() not honored" }
{ "body": "This PR reworks the `Sniffer` component to simplify it and make it possible to test it. \r\n\r\nIn particular, it no longer takes out the host that failed when sniffing on failure, but rather relies on whatever the cluster returns. This is the result of some valid comments from #27985. Taking out one single host is too naive, hard to test and debug.\r\n\r\nA new `Scheduler` abstraction is introduced to abstract the tasks scheduling away and make it possible to plug in any test implementation and take out timing aspects when testing.\r\n\r\nConcurrency aspects have also been improved, synchronized methods are no longer required. At the same time, we were able to take #27697 and #25701 into account and fix them, especially now that we can more easily add tests. \r\n\r\nLast but not least, good news is that we now have unit tests for the `Sniffer` component, long overdue.\r\n\r\nCloses #27697\r\nCloses #25701", "number": 29638, "review_comments": [ { "body": "onFailure behaviour is really hard to test, too moving parts, so I went for testing different aspects of it with the following three test methods. Suggestions are welcome on whether we think coverage is good enough. Certainly better than before as it was 0% up until now :(", "created_at": "2018-04-20T15:39:44Z" }, { "body": "I wonder whether it makes sense to add a test that uses the sniffer with on failure on against a proper http server. but that would add timing aspects which are not desirable. My concern is that we never test the sniffer in real-life. Maybe it would be nice to turn sniffing on when executing yaml tests?", "created_at": "2018-04-20T15:40:54Z" }, { "body": "When I was playing with cancellation as part of reindex I found that canceling a Runnable was sort of \"best effort\". If you make a test that calls `sniffOnFailure` a bunch of time really fast together I'll bet you get multiple rounds of sniffing in parallel.", "created_at": "2018-04-20T16:06:54Z" }, { "body": "Maybe drop this method and call `scheduler.schedule(theTask, delay)` all the places?", "created_at": "2018-04-20T16:10:13Z" }, { "body": "Probably best to wrap this `if (logger.isDebugEnabled())` so we don't build the string if we don't need it.", "created_at": "2018-04-20T16:10:50Z" }, { "body": "Do you want to include something in this message so it is easier to debug?", "created_at": "2018-04-20T16:12:37Z" }, { "body": ":+1:", "created_at": "2018-04-20T16:14:21Z" }, { "body": "I guess it depends on whether the runnable has already started or not? That is what I've seen, but hard to test in real-life though from a unit test...", "created_at": "2018-04-20T18:55:33Z" }, { "body": "makes sense", "created_at": "2018-04-20T18:56:49Z" }, { "body": "I think this change is worse? Maybe revert it?", "created_at": "2018-05-29T20:26:37Z" }, { "body": "Could you add the traditional ` * ` before each line of the comment?", "created_at": "2018-05-29T20:28:32Z" }, { "body": "Maybe \"reschedule sniffing to run as soon as possible if it isn't already running\".", "created_at": "2018-05-29T20:31:37Z" }, { "body": "I'd do\r\n```\r\ntask.cancel(false);\r\nreturn task.skip();\r\n```\r\n\r\nI don't think it is worth relying on `cancel` returning `true` in the case when we want to cancel. 
Maybe I'm overly paranoid but I don't trust it.", "created_at": "2018-05-29T20:36:32Z" }, { "body": "And I might rename this method to `skip`.", "created_at": "2018-05-29T20:36:44Z" }, { "body": "I think we can manage that in a followup.", "created_at": "2018-05-29T20:38:13Z" }, { "body": "yep sorry", "created_at": "2018-05-30T09:38:18Z" }, { "body": "sure", "created_at": "2018-05-30T09:38:28Z" }, { "body": "I agree! I have tests that verify this and I ran them many times but I am playing with fire here. After all, if cancel returns false whenever a task has already run, its corresponding state change will also fail. With this change all tests are still green so I think it's a good one to stay on the safe side.", "created_at": "2018-05-30T09:48:03Z" }, { "body": "++", "created_at": "2018-05-30T09:56:53Z" } ], "title": "Refactor Sniffer and make it testable" }
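The key testability change is the `Scheduler` seam described here: production code keeps scheduling the next round on a single-threaded `ScheduledExecutorService`, while tests can inject an implementation that runs or records tasks deterministically. A simplified, self-contained sketch of that pattern follows; the names are illustrative (the real interface is `Sniffer.Scheduler` and it takes a `Sniffer.Task` rather than a plain `Runnable`):

```
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface Scheduler {
    Future<?> schedule(Runnable task, long delayMillis);
    void shutdown();
}

// Production flavour: delegate to a single-threaded scheduled executor.
final class ExecutorScheduler implements Scheduler {
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);

    @Override
    public Future<?> schedule(Runnable task, long delayMillis) {
        return executor.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
    }

    @Override
    public void shutdown() {
        executor.shutdown();
    }
}

// Test flavour: run the task immediately on the calling thread so tests never depend on timing.
final class ImmediateScheduler implements Scheduler {
    @Override
    public Future<?> schedule(Runnable task, long delayMillis) {
        FutureTask<Void> future = new FutureTask<>(task, null);
        future.run();
        return future;
    }

    @Override
    public void shutdown() {
    }
}
```

This is the same injection point the tests in the diff below rely on: `SnifferTests` passes a custom `Scheduler` that submits tasks to its own executor and asserts on the requested delays.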
{ "commits": [ { "message": "Simplify `Sniffer` and begin testing it properly\n\nIntroduced `Scheduler` abstraction to make `Sniffer` testable so that tasks scheduling is isolated and easily mockable.\n\nAlso added a bunch of TODOs mainly around new tests that should be added soon-ish." }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "some improvements" }, { "message": "tests rewritten" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "refactor and add tests" }, { "message": "improve tests" }, { "message": "address first review comments" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "adapt NodeFailureListener" }, { "message": "fixed warnings" }, { "message": "Adapt HttpExporter" }, { "message": "adapt NodeFailureListenerTests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Add tests around cancelling tasks and make impl more robust" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "update comments" }, { "message": "addressed comments" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" } ], "files": [ { "diff": "@@ -61,6 +61,7 @@\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Iterator;\n+import java.util.LinkedHashSet;\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n@@ -132,7 +133,7 @@ public synchronized void setHosts(HttpHost... hosts) {\n if (hosts == null || hosts.length == 0) {\n throw new IllegalArgumentException(\"hosts must not be null nor empty\");\n }\n- Set<HttpHost> httpHosts = new HashSet<>();\n+ Set<HttpHost> httpHosts = new LinkedHashSet<>();\n AuthCache authCache = new BasicAuthCache();\n for (HttpHost host : hosts) {\n Objects.requireNonNull(host, \"host cannot be null\");\n@@ -143,6 +144,13 @@ public synchronized void setHosts(HttpHost... 
hosts) {\n this.blacklist.clear();\n }\n \n+ /**\n+ * Returns the configured hosts\n+ */\n+ public List<HttpHost> getHosts() {\n+ return new ArrayList<>(hostTuple.hosts);\n+ }\n+\n /**\n * Sends a request to the Elasticsearch cluster that the client points to.\n * Blocks until the request is completed and returns its response or fails", "filename": "client/rest/src/main/java/org/elasticsearch/client/RestClient.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n \n import java.io.IOException;\n import java.net.URI;\n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n@@ -251,6 +252,37 @@ public void testSetHostsWrongArguments() throws IOException {\n }\n }\n \n+ public void testSetHostsPreservesOrdering() throws Exception {\n+ try (RestClient restClient = createRestClient()) {\n+ HttpHost[] hosts = randomHosts();\n+ restClient.setHosts(hosts);\n+ assertEquals(Arrays.asList(hosts), restClient.getHosts());\n+ }\n+ }\n+\n+ private static HttpHost[] randomHosts() {\n+ int numHosts = randomIntBetween(1, 10);\n+ HttpHost[] hosts = new HttpHost[numHosts];\n+ for (int i = 0; i < hosts.length; i++) {\n+ hosts[i] = new HttpHost(\"host-\" + i, 9200);\n+ }\n+ return hosts;\n+ }\n+\n+ public void testSetHostsDuplicatedHosts() throws Exception {\n+ try (RestClient restClient = createRestClient()) {\n+ int numHosts = randomIntBetween(1, 10);\n+ HttpHost[] hosts = new HttpHost[numHosts];\n+ HttpHost host = new HttpHost(\"host\", 9200);\n+ for (int i = 0; i < hosts.length; i++) {\n+ hosts[i] = host;\n+ }\n+ restClient.setHosts(hosts);\n+ assertEquals(1, restClient.getHosts().size());\n+ assertEquals(host, restClient.getHosts().get(0));\n+ }\n+ }\n+\n /**\n * @deprecated will remove method in 7.0 but needs tests until then. 
Replaced by {@link RequestTests#testConstructor()}.\n */", "filename": "client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java", "status": "modified" }, { "diff": "@@ -58,7 +58,6 @@ public void onFailure(HttpHost host) {\n if (sniffer == null) {\n throw new IllegalStateException(\"sniffer was not set, unable to sniff on failure\");\n }\n- //re-sniff immediately but take out the node that failed\n- sniffer.sniffOnFailure(host);\n+ sniffer.sniffOnFailure();\n }\n }", "filename": "client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java", "status": "modified" }, { "diff": "@@ -31,12 +31,14 @@\n import java.security.PrivilegedAction;\n import java.util.List;\n import java.util.concurrent.Executors;\n+import java.util.concurrent.Future;\n import java.util.concurrent.ScheduledExecutorService;\n-import java.util.concurrent.ScheduledFuture;\n+import java.util.concurrent.ScheduledThreadPoolExecutor;\n import java.util.concurrent.ThreadFactory;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n * Class responsible for sniffing nodes from some source (default is elasticsearch itself) and setting them to a provided instance of\n@@ -51,101 +53,175 @@ public class Sniffer implements Closeable {\n private static final Log logger = LogFactory.getLog(Sniffer.class);\n private static final String SNIFFER_THREAD_NAME = \"es_rest_client_sniffer\";\n \n- private final Task task;\n+ private final HostsSniffer hostsSniffer;\n+ private final RestClient restClient;\n+ private final long sniffIntervalMillis;\n+ private final long sniffAfterFailureDelayMillis;\n+ private final Scheduler scheduler;\n+ private final AtomicBoolean initialized = new AtomicBoolean(false);\n+ private volatile ScheduledTask nextScheduledTask;\n \n Sniffer(RestClient restClient, HostsSniffer hostsSniffer, long sniffInterval, long sniffAfterFailureDelay) {\n- this.task = new Task(hostsSniffer, restClient, sniffInterval, sniffAfterFailureDelay);\n+ this(restClient, hostsSniffer, new DefaultScheduler(), sniffInterval, sniffAfterFailureDelay);\n+ }\n+\n+ Sniffer(RestClient restClient, HostsSniffer hostsSniffer, Scheduler scheduler, long sniffInterval, long sniffAfterFailureDelay) {\n+ this.hostsSniffer = hostsSniffer;\n+ this.restClient = restClient;\n+ this.sniffIntervalMillis = sniffInterval;\n+ this.sniffAfterFailureDelayMillis = sniffAfterFailureDelay;\n+ this.scheduler = scheduler;\n+ /*\n+ * The first sniffing round is async, so this constructor returns before nextScheduledTask is assigned to a task.\n+ * The initialized flag is a protection against NPE due to that.\n+ */\n+ Task task = new Task(sniffIntervalMillis) {\n+ @Override\n+ public void run() {\n+ super.run();\n+ initialized.compareAndSet(false, true);\n+ }\n+ };\n+ /*\n+ * We do not keep track of the returned future as we never intend to cancel the initial sniffing round, we rather\n+ * prevent any other operation from being executed till the sniffer is properly initialized\n+ */\n+ scheduler.schedule(task, 0L);\n }\n \n /**\n- * Triggers a new sniffing round and explicitly takes out the failed host provided as argument\n+ * Schedule sniffing to run as soon as possible if it isn't already running. 
Once such sniffing round runs\n+ * it will also schedule a new round after sniffAfterFailureDelay ms.\n */\n- public void sniffOnFailure(HttpHost failedHost) {\n- this.task.sniffOnFailure(failedHost);\n+ public void sniffOnFailure() {\n+ //sniffOnFailure does nothing until the initial sniffing round has been completed\n+ if (initialized.get()) {\n+ /*\n+ * If sniffing is already running, there is no point in scheduling another round right after the current one.\n+ * Concurrent calls may be checking the same task state, but only the first skip call on the same task returns true.\n+ * The task may also get replaced while we check its state, in which case calling skip on it returns false.\n+ */\n+ if (this.nextScheduledTask.skip()) {\n+ /*\n+ * We do not keep track of this future as the task will immediately run and we don't intend to cancel it\n+ * due to concurrent sniffOnFailure runs. Effectively the previous (now cancelled or skipped) task will stay\n+ * assigned to nextTask till this onFailure round gets run and schedules its corresponding afterFailure round.\n+ */\n+ scheduler.schedule(new Task(sniffAfterFailureDelayMillis), 0L);\n+ }\n+ }\n }\n \n- @Override\n- public void close() throws IOException {\n- task.shutdown();\n+ enum TaskState {\n+ WAITING, SKIPPED, STARTED\n }\n \n- private static class Task implements Runnable {\n- private final HostsSniffer hostsSniffer;\n- private final RestClient restClient;\n-\n- private final long sniffIntervalMillis;\n- private final long sniffAfterFailureDelayMillis;\n- private final ScheduledExecutorService scheduledExecutorService;\n- private final AtomicBoolean running = new AtomicBoolean(false);\n- private ScheduledFuture<?> scheduledFuture;\n-\n- private Task(HostsSniffer hostsSniffer, RestClient restClient, long sniffIntervalMillis, long sniffAfterFailureDelayMillis) {\n- this.hostsSniffer = hostsSniffer;\n- this.restClient = restClient;\n- this.sniffIntervalMillis = sniffIntervalMillis;\n- this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis;\n- SnifferThreadFactory threadFactory = new SnifferThreadFactory(SNIFFER_THREAD_NAME);\n- this.scheduledExecutorService = Executors.newScheduledThreadPool(1, threadFactory);\n- scheduleNextRun(0);\n- }\n-\n- synchronized void scheduleNextRun(long delayMillis) {\n- if (scheduledExecutorService.isShutdown() == false) {\n- try {\n- if (scheduledFuture != null) {\n- //regardless of when the next sniff is scheduled, cancel it and schedule a new one with updated delay\n- this.scheduledFuture.cancel(false);\n- }\n- logger.debug(\"scheduling next sniff in \" + delayMillis + \" ms\");\n- this.scheduledFuture = this.scheduledExecutorService.schedule(this, delayMillis, TimeUnit.MILLISECONDS);\n- } catch(Exception e) {\n- logger.error(\"error while scheduling next sniffer task\", e);\n- }\n- }\n+ class Task implements Runnable {\n+ final long nextTaskDelay;\n+ final AtomicReference<TaskState> taskState = new AtomicReference<>(TaskState.WAITING);\n+\n+ Task(long nextTaskDelay) {\n+ this.nextTaskDelay = nextTaskDelay;\n }\n \n @Override\n public void run() {\n- sniff(null, sniffIntervalMillis);\n- }\n-\n- void sniffOnFailure(HttpHost failedHost) {\n- sniff(failedHost, sniffAfterFailureDelayMillis);\n- }\n-\n- void sniff(HttpHost excludeHost, long nextSniffDelayMillis) {\n- if (running.compareAndSet(false, true)) {\n- try {\n- List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();\n- logger.debug(\"sniffed hosts: \" + sniffedHosts);\n- if (excludeHost != null) {\n- sniffedHosts.remove(excludeHost);\n- 
}\n- if (sniffedHosts.isEmpty()) {\n- logger.warn(\"no hosts to set, hosts will be updated at the next sniffing round\");\n- } else {\n- this.restClient.setHosts(sniffedHosts.toArray(new HttpHost[sniffedHosts.size()]));\n- }\n- } catch (Exception e) {\n- logger.error(\"error while sniffing nodes\", e);\n- } finally {\n- scheduleNextRun(nextSniffDelayMillis);\n- running.set(false);\n- }\n+ /*\n+ * Skipped or already started tasks do nothing. In most cases tasks will be cancelled and not run, but we want to protect for\n+ * cases where future#cancel returns true yet the task runs. We want to make sure that such tasks do nothing otherwise they will\n+ * schedule another round at the end and so on, leaving us with multiple parallel sniffing \"tracks\" whish is undesirable.\n+ */\n+ if (taskState.compareAndSet(TaskState.WAITING, TaskState.STARTED) == false) {\n+ return;\n }\n- }\n-\n- synchronized void shutdown() {\n- scheduledExecutorService.shutdown();\n try {\n- if (scheduledExecutorService.awaitTermination(1000, TimeUnit.MILLISECONDS)) {\n- return;\n- }\n- scheduledExecutorService.shutdownNow();\n- } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n+ sniff();\n+ } catch (Exception e) {\n+ logger.error(\"error while sniffing nodes\", e);\n+ } finally {\n+ Task task = new Task(sniffIntervalMillis);\n+ Future<?> future = scheduler.schedule(task, nextTaskDelay);\n+ //tasks are run by a single threaded executor, so swapping is safe with a simple volatile variable\n+ ScheduledTask previousTask = nextScheduledTask;\n+ nextScheduledTask = new ScheduledTask(task, future);\n+ assert initialized.get() == false ||\n+ previousTask.task.isSkipped() || previousTask.task.hasStarted() : \"task that we are replacing is neither \" +\n+ \"cancelled nor has it ever started\";\n }\n }\n+\n+ /**\n+ * Returns true if the task has started, false in case it didn't start (yet?) or it was skipped\n+ */\n+ boolean hasStarted() {\n+ return taskState.get() == TaskState.STARTED;\n+ }\n+\n+ /**\n+ * Sets this task to be skipped. Returns true if the task will be skipped, false if the task has already started.\n+ */\n+ boolean skip() {\n+ /*\n+ * Threads may still get run although future#cancel returns true. We make sure that a task is either cancelled (or skipped),\n+ * or entirely run. In the odd case that future#cancel returns true and the thread still runs, the task won't do anything.\n+ * In case future#cancel returns true but the task has already started, this state change will not succeed hence this method\n+ * returns false and the task will normally run.\n+ */\n+ return taskState.compareAndSet(TaskState.WAITING, TaskState.SKIPPED);\n+ }\n+\n+ /**\n+ * Returns true if the task was set to be skipped before it was started\n+ */\n+ boolean isSkipped() {\n+ return taskState.get() == TaskState.SKIPPED;\n+ }\n+ }\n+\n+ static final class ScheduledTask {\n+ final Task task;\n+ final Future<?> future;\n+\n+ ScheduledTask(Task task, Future<?> future) {\n+ this.task = task;\n+ this.future = future;\n+ }\n+\n+ /**\n+ * Cancels this task. Returns true if the task has been successfully cancelled, meaning it won't be executed\n+ * or if it is its execution won't have any effect. Returns false if the task cannot be cancelled (possibly it was\n+ * already cancelled or already completed).\n+ */\n+ boolean skip() {\n+ /*\n+ * Future#cancel should return false whenever a task cannot be cancelled, most likely as it has already started. 
We don't\n+ * trust it much though so we try to cancel hoping that it will work. At the same time we always call skip too, which means\n+ * that if the task has already started the state change will fail. We could potentially not call skip when cancel returns\n+ * false but we prefer to stay on the safe side.\n+ */\n+ future.cancel(false);\n+ return task.skip();\n+ }\n+ }\n+\n+ final void sniff() throws IOException {\n+ List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"sniffed hosts: \" + sniffedHosts);\n+ }\n+ if (sniffedHosts.isEmpty()) {\n+ logger.warn(\"no hosts to set, hosts will be updated at the next sniffing round\");\n+ } else {\n+ restClient.setHosts(sniffedHosts.toArray(new HttpHost[sniffedHosts.size()]));\n+ }\n+ }\n+\n+ @Override\n+ public void close() {\n+ if (initialized.get()) {\n+ nextScheduledTask.skip();\n+ }\n+ this.scheduler.shutdown();\n }\n \n /**\n@@ -158,8 +234,62 @@ public static SnifferBuilder builder(RestClient restClient) {\n return new SnifferBuilder(restClient);\n }\n \n- private static class SnifferThreadFactory implements ThreadFactory {\n+ /**\n+ * The Scheduler interface allows to isolate the sniffing scheduling aspects so that we can test\n+ * the sniffer by injecting when needed a custom scheduler that is more suited for testing.\n+ */\n+ interface Scheduler {\n+ /**\n+ * Schedules the provided {@link Runnable} to be executed in <code>delayMillis</code> milliseconds\n+ */\n+ Future<?> schedule(Task task, long delayMillis);\n+\n+ /**\n+ * Shuts this scheduler down\n+ */\n+ void shutdown();\n+ }\n+\n+ /**\n+ * Default implementation of {@link Scheduler}, based on {@link ScheduledExecutorService}\n+ */\n+ static final class DefaultScheduler implements Scheduler {\n+ final ScheduledExecutorService executor;\n+\n+ DefaultScheduler() {\n+ this(initScheduledExecutorService());\n+ }\n+\n+ DefaultScheduler(ScheduledExecutorService executor) {\n+ this.executor = executor;\n+ }\n+\n+ private static ScheduledExecutorService initScheduledExecutorService() {\n+ ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1, new SnifferThreadFactory(SNIFFER_THREAD_NAME));\n+ executor.setRemoveOnCancelPolicy(true);\n+ return executor;\n+ }\n+\n+ @Override\n+ public Future<?> schedule(Task task, long delayMillis) {\n+ return executor.schedule(task, delayMillis, TimeUnit.MILLISECONDS);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ executor.shutdown();\n+ try {\n+ if (executor.awaitTermination(1000, TimeUnit.MILLISECONDS)) {\n+ return;\n+ }\n+ executor.shutdownNow();\n+ } catch (InterruptedException ignore) {\n+ Thread.currentThread().interrupt();\n+ }\n+ }\n+ }\n \n+ static class SnifferThreadFactory implements ThreadFactory {\n private final AtomicInteger threadNumber = new AtomicInteger(1);\n private final String namePrefix;\n private final ThreadFactory originalThreadFactory;", "filename": "client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java", "status": "modified" }, { "diff": "@@ -21,7 +21,6 @@\n \n import org.apache.http.HttpHost;\n \n-import java.io.IOException;\n import java.util.Collections;\n import java.util.List;\n \n@@ -30,7 +29,7 @@\n */\n class MockHostsSniffer implements HostsSniffer {\n @Override\n- public List<HttpHost> sniffHosts() throws IOException {\n+ public List<HttpHost> sniffHosts() {\n return Collections.singletonList(new HttpHost(\"localhost\", 9200));\n }\n }", "filename": 
"client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java", "status": "modified" }, { "diff": "@@ -0,0 +1,656 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.client.sniff;\n+\n+import org.apache.http.HttpHost;\n+import org.elasticsearch.client.RestClient;\n+import org.elasticsearch.client.RestClientTestCase;\n+import org.elasticsearch.client.sniff.Sniffer.DefaultScheduler;\n+import org.elasticsearch.client.sniff.Sniffer.Scheduler;\n+import org.mockito.Matchers;\n+import org.mockito.invocation.InvocationOnMock;\n+import org.mockito.stubbing.Answer;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.CancellationException;\n+import java.util.concurrent.CopyOnWriteArraySet;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n+import java.util.concurrent.Future;\n+import java.util.concurrent.ScheduledExecutorService;\n+import java.util.concurrent.ScheduledFuture;\n+import java.util.concurrent.ScheduledThreadPoolExecutor;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+import static org.hamcrest.CoreMatchers.instanceOf;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.junit.Assert.assertEquals;\n+import static org.junit.Assert.assertFalse;\n+import static org.junit.Assert.assertNotEquals;\n+import static org.junit.Assert.assertNull;\n+import static org.junit.Assert.assertSame;\n+import static org.junit.Assert.assertThat;\n+import static org.junit.Assert.assertTrue;\n+import static org.junit.Assert.fail;\n+import static org.mockito.Matchers.any;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.times;\n+import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.verifyNoMoreInteractions;\n+import static org.mockito.Mockito.when;\n+\n+public class SnifferTests extends RestClientTestCase {\n+\n+ /**\n+ * Tests the {@link Sniffer#sniff()} method in isolation. 
Verifies that it uses the {@link HostsSniffer} implementation\n+ * to retrieve nodes and set them (when not empty) to the provided {@link RestClient} instance.\n+ */\n+ public void testSniff() throws IOException {\n+ HttpHost initialHost = new HttpHost(\"localhost\", 9200);\n+ try (RestClient restClient = RestClient.builder(initialHost).build()) {\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return mock(Future.class);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+\n+ }\n+ };\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ int iters = randomIntBetween(5, 30);\n+ try (Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 1000L, -1)){\n+ {\n+ assertEquals(1, restClient.getHosts().size());\n+ HttpHost httpHost = restClient.getHosts().get(0);\n+ assertEquals(\"localhost\", httpHost.getHostName());\n+ assertEquals(9200, httpHost.getPort());\n+ }\n+ int emptyList = 0;\n+ int failures = 0;\n+ int runs = 0;\n+ List<HttpHost> lastHosts = Collections.singletonList(initialHost);\n+ for (int i = 0; i < iters; i++) {\n+ try {\n+ runs++;\n+ sniffer.sniff();\n+ if (hostsSniffer.failures.get() > failures) {\n+ failures++;\n+ fail(\"should have failed given that hostsSniffer says it threw an exception\");\n+ } else if (hostsSniffer.emptyList.get() > emptyList) {\n+ emptyList++;\n+ assertEquals(lastHosts, restClient.getHosts());\n+ } else {\n+ assertNotEquals(lastHosts, restClient.getHosts());\n+ List<HttpHost> expectedHosts = CountingHostsSniffer.buildHosts(runs);\n+ assertEquals(expectedHosts, restClient.getHosts());\n+ lastHosts = restClient.getHosts();\n+ }\n+ } catch(IOException e) {\n+ if (hostsSniffer.failures.get() > failures) {\n+ failures++;\n+ assertEquals(\"communication breakdown\", e.getMessage());\n+ }\n+ }\n+ }\n+ assertEquals(hostsSniffer.emptyList.get(), emptyList);\n+ assertEquals(hostsSniffer.failures.get(), failures);\n+ assertEquals(hostsSniffer.runs.get(), runs);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Test multiple sniffing rounds by mocking the {@link Scheduler} as well as the {@link HostsSniffer}.\n+ * Simulates the ordinary behaviour of {@link Sniffer} when sniffing on failure is not enabled.\n+ * The {@link CountingHostsSniffer} doesn't make any network connection but may throw exception or return no hosts, which makes\n+ * it possible to verify that errors are properly handled and don't affect subsequent runs and their scheduling.\n+ * The {@link Scheduler} implementation submits rather than scheduling tasks, meaning that it doesn't respect the requested sniff\n+ * delays while allowing to assert that the requested delays for each requested run and the following one are the expected values.\n+ */\n+ public void testOrdinarySniffRounds() throws Exception {\n+ final long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ final int iters = randomIntBetween(30, 100);\n+ final Set<Future<?>> futures = new CopyOnWriteArraySet<>();\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+ final AtomicInteger runs = new AtomicInteger(iters);\n+ final ExecutorService executor = Executors.newSingleThreadExecutor();\n+ final AtomicReference<Future<?>> lastFuture = new AtomicReference<>();\n+ final AtomicReference<Sniffer.Task> lastTask = new AtomicReference<>();\n+ 
Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ int numberOfRuns = runs.getAndDecrement();\n+ if (numberOfRuns == iters) {\n+ //the first call is to schedule the first sniff round from the Sniffer constructor, with delay O\n+ assertEquals(0L, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ } else {\n+ //all of the subsequent times \"schedule\" is called with delay set to the configured sniff interval\n+ assertEquals(sniffInterval, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ if (numberOfRuns == 0) {\n+ completionLatch.countDown();\n+ return null;\n+ }\n+ }\n+ //we submit rather than scheduling to make the test quick and not depend on time\n+ Future<?> future = executor.submit(task);\n+ futures.add(future);\n+ if (numberOfRuns == 1) {\n+ lastFuture.set(future);\n+ lastTask.set(task);\n+ }\n+ return future;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ //the executor is closed externally, shutdown is tested separately\n+ }\n+ };\n+ try {\n+ new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertTrue(\"timeout waiting for sniffing rounds to be completed\", completionLatch.await(1000, TimeUnit.MILLISECONDS));\n+ assertEquals(iters, futures.size());\n+ //the last future is the only one that may not be completed yet, as the count down happens\n+ //while scheduling the next round which is still part of the execution of the runnable itself.\n+ assertTrue(lastTask.get().hasStarted());\n+ lastFuture.get().get();\n+ for (Future<?> future : futures) {\n+ assertTrue(future.isDone());\n+ future.get();\n+ }\n+ } finally {\n+ executor.shutdown();\n+ assertTrue(executor.awaitTermination(1000, TimeUnit.MILLISECONDS));\n+ }\n+ int totalRuns = hostsSniffer.runs.get();\n+ assertEquals(iters, totalRuns);\n+ int setHostsRuns = totalRuns - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ }\n+\n+ /**\n+ * Test that {@link Sniffer#close()} shuts down the underlying {@link Scheduler}, and that such calls are idempotent.\n+ * Also verifies that the next scheduled round gets cancelled.\n+ */\n+ public void testClose() {\n+ final Future<?> future = mock(Future.class);\n+ long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ RestClient restClient = mock(RestClient.class);\n+ final AtomicInteger shutdown = new AtomicInteger(0);\n+ final AtomicBoolean initialized = new AtomicBoolean(false);\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ if (initialized.compareAndSet(false, true)) {\n+ //run from the same thread so the sniffer gets for sure initialized and the scheduled task gets cancelled on close\n+ task.run();\n+ }\n+ return future;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ shutdown.incrementAndGet();\n+ }\n+ };\n+\n+ Sniffer sniffer = new Sniffer(restClient, new MockHostsSniffer(), scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertEquals(0, shutdown.get());\n+ int iters = randomIntBetween(3, 10);\n+ for (int i = 1; i <= iters; i++) {\n+ sniffer.close();\n+ verify(future, times(i)).cancel(false);\n+ assertEquals(i, shutdown.get());\n+ }\n+ }\n+\n+ public void 
testSniffOnFailureNotInitialized() {\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ final AtomicInteger scheduleCalls = new AtomicInteger(0);\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ scheduleCalls.incrementAndGet();\n+ return null;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ for (int i = 0; i < 10; i++) {\n+ sniffer.sniffOnFailure();\n+ }\n+ assertEquals(1, scheduleCalls.get());\n+ int totalRuns = hostsSniffer.runs.get();\n+ assertEquals(0, totalRuns);\n+ int setHostsRuns = totalRuns - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ }\n+\n+ /**\n+ * Test behaviour when a bunch of onFailure sniffing rounds are triggered in parallel. Each run will always\n+ * schedule a subsequent afterFailure round. Also, for each onFailure round that starts, the net scheduled round\n+ * (either afterFailure or ordinary) gets cancelled.\n+ */\n+ public void testSniffOnFailure() throws Exception {\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ final AtomicBoolean initializing = new AtomicBoolean(true);\n+ final long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ final long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ int minNumOnFailureRounds = randomIntBetween(5, 10);\n+ final CountDownLatch initializingLatch = new CountDownLatch(1);\n+ final Set<Sniffer.ScheduledTask> ordinaryRoundsTasks = new CopyOnWriteArraySet<>();\n+ final AtomicReference<Future<?>> initializingFuture = new AtomicReference<>();\n+ final Set<Sniffer.ScheduledTask> onFailureTasks = new CopyOnWriteArraySet<>();\n+ final Set<Sniffer.ScheduledTask> afterFailureTasks = new CopyOnWriteArraySet<>();\n+ final AtomicBoolean onFailureCompleted = new AtomicBoolean(false);\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+ final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();\n+ try {\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(final Sniffer.Task task, long delayMillis) {\n+ if (initializing.compareAndSet(true, false)) {\n+ assertEquals(0L, delayMillis);\n+ Future<?> future = executor.submit(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ task.run();\n+ } finally {\n+ //we need to make sure that the sniffer is initialized, so the sniffOnFailure\n+ //call does what it needs to do. 
Otherwise nothing happens until initialized.\n+ initializingLatch.countDown();\n+ }\n+ }\n+ });\n+ assertTrue(initializingFuture.compareAndSet(null, future));\n+ return future;\n+ }\n+ if (delayMillis == 0L) {\n+ Future<?> future = executor.submit(task);\n+ onFailureTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+ if (delayMillis == sniffAfterFailureDelay) {\n+ Future<?> future = scheduleOrSubmit(task);\n+ afterFailureTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+\n+ assertEquals(sniffInterval, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+\n+ if (onFailureCompleted.get() && onFailureTasks.size() == afterFailureTasks.size()) {\n+ completionLatch.countDown();\n+ return mock(Future.class);\n+ }\n+\n+ Future<?> future = scheduleOrSubmit(task);\n+ ordinaryRoundsTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+\n+ private Future<?> scheduleOrSubmit(Sniffer.Task task) {\n+ if (randomBoolean()) {\n+ return executor.schedule(task, randomLongBetween(0L, 200L), TimeUnit.MILLISECONDS);\n+ } else {\n+ return executor.submit(task);\n+ }\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+ final Sniffer sniffer = new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertTrue(\"timeout waiting for sniffer to get initialized\", initializingLatch.await(1000, TimeUnit.MILLISECONDS));\n+\n+ ExecutorService onFailureExecutor = Executors.newFixedThreadPool(randomIntBetween(5, 20));\n+ Set<Future<?>> onFailureFutures = new CopyOnWriteArraySet<>();\n+ try {\n+ //with tasks executing quickly one after each other, it is very likely that the onFailure round gets skipped\n+ //as another round is already running. We retry till enough runs get through as that's what we want to test.\n+ while (onFailureTasks.size() < minNumOnFailureRounds) {\n+ onFailureFutures.add(onFailureExecutor.submit(new Runnable() {\n+ @Override\n+ public void run() {\n+ sniffer.sniffOnFailure();\n+ }\n+ }));\n+ }\n+ assertThat(onFailureFutures.size(), greaterThanOrEqualTo(minNumOnFailureRounds));\n+ for (Future<?> onFailureFuture : onFailureFutures) {\n+ assertNull(onFailureFuture.get());\n+ }\n+ onFailureCompleted.set(true);\n+ } finally {\n+ onFailureExecutor.shutdown();\n+ onFailureExecutor.awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ }\n+\n+ assertFalse(initializingFuture.get().isCancelled());\n+ assertTrue(initializingFuture.get().isDone());\n+ assertNull(initializingFuture.get().get());\n+\n+ assertTrue(\"timeout waiting for sniffing rounds to be completed\", completionLatch.await(1000, TimeUnit.MILLISECONDS));\n+ assertThat(onFailureTasks.size(), greaterThanOrEqualTo(minNumOnFailureRounds));\n+ assertEquals(onFailureTasks.size(), afterFailureTasks.size());\n+\n+ for (Sniffer.ScheduledTask onFailureTask : onFailureTasks) {\n+ assertFalse(onFailureTask.future.isCancelled());\n+ assertTrue(onFailureTask.future.isDone());\n+ assertNull(onFailureTask.future.get());\n+ assertTrue(onFailureTask.task.hasStarted());\n+ assertFalse(onFailureTask.task.isSkipped());\n+ }\n+\n+ int cancelledTasks = 0;\n+ int completedTasks = onFailureTasks.size() + 1;\n+ for (Sniffer.ScheduledTask afterFailureTask : afterFailureTasks) {\n+ if (assertTaskCancelledOrCompleted(afterFailureTask)) {\n+ completedTasks++;\n+ } else {\n+ cancelledTasks++;\n+ }\n+ }\n+\n+ assertThat(ordinaryRoundsTasks.size(), greaterThan(0));\n+ for (Sniffer.ScheduledTask task : ordinaryRoundsTasks) {\n+ if 
(assertTaskCancelledOrCompleted(task)) {\n+ completedTasks++;\n+ } else {\n+ cancelledTasks++;\n+ }\n+ }\n+ assertEquals(onFailureTasks.size(), cancelledTasks);\n+\n+ assertEquals(completedTasks, hostsSniffer.runs.get());\n+ int setHostsRuns = hostsSniffer.runs.get() - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ } finally {\n+ executor.shutdown();\n+ executor.awaitTermination(1000L, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ private static boolean assertTaskCancelledOrCompleted(Sniffer.ScheduledTask task) throws ExecutionException, InterruptedException {\n+ if (task.task.isSkipped()) {\n+ assertTrue(task.future.isCancelled());\n+ try {\n+ task.future.get();\n+ fail(\"cancellation exception should have been thrown\");\n+ } catch(CancellationException ignore) {\n+ }\n+ return false;\n+ } else {\n+ try {\n+ assertNull(task.future.get());\n+ } catch(CancellationException ignore) {\n+ assertTrue(task.future.isCancelled());\n+ }\n+ assertTrue(task.future.isDone());\n+ assertTrue(task.task.hasStarted());\n+ return true;\n+ }\n+ }\n+\n+ public void testTaskCancelling() throws Exception {\n+ RestClient restClient = mock(RestClient.class);\n+ HostsSniffer hostsSniffer = mock(HostsSniffer.class);\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return null;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 0L, 0L);\n+ ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();\n+ try {\n+ int numIters = randomIntBetween(50, 100);\n+ for (int i = 0; i < numIters; i++) {\n+ Sniffer.Task task = sniffer.new Task(0L);\n+ TaskWrapper wrapper = new TaskWrapper(task);\n+ Future<?> future;\n+ if (rarely()) {\n+ future = executor.schedule(wrapper, randomLongBetween(0L, 200L), TimeUnit.MILLISECONDS);\n+ } else {\n+ future = executor.submit(wrapper);\n+ }\n+ Sniffer.ScheduledTask scheduledTask = new Sniffer.ScheduledTask(task, future);\n+ boolean skip = scheduledTask.skip();\n+ try {\n+ assertNull(future.get());\n+ } catch(CancellationException ignore) {\n+ assertTrue(future.isCancelled());\n+ }\n+\n+ if (skip) {\n+ //the task was either cancelled before starting, in which case it will never start (thanks to Future#cancel),\n+ //or skipped, in which case it will run but do nothing (thanks to Task#skip).\n+ //Here we want to make sure that whenever skip returns true, the task either won't run or it won't do anything,\n+ //otherwise we may end up with parallel sniffing tracks given that each task schedules the following one. We need to\n+ // make sure that onFailure takes scheduling over while at the same time ordinary rounds don't go on.\n+ assertFalse(task.hasStarted());\n+ assertTrue(task.isSkipped());\n+ assertTrue(future.isCancelled());\n+ assertTrue(future.isDone());\n+ } else {\n+ //if a future is cancelled when its execution has already started, future#get throws CancellationException before\n+ //completion. 
The execution continues though so we use a latch to try and wait for the task to be completed.\n+ //Here we want to make sure that whenever skip returns false, the task will be completed, otherwise we may be\n+ //missing to schedule the following round, which means no sniffing will ever happen again besides on failure sniffing.\n+ assertTrue(wrapper.await());\n+ //the future may or may not be cancelled but the task has for sure started and completed\n+ assertTrue(task.toString(), task.hasStarted());\n+ assertFalse(task.isSkipped());\n+ assertTrue(future.isDone());\n+ }\n+ //subsequent cancel calls return false for sure\n+ int cancelCalls = randomIntBetween(1, 10);\n+ for (int j = 0; j < cancelCalls; j++) {\n+ assertFalse(scheduledTask.skip());\n+ }\n+ }\n+ } finally {\n+ executor.shutdown();\n+ executor.awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ /**\n+ * Wraps a {@link Sniffer.Task} and allows to wait for its completion. This is needed to verify\n+ * that tasks are either never started or always completed. Calling {@link Future#get()} against a cancelled future will\n+ * throw {@link CancellationException} straight-away but the execution of the task will continue if it had already started,\n+ * in which case {@link Future#cancel(boolean)} returns true which is not very helpful.\n+ */\n+ private static final class TaskWrapper implements Runnable {\n+ final Sniffer.Task task;\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+\n+ TaskWrapper(Sniffer.Task task) {\n+ this.task = task;\n+ }\n+\n+ @Override\n+ public void run() {\n+ try {\n+ task.run();\n+ } finally {\n+ completionLatch.countDown();\n+ }\n+ }\n+\n+ boolean await() throws InterruptedException {\n+ return completionLatch.await(1000, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ /**\n+ * Mock {@link HostsSniffer} implementation used for testing, which most of the times return a fixed host.\n+ * It rarely throws exception or return an empty list of hosts, to make sure that such situations are properly handled.\n+ * It also asserts that it never gets called concurrently, based on the assumption that only one sniff run can be run\n+ * at a given point in time.\n+ */\n+ private static class CountingHostsSniffer implements HostsSniffer {\n+ private final AtomicInteger runs = new AtomicInteger(0);\n+ private final AtomicInteger failures = new AtomicInteger(0);\n+ private final AtomicInteger emptyList = new AtomicInteger(0);\n+\n+ @Override\n+ public List<HttpHost> sniffHosts() throws IOException {\n+ int run = runs.incrementAndGet();\n+ if (rarely()) {\n+ failures.incrementAndGet();\n+ //check that if communication breaks, sniffer keeps on working\n+ throw new IOException(\"communication breakdown\");\n+ }\n+ if (rarely()) {\n+ emptyList.incrementAndGet();\n+ return Collections.emptyList();\n+ }\n+ return buildHosts(run);\n+ }\n+\n+ private static List<HttpHost> buildHosts(int run) {\n+ int size = run % 5 + 1;\n+ assert size > 0;\n+ List<HttpHost> hosts = new ArrayList<>(size);\n+ for (int i = 0; i < size; i++) {\n+ hosts.add(new HttpHost(\"sniffed-\" + run, 9200 + i));\n+ }\n+ return hosts;\n+ }\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ public void testDefaultSchedulerSchedule() {\n+ RestClient restClient = mock(RestClient.class);\n+ HostsSniffer hostsSniffer = mock(HostsSniffer.class);\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return mock(Future.class);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+\n+ }\n+ 
};\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 0L, 0L);\n+ Sniffer.Task task = sniffer.new Task(randomLongBetween(1, Long.MAX_VALUE));\n+\n+ ScheduledExecutorService scheduledExecutorService = mock(ScheduledExecutorService.class);\n+ final ScheduledFuture<?> mockedFuture = mock(ScheduledFuture.class);\n+ when(scheduledExecutorService.schedule(any(Runnable.class), any(Long.class), any(TimeUnit.class)))\n+ .then(new Answer<ScheduledFuture<?>>() {\n+ @Override\n+ public ScheduledFuture<?> answer(InvocationOnMock invocationOnMock) {\n+ return mockedFuture;\n+ }\n+ });\n+ DefaultScheduler scheduler = new DefaultScheduler(scheduledExecutorService);\n+ long delay = randomLongBetween(1, Long.MAX_VALUE);\n+ Future<?> future = scheduler.schedule(task, delay);\n+ assertSame(mockedFuture, future);\n+ verify(scheduledExecutorService).schedule(task, delay, TimeUnit.MILLISECONDS);\n+ verifyNoMoreInteractions(scheduledExecutorService, mockedFuture);\n+ }\n+\n+ public void testDefaultSchedulerThreadFactory() {\n+ DefaultScheduler defaultScheduler = new DefaultScheduler();\n+ try {\n+ ScheduledExecutorService executorService = defaultScheduler.executor;\n+ assertThat(executorService, instanceOf(ScheduledThreadPoolExecutor.class));\n+ assertThat(executorService, instanceOf(ScheduledThreadPoolExecutor.class));\n+ ScheduledThreadPoolExecutor executor = (ScheduledThreadPoolExecutor) executorService;\n+ assertTrue(executor.getRemoveOnCancelPolicy());\n+ assertFalse(executor.getContinueExistingPeriodicTasksAfterShutdownPolicy());\n+ assertTrue(executor.getExecuteExistingDelayedTasksAfterShutdownPolicy());\n+ assertThat(executor.getThreadFactory(), instanceOf(Sniffer.SnifferThreadFactory.class));\n+ int iters = randomIntBetween(3, 10);\n+ for (int i = 1; i <= iters; i++) {\n+ Thread thread = executor.getThreadFactory().newThread(new Runnable() {\n+ @Override\n+ public void run() {\n+\n+ }\n+ });\n+ assertThat(thread.getName(), equalTo(\"es_rest_client_sniffer[T#\" + i + \"]\"));\n+ assertThat(thread.isDaemon(), is(true));\n+ }\n+ } finally {\n+ defaultScheduler.shutdown();\n+ }\n+ }\n+\n+ public void testDefaultSchedulerShutdown() throws Exception {\n+ ScheduledThreadPoolExecutor executor = mock(ScheduledThreadPoolExecutor.class);\n+ DefaultScheduler defaultScheduler = new DefaultScheduler(executor);\n+ defaultScheduler.shutdown();\n+ verify(executor).shutdown();\n+ verify(executor).awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ verify(executor).shutdownNow();\n+ verifyNoMoreInteractions(executor);\n+\n+ when(executor.awaitTermination(1000, TimeUnit.MILLISECONDS)).thenReturn(true);\n+ defaultScheduler.shutdown();\n+ verify(executor, times(2)).shutdown();\n+ verify(executor, times(2)).awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ verifyNoMoreInteractions(executor);\n+ }\n+}", "filename": "client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferTests.java", "status": "added" }, { "diff": "@@ -41,8 +41,6 @@\n import org.joda.time.format.DateTimeFormatter;\n \n import javax.net.ssl.SSLContext;\n-\n-import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n@@ -658,12 +656,12 @@ public void doClose() {\n if (sniffer != null) {\n sniffer.close();\n }\n- } catch (IOException | RuntimeException e) {\n+ } catch (Exception e) {\n logger.error(\"an error occurred while closing the internal client sniffer\", e);\n } finally {\n try {\n client.close();\n- } catch (IOException | RuntimeException e) {\n+ } catch (Exception 
e) {\n logger.error(\"an error occurred while closing the internal client\", e);\n }\n }", "filename": "x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/http/HttpExporter.java", "status": "modified" }, { "diff": "@@ -86,7 +86,7 @@ public void onFailure(final HttpHost host) {\n resource.markDirty();\n }\n if (sniffer != null) {\n- sniffer.sniffOnFailure(host);\n+ sniffer.sniffOnFailure();\n }\n }\n ", "filename": "x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/http/NodeFailureListener.java", "status": "modified" }, { "diff": "@@ -46,7 +46,7 @@ public void testSnifferNotifiedOnFailure() {\n \n listener.onFailure(host);\n \n- verify(sniffer).sniffOnFailure(host);\n+ verify(sniffer).sniffOnFailure();\n }\n \n public void testResourceNotifiedOnFailure() {\n@@ -71,7 +71,7 @@ public void testResourceAndSnifferNotifiedOnFailure() {\n }\n \n if (optionalSniffer != null) {\n- verify(sniffer).sniffOnFailure(host);\n+ verify(sniffer).sniffOnFailure();\n }\n }\n ", "filename": "x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/exporter/http/NodeFailureListenerTests.java", "status": "modified" } ] }
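The shutdown test above pins down a common graceful-termination sequence for the scheduler's executor: an orderly `shutdown()`, a bounded `awaitTermination()`, and `shutdownNow()` only as a fallback. Below is a minimal, hedged sketch of that sequence in isolation; the 1000 ms bound mirrors the test, and the class name is illustrative rather than taken from the client.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the shutdown sequence exercised by testDefaultSchedulerShutdown:
// request an orderly shutdown, wait briefly for running tasks, and only force-stop
// (shutdownNow) if something is still lingering after the timeout.
final class GracefulShutdown {

    static void shutdown(ExecutorService executor) {
        executor.shutdown();
        try {
            if (executor.awaitTermination(1000, TimeUnit.MILLISECONDS) == false) {
                executor.shutdownNow(); // fallback: interrupt whatever is still running
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt flag
        }
    }
}
```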
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 2.4.0\r\n\r\n**Elasticsearch java rest client version**: 5.2.2\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version** (`java -version`): 1.8.0_131\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nDefault SniffonfailureListener on rest client blocks the HTTPAsyncClient reactor thread when request encounters a `java.net.ConnectException`\r\n**Steps to reproduce**:\r\n 1. Have two es nodes and let sniffer pick them up\r\n 2. Shut down one node\r\n 3. Client tries to connect to that node --> fails --> tries to sniff and hangs till maxRetryTimeoutMillis\r\n\r\nThe failed callback triggers the sniffer https://github.com/elastic/elasticsearch/blob/master/client/rest/src/main/java/org/elasticsearch/client/RestClient.java#L374\r\n\r\nHowever, the failed callback is being handled by the reactor thread of the underlying HttpAsyncClient. Since, the sniffer does a blocking `performRequest` using the same client instance and the HttpClient can't handle the request because the reactor thread is blocked, its effectively a deadlock till the SyncResponselistener timeout of `maxRetryTimeoutMillis` and no requests can be served at all during this time period. :cold_sweat:\r\n\r\nI found a similar issue https://issues.apache.org/jira/browse/HTTPCLIENT-1805 where the suggestion is to avoid potentially blocking or long running operations in the callbacks and more so in the failed callback since it could block the reactor thread. \r\n\r\nI guess the solution would be to trigger the retries as well as sniffer on a separate threadpool internal to the RestClient so that the HttpClient's dispatcher and reactor threads are freed up asap.", "comments": [ { "body": "@javanna could you please have a look at this one?", "created_at": "2017-07-13T07:24:55Z" }, { "body": "@javanna any updates on this issue or need more information?", "created_at": "2017-08-09T15:59:14Z" }, { "body": "@javanna @danielmitterdorfer is this not an issue? any updates on this?", "created_at": "2017-12-14T07:28:59Z" } ], "number": 25701, "title": "Java minimal rest client hangs with default SniffOnFailure listener enabled" }
{ "body": "This PR reworks the `Sniffer` component to simplify it and make it possible to test it. \r\n\r\nIn particular, it no longer takes out the host that failed when sniffing on failure, but rather relies on whatever the cluster returns. This is the result of some valid comments from #27985. Taking out one single host is too naive, hard to test and debug.\r\n\r\nA new `Scheduler` abstraction is introduced to abstract the tasks scheduling away and make it possible to plug in any test implementation and take out timing aspects when testing.\r\n\r\nConcurrency aspects have also been improved, synchronized methods are no longer required. At the same time, we were able to take #27697 and #25701 into account and fix them, especially now that we can more easily add tests. \r\n\r\nLast but not least, good news is that we now have unit tests for the `Sniffer` component, long overdue.\r\n\r\nCloses #27697\r\nCloses #25701", "number": 29638, "review_comments": [ { "body": "onFailure behaviour is really hard to test, too moving parts, so I went for testing different aspects of it with the following three test methods. Suggestions are welcome on whether we think coverage is good enough. Certainly better than before as it was 0% up until now :(", "created_at": "2018-04-20T15:39:44Z" }, { "body": "I wonder whether it makes sense to add a test that uses the sniffer with on failure on against a proper http server. but that would add timing aspects which are not desirable. My concern is that we never test the sniffer in real-life. Maybe it would be nice to turn sniffing on when executing yaml tests?", "created_at": "2018-04-20T15:40:54Z" }, { "body": "When I was playing with cancellation as part of reindex I found that canceling a Runnable was sort of \"best effort\". If you make a test that calls `sniffOnFailure` a bunch of time really fast together I'll bet you get multiple rounds of sniffing in parallel.", "created_at": "2018-04-20T16:06:54Z" }, { "body": "Maybe drop this method and call `scheduler.schedule(theTask, delay)` all the places?", "created_at": "2018-04-20T16:10:13Z" }, { "body": "Probably best to wrap this `if (logger.isDebugEnabled())` so we don't build the string if we don't need it.", "created_at": "2018-04-20T16:10:50Z" }, { "body": "Do you want to include something in this message so it is easier to debug?", "created_at": "2018-04-20T16:12:37Z" }, { "body": ":+1:", "created_at": "2018-04-20T16:14:21Z" }, { "body": "I guess it depends on whether the runnable has already started or not? That is what I've seen, but hard to test in real-life though from a unit test...", "created_at": "2018-04-20T18:55:33Z" }, { "body": "makes sense", "created_at": "2018-04-20T18:56:49Z" }, { "body": "I think this change is worse? Maybe revert it?", "created_at": "2018-05-29T20:26:37Z" }, { "body": "Could you add the traditional ` * ` before each line of the comment?", "created_at": "2018-05-29T20:28:32Z" }, { "body": "Maybe \"reschedule sniffing to run as soon as possible if it isn't already running\".", "created_at": "2018-05-29T20:31:37Z" }, { "body": "I'd do\r\n```\r\ntask.cancel(false);\r\nreturn task.skip();\r\n```\r\n\r\nI don't think it is worth relying on `cancel` returning `true` in the case when we want to cancel. 
Maybe I'm overly paranoid but I don't trust it.", "created_at": "2018-05-29T20:36:32Z" }, { "body": "And I might rename this method to `skip`.", "created_at": "2018-05-29T20:36:44Z" }, { "body": "I think we can manage that in a followup.", "created_at": "2018-05-29T20:38:13Z" }, { "body": "yep sorry", "created_at": "2018-05-30T09:38:18Z" }, { "body": "sure", "created_at": "2018-05-30T09:38:28Z" }, { "body": "I agree! I have tests that verify this and I ran them many times but I am playing with fire here. After all, if cancel returns false whenever a task has already run, its corresponding state change will also fail. With this change all tests are still green so I think it's a good one to stay on the safe side.", "created_at": "2018-05-30T09:48:03Z" }, { "body": "++", "created_at": "2018-05-30T09:56:53Z" } ], "title": "Refactor Sniffer and make it testable" }
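The `Scheduler` seam described in the PR above is what makes the sniffing logic unit-testable: production code schedules rounds through it, while tests substitute an implementation with no timing behaviour. Here is a hedged sketch of such a test double, assuming a `schedule`/`shutdown` style interface; the names are illustrative, not the client's public API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Hedged sketch of a deterministic scheduler test double: it runs each task inline and
// records the delay that was requested, so tests can assert on scheduling decisions
// without depending on wall-clock time. The interface is an assumption modelled on the
// PR description, not the client's public API.
interface TestScheduler {
    Future<?> schedule(Runnable task, long delayMillis);

    void shutdown();
}

final class InlineScheduler implements TestScheduler {
    final List<Long> requestedDelays = new ArrayList<>();

    @Override
    public Future<?> schedule(Runnable task, long delayMillis) {
        requestedDelays.add(delayMillis); // record what the caller asked for
        task.run();                       // execute synchronously, no timers involved
        return CompletableFuture.completedFuture(null);
    }

    @Override
    public void shutdown() {
        // nothing to release: no threads were ever started
    }
}
```

A test can then assert on `requestedDelays`, for example that ordinary rounds request the sniff interval while failure rounds request the after-failure delay.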
{ "commits": [ { "message": "Simplify `Sniffer` and begin testing it properly\n\nIntroduced `Scheduler` abstraction to make `Sniffer` testable so that tasks scheduling is isolated and easily mockable.\n\nAlso added a bunch of TODOs mainly around new tests that should be added soon-ish." }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "some improvements" }, { "message": "tests rewritten" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "refactor and add tests" }, { "message": "improve tests" }, { "message": "address first review comments" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "adapt NodeFailureListener" }, { "message": "fixed warnings" }, { "message": "Adapt HttpExporter" }, { "message": "adapt NodeFailureListenerTests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Add tests around cancelling tasks and make impl more robust" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "update comments" }, { "message": "addressed comments" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" }, { "message": "Merge branch 'master' into enhancement/rest_client_sniffer_tests" } ], "files": [ { "diff": "@@ -61,6 +61,7 @@\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Iterator;\n+import java.util.LinkedHashSet;\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n@@ -132,7 +133,7 @@ public synchronized void setHosts(HttpHost... hosts) {\n if (hosts == null || hosts.length == 0) {\n throw new IllegalArgumentException(\"hosts must not be null nor empty\");\n }\n- Set<HttpHost> httpHosts = new HashSet<>();\n+ Set<HttpHost> httpHosts = new LinkedHashSet<>();\n AuthCache authCache = new BasicAuthCache();\n for (HttpHost host : hosts) {\n Objects.requireNonNull(host, \"host cannot be null\");\n@@ -143,6 +144,13 @@ public synchronized void setHosts(HttpHost... 
hosts) {\n this.blacklist.clear();\n }\n \n+ /**\n+ * Returns the configured hosts\n+ */\n+ public List<HttpHost> getHosts() {\n+ return new ArrayList<>(hostTuple.hosts);\n+ }\n+\n /**\n * Sends a request to the Elasticsearch cluster that the client points to.\n * Blocks until the request is completed and returns its response or fails", "filename": "client/rest/src/main/java/org/elasticsearch/client/RestClient.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n \n import java.io.IOException;\n import java.net.URI;\n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.TimeUnit;\n@@ -251,6 +252,37 @@ public void testSetHostsWrongArguments() throws IOException {\n }\n }\n \n+ public void testSetHostsPreservesOrdering() throws Exception {\n+ try (RestClient restClient = createRestClient()) {\n+ HttpHost[] hosts = randomHosts();\n+ restClient.setHosts(hosts);\n+ assertEquals(Arrays.asList(hosts), restClient.getHosts());\n+ }\n+ }\n+\n+ private static HttpHost[] randomHosts() {\n+ int numHosts = randomIntBetween(1, 10);\n+ HttpHost[] hosts = new HttpHost[numHosts];\n+ for (int i = 0; i < hosts.length; i++) {\n+ hosts[i] = new HttpHost(\"host-\" + i, 9200);\n+ }\n+ return hosts;\n+ }\n+\n+ public void testSetHostsDuplicatedHosts() throws Exception {\n+ try (RestClient restClient = createRestClient()) {\n+ int numHosts = randomIntBetween(1, 10);\n+ HttpHost[] hosts = new HttpHost[numHosts];\n+ HttpHost host = new HttpHost(\"host\", 9200);\n+ for (int i = 0; i < hosts.length; i++) {\n+ hosts[i] = host;\n+ }\n+ restClient.setHosts(hosts);\n+ assertEquals(1, restClient.getHosts().size());\n+ assertEquals(host, restClient.getHosts().get(0));\n+ }\n+ }\n+\n /**\n * @deprecated will remove method in 7.0 but needs tests until then. 
Replaced by {@link RequestTests#testConstructor()}.\n */", "filename": "client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java", "status": "modified" }, { "diff": "@@ -58,7 +58,6 @@ public void onFailure(HttpHost host) {\n if (sniffer == null) {\n throw new IllegalStateException(\"sniffer was not set, unable to sniff on failure\");\n }\n- //re-sniff immediately but take out the node that failed\n- sniffer.sniffOnFailure(host);\n+ sniffer.sniffOnFailure();\n }\n }", "filename": "client/sniffer/src/main/java/org/elasticsearch/client/sniff/SniffOnFailureListener.java", "status": "modified" }, { "diff": "@@ -31,12 +31,14 @@\n import java.security.PrivilegedAction;\n import java.util.List;\n import java.util.concurrent.Executors;\n+import java.util.concurrent.Future;\n import java.util.concurrent.ScheduledExecutorService;\n-import java.util.concurrent.ScheduledFuture;\n+import java.util.concurrent.ScheduledThreadPoolExecutor;\n import java.util.concurrent.ThreadFactory;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n * Class responsible for sniffing nodes from some source (default is elasticsearch itself) and setting them to a provided instance of\n@@ -51,101 +53,175 @@ public class Sniffer implements Closeable {\n private static final Log logger = LogFactory.getLog(Sniffer.class);\n private static final String SNIFFER_THREAD_NAME = \"es_rest_client_sniffer\";\n \n- private final Task task;\n+ private final HostsSniffer hostsSniffer;\n+ private final RestClient restClient;\n+ private final long sniffIntervalMillis;\n+ private final long sniffAfterFailureDelayMillis;\n+ private final Scheduler scheduler;\n+ private final AtomicBoolean initialized = new AtomicBoolean(false);\n+ private volatile ScheduledTask nextScheduledTask;\n \n Sniffer(RestClient restClient, HostsSniffer hostsSniffer, long sniffInterval, long sniffAfterFailureDelay) {\n- this.task = new Task(hostsSniffer, restClient, sniffInterval, sniffAfterFailureDelay);\n+ this(restClient, hostsSniffer, new DefaultScheduler(), sniffInterval, sniffAfterFailureDelay);\n+ }\n+\n+ Sniffer(RestClient restClient, HostsSniffer hostsSniffer, Scheduler scheduler, long sniffInterval, long sniffAfterFailureDelay) {\n+ this.hostsSniffer = hostsSniffer;\n+ this.restClient = restClient;\n+ this.sniffIntervalMillis = sniffInterval;\n+ this.sniffAfterFailureDelayMillis = sniffAfterFailureDelay;\n+ this.scheduler = scheduler;\n+ /*\n+ * The first sniffing round is async, so this constructor returns before nextScheduledTask is assigned to a task.\n+ * The initialized flag is a protection against NPE due to that.\n+ */\n+ Task task = new Task(sniffIntervalMillis) {\n+ @Override\n+ public void run() {\n+ super.run();\n+ initialized.compareAndSet(false, true);\n+ }\n+ };\n+ /*\n+ * We do not keep track of the returned future as we never intend to cancel the initial sniffing round, we rather\n+ * prevent any other operation from being executed till the sniffer is properly initialized\n+ */\n+ scheduler.schedule(task, 0L);\n }\n \n /**\n- * Triggers a new sniffing round and explicitly takes out the failed host provided as argument\n+ * Schedule sniffing to run as soon as possible if it isn't already running. 
Once such sniffing round runs\n+ * it will also schedule a new round after sniffAfterFailureDelay ms.\n */\n- public void sniffOnFailure(HttpHost failedHost) {\n- this.task.sniffOnFailure(failedHost);\n+ public void sniffOnFailure() {\n+ //sniffOnFailure does nothing until the initial sniffing round has been completed\n+ if (initialized.get()) {\n+ /*\n+ * If sniffing is already running, there is no point in scheduling another round right after the current one.\n+ * Concurrent calls may be checking the same task state, but only the first skip call on the same task returns true.\n+ * The task may also get replaced while we check its state, in which case calling skip on it returns false.\n+ */\n+ if (this.nextScheduledTask.skip()) {\n+ /*\n+ * We do not keep track of this future as the task will immediately run and we don't intend to cancel it\n+ * due to concurrent sniffOnFailure runs. Effectively the previous (now cancelled or skipped) task will stay\n+ * assigned to nextTask till this onFailure round gets run and schedules its corresponding afterFailure round.\n+ */\n+ scheduler.schedule(new Task(sniffAfterFailureDelayMillis), 0L);\n+ }\n+ }\n }\n \n- @Override\n- public void close() throws IOException {\n- task.shutdown();\n+ enum TaskState {\n+ WAITING, SKIPPED, STARTED\n }\n \n- private static class Task implements Runnable {\n- private final HostsSniffer hostsSniffer;\n- private final RestClient restClient;\n-\n- private final long sniffIntervalMillis;\n- private final long sniffAfterFailureDelayMillis;\n- private final ScheduledExecutorService scheduledExecutorService;\n- private final AtomicBoolean running = new AtomicBoolean(false);\n- private ScheduledFuture<?> scheduledFuture;\n-\n- private Task(HostsSniffer hostsSniffer, RestClient restClient, long sniffIntervalMillis, long sniffAfterFailureDelayMillis) {\n- this.hostsSniffer = hostsSniffer;\n- this.restClient = restClient;\n- this.sniffIntervalMillis = sniffIntervalMillis;\n- this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis;\n- SnifferThreadFactory threadFactory = new SnifferThreadFactory(SNIFFER_THREAD_NAME);\n- this.scheduledExecutorService = Executors.newScheduledThreadPool(1, threadFactory);\n- scheduleNextRun(0);\n- }\n-\n- synchronized void scheduleNextRun(long delayMillis) {\n- if (scheduledExecutorService.isShutdown() == false) {\n- try {\n- if (scheduledFuture != null) {\n- //regardless of when the next sniff is scheduled, cancel it and schedule a new one with updated delay\n- this.scheduledFuture.cancel(false);\n- }\n- logger.debug(\"scheduling next sniff in \" + delayMillis + \" ms\");\n- this.scheduledFuture = this.scheduledExecutorService.schedule(this, delayMillis, TimeUnit.MILLISECONDS);\n- } catch(Exception e) {\n- logger.error(\"error while scheduling next sniffer task\", e);\n- }\n- }\n+ class Task implements Runnable {\n+ final long nextTaskDelay;\n+ final AtomicReference<TaskState> taskState = new AtomicReference<>(TaskState.WAITING);\n+\n+ Task(long nextTaskDelay) {\n+ this.nextTaskDelay = nextTaskDelay;\n }\n \n @Override\n public void run() {\n- sniff(null, sniffIntervalMillis);\n- }\n-\n- void sniffOnFailure(HttpHost failedHost) {\n- sniff(failedHost, sniffAfterFailureDelayMillis);\n- }\n-\n- void sniff(HttpHost excludeHost, long nextSniffDelayMillis) {\n- if (running.compareAndSet(false, true)) {\n- try {\n- List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();\n- logger.debug(\"sniffed hosts: \" + sniffedHosts);\n- if (excludeHost != null) {\n- sniffedHosts.remove(excludeHost);\n- 
}\n- if (sniffedHosts.isEmpty()) {\n- logger.warn(\"no hosts to set, hosts will be updated at the next sniffing round\");\n- } else {\n- this.restClient.setHosts(sniffedHosts.toArray(new HttpHost[sniffedHosts.size()]));\n- }\n- } catch (Exception e) {\n- logger.error(\"error while sniffing nodes\", e);\n- } finally {\n- scheduleNextRun(nextSniffDelayMillis);\n- running.set(false);\n- }\n+ /*\n+ * Skipped or already started tasks do nothing. In most cases tasks will be cancelled and not run, but we want to protect for\n+ * cases where future#cancel returns true yet the task runs. We want to make sure that such tasks do nothing otherwise they will\n+ * schedule another round at the end and so on, leaving us with multiple parallel sniffing \"tracks\" whish is undesirable.\n+ */\n+ if (taskState.compareAndSet(TaskState.WAITING, TaskState.STARTED) == false) {\n+ return;\n }\n- }\n-\n- synchronized void shutdown() {\n- scheduledExecutorService.shutdown();\n try {\n- if (scheduledExecutorService.awaitTermination(1000, TimeUnit.MILLISECONDS)) {\n- return;\n- }\n- scheduledExecutorService.shutdownNow();\n- } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n+ sniff();\n+ } catch (Exception e) {\n+ logger.error(\"error while sniffing nodes\", e);\n+ } finally {\n+ Task task = new Task(sniffIntervalMillis);\n+ Future<?> future = scheduler.schedule(task, nextTaskDelay);\n+ //tasks are run by a single threaded executor, so swapping is safe with a simple volatile variable\n+ ScheduledTask previousTask = nextScheduledTask;\n+ nextScheduledTask = new ScheduledTask(task, future);\n+ assert initialized.get() == false ||\n+ previousTask.task.isSkipped() || previousTask.task.hasStarted() : \"task that we are replacing is neither \" +\n+ \"cancelled nor has it ever started\";\n }\n }\n+\n+ /**\n+ * Returns true if the task has started, false in case it didn't start (yet?) or it was skipped\n+ */\n+ boolean hasStarted() {\n+ return taskState.get() == TaskState.STARTED;\n+ }\n+\n+ /**\n+ * Sets this task to be skipped. Returns true if the task will be skipped, false if the task has already started.\n+ */\n+ boolean skip() {\n+ /*\n+ * Threads may still get run although future#cancel returns true. We make sure that a task is either cancelled (or skipped),\n+ * or entirely run. In the odd case that future#cancel returns true and the thread still runs, the task won't do anything.\n+ * In case future#cancel returns true but the task has already started, this state change will not succeed hence this method\n+ * returns false and the task will normally run.\n+ */\n+ return taskState.compareAndSet(TaskState.WAITING, TaskState.SKIPPED);\n+ }\n+\n+ /**\n+ * Returns true if the task was set to be skipped before it was started\n+ */\n+ boolean isSkipped() {\n+ return taskState.get() == TaskState.SKIPPED;\n+ }\n+ }\n+\n+ static final class ScheduledTask {\n+ final Task task;\n+ final Future<?> future;\n+\n+ ScheduledTask(Task task, Future<?> future) {\n+ this.task = task;\n+ this.future = future;\n+ }\n+\n+ /**\n+ * Cancels this task. Returns true if the task has been successfully cancelled, meaning it won't be executed\n+ * or if it is its execution won't have any effect. Returns false if the task cannot be cancelled (possibly it was\n+ * already cancelled or already completed).\n+ */\n+ boolean skip() {\n+ /*\n+ * Future#cancel should return false whenever a task cannot be cancelled, most likely as it has already started. 
We don't\n+ * trust it much though so we try to cancel hoping that it will work. At the same time we always call skip too, which means\n+ * that if the task has already started the state change will fail. We could potentially not call skip when cancel returns\n+ * false but we prefer to stay on the safe side.\n+ */\n+ future.cancel(false);\n+ return task.skip();\n+ }\n+ }\n+\n+ final void sniff() throws IOException {\n+ List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"sniffed hosts: \" + sniffedHosts);\n+ }\n+ if (sniffedHosts.isEmpty()) {\n+ logger.warn(\"no hosts to set, hosts will be updated at the next sniffing round\");\n+ } else {\n+ restClient.setHosts(sniffedHosts.toArray(new HttpHost[sniffedHosts.size()]));\n+ }\n+ }\n+\n+ @Override\n+ public void close() {\n+ if (initialized.get()) {\n+ nextScheduledTask.skip();\n+ }\n+ this.scheduler.shutdown();\n }\n \n /**\n@@ -158,8 +234,62 @@ public static SnifferBuilder builder(RestClient restClient) {\n return new SnifferBuilder(restClient);\n }\n \n- private static class SnifferThreadFactory implements ThreadFactory {\n+ /**\n+ * The Scheduler interface allows to isolate the sniffing scheduling aspects so that we can test\n+ * the sniffer by injecting when needed a custom scheduler that is more suited for testing.\n+ */\n+ interface Scheduler {\n+ /**\n+ * Schedules the provided {@link Runnable} to be executed in <code>delayMillis</code> milliseconds\n+ */\n+ Future<?> schedule(Task task, long delayMillis);\n+\n+ /**\n+ * Shuts this scheduler down\n+ */\n+ void shutdown();\n+ }\n+\n+ /**\n+ * Default implementation of {@link Scheduler}, based on {@link ScheduledExecutorService}\n+ */\n+ static final class DefaultScheduler implements Scheduler {\n+ final ScheduledExecutorService executor;\n+\n+ DefaultScheduler() {\n+ this(initScheduledExecutorService());\n+ }\n+\n+ DefaultScheduler(ScheduledExecutorService executor) {\n+ this.executor = executor;\n+ }\n+\n+ private static ScheduledExecutorService initScheduledExecutorService() {\n+ ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1, new SnifferThreadFactory(SNIFFER_THREAD_NAME));\n+ executor.setRemoveOnCancelPolicy(true);\n+ return executor;\n+ }\n+\n+ @Override\n+ public Future<?> schedule(Task task, long delayMillis) {\n+ return executor.schedule(task, delayMillis, TimeUnit.MILLISECONDS);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ executor.shutdown();\n+ try {\n+ if (executor.awaitTermination(1000, TimeUnit.MILLISECONDS)) {\n+ return;\n+ }\n+ executor.shutdownNow();\n+ } catch (InterruptedException ignore) {\n+ Thread.currentThread().interrupt();\n+ }\n+ }\n+ }\n \n+ static class SnifferThreadFactory implements ThreadFactory {\n private final AtomicInteger threadNumber = new AtomicInteger(1);\n private final String namePrefix;\n private final ThreadFactory originalThreadFactory;", "filename": "client/sniffer/src/main/java/org/elasticsearch/client/sniff/Sniffer.java", "status": "modified" }, { "diff": "@@ -21,7 +21,6 @@\n \n import org.apache.http.HttpHost;\n \n-import java.io.IOException;\n import java.util.Collections;\n import java.util.List;\n \n@@ -30,7 +29,7 @@\n */\n class MockHostsSniffer implements HostsSniffer {\n @Override\n- public List<HttpHost> sniffHosts() throws IOException {\n+ public List<HttpHost> sniffHosts() {\n return Collections.singletonList(new HttpHost(\"localhost\", 9200));\n }\n }", "filename": 
"client/sniffer/src/test/java/org/elasticsearch/client/sniff/MockHostsSniffer.java", "status": "modified" }, { "diff": "@@ -0,0 +1,656 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.client.sniff;\n+\n+import org.apache.http.HttpHost;\n+import org.elasticsearch.client.RestClient;\n+import org.elasticsearch.client.RestClientTestCase;\n+import org.elasticsearch.client.sniff.Sniffer.DefaultScheduler;\n+import org.elasticsearch.client.sniff.Sniffer.Scheduler;\n+import org.mockito.Matchers;\n+import org.mockito.invocation.InvocationOnMock;\n+import org.mockito.stubbing.Answer;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.CancellationException;\n+import java.util.concurrent.CopyOnWriteArraySet;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n+import java.util.concurrent.Future;\n+import java.util.concurrent.ScheduledExecutorService;\n+import java.util.concurrent.ScheduledFuture;\n+import java.util.concurrent.ScheduledThreadPoolExecutor;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+import static org.hamcrest.CoreMatchers.instanceOf;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.junit.Assert.assertEquals;\n+import static org.junit.Assert.assertFalse;\n+import static org.junit.Assert.assertNotEquals;\n+import static org.junit.Assert.assertNull;\n+import static org.junit.Assert.assertSame;\n+import static org.junit.Assert.assertThat;\n+import static org.junit.Assert.assertTrue;\n+import static org.junit.Assert.fail;\n+import static org.mockito.Matchers.any;\n+import static org.mockito.Mockito.mock;\n+import static org.mockito.Mockito.times;\n+import static org.mockito.Mockito.verify;\n+import static org.mockito.Mockito.verifyNoMoreInteractions;\n+import static org.mockito.Mockito.when;\n+\n+public class SnifferTests extends RestClientTestCase {\n+\n+ /**\n+ * Tests the {@link Sniffer#sniff()} method in isolation. 
Verifies that it uses the {@link HostsSniffer} implementation\n+ * to retrieve nodes and set them (when not empty) to the provided {@link RestClient} instance.\n+ */\n+ public void testSniff() throws IOException {\n+ HttpHost initialHost = new HttpHost(\"localhost\", 9200);\n+ try (RestClient restClient = RestClient.builder(initialHost).build()) {\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return mock(Future.class);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+\n+ }\n+ };\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ int iters = randomIntBetween(5, 30);\n+ try (Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 1000L, -1)){\n+ {\n+ assertEquals(1, restClient.getHosts().size());\n+ HttpHost httpHost = restClient.getHosts().get(0);\n+ assertEquals(\"localhost\", httpHost.getHostName());\n+ assertEquals(9200, httpHost.getPort());\n+ }\n+ int emptyList = 0;\n+ int failures = 0;\n+ int runs = 0;\n+ List<HttpHost> lastHosts = Collections.singletonList(initialHost);\n+ for (int i = 0; i < iters; i++) {\n+ try {\n+ runs++;\n+ sniffer.sniff();\n+ if (hostsSniffer.failures.get() > failures) {\n+ failures++;\n+ fail(\"should have failed given that hostsSniffer says it threw an exception\");\n+ } else if (hostsSniffer.emptyList.get() > emptyList) {\n+ emptyList++;\n+ assertEquals(lastHosts, restClient.getHosts());\n+ } else {\n+ assertNotEquals(lastHosts, restClient.getHosts());\n+ List<HttpHost> expectedHosts = CountingHostsSniffer.buildHosts(runs);\n+ assertEquals(expectedHosts, restClient.getHosts());\n+ lastHosts = restClient.getHosts();\n+ }\n+ } catch(IOException e) {\n+ if (hostsSniffer.failures.get() > failures) {\n+ failures++;\n+ assertEquals(\"communication breakdown\", e.getMessage());\n+ }\n+ }\n+ }\n+ assertEquals(hostsSniffer.emptyList.get(), emptyList);\n+ assertEquals(hostsSniffer.failures.get(), failures);\n+ assertEquals(hostsSniffer.runs.get(), runs);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Test multiple sniffing rounds by mocking the {@link Scheduler} as well as the {@link HostsSniffer}.\n+ * Simulates the ordinary behaviour of {@link Sniffer} when sniffing on failure is not enabled.\n+ * The {@link CountingHostsSniffer} doesn't make any network connection but may throw exception or return no hosts, which makes\n+ * it possible to verify that errors are properly handled and don't affect subsequent runs and their scheduling.\n+ * The {@link Scheduler} implementation submits rather than scheduling tasks, meaning that it doesn't respect the requested sniff\n+ * delays while allowing to assert that the requested delays for each requested run and the following one are the expected values.\n+ */\n+ public void testOrdinarySniffRounds() throws Exception {\n+ final long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ final int iters = randomIntBetween(30, 100);\n+ final Set<Future<?>> futures = new CopyOnWriteArraySet<>();\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+ final AtomicInteger runs = new AtomicInteger(iters);\n+ final ExecutorService executor = Executors.newSingleThreadExecutor();\n+ final AtomicReference<Future<?>> lastFuture = new AtomicReference<>();\n+ final AtomicReference<Sniffer.Task> lastTask = new AtomicReference<>();\n+ 
Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ int numberOfRuns = runs.getAndDecrement();\n+ if (numberOfRuns == iters) {\n+ //the first call is to schedule the first sniff round from the Sniffer constructor, with delay O\n+ assertEquals(0L, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ } else {\n+ //all of the subsequent times \"schedule\" is called with delay set to the configured sniff interval\n+ assertEquals(sniffInterval, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+ if (numberOfRuns == 0) {\n+ completionLatch.countDown();\n+ return null;\n+ }\n+ }\n+ //we submit rather than scheduling to make the test quick and not depend on time\n+ Future<?> future = executor.submit(task);\n+ futures.add(future);\n+ if (numberOfRuns == 1) {\n+ lastFuture.set(future);\n+ lastTask.set(task);\n+ }\n+ return future;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ //the executor is closed externally, shutdown is tested separately\n+ }\n+ };\n+ try {\n+ new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertTrue(\"timeout waiting for sniffing rounds to be completed\", completionLatch.await(1000, TimeUnit.MILLISECONDS));\n+ assertEquals(iters, futures.size());\n+ //the last future is the only one that may not be completed yet, as the count down happens\n+ //while scheduling the next round which is still part of the execution of the runnable itself.\n+ assertTrue(lastTask.get().hasStarted());\n+ lastFuture.get().get();\n+ for (Future<?> future : futures) {\n+ assertTrue(future.isDone());\n+ future.get();\n+ }\n+ } finally {\n+ executor.shutdown();\n+ assertTrue(executor.awaitTermination(1000, TimeUnit.MILLISECONDS));\n+ }\n+ int totalRuns = hostsSniffer.runs.get();\n+ assertEquals(iters, totalRuns);\n+ int setHostsRuns = totalRuns - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ }\n+\n+ /**\n+ * Test that {@link Sniffer#close()} shuts down the underlying {@link Scheduler}, and that such calls are idempotent.\n+ * Also verifies that the next scheduled round gets cancelled.\n+ */\n+ public void testClose() {\n+ final Future<?> future = mock(Future.class);\n+ long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ RestClient restClient = mock(RestClient.class);\n+ final AtomicInteger shutdown = new AtomicInteger(0);\n+ final AtomicBoolean initialized = new AtomicBoolean(false);\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ if (initialized.compareAndSet(false, true)) {\n+ //run from the same thread so the sniffer gets for sure initialized and the scheduled task gets cancelled on close\n+ task.run();\n+ }\n+ return future;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ shutdown.incrementAndGet();\n+ }\n+ };\n+\n+ Sniffer sniffer = new Sniffer(restClient, new MockHostsSniffer(), scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertEquals(0, shutdown.get());\n+ int iters = randomIntBetween(3, 10);\n+ for (int i = 1; i <= iters; i++) {\n+ sniffer.close();\n+ verify(future, times(i)).cancel(false);\n+ assertEquals(i, shutdown.get());\n+ }\n+ }\n+\n+ public void 
testSniffOnFailureNotInitialized() {\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ final AtomicInteger scheduleCalls = new AtomicInteger(0);\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ scheduleCalls.incrementAndGet();\n+ return null;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ for (int i = 0; i < 10; i++) {\n+ sniffer.sniffOnFailure();\n+ }\n+ assertEquals(1, scheduleCalls.get());\n+ int totalRuns = hostsSniffer.runs.get();\n+ assertEquals(0, totalRuns);\n+ int setHostsRuns = totalRuns - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ }\n+\n+ /**\n+ * Test behaviour when a bunch of onFailure sniffing rounds are triggered in parallel. Each run will always\n+ * schedule a subsequent afterFailure round. Also, for each onFailure round that starts, the net scheduled round\n+ * (either afterFailure or ordinary) gets cancelled.\n+ */\n+ public void testSniffOnFailure() throws Exception {\n+ RestClient restClient = mock(RestClient.class);\n+ CountingHostsSniffer hostsSniffer = new CountingHostsSniffer();\n+ final AtomicBoolean initializing = new AtomicBoolean(true);\n+ final long sniffInterval = randomLongBetween(1, Long.MAX_VALUE);\n+ final long sniffAfterFailureDelay = randomLongBetween(1, Long.MAX_VALUE);\n+ int minNumOnFailureRounds = randomIntBetween(5, 10);\n+ final CountDownLatch initializingLatch = new CountDownLatch(1);\n+ final Set<Sniffer.ScheduledTask> ordinaryRoundsTasks = new CopyOnWriteArraySet<>();\n+ final AtomicReference<Future<?>> initializingFuture = new AtomicReference<>();\n+ final Set<Sniffer.ScheduledTask> onFailureTasks = new CopyOnWriteArraySet<>();\n+ final Set<Sniffer.ScheduledTask> afterFailureTasks = new CopyOnWriteArraySet<>();\n+ final AtomicBoolean onFailureCompleted = new AtomicBoolean(false);\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+ final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();\n+ try {\n+ Scheduler scheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(final Sniffer.Task task, long delayMillis) {\n+ if (initializing.compareAndSet(true, false)) {\n+ assertEquals(0L, delayMillis);\n+ Future<?> future = executor.submit(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ task.run();\n+ } finally {\n+ //we need to make sure that the sniffer is initialized, so the sniffOnFailure\n+ //call does what it needs to do. 
Otherwise nothing happens until initialized.\n+ initializingLatch.countDown();\n+ }\n+ }\n+ });\n+ assertTrue(initializingFuture.compareAndSet(null, future));\n+ return future;\n+ }\n+ if (delayMillis == 0L) {\n+ Future<?> future = executor.submit(task);\n+ onFailureTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+ if (delayMillis == sniffAfterFailureDelay) {\n+ Future<?> future = scheduleOrSubmit(task);\n+ afterFailureTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+\n+ assertEquals(sniffInterval, delayMillis);\n+ assertEquals(sniffInterval, task.nextTaskDelay);\n+\n+ if (onFailureCompleted.get() && onFailureTasks.size() == afterFailureTasks.size()) {\n+ completionLatch.countDown();\n+ return mock(Future.class);\n+ }\n+\n+ Future<?> future = scheduleOrSubmit(task);\n+ ordinaryRoundsTasks.add(new Sniffer.ScheduledTask(task, future));\n+ return future;\n+ }\n+\n+ private Future<?> scheduleOrSubmit(Sniffer.Task task) {\n+ if (randomBoolean()) {\n+ return executor.schedule(task, randomLongBetween(0L, 200L), TimeUnit.MILLISECONDS);\n+ } else {\n+ return executor.submit(task);\n+ }\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+ final Sniffer sniffer = new Sniffer(restClient, hostsSniffer, scheduler, sniffInterval, sniffAfterFailureDelay);\n+ assertTrue(\"timeout waiting for sniffer to get initialized\", initializingLatch.await(1000, TimeUnit.MILLISECONDS));\n+\n+ ExecutorService onFailureExecutor = Executors.newFixedThreadPool(randomIntBetween(5, 20));\n+ Set<Future<?>> onFailureFutures = new CopyOnWriteArraySet<>();\n+ try {\n+ //with tasks executing quickly one after each other, it is very likely that the onFailure round gets skipped\n+ //as another round is already running. We retry till enough runs get through as that's what we want to test.\n+ while (onFailureTasks.size() < minNumOnFailureRounds) {\n+ onFailureFutures.add(onFailureExecutor.submit(new Runnable() {\n+ @Override\n+ public void run() {\n+ sniffer.sniffOnFailure();\n+ }\n+ }));\n+ }\n+ assertThat(onFailureFutures.size(), greaterThanOrEqualTo(minNumOnFailureRounds));\n+ for (Future<?> onFailureFuture : onFailureFutures) {\n+ assertNull(onFailureFuture.get());\n+ }\n+ onFailureCompleted.set(true);\n+ } finally {\n+ onFailureExecutor.shutdown();\n+ onFailureExecutor.awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ }\n+\n+ assertFalse(initializingFuture.get().isCancelled());\n+ assertTrue(initializingFuture.get().isDone());\n+ assertNull(initializingFuture.get().get());\n+\n+ assertTrue(\"timeout waiting for sniffing rounds to be completed\", completionLatch.await(1000, TimeUnit.MILLISECONDS));\n+ assertThat(onFailureTasks.size(), greaterThanOrEqualTo(minNumOnFailureRounds));\n+ assertEquals(onFailureTasks.size(), afterFailureTasks.size());\n+\n+ for (Sniffer.ScheduledTask onFailureTask : onFailureTasks) {\n+ assertFalse(onFailureTask.future.isCancelled());\n+ assertTrue(onFailureTask.future.isDone());\n+ assertNull(onFailureTask.future.get());\n+ assertTrue(onFailureTask.task.hasStarted());\n+ assertFalse(onFailureTask.task.isSkipped());\n+ }\n+\n+ int cancelledTasks = 0;\n+ int completedTasks = onFailureTasks.size() + 1;\n+ for (Sniffer.ScheduledTask afterFailureTask : afterFailureTasks) {\n+ if (assertTaskCancelledOrCompleted(afterFailureTask)) {\n+ completedTasks++;\n+ } else {\n+ cancelledTasks++;\n+ }\n+ }\n+\n+ assertThat(ordinaryRoundsTasks.size(), greaterThan(0));\n+ for (Sniffer.ScheduledTask task : ordinaryRoundsTasks) {\n+ if 
(assertTaskCancelledOrCompleted(task)) {\n+ completedTasks++;\n+ } else {\n+ cancelledTasks++;\n+ }\n+ }\n+ assertEquals(onFailureTasks.size(), cancelledTasks);\n+\n+ assertEquals(completedTasks, hostsSniffer.runs.get());\n+ int setHostsRuns = hostsSniffer.runs.get() - hostsSniffer.failures.get() - hostsSniffer.emptyList.get();\n+ verify(restClient, times(setHostsRuns)).setHosts(Matchers.<HttpHost>anyVararg());\n+ verifyNoMoreInteractions(restClient);\n+ } finally {\n+ executor.shutdown();\n+ executor.awaitTermination(1000L, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ private static boolean assertTaskCancelledOrCompleted(Sniffer.ScheduledTask task) throws ExecutionException, InterruptedException {\n+ if (task.task.isSkipped()) {\n+ assertTrue(task.future.isCancelled());\n+ try {\n+ task.future.get();\n+ fail(\"cancellation exception should have been thrown\");\n+ } catch(CancellationException ignore) {\n+ }\n+ return false;\n+ } else {\n+ try {\n+ assertNull(task.future.get());\n+ } catch(CancellationException ignore) {\n+ assertTrue(task.future.isCancelled());\n+ }\n+ assertTrue(task.future.isDone());\n+ assertTrue(task.task.hasStarted());\n+ return true;\n+ }\n+ }\n+\n+ public void testTaskCancelling() throws Exception {\n+ RestClient restClient = mock(RestClient.class);\n+ HostsSniffer hostsSniffer = mock(HostsSniffer.class);\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return null;\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+ }\n+ };\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 0L, 0L);\n+ ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();\n+ try {\n+ int numIters = randomIntBetween(50, 100);\n+ for (int i = 0; i < numIters; i++) {\n+ Sniffer.Task task = sniffer.new Task(0L);\n+ TaskWrapper wrapper = new TaskWrapper(task);\n+ Future<?> future;\n+ if (rarely()) {\n+ future = executor.schedule(wrapper, randomLongBetween(0L, 200L), TimeUnit.MILLISECONDS);\n+ } else {\n+ future = executor.submit(wrapper);\n+ }\n+ Sniffer.ScheduledTask scheduledTask = new Sniffer.ScheduledTask(task, future);\n+ boolean skip = scheduledTask.skip();\n+ try {\n+ assertNull(future.get());\n+ } catch(CancellationException ignore) {\n+ assertTrue(future.isCancelled());\n+ }\n+\n+ if (skip) {\n+ //the task was either cancelled before starting, in which case it will never start (thanks to Future#cancel),\n+ //or skipped, in which case it will run but do nothing (thanks to Task#skip).\n+ //Here we want to make sure that whenever skip returns true, the task either won't run or it won't do anything,\n+ //otherwise we may end up with parallel sniffing tracks given that each task schedules the following one. We need to\n+ // make sure that onFailure takes scheduling over while at the same time ordinary rounds don't go on.\n+ assertFalse(task.hasStarted());\n+ assertTrue(task.isSkipped());\n+ assertTrue(future.isCancelled());\n+ assertTrue(future.isDone());\n+ } else {\n+ //if a future is cancelled when its execution has already started, future#get throws CancellationException before\n+ //completion. 
The execution continues though so we use a latch to try and wait for the task to be completed.\n+ //Here we want to make sure that whenever skip returns false, the task will be completed, otherwise we may be\n+ //missing to schedule the following round, which means no sniffing will ever happen again besides on failure sniffing.\n+ assertTrue(wrapper.await());\n+ //the future may or may not be cancelled but the task has for sure started and completed\n+ assertTrue(task.toString(), task.hasStarted());\n+ assertFalse(task.isSkipped());\n+ assertTrue(future.isDone());\n+ }\n+ //subsequent cancel calls return false for sure\n+ int cancelCalls = randomIntBetween(1, 10);\n+ for (int j = 0; j < cancelCalls; j++) {\n+ assertFalse(scheduledTask.skip());\n+ }\n+ }\n+ } finally {\n+ executor.shutdown();\n+ executor.awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ /**\n+ * Wraps a {@link Sniffer.Task} and allows to wait for its completion. This is needed to verify\n+ * that tasks are either never started or always completed. Calling {@link Future#get()} against a cancelled future will\n+ * throw {@link CancellationException} straight-away but the execution of the task will continue if it had already started,\n+ * in which case {@link Future#cancel(boolean)} returns true which is not very helpful.\n+ */\n+ private static final class TaskWrapper implements Runnable {\n+ final Sniffer.Task task;\n+ final CountDownLatch completionLatch = new CountDownLatch(1);\n+\n+ TaskWrapper(Sniffer.Task task) {\n+ this.task = task;\n+ }\n+\n+ @Override\n+ public void run() {\n+ try {\n+ task.run();\n+ } finally {\n+ completionLatch.countDown();\n+ }\n+ }\n+\n+ boolean await() throws InterruptedException {\n+ return completionLatch.await(1000, TimeUnit.MILLISECONDS);\n+ }\n+ }\n+\n+ /**\n+ * Mock {@link HostsSniffer} implementation used for testing, which most of the times return a fixed host.\n+ * It rarely throws exception or return an empty list of hosts, to make sure that such situations are properly handled.\n+ * It also asserts that it never gets called concurrently, based on the assumption that only one sniff run can be run\n+ * at a given point in time.\n+ */\n+ private static class CountingHostsSniffer implements HostsSniffer {\n+ private final AtomicInteger runs = new AtomicInteger(0);\n+ private final AtomicInteger failures = new AtomicInteger(0);\n+ private final AtomicInteger emptyList = new AtomicInteger(0);\n+\n+ @Override\n+ public List<HttpHost> sniffHosts() throws IOException {\n+ int run = runs.incrementAndGet();\n+ if (rarely()) {\n+ failures.incrementAndGet();\n+ //check that if communication breaks, sniffer keeps on working\n+ throw new IOException(\"communication breakdown\");\n+ }\n+ if (rarely()) {\n+ emptyList.incrementAndGet();\n+ return Collections.emptyList();\n+ }\n+ return buildHosts(run);\n+ }\n+\n+ private static List<HttpHost> buildHosts(int run) {\n+ int size = run % 5 + 1;\n+ assert size > 0;\n+ List<HttpHost> hosts = new ArrayList<>(size);\n+ for (int i = 0; i < size; i++) {\n+ hosts.add(new HttpHost(\"sniffed-\" + run, 9200 + i));\n+ }\n+ return hosts;\n+ }\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ public void testDefaultSchedulerSchedule() {\n+ RestClient restClient = mock(RestClient.class);\n+ HostsSniffer hostsSniffer = mock(HostsSniffer.class);\n+ Scheduler noOpScheduler = new Scheduler() {\n+ @Override\n+ public Future<?> schedule(Sniffer.Task task, long delayMillis) {\n+ return mock(Future.class);\n+ }\n+\n+ @Override\n+ public void shutdown() {\n+\n+ }\n+ 
};\n+ Sniffer sniffer = new Sniffer(restClient, hostsSniffer, noOpScheduler, 0L, 0L);\n+ Sniffer.Task task = sniffer.new Task(randomLongBetween(1, Long.MAX_VALUE));\n+\n+ ScheduledExecutorService scheduledExecutorService = mock(ScheduledExecutorService.class);\n+ final ScheduledFuture<?> mockedFuture = mock(ScheduledFuture.class);\n+ when(scheduledExecutorService.schedule(any(Runnable.class), any(Long.class), any(TimeUnit.class)))\n+ .then(new Answer<ScheduledFuture<?>>() {\n+ @Override\n+ public ScheduledFuture<?> answer(InvocationOnMock invocationOnMock) {\n+ return mockedFuture;\n+ }\n+ });\n+ DefaultScheduler scheduler = new DefaultScheduler(scheduledExecutorService);\n+ long delay = randomLongBetween(1, Long.MAX_VALUE);\n+ Future<?> future = scheduler.schedule(task, delay);\n+ assertSame(mockedFuture, future);\n+ verify(scheduledExecutorService).schedule(task, delay, TimeUnit.MILLISECONDS);\n+ verifyNoMoreInteractions(scheduledExecutorService, mockedFuture);\n+ }\n+\n+ public void testDefaultSchedulerThreadFactory() {\n+ DefaultScheduler defaultScheduler = new DefaultScheduler();\n+ try {\n+ ScheduledExecutorService executorService = defaultScheduler.executor;\n+ assertThat(executorService, instanceOf(ScheduledThreadPoolExecutor.class));\n+ assertThat(executorService, instanceOf(ScheduledThreadPoolExecutor.class));\n+ ScheduledThreadPoolExecutor executor = (ScheduledThreadPoolExecutor) executorService;\n+ assertTrue(executor.getRemoveOnCancelPolicy());\n+ assertFalse(executor.getContinueExistingPeriodicTasksAfterShutdownPolicy());\n+ assertTrue(executor.getExecuteExistingDelayedTasksAfterShutdownPolicy());\n+ assertThat(executor.getThreadFactory(), instanceOf(Sniffer.SnifferThreadFactory.class));\n+ int iters = randomIntBetween(3, 10);\n+ for (int i = 1; i <= iters; i++) {\n+ Thread thread = executor.getThreadFactory().newThread(new Runnable() {\n+ @Override\n+ public void run() {\n+\n+ }\n+ });\n+ assertThat(thread.getName(), equalTo(\"es_rest_client_sniffer[T#\" + i + \"]\"));\n+ assertThat(thread.isDaemon(), is(true));\n+ }\n+ } finally {\n+ defaultScheduler.shutdown();\n+ }\n+ }\n+\n+ public void testDefaultSchedulerShutdown() throws Exception {\n+ ScheduledThreadPoolExecutor executor = mock(ScheduledThreadPoolExecutor.class);\n+ DefaultScheduler defaultScheduler = new DefaultScheduler(executor);\n+ defaultScheduler.shutdown();\n+ verify(executor).shutdown();\n+ verify(executor).awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ verify(executor).shutdownNow();\n+ verifyNoMoreInteractions(executor);\n+\n+ when(executor.awaitTermination(1000, TimeUnit.MILLISECONDS)).thenReturn(true);\n+ defaultScheduler.shutdown();\n+ verify(executor, times(2)).shutdown();\n+ verify(executor, times(2)).awaitTermination(1000, TimeUnit.MILLISECONDS);\n+ verifyNoMoreInteractions(executor);\n+ }\n+}", "filename": "client/sniffer/src/test/java/org/elasticsearch/client/sniff/SnifferTests.java", "status": "added" }, { "diff": "@@ -41,8 +41,6 @@\n import org.joda.time.format.DateTimeFormatter;\n \n import javax.net.ssl.SSLContext;\n-\n-import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n@@ -658,12 +656,12 @@ public void doClose() {\n if (sniffer != null) {\n sniffer.close();\n }\n- } catch (IOException | RuntimeException e) {\n+ } catch (Exception e) {\n logger.error(\"an error occurred while closing the internal client sniffer\", e);\n } finally {\n try {\n client.close();\n- } catch (IOException | RuntimeException e) {\n+ } catch (Exception 
e) {\n logger.error(\"an error occurred while closing the internal client\", e);\n }\n }", "filename": "x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/http/HttpExporter.java", "status": "modified" }, { "diff": "@@ -86,7 +86,7 @@ public void onFailure(final HttpHost host) {\n resource.markDirty();\n }\n if (sniffer != null) {\n- sniffer.sniffOnFailure(host);\n+ sniffer.sniffOnFailure();\n }\n }\n ", "filename": "x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/http/NodeFailureListener.java", "status": "modified" }, { "diff": "@@ -46,7 +46,7 @@ public void testSnifferNotifiedOnFailure() {\n \n listener.onFailure(host);\n \n- verify(sniffer).sniffOnFailure(host);\n+ verify(sniffer).sniffOnFailure();\n }\n \n public void testResourceNotifiedOnFailure() {\n@@ -71,7 +71,7 @@ public void testResourceAndSnifferNotifiedOnFailure() {\n }\n \n if (optionalSniffer != null) {\n- verify(sniffer).sniffOnFailure(host);\n+ verify(sniffer).sniffOnFailure();\n }\n }\n ", "filename": "x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/exporter/http/NodeFailureListenerTests.java", "status": "modified" } ] }
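The diff above guards against the case where `Future#cancel` returns true yet the runnable still executes, by racing a compare-and-set on the task state. Below is a minimal sketch of that state machine on its own; `SkippableTask` is an illustrative name, not the class from the diff.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch of the compare-and-set state machine that makes skipping reliable even
// when Future#cancel claims success but the runnable still runs: work happens only if
// run() wins the WAITING -> STARTED transition, and a skip only succeeds if skip() wins
// the WAITING -> SKIPPED transition first.
final class SkippableTask implements Runnable {

    enum State { WAITING, SKIPPED, STARTED }

    private final AtomicReference<State> state = new AtomicReference<>(State.WAITING);
    private final Runnable work;

    SkippableTask(Runnable work) {
        this.work = work;
    }

    /** Returns true if the task is guaranteed never to perform its work. */
    boolean skip() {
        return state.compareAndSet(State.WAITING, State.SKIPPED);
    }

    @Override
    public void run() {
        // A skipped (or already started) task does nothing, so a cancelled-but-running
        // future cannot spawn a parallel sniffing track.
        if (state.compareAndSet(State.WAITING, State.STARTED)) {
            work.run();
        }
    }
}
```

Because `skip()` and `run()` both compete for the single WAITING state, at most one of them can win, which is what prevents a skipped task from scheduling a follow-up round.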
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.1.0\r\n\r\n**Plugins installed**: [discovery-ec2, repository-s3, ingest-geoip, ingest-user-agent]\r\n\r\n**JVM version** (`java -version`): same as `docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0`\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): same as `docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nAfter leaving the system running with automatic hourly snapshots to S3 for a few days, I get the following error:\r\n\r\n```\r\n[2018-03-05T14:24:35,534][WARN ][o.e.r.s.S3Repository ] [9FWWGr6] failed to read index file [index-0]\r\ncom.fasterxml.jackson.core.JsonParseException: Duplicate field 'hourly-2018-02-28-17'\r\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@5b9ee173; line: -1, column: 5140]\r\n\tat com.fasterxml.jackson.core.json.JsonReadContext._checkDup(JsonReadContext.java:204) ~[jackson-core-2.8.10.jar:2.8.10]\r\n\tat com.fasterxml.jackson.core.json.JsonReadContext.setCurrentName(JsonReadContext.java:198) ~[jackson-core-2.8.10.jar:2.8.10]\r\n\tat com.fasterxml.jackson.dataformat.smile.SmileParser._handleFieldName(SmileParser.java:1552) ~[jackson-dataformat-smile-2.8.10.jar:2.8.10]\r\n\tat com.fasterxml.jackson.dataformat.smile.SmileParser.nextToken(SmileParser.java:588) ~[jackson-dataformat-smile-2.8.10.jar:2.8.10]\r\n\tat org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:52) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshots.fromXContent(BlobStoreIndexShardSnapshots.java:257) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreFormat.read(BlobStoreFormat.java:113) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.readBlob(ChecksumBlobStoreFormat.java:114) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreFormat.read(BlobStoreFormat.java:89) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.buildBlobStoreIndexShardSnapshots(BlobStoreRepository.java:1070) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1139) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:811) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:401) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:98) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:355) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n```\r\n\r\nI already created a new snapshot repository and deleted the old data. This fixed the problem for a while, but it appeared again.\r\n\r\nThe system in question has ~840 indices. Maybe the large number of indices makes this problem more common? (I haven't seen this on other systems).\r\n\r\n", "comments": [ { "body": "hi @JD557 were those indices created with 6.x or earlier versions? I suspect this has to do with #22073 , but docs with duplicated fields should not be accepted in the first place, so it could be that some docs got properly indexed with 5.x and now cause problems.\r\n\r\ncc @elastic/es-distributed ", "created_at": "2018-03-06T14:22:03Z" }, { "body": ">hi @JD557 were those indices created with 6.x or earlier versions?\r\n\r\nSome were created with 5.6.4.\r\n\r\nDo you think that reindexing them would help or should I just set `-Des.json.strict_duplicate_detection=false` until I don't need those indices anymore?", "created_at": "2018-03-06T14:30:20Z" }, { "body": "I think that reindexing them will raise the error at index time, then you'll have to fix those docs in order to get them indexed. Setting the system property is a work-around, not really a solution.", "created_at": "2018-03-06T14:33:13Z" }, { "body": "By the way, I think I should mention that the error is\r\n`com.fasterxml.jackson.core.JsonParseException: Duplicate field 'hourly-2018-02-28-17'`\r\n\r\nWhere `hourly-2018-02-28-17` is the name of one of the snapshots.\r\n\r\nI don't know anything about the internal snapshot representation, but this seems to be a problem in the snapshot metadata, not in an index.\r\n\r\n(The snapshots were all created using 6.1.0)", "created_at": "2018-03-06T14:43:44Z" }, { "body": "that is a very good point @JD557 thanks for noticing that. maybe @tlrx or @imotov know more about this.", "created_at": "2018-03-06T14:55:22Z" }, { "body": "@JD557 Do you perhaps have multiple clusters that are concurrently writing to this repository?", "created_at": "2018-03-06T20:51:12Z" }, { "body": "I have a single cluster with 3 nodes writing to that repository and a hourly curator job to create a snapshot.\r\n\r\nThe curator job accesses the cluster via DNS, so the snapshot requests don't always hit the same machine (but, AFAIK, there are never 2 snapshots running at the same time).", "created_at": "2018-03-06T21:58:20Z" }, { "body": "Are you experiencing frequent network partitions between the nodes? Anything in the logs on the days leading up to this that could indicate failures on some of the nodes?\r\n\r\nLooking at the [following snapshotting code](https://github.com/elastic/elasticsearch/blob/v6.1.0/core/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java#L1230-L1236), we should make it more resilient against duplicates (we check for duplicates at snapshot initialization, but not when writing the `BlobStoreIndexShardSnapshot`) so that we don't end up in the situation here (but fail the current snapshot instead). 
/cc: @tlrx \r\n\r\nI still don't understand though how we can end up in the presented situation, except for concurrent access to the repository.", "created_at": "2018-03-08T09:21:39Z" }, { "body": "I don't think that there are frequent network partitions.\r\n\r\nHowever, looking at the logs, here are some errors that happened before the first time that I had this problem (sorry for the large logs):\r\n\r\n**Snapshot creation failed**\r\n```\r\n[2018-02-15T20:02:13,395][WARN ][o.e.s.SnapshotShardsService] [9FWWGr6] [[logs-2018-03-16][0]] [s3:hourly-2018-02-15-20/OOoxBKs8S9-QPqblJ-bW5Q] failed to create snapshot\r\norg.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Failed to write file list\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1006) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1238) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:811) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:401) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:98) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:355) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\nCaused by: java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 27FD59F4EEF2D224; S3 Extended Request ID: MhY2a0feNmbKhNr5TOOWX/OcuQGwlWfeYQEMVd4S0YxFFo3Xo5jTFo2BlbKGIZROM+H69UX/DMk=), S3 Extended Request ID: MhY2a0feNmbKhNr5TOOWX/OcuQGwlWfeYQEMVd4S0YxFFo3Xo5jTFo2BlbKGIZROM+H69UX/DMk=\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.move(S3BlobContainer.java:177) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:138) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t... 
10 more\r\n\tSuppressed: java.nio.file.NoSuchFileException: Blob [pending-index-951] does not exist\r\n\t\tat org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:116) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:142) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$SnapshotContext.snapshot(BlobStoreRepository.java:1238) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:811) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:401) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:98) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:355) [elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\nCaused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. 
(Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 27FD59F4EEF2D224; S3 Extended Request ID: MhY2a0feNmbKhNr5TOOWX/OcuQGwlWfeYQEMVd4S0YxFFo3Xo5jTFo2BlbKGIZROM+H69UX/DMk=)\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1870) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$move$6(S3BlobContainer.java:172) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:57) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_161]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:56) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.move(S3BlobContainer.java:171) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:138) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t... 
10 more\r\n```\r\n\r\n**Snapshot deletion failed**\r\n```\r\n[2018-02-15T21:22:54,284][WARN ][r.suppressed ] path: /_snapshot/s3/hourly-2018-02-14-21, params: {repository=s3, snapshot=hourly-2018-02-14-21}\r\norg.elasticsearch.transport.RemoteTransportException: [qMs-_ZD][172.17.0.3:9300][cluster:admin/snapshot/delete]\r\nCaused by: org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Failed to write file list\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1006) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.delete(BlobStoreRepository.java:945) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.delete(BlobStoreRepository.java:877) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.deleteSnapshot(BlobStoreRepository.java:390) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.lambda$deleteSnapshotFromRepository$8(SnapshotsService.java:1292) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\nCaused by: java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 079F5E4186B33E60; S3 Extended Request ID: vXAMlUjOEdbKjsgK3rBLGfOUbA381Fnn+beCRxQd6bLCCzlMHMrQ+890nQiFH25MIV4YrkBCT1w=), S3 Extended Request ID: vXAMlUjOEdbKjsgK3rBLGfOUbA381Fnn+beCRxQd6bLCCzlMHMrQ+890nQiFH25MIV4YrkBCT1w=\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.move(S3BlobContainer.java:177) ~[?:?]\r\n\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:138) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.delete(BlobStoreRepository.java:945) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.delete(BlobStoreRepository.java:877) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.deleteSnapshot(BlobStoreRepository.java:390) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.lambda$deleteSnapshotFromRepository$8(SnapshotsService.java:1292) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_161]\r\n\tSuppressed: java.nio.file.NoSuchFileException: Blob [pending-index-957] does not exist\r\n\t\tat 
org.elasticsearch.repositories.s3.S3BlobContainer.deleteBlob(S3BlobContainer.java:116) ~[?:?]\r\n\t\tat org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:142) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.delete(BlobStoreRepository.java:945) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.delete(BlobStoreRepository.java:877) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.deleteSnapshot(BlobStoreRepository.java:390) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.snapshots.SnapshotsService.lambda$deleteSnapshotFromRepository$8(SnapshotsService.java:1292) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n\t\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\nCaused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: amazon_s3_exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 079F5E4186B33E60; S3 Extended Request ID: vXAMlUjOEdbKjsgK3rBLGfOUbA381Fnn+beCRxQd6bLCCzlMHMrQ+890nQiFH25MIV4YrkBCT1w=)\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[?:?]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194) ~[?:?]\r\n\tat com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1870) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.lambda$move$6(S3BlobContainer.java:172) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.lambda$doPrivilegedVoid$0(SocketAccess.java:57) ~[?:?]\r\n\tat java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_161]\r\n\tat org.elasticsearch.repositories.s3.SocketAccess.doPrivilegedVoid(SocketAccess.java:56) ~[?:?]\r\n\tat org.elasticsearch.repositories.s3.S3BlobContainer.move(S3BlobContainer.java:171) ~[?:?]\r\n\tat 
org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.writeAtomic(ChecksumBlobStoreFormat.java:138) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.finalize(BlobStoreRepository.java:1004) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository$Context.delete(BlobStoreRepository.java:945) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.delete(BlobStoreRepository.java:877) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.repositories.blobstore.BlobStoreRepository.deleteSnapshot(BlobStoreRepository.java:390) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.snapshots.SnapshotsService.lambda$deleteSnapshotFromRepository$8(SnapshotsService.java:1292) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_161]\r\n```\r\n\r\n**Concurrent snapshot exceptions** (Maybe a snapshot got \"stuck\" and took too much to complete?)\r\n\r\n```\r\n[2018-02-16T12:05:55,791][WARN ][r.suppressed ] path: /_snapshot/s3/hourly-2018-02-16-10, params: {repository=s3, wait_for_completion=false, snapshot=hourly-2018-02-16-10}\r\norg.elasticsearch.transport.RemoteTransportException: [qMs-_ZD][172.17.0.3:9300][cluster:admin/snapshot/create]\r\nCaused by: org.elasticsearch.snapshots.ConcurrentSnapshotExecutionException: [s3:hourly-2018-02-16-10] a snapshot is already running\r\n\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:266) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:640) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:195) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:130) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:568) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) ~[elasticsearch-6.1.0.jar:6.1.0]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n```\r\n\r\nAnd finally, I got a `com.fasterxml.jackson.core.JsonParseException: Duplicate field 'hourly-2018-02-16-04'` exception.\r\n\r\nMaybe this can be triggered by transient failures in S3?", "created_at": "2018-03-08T11:31:09Z" }, { "body": "Discussed with @tlrx: We are not sure how we can end up in this situation, but we plan on adding a check in `BlobstoreRepository.snapshot(...)` that would check for duplicate snapshot names before writing out the new index file and fail the snapshot. This would ensure that you cannot end up in this situation where the index file is broken.", "created_at": "2018-03-27T10:19:30Z" }, { "body": "Is this check implemented in 6.2.4?", "created_at": "2018-04-20T10:50:18Z" }, { "body": "@JD557 no, but I've just opened a PR for this (#29634).", "created_at": "2018-04-20T13:32:47Z" }, { "body": "The fix will be in 6.3", "created_at": "2018-04-20T15:35:50Z" }, { "body": "Talked to @ywelsch today, now the check for duplicate snapshot name has been added we think that we can't really do much for now and we are both in favor of closing this issue.\r\n\r\nIf it reappears on versions > 6.3 then we'll reopen for investigation.", "created_at": "2018-04-24T10:52:42Z" }, { "body": "my issue is related, i think...\r\n\r\ntype\":\"pcap_file\",\"_id\":null,\"status\":400,\"error\":{\"type\":\"parse_exception\",\"reason\":\"Failed to parse content to map\",\"caused_by\":{\"type\":\"json_parse_exception\",\"reason\":\"Duplicate field 'ip_ip_addr'\\n at \r\n\r\n\r\n", "created_at": "2018-05-07T08:53:58Z" }, { "body": "@tjc808 I don't think this is related to snapshot/restore feature but instead because you're indexing documents with duplicated fields. You can ask questions like these in the [Elastic forums](https://discuss.elastic.co/) instead.", "created_at": "2018-05-07T09:12:47Z" }, { "body": "The field name `ip_ip_addr` suggests you may be generating your data using an old version of `tshark` which had [a bug in its Elasticsearch output](https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=12958#c55) that's now fixed.", "created_at": "2018-05-08T11:47:53Z" }, { "body": "I am using org.elasticsearch.client:elasticsearch-rest-high-level-client version 6.8.23. 
I cannot use newer versions due to company requirements.\r\n\r\ncom.fasterxml.jackson.core.JsonParseException: Duplicate field 'id'\r\n at [Source: (org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 4814]\r\n\tat com.fasterxml.jackson.core.json.JsonReadContext._checkDup(JsonReadContext.java:225)\r\n\tat com.fasterxml.jackson.core.json.JsonReadContext.setCurrentName(JsonReadContext.java:219)\r\n\tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:799)\r\n\tat com.fasterxml.jackson.core.JsonGenerator._copyCurrentContents(JsonGenerator.java:2566)\r\n\tat com.fasterxml.jackson.core.JsonGenerator.copyCurrentStructure(JsonGenerator.java:2547)\r\n\tat org.elasticsearch.common.xcontent.json.JsonXContentGenerator.copyCurrentStructure(JsonXContentGenerator.java:418)\r\n\tat org.elasticsearch.common.xcontent.XContentBuilder.copyCurrentStructure(XContentBuilder.java:988)\r\n\tat org.elasticsearch.client.RequestConverters.bulk(RequestConverters.java:240)\r\n\tat org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1761)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1735)\r\n\tat org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1697)\r\n\tat org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:473)\r\n\r\nHere is a simplified JSON for IndexRequest that I hope reproduces the problem.\r\n\r\n{\r\n \"id\": \"...\",\r\n \"dataset\": {\r\n \"id\": \"...\",\r\n \"datafile\": {\r\n \"id\": \"...\",\r\n \"permissions\": {\r\n \"id\": \"...\"\r\n },\r\n },\r\n \"par\": {\r\n \"id\": \"...\"\r\n }\r\n },\r\n \"metadata\": {\r\n \"id\": \"pcl.1.metadata.472313f06c1140f8a8f390d0230325ab\"\r\n }\r\n}", "created_at": "2022-07-14T22:22:21Z" } ], "number": 28906, "title": "JSON error in snapshots: Duplicate field" }
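The `Duplicate field` failures above come from Jackson's strict duplicate detection, which Elasticsearch enables unless `-Des.json.strict_duplicate_detection=false` is set (as mentioned in the comments). The following is a minimal standalone sketch, not Elasticsearch code, and the payload is invented; it shows how that Jackson feature turns a repeated key, such as the duplicated snapshot name in the corrupted `index-0` blob, into a `JsonParseException`:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;

public class DuplicateFieldDemo {
    public static void main(String[] args) throws Exception {
        // Two entries with the same key, like the duplicated snapshot name in the broken index blob.
        String json = "{\"snapshots\":{\"hourly-2018-02-28-17\":1,\"hourly-2018-02-28-17\":2}}";

        JsonFactory factory = new JsonFactory();
        // The same Jackson feature Elasticsearch toggles via es.json.strict_duplicate_detection.
        factory.enable(JsonParser.Feature.STRICT_DUPLICATE_DETECTION);

        try (JsonParser parser = factory.createParser(json)) {
            while (parser.nextToken() != null) {
                // Consuming the token stream is enough to trigger the duplicate check.
            }
        } catch (JsonParseException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```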
{ "body": "Adds a check in `BlobstoreRepository.snapshot(...)` that prevents duplicate snapshot names and fails the snapshot before writing out the new index file. This ensures that you cannot end up in this situation where the index file has duplicate names and cannot be read anymore .\r\n\r\nRelates to #28906", "number": 29634, "review_comments": [], "title": "Abort early on finding duplicate snapshot name in internal structures" }
{ "commits": [ { "message": "Abort early on finding duplicate snapshot name in internal structures" } ], "files": [ { "diff": "@@ -1113,6 +1113,11 @@ public void snapshot(final IndexCommit snapshotIndexCommit) {\n BlobStoreIndexShardSnapshots snapshots = tuple.v1();\n int fileListGeneration = tuple.v2();\n \n+ if (snapshots.snapshots().stream().anyMatch(sf -> sf.snapshot().equals(snapshotId.getName()))) {\n+ throw new IndexShardSnapshotFailedException(shardId,\n+ \"Duplicate snapshot name [\" + snapshotId.getName() + \"] detected, aborting\");\n+ }\n+\n final List<BlobStoreIndexShardSnapshot.FileInfo> indexCommitPointFiles = new ArrayList<>();\n \n store.incRef();", "filename": "server/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.IndexShardTestCase;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.index.store.StoreFileMetaData;\n import org.elasticsearch.repositories.IndexId;\n@@ -48,6 +49,7 @@\n import java.util.List;\n \n import static org.elasticsearch.cluster.routing.RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE;\n+import static org.hamcrest.Matchers.containsString;\n \n /**\n * This class tests the behavior of {@link BlobStoreRepository} when it\n@@ -126,6 +128,43 @@ public void testRestoreSnapshotWithExistingFiles() throws IOException {\n }\n }\n \n+ public void testSnapshotWithConflictingName() throws IOException {\n+ final IndexId indexId = new IndexId(randomAlphaOfLength(10), UUIDs.randomBase64UUID());\n+ final ShardId shardId = new ShardId(indexId.getName(), indexId.getId(), 0);\n+\n+ IndexShard shard = newShard(shardId, true);\n+ try {\n+ // index documents in the shards\n+ final int numDocs = scaledRandomIntBetween(1, 500);\n+ recoverShardFromStore(shard);\n+ for (int i = 0; i < numDocs; i++) {\n+ indexDoc(shard, \"doc\", Integer.toString(i));\n+ if (rarely()) {\n+ flushShard(shard, false);\n+ }\n+ }\n+ assertDocCount(shard, numDocs);\n+\n+ // snapshot the shard\n+ final Repository repository = createRepository();\n+ final Snapshot snapshot = new Snapshot(repository.getMetadata().name(), new SnapshotId(randomAlphaOfLength(10), \"_uuid\"));\n+ snapshotShard(shard, snapshot, repository);\n+ final Snapshot snapshotWithSameName = new Snapshot(repository.getMetadata().name(), new SnapshotId(\n+ snapshot.getSnapshotId().getName(), \"_uuid2\"));\n+ IndexShardSnapshotFailedException isfe = expectThrows(IndexShardSnapshotFailedException.class,\n+ () -> snapshotShard(shard, snapshotWithSameName, repository));\n+ assertThat(isfe.getMessage(), containsString(\"Duplicate snapshot name\"));\n+ } finally {\n+ if (shard != null && shard.state() != IndexShardState.CLOSED) {\n+ try {\n+ shard.close(\"test\", false);\n+ } finally {\n+ IOUtils.close(shard.store());\n+ }\n+ }\n+ }\n+ }\n+\n /** Create a {@link Repository} with a random name **/\n private Repository createRepository() throws IOException {\n Settings settings = Settings.builder().put(\"location\", randomAlphaOfLength(10)).build();", "filename": "server/src/test/java/org/elasticsearch/repositories/blobstore/BlobStoreRepositoryRestoreTests.java", "status": "modified" } ] }
{ "body": "Currently written as\r\n```\r\n protected boolean doEquals(TermsSetQueryBuilder other) {\r\n return Objects.equals(fieldName, this.fieldName) && Objects.equals(values, this.values) &&\r\n Objects.equals(minimumShouldMatchField, this.minimumShouldMatchField) &&\r\n Objects.equals(minimumShouldMatchScript, this.minimumShouldMatchScript);\r\n }\r\n```\r\nshould be\r\n```\r\n protected boolean doEquals(TermsSetQueryBuilder other) {\r\n return Objects.equals(fieldName, other.fieldName) && Objects.equals(values, other.values) &&\r\n Objects.equals(minimumShouldMatchField, other.minimumShouldMatchField) &&\r\n Objects.equals(minimumShouldMatchScript, other.minimumShouldMatchScript);\r\n }\r\n```", "comments": [ { "body": "Thanks @twilson-palantir, I created #29629", "created_at": "2018-04-20T07:18:59Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-20T07:19:15Z" } ], "number": 29620, "title": "TermsSetQueryBuilder#doEquals is incorrect" }
{ "body": "Closes #29620", "number": 29629, "review_comments": [], "title": "Fix TermsSetQueryBuilder.doEquals() method" }
{ "commits": [ { "message": "Fix TermsSetQueryBuilder.doEquals() method\n\nCloses #29620" }, { "message": "Override mutateInstance in test" } ], "files": [ { "diff": "@@ -18,9 +18,7 @@\n */\n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.index.DocValues;\n import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.index.NumericDocValues;\n import org.apache.lucene.index.SortedNumericDocValues;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.BooleanQuery;\n@@ -86,6 +84,11 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeOptionalWriteable(minimumShouldMatchScript);\n }\n \n+ // package protected for testing purpose\n+ String getFieldName() {\n+ return fieldName;\n+ }\n+\n public List<?> getValues() {\n return values;\n }\n@@ -116,9 +119,10 @@ public TermsSetQueryBuilder setMinimumShouldMatchScript(Script minimumShouldMatc\n \n @Override\n protected boolean doEquals(TermsSetQueryBuilder other) {\n- return Objects.equals(fieldName, this.fieldName) && Objects.equals(values, this.values) &&\n- Objects.equals(minimumShouldMatchField, this.minimumShouldMatchField) &&\n- Objects.equals(minimumShouldMatchScript, this.minimumShouldMatchScript);\n+ return Objects.equals(fieldName, other.fieldName)\n+ && Objects.equals(values, other.values)\n+ && Objects.equals(minimumShouldMatchField, other.minimumShouldMatchField)\n+ && Objects.equals(minimumShouldMatchScript, other.minimumShouldMatchScript);\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/index/query/TermsSetQueryBuilder.java", "status": "modified" }, { "diff": "@@ -59,7 +59,9 @@\n import java.util.List;\n import java.util.Map;\n import java.util.function.Function;\n+import java.util.function.Predicate;\n \n+import static java.util.Collections.emptyMap;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -85,17 +87,13 @@ protected TermsSetQueryBuilder doCreateTestQueryBuilder() {\n do {\n fieldName = randomFrom(MAPPED_FIELD_NAMES);\n } while (fieldName.equals(GEO_POINT_FIELD_NAME) || fieldName.equals(GEO_SHAPE_FIELD_NAME));\n- int numValues = randomIntBetween(0, 10);\n- List<Object> randomTerms = new ArrayList<>(numValues);\n- for (int i = 0; i < numValues; i++) {\n- randomTerms.add(getRandomValueForFieldName(fieldName));\n- }\n+ List<?> randomTerms = randomValues(fieldName);\n TermsSetQueryBuilder queryBuilder = new TermsSetQueryBuilder(STRING_FIELD_NAME, randomTerms);\n if (randomBoolean()) {\n queryBuilder.setMinimumShouldMatchField(\"m_s_m\");\n } else {\n queryBuilder.setMinimumShouldMatchScript(\n- new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"_script\", Collections.emptyMap()));\n+ new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"_script\", emptyMap()));\n }\n return queryBuilder;\n }\n@@ -122,6 +120,41 @@ protected boolean builderGeneratesCacheableQueries() {\n return false;\n }\n \n+ @Override\n+ public TermsSetQueryBuilder mutateInstance(final TermsSetQueryBuilder instance) throws IOException {\n+ String fieldName = instance.getFieldName();\n+ List<?> values = instance.getValues();\n+ String minimumShouldMatchField = null;\n+ Script minimumShouldMatchScript = null;\n+\n+ switch (randomIntBetween(0, 3)) {\n+ case 0:\n+ Predicate<String> predicate = s -> s.equals(instance.getFieldName()) == false && s.equals(GEO_POINT_FIELD_NAME) == false\n+ && s.equals(GEO_SHAPE_FIELD_NAME) == false;\n+ fieldName = 
randomValueOtherThanMany(predicate, () -> randomFrom(MAPPED_FIELD_NAMES));\n+ values = randomValues(fieldName);\n+ break;\n+ case 1:\n+ values = randomValues(fieldName);\n+ break;\n+ case 2:\n+ minimumShouldMatchField = randomAlphaOfLengthBetween(1, 10);\n+ break;\n+ case 3:\n+ minimumShouldMatchScript = new Script(ScriptType.INLINE, MockScriptEngine.NAME, randomAlphaOfLength(10), emptyMap());\n+ break;\n+ }\n+\n+ TermsSetQueryBuilder newInstance = new TermsSetQueryBuilder(fieldName, values);\n+ if (minimumShouldMatchField != null) {\n+ newInstance.setMinimumShouldMatchField(minimumShouldMatchField);\n+ }\n+ if (minimumShouldMatchScript != null) {\n+ newInstance.setMinimumShouldMatchScript(minimumShouldMatchScript);\n+ }\n+ return newInstance;\n+ }\n+\n public void testBothFieldAndScriptSpecified() {\n TermsSetQueryBuilder queryBuilder = new TermsSetQueryBuilder(\"_field\", Collections.emptyList());\n queryBuilder.setMinimumShouldMatchScript(new Script(\"\"));\n@@ -215,7 +248,7 @@ public void testDoToQuery_msmScriptField() throws Exception {\n \n try (IndexReader ir = DirectoryReader.open(directory)) {\n QueryShardContext context = createShardContext();\n- Script script = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"_script\", Collections.emptyMap());\n+ Script script = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"_script\", emptyMap());\n Query query = new TermsSetQueryBuilder(\"message\", Arrays.asList(\"a\", \"b\", \"c\", \"d\"))\n .setMinimumShouldMatchScript(script).doToQuery(context);\n IndexSearcher searcher = new IndexSearcher(ir);\n@@ -228,6 +261,16 @@ public void testDoToQuery_msmScriptField() throws Exception {\n }\n }\n \n+ private static List<?> randomValues(final String fieldName) {\n+ final int numValues = randomIntBetween(0, 10);\n+ final List<Object> values = new ArrayList<>(numValues);\n+\n+ for (int i = 0; i < numValues; i++) {\n+ values.add(getRandomValueForFieldName(fieldName));\n+ }\n+ return values;\n+ }\n+\n public static class CustomScriptPlugin extends MockScriptPlugin {\n \n @Override", "filename": "server/src/test/java/org/elasticsearch/index/query/TermsSetQueryBuilderTests.java", "status": "modified" } ] }
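The second commit above matters because an equality bug of this kind only surfaces when a test compares two instances that differ in exactly one property, which is what `mutateInstance` produces. Below is a rough sketch of that style of check, using a hypothetical helper and value class rather than the real test framework utilities:

```java
import java.util.Objects;
import java.util.function.UnaryOperator;

public class MutateEqualsCheckDemo {
    static final class Query {
        final String field;
        final String minimumShouldMatchField;

        Query(String field, String minimumShouldMatchField) {
            this.field = field;
            this.minimumShouldMatchField = minimumShouldMatchField;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            Query other = (Query) o;
            return Objects.equals(field, other.field)
                && Objects.equals(minimumShouldMatchField, other.minimumShouldMatchField);
        }

        @Override
        public int hashCode() {
            return Objects.hash(field, minimumShouldMatchField);
        }
    }

    /** Rough equivalent of the framework check: a mutated copy must never compare equal to the original. */
    static <T> void assertMutationChangesEquality(T original, UnaryOperator<T> mutate) {
        T mutated = mutate.apply(original);
        if (Objects.equals(original, mutated)) {
            throw new AssertionError("mutated instance compares equal to the original");
        }
    }

    public static void main(String[] args) {
        Query original = new Query("message", "m_s_m");
        // Mutate exactly one property, as the overridden mutateInstance in the test now does.
        assertMutationChangesEquality(original, q -> new Query(q.field, "other_field"));
        System.out.println("equals() distinguishes mutated instances");
    }
}
```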
{ "body": "`%ERRORLEVEL%` is reported just fine in DOS. Note the `CALL` is mandatory when calling any batch file and using `||`.\r\n\r\n```batch\r\n$ (CALL bin\\elasticsearch-plugin.bat install --batch asfasf) || echo %ERRORLEVEL%\r\nA tool for managing installed elasticsearch plugins\r\n\r\nCommands\r\n--------\r\nlist - Lists installed elasticsearch plugins\r\ninstall - Install a plugin\r\nremove - removes a plugin from Elasticsearch\r\n\r\nNon-option arguments:\r\ncommand\r\n\r\nOption Description\r\n------ -----------\r\n-h, --help show help\r\n-s, --silent show minimal output\r\n-v, --verbose show verbose output\r\nERROR: Unknown plugin asfasf\r\n64\r\n```\r\n\r\n\r\nBut on powershell `$LASTEXITCODE` is `0`:\r\n\r\n```powershell\r\nPS> bin\\elasticsearch-plugin.bat install --batch asfasf; echo $LASTEXITCODE\r\nA tool for managing installed elasticsearch plugins\r\n\r\n<snip>\r\nERROR: Unknown plugin asfasf\r\n0\r\n```\r\n\r\nHowever if we modify the `.bat` files we distribute to explicitly exit \r\n\r\n```batch\r\nendlocal\r\nendlocal\r\nexit /B %ERRORLEVEL%\r\n```\r\n\r\n*NOTE:* `ENDLOCAL` does not alter `%ERRORLEVEL%`.\r\n\r\n`$LASTEXITCODE` is set correctly in powershell. \r\n\r\n```powershell\r\nPS > bin\\elasticsearch-plugin.bat install --batch asfasf; echo $LASTEXITCODE\r\nA tool for managing installed elasticsearch plugins\r\n\r\n<snip>\r\nERROR: Unknown plugin asfasf\r\n64\r\n```\r\n\r\nA similar result is observed when automating the bat files from `C#`\r\n\r\n```csharp\r\nvar bat = @\"bin\\elasticsearch-plugin.bat\";\r\nvar args = \"install --batch asfasf\";\r\nvar startInfo = new ProcessStartInfo(bat, args)\r\n{\r\n\tUseShellExecute = false,\r\n\tCreateNoWindow = true\r\n};\r\n\t\r\nvar p = Process.Start(startInfo);\r\np.WaitForExit();\r\n\t\r\nvar exitCode = p.ExitCode;\r\n```\r\n\r\n`exitCode` will be `0` right now and `64` when we use the explict `exit /B %ERRORLEVEL%` fix.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-18T14:23:59Z" } ], "number": 29582, "title": "ExitCode does not bubble out bat files when not called from DOS" }
{ "body": "This makes sure the exit code is preserved when calling the batch\r\nfiles from different contexts other than DOS\r\n\r\nFixes #29582\r\n", "number": 29583, "review_comments": [ { "body": "Is there a reason this wouldn't work? Then we can keep the exit close to the failing script, we really do want to error out immediately here.\r\n\r\n```suggestion\r\ncall \"%~dp0elasticsearch-cli.bat\" ^\r\n %%* ^\r\n || exit /b %errorlevel%\r\n```", "created_at": "2018-11-02T00:04:12Z" }, { "body": "```suggestion\r\n```", "created_at": "2018-11-02T00:04:15Z" }, { "body": "Thanks for the review @jasontedor, finally took the effort to get back to this PR.\r\n\r\nSadly we need the `exit /b %errorlevel%` after the `endlocal` calls otherwise it does not survive in a non `CMD` context.\r\n\r\nI also tried the special `:eof` label as [documented here](https://blogs.msdn.microsoft.com/oldnewthing/20120802-00/?p=6973)\r\n\r\n\r\n```DOS\r\ncall \"%~dp0elasticsearch-cli.bat\" %%* || goto :eof\r\n```\r\n\r\nWe could get rid of the `endlocal` calls alltogether since the local context is automatically closed when the bat file exits but non CMD contexts still need the explicit `exit /B`\r\n\r\nI will update the PR with the following:\r\n\r\n```bat\r\ncall \"%~dp0elasticsearch-cli.bat\" %%* || goto exit\r\n\r\nendlocal\r\nendlocal\r\n:exit\r\nexit /B %ERRORLEVEL%\r\n```\r\n\r\nWhich I think makes the early exit explicit and satisfies carrying over the exit code to non CMD contexts.\r\n\r\n\r\n\r\n\r\n", "created_at": "2018-12-18T09:12:52Z" }, { "body": "Would you maintain the line breaks that we have? It keeps these scripts looking the same as the Unix scripts too.", "created_at": "2018-12-19T17:40:05Z" } ], "title": "Exit batch files explictly using ERRORLEVEL" }
{ "commits": [ { "message": "Exit batch files explictly using ERRORLEVEL\n\nThis makes sure the exit code is preserved when calling the batch\nfiles from different contexts other than DOS\n\nFixes #29582\n\nThis also fixes specific error codes being masked by an explict\n\nexit /b 1\n\ncausing the useful exitcodes from ExitCodes to be lost." }, { "message": "fix line breaks for calling cli to match the bash scripts" }, { "message": "indent size of bash files is 2, make sure editorconfig does the same for bat files" }, { "message": "update indenting to match bash files" }, { "message": "update elasticsearch-keystore.bat indenting" }, { "message": "Update elasticsearch-node.bat to exit outside of endlocal" } ], "files": [ { "diff": "@@ -8,3 +8,6 @@ indent_style = space\n indent_size = 4\n trim_trailing_whitespace = true\n insert_final_newline = true\n+\n+[*.bat]\n+indent_size = 2", "filename": ".editorconfig", "status": "modified" }, { "diff": "@@ -21,3 +21,5 @@ if defined ES_ADDITIONAL_CLASSPATH_DIRECTORIES (\n -cp \"%ES_CLASSPATH%\" ^\n \"%ES_MAIN_CLASS%\" ^\n %*\n+ \n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-cli.bat", "status": "modified" }, { "diff": "@@ -6,7 +6,9 @@ setlocal enableextensions\n set ES_MAIN_CLASS=org.elasticsearch.common.settings.KeyStoreCli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-keystore.bat", "status": "modified" }, { "diff": "@@ -6,7 +6,9 @@ setlocal enableextensions\n set ES_MAIN_CLASS=org.elasticsearch.cluster.coordination.NodeToolCli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-node.bat", "status": "modified" }, { "diff": "@@ -7,7 +7,10 @@ set ES_MAIN_CLASS=org.elasticsearch.plugins.PluginCli\n set ES_ADDITIONAL_CLASSPATH_DIRECTORIES=lib/tools/plugin-cli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n+ \n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-plugin.bat", "status": "modified" }, { "diff": "@@ -258,3 +258,5 @@ goto:eof\n \n endlocal\n endlocal\n+\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-service.bat", "status": "modified" }, { "diff": "@@ -6,7 +6,9 @@ setlocal enableextensions\n set ES_MAIN_CLASS=org.elasticsearch.index.shard.ShardToolCli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch-shard.bat", "status": "modified" }, { "diff": "@@ -55,3 +55,4 @@ cd /d \"%ES_HOME%\"\n \n endlocal\n endlocal\n+exit /b %ERRORLEVEL%", "filename": "distribution/src/bin/elasticsearch.bat", "status": "modified" }, { "diff": "@@ -12,7 +12,9 @@ set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n set ES_ADDITIONAL_CLASSPATH_DIRECTORIES=lib/tools/security-cli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-certgen.bat", "status": "modified" }, { "diff": "@@ -12,7 +12,9 @@ set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n set ES_ADDITIONAL_CLASSPATH_DIRECTORIES=lib/tools/security-cli\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ 
|| goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-certutil.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.security.authc.esnative.ESNativeRealmM\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-migrate.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.security.authc.saml.SamlMetadataComman\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-saml-metadata.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.security.authc.esnative.tool.SetupPass\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-setup-passwords.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.security.crypto.tool.SystemKeyTool\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-syskeygen.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.security.authc.file.tool.UsersTool\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-security-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/security/src/main/bin/elasticsearch-users.bat", "status": "modified" }, { "diff": "@@ -22,3 +22,4 @@ set CLI_JAR=%ES_HOME%/bin/*\n \n endlocal\n endlocal\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/sql/src/main/bin/elasticsearch-sql-cli.bat", "status": "modified" }, { "diff": "@@ -11,7 +11,9 @@ set ES_MAIN_CLASS=org.elasticsearch.xpack.watcher.trigger.schedule.tool.CronEval\n set ES_ADDITIONAL_SOURCES=x-pack-env;x-pack-watcher-env\n call \"%~dp0elasticsearch-cli.bat\" ^\n %%* ^\n- || exit /b 1\n+ || goto exit\n \n endlocal\n endlocal\n+:exit\n+exit /b %ERRORLEVEL%", "filename": "x-pack/plugin/watcher/src/main/bin/elasticsearch-croneval.bat", "status": "modified" } ] }
{ "body": " Binary doc values are retrieved during the DocValueFetchSubPhase through an instance of ScriptDocValues.\r\n Since 6.0 ScriptDocValues instances are not allowed to reuse the object that they return\r\n (https://github.com/elastic/elasticsearch/issues/26775) but BinaryScriptDocValues doesn't follow this\r\n restriction and reuses instances of BytesRefBuilder among different documents.\r\n This results in `field` values assigned to the wrong document in the response.\r\n This commit fixes this issue by recreating the BytesRef for each value that needs to be returned.\r\n\r\n Fixes #29565", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-17T19:30:09Z" }, { "body": "Thanks @rjernst @jpountz \r\nI pushed a commit that removes the additional copy for strings.", "created_at": "2018-04-18T07:51:48Z" }, { "body": "you already merged it which is fine, I just started looking into it and I wonder if we should instead of having a `BytesRefBuilder` that we copy into from `SortedBinaryDocValues` and then copy again when it's consumed. Wouldn't it be simpler to hold on `BytesRef` directly that we `deepCopy` from the incoming values which we do anyway somehow? This would safe on time of copying? @jimczi @jpountz WDYT", "created_at": "2018-04-18T11:13:51Z" }, { "body": "This is what the PR did initially but I asked Jim to change it because it added one more object allocation for strings. I suspect that object allocations may be easier to skip thanks to escape analysis this way as well, since we don't put the copies in a long-living array. In the end, I don't feel strongly about it, if you think it's better to change back to performing a deep copy of the BytesRef, I'm good with it.", "created_at": "2018-04-18T11:28:15Z" }, { "body": "I wonder if we still need to have a shared version and instead can hold a `String[]` that way we can skip all copies to BytesRef for strings and convert to string on `nextDoc`?", "created_at": "2018-04-18T11:37:56Z" }, { "body": "This would save one copy for the `keyword` case and would not affect `binary` .\r\n+1 I can open a new pr for this. ", "created_at": "2018-04-18T11:46:49Z" }, { "body": "> +1 I can open a new pr for this.\r\n\r\n++", "created_at": "2018-04-18T11:52:48Z" } ], "number": 29567, "title": " Fix binary doc values fetching in _search" }
{ "body": "This commit refactors ScriptDocValues.Strings to directly creates String objects\r\ninstead of using an intermediate BytesRef's copy.\r\nScriptDocValues.Binary is also changed to create a single copy of BytesRef per consumed value.\r\n\r\nRelates #29567", "number": 29581, "review_comments": [], "title": "Avoid BytesRef's copying in ScriptDocValues's Strings" }
{ "commits": [ { "message": "Avoid BytesRef's copying in ScriptDocValues's Strings\n\nThis commit refactors ScriptDocValues.Strings to directly creates String objects\ninstead of using an intermediate BytesRef's copy.\nScriptDocValues.Binary is also changed to create a single copy of BytesRef per consumed value.\n\nRelates #29567" }, { "message": "merge with master" }, { "message": "copy values lazily in ScriptDocValues.Strings to avoid unnecessary utf8 conversion" } ], "files": [ { "diff": "@@ -35,6 +35,7 @@\n import org.joda.time.ReadableDateTime;\n \n import java.io.IOException;\n+import java.io.UncheckedIOException;\n import java.security.AccessController;\n import java.security.PrivilegedAction;\n import java.util.AbstractList;\n@@ -570,90 +571,115 @@ private static boolean[] grow(boolean[] array, int minSize) {\n } else\n return array;\n }\n-\n }\n \n- abstract static class BinaryScriptDocValues<T> extends ScriptDocValues<T> {\n-\n+ public static final class Strings extends ScriptDocValues<String> {\n private final SortedBinaryDocValues in;\n- protected BytesRefBuilder[] values = new BytesRefBuilder[0];\n- protected int count;\n+ private String[] values = new String[0];\n+ private int count;\n+ private boolean valuesSet = false;\n \n- BinaryScriptDocValues(SortedBinaryDocValues in) {\n+ public Strings(SortedBinaryDocValues in) {\n this.in = in;\n }\n \n @Override\n- public void setNextDocId(int docId) throws IOException {\n- if (in.advanceExact(docId)) {\n- resize(in.docValueCount());\n- for (int i = 0; i < count; i++) {\n- // We need to make a copy here, because BytesBinaryDVAtomicFieldData's SortedBinaryDocValues\n- // implementation reuses the returned BytesRef. Otherwise we would end up with the same BytesRef\n- // instance for all slots in the values array.\n- values[i].copyBytes(in.nextValue());\n+ public String get(int index) {\n+ if (valuesSet == false) {\n+ try {\n+ fillValues();\n+ } catch (IOException e) {\n+ throw new UncheckedIOException(e);\n }\n- } else {\n- resize(0);\n }\n+ return values[index];\n+ }\n+\n+ public String getValue() {\n+ return count == 0 ? 
null : values[0];\n }\n \n /**\n * Set the {@link #size()} and ensure that the {@link #values} array can\n * store at least that many entries.\n- */\n- protected void resize(int newSize) {\n+ */\n+ private void resize(int newSize) {\n+ values = ArrayUtil.grow(values, newSize);\n count = newSize;\n- if (newSize > values.length) {\n- final int oldLength = values.length;\n- values = ArrayUtil.grow(values, count);\n- for (int i = oldLength; i < values.length; ++i) {\n- values[i] = new BytesRefBuilder();\n- }\n+ }\n+\n+ private void fillValues() throws IOException {\n+ assert valuesSet == false;\n+ resize(count);\n+ for (int i = 0; i < count; i++) {\n+ values[i] = in.nextValue().utf8ToString();\n+ }\n+ valuesSet = true;\n+ }\n+\n+ @Override\n+ public void setNextDocId(int docId) throws IOException {\n+ if (in.advanceExact(docId)) {\n+ count = in.docValueCount();\n+ valuesSet = false;\n+ } else {\n+ resize(0);\n+ valuesSet = true;\n }\n }\n \n @Override\n public int size() {\n return count;\n }\n-\n }\n \n- public static final class Strings extends BinaryScriptDocValues<String> {\n+ public static final class BytesRefs extends ScriptDocValues<BytesRef> {\n+ private final SortedBinaryDocValues in;\n+ private BytesRef[] values = new BytesRef[0];\n+ private int count;\n \n- public Strings(SortedBinaryDocValues in) {\n- super(in);\n+ public BytesRefs(SortedBinaryDocValues in) {\n+ this.in = in;\n }\n \n @Override\n- public String get(int index) {\n- return values[index].get().utf8ToString();\n+ public void setNextDocId(int docId) throws IOException {\n+ if (in.advanceExact(docId)) {\n+ resize(in.docValueCount());\n+ for (int i = 0; i < count; i++) {\n+ /**\n+ * We need to make a copy here because {@link SortedBinaryDocValues} might reuse the returned value\n+ * and the same instance might be used to return values from multiple documents.\n+ **/\n+ values[i] = BytesRef.deepCopyOf(in.nextValue());\n+ }\n+ } else {\n+ resize(0);\n+ }\n }\n \n- public String getValue() {\n- return count == 0 ? null : get(0);\n+ @Override\n+ public BytesRef get(int index) {\n+ return values[index];\n }\n- }\n-\n- public static final class BytesRefs extends BinaryScriptDocValues<BytesRef> {\n \n- public BytesRefs(SortedBinaryDocValues in) {\n- super(in);\n+ public BytesRef getValue() {\n+ return count == 0 ? new BytesRef() : values[0];\n }\n \n- @Override\n- public BytesRef get(int index) {\n- /**\n- * We need to make a copy here because {@link BinaryScriptDocValues} might reuse the\n- * returned value and the same instance might be used to\n- * return values from multiple documents.\n- **/\n- return values[index].toBytesRef();\n+ /**\n+ * Set the {@link #size()} and ensure that the {@link #values} array can\n+ * store at least that many entries.\n+ */\n+ protected void resize(int newSize) {\n+ values = ArrayUtil.grow(values, newSize);\n+ count = newSize;\n }\n \n- public BytesRef getValue() {\n- return count == 0 ? new BytesRef() : get(0);\n+ @Override\n+ public int size() {\n+ return count;\n }\n \n }", "filename": "server/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java", "status": "modified" } ] }
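One design point worth calling out from the diff above: after the refactor, `Strings` defers pulling values from the underlying `SortedBinaryDocValues` until a value is actually read, so the UTF-8 to `String` conversion is skipped entirely for documents whose values a script never touches. The sketch below only mimics the lazy decoding half of that idea with a hypothetical class; it is not the real `ScriptDocValues` API:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical, simplified version of the lazy materialization used by ScriptDocValues.Strings after the refactor. */
public class LazyStringValuesDemo {
    static final class LazyStrings {
        private final List<byte[]> pendingUtf8 = new ArrayList<>();
        private String[] values = new String[0];
        private boolean materialized;

        /** Called once per document: just remember the raw values, no conversion yet. */
        void setNextDoc(List<byte[]> utf8Values) {
            pendingUtf8.clear();
            pendingUtf8.addAll(utf8Values);
            materialized = false;
        }

        /** Decoding happens only on first access, mirroring fillValues() in the PR. */
        String get(int index) {
            if (!materialized) {
                values = new String[pendingUtf8.size()];
                for (int i = 0; i < values.length; i++) {
                    values[i] = new String(pendingUtf8.get(i), StandardCharsets.UTF_8);
                }
                materialized = true;
            }
            return values[index];
        }
    }

    public static void main(String[] args) {
        LazyStrings strings = new LazyStrings();
        strings.setNextDoc(List.of("foo".getBytes(StandardCharsets.UTF_8)));
        // No UTF-8 decoding has happened yet; it only runs here:
        System.out.println(strings.get(0));
    }
}
```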
{ "body": "This issue was reported at https://discuss.elastic.co/t/issue-storing-binary-fields-in-6-2-2/128290 against 6.2 but it also reproduces against master.\r\n\r\nHere is a recreation. All documents report the same value in their doc-value field: the value of the last document in the bulk request.\r\n\r\n```\r\nPUT blob_test\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"number_of_replicas\": 0\r\n }, \r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"blob\": {\r\n \"type\": \"binary\",\r\n \"doc_values\": true\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPOST blob_test/_doc/_bulk\r\n{ \"index\": { \"_id\": \"id_0\" } }\r\n{ \"blob\": \"aWRfMCBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_1\" } }\r\n{ \"blob\": \"aWRfMSBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_2\" } }\r\n{ \"blob\": \"aWRfMiBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_3\" } }\r\n{ \"blob\": \"aWRfMyBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_4\" } }\r\n{ \"blob\": \"aWRfNCBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_5\" } }\r\n{ \"blob\": \"aWRfNSBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_6\" } }\r\n{ \"blob\": \"aWRfNiBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_7\" } }\r\n{ \"blob\": \"aWRfNyBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_8\" } }\r\n{ \"blob\": \"aWRfOCBhbmQgc29tZSB0ZXh0\" }\r\n{ \"index\": { \"_id\": \"id_9\" } }\r\n{ \"blob\": \"aWRfOSBhbmQgc29tZSB0ZXh0\" }\r\n\r\n\r\nGET blob_test/_search \r\n{\r\n \"docvalue_fields\": [\"blob\"]\r\n}\r\n```", "comments": [ { "body": "Fortunately this is a search bug, the indexed doc values are fine. I opened https://github.com/elastic/elasticsearch/pull/29567 to fix it.", "created_at": "2018-04-17T19:31:27Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-17T19:31:38Z" } ], "number": 29565, "title": "Doc value get indexed into the wrong document" }
{ "body": " Binary doc values are retrieved during the DocValueFetchSubPhase through an instance of ScriptDocValues.\r\n Since 6.0 ScriptDocValues instances are not allowed to reuse the object that they return\r\n (https://github.com/elastic/elasticsearch/issues/26775) but BinaryScriptDocValues doesn't follow this\r\n restriction and reuses instances of BytesRefBuilder among different documents.\r\n This results in `field` values assigned to the wrong document in the response.\r\n This commit fixes this issue by recreating the BytesRef for each value that needs to be returned.\r\n\r\n Fixes #29565", "number": 29567, "review_comments": [], "title": " Fix binary doc values fetching in _search" }
{ "commits": [ { "message": " Fix binary doc values fetching in _search\n\n Binary doc values are retrieved during the DocValueFetchSubPhase through an instance of ScriptDocValues.\n Since 6.0 ScriptDocValues instances are not allowed to reuse the object that they return\n (https://github.com/elastic/elasticsearch/issues/26775) but BinaryScriptDocValues doesn't follow this\n restriction and reuses instances of BytesRefBuilder among different documents.\n This results in `field` values assigned to the wrong document in the response.\n This commit fixes this issue by recreating the BytesRef for each value that needs to be returned.\n\n Fixes #29565" }, { "message": "bytesref can be reused for string based script doc values" }, { "message": "Merge branch 'master' into bug/binary_doc_values_reuse" }, { "message": "Merge branch 'master' into bug/binary_doc_values_reuse" } ], "files": [ { "diff": "@@ -633,7 +633,12 @@ public String get(int index) {\n \n public BytesRef getBytesValue() {\n if (size() > 0) {\n- return values[0].get();\n+ /**\n+ * We need to make a copy here because {@link BinaryScriptDocValues} might reuse the\n+ * returned value and the same instance might be used to\n+ * return values from multiple documents.\n+ **/\n+ return values[0].toBytesRef();\n } else {\n return null;\n }\n@@ -658,14 +663,19 @@ public BytesRefs(SortedBinaryDocValues in) {\n \n @Override\n public BytesRef get(int index) {\n- return values[index].get();\n+ /**\n+ * We need to make a copy here because {@link BinaryScriptDocValues} might reuse the\n+ * returned value and the same instance might be used to\n+ * return values from multiple documents.\n+ **/\n+ return values[index].toBytesRef();\n }\n \n public BytesRef getValue() {\n if (count == 0) {\n return new BytesRef();\n }\n- return values[0].get();\n+ return values[0].toBytesRef();\n }\n \n }", "filename": "server/src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java", "status": "modified" }, { "diff": "@@ -52,7 +52,6 @@ public void testDocValue() throws Exception {\n \n final DocumentMapper mapper = mapperService.documentMapperParser().parse(\"test\", new CompressedXContent(mapping));\n \n-\n List<BytesRef> bytesList1 = new ArrayList<>(2);\n bytesList1.add(randomBytes());\n bytesList1.add(randomBytes());\n@@ -123,22 +122,26 @@ public void testDocValue() throws Exception {\n // Test whether ScriptDocValues.BytesRefs makes a deepcopy\n fieldData = indexFieldData.load(reader);\n ScriptDocValues<?> scriptValues = fieldData.getScriptValues();\n- scriptValues.setNextDocId(0);\n- assertEquals(2, scriptValues.size());\n- assertEquals(bytesList1.get(0), scriptValues.get(0));\n- assertEquals(bytesList1.get(1), scriptValues.get(1));\n-\n- scriptValues.setNextDocId(1);\n- assertEquals(1, scriptValues.size());\n- assertEquals(bytes1, scriptValues.get(0));\n-\n- scriptValues.setNextDocId(2);\n- assertEquals(0, scriptValues.size());\n-\n- scriptValues.setNextDocId(3);\n- assertEquals(2, scriptValues.size());\n- assertEquals(bytesList2.get(0), scriptValues.get(0));\n- assertEquals(bytesList2.get(1), scriptValues.get(1));\n+ Object[][] retValues = new BytesRef[4][0];\n+ for (int i = 0; i < 4; i++) {\n+ scriptValues.setNextDocId(i);\n+ retValues[i] = new BytesRef[scriptValues.size()];\n+ for (int j = 0; j < retValues[i].length; j++) {\n+ retValues[i][j] = scriptValues.get(j);\n+ }\n+ }\n+ assertEquals(2, retValues[0].length);\n+ assertEquals(bytesList1.get(0), retValues[0][0]);\n+ assertEquals(bytesList1.get(1), retValues[0][1]);\n+\n+ assertEquals(1, 
retValues[1].length);\n+ assertEquals(bytes1, retValues[1][0]);\n+\n+ assertEquals(0, retValues[2].length);\n+\n+ assertEquals(2, retValues[3].length);\n+ assertEquals(bytesList2.get(0), retValues[3][0]);\n+ assertEquals(bytesList2.get(1), retValues[3][1]);\n }\n \n private static BytesRef randomBytes() {", "filename": "server/src/test/java/org/elasticsearch/index/fielddata/BinaryDVFieldDataTests.java", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**:all\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**:all\r\n\r\n**OS version**:all\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nperhaps setHeaders method can't override default header in RestClient, because httpHeader names are case-insensitive.suggest converting all default header and request header names to lowercase/uppercase.\r\n\r\n![diff](https://cloud.githubusercontent.com/assets/24735504/21955372/f06fdf9e-daa4-11e6-8caf-0dcb51641d0c.png)\r\n\r\n", "comments": [ { "body": "I’d like to take this up if no one is already working on this or has been assigned to this.", "created_at": "2018-04-05T00:17:00Z" }, { "body": "go ahead @adityasrini thanks!", "created_at": "2018-04-05T10:46:47Z" }, { "body": "given that we have decided (see #30616) to remove support for default headers, as they are already supported by the apache http client which we use to perform requests, I am going to close this.", "created_at": "2018-07-16T11:32:15Z" } ], "number": 22623, "title": "Make headers case-insensitive" }
{ "body": "- RestClient\r\n-- remove old Set tracking \"added\" headers.\r\n-- use HttpHeaders.containsHeader instead.\r\n-- Fixes #22623\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\nYES\r\n\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\nYES\r\n\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\nYES\r\n\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\nYES\r\n\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\nYES(osx)\r\n\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\nN/A", "number": 29554, "review_comments": [], "title": "Default headers overriding existing headers *FIX*" }
{ "commits": [ { "message": "Default headers overriding existing headers *FIX*\n\n- RestClient\n-- remove old Set tracking \"added\" headers.\n-- use HttpHeaders.containsHeader instead.\n-- Fixes #22623" } ], "files": [ { "diff": "@@ -430,14 +430,13 @@ public void cancelled() {\n private void setHeaders(HttpRequest httpRequest, Header[] requestHeaders) {\n Objects.requireNonNull(requestHeaders, \"request headers must not be null\");\n // request headers override default headers, so we don't add default headers if they exist as request headers\n- final Set<String> requestNames = new HashSet<>(requestHeaders.length);\n for (Header requestHeader : requestHeaders) {\n Objects.requireNonNull(requestHeader, \"request header must not be null\");\n httpRequest.addHeader(requestHeader);\n- requestNames.add(requestHeader.getName());\n }\n+ // default headers shouldnt override existing headers...\n for (Header defaultHeader : defaultHeaders) {\n- if (requestNames.contains(defaultHeader.getName()) == false) {\n+ if (httpRequest.containsHeader(defaultHeader.getName()) == false) {\n httpRequest.addHeader(defaultHeader);\n }\n }", "filename": "client/rest/src/main/java/org/elasticsearch/client/RestClient.java", "status": "modified" } ] }
{ "body": "Some build tasks require older JDKs. For example, the BWC build tasks for older versions of Elasticsearch require older JDKs. It is onerous to require these be configured when merely compiling Elasticsearch, the requirement that they be strictly set to appropriate values should only be enforced if these tasks are going to be executed. To address this, we lazy configure these tasks.\r\n\r\nRelates #29493\r\n", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-14T16:44:57Z" }, { "body": "It might be nicer to start using gradles new Property classes (or Provider, I can't remember exactly what they called them), instead of using closures to delay evaluating in a string. Also, it would be nice to still fail to configure if these env vars are not available when the relevant task will be run. For this, we can check if the relevant task is in the task graph after it is constructed, and then verify the envs. I'm happy to do both of these as follow ups.\r\n\r\nLGTM", "created_at": "2018-04-14T18:16:18Z" }, { "body": "> It might be nicer to start using gradles new Property classes (or Provider, I can't remember exactly what they called them), instead of using closures to delay evaluating in a string.\r\n\r\nProvider.\r\n\r\n> Also, it would be nice to still fail to configure if these env vars are not available when the relevant task will be run. For this, we can check if the relevant task is in the task graph after it is constructed, and then verify the envs.\r\n\r\n+1", "created_at": "2018-04-14T19:44:33Z" } ], "number": 29519, "title": "Lazy configure build tasks that require older JDKs" }
{ "body": "This commit moves the checks on JAVAX_HOME (where X is the java version\r\nnumber) existing to the end of gradle's configuration phase, and based\r\non whether the tasks needing the java home are configured to execute.\r\n\r\nrelates #29519\r\n", "number": 29548, "review_comments": [ { "body": "Can you look up the `Project` from the `Task`?", "created_at": "2018-04-17T00:19:29Z" }, { "body": "Yep, changed in a98747a.", "created_at": "2018-04-17T16:50:48Z" } ], "title": "Build: Move java home checks to pre-execution phase" }
{ "commits": [ { "message": "Build: Move java home checks to pre-execution phase\n\nThis commit moves the checks on JAVAX_HOME (where X is the java version\nnumber) existing to the end of gradle's configuration phase, and based\non whether the tasks needing the java home are configured to execute.\n\nrelates #29519" }, { "message": "forgot to set project in node info" }, { "message": "iter" }, { "message": "Merge branch 'master' into java_home_check" }, { "message": "Merge branch 'master' into java_home_check" }, { "message": "Merge branch 'master' into java_home_check" }, { "message": "Merge branch 'master' into java_home_check" } ], "files": [ { "diff": "@@ -38,6 +38,7 @@ import org.gradle.api.artifacts.ModuleVersionIdentifier\n import org.gradle.api.artifacts.ProjectDependency\n import org.gradle.api.artifacts.ResolvedArtifact\n import org.gradle.api.artifacts.dsl.RepositoryHandler\n+import org.gradle.api.execution.TaskExecutionGraph\n import org.gradle.api.plugins.JavaPlugin\n import org.gradle.api.publish.maven.MavenPublication\n import org.gradle.api.publish.maven.plugins.MavenPublishPlugin\n@@ -221,21 +222,34 @@ class BuildPlugin implements Plugin<Project> {\n return System.getenv('JAVA' + version + '_HOME')\n }\n \n- /**\n- * Get Java home for the project for the specified version. If the specified version is not configured, an exception with the specified\n- * message is thrown.\n- *\n- * @param project the project\n- * @param version the version of Java home to obtain\n- * @param message the exception message if Java home for the specified version is not configured\n- * @return Java home for the specified version\n- * @throws GradleException if Java home for the specified version is not configured\n- */\n- static String getJavaHome(final Project project, final int version, final String message) {\n- if (project.javaVersions.get(version) == null) {\n- throw new GradleException(message)\n+ /** Add a check before gradle execution phase which ensures java home for the given java version is set. 
*/\n+ static void requireJavaHome(Task task, int version) {\n+ Project rootProject = task.project.rootProject // use root project for global accounting\n+ if (rootProject.hasProperty('requiredJavaVersions') == false) {\n+ rootProject.rootProject.ext.requiredJavaVersions = [:].withDefault{key -> return []}\n+ rootProject.gradle.taskGraph.whenReady { TaskExecutionGraph taskGraph ->\n+ List<String> messages = []\n+ for (entry in rootProject.requiredJavaVersions) {\n+ if (rootProject.javaVersions.get(entry.key) != null) {\n+ continue\n+ }\n+ List<String> tasks = entry.value.findAll { taskGraph.hasTask(it) }.collect { \" ${it.path}\" }\n+ if (tasks.isEmpty() == false) {\n+ messages.add(\"JAVA${entry.key}_HOME required to run tasks:\\n${tasks.join('\\n')}\")\n+ }\n+ }\n+ if (messages.isEmpty() == false) {\n+ throw new GradleException(messages.join('\\n'))\n+ }\n+ }\n }\n- return project.javaVersions.get(version)\n+ rootProject.requiredJavaVersions.get(version).add(task)\n+ }\n+\n+ /** A convenience method for getting java home for a version of java and requiring that version for the given task to execute */\n+ static String getJavaHome(final Task task, final int version) {\n+ requireJavaHome(task, version)\n+ return task.project.javaVersions.get(version)\n }\n \n private static String findRuntimeJavaHome(final String compilerJavaHome) {", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@ package org.elasticsearch.gradle.test\n \n import org.apache.tools.ant.DefaultLogger\n import org.apache.tools.ant.taskdefs.condition.Os\n+import org.elasticsearch.gradle.BuildPlugin\n import org.elasticsearch.gradle.LoggedExec\n import org.elasticsearch.gradle.Version\n import org.elasticsearch.gradle.VersionProperties\n@@ -607,6 +608,9 @@ class ClusterFormationTasks {\n }\n \n Task start = project.tasks.create(name: name, type: DefaultTask, dependsOn: setup)\n+ if (node.javaVersion != null) {\n+ BuildPlugin.requireJavaHome(start, node.javaVersion)\n+ }\n start.doLast(elasticsearchRunner)\n return start\n }", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy", "status": "modified" }, { "diff": "@@ -36,6 +36,9 @@ import static org.elasticsearch.gradle.BuildPlugin.getJavaHome\n * A container for the files and configuration associated with a single node in a test cluster.\n */\n class NodeInfo {\n+ /** Gradle project this node is part of */\n+ Project project\n+\n /** common configuration for all nodes, including this one */\n ClusterConfiguration config\n \n@@ -84,6 +87,9 @@ class NodeInfo {\n /** directory to install plugins from */\n File pluginsTmpDir\n \n+ /** Major version of java this node runs with, or {@code null} if using the runtime java version */\n+ Integer javaVersion\n+\n /** environment variables to start the node with */\n Map<String, String> env\n \n@@ -109,6 +115,7 @@ class NodeInfo {\n NodeInfo(ClusterConfiguration config, int nodeNum, Project project, String prefix, Version nodeVersion, File sharedDir) {\n this.config = config\n this.nodeNum = nodeNum\n+ this.project = project\n this.sharedDir = sharedDir\n if (config.clusterName != null) {\n clusterName = config.clusterName\n@@ -165,12 +172,11 @@ class NodeInfo {\n args.add(\"${esScript}\")\n }\n \n+\n if (nodeVersion.before(\"6.2.0\")) {\n- env = ['JAVA_HOME': \"${-> getJavaHome(project, 8, \"JAVA8_HOME must be set to run BWC tests against [\" + nodeVersion + \"]\")}\"]\n+ javaVersion = 8\n } else if 
(nodeVersion.onOrAfter(\"6.2.0\") && nodeVersion.before(\"6.3.0\")) {\n- env = ['JAVA_HOME': \"${-> getJavaHome(project, 9, \"JAVA9_HOME must be set to run BWC tests against [\" + nodeVersion + \"]\")}\"]\n- } else {\n- env = ['JAVA_HOME': (String) project.runtimeJavaHome]\n+ javaVersion = 9\n }\n \n args.addAll(\"-E\", \"node.portsfile=true\")\n@@ -182,7 +188,7 @@ class NodeInfo {\n // in the cluster-specific options\n esJavaOpts = String.join(\" \", \"-ea\", \"-esa\", esJavaOpts)\n }\n- env.put('ES_JAVA_OPTS', esJavaOpts)\n+ env = ['ES_JAVA_OPTS': esJavaOpts]\n for (Map.Entry<String, String> property : System.properties.entrySet()) {\n if (property.key.startsWith('tests.es.')) {\n args.add(\"-E\")\n@@ -242,13 +248,19 @@ class NodeInfo {\n return Native.toString(shortPath).substring(4)\n }\n \n+ /** Return the java home used by this node. */\n+ String getJavaHome() {\n+ return javaVersion == null ? project.runtimeJavaHome : project.javaVersions.get(javaVersion)\n+ }\n+\n /** Returns debug string for the command that started this node. */\n String getCommandString() {\n String esCommandString = \"\\nNode ${nodeNum} configuration:\\n\"\n esCommandString += \"|-----------------------------------------\\n\"\n esCommandString += \"| cwd: ${cwd}\\n\"\n esCommandString += \"| command: ${executable} ${args.join(' ')}\\n\"\n esCommandString += '| environment:\\n'\n+ esCommandString += \"| JAVA_HOME: ${javaHome}\\n\"\n env.each { k, v -> esCommandString += \"| ${k}: ${v}\\n\" }\n if (config.daemonize) {\n esCommandString += \"|\\n| [${wrapperScript.name}]\\n\"", "filename": "buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy", "status": "modified" }, { "diff": "@@ -147,9 +147,9 @@ subprojects {\n workingDir = checkoutDir\n if ([\"5.6\", \"6.0\", \"6.1\"].contains(bwcBranch)) {\n // we are building branches that are officially built with JDK 8, push JAVA8_HOME to JAVA_HOME for these builds\n- environment('JAVA_HOME', \"${-> getJavaHome(project, 8, \"JAVA8_HOME is required to build BWC versions for BWC branch [\" + bwcBranch + \"]\")}\")\n+ environment('JAVA_HOME', getJavaHome(it, 8))\n } else if (\"6.2\".equals(bwcBranch)) {\n- environment('JAVA_HOME', \"${-> getJavaHome(project, 9, \"JAVA9_HOME is required to build BWC versions for BWC branch [\" + bwcBranch + \"]\")}\")\n+ environment('JAVA_HOME', getJavaHome(it, 9))\n } else {\n environment('JAVA_HOME', project.compilerJavaHome)\n }", "filename": "distribution/bwc/build.gradle", "status": "modified" }, { "diff": "@@ -77,7 +77,7 @@ if (Os.isFamily(Os.FAMILY_WINDOWS)) {\n dependsOn unzip\n executable = new File(project.runtimeJavaHome, 'bin/java')\n env 'CLASSPATH', \"${ -> project.configurations.oldesFixture.asPath }\"\n- env 'JAVA_HOME', \"${-> getJavaHome(project, 7, \"JAVA7_HOME must be set to run reindex-from-old\")}\"\n+ env 'JAVA_HOME', getJavaHome(it, 7)\n args 'oldes.OldElasticsearch',\n baseDir,\n unzip.temporaryDir,", "filename": "qa/reindex-from-old/build.gradle", "status": "modified" } ] }
{ "body": "I think the behavior specified in [`IndexAliasesIT#testIndicesGetAliases`](https://github.com/elastic/elasticsearch/blob/5b2ab96364335539affe99151546552423700f6e/core/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java#L571-L583) is wrong. Namely:\r\n - create indices foobar, test, test123, foobarbaz, bazbar\r\n - add an alias alias1 -> foobar\r\n - add an alias alias2 -> foobar\r\n - execute get aliases on the transport layer specifying alias1 as the only alias to get\r\n - the response includes foobar, test, test123, foobarbaz, bazbar albeit with empty alias metadata for all indices except foobar which contains alias1 only\r\n - previously the response would only contain foobar with alias metadata for alias1\r\n\r\nThis was a breaking change resulting from #25114, the specific change in behavior arising from a [change to MetaData](https://github.com/elastic/elasticsearch/commit/5b2ab96364335539affe99151546552423700f6e#diff-d6d141c41772a9088a29c2b838e5d8c4).\r\n\r\nI am opening this personally considering it a bug but for discussion where we might decide to only document this behavior (not my preference, I think this behavior is weird and not intuitive).\r\n\r\nRelates #27743", "comments": [ { "body": "It's worth noting that this only affects the transport layer. The REST layer doesn't include the extra indices. The transport layer should probably be changed to return only indices that contain the alias when an alias name is specified.", "created_at": "2017-12-13T03:06:17Z" }, { "body": "The transport client is exactly what this issue is being reported for.", "created_at": "2017-12-13T04:51:23Z" }, { "body": "@jasontedor @dakrone \r\nYes Transport client got affected. Now we are explicitly filtering the indices which contains AliasMetaData. Its performance hit too for our application. \r\n\r\nIn which release /version we can expect its fix.? \r\n\r\n", "created_at": "2017-12-13T05:12:13Z" }, { "body": "> The transport client is exactly what this issue is being reported for.\n\nYes I understand, I was only clarifying for the sake of other people\nreading.\n\nOn Dec 12, 2017 9:51 PM, \"Jason Tedor\" <notifications@github.com> wrote:\n\n> The transport client is exactly what this issue is being reported for.\n>\n> —\n> You are receiving this because you were assigned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/elastic/elasticsearch/issues/27763#issuecomment-351281851>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AABKdIlsoCe_HDhcnutz3ym833dV9H3Nks5s_1fNgaJpZM4Q-Cu4>\n> .\n>\n", "created_at": "2017-12-13T05:14:37Z" }, { "body": "> In which release /version we can expect its fix.?\r\n\r\nRight now you can not have any expectation, no decision has been reached. ", "created_at": "2017-12-13T11:30:20Z" }, { "body": "+1", "created_at": "2017-12-15T14:09:58Z" }, { "body": "We discussed this in Fix-it-Friday and agree that this is a bug. Would you take care of this @dakrone.", "created_at": "2017-12-15T14:40:13Z" }, { "body": "@antitech I want to be clear that while we agree this is bug, it is not a high priority bug so there is no expectation about the timeline for a fix.", "created_at": "2017-12-15T14:40:59Z" }, { "body": "@jasontedor its ok. I'll apply any temporary fix for now . 
Just a request can you please attach this bug with aliases tag.", "created_at": "2017-12-15T16:52:44Z" }, { "body": "+1", "created_at": "2018-01-19T10:09:38Z" }, { "body": "Reopening this issue, because a qa test failed in another project, in this case the aliases were being expanded (via AliasesRequest#aliases(...)) before arriving in the transport action.", "created_at": "2018-01-23T08:08:36Z" }, { "body": "@martijnvg Is it possible to resolve this issue?", "created_at": "2018-04-13T02:01:27Z" } ], "number": 27763, "title": "Get aliases for specific aliases returns all indices" }
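The comments above mention filtering on the client side as a stop-gap until the bug is fixed. A sketch of that workaround, assuming a 6.x transport client and the `GetAliasesResponse`/`AliasMetaData` types of that era: keep only the indices whose alias list is non-empty.

```java
import java.util.ArrayList;
import java.util.List;

import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.AliasMetaData;

public class GetAliasesWorkaround {

    /** Returns only the indices that actually carry one of the requested aliases. */
    static List<String> indicesWithAliases(Client client, String... aliases) {
        GetAliasesResponse response = client.admin().indices().prepareGetAliases(aliases).get();
        List<String> indices = new ArrayList<>();
        for (ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {
            if (entry.value.isEmpty() == false) { // drop the empty entries described above
                indices.add(entry.key);
            }
        }
        return indices;
    }
}
```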
{ "body": "If a get alias api call requests a specific alias pattern then\r\nindices not having any matching aliases should not be included in the response.\r\n\r\nThis is a second attempt to fix this (first attempt was #28294).\r\nThe reason that the first attempt was reverted is because when xpack\r\nsecurity is enabled then index expression (like * or _all) are resolved\r\nprior to when a request is processed in the get aliases transport action,\r\nthen `MetaData#findAliases` can't know whether requested all where\r\nrequested since it was already expanded in concrete alias names. This\r\nchange now adds an additional field to the request calss that keeps track\r\nwhether all aliases where requested.\r\n\r\nCloses #27763", "number": 29538, "review_comments": [ { "body": "AFAICS only `{\"_all\"}` / `empty` / `null` will return `true` and `{\"*\"}` -> `false`.\r\nWould this violate #25114 ? ( see comment below )", "created_at": "2018-04-23T09:47:21Z" }, { "body": "if `*` is used, is it possible that only indices with aliases will be added to the result ? ", "created_at": "2018-04-23T09:48:54Z" }, { "body": "The tests in #25114 use wildcard in index names and that is why I think no test failed. But I think it makes sense to also return `true` if `*` is used as alias name.", "created_at": "2018-04-24T08:07:05Z" }, { "body": "As far as I understand in the REST layer, we don't print out any index for which there are no aliases to return, but only in case the alias (name) parameter was provided.(https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java#L139). I believe this change tries to mimic the same behaviour for the transport client, but in the REST layer we don't look at what the name parameter matches, only at whether it's provided or not:\r\n\r\n```\r\ncurl localhost:9200/_alias?pretty\r\n{\r\n \"index2\" : {\r\n \"aliases\" : { }\r\n },\r\n \"index\" : {\r\n \"aliases\" : {\r\n \"alias\" : { }\r\n }\r\n }\r\n}\r\n\r\ncurl localhost:9200/_alias/_all?pretty\r\n{\r\n \"index\" : {\r\n \"aliases\" : {\r\n \"alias\" : { }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThat may be right or wrong, but I think we should try to return the same results if we make this change, so I'd say without changing the behaviour at REST (although we may want to discuss what the right thing to do is) the best we can do at transport is to only look at whether any specific alias was requested rather than whether the expression matched all or not.\r\n\r\nFurthermore, I'd expect that once these changes are made to `MetaData`, the REST action should be updated as some logic can be removed? By the way, related but it should not affect this PR, there's also #28799 under review that is moving the REST logic to the transport layer, yet the REST logic remains in `GetAliasesResponse#toXContent` which doesn't change what the transport client returns.\r\n\r\nI would probably consider renaming the `AliasesRequest#aliases` method (to `replaceAliases`?) and make sure that it's only used internally (although it needs to be public), and have users call the current setter. We clearly can't have both call the same method or we lose information about what was set in the first place. 
That would make it possible to keep track of whether specific aliases were requested or not through a flag, similarly to what you do now.", "created_at": "2018-05-15T19:32:35Z" }, { "body": "> That may be right or wrong, but I think we should try to return the same results if we make this change, so I'd say without changing the behaviour at REST (although we may want to discuss what the right thing to do is) the best we can do at transport is to only look at whether any specific alias was requested rather than whether the expression matched all or not.\r\n\r\nMakes sense. I will make sure that the transport client only care about whether aliases was provided or not like in the rest action. And like you mentioned in chat, the `matchAllAliases` should then be renamed to something else to reflect this (`wasAliasExpressionSet`?).\r\n\r\n> Furthermore, I'd expect that once these changes are made to MetaData, the REST action should be updated as some logic can be removed? By the way, related but it should not affect this PR, there's also #28799 under review that is moving the REST logic to the transport layer, yet the REST logic remains in GetAliasesResponse#toXContent which doesn't change what the transport client returns.\r\n\r\nAgreed. I will wait for #28799 to get in before working on that.\r\n\r\n> I would probably consider renaming the AliasesRequest#aliases method (to replaceAliases?) and make sure that it's only used internally (although it needs to be public), and have users call the current setter. We clearly can't have both call the same method or we lose information about what was set in the first place. That would make it possible to keep track of whether specific aliases were requested or not through a flag, similarly to what you do now.\r\n\r\nYes, I totally agree here.", "created_at": "2018-05-17T08:56:47Z" }, { "body": "this is not exactly the same as the previous matchAllAliases method. I always wondered why, so maybe a good time to fix it, but this will change the behaviour in subtle ways when a '*' is provided or when an array the contains '_all' with more than one item is provided..", "created_at": "2018-06-15T11:18:17Z" }, { "body": "given how weird this API is, I wonder if we could keep this logic contained in TransportGetAliasesAction, similar to what we have in RestGetAliasesAction, rather than introducing this flag in MetaData and exposing it to potentially other API.", "created_at": "2018-06-15T11:24:20Z" }, { "body": "I don't think this is enough for transport client users. We are trying to fix the transport client behaviour, but effectively for this to work, transport client users need to migrate to the new setter, while the previous \"setter\" method is still around. How will users know that they have to move away from it? I would rather move our own code (x-pack) to use a different method to replace aliases, and leave the original setter alone, but have it set the flag that we need. Would that be possible?", "created_at": "2018-06-15T11:26:35Z" }, { "body": "I was afraid that this change would be much larger compared to what is currently done in the PR. But taking a second look at it, I think is is ok. I will rename the existing aliases(...) setter to replaceAliases(...)", "created_at": "2018-06-15T11:35:47Z" }, { "body": "Makes sense. 
I will try to move it to TransportGetAliasesAction.", "created_at": "2018-06-15T11:37:10Z" }, { "body": "great!", "created_at": "2018-06-15T11:44:40Z" }, { "body": "> but this will change the behaviour in subtle ways when a '*' is provided or when an array the contains '_all' with more than one item is provided\r\n\r\nIf _all, * with other alias names is provided then `Strings.isAllOrWildcard(...)` return false, so I think that is ok? I will also look into if it is possible to change TransportGetAliasesAction so that we don't have to change this method, like I mentioned in the other comment.", "created_at": "2018-06-15T11:46:38Z" }, { "body": "but the previous matchAllAliases didn't look for the wildcard, and only looked at whether _all was anywhere within the array.", "created_at": "2018-06-15T11:54:18Z" }, { "body": "> and only looked at whether _all was anywhere within the array\r\n\r\nOr whether the array was an empty array, which sort of matched with the logic inside `RestGetAliasesAction`.\r\n\r\nIf you take a look at `RestGetAliasesAction` at line 81 where `paramAsStringArrayOrEmptyIfAll(...)` gets execute, it will replace `_all` and `*` with an empty array.", "created_at": "2018-06-15T12:00:21Z" }, { "body": "good point. ok then we can say that we are making transport client work the same way as REST, that should be good", "created_at": "2018-06-15T12:09:10Z" }, { "body": "I think that you can simplify this to `aliases = randomIndicesNames(0, 5);` as it randomly returns an empty array as well.", "created_at": "2018-06-22T05:58:35Z" }, { "body": "What about calling `setAliases(aliases)` instead ?", "created_at": "2018-06-22T06:10:00Z" }, { "body": "Can you check this comment ;)", "created_at": "2018-06-22T06:10:43Z" }, { "body": "s/where/were", "created_at": "2018-06-22T06:19:54Z" }, { "body": "I keep on having problems remembering what `aliasesProvided` means as with either `aliases` or `setAliases` aliases can be specified ...\r\nWhat about adding some comments to the `aliases` and `setAliases` to make the difference more obvious to the users of this class ?", "created_at": "2018-06-22T07:02:44Z" }, { "body": "Can this lead to user code change? ( as you changed the tests above )", "created_at": "2018-06-22T07:20:54Z" }, { "body": "shouldn't this be in case any alias was requested (what namesProvided does in RestGetAliasesAction), meaning if aliases are null or empty array? I know this is subtle and we may want to change it in the future, but I think we should do exactly what the corresponding REST action does.", "created_at": "2018-06-28T12:23:31Z" }, { "body": "small thing, but I'd prefer if we kept this change out of this PR. It is not required and it changes the behaviour ever so slightly.", "created_at": "2018-06-28T12:33:54Z" }, { "body": "would you mind writing a small unit test for this transport action?", "created_at": "2018-06-28T12:42:41Z" }, { "body": "++", "created_at": "2018-06-28T12:43:07Z" }, { "body": "I think we can be more open on this and say: Used when wildcards expressions need to be replaced with concrete aliases throughout the execution of the request.", "created_at": "2018-06-28T12:44:20Z" }, { "body": "we could as well make this public, I don't see harm in that. After all it's what the user set.", "created_at": "2018-06-28T12:45:08Z" }, { "body": "Ideally, the new logic replaces the corresponding logic that was only applied at REST up until now, hence I would expect a change to RestGetAliasesAction as well. 
I think that the REST logic though is still needed in a mixed cluster, in case the node that runs the transport action is on an older version and does not have the new logic. It would be nice though to isolate/add comments to the REST action to identify what we can/should remove in the future, so we don't forget to do so. Maybe add an assertion that fails once current version is 8 so we adapt the REST action and remove the code that's not needed (once we know for sure we cannot talk to a node that has the older version of the transport action)", "created_at": "2018-06-28T13:18:18Z" }, { "body": "I am not sure why we need the intersection here. We have just resolved the indices to concrete indices based on the same cluster state, so it should all be existing indices?", "created_at": "2018-06-29T09:15:36Z" }, { "body": "do we want to assert that the index was not already there, just to make sure?", "created_at": "2018-06-29T09:15:53Z" } ], "title": "Do not return all indices if a specific alias is requested via get aliases api." }
{ "commits": [ { "message": "Do not return all indices if a specific alias is requested via get aliases api.\n\nIf a get alias api call requests a specific alias pattern then\nindices not having any matching aliases should not be included in the response.\n\nThis is a second attempt to fix this (first attempt was #28294).\nThe reason that the first attempt was reverted is because when xpack\nsecurity is enabled then index expression (like * or _all) are resolved\nprior to when a request is processed in the get aliases transport action,\nthen `MetaData#findAliases` can't know whether requested all where\nrequested since it was already expanded in concrete alias names. This\nchange replaces aliases(...) replaceAliases(...) method on AliasesRequests\nclass and leave the aliases(...) method on subclasses. So there is a distinction\nbetween when xpack security replaces aliases and a user setting aliases via\nthe transport or high level http client.\n\nCloses #27763" }, { "message": "instead of tracking whether aliases were specified in original request,\nkeep track the original aliases" }, { "message": "Stop using HppcMaps.intersection(...) as concrete indices should always contains indices that exist" }, { "message": "iter" }, { "message": "fixed NPE error" } ], "files": [ { "diff": "@@ -33,9 +33,11 @@ public interface AliasesRequest extends IndicesRequest.Replaceable {\n String[] aliases();\n \n /**\n- * Sets the array of aliases that the action relates to\n+ * Replaces current aliases with the provided aliases.\n+ *\n+ * Sometimes aliases expressions need to be resolved to concrete aliases prior to executing the transport action.\n */\n- AliasesRequest aliases(String... aliases);\n+ void replaceAliases(String... aliases);\n \n /**\n * Returns true if wildcards expressions among aliases should be resolved, false otherwise", "filename": "server/src/main/java/org/elasticsearch/action/AliasesRequest.java", "status": "modified" }, { "diff": "@@ -302,7 +302,6 @@ public AliasActions index(String index) {\n /**\n * Aliases to use with this action.\n */\n- @Override\n public AliasActions aliases(String... aliases) {\n if (type == AliasActions.Type.REMOVE_INDEX) {\n throw new IllegalArgumentException(\"[aliases] is unsupported for [\" + type + \"]\");\n@@ -428,6 +427,11 @@ public String[] aliases() {\n return aliases;\n }\n \n+ @Override\n+ public void replaceAliases(String... aliases) {\n+ this.aliases = aliases;\n+ }\n+\n @Override\n public boolean expandAliasesWildcards() {\n //remove operations support wildcards among aliases, add operations don't", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.action.admin.indices.alias.get;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.AliasesRequest;\n import org.elasticsearch.action.support.IndicesOptions;\n@@ -32,15 +33,12 @@ public class GetAliasesRequest extends MasterNodeReadRequest<GetAliasesRequest>\n \n private String[] indices = Strings.EMPTY_ARRAY;\n private String[] aliases = Strings.EMPTY_ARRAY;\n-\n private IndicesOptions indicesOptions = IndicesOptions.strictExpand();\n+ private String[] originalAliases = Strings.EMPTY_ARRAY;\n \n- public GetAliasesRequest(String[] aliases) {\n+ public GetAliasesRequest(String... 
aliases) {\n this.aliases = aliases;\n- }\n-\n- public GetAliasesRequest(String alias) {\n- this.aliases = new String[]{alias};\n+ this.originalAliases = aliases;\n }\n \n public GetAliasesRequest() {\n@@ -51,6 +49,9 @@ public GetAliasesRequest(StreamInput in) throws IOException {\n indices = in.readStringArray();\n aliases = in.readStringArray();\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ originalAliases = in.readStringArray();\n+ }\n }\n \n @Override\n@@ -59,6 +60,9 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeStringArray(indices);\n out.writeStringArray(aliases);\n indicesOptions.writeIndicesOptions(out);\n+ if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeStringArray(originalAliases);\n+ }\n }\n \n @Override\n@@ -67,9 +71,9 @@ public GetAliasesRequest indices(String... indices) {\n return this;\n }\n \n- @Override\n public GetAliasesRequest aliases(String... aliases) {\n this.aliases = aliases;\n+ this.originalAliases = aliases;\n return this;\n }\n \n@@ -88,6 +92,18 @@ public String[] aliases() {\n return aliases;\n }\n \n+ @Override\n+ public void replaceAliases(String... aliases) {\n+ this.aliases = aliases;\n+ }\n+\n+ /**\n+ * Returns the aliases as was originally specified by the user\n+ */\n+ public String[] getOriginalAliases() {\n+ return originalAliases;\n+ }\n+\n @Override\n public boolean expandAliasesWildcards() {\n return true;", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/GetAliasesRequest.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n+import java.util.Collections;\n import java.util.List;\n \n public class TransportGetAliasesAction extends TransportMasterNodeReadAction<GetAliasesRequest, GetAliasesResponse> {\n@@ -62,7 +63,24 @@ protected GetAliasesResponse newResponse() {\n @Override\n protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) {\n String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request);\n- ImmutableOpenMap<String, List<AliasMetaData>> result = state.metaData().findAliases(request.aliases(), concreteIndices);\n- listener.onResponse(new GetAliasesResponse(result));\n+ ImmutableOpenMap<String, List<AliasMetaData>> aliases = state.metaData().findAliases(request.aliases(), concreteIndices);\n+ listener.onResponse(new GetAliasesResponse(postProcess(request, concreteIndices, aliases)));\n }\n+\n+ /**\n+ * Fills alias result with empty entries for requested indices when no specific aliases were requested.\n+ */\n+ static ImmutableOpenMap<String, List<AliasMetaData>> postProcess(GetAliasesRequest request, String[] concreteIndices,\n+ ImmutableOpenMap<String, List<AliasMetaData>> aliases) {\n+ boolean noAliasesSpecified = request.getOriginalAliases() == null || request.getOriginalAliases().length == 0;\n+ ImmutableOpenMap.Builder<String, List<AliasMetaData>> mapBuilder = ImmutableOpenMap.builder(aliases);\n+ for (String index : concreteIndices) {\n+ if (aliases.get(index) == null && noAliasesSpecified) {\n+ List<AliasMetaData> previous = mapBuilder.put(index, Collections.emptyList());\n+ assert previous == null;\n+ }\n+ }\n+ return mapBuilder.build();\n+ }\n+\n }", "filename": "server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", 
"status": "modified" }, { "diff": "@@ -265,8 +265,7 @@ public ImmutableOpenMap<String, List<AliasMetaData>> findAliases(final String[]\n \n boolean matchAllAliases = matchAllAliases(aliases);\n ImmutableOpenMap.Builder<String, List<AliasMetaData>> mapBuilder = ImmutableOpenMap.builder();\n- Iterable<String> intersection = HppcMaps.intersection(ObjectHashSet.from(concreteIndices), indices.keys());\n- for (String index : intersection) {\n+ for (String index : concreteIndices) {\n IndexMetaData indexMetaData = indices.get(index);\n List<AliasMetaData> filteredValues = new ArrayList<>();\n for (ObjectCursor<AliasMetaData> cursor : indexMetaData.getAliases().values()) {\n@@ -276,11 +275,11 @@ public ImmutableOpenMap<String, List<AliasMetaData>> findAliases(final String[]\n }\n }\n \n- if (!filteredValues.isEmpty()) {\n+ if (filteredValues.isEmpty() == false) {\n // Make the list order deterministic\n CollectionUtil.timSort(filteredValues, Comparator.comparing(AliasMetaData::alias));\n+ mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n }\n- mapBuilder.put(index, Collections.unmodifiableList(filteredValues));\n }\n return mapBuilder.build();\n }", "filename": "server/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -77,6 +77,10 @@ public String getName() {\n \n @Override\n public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {\n+ // The TransportGetAliasesAction was improved do the same post processing as is happening here.\n+ // We can't remove this logic yet to support mixed clusters. We should be able to remove this logic here\n+ // in when 8.0 becomes the new version in the master branch.\n+\n final boolean namesProvided = request.hasParam(\"name\");\n final String[] aliases = request.paramAsStringArrayOrEmptyIfAll(\"name\");\n final GetAliasesRequest getAliasesRequest = new GetAliasesRequest(aliases);", "filename": "server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,64 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.action.admin.indices.alias.get;\n+\n+import org.elasticsearch.cluster.metadata.AliasMetaData;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.Collections;\n+import java.util.List;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+public class TransportGetAliasesActionTests extends ESTestCase {\n+\n+ public void testPostProcess() {\n+ GetAliasesRequest request = new GetAliasesRequest();\n+ ImmutableOpenMap<String, List<AliasMetaData>> aliases = ImmutableOpenMap.<String, List<AliasMetaData>>builder()\n+ .fPut(\"b\", Collections.singletonList(new AliasMetaData.Builder(\"y\").build()))\n+ .build();\n+ ImmutableOpenMap<String, List<AliasMetaData>> result =\n+ TransportGetAliasesAction.postProcess(request, new String[]{\"a\", \"b\", \"c\"}, aliases);\n+ assertThat(result.size(), equalTo(3));\n+ assertThat(result.get(\"a\").size(), equalTo(0));\n+ assertThat(result.get(\"b\").size(), equalTo(1));\n+ assertThat(result.get(\"c\").size(), equalTo(0));\n+\n+ request = new GetAliasesRequest();\n+ request.replaceAliases(\"y\", \"z\");\n+ aliases = ImmutableOpenMap.<String, List<AliasMetaData>>builder()\n+ .fPut(\"b\", Collections.singletonList(new AliasMetaData.Builder(\"y\").build()))\n+ .build();\n+ result = TransportGetAliasesAction.postProcess(request, new String[]{\"a\", \"b\", \"c\"}, aliases);\n+ assertThat(result.size(), equalTo(3));\n+ assertThat(result.get(\"a\").size(), equalTo(0));\n+ assertThat(result.get(\"b\").size(), equalTo(1));\n+ assertThat(result.get(\"c\").size(), equalTo(0));\n+\n+ request = new GetAliasesRequest(\"y\", \"z\");\n+ aliases = ImmutableOpenMap.<String, List<AliasMetaData>>builder()\n+ .fPut(\"b\", Collections.singletonList(new AliasMetaData.Builder(\"y\").build()))\n+ .build();\n+ result = TransportGetAliasesAction.postProcess(request, new String[]{\"a\", \"b\", \"c\"}, aliases);\n+ assertThat(result.size(), equalTo(1));\n+ assertThat(result.get(\"b\").size(), equalTo(1));\n+ }\n+\n+}", "filename": "server/src/test/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesActionTests.java", "status": "added" }, { "diff": "@@ -570,24 +570,20 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting alias1\");\n GetAliasesResponse getResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(5));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n AliasesExistResponse existsResponse = 
admin().indices().prepareAliasesExist(\"alias1\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n logger.info(\"--> getting all aliases that start with alias*\");\n getResponse = admin().indices().prepareGetAliases(\"alias*\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(5));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(2));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n@@ -599,10 +595,6 @@ public void testIndicesGetAliases() throws Exception {\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(1).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"test\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"test123\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"foobarbaz\").isEmpty());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"alias*\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n@@ -687,13 +679,12 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bar\");\n getResponse = admin().indices().prepareGetAliases(\"f*\").addIndices(\"*bar\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(2));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"f*\")\n .addIndices(\"*bar\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n@@ -702,14 +693,13 @@ public void testIndicesGetAliases() throws Exception {\n logger.info(\"--> getting f* for index *bac\");\n getResponse = admin().indices().prepareGetAliases(\"foo\").addIndices(\"*bac\").get();\n assertThat(getResponse, notNullValue());\n- assertThat(getResponse.getAliases().size(), equalTo(2));\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(1));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0), notNullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"foo\"));\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getFilter(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getIndexRouting(), nullValue());\n assertThat(getResponse.getAliases().get(\"foobar\").get(0).getSearchRouting(), nullValue());\n- assertTrue(getResponse.getAliases().get(\"bazbar\").isEmpty());\n existsResponse = admin().indices().prepareAliasesExist(\"foo\")\n .addIndices(\"*bac\").get();\n 
assertThat(existsResponse.exists(), equalTo(true));\n@@ -727,6 +717,19 @@ public void testIndicesGetAliases() throws Exception {\n .addIndices(\"foobar\").get();\n assertThat(existsResponse.exists(), equalTo(true));\n \n+ for (String aliasName : new String[]{null, \"_all\", \"*\"}) {\n+ logger.info(\"--> getting {} alias for index foobar\", aliasName);\n+ getResponse = aliasName != null ? admin().indices().prepareGetAliases(aliasName).addIndices(\"foobar\").get() :\n+ admin().indices().prepareGetAliases().addIndices(\"foobar\").get();\n+ assertThat(getResponse, notNullValue());\n+ assertThat(getResponse.getAliases().size(), equalTo(1));\n+ assertThat(getResponse.getAliases().get(\"foobar\").size(), equalTo(4));\n+ assertThat(getResponse.getAliases().get(\"foobar\").get(0).alias(), equalTo(\"alias1\"));\n+ assertThat(getResponse.getAliases().get(\"foobar\").get(1).alias(), equalTo(\"alias2\"));\n+ assertThat(getResponse.getAliases().get(\"foobar\").get(2).alias(), equalTo(\"bac\"));\n+ assertThat(getResponse.getAliases().get(\"foobar\").get(3).alias(), equalTo(\"foo\"));\n+ }\n+\n // alias at work again\n logger.info(\"--> getting * for index *bac\");\n getResponse = admin().indices().prepareGetAliases(\"*\").addIndices(\"*bac\").get();", "filename": "server/src/test/java/org/elasticsearch/aliases/IndexAliasesIT.java", "status": "modified" }, { "diff": "@@ -200,7 +200,7 @@ ResolvedIndices resolveIndicesAndAliases(IndicesRequest indicesRequest, MetaData\n if (aliasesRequest.expandAliasesWildcards()) {\n List<String> aliases = replaceWildcardsWithAuthorizedAliases(aliasesRequest.aliases(),\n loadAuthorizedAliases(authorizedIndices.get(), metaData));\n- aliasesRequest.aliases(aliases.toArray(new String[aliases.size()]));\n+ aliasesRequest.replaceAliases(aliases.toArray(new String[aliases.size()]));\n }\n if (indicesReplacedWithNoIndices) {\n if (indicesRequest instanceof GetAliasesRequest == false) {", "filename": "x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolver.java", "status": "modified" } ] }
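A short usage sketch of the behaviour the updated `IndexAliasesIT` assertions above encode, using the same transport-client calls as the test:

```java
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.client.Client;

public class GetAliasesAfterFix {
    static void show(Client client) {
        // Requesting a specific alias now only returns the indices that carry it
        // (a single "foobar" entry in the test above, not one entry per index).
        GetAliasesResponse specific = client.admin().indices().prepareGetAliases("alias1").get();
        System.out.println(specific.getAliases().size()); // 1

        // With no alias name (equivalent to _all or *), matching indices are
        // still listed even when their alias list is empty.
        GetAliasesResponse all = client.admin().indices().prepareGetAliases().get();
        System.out.println(all.getAliases().size());
    }
}
```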
{ "body": "Hi all, first a quick disclaimer, I'm not entirely sure if the following is a bug or documentation issue. After reading the [sliced scroll](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/search-request-scroll.html#sliced-scroll) section from the scroll API docs I got the impression that sliced scroll is supposed to work when targeting a single shard.\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 5.5.2, but I've tested with elasticsearch 6 and reproduced the same behaviour.\r\n\r\n**Plugins installed**:\r\n```\r\ncurl localhost:9200/_cat/plugins\r\no2qKP9T ingest-geoip 5.5.2\r\no2qKP9T ingest-user-agent 5.5.2\r\no2qKP9T x-pack 5.5.2\r\n```\r\n\r\n**JVM version** (`java -version`):\r\n```\r\n$ java -version\r\nopenjdk version \"1.8.0_141\"\r\nOpenJDK Runtime Environment (build 1.8.0_141-b16)\r\nOpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): I'm using elasticsearch official docker image.\r\n`docker.elastic.co/elasticsearch/elasticsearch:5.5.2`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nI'm trying to perform a sliced scroll targeting only one shard through routing and elasticsearch is returning all the results in only one of the 2 slices.\r\n\r\nI expect elasticsearch to slice the query/results across all slices, even when targeting one shard only.\r\n\r\n**Steps to reproduce**:\r\n\r\nI have created a small bash script to reproduce the problem, please find it [here](https://gist.github.com/alissonsales/77f5a50214ccd71b26a86956b6230c07).\r\n\r\nHere are my results when I run the script using 1 and 2 shards.\r\n\r\n### Using 1 shard\r\n```\r\n$ bash sliced_scroll.sh 1\r\nES version\r\n{\r\n \"name\" : \"o2qKP9T\",\r\n \"cluster_name\" : \"docker-cluster\",\r\n \"cluster_uuid\" : \"bhLcjiBTTBaWlq6OuVC-Mg\",\r\n \"version\" : {\r\n \"number\" : \"5.5.2\",\r\n \"build_hash\" : \"b2f0c09\",\r\n \"build_date\" : \"2017-08-14T12:33:14.154Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"6.6.0\"\r\n },\r\n \"tagline\" : \"You Know, for Search\"\r\n}\r\nCreate index\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 48\r\n\r\n{\"acknowledged\":true,\"shards_acknowledged\":true}Adding docs...\r\nslice id 0 search\r\n4\r\nslice id 1 search\r\n5\r\n```\r\n\r\nElasticsearch returns 2 slices, splitting the query/results as expected.\r\n\r\n### Using 2 shards\r\n```\r\n$ bash sliced_scroll.sh 2\r\nES version\r\n{\r\n \"name\" : \"o2qKP9T\",\r\n \"cluster_name\" : \"docker-cluster\",\r\n \"cluster_uuid\" : \"bhLcjiBTTBaWlq6OuVC-Mg\",\r\n \"version\" : {\r\n \"number\" : \"5.5.2\",\r\n \"build_hash\" : \"b2f0c09\",\r\n \"build_date\" : \"2017-08-14T12:33:14.154Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"6.6.0\"\r\n },\r\n \"tagline\" : \"You Know, for Search\"\r\n}\r\nCreate index\r\nHTTP/1.1 200 OK\r\ncontent-type: application/json; charset=UTF-8\r\ncontent-length: 48\r\n\r\n{\"acknowledged\":true,\"shards_acknowledged\":true}Adding docs...\r\nslice id 0 search\r\n9\r\nslice id 1 search\r\n0\r\n```\r\n\r\nElasticsearch returns 2 slices, but doesn't split the query/results, returning all results in only one slice.\r\n\r\nI hope this covers all details required to reproduce the issue and I apologise in case this is the expected behaviour and I'm missing something.\r\n\r\nRegards,\r\nAlisson Sales\r\n", "comments": [ { "body": "I think it’s expected as 
of today from how it’s implemented but we should really try to fix it to also work if there is more than one shard and routing is used, I agree it looks like a bug! Thanks for opening this issue", "created_at": "2017-11-27T22:36:25Z" }, { "body": "Yes this is expected because only the total number of shards per index is used to perform the slicing.\r\nWe could take the routing into account but we have multiple ways to filter/route searches based on the sharding. For the simple routing case where a single shard is selected per index this is simple since we just need to pass this information to the shard request (the slices are resolved in the shard directly) but it is more complicated to handle routing index partition: (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html#routing-index-partition)\r\nand `_shards` preferences (https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-preference.html) since we don't pass this information to shard requests and we would need more than a boolean that indicates if a single shard is requested or not.\r\nI need to think more about this but I agree with @s1monw that we should try to fix it, I'd just add that if we fix it it should work for all types of routing.", "created_at": "2017-11-28T09:26:45Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-26T03:24:41Z" }, { "body": "I've recently come across the same problem (at least I believe it is https://discuss.elastic.co/t/empty-slices-with-scan-scroll/127255). Is this issue being actively looked at, or is it seen as low priority? ", "created_at": "2018-04-10T07:57:51Z" }, { "body": "It's on my todo list, not high priority but I'll try to find some time in the coming days to work on a fix.", "created_at": "2018-04-10T08:02:41Z" } ], "number": 27550, "title": "Issue with sliced scroll using routing" }
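The gist of the reproduction above, restated against the 6.x high-level REST client (later versions take an extra `RequestOptions` argument); the index name, routing value, and slice count mirror the bash script's intent and are otherwise arbitrary. Only the first page of each slice is fetched here; before a fix, routing the search to a single shard puts every hit into one slice.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.slice.SliceBuilder;

public class SlicedScrollWithRouting {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            int maxSlices = 2;
            for (int sliceId = 0; sliceId < maxSlices; sliceId++) {
                SearchRequest request = new SearchRequest("testindex")
                        .routing("user1")                       // restricts the search to one shard
                        .scroll(TimeValue.timeValueMinutes(1));
                request.source(new SearchSourceBuilder()
                        .query(QueryBuilders.matchAllQuery())
                        .slice(new SliceBuilder(sliceId, maxSlices))
                        .size(100));
                SearchResponse response = client.search(request);
                System.out.println("slice " + sliceId + ": " + response.getHits().getHits().length + " hits");
            }
        }
    }
}
```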
{ "body": "This commit propagates the preference and routing of the original SearchRequest in the ShardSearchRequest.\r\nThis information is then use to fix a bug in sliced scrolls when executed with a preference (or a routing).\r\nInstead of computing the slice query from the total number of shards in the index, this commit computes this number from the number of shards per index that participates in the request.\r\n\r\nFixes #27550", "number": 29533, "review_comments": [ { "body": "I wonder if we should call it `shardRequestOrdinal` shard ID is trappy at least for me since I alwasy think of ShardID.java when I read it.", "created_at": "2018-04-19T07:16:35Z" }, { "body": "this is quite a complex operation since we call if for every shard and then do consume the entire iterator again. I wonder if we can pre-sort the ShardRoutings in `SearchShardIterator` and then calculate this on the fly and simply call `SearchShardIterator#getIndexShardOrdinal()` to get it?", "created_at": "2018-04-19T07:25:17Z" }, { "body": "+1. I pushed https://github.com/elastic/elasticsearch/pull/29533/commits/b90716cfa7eb17b10de0d8398220431fbe856da5", "created_at": "2018-04-19T13:31:31Z" }, { "body": "I changed the logic to compute the needed informations only once in `InitialSearchPhase` constructor. I need the complete `GroupShardsIterator` to do so which is why it's not in `SearchShardIterator` but it's the same idea:\r\nhttps://github.com/elastic/elasticsearch/commit/b90716cfa7eb17b10de0d8398220431fbe856da5\r\n", "created_at": "2018-04-19T13:34:13Z" }, { "body": "can we use ` == false`?", "created_at": "2018-04-20T12:57:58Z" }, { "body": "give this assert a message it's a bummer when it fails and we don't have one", "created_at": "2018-04-20T12:58:35Z" }, { "body": "can you leave an inline comment what we are doing here? ", "created_at": "2018-04-20T12:59:30Z" }, { "body": "should this always be the index name?", "created_at": "2018-04-20T13:00:13Z" }, { "body": "oh now I see. Damned I didn't think of Aliases. If you have a routing alias we have a race condition here. I think we can't do it this way. The alias might change on the way to the shard which will cause wrong results. I think that renders my idea here as invalid?", "created_at": "2018-04-20T13:18:00Z" }, { "body": "Maybe we can change the `ShardSearchRequest` and instead of sending the global routing for the request we send the list of extracted routings for the requested index. `ShardSearchRequest#routings` and when there is no alias routing it returns the global routing in a singleton ?", "created_at": "2018-04-20T14:39:56Z" }, { "body": "yeah I think the routing needs to be resolved on the coordinator", "created_at": "2018-04-21T07:37:35Z" } ], "title": "Add additional shards routing info in ShardSearchRequest" }
{ "commits": [ { "message": "Add additional shards routing info in ShardSearchRequest\n\nThis commit adds two new methods to ShardSearchRequest:\n * #numberOfShardsIndex() that returns the number of shards of this index\n that participates in the request.\n * #remapShardId() that returns the remapped shard id of this shard for this request.\n The remapped shard id is the id of the requested shard among all shards\n of this index that are part of the request. Note that the remapped shard id\n is equal to the original shard id if all shards of this index are part of the request.\n\nThese informations are useful when the _search is executed with a preference (or a routing) that\nrestricts the number of shards requested for an index.\nThis change allows to fix a bug in sliced scrolls executed with a preference (or a routing).\nInstead of computing the slice query from the total number of shards in the index, this change allows to\ncompute this number from the number of shards per index that participates in the request.\n\nFixes #27550" }, { "message": "Address reviews\n\nRename remapShardId to shardRequestOrdinal.\nCompute request ordinal per shard and number of request shard per index only once." }, { "message": "Merge branch 'master' into bug/slice_with_routing" }, { "message": "Address reviews\n\nPropagate preference and routing in the ShardSearchRequest in order to compute\nthe shard ordinal and number of requested shard lazily on each node.\nThe computation is done in the slice builder only when preference or routing are set\non the original search request." }, { "message": "Merge branch 'master' into bug/slice_with_routing" }, { "message": "cosmetics" }, { "message": "send resolved routing values for the index in shard search request" }, { "message": "Merge branch 'master' into bug/slice_with_routing" }, { "message": "fix ut" }, { "message": "fix random routings value (null is not allowed)" }, { "message": "handle empty routing" } ], "files": [ { "diff": "@@ -37,8 +37,10 @@\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.transport.Transport;\n \n+import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.Executor;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -62,6 +64,7 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten\n private final long clusterStateVersion;\n private final Map<String, AliasFilter> aliasFilter;\n private final Map<String, Float> concreteIndexBoosts;\n+ private final Map<String, Set<String>> indexRoutings;\n private final SetOnce<AtomicArray<ShardSearchFailure>> shardFailures = new SetOnce<>();\n private final Object shardFailuresMutex = new Object();\n private final AtomicInteger successfulOps = new AtomicInteger();\n@@ -72,6 +75,7 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten\n protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportService searchTransportService,\n BiFunction<String, String, Transport.Connection> nodeIdToConnection,\n Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n+ Map<String, Set<String>> indexRoutings,\n Executor executor, SearchRequest request,\n ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,\n TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,\n@@ -89,6 +93,7 @@ protected 
AbstractSearchAsyncAction(String name, Logger logger, SearchTransportS\n this.clusterStateVersion = clusterStateVersion;\n this.concreteIndexBoosts = concreteIndexBoosts;\n this.aliasFilter = aliasFilter;\n+ this.indexRoutings = indexRoutings;\n this.results = resultConsumer;\n this.clusters = clusters;\n }\n@@ -128,17 +133,17 @@ public final void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPha\n onPhaseFailure(currentPhase, \"all shards failed\", cause);\n } else {\n Boolean allowPartialResults = request.allowPartialSearchResults();\n- assert allowPartialResults != null : \"SearchRequest missing setting for allowPartialSearchResults\"; \n+ assert allowPartialResults != null : \"SearchRequest missing setting for allowPartialSearchResults\";\n if (allowPartialResults == false && shardFailures.get() != null ){\n if (logger.isDebugEnabled()) {\n final ShardOperationFailedException[] shardSearchFailures = ExceptionsHelper.groupBy(buildShardFailures());\n Throwable cause = shardSearchFailures.length == 0 ? null :\n ElasticsearchException.guessRootCauses(shardSearchFailures[0].getCause())[0];\n- logger.debug(() -> new ParameterizedMessage(\"{} shards failed for phase: [{}]\", \n+ logger.debug(() -> new ParameterizedMessage(\"{} shards failed for phase: [{}]\",\n shardSearchFailures.length, getName()), cause);\n }\n- onPhaseFailure(currentPhase, \"Partial shards failure\", null); \n- } else { \n+ onPhaseFailure(currentPhase, \"Partial shards failure\", null);\n+ } else {\n if (logger.isTraceEnabled()) {\n final String resultsFrom = results.getSuccessfulResults()\n .map(r -> r.getSearchShardTarget().toString()).collect(Collectors.joining(\",\"));\n@@ -271,14 +276,14 @@ public final SearchRequest getRequest() {\n \n @Override\n public final SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId) {\n- \n+\n ShardSearchFailure[] failures = buildShardFailures();\n Boolean allowPartialResults = request.allowPartialSearchResults();\n assert allowPartialResults != null : \"SearchRequest missing setting for allowPartialSearchResults\";\n if (allowPartialResults == false && failures.length > 0){\n- raisePhaseFailure(new SearchPhaseExecutionException(\"\", \"Shard failures\", null, failures)); \n- } \n- \n+ raisePhaseFailure(new SearchPhaseExecutionException(\"\", \"Shard failures\", null, failures));\n+ }\n+\n return new SearchResponse(internalSearchResponse, scrollId, getNumShards(), successfulOps.get(),\n skippedOps.get(), buildTookInMillis(), failures, clusters);\n }\n@@ -318,8 +323,11 @@ public final ShardSearchTransportRequest buildShardSearchRequest(SearchShardIter\n AliasFilter filter = aliasFilter.get(shardIt.shardId().getIndex().getUUID());\n assert filter != null;\n float indexBoost = concreteIndexBoosts.getOrDefault(shardIt.shardId().getIndex().getUUID(), DEFAULT_INDEX_BOOST);\n+ String indexName = shardIt.shardId().getIndex().getName();\n+ final String[] routings = indexRoutings.getOrDefault(indexName, Collections.emptySet())\n+ .toArray(new String[0]);\n return new ShardSearchTransportRequest(shardIt.getOriginalIndices(), request, shardIt.shardId(), getNumShards(),\n- filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias);\n+ filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias, routings);\n }\n \n /**", "filename": "server/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java", "status": "modified" }, { "diff": "@@ -27,6 +27,7 @@\n import 
org.elasticsearch.transport.Transport;\n \n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.Executor;\n import java.util.function.BiFunction;\n import java.util.function.Function;\n@@ -47,6 +48,7 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction<Searc\n CanMatchPreFilterSearchPhase(Logger logger, SearchTransportService searchTransportService,\n BiFunction<String, String, Transport.Connection> nodeIdToConnection,\n Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,\n+ Map<String, Set<String>> indexRoutings,\n Executor executor, SearchRequest request,\n ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,\n TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,\n@@ -56,9 +58,9 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction<Searc\n * We set max concurrent shard requests to the number of shards to otherwise avoid deep recursing that would occur if the local node\n * is the coordinating node for the query, holds all the shards for the request, and there are a lot of shards.\n */\n- super(\"can_match\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request,\n- listener, shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size(),\n- clusters);\n+ super(\"can_match\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, indexRoutings,\n+ executor, request, listener, shardsIts, timeProvider, clusterStateVersion, task,\n+ new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size(), clusters);\n this.phaseFactory = phaseFactory;\n this.shardsIts = shardsIts;\n }", "filename": "server/src/main/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhase.java", "status": "modified" }, { "diff": "@@ -131,7 +131,7 @@ public final void run() throws IOException {\n if (shardsIts.size() > 0) {\n int maxConcurrentShardRequests = Math.min(this.maxConcurrentShardRequests, shardsIts.size());\n final boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);\n- assert success; \n+ assert success;\n assert request.allowPartialSearchResults() != null : \"SearchRequest missing setting for allowPartialSearchResults\";\n if (request.allowPartialSearchResults() == false) {\n final StringBuilder missingShards = new StringBuilder();\n@@ -140,7 +140,7 @@ public final void run() throws IOException {\n final SearchShardIterator shardRoutings = shardsIts.get(index);\n if (shardRoutings.size() == 0) {\n if(missingShards.length() >0 ){\n- missingShards.append(\", \"); \n+ missingShards.append(\", \");\n }\n missingShards.append(shardRoutings.shardId());\n }", "filename": "server/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.transport.Transport;\n \n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.Executor;\n import java.util.function.BiFunction;\n \n@@ -37,11 +38,13 @@ final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction\n \n SearchDfsQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService,\n final BiFunction<String, String, Transport.Connection> nodeIdToConnection, final Map<String, AliasFilter> aliasFilter,\n- final Map<String, Float> concreteIndexBoosts, final 
SearchPhaseController searchPhaseController, final Executor executor,\n+ final Map<String, Float> concreteIndexBoosts, final Map<String, Set<String>> indexRoutings,\n+ final SearchPhaseController searchPhaseController, final Executor executor,\n final SearchRequest request, final ActionListener<SearchResponse> listener,\n final GroupShardsIterator<SearchShardIterator> shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider,\n final long clusterStateVersion, final SearchTask task, SearchResponse.Clusters clusters) {\n- super(\"dfs\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener,\n+ super(\"dfs\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, indexRoutings,\n+ executor, request, listener,\n shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size()),\n request.getMaxConcurrentShardRequests(), clusters);\n this.searchPhaseController = searchPhaseController;", "filename": "server/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.transport.Transport;\n \n import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.Executor;\n import java.util.function.BiFunction;\n \n@@ -37,13 +38,14 @@ final class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<Se\n \n SearchQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService,\n final BiFunction<String, String, Transport.Connection> nodeIdToConnection, final Map<String, AliasFilter> aliasFilter,\n- final Map<String, Float> concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor,\n+ final Map<String, Float> concreteIndexBoosts, final Map<String, Set<String>> indexRoutings,\n+ final SearchPhaseController searchPhaseController, final Executor executor,\n final SearchRequest request, final ActionListener<SearchResponse> listener,\n final GroupShardsIterator<SearchShardIterator> shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider,\n long clusterStateVersion, SearchTask task, SearchResponse.Clusters clusters) {\n- super(\"query\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener,\n- shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size()),\n- request.getMaxConcurrentShardRequests(), clusters);\n+ super(\"query\", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, indexRoutings,\n+ executor, request, listener, shardsIts, timeProvider, clusterStateVersion, task,\n+ searchPhaseController.newSearchPhaseResults(request, shardsIts.size()), request.getMaxConcurrentShardRequests(), clusters);\n this.searchPhaseController = searchPhaseController;\n }\n ", "filename": "server/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java", "status": "modified" }, { "diff": "@@ -297,6 +297,7 @@ private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, Sea\n Map<String, AliasFilter> aliasFilter = buildPerIndexAliasFilter(searchRequest, clusterState, indices, remoteAliasMap);\n Map<String, Set<String>> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, searchRequest.routing(),\n searchRequest.indices());\n+ routingMap = routingMap == null 
? Collections.emptyMap() : Collections.unmodifiableMap(routingMap);\n String[] concreteIndices = new String[indices.length];\n for (int i = 0; i < indices.length; i++) {\n concreteIndices[i] = indices[i].getName();\n@@ -350,7 +351,7 @@ private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, Sea\n }\n boolean preFilterSearchShards = shouldPreFilterSearchShards(searchRequest, shardIterators);\n searchAsyncAction(task, searchRequest, shardIterators, timeProvider, connectionLookup, clusterState.version(),\n- Collections.unmodifiableMap(aliasFilter), concreteIndexBoosts, listener, preFilterSearchShards, clusters).start();\n+ Collections.unmodifiableMap(aliasFilter), concreteIndexBoosts, routingMap, listener, preFilterSearchShards, clusters).start();\n }\n \n private boolean shouldPreFilterSearchShards(SearchRequest searchRequest, GroupShardsIterator<SearchShardIterator> shardIterators) {\n@@ -380,17 +381,20 @@ private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchReque\n GroupShardsIterator<SearchShardIterator> shardIterators,\n SearchTimeProvider timeProvider,\n BiFunction<String, String, Transport.Connection> connectionLookup,\n- long clusterStateVersion, Map<String, AliasFilter> aliasFilter,\n+ long clusterStateVersion,\n+ Map<String, AliasFilter> aliasFilter,\n Map<String, Float> concreteIndexBoosts,\n- ActionListener<SearchResponse> listener, boolean preFilter,\n+ Map<String, Set<String>> indexRoutings,\n+ ActionListener<SearchResponse> listener,\n+ boolean preFilter,\n SearchResponse.Clusters clusters) {\n Executor executor = threadPool.executor(ThreadPool.Names.SEARCH);\n if (preFilter) {\n return new CanMatchPreFilterSearchPhase(logger, searchTransportService, connectionLookup,\n- aliasFilter, concreteIndexBoosts, executor, searchRequest, listener, shardIterators,\n+ aliasFilter, concreteIndexBoosts, indexRoutings, executor, searchRequest, listener, shardIterators,\n timeProvider, clusterStateVersion, task, (iter) -> {\n AbstractSearchAsyncAction action = searchAsyncAction(task, searchRequest, iter, timeProvider, connectionLookup,\n- clusterStateVersion, aliasFilter, concreteIndexBoosts, listener, false, clusters);\n+ clusterStateVersion, aliasFilter, concreteIndexBoosts, indexRoutings, listener, false, clusters);\n return new SearchPhase(action.getName()) {\n @Override\n public void run() throws IOException {\n@@ -403,14 +407,14 @@ public void run() throws IOException {\n switch (searchRequest.searchType()) {\n case DFS_QUERY_THEN_FETCH:\n searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup,\n- aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators,\n- timeProvider, clusterStateVersion, task, clusters);\n+ aliasFilter, concreteIndexBoosts, indexRoutings, searchPhaseController, executor, searchRequest, listener,\n+ shardIterators, timeProvider, clusterStateVersion, task, clusters);\n break;\n case QUERY_AND_FETCH:\n case QUERY_THEN_FETCH:\n searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup,\n- aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators,\n- timeProvider, clusterStateVersion, task, clusters);\n+ aliasFilter, concreteIndexBoosts, indexRoutings, searchPhaseController, executor, searchRequest, listener,\n+ shardIterators, timeProvider, clusterStateVersion, task, clusters);\n break;\n default:\n throw new 
IllegalStateException(\"Unknown search type: [\" + searchRequest.searchType() + \"]\");", "filename": "server/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java", "status": "modified" }, { "diff": "@@ -24,7 +24,7 @@\n \n /**\n * A simple {@link ShardsIterator} that iterates a list or sub-list of\n- * {@link ShardRouting shard routings}.\n+ * {@link ShardRouting shard indexRoutings}.\n */\n public class PlainShardsIterator implements ShardsIterator {\n ", "filename": "server/src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java", "status": "modified" }, { "diff": "@@ -38,7 +38,7 @@\n \n /**\n * {@link ShardRouting} immutably encapsulates information about shard\n- * routings like id, state, version, etc.\n+ * indexRoutings like id, state, version, etc.\n */\n public final class ShardRouting implements Writeable, ToXContentObject {\n \n@@ -477,7 +477,7 @@ public boolean isRelocationTargetOf(ShardRouting other) {\n \"ShardRouting is a relocation target but current node id isn't equal to source relocating node. This [\" + this + \"], other [\" + other + \"]\";\n \n assert b == false || this.shardId.equals(other.shardId) :\n- \"ShardRouting is a relocation target but both routings are not of the same shard id. This [\" + this + \"], other [\" + other + \"]\";\n+ \"ShardRouting is a relocation target but both indexRoutings are not of the same shard id. This [\" + this + \"], other [\" + other + \"]\";\n \n assert b == false || this.primary == other.primary :\n \"ShardRouting is a relocation target but primary flag is different. This [\" + this + \"], target [\" + other + \"]\";\n@@ -504,7 +504,7 @@ public boolean isRelocationSourceOf(ShardRouting other) {\n \"ShardRouting is a relocation source but relocating node isn't equal to other's current node. This [\" + this + \"], other [\" + other + \"]\";\n \n assert b == false || this.shardId.equals(other.shardId) :\n- \"ShardRouting is a relocation source but both routings are not of the same shard. This [\" + this + \"], target [\" + other + \"]\";\n+ \"ShardRouting is a relocation source but both indexRoutings are not of the same shard. This [\" + this + \"], target [\" + other + \"]\";\n \n assert b == false || this.primary == other.primary :\n \"ShardRouting is a relocation source but primary flag is different. 
This [\" + this + \"], target [\" + other + \"]\";", "filename": "server/src/main/java/org/elasticsearch/cluster/routing/ShardRouting.java", "status": "modified" }, { "diff": "@@ -25,8 +25,10 @@\n import org.apache.lucene.search.FieldDoc;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.Counter;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.search.SearchTask;\n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -91,6 +93,7 @@ final class DefaultSearchContext extends SearchContext {\n private final Engine.Searcher engineSearcher;\n private final BigArrays bigArrays;\n private final IndexShard indexShard;\n+ private final ClusterService clusterService;\n private final IndexService indexService;\n private final ContextIndexSearcher searcher;\n private final DfsSearchResult dfsResult;\n@@ -120,6 +123,7 @@ final class DefaultSearchContext extends SearchContext {\n // filter for sliced scroll\n private SliceBuilder sliceBuilder;\n private SearchTask task;\n+ private final Version minNodeVersion;\n \n \n /**\n@@ -152,9 +156,10 @@ final class DefaultSearchContext extends SearchContext {\n private final QueryShardContext queryShardContext;\n private FetchPhase fetchPhase;\n \n- DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget, Engine.Searcher engineSearcher,\n- IndexService indexService, IndexShard indexShard, BigArrays bigArrays, Counter timeEstimateCounter,\n- TimeValue timeout, FetchPhase fetchPhase, String clusterAlias) {\n+ DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget,\n+ Engine.Searcher engineSearcher, ClusterService clusterService, IndexService indexService,\n+ IndexShard indexShard, BigArrays bigArrays, Counter timeEstimateCounter, TimeValue timeout,\n+ FetchPhase fetchPhase, String clusterAlias, Version minNodeVersion) {\n this.id = id;\n this.request = request;\n this.fetchPhase = fetchPhase;\n@@ -168,9 +173,11 @@ final class DefaultSearchContext extends SearchContext {\n this.fetchResult = new FetchSearchResult(id, shardTarget);\n this.indexShard = indexShard;\n this.indexService = indexService;\n+ this.clusterService = clusterService;\n this.searcher = new ContextIndexSearcher(engineSearcher, indexService.cache().query(), indexShard.getQueryCachingPolicy());\n this.timeEstimateCounter = timeEstimateCounter;\n this.timeout = timeout;\n+ this.minNodeVersion = minNodeVersion;\n queryShardContext = indexService.newQueryShardContext(request.shardId().id(), searcher.getIndexReader(), request::nowInMillis,\n clusterAlias);\n queryShardContext.setTypes(request.types());\n@@ -278,8 +285,7 @@ && new NestedHelper(mapperService()).mightMatchNestedDocs(query)\n }\n \n if (sliceBuilder != null) {\n- filters.add(sliceBuilder.toFilter(queryShardContext, shardTarget().getShardId().getId(),\n- queryShardContext.getIndexSettings().getNumberOfShards()));\n+ filters.add(sliceBuilder.toFilter(clusterService, request, queryShardContext, minNodeVersion));\n }\n \n if (filters.isEmpty()) {", "filename": "server/src/main/java/org/elasticsearch/search/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -616,8 +616,8 @@ private DefaultSearchContext createSearchContext(ShardSearchRequest request, Tim\n Engine.Searcher engineSearcher = 
indexShard.acquireSearcher(\"search\");\n \n final DefaultSearchContext searchContext = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget,\n- engineSearcher, indexService, indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), timeout, fetchPhase,\n- request.getClusterAlias());\n+ engineSearcher, clusterService, indexService, indexShard, bigArrays, threadPool.estimatedTimeInMillisCounter(), timeout,\n+ fetchPhase, request.getClusterAlias(), clusterService.state().nodes().getMinNodeVersion());\n boolean success = false;\n try {\n // we clone the query shard context here just for rewriting otherwise we", "filename": "server/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -28,13 +28,10 @@\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryRewriteContext;\n-import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.search.Scroll;\n-import org.elasticsearch.search.SearchService;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n \n import java.io.IOException;\n@@ -61,7 +58,6 @@\n */\n \n public class ShardSearchLocalRequest implements ShardSearchRequest {\n-\n private String clusterAlias;\n private ShardId shardId;\n private int numberOfShards;\n@@ -74,17 +70,18 @@ public class ShardSearchLocalRequest implements ShardSearchRequest {\n private Boolean requestCache;\n private long nowInMillis;\n private boolean allowPartialSearchResults;\n-\n+ private String[] indexRoutings = Strings.EMPTY_ARRAY;\n+ private String preference;\n private boolean profile;\n \n ShardSearchLocalRequest() {\n }\n \n ShardSearchLocalRequest(SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) {\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias, String[] indexRoutings) {\n this(shardId, numberOfShards, searchRequest.searchType(),\n- searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter, indexBoost, \n- searchRequest.allowPartialSearchResults());\n+ searchRequest.source(), searchRequest.types(), searchRequest.requestCache(), aliasFilter, indexBoost,\n+ searchRequest.allowPartialSearchResults(), indexRoutings, searchRequest.preference());\n // If allowPartialSearchResults is unset (ie null), the cluster-level default should have been substituted\n // at this stage. 
Any NPEs in the above are therefore an error in request preparation logic.\n assert searchRequest.allowPartialSearchResults() != null;\n@@ -102,7 +99,8 @@ public ShardSearchLocalRequest(ShardId shardId, String[] types, long nowInMillis\n }\n \n public ShardSearchLocalRequest(ShardId shardId, int numberOfShards, SearchType searchType, SearchSourceBuilder source, String[] types,\n- Boolean requestCache, AliasFilter aliasFilter, float indexBoost, boolean allowPartialSearchResults) {\n+ Boolean requestCache, AliasFilter aliasFilter, float indexBoost, boolean allowPartialSearchResults,\n+ String[] indexRoutings, String preference) {\n this.shardId = shardId;\n this.numberOfShards = numberOfShards;\n this.searchType = searchType;\n@@ -112,6 +110,8 @@ public ShardSearchLocalRequest(ShardId shardId, int numberOfShards, SearchType s\n this.aliasFilter = aliasFilter;\n this.indexBoost = indexBoost;\n this.allowPartialSearchResults = allowPartialSearchResults;\n+ this.indexRoutings = indexRoutings;\n+ this.preference = preference;\n }\n \n \n@@ -169,18 +169,28 @@ public long nowInMillis() {\n public Boolean requestCache() {\n return requestCache;\n }\n- \n+\n @Override\n public Boolean allowPartialSearchResults() {\n return allowPartialSearchResults;\n }\n- \n+\n \n @Override\n public Scroll scroll() {\n return scroll;\n }\n \n+ @Override\n+ public String[] indexRoutings() {\n+ return indexRoutings;\n+ }\n+\n+ @Override\n+ public String preference() {\n+ return preference;\n+ }\n+\n @Override\n public void setProfile(boolean profile) {\n this.profile = profile;\n@@ -225,6 +235,13 @@ protected void innerReadFrom(StreamInput in) throws IOException {\n if (in.getVersion().onOrAfter(Version.V_6_3_0)) {\n allowPartialSearchResults = in.readOptionalBoolean();\n }\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ indexRoutings = in.readStringArray();\n+ preference = in.readOptionalString();\n+ } else {\n+ indexRoutings = Strings.EMPTY_ARRAY;\n+ preference = null;\n+ }\n }\n \n protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException {\n@@ -240,7 +257,7 @@ protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException\n if (out.getVersion().onOrAfter(Version.V_5_2_0)) {\n out.writeFloat(indexBoost);\n }\n- if (!asKey) {\n+ if (asKey == false) {\n out.writeVLong(nowInMillis);\n }\n out.writeOptionalBoolean(requestCache);\n@@ -250,7 +267,12 @@ protected void innerWriteTo(StreamOutput out, boolean asKey) throws IOException\n if (out.getVersion().onOrAfter(Version.V_6_3_0)) {\n out.writeOptionalBoolean(allowPartialSearchResults);\n }\n- \n+ if (asKey == false) {\n+ if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeStringArray(indexRoutings);\n+ out.writeOptionalString(preference);\n+ }\n+ }\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/search/internal/ShardSearchLocalRequest.java", "status": "modified" }, { "diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.search.internal;\n \n+import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.CheckedFunction;\n@@ -28,8 +30,6 @@\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n-import 
org.elasticsearch.index.query.QueryRewriteContext;\n-import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.AliasFilterParsingException;\n@@ -68,11 +68,21 @@ public interface ShardSearchRequest {\n long nowInMillis();\n \n Boolean requestCache();\n- \n+\n Boolean allowPartialSearchResults();\n \n Scroll scroll();\n \n+ /**\n+ * Returns the routing values resolved by the coordinating node for the index pointed by {@link #shardId()}.\n+ */\n+ String[] indexRoutings();\n+\n+ /**\n+ * Returns the preference of the original {@link SearchRequest#preference()}.\n+ */\n+ String preference();\n+\n /**\n * Sets if this shard search needs to be profiled or not\n * @param profile True if the shard should be profiled", "filename": "server/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java", "status": "modified" }, { "diff": "@@ -28,9 +28,6 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.index.query.QueryRewriteContext;\n-import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.index.query.Rewriteable;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.search.Scroll;\n@@ -57,9 +54,10 @@ public ShardSearchTransportRequest(){\n }\n \n public ShardSearchTransportRequest(OriginalIndices originalIndices, SearchRequest searchRequest, ShardId shardId, int numberOfShards,\n- AliasFilter aliasFilter, float indexBoost, long nowInMillis, String clusterAlias) {\n+ AliasFilter aliasFilter, float indexBoost, long nowInMillis,\n+ String clusterAlias, String[] indexRoutings) {\n this.shardSearchLocalRequest = new ShardSearchLocalRequest(searchRequest, shardId, numberOfShards, aliasFilter, indexBoost,\n- nowInMillis, clusterAlias);\n+ nowInMillis, clusterAlias, indexRoutings);\n this.originalIndices = originalIndices;\n }\n \n@@ -151,17 +149,27 @@ public long nowInMillis() {\n public Boolean requestCache() {\n return shardSearchLocalRequest.requestCache();\n }\n- \n+\n @Override\n public Boolean allowPartialSearchResults() {\n return shardSearchLocalRequest.allowPartialSearchResults();\n- } \n+ }\n \n @Override\n public Scroll scroll() {\n return shardSearchLocalRequest.scroll();\n }\n \n+ @Override\n+ public String[] indexRoutings() {\n+ return shardSearchLocalRequest.indexRoutings();\n+ }\n+\n+ @Override\n+ public String preference() {\n+ return shardSearchLocalRequest.preference();\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n throw new UnsupportedOperationException(\"usage of Streamable is to be replaced by Writeable\");", "filename": "server/src/main/java/org/elasticsearch/search/internal/ShardSearchTransportRequest.java", "status": "modified" }, { "diff": "@@ -23,13 +23,18 @@\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.routing.GroupShardsIterator;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n 
import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n import org.elasticsearch.common.logging.DeprecationLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.ToXContentObject;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -39,9 +44,13 @@\n import org.elasticsearch.index.mapper.IdFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n \n import java.io.IOException;\n+import java.util.Collections;\n+import java.util.Map;\n import java.util.Objects;\n+import java.util.Set;\n \n /**\n * A slice builder allowing to split a scroll in multiple partitions.\n@@ -203,12 +212,49 @@ public int hashCode() {\n return Objects.hash(this.field, this.id, this.max);\n }\n \n- public Query toFilter(QueryShardContext context, int shardId, int numShards) {\n+ /**\n+ * Converts this QueryBuilder to a lucene {@link Query}.\n+ *\n+ * @param context Additional information needed to build the query\n+ */\n+ public Query toFilter(ClusterService clusterService, ShardSearchRequest request, QueryShardContext context, Version minNodeVersion) {\n final MappedFieldType type = context.fieldMapper(field);\n if (type == null) {\n throw new IllegalArgumentException(\"field \" + field + \" not found\");\n }\n \n+ int shardId = request.shardId().id();\n+ int numShards = context.getIndexSettings().getNumberOfShards();\n+ if (minNodeVersion.onOrAfter(Version.V_7_0_0_alpha1) &&\n+ (request.preference() != null || request.indexRoutings().length > 0)) {\n+ GroupShardsIterator<ShardIterator> group = buildShardIterator(clusterService, request);\n+ assert group.size() <= numShards : \"index routing shards: \" + group.size() +\n+ \" cannot be greater than total number of shards: \" + numShards;\n+ if (group.size() < numShards) {\n+ /**\n+ * The routing of this request targets a subset of the shards of this index so we need to we retrieve\n+ * the original {@link GroupShardsIterator} and compute the request shard id and number of\n+ * shards from it.\n+ * This behavior has been added in {@link Version#V_7_0_0_alpha1} so if there is another node in the cluster\n+ * with an older version we use the original shard id and number of shards in order to ensure that all\n+ * slices use the same numbers.\n+ */\n+ numShards = group.size();\n+ int ord = 0;\n+ shardId = -1;\n+ // remap the original shard id with its index (position) in the sorted shard iterator.\n+ for (ShardIterator it : group) {\n+ assert it.shardId().getIndex().equals(request.shardId().getIndex());\n+ if (request.shardId().equals(it.shardId())) {\n+ shardId = ord;\n+ break;\n+ }\n+ ++ord;\n+ }\n+ assert shardId != -1 : \"shard id: \" + request.shardId().getId() + \" not found in index shard routing\";\n+ }\n+ }\n+\n String field = this.field;\n boolean useTermQuery = false;\n if (\"_uid\".equals(field)) {\n@@ -273,6 +319,17 @@ public Query toFilter(QueryShardContext context, int shardId, int numShards) {\n return new MatchAllDocsQuery();\n }\n \n+ /**\n+ * Returns the {@link GroupShardsIterator} for the provided <code>request</code>.\n+ */\n+ private GroupShardsIterator<ShardIterator> buildShardIterator(ClusterService clusterService, ShardSearchRequest request) {\n+ final ClusterState state = clusterService.state();\n+ 
String[] indices = new String[] { request.shardId().getIndex().getName() };\n+ Map<String, Set<String>> routingMap = request.indexRoutings().length > 0 ?\n+ Collections.singletonMap(indices[0], Sets.newHashSet(request.indexRoutings())) : null;\n+ return clusterService.operationRouting().searchShards(state, indices, routingMap, request.preference());\n+ }\n+\n @Override\n public String toString() {\n return Strings.toString(this, true, true);", "filename": "server/src/main/java/org/elasticsearch/search/slice/SliceBuilder.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.common.util.set.Sets;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.query.MatchAllQueryBuilder;\n import org.elasticsearch.index.shard.ShardId;\n@@ -62,10 +63,15 @@ private AbstractSearchAsyncAction<SearchPhaseResult> createAction(\n \n final SearchRequest request = new SearchRequest();\n request.allowPartialSearchResults(true);\n+ request.preference(\"_shards:1,3\");\n return new AbstractSearchAsyncAction<SearchPhaseResult>(\"test\", null, null, null,\n- Collections.singletonMap(\"foo\", new AliasFilter(new MatchAllQueryBuilder())), Collections.singletonMap(\"foo\", 2.0f), null,\n- request, null, new GroupShardsIterator<>(Collections.singletonList(\n- new SearchShardIterator(null, null, Collections.emptyList(), null))), timeProvider, 0, null,\n+ Collections.singletonMap(\"foo\", new AliasFilter(new MatchAllQueryBuilder())), Collections.singletonMap(\"foo\", 2.0f),\n+ Collections.singletonMap(\"name\", Sets.newHashSet(\"bar\", \"baz\")),null, request, null,\n+ new GroupShardsIterator<>(\n+ Collections.singletonList(\n+ new SearchShardIterator(null, null, Collections.emptyList(), null)\n+ )\n+ ), timeProvider, 0, null,\n new InitialSearchPhase.ArraySearchPhaseResults<>(10), request.getMaxConcurrentShardRequests(),\n SearchResponse.Clusters.EMPTY) {\n @Override\n@@ -117,5 +123,8 @@ public void testBuildShardSearchTransportRequest() {\n assertArrayEquals(new String[] {\"name\", \"name1\"}, shardSearchTransportRequest.indices());\n assertEquals(new MatchAllQueryBuilder(), shardSearchTransportRequest.getAliasFilter().getQueryBuilder());\n assertEquals(2.0f, shardSearchTransportRequest.indexBoost(), 0.0f);\n+ assertArrayEquals(new String[] {\"name\", \"name1\"}, shardSearchTransportRequest.indices());\n+ assertArrayEquals(new String[] {\"bar\", \"baz\"}, shardSearchTransportRequest.indexRoutings());\n+ assertEquals(\"_shards:1,3\", shardSearchTransportRequest.preference());\n }\n }", "filename": "server/src/test/java/org/elasticsearch/action/search/AbstractSearchAsyncActionTests.java", "status": "modified" }, { "diff": "@@ -78,12 +78,12 @@ public void sendCanMatch(Transport.Connection connection, ShardSearchTransportRe\n 2, randomBoolean(), primaryNode, replicaNode);\n final SearchRequest searchRequest = new SearchRequest();\n searchRequest.allowPartialSearchResults(true);\n- \n+\n CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase(logger,\n searchTransportService,\n (clusterAlias, node) -> lookup.get(node),\n Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY)),\n- Collections.emptyMap(), EsExecutors.newDirectExecutorService(),\n+ Collections.emptyMap(), Collections.emptyMap(), EsExecutors.newDirectExecutorService(),\n searchRequest, null, 
shardsIter, timeProvider, 0, null,\n (iter) -> new SearchPhase(\"test\") {\n @Override\n@@ -159,12 +159,12 @@ public void sendCanMatch(Transport.Connection connection, ShardSearchTransportRe\n \n final SearchRequest searchRequest = new SearchRequest();\n searchRequest.allowPartialSearchResults(true);\n- \n+\n CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase(logger,\n searchTransportService,\n (clusterAlias, node) -> lookup.get(node),\n Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY)),\n- Collections.emptyMap(), EsExecutors.newDirectExecutorService(),\n+ Collections.emptyMap(), Collections.emptyMap(), EsExecutors.newDirectExecutorService(),\n searchRequest, null, shardsIter, timeProvider, 0, null,\n (iter) -> new SearchPhase(\"test\") {\n @Override\n@@ -222,6 +222,7 @@ public void sendCanMatch(\n (clusterAlias, node) -> lookup.get(node),\n Collections.singletonMap(\"_na_\", new AliasFilter(null, Strings.EMPTY_ARRAY)),\n Collections.emptyMap(),\n+ Collections.emptyMap(),\n EsExecutors.newDirectExecutorService(),\n searchRequest,\n null,", "filename": "server/src/test/java/org/elasticsearch/action/search/CanMatchPreFilterSearchPhaseTests.java", "status": "modified" }, { "diff": "@@ -106,6 +106,7 @@ public void onFailure(Exception e) {\n return lookup.get(node); },\n aliasFilters,\n Collections.emptyMap(),\n+ Collections.emptyMap(),\n null,\n request,\n responseListener,\n@@ -198,6 +199,7 @@ public void onFailure(Exception e) {\n return lookup.get(node); },\n aliasFilters,\n Collections.emptyMap(),\n+ Collections.emptyMap(),\n null,\n request,\n responseListener,\n@@ -303,6 +305,7 @@ public void sendFreeContext(Transport.Connection connection, long contextId, Ori\n return lookup.get(node); },\n aliasFilters,\n Collections.emptyMap(),\n+ Collections.emptyMap(),\n executor,\n request,\n responseListener,", "filename": "server/src/test/java/org/elasticsearch/action/search/SearchAsyncActionTests.java", "status": "modified" }, { "diff": "@@ -83,7 +83,7 @@ public void setUp() throws Exception {\n }\n \n /**\n- * puts primary shard routings into initializing state\n+ * puts primary shard indexRoutings into initializing state\n */\n private void initPrimaries() {\n logger.info(\"adding {} nodes and performing rerouting\", this.numberOfReplicas + 1);", "filename": "server/src/test/java/org/elasticsearch/cluster/routing/PrimaryTermsTests.java", "status": "modified" }, { "diff": "@@ -83,7 +83,7 @@ public void setUp() throws Exception {\n }\n \n /**\n- * puts primary shard routings into initializing state\n+ * puts primary shard indexRoutings into initializing state\n */\n private void initPrimaries() {\n logger.info(\"adding {} nodes and performing rerouting\", this.numberOfReplicas + 1);", "filename": "server/src/test/java/org/elasticsearch/cluster/routing/RoutingTableTests.java", "status": "modified" }, { "diff": "@@ -122,6 +122,16 @@ public Scroll scroll() {\n return null;\n }\n \n+ @Override\n+ public String[] indexRoutings() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String preference() {\n+ return null;\n+ }\n+\n @Override\n public void setProfile(boolean profile) {\n ", "filename": "server/src/test/java/org/elasticsearch/index/SearchSlowLogTests.java", "status": "modified" }, { "diff": "@@ -170,7 +170,7 @@ public void testAliasSearchRouting() throws Exception {\n assertThat(client().prepareSearch(\"alias1\").setSize(0).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(1L));\n }\n \n- 
logger.info(\"--> search with 0,1 routings , should find two\");\n+ logger.info(\"--> search with 0,1 indexRoutings , should find two\");\n for (int i = 0; i < 5; i++) {\n assertThat(client().prepareSearch().setRouting(\"0\", \"1\").setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));\n assertThat(client().prepareSearch().setSize(0).setRouting(\"0\", \"1\").setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));", "filename": "server/src/test/java/org/elasticsearch/routing/AliasRoutingIT.java", "status": "modified" }, { "diff": "@@ -173,13 +173,13 @@ public void testSimpleSearchRouting() {\n assertThat(client().prepareSearch().setSize(0).setRouting(secondRoutingValue).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(1L));\n }\n \n- logger.info(\"--> search with {},{} routings , should find two\", routingValue, \"1\");\n+ logger.info(\"--> search with {},{} indexRoutings , should find two\", routingValue, \"1\");\n for (int i = 0; i < 5; i++) {\n assertThat(client().prepareSearch().setRouting(routingValue, secondRoutingValue).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));\n assertThat(client().prepareSearch().setSize(0).setRouting(routingValue, secondRoutingValue).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));\n }\n \n- logger.info(\"--> search with {},{},{} routings , should find two\", routingValue, secondRoutingValue, routingValue);\n+ logger.info(\"--> search with {},{},{} indexRoutings , should find two\", routingValue, secondRoutingValue, routingValue);\n for (int i = 0; i < 5; i++) {\n assertThat(client().prepareSearch().setRouting(routingValue, secondRoutingValue, routingValue).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));\n assertThat(client().prepareSearch().setSize(0).setRouting(routingValue, secondRoutingValue,routingValue).setQuery(QueryBuilders.matchAllQuery()).execute().actionGet().getHits().getTotalHits(), equalTo(2L));", "filename": "server/src/test/java/org/elasticsearch/routing/SimpleRoutingIT.java", "status": "modified" }, { "diff": "@@ -112,8 +112,8 @@ public void testPreProcess() throws Exception {\n IndexReader reader = w.getReader();\n Engine.Searcher searcher = new Engine.Searcher(\"test\", new IndexSearcher(reader))) {\n \n- DefaultSearchContext context1 = new DefaultSearchContext(1L, shardSearchRequest, null, searcher, indexService,\n- indexShard, bigArrays, null, timeout, null, null);\n+ DefaultSearchContext context1 = new DefaultSearchContext(1L, shardSearchRequest, null, searcher, null, indexService,\n+ indexShard, bigArrays, null, timeout, null, null, Version.CURRENT);\n context1.from(300);\n \n // resultWindow greater than maxResultWindow and scrollContext is null\n@@ -153,8 +153,8 @@ public void testPreProcess() throws Exception {\n + \"] index level setting.\"));\n \n // rescore is null but sliceBuilder is not null\n- DefaultSearchContext context2 = new DefaultSearchContext(2L, shardSearchRequest, null, searcher, indexService,\n- indexShard, bigArrays, null, timeout, null, null);\n+ DefaultSearchContext context2 = new DefaultSearchContext(2L, shardSearchRequest, null, searcher,\n+ null, indexService, indexShard, bigArrays, null, timeout, null, null, Version.CURRENT);\n \n SliceBuilder sliceBuilder = mock(SliceBuilder.class);\n int numSlices = 
maxSlicesPerScroll + randomIntBetween(1, 100);\n@@ -170,8 +170,8 @@ public void testPreProcess() throws Exception {\n when(shardSearchRequest.getAliasFilter()).thenReturn(AliasFilter.EMPTY);\n when(shardSearchRequest.indexBoost()).thenReturn(AbstractQueryBuilder.DEFAULT_BOOST);\n \n- DefaultSearchContext context3 = new DefaultSearchContext(3L, shardSearchRequest, null, searcher, indexService,\n- indexShard, bigArrays, null, timeout, null, null);\n+ DefaultSearchContext context3 = new DefaultSearchContext(3L, shardSearchRequest, null, searcher, null,\n+ indexService, indexShard, bigArrays, null, timeout, null, null, Version.CURRENT);\n ParsedQuery parsedQuery = ParsedQuery.parsedMatchAllQuery();\n context3.sliceBuilder(null).parsedQuery(parsedQuery).preProcess(false);\n assertEquals(context3.query(), context3.buildFilteredQuery(parsedQuery.query()));", "filename": "server/src/test/java/org/elasticsearch/search/DefaultSearchContextTests.java", "status": "modified" }, { "diff": "@@ -213,7 +213,7 @@ public void onFailure(Exception e) {\n SearchPhaseResult searchPhaseResult = service.executeQueryPhase(\n new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n new SearchSourceBuilder(), new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f,\n- true),\n+ true, null, null),\n new SearchTask(123L, \"\", \"\", \"\", null, Collections.emptyMap()));\n IntArrayList intCursors = new IntArrayList(1);\n intCursors.add(0);\n@@ -249,7 +249,7 @@ public void testTimeout() throws IOException {\n new String[0],\n false,\n new AliasFilter(null, Strings.EMPTY_ARRAY),\n- 1.0f, true)\n+ 1.0f, true, null, null)\n );\n try {\n // the search context should inherit the default timeout\n@@ -269,7 +269,7 @@ public void testTimeout() throws IOException {\n new String[0],\n false,\n new AliasFilter(null, Strings.EMPTY_ARRAY),\n- 1.0f, true)\n+ 1.0f, true, null, null)\n );\n try {\n // the search context should inherit the query timeout\n@@ -297,12 +297,13 @@ public void testMaxDocvalueFieldsSearch() throws IOException {\n searchSourceBuilder.docValueField(\"field\" + i);\n }\n try (SearchContext context = service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n- searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true))) {\n+ searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true, null, null))) {\n assertNotNull(context);\n searchSourceBuilder.docValueField(\"one_field_too_much\");\n IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n () -> service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n- searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true)));\n+ searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f,\n+ true, null, null)));\n assertEquals(\n \"Trying to retrieve too many docvalue_fields. Must be less than or equal to: [100] but was [101]. 
\"\n + \"This limit can be set by changing the [index.max_docvalue_fields_search] index level setting.\",\n@@ -328,13 +329,14 @@ public void testMaxScriptFieldsSearch() throws IOException {\n new Script(ScriptType.INLINE, MockScriptEngine.NAME, CustomScriptPlugin.DUMMY_SCRIPT, Collections.emptyMap()));\n }\n try (SearchContext context = service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n- searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true))) {\n+ searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true, null, null))) {\n assertNotNull(context);\n searchSourceBuilder.scriptField(\"anotherScriptField\",\n new Script(ScriptType.INLINE, MockScriptEngine.NAME, CustomScriptPlugin.DUMMY_SCRIPT, Collections.emptyMap()));\n IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,\n () -> service.createContext(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.DEFAULT,\n- searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1.0f, true)));\n+ searchSourceBuilder, new String[0], false, new AliasFilter(null, Strings.EMPTY_ARRAY),\n+ 1.0f, true, null, null)));\n assertEquals(\n \"Trying to retrieve too many script_fields. Must be less than or equal to: [\" + maxScriptFields + \"] but was [\"\n + (maxScriptFields + 1)\n@@ -406,28 +408,28 @@ public void testCanMatch() throws IOException {\n final IndexShard indexShard = indexService.getShard(0);\n final boolean allowPartialSearchResults = true;\n assertTrue(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH, null,\n- Strings.EMPTY_ARRAY, false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults)));\n+ Strings.EMPTY_ARRAY, false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults, null, null)));\n \n assertTrue(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH,\n- new SearchSourceBuilder(), Strings.EMPTY_ARRAY, false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, \n- allowPartialSearchResults)));\n+ new SearchSourceBuilder(), Strings.EMPTY_ARRAY, false, new AliasFilter(null, Strings.EMPTY_ARRAY), 1f,\n+ allowPartialSearchResults, null, null)));\n \n assertTrue(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH,\n new SearchSourceBuilder().query(new MatchAllQueryBuilder()), Strings.EMPTY_ARRAY, false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults)));\n+ new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults, null, null)));\n \n assertTrue(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH,\n new SearchSourceBuilder().query(new MatchNoneQueryBuilder())\n .aggregation(new TermsAggregationBuilder(\"test\", ValueType.STRING).minDocCount(0)), Strings.EMPTY_ARRAY, false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults)));\n+ new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults, null, null)));\n assertTrue(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH,\n new SearchSourceBuilder().query(new MatchNoneQueryBuilder())\n .aggregation(new GlobalAggregationBuilder(\"test\")), Strings.EMPTY_ARRAY, false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults)));\n+ new 
AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults, null, null)));\n \n assertFalse(service.canMatch(new ShardSearchLocalRequest(indexShard.shardId(), 1, SearchType.QUERY_THEN_FETCH,\n new SearchSourceBuilder().query(new MatchNoneQueryBuilder()), Strings.EMPTY_ARRAY, false,\n- new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults)));\n+ new AliasFilter(null, Strings.EMPTY_ARRAY), 1f, allowPartialSearchResults, null, null)));\n \n }\n ", "filename": "server/src/test/java/org/elasticsearch/search/SearchServiceTests.java", "status": "modified" }, { "diff": "@@ -74,6 +74,8 @@ public void testSerialization() throws Exception {\n assertEquals(deserializedRequest.searchType(), shardSearchTransportRequest.searchType());\n assertEquals(deserializedRequest.shardId(), shardSearchTransportRequest.shardId());\n assertEquals(deserializedRequest.numberOfShards(), shardSearchTransportRequest.numberOfShards());\n+ assertEquals(deserializedRequest.indexRoutings(), shardSearchTransportRequest.indexRoutings());\n+ assertEquals(deserializedRequest.preference(), shardSearchTransportRequest.preference());\n assertEquals(deserializedRequest.cacheKey(), shardSearchTransportRequest.cacheKey());\n assertNotSame(deserializedRequest, shardSearchTransportRequest);\n assertEquals(deserializedRequest.getAliasFilter(), shardSearchTransportRequest.getAliasFilter());\n@@ -92,8 +94,10 @@ private ShardSearchTransportRequest createShardSearchTransportRequest() throws I\n } else {\n filteringAliases = new AliasFilter(null, Strings.EMPTY_ARRAY);\n }\n+ final String[] routings = generateRandomStringArray(5, 10, false, true);\n return new ShardSearchTransportRequest(new OriginalIndices(searchRequest), searchRequest, shardId,\n- randomIntBetween(1, 100), filteringAliases, randomBoolean() ? 1.0f : randomFloat(), Math.abs(randomLong()), null);\n+ randomIntBetween(1, 100), filteringAliases, randomBoolean() ? 
1.0f : randomFloat(),\n+ Math.abs(randomLong()), null, routings);\n }\n \n public void testFilteringAliases() throws Exception {", "filename": "server/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.slice;\n \n+import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n@@ -48,9 +49,7 @@\n import static org.hamcrest.Matchers.startsWith;\n \n public class SearchSliceIT extends ESIntegTestCase {\n- private static final int NUM_DOCS = 1000;\n-\n- private int setupIndex(boolean withDocs) throws IOException, ExecutionException, InterruptedException {\n+ private void setupIndex(int numDocs, int numberOfShards) throws IOException, ExecutionException, InterruptedException {\n String mapping = Strings.toString(XContentFactory.jsonBuilder().\n startObject()\n .startObject(\"type\")\n@@ -70,74 +69,112 @@ private int setupIndex(boolean withDocs) throws IOException, ExecutionException,\n .endObject()\n .endObject()\n .endObject());\n- int numberOfShards = randomIntBetween(1, 7);\n assertAcked(client().admin().indices().prepareCreate(\"test\")\n .setSettings(Settings.builder().put(\"number_of_shards\", numberOfShards).put(\"index.max_slices_per_scroll\", 10000))\n .addMapping(\"type\", mapping, XContentType.JSON));\n ensureGreen();\n \n- if (withDocs == false) {\n- return numberOfShards;\n- }\n-\n List<IndexRequestBuilder> requests = new ArrayList<>();\n- for (int i = 0; i < NUM_DOCS; i++) {\n- XContentBuilder builder = jsonBuilder();\n- builder.startObject();\n- builder.field(\"invalid_random_kw\", randomAlphaOfLengthBetween(5, 20));\n- builder.field(\"random_int\", randomInt());\n- builder.field(\"static_int\", 0);\n- builder.field(\"invalid_random_int\", randomInt());\n- builder.endObject();\n+ for (int i = 0; i < numDocs; i++) {\n+ XContentBuilder builder = jsonBuilder()\n+ .startObject()\n+ .field(\"invalid_random_kw\", randomAlphaOfLengthBetween(5, 20))\n+ .field(\"random_int\", randomInt())\n+ .field(\"static_int\", 0)\n+ .field(\"invalid_random_int\", randomInt())\n+ .endObject();\n requests.add(client().prepareIndex(\"test\", \"type\").setSource(builder));\n }\n indexRandom(true, requests);\n- return numberOfShards;\n }\n \n- public void testDocIdSort() throws Exception {\n- int numShards = setupIndex(true);\n- SearchResponse sr = client().prepareSearch(\"test\")\n- .setQuery(matchAllQuery())\n- .setSize(0)\n- .get();\n- int numDocs = (int) sr.getHits().getTotalHits();\n- assertThat(numDocs, equalTo(NUM_DOCS));\n- int max = randomIntBetween(2, numShards*3);\n+ public void testSearchSort() throws Exception {\n+ int numShards = randomIntBetween(1, 7);\n+ int numDocs = randomIntBetween(100, 1000);\n+ setupIndex(numDocs, numShards);\n+ int max = randomIntBetween(2, numShards * 3);\n for (String field : new String[]{\"_id\", \"random_int\", \"static_int\"}) {\n int fetchSize = randomIntBetween(10, 100);\n+ // test _doc sort\n SearchRequestBuilder request = client().prepareSearch(\"test\")\n .setQuery(matchAllQuery())\n .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))\n .setSize(fetchSize)\n .addSort(SortBuilders.fieldSort(\"_doc\"));\n- assertSearchSlicesWithScroll(request, field, max);\n+ assertSearchSlicesWithScroll(request, field, max, numDocs);\n+\n+ // 
test numeric sort\n+ request = client().prepareSearch(\"test\")\n+ .setQuery(matchAllQuery())\n+ .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))\n+ .addSort(SortBuilders.fieldSort(\"random_int\"))\n+ .setSize(fetchSize);\n+ assertSearchSlicesWithScroll(request, field, max, numDocs);\n }\n }\n \n- public void testNumericSort() throws Exception {\n- int numShards = setupIndex(true);\n- SearchResponse sr = client().prepareSearch(\"test\")\n- .setQuery(matchAllQuery())\n- .setSize(0)\n- .get();\n- int numDocs = (int) sr.getHits().getTotalHits();\n- assertThat(numDocs, equalTo(NUM_DOCS));\n-\n- int max = randomIntBetween(2, numShards*3);\n- for (String field : new String[]{\"_id\", \"random_int\", \"static_int\"}) {\n+ public void testWithPreferenceAndRoutings() throws Exception {\n+ int numShards = 10;\n+ int totalDocs = randomIntBetween(100, 1000);\n+ setupIndex(totalDocs, numShards);\n+ {\n+ SearchResponse sr = client().prepareSearch(\"test\")\n+ .setQuery(matchAllQuery())\n+ .setPreference(\"_shards:1,4\")\n+ .setSize(0)\n+ .get();\n+ int numDocs = (int) sr.getHits().getTotalHits();\n+ int max = randomIntBetween(2, numShards * 3);\n int fetchSize = randomIntBetween(10, 100);\n SearchRequestBuilder request = client().prepareSearch(\"test\")\n .setQuery(matchAllQuery())\n .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))\n- .addSort(SortBuilders.fieldSort(\"random_int\"))\n- .setSize(fetchSize);\n- assertSearchSlicesWithScroll(request, field, max);\n+ .setSize(fetchSize)\n+ .setPreference(\"_shards:1,4\")\n+ .addSort(SortBuilders.fieldSort(\"_doc\"));\n+ assertSearchSlicesWithScroll(request, \"_id\", max, numDocs);\n+ }\n+ {\n+ SearchResponse sr = client().prepareSearch(\"test\")\n+ .setQuery(matchAllQuery())\n+ .setRouting(\"foo\", \"bar\")\n+ .setSize(0)\n+ .get();\n+ int numDocs = (int) sr.getHits().getTotalHits();\n+ int max = randomIntBetween(2, numShards * 3);\n+ int fetchSize = randomIntBetween(10, 100);\n+ SearchRequestBuilder request = client().prepareSearch(\"test\")\n+ .setQuery(matchAllQuery())\n+ .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))\n+ .setSize(fetchSize)\n+ .setRouting(\"foo\", \"bar\")\n+ .addSort(SortBuilders.fieldSort(\"_doc\"));\n+ assertSearchSlicesWithScroll(request, \"_id\", max, numDocs);\n+ }\n+ {\n+ assertAcked(client().admin().indices().prepareAliases()\n+ .addAliasAction(IndicesAliasesRequest.AliasActions.add().index(\"test\").alias(\"alias1\").routing(\"foo\"))\n+ .addAliasAction(IndicesAliasesRequest.AliasActions.add().index(\"test\").alias(\"alias2\").routing(\"bar\"))\n+ .addAliasAction(IndicesAliasesRequest.AliasActions.add().index(\"test\").alias(\"alias3\").routing(\"baz\"))\n+ .get());\n+ SearchResponse sr = client().prepareSearch(\"alias1\", \"alias3\")\n+ .setQuery(matchAllQuery())\n+ .setSize(0)\n+ .get();\n+ int numDocs = (int) sr.getHits().getTotalHits();\n+ int max = randomIntBetween(2, numShards * 3);\n+ int fetchSize = randomIntBetween(10, 100);\n+ SearchRequestBuilder request = client().prepareSearch(\"alias1\", \"alias3\")\n+ .setQuery(matchAllQuery())\n+ .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))\n+ .setSize(fetchSize)\n+ .addSort(SortBuilders.fieldSort(\"_doc\"));\n+ assertSearchSlicesWithScroll(request, \"_id\", max, numDocs);\n }\n }\n \n public void testInvalidFields() throws Exception {\n- setupIndex(false);\n+ setupIndex(0, 1);\n SearchPhaseExecutionException exc = expectThrows(SearchPhaseExecutionException.class,\n () -> client().prepareSearch(\"test\")\n .setQuery(matchAllQuery())\n@@ -161,7 +198,7 
@@ public void testInvalidFields() throws Exception {\n }\n \n public void testInvalidQuery() throws Exception {\n- setupIndex(false);\n+ setupIndex(0, 1);\n SearchPhaseExecutionException exc = expectThrows(SearchPhaseExecutionException.class,\n () -> client().prepareSearch()\n .setQuery(matchAllQuery())\n@@ -173,7 +210,7 @@ public void testInvalidQuery() throws Exception {\n equalTo(\"`slice` cannot be used outside of a scroll context\"));\n }\n \n- private void assertSearchSlicesWithScroll(SearchRequestBuilder request, String field, int numSlice) {\n+ private void assertSearchSlicesWithScroll(SearchRequestBuilder request, String field, int numSlice, int numDocs) {\n int totalResults = 0;\n List<String> keys = new ArrayList<>();\n for (int id = 0; id < numSlice; id++) {\n@@ -184,7 +221,7 @@ private void assertSearchSlicesWithScroll(SearchRequestBuilder request, String f\n int numSliceResults = searchResponse.getHits().getHits().length;\n String scrollId = searchResponse.getScrollId();\n for (SearchHit hit : searchResponse.getHits().getHits()) {\n- keys.add(hit.getId());\n+ assertTrue(keys.add(hit.getId()));\n }\n while (searchResponse.getHits().getHits().length > 0) {\n searchResponse = client().prepareSearchScroll(\"test\")\n@@ -195,15 +232,15 @@ private void assertSearchSlicesWithScroll(SearchRequestBuilder request, String f\n totalResults += searchResponse.getHits().getHits().length;\n numSliceResults += searchResponse.getHits().getHits().length;\n for (SearchHit hit : searchResponse.getHits().getHits()) {\n- keys.add(hit.getId());\n+ assertTrue(keys.add(hit.getId()));\n }\n }\n assertThat(numSliceResults, equalTo(expectedSliceResults));\n clearScroll(scrollId);\n }\n- assertThat(totalResults, equalTo(NUM_DOCS));\n- assertThat(keys.size(), equalTo(NUM_DOCS));\n- assertThat(new HashSet(keys).size(), equalTo(NUM_DOCS));\n+ assertThat(totalResults, equalTo(numDocs));\n+ assertThat(keys.size(), equalTo(numDocs));\n+ assertThat(new HashSet(keys).size(), equalTo(numDocs));\n }\n \n private Throwable findRootCause(Exception e) {", "filename": "server/src/test/java/org/elasticsearch/search/slice/SearchSliceIT.java", "status": "modified" }, { "diff": "@@ -30,19 +30,38 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.RAMDirectory;\n import org.elasticsearch.Version;\n+import org.elasticsearch.action.IndicesRequest;\n+import org.elasticsearch.action.search.SearchShardIterator;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.routing.GroupShardsIterator;\n+import org.elasticsearch.cluster.routing.OperationRouting;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexSettings;\n 
import org.elasticsearch.index.fielddata.IndexNumericFieldData;\n import org.elasticsearch.index.mapper.IdFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.QueryShardContext;\n+import org.elasticsearch.index.query.Rewriteable;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.search.Scroll;\n+import org.elasticsearch.search.builder.SearchSourceBuilder;\n+import org.elasticsearch.search.internal.AliasFilter;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n@@ -58,13 +77,138 @@\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n+import static org.mockito.Matchers.any;\n import static org.mockito.Mockito.mock;\n import static org.mockito.Mockito.when;\n \n public class SliceBuilderTests extends ESTestCase {\n private static final int MAX_SLICE = 20;\n \n- private static SliceBuilder randomSliceBuilder() throws IOException {\n+ static class ShardSearchRequestTest implements IndicesRequest, ShardSearchRequest {\n+ private final String[] indices;\n+ private final int shardId;\n+ private final String[] indexRoutings;\n+ private final String preference;\n+\n+ ShardSearchRequestTest(String index, int shardId, String[] indexRoutings, String preference) {\n+ this.indices = new String[] { index };\n+ this.shardId = shardId;\n+ this.indexRoutings = indexRoutings;\n+ this.preference = preference;\n+ }\n+\n+ @Override\n+ public String[] indices() {\n+ return indices;\n+ }\n+\n+ @Override\n+ public IndicesOptions indicesOptions() {\n+ return null;\n+ }\n+\n+ @Override\n+ public ShardId shardId() {\n+ return new ShardId(new Index(indices[0], indices[0]), shardId);\n+ }\n+\n+ @Override\n+ public String[] types() {\n+ return new String[0];\n+ }\n+\n+ @Override\n+ public SearchSourceBuilder source() {\n+ return null;\n+ }\n+\n+ @Override\n+ public AliasFilter getAliasFilter() {\n+ return null;\n+ }\n+\n+ @Override\n+ public void setAliasFilter(AliasFilter filter) {\n+\n+ }\n+\n+ @Override\n+ public void source(SearchSourceBuilder source) {\n+\n+ }\n+\n+ @Override\n+ public int numberOfShards() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public SearchType searchType() {\n+ return null;\n+ }\n+\n+ @Override\n+ public float indexBoost() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public long nowInMillis() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public Boolean requestCache() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Boolean allowPartialSearchResults() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Scroll scroll() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String[] indexRoutings() {\n+ return indexRoutings;\n+ }\n+\n+ @Override\n+ public String preference() {\n+ return preference;\n+ }\n+\n+ @Override\n+ public void setProfile(boolean profile) {\n+\n+ }\n+\n+ @Override\n+ public boolean isProfile() {\n+ return false;\n+ }\n+\n+ @Override\n+ public BytesReference cacheKey() throws IOException {\n+ return null;\n+ }\n+\n+ @Override\n+ public String getClusterAlias() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Rewriteable<Rewriteable> getRewriteable() {\n+ return null;\n+ }\n+ }\n+\n+ private static SliceBuilder randomSliceBuilder() {\n int max = randomIntBetween(2, MAX_SLICE);\n int id = randomIntBetween(1, max - 1);\n String field = randomAlphaOfLengthBetween(5, 20);\n@@ -75,7 +219,7 @@ private static SliceBuilder 
serializedCopy(SliceBuilder original) throws IOExcep\n return copyWriteable(original, new NamedWriteableRegistry(Collections.emptyList()), SliceBuilder::new);\n }\n \n- private static SliceBuilder mutate(SliceBuilder original) throws IOException {\n+ private static SliceBuilder mutate(SliceBuilder original) {\n switch (randomIntBetween(0, 2)) {\n case 0: return new SliceBuilder(original.getField() + \"_xyz\", original.getId(), original.getMax());\n case 1: return new SliceBuilder(original.getField(), original.getId() - 1, original.getMax());\n@@ -84,6 +228,63 @@ private static SliceBuilder mutate(SliceBuilder original) throws IOException {\n }\n }\n \n+ private IndexSettings createIndexSettings(Version indexVersionCreated, int numShards) {\n+ Settings settings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, indexVersionCreated)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numShards)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .build();\n+ IndexMetaData indexState = IndexMetaData.builder(\"index\").settings(settings).build();\n+ return new IndexSettings(indexState, Settings.EMPTY);\n+ }\n+\n+ private ShardSearchRequest createRequest(int shardId) {\n+ return createRequest(shardId, Strings.EMPTY_ARRAY, null);\n+ }\n+\n+ private ShardSearchRequest createRequest(int shardId, String[] routings, String preference) {\n+ return new ShardSearchRequestTest(\"index\", shardId, routings, preference);\n+ }\n+\n+ private QueryShardContext createShardContext(Version indexVersionCreated, IndexReader reader,\n+ String fieldName, DocValuesType dvType, int numShards, int shardId) {\n+ MappedFieldType fieldType = new MappedFieldType() {\n+ @Override\n+ public MappedFieldType clone() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String typeName() {\n+ return null;\n+ }\n+\n+ @Override\n+ public Query termQuery(Object value, @Nullable QueryShardContext context) {\n+ return null;\n+ }\n+\n+ public Query existsQuery(QueryShardContext context) {\n+ return null;\n+ }\n+ };\n+ fieldType.setName(fieldName);\n+ QueryShardContext context = mock(QueryShardContext.class);\n+ when(context.fieldMapper(fieldName)).thenReturn(fieldType);\n+ when(context.getIndexReader()).thenReturn(reader);\n+ when(context.getShardId()).thenReturn(shardId);\n+ IndexSettings indexSettings = createIndexSettings(indexVersionCreated, numShards);\n+ when(context.getIndexSettings()).thenReturn(indexSettings);\n+ if (dvType != null) {\n+ fieldType.setHasDocValues(true);\n+ fieldType.setDocValuesType(dvType);\n+ IndexNumericFieldData fd = mock(IndexNumericFieldData.class);\n+ when(context.getForField(fieldType)).thenReturn(fd);\n+ }\n+ return context;\n+\n+ }\n+\n public void testSerialization() throws Exception {\n SliceBuilder original = randomSliceBuilder();\n SliceBuilder deserialized = serializedCopy(original);\n@@ -131,92 +332,41 @@ public void testInvalidArguments() throws Exception {\n assertEquals(\"max must be greater than id\", e.getMessage());\n }\n \n- public void testToFilter() throws IOException {\n+ public void testToFilterSimple() throws IOException {\n Directory dir = new RAMDirectory();\n try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {\n writer.commit();\n }\n- QueryShardContext context = mock(QueryShardContext.class);\n try (IndexReader reader = DirectoryReader.open(dir)) {\n- MappedFieldType fieldType = new MappedFieldType() {\n- @Override\n- public MappedFieldType clone() {\n- return null;\n- }\n-\n- @Override\n- public String typeName() 
{\n- return null;\n- }\n-\n- @Override\n- public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- return null;\n- }\n-\n- public Query existsQuery(QueryShardContext context) {\n- return null;\n- }\n- };\n- fieldType.setName(IdFieldMapper.NAME);\n- fieldType.setHasDocValues(false);\n- when(context.fieldMapper(IdFieldMapper.NAME)).thenReturn(fieldType);\n- when(context.getIndexReader()).thenReturn(reader);\n- Settings settings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n- .build();\n- IndexMetaData indexState = IndexMetaData.builder(\"index\").settings(settings).build();\n- IndexSettings indexSettings = new IndexSettings(indexState, Settings.EMPTY);\n- when(context.getIndexSettings()).thenReturn(indexSettings);\n+ QueryShardContext context =\n+ createShardContext(Version.CURRENT, reader, \"_id\", DocValuesType.SORTED_NUMERIC, 1,0);\n SliceBuilder builder = new SliceBuilder(5, 10);\n- Query query = builder.toFilter(context, 0, 1);\n+ Query query = builder.toFilter(null, createRequest(0), context, Version.CURRENT);\n assertThat(query, instanceOf(TermsSliceQuery.class));\n \n- assertThat(builder.toFilter(context, 0, 1), equalTo(query));\n+ assertThat(builder.toFilter(null, createRequest(0), context, Version.CURRENT), equalTo(query));\n try (IndexReader newReader = DirectoryReader.open(dir)) {\n when(context.getIndexReader()).thenReturn(newReader);\n- assertThat(builder.toFilter(context, 0, 1), equalTo(query));\n+ assertThat(builder.toFilter(null, createRequest(0), context, Version.CURRENT), equalTo(query));\n }\n }\n+ }\n \n+ public void testToFilterRandom() throws IOException {\n+ Directory dir = new RAMDirectory();\n+ try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {\n+ writer.commit();\n+ }\n try (IndexReader reader = DirectoryReader.open(dir)) {\n- MappedFieldType fieldType = new MappedFieldType() {\n- @Override\n- public MappedFieldType clone() {\n- return null;\n- }\n-\n- @Override\n- public String typeName() {\n- return null;\n- }\n-\n- @Override\n- public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- return null;\n- }\n-\n- public Query existsQuery(QueryShardContext context) {\n- return null;\n- }\n- };\n- fieldType.setName(\"field_doc_values\");\n- fieldType.setHasDocValues(true);\n- fieldType.setDocValuesType(DocValuesType.SORTED_NUMERIC);\n- when(context.fieldMapper(\"field_doc_values\")).thenReturn(fieldType);\n- when(context.getIndexReader()).thenReturn(reader);\n- IndexNumericFieldData fd = mock(IndexNumericFieldData.class);\n- when(context.getForField(fieldType)).thenReturn(fd);\n- SliceBuilder builder = new SliceBuilder(\"field_doc_values\", 5, 10);\n- Query query = builder.toFilter(context, 0, 1);\n+ QueryShardContext context =\n+ createShardContext(Version.CURRENT, reader, \"field\", DocValuesType.SORTED_NUMERIC, 1,0);\n+ SliceBuilder builder = new SliceBuilder(\"field\", 5, 10);\n+ Query query = builder.toFilter(null, createRequest(0), context, Version.CURRENT);\n assertThat(query, instanceOf(DocValuesSliceQuery.class));\n-\n- assertThat(builder.toFilter(context, 0, 1), equalTo(query));\n+ assertThat(builder.toFilter(null, createRequest(0), context, Version.CURRENT), equalTo(query));\n try (IndexReader newReader = DirectoryReader.open(dir)) {\n when(context.getIndexReader()).thenReturn(newReader);\n- 
assertThat(builder.toFilter(context, 0, 1), equalTo(query));\n+ assertThat(builder.toFilter(null, createRequest(0), context, Version.CURRENT), equalTo(query));\n }\n \n // numSlices > numShards\n@@ -226,7 +376,8 @@ public Query existsQuery(QueryShardContext context) {\n for (int i = 0; i < numSlices; i++) {\n for (int j = 0; j < numShards; j++) {\n SliceBuilder slice = new SliceBuilder(\"_id\", i, numSlices);\n- Query q = slice.toFilter(context, j, numShards);\n+ context = createShardContext(Version.CURRENT, reader, \"_id\", DocValuesType.SORTED, numShards, j);\n+ Query q = slice.toFilter(null, createRequest(j), context, Version.CURRENT);\n if (q instanceof TermsSliceQuery || q instanceof MatchAllDocsQuery) {\n AtomicInteger count = numSliceMap.get(j);\n if (count == null) {\n@@ -250,12 +401,13 @@ public Query existsQuery(QueryShardContext context) {\n \n // numShards > numSlices\n numShards = randomIntBetween(4, 100);\n- numSlices = randomIntBetween(2, numShards-1);\n+ numSlices = randomIntBetween(2, numShards - 1);\n List<Integer> targetShards = new ArrayList<>();\n for (int i = 0; i < numSlices; i++) {\n for (int j = 0; j < numShards; j++) {\n SliceBuilder slice = new SliceBuilder(\"_id\", i, numSlices);\n- Query q = slice.toFilter(context, j, numShards);\n+ context = createShardContext(Version.CURRENT, reader, \"_id\", DocValuesType.SORTED, numShards, j);\n+ Query q = slice.toFilter(null, createRequest(j), context, Version.CURRENT);\n if (q instanceof MatchNoDocsQuery == false) {\n assertThat(q, instanceOf(MatchAllDocsQuery.class));\n targetShards.add(j);\n@@ -271,7 +423,7 @@ public Query existsQuery(QueryShardContext context) {\n for (int i = 0; i < numSlices; i++) {\n for (int j = 0; j < numShards; j++) {\n SliceBuilder slice = new SliceBuilder(\"_id\", i, numSlices);\n- Query q = slice.toFilter(context, j, numShards);\n+ Query q = slice.toFilter(null, createRequest(j), context, Version.CURRENT);\n if (i == j) {\n assertThat(q, instanceOf(MatchAllDocsQuery.class));\n } else {\n@@ -280,85 +432,35 @@ public Query existsQuery(QueryShardContext context) {\n }\n }\n }\n+ }\n \n+ public void testInvalidField() throws IOException {\n+ Directory dir = new RAMDirectory();\n+ try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {\n+ writer.commit();\n+ }\n try (IndexReader reader = DirectoryReader.open(dir)) {\n- MappedFieldType fieldType = new MappedFieldType() {\n- @Override\n- public MappedFieldType clone() {\n- return null;\n- }\n-\n- @Override\n- public String typeName() {\n- return null;\n- }\n-\n- @Override\n- public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- return null;\n- }\n-\n- public Query existsQuery(QueryShardContext context) {\n- return null;\n- }\n- };\n- fieldType.setName(\"field_without_doc_values\");\n- when(context.fieldMapper(\"field_without_doc_values\")).thenReturn(fieldType);\n- when(context.getIndexReader()).thenReturn(reader);\n- SliceBuilder builder = new SliceBuilder(\"field_without_doc_values\", 5, 10);\n- IllegalArgumentException exc =\n- expectThrows(IllegalArgumentException.class, () -> builder.toFilter(context, 0, 1));\n+ QueryShardContext context = createShardContext(Version.CURRENT, reader, \"field\", null, 1,0);\n+ SliceBuilder builder = new SliceBuilder(\"field\", 5, 10);\n+ IllegalArgumentException exc = expectThrows(IllegalArgumentException.class,\n+ () -> builder.toFilter(null, createRequest(0), context, Version.CURRENT));\n assertThat(exc.getMessage(), containsString(\"cannot 
load numeric doc values\"));\n }\n }\n \n-\n public void testToFilterDeprecationMessage() throws IOException {\n Directory dir = new RAMDirectory();\n try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {\n writer.commit();\n }\n- QueryShardContext context = mock(QueryShardContext.class);\n try (IndexReader reader = DirectoryReader.open(dir)) {\n- MappedFieldType fieldType = new MappedFieldType() {\n- @Override\n- public MappedFieldType clone() {\n- return null;\n- }\n-\n- @Override\n- public String typeName() {\n- return null;\n- }\n-\n- @Override\n- public Query termQuery(Object value, @Nullable QueryShardContext context) {\n- return null;\n- }\n-\n- public Query existsQuery(QueryShardContext context) {\n- return null;\n- }\n- };\n- fieldType.setName(\"_uid\");\n- fieldType.setHasDocValues(false);\n- when(context.fieldMapper(\"_uid\")).thenReturn(fieldType);\n- when(context.getIndexReader()).thenReturn(reader);\n- Settings settings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_6_3_0)\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2)\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n- .build();\n- IndexMetaData indexState = IndexMetaData.builder(\"index\").settings(settings).build();\n- IndexSettings indexSettings = new IndexSettings(indexState, Settings.EMPTY);\n- when(context.getIndexSettings()).thenReturn(indexSettings);\n+ QueryShardContext context = createShardContext(Version.V_6_3_0, reader, \"_uid\", null, 1,0);\n SliceBuilder builder = new SliceBuilder(\"_uid\", 5, 10);\n- Query query = builder.toFilter(context, 0, 1);\n+ Query query = builder.toFilter(null, createRequest(0), context, Version.CURRENT);\n assertThat(query, instanceOf(TermsSliceQuery.class));\n- assertThat(builder.toFilter(context, 0, 1), equalTo(query));\n+ assertThat(builder.toFilter(null, createRequest(0), context, Version.CURRENT), equalTo(query));\n assertWarnings(\"Computing slices on the [_uid] field is deprecated for 6.x indices, use [_id] instead\");\n }\n-\n }\n \n public void testSerializationBackcompat() throws IOException {\n@@ -375,4 +477,35 @@ public void testSerializationBackcompat() throws IOException {\n SliceBuilder::new, Version.V_6_3_0);\n assertEquals(sliceBuilder, copy63);\n }\n+\n+ public void testToFilterWithRouting() throws IOException {\n+ Directory dir = new RAMDirectory();\n+ try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {\n+ writer.commit();\n+ }\n+ ClusterService clusterService = mock(ClusterService.class);\n+ ClusterState state = mock(ClusterState.class);\n+ when(state.metaData()).thenReturn(MetaData.EMPTY_META_DATA);\n+ when(clusterService.state()).thenReturn(state);\n+ OperationRouting routing = mock(OperationRouting.class);\n+ GroupShardsIterator<ShardIterator> it = new GroupShardsIterator<>(\n+ Collections.singletonList(\n+ new SearchShardIterator(null, new ShardId(\"index\", \"index\", 1), null, null)\n+ )\n+ );\n+ when(routing.searchShards(any(), any(), any(), any())).thenReturn(it);\n+ when(clusterService.operationRouting()).thenReturn(routing);\n+ when(clusterService.getSettings()).thenReturn(Settings.EMPTY);\n+ try (IndexReader reader = DirectoryReader.open(dir)) {\n+ QueryShardContext context = createShardContext(Version.CURRENT, reader, \"field\", DocValuesType.SORTED, 5, 0);\n+ SliceBuilder builder = new SliceBuilder(\"field\", 6, 10);\n+ String[] routings = new String[] { \"foo\" };\n+ Query query = builder.toFilter(clusterService, 
createRequest(1, routings, null), context, Version.CURRENT);\n+ assertEquals(new DocValuesSliceQuery(\"field\", 6, 10), query);\n+ query = builder.toFilter(clusterService, createRequest(1, Strings.EMPTY_ARRAY, \"foo\"), context, Version.CURRENT);\n+ assertEquals(new DocValuesSliceQuery(\"field\", 6, 10), query);\n+ query = builder.toFilter(clusterService, createRequest(1, Strings.EMPTY_ARRAY, \"foo\"), context, Version.V_6_2_0);\n+ assertEquals(new DocValuesSliceQuery(\"field\", 1, 2), query);\n+ }\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/slice/SliceBuilderTests.java", "status": "modified" } ] }
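The test changes above exercise `SliceBuilder`, which partitions a scroll search into independent slices so several consumers can drain an index in parallel. As a rough illustration of the feature under test — not code from this PR; the index name, slice count, and page size below are made up, and the builder method names mirror the integration test above — a sliced scroll with the transport client looks roughly like this:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.Scroll;
import org.elasticsearch.search.slice.SliceBuilder;
import org.elasticsearch.search.sort.SortBuilders;

public class SlicedScrollSketch {
    /** Drains "my-index" with {@code maxSlices} independent sliced scrolls (sketch only). */
    static void drain(Client client, int maxSlices) {
        for (int sliceId = 0; sliceId < maxSlices; sliceId++) {
            SearchResponse response = client.prepareSearch("my-index")
                .setQuery(QueryBuilders.matchAllQuery())
                .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))
                .setSize(100)
                .addSort(SortBuilders.fieldSort("_doc"))
                .slice(new SliceBuilder("_id", sliceId, maxSlices)) // id must be < max
                .get();
            while (response.getHits().getHits().length > 0) {
                // process response.getHits() here, then continue this slice's scroll
                response = client.prepareSearchScroll(response.getScrollId())
                    .setScroll(new Scroll(TimeValue.timeValueSeconds(10)))
                    .get();
            }
        }
    }
}
```

Each slice behaves like its own scroll, and every document belongs to exactly one slice (assigned by hashing `_id` or the doc values of the chosen field), which is the invariant the `assertSearchSlicesWithScroll` helper above verifies by checking that the ids seen across slices are unique and add up to the total document count.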
{ "body": "#Elasticsearch version: 6.2.1\r\n\r\nPlugins installed: None\r\n\r\nJVM version: 1.8.0_131\r\n\r\nOS version: MacOS 10.13\r\n\r\nI am running across a few test cases where the following error is not making sense to me.\r\n\r\nThe first example includes a [.] in front of `foo`. In this case, there is an error that states that there cannot be a [.] at the beginning of the object field:\r\n\r\nInput:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \".foo\" : {\r\n \"foo\" : {\r\n \"bar\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nOutput (Error):\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field starting or ending with a [.] makes object resolution ambiguous: [.foo]\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nThis makes sense according to the error.\r\n\r\nThe second example includes a [.] after `foo`. In this case, there is no error.\r\n\r\nInput:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \"foo.\" : {\r\n \"foo\" : {\r\n \"bar\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nOutput (Success):\r\n```\r\n{\r\n \"_index\" : \"test\",\r\n \"_type\" : \"test\",\r\n \"_id\" : \"1\",\r\n \"_version\" : 9,\r\n \"result\" : \"updated\",\r\n \"_shards\" : {\r\n \"total\" : 2,\r\n \"successful\" : 1,\r\n \"failed\" : 0\r\n },\r\n \"_seq_no\" : 8,\r\n \"_primary_term\" : 1\r\n}\r\n```\r\n\r\nGet (The [.] is included in the field name):\r\n```\r\ncurl -XGET 'localhost:9200/test/test/1'\r\n{ \"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":9,\"found\":true,\"_source\": {\r\n\"foo.\" : {\r\n \"foo\" : {\r\n \"bar\" : 123 \r\n }\r\n} \r\n}\r\n```\r\n\r\nI was looking at the following GitHub / Elasticsearch Discuss Posts\r\n1) https://discuss.elastic.co/t/name-cannot-be-empty-string-error-if-field-name-starts-with/66808\r\n2) https://github.com/elastic/elasticsearch/issues/15951\r\n\r\nBased on these, it seems like the second example should be throwing the same error as the first example.", "comments": [ { "body": "> I am running across a few test cases where the following error is not making sense to me.\r\n\r\nIf I understand this correctly, it is the lack of error in the second case that doesn't make sense to you, correct ? [Dots are allowed in field names since 2.4](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/dots-in-names.html), why would you think the second example should throw an error ?", "created_at": "2018-03-09T04:58:55Z" }, { "body": "@jkakavas I am basing this off of https://github.com/elastic/elasticsearch/issues/15951, where it states in the first line, \"we had to reject field names containing dots.\"", "created_at": "2018-03-09T05:23:17Z" }, { "body": "@danlevy1 You are correct, this is a bug. The problem is in `DocumentParser.splitAndValidatePath`. Here we call `String.split`, and assume that if any element is an empty string, then there was a double dot. However, it does not actually work for trailing dot, due to oddities in the implementation of `String.split`. 
From the javadocs there:\r\n\r\n> Trailing empty strings are therefore not included in the resulting array.\r\n\r\nSo the algorithm in `splitAndValidatePath` needs to be amended. Additionally, it seems we are not actually calling that method with the full path all the time, as your example error only contained `.foo` when it should have been `.foo.foo.bar`.", "created_at": "2018-03-09T05:31:45Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-09T05:32:01Z" }, { "body": "~~#15951 is 2 years and explicitly discusses options for re enabling support for dots in field names. It was closed when this was introduced in 2.4 as I linked above. I'm going to close this issue for now as this is expected behavior.~~\r\n\r\nJumped the gun on this one, thanks @rjernst for chiming in.", "created_at": "2018-03-09T05:33:18Z" }, { "body": "@rjernst I will take look more into this. I was actually fixing issue https://github.com/elastic/elasticsearch/issues/21862 when I came across this one. I will update this as soon as I take a look at the `splitAndValidatePath` method. I will hold off on posting the fix on issue https://github.com/elastic/elasticsearch/issues/21862 until I can fix this bug. It's possible the two are intertwined.", "created_at": "2018-03-09T06:33:32Z" }, { "body": "@rjernst Just to confirm, the second example I provided in the bug report should throw the same error that the first example throws. Is this correct?", "created_at": "2018-03-11T17:50:45Z" }, { "body": "Yes, that is correct. ", "created_at": "2018-03-11T22:08:13Z" }, { "body": "cc @elastic/es-search-aggs ", "created_at": "2018-03-14T09:22:20Z" }, { "body": "@rjernst I have been playing around with different test cases and the method `splitAndValidatePath` seems to take in the string in parts. In other words, it never takes in the full field path (`.foo.foo.bar`) in my first example. Do you have a test case where `splitAndValidatePath` is called with the full field path? It seems to me like this is more of a systemic problem.\r\n\r\nAt this point, I can treat the `splitAndValidatePath` method as a method that only takes in one part at a time if that's what you feel is best. Do you have any thoughts?", "created_at": "2018-03-23T02:57:59Z" }, { "body": "`splitAndValidatePath` takes in each element within the json structure. For example, consider following json document:\r\n\r\n```\r\n{\r\n \"foo.bar\" : {\r\n \"baz\" : 1\r\n }\r\n}\r\n```\r\n\r\nIn this example, we would call `splitAndValidatePath(\"foo.bar\")`, followed by `splitAndValidatePath(\"baz\")`. The first call would produce 2 elements, which would cause us to lookup (and possibly dynamically create as an object field) a field in the mappings called `foo`. The intent of the method is to split these field names in json that may contain path separators (ie dot), so that we can mimic the above as if it was specified as:\r\n\r\n```\r\n{\r\n \"foo\" : {\r\n \"bar\" : {\r\n \"baz\" : 1\r\n }\r\n }\r\n}\r\n```", "created_at": "2018-03-23T08:50:31Z" }, { "body": "After running ./gradlew check on my fixed code, I am getting two errors.\r\n\r\n1. `org.elasticsearch.index.mapper.DocumentParserTests.testDynamicFieldsEmptyName`\r\n2. 
`org.elasticsearch.index.mapper.DocumentParserTests.testDynamicFieldsStartingAndEndingWithDot`\r\n\r\nFor the first error, I ran these two tests to see why the error is occurring:\r\n\r\nTest 1 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \" \" : {\r\n \"bar\" : {\r\n \"baz\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 1 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field cannot contain only whitespace: [' .bar']\"\r\n }\r\n ],\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field cannot contain only whitespace: [' .bar']\"\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nTest 2 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \"foo\" : {\r\n \"bar\" : {\r\n \" \" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 2 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field cannot contain only whitespace: ['foo.bar. ']\"\r\n }\r\n ],\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field cannot contain only whitespace: ['foo.bar. ']\"\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nWhen I looked at `DocumentParserTests.testDynamicFieldsEmptyName` the error looks the same as mine. I am not sure why this test is failing.\r\n\r\nFor the second error, I believe the test is failing because my fix changes the way [.]'s are handled. Here are four test cases:\r\n\r\nTest 1 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \".foo\" : {\r\n \"bar\" : {\r\n \"baz\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 1 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field starting or ending with a [.] makes object resolution ambiguous: ['.foo']\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nTest 2 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \"foo.\" : {\r\n \"bar\" : {\r\n \"baz\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 2 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field starting or ending with a [.] 
makes object resolution ambiguous: ['foo.']\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nTest 3 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \"fooOne..fooTwo\" : {\r\n \"bar\" : {\r\n \"baz\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 3 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field starting or ending with a [.] makes object resolution ambiguous: [fooOne..fooTwo]\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nTest 4 Input:\r\n\r\n```\r\ncurl -XPUT 'localhost:9200/test/test/1?pretty' -H 'Content-Type: application/json' -d' {\r\n \"fooOne.fooTwo.\" : {\r\n \"bar\" : {\r\n \"baz\" : 123 \r\n }\r\n } \r\n}\r\n'\r\n```\r\n\r\nTest 4 Output:\r\n\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\"\r\n }\r\n ],\r\n \"type\" : \"mapper_parsing_exception\",\r\n \"reason\" : \"failed to parse\",\r\n \"caused_by\" : {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"object field starting or ending with a [.] makes object resolution ambiguous: ['fooOne.fooTwo.']\"\r\n }\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\n\r\nWhen I looked at `DocumentParserTests.testDynamicFieldsStartingAndEndingWithDot` the error appears to create two [.]'s ('`top..foo..bar`'). My fix doesn't do this and I don't think it should be doing that.\r\n\r\nCan an Elastic team member please advise me on how to move forward? I don't want to submit a PR until all of these tests pass. @rjernst \r\n", "created_at": "2018-03-27T14:42:52Z" }, { "body": "@danlevy1 I'm sorry that it has taken us so long to get back to you. Are you still interested in working on a fix for this issue (and #21862)?\r\n\r\nIf so, it would be great to open a PR with your current approach. You can make a note on the PR that certain tests are failing, and we can discuss how to debug them there. If you'd like, you can open a 'Draft' PR to make it clear it's a work in progress.", "created_at": "2019-04-08T19:36:33Z" }, { "body": "@danlevy1 another friendly ping to see if you'd like to pick this up, otherwise I'll plan to submit a PR. As a heads up, I opened #49946 to address the problem for direct 'put mapping' calls.", "created_at": "2019-12-10T23:59:28Z" }, { "body": "Pinging @elastic/es-search (Team:Search)", "created_at": "2023-05-12T08:42:48Z" } ], "number": 28948, "title": "\"object field starting or ending with a [.] makes object resolution ambiguous\" does not appear to be accurate for all inputs" }
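The `String.split` quirk that @rjernst points to above is easy to reproduce in isolation. The small self-contained snippet below (plain Java, not part of the Elasticsearch code base) shows why a leading dot was caught by the old check while a trailing dot slipped through:

```java
import java.util.Arrays;

public class SplitBehavior {
    public static void main(String[] args) {
        // Leading empty strings are kept, so ".foo" yields ["", "foo"] and the old check fired.
        System.out.println(Arrays.toString(".foo".split("\\.")));      // [, foo]
        // Trailing empty strings are dropped (split with limit 0), so "foo." yields just ["foo"]
        // and there is no empty element left for the old check to detect.
        System.out.println(Arrays.toString("foo.".split("\\.")));      // [foo]
        // Interior empty strings are kept, so "foo..bar" was still rejected.
        System.out.println(Arrays.toString("foo..bar".split("\\.")));  // [foo, , bar]
    }
}
```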
{ "body": "- Fixed detection of field with just '.' with a longer path\r\n- All bad field names now include the full path, rather than\r\n just field name.\r\n- Introduced a single test for data driven tests in DocumentParserTest.\r\n- Fix #21862, #28948\r\n\r\n<!--\r\nThank you for your interest in and contributing to Elasticsearch! There\r\nare a few simple things to check before submitting your pull request\r\nthat can help with the review process. You should delete these items\r\nfrom your submission, but they are here to help bring them to your\r\nattention.\r\n-->\r\n\r\n- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?\r\nYES\r\n- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?\r\nYES\r\n\r\n- If submitting code, have you built your formula locally prior to submission with `gradle check`?\r\nYES\r\n\r\n- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.\r\nAGAINST master\r\n\r\n- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?\r\nAgainst OSX\r\n\r\n- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.\r\n\r\nN/A", "number": 29523, "review_comments": [], "title": "Fixed field name/path checking and report messages during ingest" }
{ "commits": [ { "message": "Fixed field name/path checking and report messages during ingest\n\n- Fixed detection of field with just '.' with a longer path\n- All bad field names now include the full path, rather than\n just field name.\n- Introduced a single test for data driven tests in DocumentParserTest.\n- Fix #21862, #28948" } ], "files": [ { "diff": "@@ -171,29 +171,64 @@ private static MapperParsingException wrapInMapperParsingException(SourceToParse\n return new MapperParsingException(\"failed to parse\", e);\n }\n \n- private static String[] splitAndValidatePath(String fullFieldPath) {\n- if (fullFieldPath.contains(\".\")) {\n- String[] parts = fullFieldPath.split(\"\\\\.\");\n+ private static String[] splitAndValidatePath(final String path, final ParseContext context) {\n+ if (Strings.isEmpty(path)) {\n+ throw new IllegalArgumentException(pathContainsEmptyString(makeFullPath(path, context)));\n+ }\n+ if(path.contains(\"..\")) {\n+ throw new IllegalArgumentException(pathContainsEmptyComponent(makeFullPath(path, context)));\n+ }\n+ // test before split because String.split on dot drops trailing empty in result.\n+\n+ if (path.contains(\".\")) {\n+ if(path.endsWith(\".\")) {\n+ throw new IllegalArgumentException(pathStartOrEndingWithDotAmbiguous(makeFullPath(path, context)));\n+ }\n+\n+ String[] parts = path.split(\"\\\\.\");\n for (String part : parts) {\n if (Strings.hasText(part) == false) {\n // check if the field name contains only whitespace\n if (Strings.isEmpty(part) == false) {\n- throw new IllegalArgumentException(\n- \"object field cannot contain only whitespace: ['\" + fullFieldPath + \"']\");\n+ throw new IllegalArgumentException(pathContainsOnlyWhitespace(makeFullPath(path, context)));\n }\n- throw new IllegalArgumentException(\n- \"object field starting or ending with a [.] makes object resolution ambiguous: [\" + fullFieldPath + \"]\");\n+ throw new IllegalArgumentException(pathStartOrEndingWithDotAmbiguous(makeFullPath(path, context)));\n }\n }\n return parts;\n } else {\n- if (Strings.isEmpty(fullFieldPath)) {\n- throw new IllegalArgumentException(\"field name cannot be an empty string\");\n+ if (Strings.hasText(path) == false) {\n+ throw new IllegalArgumentException(pathContainsOnlyWhitespace(makeFullPath(path, context)));\n }\n- return new String[] {fullFieldPath};\n+ if(path.endsWith(\".\")) {\n+ throw new IllegalArgumentException(pathStartOrEndingWithDotAmbiguous(makeFullPath(path, context)));\n+ }\n+\n+ return new String[] {path};\n }\n }\n \n+ private static String makeFullPath(final String path, final ParseContext context)\n+ {\n+ return null == context ? path : context.path().pathAsText(path);\n+ }\n+\n+ static String pathContainsEmptyString(final String path) {\n+ return \"field name cannot be an empty string ['\" + path + \"']\";\n+ }\n+\n+ static String pathContainsEmptyComponent(final String path) {\n+ return \"object field cannot contain empty component: ['\" + path + \"']\";\n+ }\n+\n+ static String pathContainsOnlyWhitespace(final String path){\n+ return \"object field cannot contain only whitespace: ['\" + path + \"']\";\n+ }\n+\n+ static String pathStartOrEndingWithDotAmbiguous(final String path){\n+ return \"object field starting or ending with a [.] makes object resolution ambiguous: [\" + path + \"]\";\n+ }\n+\n /** Creates a Mapping containing any dynamically added fields, or returns null if there were no dynamic mappings. 
*/\n static Mapping createDynamicUpdate(Mapping mapping, DocumentMapper docMapper, List<Mapper> dynamicMappers) {\n if (dynamicMappers.isEmpty()) {\n@@ -206,7 +241,7 @@ static Mapping createDynamicUpdate(Mapping mapping, DocumentMapper docMapper, Li\n Iterator<Mapper> dynamicMapperItr = dynamicMappers.iterator();\n List<ObjectMapper> parentMappers = new ArrayList<>();\n Mapper firstUpdate = dynamicMapperItr.next();\n- parentMappers.add(createUpdate(mapping.root(), splitAndValidatePath(firstUpdate.name()), 0, firstUpdate));\n+ parentMappers.add(createUpdate(mapping.root(), splitAndValidatePath(firstUpdate.name(), null), 0, firstUpdate));\n Mapper previousMapper = null;\n while (dynamicMapperItr.hasNext()) {\n Mapper newMapper = dynamicMapperItr.next();\n@@ -218,7 +253,7 @@ static Mapping createDynamicUpdate(Mapping mapping, DocumentMapper docMapper, Li\n continue;\n }\n previousMapper = newMapper;\n- String[] nameParts = splitAndValidatePath(newMapper.name());\n+ String[] nameParts = splitAndValidatePath(newMapper.name(), null);\n \n // We first need the stack to only contain mappers in common with the previously processed mapper\n // For example, if the first mapper processed was a.b.c, and we now have a.d, the stack will contain\n@@ -472,7 +507,7 @@ private static void parseObjectOrField(ParseContext context, Mapper mapper) thro\n private static void parseObject(final ParseContext context, ObjectMapper mapper, String currentFieldName) throws IOException {\n assert currentFieldName != null;\n \n- final String[] paths = splitAndValidatePath(currentFieldName);\n+ final String[] paths = splitAndValidatePath(currentFieldName, context);\n Mapper objectMapper = getMapper(mapper, currentFieldName, paths);\n if (objectMapper != null) {\n context.path().add(currentFieldName);\n@@ -509,7 +544,7 @@ private static void parseObject(final ParseContext context, ObjectMapper mapper,\n private static void parseArray(ParseContext context, ObjectMapper parentMapper, String lastFieldName) throws IOException {\n String arrayFieldName = lastFieldName;\n \n- final String[] paths = splitAndValidatePath(arrayFieldName);\n+ final String[] paths = splitAndValidatePath(arrayFieldName, context);\n Mapper mapper = getMapper(parentMapper, lastFieldName, paths);\n if (mapper != null) {\n // There is a concrete mapper for this field already. 
Need to check if the mapper\n@@ -580,7 +615,7 @@ private static void parseValue(final ParseContext context, ObjectMapper parentMa\n throw new MapperParsingException(\"object mapping [\" + parentMapper.name() + \"] trying to serialize a value with no field associated with it, current value [\" + context.parser().textOrNull() + \"]\");\n }\n \n- final String[] paths = splitAndValidatePath(currentFieldName);\n+ final String[] paths = splitAndValidatePath(currentFieldName, context);\n Mapper mapper = getMapper(parentMapper, currentFieldName, paths);\n if (mapper != null) {\n parseObjectOrField(context, mapper);\n@@ -597,7 +632,7 @@ private static void parseValue(final ParseContext context, ObjectMapper parentMa\n \n private static void parseNullValue(ParseContext context, ObjectMapper parentMapper, String lastFieldName) throws IOException {\n // we can only handle null values if we have mappings for them\n- Mapper mapper = getMapper(parentMapper, lastFieldName, splitAndValidatePath(lastFieldName));\n+ Mapper mapper = getMapper(parentMapper, lastFieldName, splitAndValidatePath(lastFieldName, context));\n if (mapper != null) {\n // TODO: passing null to an object seems bogus?\n parseObjectOrField(context, mapper);\n@@ -820,21 +855,21 @@ private static void parseCopyFields(ParseContext context, List<String> copyToFie\n } else {\n copyToContext = context.switchDoc(targetDoc);\n }\n- parseCopy(field, copyToContext);\n+ parseCopy(field, \"\", copyToContext);\n }\n }\n }\n \n /** Creates an copy of the current field with given field name and boost */\n- private static void parseCopy(String field, ParseContext context) throws IOException {\n+ private static void parseCopy(String field, String parentPath, ParseContext context) throws IOException {\n FieldMapper fieldMapper = context.docMapper().mappers().getMapper(field);\n if (fieldMapper != null) {\n fieldMapper.parse(context);\n } else {\n // The path of the dest field might be completely different from the current one so we need to reset it\n context = context.overridePath(new ContentPath(0));\n \n- final String[] paths = splitAndValidatePath(field);\n+ final String[] paths = splitAndValidatePath(field, context);\n final String fieldName = paths[paths.length-1];\n Tuple<Integer, ObjectMapper> parentMapperTuple = getDynamicParentMapper(context, paths, null);\n ObjectMapper mapper = parentMapperTuple.v2();", "filename": "server/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -38,10 +38,7 @@\n \n import java.io.IOException;\n import java.nio.charset.StandardCharsets;\n-import java.util.ArrayList;\n-import java.util.Collection;\n-import java.util.Collections;\n-import java.util.List;\n+import java.util.*;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.StreamsUtils.copyToBytesFromClasspath;\n@@ -1319,74 +1316,145 @@ public void testDynamicDateDetectionEnabledWithNoSpecialCharacters() throws IOEx\n assertThat(dateMapper, instanceOf(DateFieldMapper.class));\n }\n \n- public void testDynamicFieldsStartingAndEndingWithDot() throws Exception {\n- BytesReference bytes = BytesReference.bytes(XContentFactory.jsonBuilder().startObject().startArray(\"top.\")\n- .startObject().startArray(\"foo.\")\n- .startObject()\n- .field(\"thing\", \"bah\")\n- .endObject().endArray()\n- .endObject().endArray()\n- .endObject());\n+ public void testFieldName() throws Exception {\n+ checkFieldNameAndValidation(\"root\", \"branch\", \"leaf\");\n+ }\n 
\n- client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get();\n+ public void testFieldNameBlank1() throws Exception {\n+ checkFieldNameAndValidationFails(\"\", \"branch\", \"leaf\", DocumentParser.pathContainsEmptyString(\"\"));\n+ }\n \n- bytes = BytesReference.bytes(XContentFactory.jsonBuilder().startObject().startArray(\"top.\")\n- .startObject().startArray(\"foo.\")\n- .startObject()\n- .startObject(\"bar.\")\n- .startObject(\"aoeu\")\n- .field(\"a\", 1).field(\"b\", 2)\n- .endObject()\n- .endObject()\n- .endObject()\n- .endArray().endObject().endArray()\n- .endObject());\n+ public void testFieldNameBlank2() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"\", \"leaf\", DocumentParser.pathContainsEmptyString(\"root.\"));\n+ }\n \n- try {\n- client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get();\n- fail(\"should have failed to dynamically introduce a double-dot field\");\n- } catch (IllegalArgumentException e) {\n- assertThat(e.getMessage(),\n- containsString(\"object field starting or ending with a [.] makes object resolution ambiguous: [top..foo..bar]\"));\n- }\n+ public void testFieldNameBlank3() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch\", \"\", DocumentParser.pathContainsEmptyString(\"root.branch.\"));\n }\n \n- public void testDynamicFieldsEmptyName() throws Exception {\n- BytesReference bytes = BytesReference.bytes(XContentFactory.jsonBuilder()\n- .startObject().startArray(\"top.\")\n- .startObject()\n- .startObject(\"aoeu\")\n- .field(\"a\", 1).field(\" \", 2)\n- .endObject()\n- .endObject().endArray()\n- .endObject());\n+ public void testFieldNameWhitespaceOnly1() throws Exception {\n+ checkFieldNameAndValidationFails(\" \", \"branch\", \"leaf\", DocumentParser.pathContainsOnlyWhitespace(\" \"));\n+ }\n \n- IllegalArgumentException emptyFieldNameException = expectThrows(IllegalArgumentException.class,\n- () -> client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get());\n+ public void testFieldNameWhitespaceOnly2() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \" \", \"leaf\", DocumentParser.pathContainsOnlyWhitespace(\"root. \"));\n+ }\n \n- assertThat(emptyFieldNameException.getMessage(), containsString(\n- \"object field cannot contain only whitespace: ['top.aoeu. ']\"));\n+ public void testFieldNameWhitespaceOnly3() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch\", \" \", DocumentParser.pathContainsOnlyWhitespace(\"root.branch. 
\"));\n }\n \n- public void testBlankFieldNames() throws Exception {\n- final BytesReference bytes = BytesReference.bytes(XContentFactory.jsonBuilder()\n- .startObject()\n- .field(\"\", \"foo\")\n- .endObject());\n+ public void testFieldNameWhitespaceWithin1() throws Exception {\n+ checkFieldNameAndValidation(\"ro ot\", \"branch\", \"leaf\");\n+ }\n \n- MapperParsingException err = expectThrows(MapperParsingException.class, () ->\n- client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get());\n- assertThat(ExceptionsHelper.detailedMessage(err), containsString(\"field name cannot be an empty string\"));\n+ public void testFieldNameWhitespaceWithin2() throws Exception {\n+ checkFieldNameAndValidation(\"root\", \"bra nch\", \"leaf\");\n+ }\n \n- final BytesReference bytes2 = BytesReference.bytes(XContentFactory.jsonBuilder()\n- .startObject()\n- .startObject(\"foo\")\n- .field(\"\", \"bar\")\n- .endObject()\n- .endObject());\n+ public void testFieldNameWhitespaceWithin3() throws Exception {\n+ checkFieldNameAndValidation(\"root\", \"branch\", \"le af\");\n+ }\n+\n+ public void testFieldNameComponentOnlyDot1() throws Exception {\n+ checkFieldNameAndValidationFails(\".\", \"branch\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\".\"));\n+ }\n+\n+ public void testFieldNameComponentOnlyDot2() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \".\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root..\"));\n+ }\n+\n+ public void testFieldNameComponentOnlyDot3() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch\", \".\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root.branch..\"));\n+ }\n+\n+ public void testFieldNameComponentStartDot1() throws Exception {\n+ checkFieldNameAndValidationFails(\".root\", \"branch\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\".root\"));\n+ }\n \n- err = expectThrows(MapperParsingException.class, () ->\n- client().prepareIndex(\"idx\", \"type\").setSource(bytes2, XContentType.JSON).get());\n- assertThat(ExceptionsHelper.detailedMessage(err), containsString(\"field name cannot be an empty string\"));\n+ public void testFieldNameComponentStartDot2() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \".branch\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root..branch\"));\n+ }\n+\n+ public void testFieldNameComponentStartDot3() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch\", \".leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root.branch..leaf\"));\n+ }\n+\n+ public void testFieldNameComponentEndDot1() throws Exception {\n+ checkFieldNameAndValidationFails(\"root.\", \"branch\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root.\"));\n+ }\n+\n+ public void testFieldNameComponentEndDot2() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch.\", \"leaf\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root.branch.\"));\n+ }\n+\n+ public void testFieldNameComponentEndDot3() throws Exception {\n+ checkFieldNameAndValidationFails(\"root\", \"branch\", \"leaf.\", DocumentParser.pathStartOrEndingWithDotAmbiguous(\"root.branch.leaf.\"));\n+ }\n+\n+ private void checkFieldNameAndValidation(final String root,\n+ final String branch,\n+ final String leaf) throws Exception {\n+ checkFieldNameAndValidation0(bytesReferenceArray(root, branch, leaf));\n+ checkFieldNameAndValidation0(bytesReferenceNull(root, branch, leaf));\n+ 
checkFieldNameAndValidation0(bytesReferenceObject(root, branch, leaf));\n+ }\n+\n+ private void checkFieldNameAndValidationFails(final String root,\n+ final String branch,\n+ final String leaf,\n+ final String exceptionMessageContains) throws Exception {\n+ checkFieldNameAndValidationFails0(bytesReferenceArray(root, branch, leaf), exceptionMessageContains);\n+ checkFieldNameAndValidationFails0(bytesReferenceNull(root, branch, leaf), exceptionMessageContains);\n+ checkFieldNameAndValidationFails0(bytesReferenceObject(root, branch, leaf), exceptionMessageContains);\n+ }\n+\n+ private void checkFieldNameAndValidationFails0(final BytesReference bytes,\n+ final String exceptionMessageContains) {\n+ MapperParsingException thrown = expectThrows(MapperParsingException.class, () -> checkFieldNameAndValidation0(bytes));\n+ assertThat(bytes.utf8ToString(), ExceptionsHelper.detailedMessage(thrown), containsString(exceptionMessageContains));\n+ }\n+\n+ private BytesReference bytesReferenceArray(final String root,\n+ final String branch,\n+ final String leaf) throws Exception {\n+ return BytesReference.bytes(XContentFactory.jsonBuilder()\n+ .startObject().startArray(root)\n+ .startObject().startArray(branch)\n+ .startObject()\n+ .field(leaf, \"*value*\")\n+ .endObject().endArray()\n+ .endObject().endArray()\n+ .endObject()) ;\n+ }\n+\n+ private BytesReference bytesReferenceNull(final String root,\n+ final String branch,\n+ final String leaf) throws Exception {\n+ return BytesReference.bytes(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(root)\n+ .startObject(branch)\n+ .field(leaf, (String)null)\n+ .endObject()\n+ .endObject()\n+ .endObject());\n+ }\n+\n+ private BytesReference bytesReferenceObject(final String root,\n+ final String branch,\n+ final String leaf) throws Exception {\n+ return BytesReference.bytes(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(root)\n+ .startObject(branch)\n+ .field(leaf, \"*value*\")\n+ .endObject()\n+ .endObject()\n+ .endObject());\n+ }\n+\n+ private void checkFieldNameAndValidation0(final BytesReference bytes) throws Exception {\n+ client().prepareIndex(\"idx\", \"type\").setSource(bytes, XContentType.JSON).get();\n }\n }", "filename": "server/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java", "status": "modified" } ] }
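Condensed, the production change in this PR amounts to validating the raw path before splitting it, because `String.split` silently drops the trailing empty component the old code relied on detecting. Below is a simplified, standalone sketch of that ordering — it is not the actual Elasticsearch method, which additionally threads the `ParseContext` through so error messages can report the full path:

```java
/** Simplified standalone sketch of the validation order introduced in the diff above. */
static String[] splitAndValidate(String path) {
    if (path.isEmpty()) {
        throw new IllegalArgumentException("field name cannot be an empty string ['" + path + "']");
    }
    if (path.contains("..")) {
        throw new IllegalArgumentException("object field cannot contain empty component: ['" + path + "']");
    }
    // Must be checked before split(): String.split drops trailing empty strings,
    // so "foo." would otherwise look like a single, valid component.
    if (path.endsWith(".")) {
        throw new IllegalArgumentException(
            "object field starting or ending with a [.] makes object resolution ambiguous: [" + path + "]");
    }
    String[] parts = path.split("\\.");
    for (String part : parts) {
        if (part.trim().isEmpty()) {
            throw new IllegalArgumentException(
                part.isEmpty()
                    ? "object field starting or ending with a [.] makes object resolution ambiguous: [" + path + "]"
                    : "object field cannot contain only whitespace: ['" + path + "']");
        }
    }
    return parts;
}
```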
{ "body": "Today we close the translog write tragically if we experience any I/O exception on a write. These tragic closes lead to use closing the translog and failing the engine. Yet, there is one case that is missed which is when we touch the write channel during a read (checking if reading from the writer would put us past what has been flushed). This commit addresses this by closing the writer tragically if we encounter an I/O exception on the write channel while reading. This becomes interesting when we consider that this method is invoked from the engine through the translog as part of getting a document from the translog. This means we have to consider closing the translog here as well which will cascade up into us finally failing the engine.\r\n\r\nNote that there is no semantic change to, for example, primary/replica resync and recovery. These actions will take a snapshot of the translog which syncs the translog to disk. If an I/O exception occurs during the sync we already close the writer tragically and once we have synced we do not ever read past the position that was synced while taking the snapshot.\r\n\r\nCloses #29390", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-04-05T17:53:44Z" }, { "body": "This addresses the test failure in #29390 exactly because the test is asserting that the translog is closed after a tragic event occurred on the writer but because of the missed handling of an I/O exception on the write channel in the read method, the translog will not be closed by the random readOperation that was added to TranslogThread#run.", "created_at": "2018-04-05T17:55:03Z" }, { "body": "@ywelsch I pushed. I want to refactor the exception handling for `TranslogWriter#closeWithTragicEvent` and `Translog#closeOnTragicEvent` immediately after this PR. Can you take another look?", "created_at": "2018-04-06T12:50:00Z" } ], "number": 29401, "title": "Close translog writer if exception on write channel" }
{ "body": "Today when reading an operation from the current generation fails tragically we attempt to close the translog. However, by invoking close before releasing the read lock we end up in self-deadlock because closing tries to acquire the write lock and the read lock can not be upgraded to a write lock. To avoid this, we move the close invocation outside of the try-with-resources that acquired the read lock. As an extra guard against this, we document the problem and add an assertion that we are not trying to invoke close while holding the read lock.\r\n\r\nCloses #29509, relates #29401\r\n", "number": 29520, "review_comments": [ { "body": "💯 ", "created_at": "2018-04-15T17:19:31Z" }, { "body": "As we never expect a null value to be passed as parameter to this internal method, I'm not a fan of sprinkling this check here. This is defensive programming at its worst.", "created_at": "2018-04-15T18:37:17Z" } ], "title": "Avoid self-deadlock in the translog" }
{ "commits": [ { "message": "Avoid self-deadlock in the translog\n\nToday when reading an operation from the current generation fails\ntragically we attempt to close the translog. However, by invoking close\nbefore releasing the read lock we end up in self-deadlock because\nclosing tries to acquire the write lock and the read lock can not be\nupgraded to a write lock. To avoid this, we move the close invocation\noutside of the try-with-resources that acquired the read lock. As an\nextra guard against this, we document the problem and add an assertion\nthat we are not trying to invoke close while holding the read lock." }, { "message": "Upgrade assertion to hard exception, we will throw anyway" }, { "message": "Remove null check" } ], "files": [ { "diff": "@@ -583,12 +583,7 @@ public Operation readOperation(Location location) throws IOException {\n if (current.generation == location.generation) {\n // no need to fsync here the read operation will ensure that buffers are written to disk\n // if they are still in RAM and we are reading onto that position\n- try {\n- return current.read(location);\n- } catch (final Exception ex) {\n- closeOnTragicEvent(ex);\n- throw ex;\n- }\n+ return current.read(location);\n } else {\n // read backwards - it's likely we need to read on that is recent\n for (int i = readers.size() - 1; i >= 0; i--) {\n@@ -598,6 +593,9 @@ public Operation readOperation(Location location) throws IOException {\n }\n }\n }\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n+ throw ex;\n }\n return null;\n }\n@@ -735,15 +733,28 @@ public boolean ensureSynced(Stream<Location> locations) throws IOException {\n }\n }\n \n+ /**\n+ * Closes the translog if the current translog writer experienced a tragic exception.\n+ *\n+ * Note that in case this thread closes the translog it must not already be holding a read lock on the translog as it will acquire a\n+ * write lock in the course of closing the translog\n+ *\n+ * @param ex if an exception occurs closing the translog, it will be suppressed into the provided exception\n+ */\n private void closeOnTragicEvent(final Exception ex) {\n+ // we can not hold a read lock here because closing will attempt to obtain a write lock and that would result in self-deadlock\n+ assert readLock.isHeldByCurrentThread() == false : Thread.currentThread().getName();\n if (current.getTragicException() != null) {\n try {\n close();\n } catch (final AlreadyClosedException inner) {\n- // don't do anything in this case. The AlreadyClosedException comes from TranslogWriter and we should not add it as suppressed because\n- // will contain the Exception ex as cause. See also https://github.com/elastic/elasticsearch/issues/15941\n+ /*\n+ * Don't do anything in this case. The AlreadyClosedException comes from TranslogWriter and we should not add it as\n+ * suppressed because it will contain the provided exception as its cause. 
See also\n+ * https://github.com/elastic/elasticsearch/issues/15941.\n+ */\n } catch (final Exception inner) {\n- assert (ex != inner.getCause());\n+ assert ex != inner.getCause();\n ex.addSuppressed(inner);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/index/translog/Translog.java", "status": "modified" }, { "diff": "@@ -1812,7 +1812,6 @@ public void testTragicEventCanBeAnyException() throws IOException {\n assertTrue(translog.getTragicException() instanceof UnknownException);\n }\n \n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/29509\")\n public void testFatalIOExceptionsWhileWritingConcurrently() throws IOException, InterruptedException {\n Path tempDir = createTempDir();\n final FailSwitch fail = new FailSwitch();", "filename": "server/src/test/java/org/elasticsearch/index/translog/TranslogTests.java", "status": "modified" } ] }
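The assertion added in the diff above (`readLock.isHeldByCurrentThread() == false`) guards against reintroducing the deadlock. Sketched with the plain JDK lock API, where the closest equivalent is `getReadHoldCount()`; all other names in this snippet are assumptions, not the Translog API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the assertion guard: never enter the close path while the current
// thread still holds the read lock, because closing acquires the write lock.
final class GuardedCloser {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void closeOnTragicEvent() {
        // getReadHoldCount() counts the current thread's reentrant read holds
        assert lock.getReadHoldCount() == 0 : Thread.currentThread().getName();
        lock.writeLock().lock();
        try {
            // ... release resources ...
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```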
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\n\r\n```\r\n$ elasticsearch --version\r\nVersion: 6.2.3, Build: c59ff00/2018-03-13T10:06:29.741383Z, JVM: 1.8.0_161\r\n```\r\n\r\n**Plugins installed**: \r\n\r\n```\r\n[]\r\n```\r\n\r\n**JVM version** (`java -version`):\r\n\r\n```\r\n$ java -version\r\njava version \"9.0.1\"\r\nJava(TM) SE Runtime Environment (build 9.0.1+11)\r\nJava HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\n\r\n```\r\n$ uname -a\r\nDarwin XXX 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64\r\n```\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nUsing a rather large size value: `1_000_000_000` will lead to a Java OutOfMemory error and crash the ElasticSearch node (on a local machine)\r\n\r\nWhile this number is outrageously large, if you use a multiple number of sources it is easy to lower this and thus hit the out of memory error earlier, even in indexes with very few documents and fields.\r\n\r\nExpectation is that the aggregation would be more intelligent about handling large sizes (either by failing the request or limiting them to the matrix possibilities of sources and documents).\r\n\r\n**Steps to reproduce**:\r\n\r\nexample.json\r\n```JSON\r\n{ \"index\" : { \"_id\" : \"1\" } }\r\n{\"my_field\": \"example a\"}\r\n{ \"index\" : { \"_id\" : \"2\" } }\r\n{\"my_field\": \"example b\"}\r\n```\r\n\r\n1. Bulk insert our data into ElasticSearch, amount unimportant.\r\n\r\n```bash\r\ncurl -s -XPOST 'localhost:9200/my_index/my_type/_bulk?pretty' -H \"Content-Type: application/x-ndjson\" --data-binary @example.json\r\n```\r\n\r\n2. Request a composite aggregation (do not set size), response is as expected.\r\n\r\n```\r\ncurl -s -XPOST 'localhost:9200/my_index/my_type/_search?pretty' -H \"Content-Type: application/json\" -id'\r\n{\r\n \"size\": 0,\r\n \"track_total_hits\": false,\r\n \"aggregations\": {\r\n \"attributes\": {\r\n \"composite\": {\r\n \"sources\": [\r\n {\r\n \"my_field\": {\r\n \"terms\": {\r\n \"field\": \"my_field.keyword\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n'\r\n```\r\n\r\n3. Set an unreasonably high size for the composite bucket, (again only two documents).\r\n\r\n```\r\ncurl -s -XPOST 'localhost:9200/my_index/my_type/_search?pretty' -H \"Content-Type: application/json\" -id'\r\n{\r\n \"size\": 0,\r\n \"track_total_hits\": false,\r\n \"aggregations\": {\r\n \"attributes\": {\r\n \"composite\": {\r\n \"size\": 1000000000,\r\n \"sources\": [\r\n {\r\n \"my_field\": {\r\n \"terms\": {\r\n \"field\": \"my_field.keyword\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n'\r\n```\r\n\r\n4. 
ElasticSearch node will crash if it runs out of heap memory space and throw the log seen below.\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n```\r\n[2018-04-04T21:19:49,478][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[Q3vBGVN][search][T#5]], exiting\r\njava.lang.OutOfMemoryError: Java heap space\r\n\tat org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSource$GlobalOrdinalValuesSource.<init>(CompositeValuesSource.java:137) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSource.wrapGlobalOrdinals(CompositeValuesSource.java:123) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesComparator.<init>(CompositeValuesComparator.java:50) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregator.<init>(CompositeAggregator.java:69) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationFactory.createInternal(CompositeAggregationFactory.java:52) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:216) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:216) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:55) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:105) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesService.lambda$loadIntoContext$14(IndicesService.java:1133) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesService$$Lambda$1869/1694278624.accept(Unknown Source) ~[?:?]\r\n\tat org.elasticsearch.indices.IndicesService.lambda$cacheShardLevelResult$15(IndicesService.java:1186) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesService$$Lambda$1870/957087815.get(Unknown Source) ~[?:?]\r\n\tat org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:160) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:143) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:412) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:116) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1192) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.indices.IndicesService.loadIntoContext(IndicesService.java:1132) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:305) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:340) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:316) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:312) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.search.SearchService$3.doRun(SearchService.java:1002) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat 
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.2.3.jar:6.2.3]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_161]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]\r\n```", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-05T16:49:21Z" }, { "body": "@jimczi could you look into this one?", "created_at": "2018-04-05T17:00:45Z" } ], "number": 29380, "title": "Composite Aggregation runs out of heap space when composite size is too large" }
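The root cause reported here is that each composite source eagerly allocated arrays sized to the requested `size`. A minimal sketch of the alternative taken by the follow-up change, with assumed names and plain Java arrays standing in for `BigArrays`: start small and grow only as slots are actually used, so memory scales with the number of buckets produced rather than the requested size.

```java
import java.util.Arrays;

// Illustration only (assumed names, not the CompositeAggregator code) of lazy slot allocation.
final class LazySlots {

    private long[] values;

    LazySlots(int requestedSize) {
        // eager version would be: values = new long[requestedSize];  // OOM for size = 1_000_000_000
        values = new long[Math.min(requestedSize, 100)];
    }

    void set(int slot, long value) {
        if (slot >= values.length) {
            // grow geometrically, bounded by what is actually used
            values = Arrays.copyOf(values, Math.max(slot + 1, values.length + (values.length >> 1)));
        }
        values[slot] = value;
    }

    long get(int slot) {
        return values[slot];
    }
}
```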
{ "body": "This change adds a new option to the composite aggregation named `missing_bucket`.\r\nThis option can be set by source and dictates whether documents without a value for the\r\nsource should be ignored. When set to true, documents without a value for a field emits\r\nan explicit `null` value which is then added in the composite bucket.\r\nThe `missing` option that allows to set an explicit value (instead of `null`) is deprecated in this change\r\nand will be removed in a follow up (only in 7.x).\r\nThis commit also changes how the big arrays are allocated, instead of reserving\r\nthe provided `size` for all sources they are created with a small intial size and they grow\r\ndepending on the number of buckets created by the aggregation:\r\nCloses #29380", "number": 29465, "review_comments": [ { "body": "The option is `missing_bucket` below, is this a typo?", "created_at": "2018-04-11T09:27:14Z" }, { "body": "I think somewhere in the docs we need to say that the missing option is deprecated and will be removed in favour of this", "created_at": "2018-04-11T09:27:37Z" }, { "body": "Yes, I plan to add the deprecation in the docs during the backport to 6x since the deprecation is not for master. After the backport to 6x I'll remove the missing option and add a note in the breaking change.", "created_at": "2018-04-11T09:51:36Z" }, { "body": "Yes this is a typo, thanks", "created_at": "2018-04-11T09:51:55Z" }, { "body": "I noticed that `LongValuesSource` creates the BitArray with a size of min(size, 100), whereas this one is just set to 1. Is there a reason for that?", "created_at": "2018-05-15T20:38:49Z" }, { "body": "It looks like `breakerConsumer` isn't used here, but is used in `DoubleValuesSource`. Was that intentional?", "created_at": "2018-05-15T20:39:04Z" }, { "body": "It looks like `breakerConsumer` isn't used here, but is used in `DoubleValuesSource`. Was that intentional?", "created_at": "2018-05-15T20:39:09Z" }, { "body": "Do `copyCurrent()` and `compare()` need to be public? The other value sources seem to have them as package-private", "created_at": "2018-05-15T20:39:17Z" }, { "body": "I wonder if RoaringDocIdSet could be reused here instead of a custom bit array class? I'm thinking it'd provide better compression in the case when missing keys are sparse, and similar compression when missing keys are dense?\r\n\r\nAlthough it seems to require that IDs are added in monotonically increasing order, and I'm not sure if composite follows that pattern.", "created_at": "2018-05-15T20:39:32Z" }, { "body": "In fact this is the only place where the breaker consumer is needed (I removed it from the other values source). It is used to take the `BytesRef` in the `ObjectArray` into account in the circuit breaker. ", "created_at": "2018-05-22T21:29:09Z" }, { "body": "Nope that's a typo, thanks. I changed it back to `Math.min(size, 100)`", "created_at": "2018-05-22T21:33:28Z" }, { "body": "It's a leftover, it is not used anymore so I removed it.", "created_at": "2018-05-22T21:35:19Z" }, { "body": "Same here", "created_at": "2018-05-22T21:35:30Z" }, { "body": "It is used to mark slots with missing values and not doc ids so the size remains small (capped by the requested size of the `composite` agg) and values are mutable (we reuse slots if a new competitive composite bucket is found and the queue is full) so we need a fixed bit set. 
\r\nIt also uses `BigArrays` to create the underlying `LongArray` so the memory it uses is accounted in the circuit breaker.", "created_at": "2018-05-22T21:44:44Z" }, { "body": ":+1: ", "created_at": "2018-05-24T15:08:55Z" }, { "body": "This seems to have 1-2 extra `=` signs, and isn't rendered properly/well:\r\nhttps://www.elastic.co/guide/en/elasticsearch/reference/6.4/search-aggregations-bucket-composite-aggregation.html", "created_at": "2018-08-24T00:25:59Z" } ], "title": "Add missing_bucket option in the composite agg" }
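The core of the `missing_bucket` support is a null-aware comparison in each values source: a missing value compares as smallest, so the null bucket ranks first in ascending order and last in descending order, matching the docs added in the diff below. A reduced sketch with assumed names:

```java
// Sketch (assumed names) of the null-aware comparison used by the missing_bucket sources.
final class NullFirstCompare {

    static int compare(Long from, Long to, int reverseMul) {
        if (from == null) {
            return to == null ? 0 : -1 * reverseMul;   // missing sorts as smallest
        } else if (to == null) {
            return reverseMul;
        }
        return Long.compare(from, to) * reverseMul;
    }

    public static void main(String[] args) {
        System.out.println(compare(null, 5L, 1));    // -1 -> null before 5 when ascending
        System.out.println(compare(null, 5L, -1));   //  1 -> null after 5 when descending
        System.out.println(compare(3L, 5L, 1));      // negative -> 3 before 5 when ascending
    }
}
```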
{ "commits": [ { "message": "Add missing_bucket option in the composite agg\n\nThis change adds a new option to the composite aggregation named `missing_bucket`.\nThis option can be set by source and dictates whether documents without a value for the\nsource should be ignored. When set to true, documents without a value for a field emits\nan explicit `null` value which is then added in the composite bucket.\nThe `missing` option that allows to set an explicit value (instead of `null`) is deprecated in this change\nand will be removed in a follow up (only in 7.x).\nThis commit also changes how the big arrays are allocated, instead of reserving\nthe provided `size` for all sources they are created with a small intial size and they grow\ndepending on the number of buckets created by the aggregation:\nCloses #29380" }, { "message": "fix typo in docs" }, { "message": "fix javadoc" }, { "message": "Merge branch 'master' into composite_missing_bucket" }, { "message": "address review" }, { "message": "Cap the double values source bit array to 100" }, { "message": "Merge branch 'master' into composite_missing_bucket" }, { "message": "Merge branch 'master' into composite_missing_bucket" } ], "files": [ { "diff": "@@ -348,6 +348,34 @@ GET /_search\n \\... will sort the composite bucket in descending order when comparing values from the `date_histogram` source\n and in ascending order when comparing values from the `terms` source.\n \n+====== Missing bucket\n+\n+By default documents without a value for a given source are ignored.\n+It is possible to include them in the response by setting `missing_bucket` to\n+`true` (defaults to `false`):\n+\n+[source,js]\n+--------------------------------------------------\n+GET /_search\n+{\n+ \"aggs\" : {\n+ \"my_buckets\": {\n+ \"composite\" : {\n+ \"sources\" : [\n+ { \"product_name\": { \"terms\" : { \"field\": \"product\", \"missing_bucket\": true } } }\n+ ]\n+ }\n+ }\n+ }\n+}\n+--------------------------------------------------\n+// CONSOLE\n+\n+In the example above the source `product_name` will emit an explicit `null` value\n+for documents without a value for the field `product`.\n+The `order` specified in the source dictates whether the `null` values should rank\n+first (ascending order, `asc`) or last (descending order, `desc`).\n+\n ==== Size\n \n The `size` parameter can be set to define how many composite buckets should be returned.", "filename": "docs/reference/aggregations/bucket/composite-aggregation.asciidoc", "status": "modified" }, { "diff": "@@ -323,3 +323,32 @@ setup:\n - length: { aggregations.test.buckets: 2 }\n - length: { aggregations.test.after_key: 1 }\n - match: { aggregations.test.after_key.keyword: \"foo\" }\n+\n+---\n+\"Composite aggregation and array size\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: starting in 7.0 the composite sources do not allocate arrays eagerly.\n+\n+ - do:\n+ search:\n+ index: test\n+ body:\n+ aggregations:\n+ test:\n+ composite:\n+ size: 1000000000\n+ sources: [\n+ {\n+ \"keyword\": {\n+ \"terms\": {\n+ \"field\": \"keyword\",\n+ }\n+ }\n+ }\n+ ]\n+\n+ - match: {hits.total: 6}\n+ - length: { aggregations.test.buckets: 2 }\n+ - length: { aggregations.test.after_key: 1 }\n+ - match: { aggregations.test.after_key.keyword: \"foo\" }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml", "status": "modified" }, { "diff": "@@ -24,49 +24,93 @@\n import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import 
org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.BytesRefBuilder;\n import org.elasticsearch.common.CheckedFunction;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.ObjectArray;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n-import org.elasticsearch.index.mapper.KeywordFieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.StringFieldType;\n-import org.elasticsearch.index.mapper.TextFieldMapper;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n \n import java.io.IOException;\n+import java.util.function.LongConsumer;\n \n /**\n * A {@link SingleDimensionValuesSource} for binary source ({@link BytesRef}).\n */\n class BinaryValuesSource extends SingleDimensionValuesSource<BytesRef> {\n+ private final LongConsumer breakerConsumer;\n private final CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc;\n- private final BytesRef[] values;\n+ private ObjectArray<BytesRef> values;\n+ private ObjectArray<BytesRefBuilder> valueBuilders;\n private BytesRef currentValue;\n \n- BinaryValuesSource(MappedFieldType fieldType, CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc,\n- DocValueFormat format, Object missing, int size, int reverseMul) {\n- super(format, fieldType, missing, size, reverseMul);\n+ BinaryValuesSource(BigArrays bigArrays, LongConsumer breakerConsumer,\n+ MappedFieldType fieldType, CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc,\n+ DocValueFormat format, boolean missingBucket, Object missing, int size, int reverseMul) {\n+ super(bigArrays, format, fieldType, missingBucket, missing, size, reverseMul);\n+ this.breakerConsumer = breakerConsumer;\n this.docValuesFunc = docValuesFunc;\n- this.values = new BytesRef[size];\n+ this.values = bigArrays.newObjectArray(Math.min(size, 100));\n+ this.valueBuilders = bigArrays.newObjectArray(Math.min(size, 100));\n }\n \n @Override\n- public void copyCurrent(int slot) {\n- values[slot] = BytesRef.deepCopyOf(currentValue);\n+ void copyCurrent(int slot) {\n+ values = bigArrays.grow(values, slot+1);\n+ valueBuilders = bigArrays.grow(valueBuilders, slot+1);\n+ BytesRefBuilder builder = valueBuilders.get(slot);\n+ int byteSize = builder == null ? 0 : builder.bytes().length;\n+ if (builder == null) {\n+ builder = new BytesRefBuilder();\n+ valueBuilders.set(slot, builder);\n+ }\n+ if (missingBucket && currentValue == null) {\n+ values.set(slot, null);\n+ } else {\n+ assert currentValue != null;\n+ builder.copyBytes(currentValue);\n+ breakerConsumer.accept(builder.bytes().length - byteSize);\n+ values.set(slot, builder.get());\n+ }\n }\n \n @Override\n- public int compare(int from, int to) {\n- return compareValues(values[from], values[to]);\n+ int compare(int from, int to) {\n+ if (missingBucket) {\n+ if (values.get(from) == null) {\n+ return values.get(to) == null ? 0 : -1 * reverseMul;\n+ } else if (values.get(to) == null) {\n+ return reverseMul;\n+ }\n+ }\n+ return compareValues(values.get(from), values.get(to));\n }\n \n @Override\n int compareCurrent(int slot) {\n- return compareValues(currentValue, values[slot]);\n+ if (missingBucket) {\n+ if (currentValue == null) {\n+ return values.get(slot) == null ? 
0 : -1 * reverseMul;\n+ } else if (values.get(slot) == null) {\n+ return reverseMul;\n+ }\n+ }\n+ return compareValues(currentValue, values.get(slot));\n }\n \n @Override\n int compareCurrentWithAfter() {\n+ if (missingBucket) {\n+ if (currentValue == null) {\n+ return afterValue == null ? 0 : -1 * reverseMul;\n+ } else if (afterValue == null) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(currentValue, afterValue);\n }\n \n@@ -76,7 +120,9 @@ int compareValues(BytesRef v1, BytesRef v2) {\n \n @Override\n void setAfter(Comparable<?> value) {\n- if (value.getClass() == String.class) {\n+ if (missingBucket && value == null) {\n+ afterValue = null;\n+ } else if (value.getClass() == String.class) {\n afterValue = format.parseBytesRef(value.toString());\n } else {\n throw new IllegalArgumentException(\"invalid value, expected string, got \" + value.getClass().getSimpleName());\n@@ -85,7 +131,7 @@ void setAfter(Comparable<?> value) {\n \n @Override\n BytesRef toComparable(int slot) {\n- return values[slot];\n+ return values.get(slot);\n }\n \n @Override\n@@ -100,6 +146,9 @@ public void collect(int doc, long bucket) throws IOException {\n currentValue = dvs.nextValue();\n next.collect(doc, bucket);\n }\n+ } else if (missingBucket) {\n+ currentValue = null;\n+ next.collect(doc, bucket);\n }\n }\n };\n@@ -130,5 +179,7 @@ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query quer\n }\n \n @Override\n- public void close() {}\n+ public void close() {\n+ Releasables.close(values, valueBuilders);\n+ }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/BinaryValuesSource.java", "status": "modified" }, { "diff": "@@ -0,0 +1,68 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.LongArray;\n+\n+/**\n+ * A bit array that is implemented using a growing {@link LongArray}\n+ * created from {@link BigArrays}.\n+ * The underlying long array grows lazily based on the biggest index\n+ * that needs to be set.\n+ */\n+final class BitArray implements Releasable {\n+ private final BigArrays bigArrays;\n+ private LongArray bits;\n+\n+ BitArray(BigArrays bigArrays, int initialSize) {\n+ this.bigArrays = bigArrays;\n+ this.bits = bigArrays.newLongArray(initialSize, true);\n+ }\n+\n+ public void set(int index) {\n+ fill(index, true);\n+ }\n+\n+ public void clear(int index) {\n+ fill(index, false);\n+ }\n+\n+ public boolean get(int index) {\n+ int wordNum = index >> 6;\n+ long bitmask = 1L << index;\n+ return (bits.get(wordNum) & bitmask) != 0;\n+ }\n+\n+ private void fill(int index, boolean bit) {\n+ int wordNum = index >> 6;\n+ bits = bigArrays.grow(bits,wordNum+1);\n+ long bitmask = 1L << index;\n+ long value = bit ? bits.get(wordNum) | bitmask : bits.get(wordNum) & ~bitmask;\n+ bits.set(wordNum, value);\n+ }\n+\n+ @Override\n+ public void close() {\n+ Releasables.close(bits);\n+ }\n+}", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/BitArray.java", "status": "added" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.aggregations.bucket.composite;\n \n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.bucket.MultiBucketsAggregation;\n \n@@ -66,11 +65,7 @@ static XContentBuilder toXContentFragment(CompositeAggregation aggregation, XCon\n static void buildCompositeMap(String fieldName, Map<String, Object> composite, XContentBuilder builder) throws IOException {\n builder.startObject(fieldName);\n for (Map.Entry<String, Object> entry : composite.entrySet()) {\n- if (entry.getValue().getClass() == BytesRef.class) {\n- builder.field(entry.getKey(), ((BytesRef) entry.getValue()).utf8ToString());\n- } else {\n- builder.field(entry.getKey(), entry.getValue());\n- }\n+ builder.field(entry.getKey(), entry.getValue());\n }\n builder.endObject();\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregation.java", "status": "modified" }, { "diff": "@@ -170,7 +170,9 @@ protected AggregatorFactory<?> doBuild(SearchContext context, AggregatorFactory<\n throw new IllegalArgumentException(\"Missing value for [after.\" + sources.get(i).name() + \"]\");\n }\n Object obj = after.get(sourceName);\n- if (obj instanceof Comparable) {\n+ if (configs[i].missingBucket() && obj == null) {\n+ values[i] = null;\n+ } else if (obj instanceof Comparable) {\n values[i] = (Comparable<?>) obj;\n } else {\n throw new IllegalArgumentException(\"Invalid value for [after.\" + sources.get(i).name() +", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.Weight;\n import org.apache.lucene.util.RoaringDocIdSet;\n+import 
org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.search.DocValueFormat;\n@@ -50,6 +51,7 @@\n import java.util.Collections;\n import java.util.List;\n import java.util.Map;\n+import java.util.function.LongUnaryOperator;\n import java.util.stream.Collectors;\n \n final class CompositeAggregator extends BucketsAggregator {\n@@ -59,9 +61,10 @@ final class CompositeAggregator extends BucketsAggregator {\n private final int[] reverseMuls;\n private final List<DocValueFormat> formats;\n \n+ private final SingleDimensionValuesSource<?>[] sources;\n private final CompositeValuesCollectorQueue queue;\n \n- private final List<Entry> entries;\n+ private final List<Entry> entries = new ArrayList<>();\n private LeafReaderContext currentLeaf;\n private RoaringDocIdSet.Builder docIdSetBuilder;\n private BucketCollector deferredCollectors;\n@@ -74,19 +77,19 @@ final class CompositeAggregator extends BucketsAggregator {\n this.sourceNames = Arrays.stream(sourceConfigs).map(CompositeValuesSourceConfig::name).collect(Collectors.toList());\n this.reverseMuls = Arrays.stream(sourceConfigs).mapToInt(CompositeValuesSourceConfig::reverseMul).toArray();\n this.formats = Arrays.stream(sourceConfigs).map(CompositeValuesSourceConfig::format).collect(Collectors.toList());\n- final SingleDimensionValuesSource<?>[] sources =\n- createValuesSources(context.bigArrays(), context.searcher().getIndexReader(), context.query(), sourceConfigs, size);\n- this.queue = new CompositeValuesCollectorQueue(sources, size);\n- this.sortedDocsProducer = sources[0].createSortedDocsProducerOrNull(context.searcher().getIndexReader(), context.query());\n- if (rawAfterKey != null) {\n- queue.setAfter(rawAfterKey.values());\n+ this.sources = new SingleDimensionValuesSource[sourceConfigs.length];\n+ for (int i = 0; i < sourceConfigs.length; i++) {\n+ this.sources[i] = createValuesSource(context.bigArrays(), context.searcher().getIndexReader(),\n+ context.query(), sourceConfigs[i], size, i);\n }\n- this.entries = new ArrayList<>();\n+ this.queue = new CompositeValuesCollectorQueue(context.bigArrays(), sources, size, rawAfterKey);\n+ this.sortedDocsProducer = sources[0].createSortedDocsProducerOrNull(context.searcher().getIndexReader(), context.query());\n }\n \n @Override\n protected void doClose() {\n Releasables.close(queue);\n+ Releasables.close(sources);\n }\n \n @Override\n@@ -256,94 +259,93 @@ public void collect(int doc, long zeroBucket) throws IOException {\n };\n }\n \n- private static SingleDimensionValuesSource<?>[] createValuesSources(BigArrays bigArrays, IndexReader reader, Query query,\n- CompositeValuesSourceConfig[] configs, int size) {\n- final SingleDimensionValuesSource<?>[] sources = new SingleDimensionValuesSource[configs.length];\n- for (int i = 0; i < sources.length; i++) {\n- final int reverseMul = configs[i].reverseMul();\n- if (configs[i].valuesSource() instanceof ValuesSource.Bytes.WithOrdinals && reader instanceof DirectoryReader) {\n- ValuesSource.Bytes.WithOrdinals vs = (ValuesSource.Bytes.WithOrdinals) configs[i].valuesSource();\n- sources[i] = new GlobalOrdinalValuesSource(\n+ private SingleDimensionValuesSource<?> createValuesSource(BigArrays bigArrays, IndexReader reader, Query query,\n+ CompositeValuesSourceConfig config, int sortRank, int size) {\n+\n+ final int reverseMul = config.reverseMul();\n+ if (config.valuesSource() instanceof ValuesSource.Bytes.WithOrdinals && reader instanceof 
DirectoryReader) {\n+ ValuesSource.Bytes.WithOrdinals vs = (ValuesSource.Bytes.WithOrdinals) config.valuesSource();\n+ SingleDimensionValuesSource<?> source = new GlobalOrdinalValuesSource(\n+ bigArrays,\n+ config.fieldType(),\n+ vs::globalOrdinalsValues,\n+ config.format(),\n+ config.missingBucket(),\n+ config.missing(),\n+ size,\n+ reverseMul\n+ );\n+\n+ if (sortRank == 0 && source.createSortedDocsProducerOrNull(reader, query) != null) {\n+ // this the leading source and we can optimize it with the sorted docs producer but\n+ // we don't want to use global ordinals because the number of visited documents\n+ // should be low and global ordinals need one lookup per visited term.\n+ Releasables.close(source);\n+ return new BinaryValuesSource(\n bigArrays,\n- configs[i].fieldType(),\n- vs::globalOrdinalsValues,\n- configs[i].format(),\n- configs[i].missing(),\n+ this::addRequestCircuitBreakerBytes,\n+ config.fieldType(),\n+ vs::bytesValues,\n+ config.format(),\n+ config.missingBucket(),\n+ config.missing(),\n size,\n reverseMul\n );\n+ } else {\n+ return source;\n+ }\n+ } else if (config.valuesSource() instanceof ValuesSource.Bytes) {\n+ ValuesSource.Bytes vs = (ValuesSource.Bytes) config.valuesSource();\n+ return new BinaryValuesSource(\n+ bigArrays,\n+ this::addRequestCircuitBreakerBytes,\n+ config.fieldType(),\n+ vs::bytesValues,\n+ config.format(),\n+ config.missingBucket(),\n+ config.missing(),\n+ size,\n+ reverseMul\n+ );\n \n- if (i == 0 && sources[i].createSortedDocsProducerOrNull(reader, query) != null) {\n- // this the leading source and we can optimize it with the sorted docs producer but\n- // we don't want to use global ordinals because the number of visited documents\n- // should be low and global ordinals need one lookup per visited term.\n- Releasables.close(sources[i]);\n- sources[i] = new BinaryValuesSource(\n- configs[i].fieldType(),\n- vs::bytesValues,\n- configs[i].format(),\n- configs[i].missing(),\n- size,\n- reverseMul\n- );\n- }\n- } else if (configs[i].valuesSource() instanceof ValuesSource.Bytes) {\n- ValuesSource.Bytes vs = (ValuesSource.Bytes) configs[i].valuesSource();\n- sources[i] = new BinaryValuesSource(\n- configs[i].fieldType(),\n- vs::bytesValues,\n- configs[i].format(),\n- configs[i].missing(),\n+ } else if (config.valuesSource() instanceof ValuesSource.Numeric) {\n+ final ValuesSource.Numeric vs = (ValuesSource.Numeric) config.valuesSource();\n+ if (vs.isFloatingPoint()) {\n+ return new DoubleValuesSource(\n+ bigArrays,\n+ config.fieldType(),\n+ vs::doubleValues,\n+ config.format(),\n+ config.missingBucket(),\n+ config.missing(),\n size,\n reverseMul\n );\n \n- } else if (configs[i].valuesSource() instanceof ValuesSource.Numeric) {\n- final ValuesSource.Numeric vs = (ValuesSource.Numeric) configs[i].valuesSource();\n- if (vs.isFloatingPoint()) {\n- sources[i] = new DoubleValuesSource(\n- bigArrays,\n- configs[i].fieldType(),\n- vs::doubleValues,\n- configs[i].format(),\n- configs[i].missing(),\n- size,\n- reverseMul\n- );\n-\n+ } else {\n+ final LongUnaryOperator rounding;\n+ if (vs instanceof RoundingValuesSource) {\n+ rounding = ((RoundingValuesSource) vs)::round;\n } else {\n- if (vs instanceof RoundingValuesSource) {\n- sources[i] = new LongValuesSource(\n- bigArrays,\n- configs[i].fieldType(),\n- vs::longValues,\n- ((RoundingValuesSource) vs)::round,\n- configs[i].format(),\n- configs[i].missing(),\n- size,\n- reverseMul\n- );\n-\n- } else {\n- sources[i] = new LongValuesSource(\n- bigArrays,\n- configs[i].fieldType(),\n- vs::longValues,\n- (value) 
-> value,\n- configs[i].format(),\n- configs[i].missing(),\n- size,\n- reverseMul\n- );\n-\n- }\n+ rounding = LongUnaryOperator.identity();\n }\n- } else {\n- throw new IllegalArgumentException(\"Unknown value source: \" + configs[i].valuesSource().getClass().getName() +\n- \" for field: \" + sources[i].fieldType.name());\n+ return new LongValuesSource(\n+ bigArrays,\n+ config.fieldType(),\n+ vs::longValues,\n+ rounding,\n+ config.format(),\n+ config.missingBucket(),\n+ config.missing(),\n+ size,\n+ reverseMul\n+ );\n }\n+ } else {\n+ throw new IllegalArgumentException(\"Unknown values source type: \" + config.valuesSource().getClass().getName() +\n+ \" for source: \" + config.name());\n }\n- return sources;\n }\n \n private static class Entry {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregator.java", "status": "modified" }, { "diff": "@@ -22,10 +22,11 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.common.util.IntArray;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n \n import java.io.IOException;\n-import java.util.Arrays;\n import java.util.Set;\n import java.util.TreeMap;\n \n@@ -36,29 +37,33 @@ final class CompositeValuesCollectorQueue implements Releasable {\n // the slot for the current candidate\n private static final int CANDIDATE_SLOT = Integer.MAX_VALUE;\n \n+ private final BigArrays bigArrays;\n private final int maxSize;\n private final TreeMap<Integer, Integer> keys;\n private final SingleDimensionValuesSource<?>[] arrays;\n- private final int[] docCounts;\n- private boolean afterValueSet = false;\n+ private IntArray docCounts;\n+ private boolean afterKeyIsSet = false;\n \n /**\n * Constructs a composite queue with the specified size and sources.\n *\n * @param sources The list of {@link CompositeValuesSourceConfig} to build the composite buckets.\n * @param size The number of composite buckets to keep.\n+ * @param afterKey\n */\n- CompositeValuesCollectorQueue(SingleDimensionValuesSource<?>[] sources, int size) {\n+ CompositeValuesCollectorQueue(BigArrays bigArrays, SingleDimensionValuesSource<?>[] sources, int size, CompositeKey afterKey) {\n+ this.bigArrays = bigArrays;\n this.maxSize = size;\n this.arrays = sources;\n- this.docCounts = new int[size];\n this.keys = new TreeMap<>(this::compare);\n- }\n-\n- void clear() {\n- keys.clear();\n- Arrays.fill(docCounts, 0);\n- afterValueSet = false;\n+ if (afterKey != null) {\n+ assert afterKey.size() == sources.length;\n+ afterKeyIsSet = true;\n+ for (int i = 0; i < afterKey.size(); i++) {\n+ sources[i].setAfter(afterKey.get(i));\n+ }\n+ }\n+ this.docCounts = bigArrays.newIntArray(1, false);\n }\n \n /**\n@@ -94,7 +99,7 @@ Integer compareCurrent() {\n * Returns the lowest value (exclusive) of the leading source.\n */\n Comparable<?> getLowerValueLeadSource() {\n- return afterValueSet ? arrays[0].getAfter() : null;\n+ return afterKeyIsSet ? 
arrays[0].getAfter() : null;\n }\n \n /**\n@@ -107,7 +112,7 @@ Comparable<?> getUpperValueLeadSource() throws IOException {\n * Returns the document count in <code>slot</code>.\n */\n int getDocCount(int slot) {\n- return docCounts[slot];\n+ return docCounts.get(slot);\n }\n \n /**\n@@ -117,7 +122,8 @@ private void copyCurrent(int slot) {\n for (int i = 0; i < arrays.length; i++) {\n arrays[i].copyCurrent(slot);\n }\n- docCounts[slot] = 1;\n+ docCounts = bigArrays.grow(docCounts, slot+1);\n+ docCounts.set(slot, 1);\n }\n \n /**\n@@ -134,17 +140,6 @@ int compare(int slot1, int slot2) {\n return 0;\n }\n \n- /**\n- * Sets the after values for this comparator.\n- */\n- void setAfter(Comparable<?>[] values) {\n- assert values.length == arrays.length;\n- afterValueSet = true;\n- for (int i = 0; i < arrays.length; i++) {\n- arrays[i].setAfter(values[i]);\n- }\n- }\n-\n /**\n * Compares the after values with the values in <code>slot</code>.\n */\n@@ -207,10 +202,10 @@ int addIfCompetitive() {\n Integer topSlot = compareCurrent();\n if (topSlot != null) {\n // this key is already in the top N, skip it\n- docCounts[topSlot] += 1;\n+ docCounts.increment(topSlot, 1);\n return topSlot;\n }\n- if (afterValueSet && compareCurrentWithAfter() <= 0) {\n+ if (afterKeyIsSet && compareCurrentWithAfter() <= 0) {\n // this key is greater than the top value collected in the previous round, skip it\n return -1;\n }\n@@ -239,9 +234,8 @@ int addIfCompetitive() {\n return newSlot;\n }\n \n-\n @Override\n public void close() {\n- Releasables.close(arrays);\n+ Releasables.close(docCounts);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesCollectorQueue.java", "status": "modified" }, { "diff": "@@ -23,6 +23,8 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.xcontent.ToXContentFragment;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.QueryShardException;\n@@ -40,10 +42,14 @@\n * A {@link ValuesSource} builder for {@link CompositeAggregationBuilder}\n */\n public abstract class CompositeValuesSourceBuilder<AB extends CompositeValuesSourceBuilder<AB>> implements Writeable, ToXContentFragment {\n+ private static final DeprecationLogger DEPRECATION_LOGGER =\n+ new DeprecationLogger(Loggers.getLogger(CompositeValuesSourceBuilder.class));\n+\n protected final String name;\n private String field = null;\n private Script script = null;\n private ValueType valueType = null;\n+ private boolean missingBucket = false;\n private Object missing = null;\n private SortOrder order = SortOrder.ASC;\n private String format = null;\n@@ -66,6 +72,11 @@ public abstract class CompositeValuesSourceBuilder<AB extends CompositeValuesSou\n if (in.readBoolean()) {\n this.valueType = ValueType.readFromStream(in);\n }\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ this.missingBucket = in.readBoolean();\n+ } else {\n+ this.missingBucket = false;\n+ }\n this.missing = in.readGenericValue();\n this.order = SortOrder.readFromStream(in);\n if (in.getVersion().onOrAfter(Version.V_6_3_0)) {\n@@ -89,6 +100,9 @@ public final void writeTo(StreamOutput out) throws IOException {\n if (hasValueType) {\n valueType.writeTo(out);\n }\n+ if 
(out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeBoolean(missingBucket);\n+ }\n out.writeGenericValue(missing);\n order.writeTo(out);\n if (out.getVersion().onOrAfter(Version.V_6_3_0)) {\n@@ -110,6 +124,7 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n if (script != null) {\n builder.field(\"script\", script);\n }\n+ builder.field(\"missing_bucket\", missingBucket);\n if (missing != null) {\n builder.field(\"missing\", missing);\n }\n@@ -127,7 +142,7 @@ public final XContentBuilder toXContent(XContentBuilder builder, Params params)\n \n @Override\n public final int hashCode() {\n- return Objects.hash(field, missing, script, valueType, order, format, innerHashCode());\n+ return Objects.hash(field, missingBucket, missing, script, valueType, order, format, innerHashCode());\n }\n \n protected abstract int innerHashCode();\n@@ -142,6 +157,7 @@ public boolean equals(Object o) {\n return Objects.equals(field, that.field()) &&\n Objects.equals(script, that.script()) &&\n Objects.equals(valueType, that.valueType()) &&\n+ Objects.equals(missingBucket, that.missingBucket()) &&\n Objects.equals(missing, that.missing()) &&\n Objects.equals(order, that.order()) &&\n Objects.equals(format, that.format()) &&\n@@ -215,21 +231,43 @@ public ValueType valueType() {\n \n /**\n * Sets the value to use when the source finds a missing value in a\n- * document\n+ * document.\n+ *\n+ * @deprecated Use {@link #missingBucket(boolean)} instead.\n */\n @SuppressWarnings(\"unchecked\")\n+ @Deprecated\n public AB missing(Object missing) {\n if (missing == null) {\n throw new IllegalArgumentException(\"[missing] must not be null\");\n }\n+ DEPRECATION_LOGGER.deprecated(\"[missing] is deprecated. Please use [missing_bucket] instead.\");\n this.missing = missing;\n return (AB) this;\n }\n \n+ @Deprecated\n public Object missing() {\n return missing;\n }\n \n+ /**\n+ * If true an explicit `null bucket will represent documents with missing values.\n+ */\n+ @SuppressWarnings(\"unchecked\")\n+ public AB missingBucket(boolean missingBucket) {\n+ this.missingBucket = missingBucket;\n+ return (AB) this;\n+ }\n+\n+ /**\n+ * False if documents with missing values are ignored, otherwise missing values are\n+ * represented by an explicit `null` value.\n+ */\n+ public boolean missingBucket() {\n+ return missingBucket;\n+ }\n+\n /**\n * Sets the {@link SortOrder} to use to sort values produced this source\n */\n@@ -292,11 +330,15 @@ public final CompositeValuesSourceConfig build(SearchContext context) throws IOE\n ValuesSourceConfig<?> config = ValuesSourceConfig.resolve(context.getQueryShardContext(),\n valueType, field, script, missing, null, format);\n \n- if (config.unmapped() && field != null && config.missing() == null) {\n+ if (config.unmapped() && field != null && missing == null && missingBucket == false) {\n // this source cannot produce any values so we refuse to build\n- // since composite buckets are not created on null values\n+ // since composite buckets are not created on null values by default.\n+ throw new QueryShardException(context.getQueryShardContext(),\n+ \"failed to find field [\" + field + \"] and [missing_bucket] is not set\");\n+ }\n+ if (missingBucket && missing != null) {\n throw new QueryShardException(context.getQueryShardContext(),\n- \"failed to find field [\" + field + \"] and [missing] is not provided\");\n+ \"cannot use [missing] option in conjunction with [missing_bucket]\");\n }\n return innerBuild(context, config);\n }", "filename": 
"server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesSourceBuilder.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@ class CompositeValuesSourceConfig {\n private final DocValueFormat format;\n private final int reverseMul;\n private final Object missing;\n+ private final boolean missingBucket;\n \n /**\n * Creates a new {@link CompositeValuesSourceConfig}.\n@@ -44,12 +45,14 @@ class CompositeValuesSourceConfig {\n * @param missing The missing value or null if documents with missing value should be ignored.\n */\n CompositeValuesSourceConfig(String name, @Nullable MappedFieldType fieldType, ValuesSource vs, DocValueFormat format,\n- SortOrder order, @Nullable Object missing) {\n+ SortOrder order, boolean missingBucket, @Nullable Object missing) {\n this.name = name;\n this.fieldType = fieldType;\n this.vs = vs;\n this.format = format;\n this.reverseMul = order == SortOrder.ASC ? 1 : -1;\n+ this.missingBucket = missingBucket;\n+ assert missingBucket == false || missing == null;\n this.missing = missing;\n }\n \n@@ -89,6 +92,13 @@ Object missing() {\n return missing;\n }\n \n+ /**\n+ * If true, an explicit `null bucket represents documents with missing values.\n+ */\n+ boolean missingBucket() {\n+ return missingBucket;\n+ }\n+\n /**\n * The sort order for the values source (e.g. -1 for descending and 1 for ascending).\n */", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesSourceConfig.java", "status": "modified" }, { "diff": "@@ -38,9 +38,9 @@ static <VB extends CompositeValuesSourceBuilder<VB>, T> void declareValuesSource\n ValueType targetValueType) {\n objectParser.declareField(VB::field, XContentParser::text,\n new ParseField(\"field\"), ObjectParser.ValueType.STRING);\n-\n objectParser.declareField(VB::missing, XContentParser::objectText,\n new ParseField(\"missing\"), ObjectParser.ValueType.VALUE);\n+ objectParser.declareBoolean(VB::missingBucket, new ParseField(\"missing_bucket\"));\n \n objectParser.declareField(VB::valueType, p -> {\n ValueType valueType = ValueType.resolveForScript(p.text());", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesSourceParserHelper.java", "status": "modified" }, { "diff": "@@ -226,7 +226,7 @@ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSo\n // is specified in the builder.\n final DocValueFormat docValueFormat = format() == null ? DocValueFormat.RAW : config.format();\n final MappedFieldType fieldType = config.fieldContext() != null ? 
config.fieldContext().fieldType() : null;\n- return new CompositeValuesSourceConfig(name, fieldType, vs, docValueFormat, order(), missing());\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, docValueFormat, order(), missingBucket(), missing());\n } else {\n throw new IllegalArgumentException(\"invalid source, expected numeric, got \" + orig.getClass().getSimpleName());\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/DateHistogramValuesSourceBuilder.java", "status": "modified" }, { "diff": "@@ -38,34 +38,67 @@\n */\n class DoubleValuesSource extends SingleDimensionValuesSource<Double> {\n private final CheckedFunction<LeafReaderContext, SortedNumericDoubleValues, IOException> docValuesFunc;\n- private final DoubleArray values;\n+ private final BitArray bits;\n+ private DoubleArray values;\n private double currentValue;\n+ private boolean missingCurrentValue;\n \n DoubleValuesSource(BigArrays bigArrays, MappedFieldType fieldType,\n CheckedFunction<LeafReaderContext, SortedNumericDoubleValues, IOException> docValuesFunc,\n- DocValueFormat format, Object missing, int size, int reverseMul) {\n- super(format, fieldType, missing, size, reverseMul);\n+ DocValueFormat format, boolean missingBucket, Object missing, int size, int reverseMul) {\n+ super(bigArrays, format, fieldType, missingBucket, missing, size, reverseMul);\n this.docValuesFunc = docValuesFunc;\n- this.values = bigArrays.newDoubleArray(size, false);\n+ this.bits = missingBucket ? new BitArray(bigArrays, 100) : null;\n+ this.values = bigArrays.newDoubleArray(Math.min(size, 100), false);\n }\n \n @Override\n void copyCurrent(int slot) {\n- values.set(slot, currentValue);\n+ values = bigArrays.grow(values, slot+1);\n+ if (missingBucket && missingCurrentValue) {\n+ bits.clear(slot);\n+ } else {\n+ assert missingCurrentValue == false;\n+ if (missingBucket) {\n+ bits.set(slot);\n+ }\n+ values.set(slot, currentValue);\n+ }\n }\n \n @Override\n int compare(int from, int to) {\n+ if (missingBucket) {\n+ if (bits.get(from) == false) {\n+ return bits.get(to) ? -1 * reverseMul : 0;\n+ } else if (bits.get(to) == false) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(values.get(from), values.get(to));\n }\n \n @Override\n int compareCurrent(int slot) {\n+ if (missingBucket) {\n+ if (missingCurrentValue) {\n+ return bits.get(slot) ? -1 * reverseMul : 0;\n+ } else if (bits.get(slot) == false) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(currentValue, values.get(slot));\n }\n \n @Override\n int compareCurrentWithAfter() {\n+ if (missingBucket) {\n+ if (missingCurrentValue) {\n+ return afterValue != null ? 
-1 * reverseMul : 0;\n+ } else if (afterValue == null) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(currentValue, afterValue);\n }\n \n@@ -75,7 +108,9 @@ private int compareValues(double v1, double v2) {\n \n @Override\n void setAfter(Comparable<?> value) {\n- if (value instanceof Number) {\n+ if (missingBucket && value == null) {\n+ afterValue = null;\n+ } else if (value instanceof Number) {\n afterValue = ((Number) value).doubleValue();\n } else {\n afterValue = format.parseDouble(value.toString(), false, () -> {\n@@ -86,6 +121,10 @@ void setAfter(Comparable<?> value) {\n \n @Override\n Double toComparable(int slot) {\n+ if (missingBucket && bits.get(slot) == false) {\n+ return null;\n+ }\n+ assert missingBucket == false || bits.get(slot);\n return values.get(slot);\n }\n \n@@ -99,8 +138,12 @@ public void collect(int doc, long bucket) throws IOException {\n int num = dvs.docValueCount();\n for (int i = 0; i < num; i++) {\n currentValue = dvs.nextValue();\n+ missingCurrentValue = false;\n next.collect(doc, bucket);\n }\n+ } else if (missingBucket) {\n+ missingCurrentValue = true;\n+ next.collect(doc, bucket);\n }\n }\n };\n@@ -127,6 +170,6 @@ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query quer\n \n @Override\n public void close() {\n- Releasables.close(values);\n+ Releasables.close(values, bits);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/DoubleValuesSource.java", "status": "modified" }, { "diff": "@@ -43,7 +43,7 @@\n */\n class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {\n private final CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc;\n- private final LongArray values;\n+ private LongArray values;\n private SortedSetDocValues lookup;\n private long currentValue;\n private Long afterValueGlobalOrd;\n@@ -52,16 +52,17 @@ class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {\n private long lastLookupOrd = -1;\n private BytesRef lastLookupValue;\n \n- GlobalOrdinalValuesSource(BigArrays bigArrays,\n- MappedFieldType type, CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc,\n- DocValueFormat format, Object missing, int size, int reverseMul) {\n- super(format, type, missing, size, reverseMul);\n+ GlobalOrdinalValuesSource(BigArrays bigArrays, MappedFieldType type,\n+ CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc,\n+ DocValueFormat format, boolean missingBucket, Object missing, int size, int reverseMul) {\n+ super(bigArrays, format, type, missingBucket, missing, size, reverseMul);\n this.docValuesFunc = docValuesFunc;\n- this.values = bigArrays.newLongArray(size, false);\n+ this.values = bigArrays.newLongArray(Math.min(size, 100), false);\n }\n \n @Override\n void copyCurrent(int slot) {\n+ values = bigArrays.grow(values, slot+1);\n values.set(slot, currentValue);\n }\n \n@@ -89,7 +90,10 @@ int compareCurrentWithAfter() {\n \n @Override\n void setAfter(Comparable<?> value) {\n- if (value.getClass() == String.class) {\n+ if (missingBucket && value == null) {\n+ afterValue = null;\n+ afterValueGlobalOrd = -1L;\n+ } else if (value.getClass() == String.class) {\n afterValue = format.parseBytesRef(value.toString());\n } else {\n throw new IllegalArgumentException(\"invalid value, expected string, got \" + value.getClass().getSimpleName());\n@@ -99,10 +103,12 @@ void setAfter(Comparable<?> value) {\n @Override\n BytesRef toComparable(int slot) throws 
IOException {\n long globalOrd = values.get(slot);\n- if (globalOrd == lastLookupOrd) {\n+ if (missingBucket && globalOrd == -1) {\n+ return null;\n+ } else if (globalOrd == lastLookupOrd) {\n return lastLookupValue;\n } else {\n- lastLookupOrd= globalOrd;\n+ lastLookupOrd = globalOrd;\n lastLookupValue = BytesRef.deepCopyOf(lookup.lookupOrd(values.get(slot)));\n return lastLookupValue;\n }\n@@ -123,6 +129,9 @@ public void collect(int doc, long bucket) throws IOException {\n currentValue = ord;\n next.collect(doc, bucket);\n }\n+ } else if (missingBucket) {\n+ currentValue = -1;\n+ next.collect(doc, bucket);\n }\n }\n };\n@@ -143,7 +152,7 @@ LeafBucketCollector getLeafCollector(Comparable<?> value, LeafReaderContext cont\n \n @Override\n public void collect(int doc, long bucket) throws IOException {\n- if (!currentValueIsSet) {\n+ if (currentValueIsSet == false) {\n if (dvs.advanceExact(doc)) {\n long ord;\n while ((ord = dvs.nextOrd()) != NO_MORE_ORDS) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/GlobalOrdinalValuesSource.java", "status": "modified" }, { "diff": "@@ -115,7 +115,7 @@ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSo\n ValuesSource.Numeric numeric = (ValuesSource.Numeric) orig;\n final HistogramValuesSource vs = new HistogramValuesSource(numeric, interval);\n final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;\n- return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order(), missing());\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order(), missingBucket(), missing());\n } else {\n throw new IllegalArgumentException(\"invalid source, expected numeric, got \" + orig.getClass().getSimpleName());\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/HistogramValuesSourceBuilder.java", "status": "modified" }, { "diff": "@@ -332,6 +332,14 @@ InternalBucket reduce(List<InternalBucket> buckets, ReduceContext reduceContext)\n @Override\n public int compareKey(InternalBucket other) {\n for (int i = 0; i < key.size(); i++) {\n+ if (key.get(i) == null) {\n+ if (other.key.get(i) == null) {\n+ continue;\n+ }\n+ return -1 * reverseMuls[i];\n+ } else if (other.key.get(i) == null) {\n+ return reverseMuls[i];\n+ }\n assert key.get(i).getClass() == other.key.get(i).getClass();\n @SuppressWarnings(\"unchecked\")\n int cmp = ((Comparable) key.get(i)).compareTo(other.key.get(i)) * reverseMuls[i];\n@@ -357,26 +365,29 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n * for numbers and a string for {@link BytesRef}s.\n */\n static Object formatObject(Object obj, DocValueFormat format) {\n+ if (obj == null) {\n+ return null;\n+ }\n if (obj.getClass() == BytesRef.class) {\n BytesRef value = (BytesRef) obj;\n if (format == DocValueFormat.RAW) {\n return value.utf8ToString();\n } else {\n- return format.format((BytesRef) obj);\n+ return format.format(value);\n }\n } else if (obj.getClass() == Long.class) {\n- Long value = (Long) obj;\n+ long value = (long) obj;\n if (format == DocValueFormat.RAW) {\n return value;\n } else {\n return format.format(value);\n }\n } else if (obj.getClass() == Double.class) {\n- Double value = (Double) obj;\n+ double value = (double) obj;\n if (format == DocValueFormat.RAW) {\n return value;\n } else {\n- return format.format((Double) obj);\n+ return format.format(value);\n }\n }\n return obj;", "filename": 
"server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/InternalComposite.java", "status": "modified" }, { "diff": "@@ -45,38 +45,73 @@\n * A {@link SingleDimensionValuesSource} for longs.\n */\n class LongValuesSource extends SingleDimensionValuesSource<Long> {\n+ private final BigArrays bigArrays;\n private final CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc;\n private final LongUnaryOperator rounding;\n \n- private final LongArray values;\n+ private BitArray bits;\n+ private LongArray values;\n private long currentValue;\n+ private boolean missingCurrentValue;\n \n- LongValuesSource(BigArrays bigArrays, MappedFieldType fieldType,\n- CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc,\n- LongUnaryOperator rounding, DocValueFormat format, Object missing, int size, int reverseMul) {\n- super(format, fieldType, missing, size, reverseMul);\n+ LongValuesSource(BigArrays bigArrays,\n+ MappedFieldType fieldType, CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc,\n+ LongUnaryOperator rounding, DocValueFormat format, boolean missingBucket, Object missing, int size, int reverseMul) {\n+ super(bigArrays, format, fieldType, missingBucket, missing, size, reverseMul);\n+ this.bigArrays = bigArrays;\n this.docValuesFunc = docValuesFunc;\n this.rounding = rounding;\n- this.values = bigArrays.newLongArray(size, false);\n+ this.bits = missingBucket ? new BitArray(bigArrays, Math.min(size, 100)) : null;\n+ this.values = bigArrays.newLongArray(Math.min(size, 100), false);\n }\n \n @Override\n void copyCurrent(int slot) {\n- values.set(slot, currentValue);\n+ values = bigArrays.grow(values, slot+1);\n+ if (missingBucket && missingCurrentValue) {\n+ bits.clear(slot);\n+ } else {\n+ assert missingCurrentValue == false;\n+ if (missingBucket) {\n+ bits.set(slot);\n+ }\n+ values.set(slot, currentValue);\n+ }\n }\n \n @Override\n int compare(int from, int to) {\n+ if (missingBucket) {\n+ if (bits.get(from) == false) {\n+ return bits.get(to) ? -1 * reverseMul : 0;\n+ } else if (bits.get(to) == false) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(values.get(from), values.get(to));\n }\n \n @Override\n int compareCurrent(int slot) {\n+ if (missingBucket) {\n+ if (missingCurrentValue) {\n+ return bits.get(slot) ? -1 * reverseMul : 0;\n+ } else if (bits.get(slot) == false) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(currentValue, values.get(slot));\n }\n \n @Override\n int compareCurrentWithAfter() {\n+ if (missingBucket) {\n+ if (missingCurrentValue) {\n+ return afterValue != null ? 
-1 * reverseMul : 0;\n+ } else if (afterValue == null) {\n+ return reverseMul;\n+ }\n+ }\n return compareValues(currentValue, afterValue);\n }\n \n@@ -86,7 +121,9 @@ private int compareValues(long v1, long v2) {\n \n @Override\n void setAfter(Comparable<?> value) {\n- if (value instanceof Number) {\n+ if (missingBucket && value == null) {\n+ afterValue = null;\n+ } else if (value instanceof Number) {\n afterValue = ((Number) value).longValue();\n } else {\n // for date histogram source with \"format\", the after value is formatted\n@@ -99,6 +136,9 @@ void setAfter(Comparable<?> value) {\n \n @Override\n Long toComparable(int slot) {\n+ if (missingBucket && bits.get(slot) == false) {\n+ return null;\n+ }\n return values.get(slot);\n }\n \n@@ -112,8 +152,12 @@ public void collect(int doc, long bucket) throws IOException {\n int num = dvs.docValueCount();\n for (int i = 0; i < num; i++) {\n currentValue = dvs.nextValue();\n+ missingCurrentValue = false;\n next.collect(doc, bucket);\n }\n+ } else if (missingBucket) {\n+ missingCurrentValue = true;\n+ next.collect(doc, bucket);\n }\n }\n };\n@@ -182,6 +226,6 @@ SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query quer\n \n @Override\n public void close() {\n- Releasables.close(values);\n+ Releasables.close(values, bits);\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/LongValuesSource.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.search.DocValueFormat;\n import org.elasticsearch.search.aggregations.LeafBucketCollector;\n@@ -36,11 +37,13 @@\n * A source that can record and compare values of similar type.\n */\n abstract class SingleDimensionValuesSource<T extends Comparable<T>> implements Releasable {\n+ protected final BigArrays bigArrays;\n protected final DocValueFormat format;\n @Nullable\n protected final MappedFieldType fieldType;\n @Nullable\n protected final Object missing;\n+ protected final boolean missingBucket;\n \n protected final int size;\n protected final int reverseMul;\n@@ -50,17 +53,23 @@ abstract class SingleDimensionValuesSource<T extends Comparable<T>> implements R\n /**\n * Creates a new {@link SingleDimensionValuesSource}.\n *\n+ * @param bigArrays The big arrays object.\n * @param format The format of the source.\n * @param fieldType The field type or null if the source is a script.\n+ * @param missingBucket If true, an explicit `null bucket represents documents with missing values.\n * @param missing The missing value or null if documents with missing value should be ignored.\n * @param size The number of values to record.\n * @param reverseMul -1 if the natural order ({@link SortOrder#ASC} should be reversed.\n */\n- SingleDimensionValuesSource(DocValueFormat format, @Nullable MappedFieldType fieldType, @Nullable Object missing,\n+ SingleDimensionValuesSource(BigArrays bigArrays, DocValueFormat format,\n+ @Nullable MappedFieldType fieldType, boolean missingBucket, @Nullable Object missing,\n int size, int reverseMul) {\n+ assert missing == null || missingBucket == false;\n+ this.bigArrays = bigArrays;\n this.format = format;\n this.fieldType = fieldType;\n this.missing = missing;\n+ this.missingBucket = missingBucket;\n this.size = size;\n this.reverseMul = 
reverseMul;\n this.afterValue = null;\n@@ -139,6 +148,7 @@ abstract LeafBucketCollector getLeafCollector(Comparable<?> value,\n protected boolean checkIfSortedDocsIsApplicable(IndexReader reader, MappedFieldType fieldType) {\n if (fieldType == null ||\n missing != null ||\n+ (missingBucket && afterValue == null) ||\n fieldType.indexOptions() == IndexOptions.NONE ||\n // inverse of the natural order\n reverseMul == -1) {", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/SingleDimensionValuesSource.java", "status": "modified" }, { "diff": "@@ -61,8 +61,9 @@ DocIdSet processLeaf(Query query, CompositeValuesCollectorQueue queue,\n DocIdSetBuilder builder = fillDocIdSet ? new DocIdSetBuilder(context.reader().maxDoc(), terms) : null;\n PostingsEnum reuse = null;\n boolean first = true;\n+ final BytesRef upper = upperValue == null ? null : BytesRef.deepCopyOf(upperValue);\n do {\n- if (upperValue != null && upperValue.compareTo(te.term()) < 0) {\n+ if (upper != null && upper.compareTo(te.term()) < 0) {\n break;\n }\n reuse = te.postings(reuse, PostingsEnum.NONE);", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/TermsSortedDocsProducer.java", "status": "modified" }, { "diff": "@@ -93,6 +93,6 @@ protected CompositeValuesSourceConfig innerBuild(SearchContext context, ValuesSo\n } else {\n format = config.format();\n }\n- return new CompositeValuesSourceConfig(name, fieldType, vs, format, order(), missing());\n+ return new CompositeValuesSourceConfig(name, fieldType, vs, format, order(), missingBucket(), missing());\n }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/composite/TermsValuesSourceBuilder.java", "status": "modified" }, { "diff": "@@ -0,0 +1,54 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket.composite;\n+\n+import org.elasticsearch.common.util.BigArrays;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n+\n+public class BitArrayTests extends ESTestCase {\n+ public void testRandom() {\n+ try (BitArray bitArray = new BitArray(BigArrays.NON_RECYCLING_INSTANCE, 1)) {\n+ int numBits = randomIntBetween(1000, 10000);\n+ for (int step = 0; step < 3; step++) {\n+ boolean[] bits = new boolean[numBits];\n+ List<Integer> slots = new ArrayList<>();\n+ for (int i = 0; i < numBits; i++) {\n+ bits[i] = randomBoolean();\n+ slots.add(i);\n+ }\n+ Collections.shuffle(slots, random());\n+ for (int i : slots) {\n+ if (bits[i]) {\n+ bitArray.set(i);\n+ } else {\n+ bitArray.clear(i);\n+ }\n+ }\n+ for (int i = 0; i < numBits; i++) {\n+ assertEquals(bitArray.get(i), bits[i]);\n+ }\n+ }\n+ }\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/BitArrayTests.java", "status": "added" }, { "diff": "@@ -44,6 +44,9 @@ private DateHistogramValuesSourceBuilder randomDateHistogramSourceBuilder() {\n if (randomBoolean()) {\n histo.timeZone(randomDateTimeZone());\n }\n+ if (randomBoolean()) {\n+ histo.missingBucket(true);\n+ }\n return histo;\n }\n \n@@ -55,6 +58,9 @@ private TermsValuesSourceBuilder randomTermsSourceBuilder() {\n terms.script(new Script(randomAlphaOfLengthBetween(10, 20)));\n }\n terms.order(randomFrom(SortOrder.values()));\n+ if (randomBoolean()) {\n+ terms.missingBucket(true);\n+ }\n return terms;\n }\n \n@@ -65,6 +71,9 @@ private HistogramValuesSourceBuilder randomHistogramSourceBuilder() {\n } else {\n histo.script(new Script(randomAlphaOfLengthBetween(10, 20)));\n }\n+ if (randomBoolean()) {\n+ histo.missingBucket(true);\n+ }\n histo.interval(randomDoubleBetween(Math.nextUp(0), Double.MAX_VALUE, false));\n return histo;\n }", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregationBuilderTests.java", "status": "modified" }, { "diff": "@@ -136,12 +136,25 @@ public void testUnmappedField() throws Exception {\n IndexSearcher searcher = new IndexSearcher(new MultiReader());\n QueryShardException exc =\n expectThrows(QueryShardException.class, () -> createAggregatorFactory(builder, searcher));\n- assertThat(exc.getMessage(), containsString(\"failed to find field [unknown] and [missing] is not provided\"));\n- // should work when missing is provided\n- terms.missing(\"missing\");\n+ assertThat(exc.getMessage(), containsString(\"failed to find field [unknown] and [missing_bucket] is not set\"));\n+ // should work when missing_bucket is set\n+ terms.missingBucket(true);\n createAggregatorFactory(builder, searcher);\n }\n \n+ public void testMissingBucket() throws Exception {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(randomAlphaOfLengthBetween(5, 10))\n+ .field(\"unknown\")\n+ .missingBucket(true)\n+ .missing(\"MISSING\");\n+ CompositeAggregationBuilder builder = new CompositeAggregationBuilder(\"test\", Collections.singletonList(terms));\n+ IndexSearcher searcher = new IndexSearcher(new MultiReader());\n+ QueryShardException exc =\n+ expectThrows(QueryShardException.class, () -> createAggregator(builder, searcher));\n+ assertWarnings(\"[missing] is deprecated. 
Please use [missing_bucket] instead.\");\n+ assertThat(exc.getMessage(), containsString(\"cannot use [missing] option in conjunction with [missing_bucket]\"));\n+ }\n+\n public void testWithKeyword() throws Exception {\n final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n dataset.addAll(\n@@ -187,6 +200,97 @@ public void testWithKeyword() throws Exception {\n );\n }\n \n+ public void testWithKeywordAndMissingBucket() throws Exception {\n+ final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n+ dataset.addAll(\n+ Arrays.asList(\n+ createDocument(\"keyword\", \"a\"),\n+ createDocument(\"long\", 0L),\n+ createDocument(\"keyword\", \"c\"),\n+ createDocument(\"keyword\", \"a\"),\n+ createDocument(\"keyword\", \"d\"),\n+ createDocument(\"keyword\", \"c\"),\n+ createDocument(\"long\", 5L)\n+ )\n+ );\n+\n+ // sort ascending, null bucket is first\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery()), dataset,\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\")\n+ .missingBucket(true);\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms));\n+ }, (result) -> {\n+ assertEquals(4, result.getBuckets().size());\n+ assertEquals(\"{keyword=d}\", result.afterKey().toString());\n+ assertEquals(\"{keyword=null}\", result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(0).getDocCount());\n+ assertEquals(\"{keyword=a}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(1).getDocCount());\n+ assertEquals(\"{keyword=c}\", result.getBuckets().get(2).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(2).getDocCount());\n+ assertEquals(\"{keyword=d}\", result.getBuckets().get(3).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(3).getDocCount());\n+ }\n+ );\n+\n+ // sort descending, null bucket is last\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery()), dataset,\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\")\n+ .missingBucket(true)\n+ .order(SortOrder.DESC);\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms));\n+ }, (result) -> {\n+ assertEquals(4, result.getBuckets().size());\n+ assertEquals(\"{keyword=null}\", result.afterKey().toString());\n+ assertEquals(\"{keyword=null}\", result.getBuckets().get(3).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(3).getDocCount());\n+ assertEquals(\"{keyword=a}\", result.getBuckets().get(2).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(2).getDocCount());\n+ assertEquals(\"{keyword=c}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(1).getDocCount());\n+ assertEquals(\"{keyword=d}\", result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(0).getDocCount());\n+ }\n+ );\n+\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\")\n+ .missingBucket(true);\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms))\n+ .aggregateAfter(Collections.singletonMap(\"keyword\", null));\n+ }, (result) -> {\n+ assertEquals(3, result.getBuckets().size());\n+ assertEquals(\"{keyword=d}\", result.afterKey().toString());\n+ assertEquals(\"{keyword=a}\", 
result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(0).getDocCount());\n+ assertEquals(\"{keyword=c}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(1).getDocCount());\n+ assertEquals(\"{keyword=d}\", result.getBuckets().get(2).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(2).getDocCount());\n+ }\n+ );\n+\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n+ () -> {\n+ TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder(\"keyword\")\n+ .field(\"keyword\")\n+ .missingBucket(true)\n+ .order(SortOrder.DESC);\n+ return new CompositeAggregationBuilder(\"name\", Collections.singletonList(terms))\n+ .aggregateAfter(Collections.singletonMap(\"keyword\", null));\n+ }, (result) -> {\n+ assertEquals(0, result.getBuckets().size());\n+ assertNull(result.afterKey());\n+ }\n+ );\n+ }\n+\n public void testWithKeywordMissingAfter() throws Exception {\n final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n dataset.addAll(\n@@ -518,6 +622,67 @@ public void testWithKeywordAndLongDesc() throws Exception {\n );\n }\n \n+ public void testWithKeywordLongAndMissingBucket() throws Exception {\n+ final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n+ dataset.addAll(\n+ Arrays.asList(\n+ createDocument(\"keyword\", \"a\", \"long\", 100L),\n+ createDocument(\"double\", 0d),\n+ createDocument(\"keyword\", \"c\", \"long\", 100L),\n+ createDocument(\"keyword\", \"a\", \"long\", 0L),\n+ createDocument(\"keyword\", \"d\", \"long\", 10L),\n+ createDocument(\"keyword\", \"c\"),\n+ createDocument(\"keyword\", \"c\", \"long\", 100L),\n+ createDocument(\"long\", 100L),\n+ createDocument(\"double\", 0d)\n+ )\n+ );\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery()), dataset,\n+ () -> new CompositeAggregationBuilder(\"name\",\n+ Arrays.asList(\n+ new TermsValuesSourceBuilder(\"keyword\").field(\"keyword\").missingBucket(true),\n+ new TermsValuesSourceBuilder(\"long\").field(\"long\").missingBucket(true)\n+ )\n+ ),\n+ (result) -> {\n+ assertEquals(7, result.getBuckets().size());\n+ assertEquals(\"{keyword=d, long=10}\", result.afterKey().toString());\n+ assertEquals(\"{keyword=null, long=null}\", result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(0).getDocCount());\n+ assertEquals(\"{keyword=null, long=100}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(1).getDocCount());\n+ assertEquals(\"{keyword=a, long=0}\", result.getBuckets().get(2).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(2).getDocCount());\n+ assertEquals(\"{keyword=a, long=100}\", result.getBuckets().get(3).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(3).getDocCount());\n+ assertEquals(\"{keyword=c, long=null}\", result.getBuckets().get(4).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(4).getDocCount());\n+ assertEquals(\"{keyword=c, long=100}\", result.getBuckets().get(5).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(5).getDocCount());\n+ assertEquals(\"{keyword=d, long=10}\", result.getBuckets().get(6).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(6).getDocCount());\n+ }\n+ );\n+\n+ testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery(\"keyword\")), dataset,\n+ () -> new CompositeAggregationBuilder(\"name\",\n+ Arrays.asList(\n+ new 
TermsValuesSourceBuilder(\"keyword\").field(\"keyword\").missingBucket(true),\n+ new TermsValuesSourceBuilder(\"long\").field(\"long\").missingBucket(true)\n+ )\n+ ).aggregateAfter(createAfterKey(\"keyword\", \"c\", \"long\", null)\n+ ),\n+ (result) -> {\n+ assertEquals(2, result.getBuckets().size());\n+ assertEquals(\"{keyword=d, long=10}\", result.afterKey().toString());\n+ assertEquals(\"{keyword=c, long=100}\", result.getBuckets().get(0).getKeyAsString());\n+ assertEquals(2L, result.getBuckets().get(0).getDocCount());\n+ assertEquals(\"{keyword=d, long=10}\", result.getBuckets().get(1).getKeyAsString());\n+ assertEquals(1L, result.getBuckets().get(1).getDocCount());\n+ }\n+ );\n+ }\n+\n public void testMultiValuedWithKeywordAndLong() throws Exception {\n final List<Map<String, List<Object>>> dataset = new ArrayList<>();\n dataset.addAll(", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeAggregatorTests.java", "status": "modified" }, { "diff": "@@ -129,21 +129,24 @@ public void testRandom() throws IOException {\n assert(false);\n }\n }\n- testRandomCase(true, types);\n+ testRandomCase(types);\n }\n \n private void testRandomCase(ClassAndName... types) throws IOException {\n- testRandomCase(true, types);\n- testRandomCase(false, types);\n+ testRandomCase(true, true, types);\n+ testRandomCase(true, false, types);\n+ testRandomCase(false, true, types);\n+ testRandomCase(false, false, types);\n }\n \n- private void testRandomCase(boolean forceMerge, ClassAndName... types) throws IOException {\n+ private void testRandomCase(boolean forceMerge, boolean missingBucket, ClassAndName... types) throws IOException {\n final BigArrays bigArrays = BigArrays.NON_RECYCLING_INSTANCE;\n int numDocs = randomIntBetween(50, 100);\n List<Comparable<?>[]> possibleValues = new ArrayList<>();\n for (ClassAndName type : types) {\n- int numValues = randomIntBetween(1, numDocs*2);\n- Comparable<?>[] values = new Comparable[numValues];\n+ final Comparable<?>[] values;\n+ int numValues = randomIntBetween(1, numDocs * 2);\n+ values = new Comparable[numValues];\n if (type.clazz == Long.class) {\n for (int i = 0; i < numValues; i++) {\n values[i] = randomLong();\n@@ -157,7 +160,7 @@ private void testRandomCase(boolean forceMerge, ClassAndName... types) throws IO\n values[i] = new BytesRef(randomAlphaOfLengthBetween(5, 50));\n }\n } else {\n- assert(false);\n+ assert (false);\n }\n possibleValues.add(values);\n }\n@@ -171,30 +174,34 @@ private void testRandomCase(boolean forceMerge, ClassAndName... 
types) throws IO\n boolean hasAllField = true;\n for (int j = 0; j < types.length; j++) {\n int numValues = randomIntBetween(0, 5);\n+ List<Comparable<?>> values = new ArrayList<>();\n if (numValues == 0) {\n hasAllField = false;\n- }\n- List<Comparable<?>> values = new ArrayList<>();\n- for (int k = 0; k < numValues; k++) {\n- values.add(possibleValues.get(j)[randomIntBetween(0, possibleValues.get(j).length-1)]);\n- if (types[j].clazz == Long.class) {\n- long value = (Long) values.get(k);\n- document.add(new SortedNumericDocValuesField(types[j].fieldType.name(), value));\n- document.add(new LongPoint(types[j].fieldType.name(), value));\n- } else if (types[j].clazz == Double.class) {\n- document.add(new SortedNumericDocValuesField(types[j].fieldType.name(),\n- NumericUtils.doubleToSortableLong((Double) values.get(k))));\n- } else if (types[j].clazz == BytesRef.class) {\n- BytesRef value = (BytesRef) values.get(k);\n- document.add(new SortedSetDocValuesField(types[j].fieldType.name(), (BytesRef) values.get(k)));\n- document.add(new TextField(types[j].fieldType.name(), value.utf8ToString(), Field.Store.NO));\n- } else {\n- assert(false);\n+ if (missingBucket) {\n+ values.add(null);\n+ }\n+ } else {\n+ for (int k = 0; k < numValues; k++) {\n+ values.add(possibleValues.get(j)[randomIntBetween(0, possibleValues.get(j).length - 1)]);\n+ if (types[j].clazz == Long.class) {\n+ long value = (Long) values.get(k);\n+ document.add(new SortedNumericDocValuesField(types[j].fieldType.name(), value));\n+ document.add(new LongPoint(types[j].fieldType.name(), value));\n+ } else if (types[j].clazz == Double.class) {\n+ document.add(new SortedNumericDocValuesField(types[j].fieldType.name(),\n+ NumericUtils.doubleToSortableLong((Double) values.get(k))));\n+ } else if (types[j].clazz == BytesRef.class) {\n+ BytesRef value = (BytesRef) values.get(k);\n+ document.add(new SortedSetDocValuesField(types[j].fieldType.name(), (BytesRef) values.get(k)));\n+ document.add(new TextField(types[j].fieldType.name(), value.utf8ToString(), Field.Store.NO));\n+ } else {\n+ assert (false);\n+ }\n }\n }\n docValues.add(values);\n }\n- if (hasAllField) {\n+ if (hasAllField || missingBucket) {\n List<CompositeKey> comb = createListCombinations(docValues);\n keys.addAll(comb);\n }\n@@ -210,29 +217,53 @@ private void testRandomCase(boolean forceMerge, ClassAndName... 
types) throws IO\n for (int i = 0; i < types.length; i++) {\n final MappedFieldType fieldType = types[i].fieldType;\n if (types[i].clazz == Long.class) {\n- sources[i] = new LongValuesSource(bigArrays, fieldType,\n- context -> DocValues.getSortedNumeric(context.reader(), fieldType.name()), value -> value,\n- DocValueFormat.RAW, null, size, 1);\n+ sources[i] = new LongValuesSource(\n+ bigArrays,\n+ fieldType,\n+ context -> DocValues.getSortedNumeric(context.reader(), fieldType.name()),\n+ value -> value,\n+ DocValueFormat.RAW,\n+ missingBucket,\n+ null,\n+ size,\n+ 1\n+ );\n } else if (types[i].clazz == Double.class) {\n sources[i] = new DoubleValuesSource(\n- bigArrays, fieldType,\n+ bigArrays,\n+ fieldType,\n context -> FieldData.sortableLongBitsToDoubles(DocValues.getSortedNumeric(context.reader(), fieldType.name())),\n- DocValueFormat.RAW, null, size, 1\n+ DocValueFormat.RAW,\n+ missingBucket,\n+ null,\n+ size,\n+ 1\n );\n } else if (types[i].clazz == BytesRef.class) {\n if (forceMerge) {\n // we don't create global ordinals but we test this mode when the reader has a single segment\n // since ordinals are global in this case.\n sources[i] = new GlobalOrdinalValuesSource(\n- bigArrays, fieldType,\n+ bigArrays,\n+ fieldType,\n context -> DocValues.getSortedSet(context.reader(), fieldType.name()),\n- DocValueFormat.RAW, null, size, 1\n+ DocValueFormat.RAW,\n+ missingBucket,\n+ null,\n+ size,\n+ 1\n );\n } else {\n sources[i] = new BinaryValuesSource(\n+ bigArrays,\n+ (b) -> {},\n fieldType,\n context -> FieldData.toString(DocValues.getSortedSet(context.reader(), fieldType.name())),\n- DocValueFormat.RAW, null, size, 1\n+ DocValueFormat.RAW,\n+ missingBucket,\n+ null,\n+ size,\n+ 1\n );\n }\n } else {\n@@ -241,28 +272,21 @@ private void testRandomCase(boolean forceMerge, ClassAndName... 
types) throws IO\n }\n CompositeKey[] expected = keys.toArray(new CompositeKey[0]);\n Arrays.sort(expected, (a, b) -> compareKey(a, b));\n- CompositeValuesCollectorQueue queue = new CompositeValuesCollectorQueue(sources, size);\n- final SortedDocsProducer docsProducer = sources[0].createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery());\n for (boolean withProducer : new boolean[] {true, false}) {\n- if (withProducer && docsProducer == null) {\n- continue;\n- }\n int pos = 0;\n CompositeKey last = null;\n while (pos < size) {\n- queue.clear();\n- if (last != null) {\n- queue.setAfter(last.values());\n- }\n-\n+ final CompositeValuesCollectorQueue queue =\n+ new CompositeValuesCollectorQueue(BigArrays.NON_RECYCLING_INSTANCE, sources, size, last);\n+ final SortedDocsProducer docsProducer = sources[0].createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery());\n for (LeafReaderContext leafReaderContext : reader.leaves()) {\n final LeafBucketCollector leafCollector = new LeafBucketCollector() {\n @Override\n public void collect(int doc, long bucket) throws IOException {\n queue.addIfCompetitive();\n }\n };\n- if (withProducer) {\n+ if (docsProducer != null && withProducer) {\n assertEquals(DocIdSet.EMPTY,\n docsProducer.processLeaf(new MatchAllDocsQuery(), queue, leafReaderContext, false));\n } else {\n@@ -310,6 +334,14 @@ private static MappedFieldType createKeyword(String name) {\n private static int compareKey(CompositeKey key1, CompositeKey key2) {\n assert key1.size() == key2.size();\n for (int i = 0; i < key1.size(); i++) {\n+ if (key1.get(i) == null) {\n+ if (key2.get(i) == null) {\n+ continue;\n+ }\n+ return -1;\n+ } else if (key2.get(i) == null) {\n+ return 1;\n+ }\n Comparable<Object> cmp1 = (Comparable<Object>) key1.get(i);\n int cmp = cmp1.compareTo(key2.get(i));\n if (cmp != 0) {", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/CompositeValuesCollectorQueueTests.java", "status": "modified" }, { "diff": "@@ -40,9 +40,12 @@ public void testBinarySorted() {\n MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();\n keyword.setName(\"keyword\");\n BinaryValuesSource source = new BinaryValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ (b) -> {},\n keyword,\n context -> null,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n 1\n@@ -55,9 +58,12 @@ public void testBinarySorted() {\n new TermQuery(new Term(\"keyword\", \"toto)\"))));\n \n source = new BinaryValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ (b) -> {},\n keyword,\n context -> null,\n DocValueFormat.RAW,\n+ false,\n \"missing_value\",\n 1,\n 1\n@@ -66,9 +72,26 @@ public void testBinarySorted() {\n assertNull(source.createSortedDocsProducerOrNull(reader, null));\n \n source = new BinaryValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ (b) -> {},\n keyword,\n context -> null,\n DocValueFormat.RAW,\n+ true,\n+ null,\n+ 1,\n+ 1\n+ );\n+ assertNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNull(source.createSortedDocsProducerOrNull(reader, null));\n+\n+ source = new BinaryValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ (b) -> {},\n+ keyword,\n+ context -> null,\n+ DocValueFormat.RAW,\n+ false,\n null,\n 0,\n -1\n@@ -77,7 +100,16 @@ public void testBinarySorted() {\n \n MappedFieldType ip = new IpFieldMapper.IpFieldType();\n ip.setName(\"ip\");\n- source = new BinaryValuesSource(ip, context -> null, DocValueFormat.RAW,null, 1, 1);\n+ source = new BinaryValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ (b) -> 
{},\n+ ip,\n+ context -> null,\n+ DocValueFormat.RAW,\n+ false,\n+ null,\n+ 1,\n+ 1);\n assertNull(source.createSortedDocsProducerOrNull(reader, null));\n }\n \n@@ -88,6 +120,7 @@ public void testGlobalOrdinalsSorted() {\n BigArrays.NON_RECYCLING_INSTANCE,\n keyword, context -> null,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n 1\n@@ -104,6 +137,7 @@ public void testGlobalOrdinalsSorted() {\n keyword,\n context -> null,\n DocValueFormat.RAW,\n+ false,\n \"missing_value\",\n 1,\n 1\n@@ -116,6 +150,20 @@ public void testGlobalOrdinalsSorted() {\n keyword,\n context -> null,\n DocValueFormat.RAW,\n+ true,\n+ null,\n+ 1,\n+ 1\n+ );\n+ assertNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNull(source.createSortedDocsProducerOrNull(reader, null));\n+\n+ source = new GlobalOrdinalValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ keyword,\n+ context -> null,\n+ DocValueFormat.RAW,\n+ false,\n null,\n 1,\n -1\n@@ -129,6 +177,7 @@ public void testGlobalOrdinalsSorted() {\n ip,\n context -> null,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n 1\n@@ -152,6 +201,7 @@ public void testNumericSorted() {\n context -> null,\n value -> value,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n 1\n@@ -169,19 +219,35 @@ public void testNumericSorted() {\n context -> null,\n value -> value,\n DocValueFormat.RAW,\n+ false,\n 0d,\n 1,\n 1);\n assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, null));\n assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new TermQuery(new Term(\"keyword\", \"toto)\"))));\n \n+ sourceWithMissing = new LongValuesSource(\n+ BigArrays.NON_RECYCLING_INSTANCE,\n+ number,\n+ context -> null,\n+ value -> value,\n+ DocValueFormat.RAW,\n+ true,\n+ null,\n+ 1,\n+ 1);\n+ assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));\n+ assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, null));\n+ assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new TermQuery(new Term(\"keyword\", \"toto)\"))));\n+\n LongValuesSource sourceRev = new LongValuesSource(\n BigArrays.NON_RECYCLING_INSTANCE,\n number,\n context -> null,\n value -> value,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n -1\n@@ -195,6 +261,7 @@ public void testNumericSorted() {\n number,\n context -> null,\n DocValueFormat.RAW,\n+ false,\n null,\n 1,\n 1", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/bucket/composite/SingleDimensionValuesSourceTests.java", "status": "modified" } ] }
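Reader aid: the diffs in this record add a `missing_bucket` option to the composite aggregation value sources. Below is a minimal sketch of how a caller might request it through the Java builders that appear in the diffs above (`TermsValuesSourceBuilder`, `CompositeAggregationBuilder`); the class name in the sketch is illustrative, and the ordering behaviour noted in the comments (null bucket first when ascending, last when descending, resumable via a null after key) is taken from the `CompositeAggregatorTests` cases shown earlier.

```java
import java.util.Collections;

import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class MissingBucketSketch {

    // Ascending source: documents without a "keyword" value land in an explicit
    // {keyword=null} bucket that sorts before all concrete values.
    static CompositeAggregationBuilder ascending() {
        TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("keyword")
            .field("keyword")
            .missingBucket(true);
        return new CompositeAggregationBuilder("name", Collections.singletonList(terms));
    }

    // Descending source: the null bucket sorts last, and paging can resume from it
    // by passing an explicit null in the after key.
    static CompositeAggregationBuilder descendingAfterNull() {
        TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("keyword")
            .field("keyword")
            .missingBucket(true)
            .order(SortOrder.DESC);
        return new CompositeAggregationBuilder("name", Collections.singletonList(terms))
            .aggregateAfter(Collections.singletonMap("keyword", null));
    }
}
```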
{ "body": "This issue was first reported in https://github.com/elastic/elasticsearch/issues/28985.\r\nAny text after the last closing bracket is simply ignored by the request json parser. So for instance a request like:\r\n```\r\n{\r\n \"query\": {\r\n \"term\": {\r\n \"ProductID\": \"one\"\r\n }\r\n }\r\n},\r\n\"field_after_last_bracket\": {\r\n}\r\n}\r\n```\r\n... is accepted.\r\nI don't know why we have this leniency which is why I am opening this issue as a bug.", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-12T15:15:06Z" }, { "body": "Is this issue accepted?", "created_at": "2018-03-14T19:42:40Z" }, { "body": "> Is this issue accepted?\r\n\r\nWe consider it a real issue worth fixing, so, yeah.", "created_at": "2018-03-14T20:00:02Z" } ], "number": 28995, "title": "Rest _search endpoint accepts invalid json body" }
{ "body": "This change validates that the `_search` request does not have trailing\r\ntokens after the main object and fails the request with a parsing exception otherwise.\r\n\r\nCloses #28995", "number": 29428, "review_comments": [ { "body": "Should this default not be the other way around (pass in true by default)? The only time I think we should allow trailing tokens is in the _msearch API which is a special case. It seems safer to me to check and error on trailing tokens in the default case and then have the _msearch API use the more specialised method below? or is there something I am missing here?", "created_at": "2018-04-10T08:04:02Z" }, { "body": "Agreed. I pushed https://github.com/elastic/elasticsearch/pull/29428/commits/c6a113b6c3a23d40dc02e773c00a30ed397bb6e1 to change the default", "created_at": "2018-04-10T13:19:48Z" } ], "title": "Fail _search request with trailing tokens" }
{ "commits": [ { "message": "Fail _search request with trailing tokens\n\nThis change validates that the `_search` request does not have trailing\ntokens after the main object and fails the request with a parsing exception otherwise.\n\nCloses #28995" }, { "message": "do not check for extra tokens if outside of the search action" }, { "message": "fix compil" }, { "message": "remove unused import" }, { "message": "defaults to checkTrailingTokens for SearchSourceBuilder#fromXContent" } ], "files": [ { "diff": "@@ -1155,7 +1155,7 @@ public void testMultiSearch() throws IOException {\n \n List<SearchRequest> requests = new ArrayList<>();\n CheckedBiConsumer<SearchRequest, XContentParser, IOException> consumer = (searchRequest, p) -> {\n- SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(p);\n+ SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(p, false);\n if (searchSourceBuilder.equals(new SearchSourceBuilder()) == false) {\n searchRequest.source(searchSourceBuilder);\n }", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java", "status": "modified" }, { "diff": "@@ -70,3 +70,8 @@ Executing a Regexp Query with a long regex string may degrade search performance\n To safeguard against this, the maximum length of regex that can be used in a\n Regexp Query request has been limited to 1000. This default maximum can be changed\n for a particular index with the index setting `index.max_regex_length`.\n+\n+==== Invalid `_search` request body\n+\n+Search requests with extra content after the main object will no longer be accepted\n+by the `_search` endpoint. A parsing exception will be thrown instead.", "filename": "docs/reference/migration/migrate_7_0/search.asciidoc", "status": "modified" }, { "diff": "@@ -112,7 +112,7 @@ static SearchRequest convert(SearchTemplateRequest searchTemplateRequest, Search\n try (XContentParser parser = XContentFactory.xContent(XContentType.JSON)\n .createParser(xContentRegistry, LoggingDeprecationHandler.INSTANCE, source)) {\n SearchSourceBuilder builder = SearchSourceBuilder.searchSource();\n- builder.parseXContent(parser);\n+ builder.parseXContent(parser, true);\n builder.explain(searchTemplateRequest.isExplain());\n builder.profile(searchTemplateRequest.isProfile());\n searchRequest.source(builder);", "filename": "modules/lang-mustache/src/main/java/org/elasticsearch/script/mustache/TransportSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -223,7 +223,7 @@ public void addSummaryFields(List<String> summaryFields) {\n return RatedDocument.fromXContent(p);\n }, RATINGS_FIELD);\n PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), (p, c) ->\n- SearchSourceBuilder.fromXContent(p), REQUEST_FIELD);\n+ SearchSourceBuilder.fromXContent(p, false), REQUEST_FIELD);\n PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), (p, c) -> p.map(), PARAMS_FIELD);\n PARSER.declareStringArray(RatedRequest::addSummaryFields, FIELDS_FIELD);\n PARSER.declareString(ConstructingObjectParser.optionalConstructorArg(), TEMPLATE_ID_FIELD);", "filename": "modules/rank-eval/src/main/java/org/elasticsearch/index/rankeval/RatedRequest.java", "status": "modified" }, { "diff": "@@ -107,7 +107,7 @@ protected void doExecute(RankEvalRequest request, ActionListener<RankEvalRespons\n String resolvedRequest = templateScript.newInstance(params).execute();\n try (XContentParser subParser = createParser(namedXContentRegistry,\n LoggingDeprecationHandler.INSTANCE, new 
BytesArray(resolvedRequest), XContentType.JSON)) {\n- ratedSearchSource = SearchSourceBuilder.fromXContent(subParser);\n+ ratedSearchSource = SearchSourceBuilder.fromXContent(subParser, false);\n } catch (IOException e) {\n // if we fail parsing, put the exception into the errors map and continue\n errors.put(ratedRequest.getId(), e);", "filename": "modules/rank-eval/src/main/java/org/elasticsearch/index/rankeval/TransportRankEvalAction.java", "status": "modified" }, { "diff": "@@ -77,7 +77,7 @@ public class RestReindexAction extends AbstractBaseReindexRestHandler<ReindexReq\n try (InputStream stream = BytesReference.bytes(builder).streamInput();\n XContentParser innerParser = parser.contentType().xContent()\n .createParser(parser.getXContentRegistry(), parser.getDeprecationHandler(), stream)) {\n- request.getSearchRequest().source().parseXContent(innerParser);\n+ request.getSearchRequest().source().parseXContent(innerParser, false);\n }\n };\n ", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestReindexAction.java", "status": "modified" }, { "diff": "@@ -94,7 +94,7 @@ public static MultiSearchRequest parseRequest(RestRequest restRequest, boolean a\n \n \n parseMultiLineRequest(restRequest, multiRequest.indicesOptions(), allowExplicitIndex, (searchRequest, parser) -> {\n- searchRequest.source(SearchSourceBuilder.fromXContent(parser));\n+ searchRequest.source(SearchSourceBuilder.fromXContent(parser, false));\n multiRequest.add(searchRequest);\n });\n List<SearchRequest> requests = multiRequest.requests();", "filename": "server/src/main/java/org/elasticsearch/rest/action/search/RestMultiSearchAction.java", "status": "modified" }, { "diff": "@@ -109,7 +109,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r\n }\n searchRequest.indices(Strings.splitStringByCommaToArray(request.param(\"index\")));\n if (requestContentParser != null) {\n- searchRequest.source().parseXContent(requestContentParser);\n+ searchRequest.source().parseXContent(requestContentParser, true);\n }\n \n final int batchedReduceSize = request.paramAsInt(\"batched_reduce_size\", searchRequest.getBatchedReduceSize());\n@@ -128,7 +128,7 @@ public static void parseSearchRequest(SearchRequest searchRequest, RestRequest r\n // only set if we have the parameter passed to override the cluster-level default\n searchRequest.allowPartialSearchResults(request.paramAsBoolean(\"allow_partial_search_results\", null));\n }\n- \n+\n // do not allow 'query_and_fetch' or 'dfs_query_and_fetch' search types\n // from the REST layer. 
these modes are an internal optimization and should\n // not be specified explicitly by the user.", "filename": "server/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java", "status": "modified" }, { "diff": "@@ -111,8 +111,12 @@ public final class SearchSourceBuilder implements Writeable, ToXContentObject, R\n public static final ParseField ALL_FIELDS_FIELDS = new ParseField(\"all_fields\");\n \n public static SearchSourceBuilder fromXContent(XContentParser parser) throws IOException {\n+ return fromXContent(parser, true);\n+ }\n+\n+ public static SearchSourceBuilder fromXContent(XContentParser parser, boolean checkTrailingTokens) throws IOException {\n SearchSourceBuilder builder = new SearchSourceBuilder();\n- builder.parseXContent(parser);\n+ builder.parseXContent(parser, checkTrailingTokens);\n return builder;\n }\n \n@@ -951,12 +955,19 @@ private SearchSourceBuilder shallowCopy(QueryBuilder queryBuilder, QueryBuilder\n return rewrittenBuilder;\n }\n \n+ public void parseXContent(XContentParser parser) throws IOException {\n+ parseXContent(parser, true);\n+ }\n+\n /**\n * Parse some xContent into this SearchSourceBuilder, overwriting any values specified in the xContent. Use this if you need to set up\n- * different defaults than a regular SearchSourceBuilder would have and use\n- * {@link #fromXContent(XContentParser)} if you have normal defaults.\n+ * different defaults than a regular SearchSourceBuilder would have and use {@link #fromXContent(XContentParser, boolean)} if you have\n+ * normal defaults.\n+ *\n+ * @param parser The xContent parser.\n+ * @param checkTrailingTokens If true throws a parsing exception when extra tokens are found after the main object.\n */\n- public void parseXContent(XContentParser parser) throws IOException {\n+ public void parseXContent(XContentParser parser, boolean checkTrailingTokens) throws IOException {\n XContentParser.Token token = parser.currentToken();\n String currentFieldName = null;\n if (token != XContentParser.Token.START_OBJECT && (token = parser.nextToken()) != XContentParser.Token.START_OBJECT) {\n@@ -1106,6 +1117,12 @@ public void parseXContent(XContentParser parser) throws IOException {\n parser.getTokenLocation());\n }\n }\n+ if (checkTrailingTokens) {\n+ token = parser.nextToken();\n+ if (token != null) {\n+ throw new ParsingException(parser.getTokenLocation(), \"Unexpected token [\" + token + \"] found after the main object.\");\n+ }\n+ }\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -165,7 +165,7 @@ public void testResponseErrorToXContent() throws IOException {\n new MultiSearchResponse.Item(null, new IllegalStateException(\"baaaaaazzzz\"))\n }, tookInMillis);\n \n- assertEquals(\"{\\\"took\\\":\" \n+ assertEquals(\"{\\\"took\\\":\"\n + tookInMillis\n + \",\\\"responses\\\":[\"\n + \"{\"\n@@ -225,7 +225,7 @@ public void testMultiLineSerialization() throws IOException {\n byte[] originalBytes = MultiSearchRequest.writeMultiLineFormat(originalRequest, xContentType.xContent());\n MultiSearchRequest parsedRequest = new MultiSearchRequest();\n CheckedBiConsumer<SearchRequest, XContentParser, IOException> consumer = (r, p) -> {\n- SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(p);\n+ SearchSourceBuilder searchSourceBuilder = SearchSourceBuilder.fromXContent(p, false);\n if (searchSourceBuilder.equals(new SearchSourceBuilder()) == false) {\n r.source(searchSourceBuilder);\n }\n@@ 
-273,7 +273,7 @@ private static MultiSearchRequest createMultiSearchRequest() throws IOException\n if (randomBoolean()) {\n searchRequest.allowPartialSearchResults(true);\n }\n- \n+\n // scroll is not supported in the current msearch api, so unset it:\n searchRequest.scroll((Scroll) null);\n ", "filename": "server/src/test/java/org/elasticsearch/action/search/MultiSearchRequestTests.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.builder;\n \n+import com.fasterxml.jackson.core.JsonParseException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -67,6 +68,18 @@ public void testFromXContent() throws IOException {\n assertParseSearchSource(testSearchSourceBuilder, createParser(builder));\n }\n \n+ public void testFromXContentInvalid() throws IOException {\n+ try (XContentParser parser = createParser(JsonXContent.jsonXContent, \"{}}\")) {\n+ JsonParseException exc = expectThrows(JsonParseException.class, () -> SearchSourceBuilder.fromXContent(parser));\n+ assertThat(exc.getMessage(), containsString(\"Unexpected close marker\"));\n+ }\n+\n+ try (XContentParser parser = createParser(JsonXContent.jsonXContent, \"{}{}\")) {\n+ ParsingException exc = expectThrows(ParsingException.class, () -> SearchSourceBuilder.fromXContent(parser));\n+ assertThat(exc.getDetailedMessage(), containsString(\"found after the main object\"));\n+ }\n+ }\n+\n private static void assertParseSearchSource(SearchSourceBuilder testBuilder, XContentParser parser) throws IOException {\n if (randomBoolean()) {\n parser.nextToken(); // sometimes we move it on the START_OBJECT to", "filename": "server/src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTests.java", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**:all\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**:all\r\n\r\n**OS version**:all\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nperhaps setHeaders method can't override default header in RestClient, because httpHeader names are case-insensitive.suggest converting all default header and request header names to lowercase/uppercase.\r\n\r\n![diff](https://cloud.githubusercontent.com/assets/24735504/21955372/f06fdf9e-daa4-11e6-8caf-0dcb51641d0c.png)\r\n\r\n", "comments": [ { "body": "I’d like to take this up if no one is already working on this or has been assigned to this.", "created_at": "2018-04-05T00:17:00Z" }, { "body": "go ahead @adityasrini thanks!", "created_at": "2018-04-05T10:46:47Z" }, { "body": "given that we have decided (see #30616) to remove support for default headers, as they are already supported by the apache http client which we use to perform requests, I am going to close this.", "created_at": "2018-07-16T11:32:15Z" } ], "number": 22623, "title": "Make headers case-insensitive" }
{ "body": "Converts requestHeader names to lowercase in the English Locale. Makes overriding default headers in RestClient possible. Converted the `requestNames.contains... == false` statement to `!requestNames.contains...`. Closes #22623", "number": 29419, "review_comments": [ { "body": "Like it or not, our style is to use ` == false` instead of `!`.", "created_at": "2018-04-07T15:00:38Z" }, { "body": "I’ll change it back then!", "created_at": "2018-04-07T15:59:09Z" }, { "body": "@adityasrini \r\nRather than doing the two String.toLowerCase(Locale.ENGLISH) which requires 2 changes, you should replace the new HashMap() with a new TreeMap(String.CASE_INSENSITIVE_ORDER).", "created_at": "2018-04-16T13:10:12Z" }, { "body": "I don't necessarily agree on this. We don't need a sorted map here, we just need to make sure that keys are case insensitive, which is one of the properties of treemap when used in this way, but we don't need all of its other properties which also affect that data structure that's internally used to store entries. Makes sense?", "created_at": "2018-04-16T13:17:53Z" }, { "body": "@javanna \r\n\r\nSorry my suggestion should have been to replace the `new HashSet()` with `new TreeSet(String.CASE_INSENSITIVE_ORDER)` at line 433 , and remove the 2x `toLowerCase(Locale.ENGLISH)` additions, obviously mentioning TreeMap was nonsense.\r\n", "created_at": "2018-04-16T14:08:00Z" }, { "body": "I don't necessarily agree with this either, we would be introducing a sorted data structure where we don't need ordering, but only case-insensitive lookups. Not sure it is a good trade-off.", "created_at": "2018-04-16T14:29:26Z" }, { "body": "@javanna \r\n\r\n> we would be introducing a sorted data structure where we don't need ordering, but only case-insensitive lookups.\r\n\r\nThe internal structure of treeset doesnt matter, any more than the internals of hashset matter.\r\n\r\nHashsets also sort/arrange their keys, it might not be alphabetically, but the entire buckets thing also has its own system sorting system when it allocates keys into a chain of buckets, but who cares.\r\n\r\nAll that matters is the *set allows us to determine if some key already exists in it.", "created_at": "2018-04-16T14:57:21Z" }, { "body": "Maybe in this case it won't make a huge difference, but `HashSet` is backed by a `HashMap` while `TreeSet` is back by a `TreeMap`. `TreeMap` is implemented using a red-black tree, which is very different compared to `HashMap`, the reason being that the former will have to re-balance the tree to make it possible to iterate through entries in their natural ordering. We are using a set here though just to quickly check if it already contains something, and we never iterate through it. Conceptually, I still like more the two calls to lowercase than using a red-black tree just because treeset allows to make strings case-insensitive. Hopefully I explained what I meant. It would be interesting to measure which of the two solutions is faster, not sure it's worth it in this case given that this is probably not a hotspot.", "created_at": "2018-04-16T19:12:31Z" }, { "body": "> TreeMap is implemented using a red-black tree, which is very different compared to HashMap, \r\n\r\nToday TreeMap is a red black tree, tomorrow it might change, it doesnt matter, all we care about is that it keeps the contract that allows case insensitive look ups of keys. 
We can be sure they are close enough to the same speed, the jdk collection guys know this important they arent going to make one 10x slower than the other.\r\n\r\nA HashMap turns into a tree when it gets really full anyway, which basicaly means the jdk guys dont care why should you ?\r\n\r\nhttps://stackoverflow.com/questions/30164087/how-does-java-8s-hashmap-degenerate-to-balanced-trees-when-many-keys-have-the-s\r\n\r\n>> The implementation notes comment in HashMap is a better description of HashMap's operation than I could write myself. The relevant parts for understanding the tree nodes and their ordering are:\r\n\r\n>> This map usually acts as a binned (bucketed) hash table, but when bins get too large, they are transformed into bins of TreeNodes, each structured similarly to those in java.util.TreeMap. [...] Bins of TreeNodes may be traversed and used like any others, but additionally support faster lookup when overpopulated. [...]\r\n\r\n\r\n> which is very different compared to HashMap, the reason being that the former will have to re-balance the tree to make it possible to iterate through entries in their natural ordering\r\n\r\nAnd a hashmap also has to rebalance when it hits some threshold. To solve that we simply call the ctor that takes initial size so in both case the rebalance never happens.\r\n\r\n> We are using a set here though just to quickly check if it already contains something, and we never iterate through it.\r\n\r\nThats right so why talk about something that never happens.\r\n\r\n> It would be interesting to measure which of the two solutions is faster, not sure it's worth it in this case given that this is probably not a hotspot\r\n\r\nExcept that enough of this is, because HM and TM are used everywhere and will be compiled by hotspot. If one is compiled so will be the other and vice versa.\r\n\r\nWhy ? This path takes 100s/100s of cycles vs billions for a complete request, it doesnt matter. Even if one is 2x or 10x sloewr the total request time will be basically the same.", "created_at": "2018-04-16T22:02:54Z" }, { "body": "+1 to keeping a hash set. Moving to a tree set would make operations perform in `O(log(size))` rather than constant time. Even if this set would usually be small so that it wouldn't matter, we have seen in the past that users are sometimes very creative when it comes to pushing the system to its boundaries.", "created_at": "2018-04-17T08:45:25Z" } ], "title": "Makes request headers lowercase in English locale." }
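The review thread above weighs two ways of getting a case-insensitive membership check. A tiny sketch of both, purely for illustration; neither snippet is taken from the PR.

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;
import java.util.TreeSet;

public class CaseInsensitiveNameSet {
    public static void main(String[] args) {
        // Option discussed in review: a TreeSet whose comparator ignores case,
        // at the cost of O(log n) lookups backed by a red-black tree.
        Set<String> treeNames = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        treeNames.add("Content-Type");
        System.out.println(treeNames.contains("content-type"));   // true

        // Option the PR keeps: normalize names up front and use a constant-time HashSet.
        Set<String> hashNames = new HashSet<>();
        hashNames.add("Content-Type".toLowerCase(Locale.ENGLISH));
        System.out.println(hashNames.contains("CONTENT-TYPE".toLowerCase(Locale.ENGLISH))); // true
    }
}
```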
{ "commits": [ { "message": "Makes request headers lowercase in English locale. Closes #22623" }, { "message": "Reverted from ! to ==" } ], "files": [ { "diff": "@@ -434,10 +434,10 @@ private void setHeaders(HttpRequest httpRequest, Header[] requestHeaders) {\n for (Header requestHeader : requestHeaders) {\n Objects.requireNonNull(requestHeader, \"request header must not be null\");\n httpRequest.addHeader(requestHeader);\n- requestNames.add(requestHeader.getName());\n+ requestNames.add(requestHeader.getName().toLowerCase(Locale.ENGLISH));\n }\n for (Header defaultHeader : defaultHeaders) {\n- if (requestNames.contains(defaultHeader.getName()) == false) {\n+ if (requestNames.contains(defaultHeader.getName().toLowerCase(Locale.ENGLISH)) == false) {\n httpRequest.addHeader(defaultHeader);\n }\n }", "filename": "client/rest/src/main/java/org/elasticsearch/client/RestClient.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: tried with 2.3.5, 2.4.4, 5.2.1 and 5.4.0\r\n\r\n**Plugins installed**: nope\r\n\r\n**JVM version** : 1.8.0_121-b13 (on mac os) and 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14 \r\n\r\n**OS version** : mac os 10.12.4 and Ubuntu 16.04\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nWhen using a `geo-point` the inserts sometimes fail with an 'illegal longitude value' error. The longitude value in the error could be interpreted as valid, when you subtract 360 it will give you the valid value (360 degrees are in a sphere like the earth). It might have something to do with the length in characters of the geohash (h4rr13h4rr13x fails, h4rr13h4rr13 succeeds). Another search direction might be a rounding error.\r\n\r\n**Steps to reproduce**:\r\n\r\nCreate an index-mapping with a `geo_point`\r\n\r\n```\r\ncurl -XPUT 'http://localhost:9200/my-index' -d '{\r\n \"mappings\": {\r\n \"user\": { \r\n \"_all\": { \"enabled\": false }, \r\n \"properties\": { \r\n \"geoLocation\": {\r\n \"type\": \"geo_point\"\r\n }\r\n }\r\n }\r\n }\r\n}'\r\n```\r\n\r\nInsert two documents : \r\n\r\n```\r\ncurl -XPUT 'http://localhost:9200/my-index/user/4' -d '{\"geoLocation\":\"h4rr13\"}'\r\ncurl -XPUT 'http://localhost:9200/my-index/user/5' -d '{\"geoLocation\":\"h4rr13h4rr13x\"}'\r\n```\r\n\r\nThe first insert succeeds and the second one fails : \r\n\r\n```\r\n{\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"illegal longitude value [370.25605011731386] for geoLocation\"}},\"status\":400}\r\n```\r\n\r\nThe second call should also succeed since it is a valid place : http://geohash.org/h4rr13h4rr13x\r\n", "comments": [ { "body": "With geohash.org, `h4rr13h4rr13x` resolves to `-76.1073640 10.2560504` while our utility method `GeoPoint.fromGeohash(\"h4rr13h4rr13x\")` returns `-76.10736412927508, 370.25605011731386` causing the `GeoPointFieldMapper` to fail.\r\n\r\nThis looks like a bug but I'd call for @nknize 's opinion?", "created_at": "2017-05-11T15:34:18Z" }, { "body": "I digged around in the source and found it has something to do with the precision of the geohash. The culprit is : `GeoPointField#unscaleLon(3282405194L)` (value should be < 180 and > -180)\r\n\r\nI can't however find this class on the master branch (the last version where this class is visible is 6.6.0 of Lucene), \r\n\r\nSo I'm not sure if this is an ES issue or Lucene.\r\n\r\nThe issue might be related to https://issues.apache.org/jira/browse/LUCENE-6710 (which was reported by @nknize 😄 ) ", "created_at": "2017-06-15T12:43:53Z" } ], "number": 24616, "title": "Illegal longitude value for geoLocation on valid geo_point" }
{ "body": "Fixes a possible overflow error that geohashes longer than 12 characters\r\ncan cause during parsing.\r\n\r\nFixes #24616 ", "number": 29418, "review_comments": [], "title": "Fix overflow error in parsing of long geohashes" }
{ "commits": [ { "message": "Fix overflow error in parsing of long geohashes\n\nFixes a possible overflow error that geohashes longer than 12 character\ncan cause during parsing." }, { "message": "Add a note about geohash precision limitations" } ], "files": [ { "diff": "@@ -92,6 +92,16 @@ format was changed early on to conform to the format used by GeoJSON.\n \n ==================================================\n \n+[NOTE]\n+A point can be expressed as a http://en.wikipedia.org/wiki/Geohash[geohash].\n+Geohashes are https://en.wikipedia.org/wiki/Base32[base32] encoded strings of\n+the bits of the latitude and longitude interleaved. Each character in a geohash\n+adds additional 5 bits to the precision. So the longer the hash, the more\n+precise it is. For the indexing purposed geohashs are translated into\n+latitude-longitude pairs. During this process only first 12 characters are\n+used, so specifying more than 12 characters in a geohash doesn't increase the\n+precision. The 12 characters provide 60 bits, which should reduce a possible\n+error to less than 2cm.\n \n [[geo-point-params]]\n ==== Parameters for `geo_point` fields", "filename": "docs/reference/mapping/types/geo-point.asciidoc", "status": "modified" }, { "diff": "@@ -72,15 +72,19 @@ public static final long longEncode(final double lon, final double lat, final in\n /**\n * Encode from geohash string to the geohash based long format (lon/lat interleaved, 4 least significant bits = level)\n */\n- public static final long longEncode(final String hash) {\n- int level = hash.length()-1;\n+ private static long longEncode(final String hash, int length) {\n+ int level = length - 1;\n long b;\n long l = 0L;\n for(char c : hash.toCharArray()) {\n b = (long)(BASE_32_STRING.indexOf(c));\n l |= (b<<(level--*5));\n+ if (level < 0) {\n+ // We cannot handle more than 12 levels\n+ break;\n+ }\n }\n- return (l<<4)|hash.length();\n+ return (l << 4) | length;\n }\n \n /**\n@@ -173,6 +177,10 @@ public static final long mortonEncode(final String hash) {\n for(char c : hash.toCharArray()) {\n b = (long)(BASE_32_STRING.indexOf(c));\n l |= (b<<((level--*5) + MORTON_OFFSET));\n+ if (level < 0) {\n+ // We cannot handle more than 12 levels\n+ break;\n+ }\n }\n return BitUtil.flipFlop(l);\n }\n@@ -200,13 +208,14 @@ private static char encode(int x, int y) {\n public static Rectangle bbox(final String geohash) {\n // bottom left is the coordinate\n GeoPoint bottomLeft = GeoPoint.fromGeohash(geohash);\n- long ghLong = longEncode(geohash);\n+ int len = Math.min(12, geohash.length());\n+ long ghLong = longEncode(geohash, len);\n // shift away the level\n ghLong >>>= 4;\n // deinterleave and add 1 to lat and lon to get topRight\n long lat = BitUtil.deinterleave(ghLong >>> 1) + 1;\n long lon = BitUtil.deinterleave(ghLong) + 1;\n- GeoPoint topRight = GeoPoint.fromGeohash(BitUtil.interleave((int)lon, (int)lat) << 4 | geohash.length());\n+ GeoPoint topRight = GeoPoint.fromGeohash(BitUtil.interleave((int)lon, (int)lat) << 4 | len);\n \n return new Rectangle(bottomLeft.lat(), topRight.lat(), bottomLeft.lon(), topRight.lon());\n }", "filename": "server/src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java", "status": "modified" }, { "diff": "@@ -82,4 +82,20 @@ public void testGeohashExtremes() {\n assertEquals(\"xbpbpbpbpbpb\", GeoHashUtils.stringEncode(180, 0));\n assertEquals(\"zzzzzzzzzzzz\", GeoHashUtils.stringEncode(180, 90));\n }\n+\n+ public void testLongGeohashes() {\n+ for (int i = 0; i < 100000; i++) {\n+ String geohash = randomGeohash(12, 12);\n+ 
GeoPoint expected = GeoPoint.fromGeohash(geohash);\n+ // Adding some random geohash characters at the end\n+ String extendedGeohash = geohash + randomGeohash(1, 10);\n+ GeoPoint actual = GeoPoint.fromGeohash(extendedGeohash);\n+ assertEquals(\"Additional data points above 12 should be ignored [\" + extendedGeohash + \"]\" , expected, actual);\n+\n+ Rectangle expectedBbox = GeoHashUtils.bbox(geohash);\n+ Rectangle actualBbox = GeoHashUtils.bbox(extendedGeohash);\n+ assertEquals(\"Additional data points above 12 should be ignored [\" + extendedGeohash + \"]\" , expectedBbox, actualBbox);\n+\n+ }\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/common/geo/GeoHashTests.java", "status": "modified" } ] }
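To make the 12-character cap concrete: each base32 character contributes 5 bits and the encoded long reserves 4 bits for the precision level, so 12 × 5 + 4 = 64 bits exactly fills a `long`, and any further characters would shift bits out of range, which is how the out-of-range longitude in the issue arose. The following is a standalone sketch of the capped encoding, not the actual `GeoHashUtils` code.

```java
final class GeohashSketch {
    // standard geohash base32 alphabet (no 'a', 'i', 'l', 'o')
    private static final String BASE_32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    // Encode the interleaved lon/lat bits into a long, ignoring characters beyond the 12th.
    static long longEncode(final String hash) {
        final int length = Math.min(12, hash.length()); // extra characters cannot add precision
        int level = length - 1;
        long encoded = 0L;
        for (char c : hash.toCharArray()) {
            final long b = BASE_32.indexOf(c);
            encoded |= b << (level-- * 5);
            if (level < 0) {
                break; // a 13th level would need bits the long no longer has
            }
        }
        return (encoded << 4) | length; // 4 least significant bits store the precision level
    }

    public static void main(String[] args) {
        // the two hashes from the issue encode identically once the extra character is dropped
        System.out.println(longEncode("h4rr13h4rr13") == longEncode("h4rr13h4rr13x")); // true
    }
}
```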
{ "body": "Today we close the translog write tragically if we experience any I/O exception on a write. These tragic closes lead to use closing the translog and failing the engine. Yet, there is one case that is missed which is when we touch the write channel during a read (checking if reading from the writer would put us past what has been flushed). This commit addresses this by closing the writer tragically if we encounter an I/O exception on the write channel while reading. This becomes interesting when we consider that this method is invoked from the engine through the translog as part of getting a document from the translog. This means we have to consider closing the translog here as well which will cascade up into us finally failing the engine.\r\n\r\nNote that there is no semantic change to, for example, primary/replica resync and recovery. These actions will take a snapshot of the translog which syncs the translog to disk. If an I/O exception occurs during the sync we already close the writer tragically and once we have synced we do not ever read past the position that was synced while taking the snapshot.\r\n\r\nCloses #29390", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-04-05T17:53:44Z" }, { "body": "This addresses the test failure in #29390 exactly because the test is asserting that the translog is closed after a tragic event occurred on the writer but because of the missed handling of an I/O exception on the write channel in the read method, the translog will not be closed by the random readOperation that was added to TranslogThread#run.", "created_at": "2018-04-05T17:55:03Z" }, { "body": "@ywelsch I pushed. I want to refactor the exception handling for `TranslogWriter#closeWithTragicEvent` and `Translog#closeOnTragicEvent` immediately after this PR. Can you take another look?", "created_at": "2018-04-06T12:50:00Z" } ], "number": 29401, "title": "Close translog writer if exception on write channel" }
{ "body": "This commit simplifies the invocations to Translog#closeOnTragicEvent. This method already catches all possible exceptions and suppresses the non-AlreadyClosedExceptions into the exception that triggered the invocation. Therefore, there is no need for callers to do this same logic (which would never execute).\r\n\r\nRelates #29401", "number": 29413, "review_comments": [], "title": "Simplify Translog#closeOnTragicEvent" }
{ "commits": [ { "message": "Simplify Translog#closeOnTragicEvent\n\nThis commit simplifies the invocations to\nTranslog#closeOnTragicEvent. This method already catches all possible\nexceptions and suppresses the non-AlreadyClosedExceptions into the\nexception that triggered the invocation. Therefore, there is no need for\ncallers to do this same logic (which would never execute)." } ], "files": [ { "diff": "@@ -490,19 +490,11 @@ public Location add(final Operation operation) throws IOException {\n return current.add(bytes, operation.seqNo());\n }\n } catch (final AlreadyClosedException | IOException ex) {\n- try {\n- closeOnTragicEvent(ex);\n- } catch (final Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ closeOnTragicEvent(ex);\n throw ex;\n- } catch (final Exception e) {\n- try {\n- closeOnTragicEvent(e);\n- } catch (final Exception inner) {\n- e.addSuppressed(inner);\n- }\n- throw new TranslogException(shardId, \"Failed to write operation [\" + operation + \"]\", e);\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n+ throw new TranslogException(shardId, \"Failed to write operation [\" + operation + \"]\", ex);\n } finally {\n Releasables.close(out);\n }\n@@ -586,13 +578,9 @@ public Operation readOperation(Location location) throws IOException {\n // if they are still in RAM and we are reading onto that position\n try {\n return current.read(location);\n- } catch (final IOException e) {\n- try {\n- closeOnTragicEvent(e);\n- } catch (final Exception inner) {\n- e.addSuppressed(inner);\n- }\n- throw e;\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n+ throw ex;\n }\n } else {\n // read backwards - it's likely we need to read on that is recent\n@@ -679,12 +667,8 @@ public void sync() throws IOException {\n if (closed.get() == false) {\n current.sync();\n }\n- } catch (Exception ex) {\n- try {\n- closeOnTragicEvent(ex);\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n throw ex;\n }\n }\n@@ -719,12 +703,8 @@ public boolean ensureSynced(Location location) throws IOException {\n ensureOpen();\n return current.syncUpTo(location.translogLocation + location.size);\n }\n- } catch (Exception ex) {\n- try {\n- closeOnTragicEvent(ex);\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n throw ex;\n }\n return false;\n@@ -748,14 +728,14 @@ public boolean ensureSynced(Stream<Location> locations) throws IOException {\n }\n }\n \n- private void closeOnTragicEvent(Exception ex) {\n+ private void closeOnTragicEvent(final Exception ex) {\n if (current.getTragicException() != null) {\n try {\n close();\n- } catch (AlreadyClosedException inner) {\n+ } catch (final AlreadyClosedException inner) {\n // don't do anything in this case. The AlreadyClosedException comes from TranslogWriter and we should not add it as suppressed because\n // will contain the Exception ex as cause. 
See also https://github.com/elastic/elasticsearch/issues/15941\n- } catch (Exception inner) {\n+ } catch (final Exception inner) {\n assert (ex != inner.getCause());\n ex.addSuppressed(inner);\n }\n@@ -1609,12 +1589,8 @@ public void trimUnreferencedReaders() throws IOException {\n assert readers.isEmpty() == false || current.generation == minReferencedGen :\n \"all readers were cleaned but the minReferenceGen [\" + minReferencedGen + \"] is not the current writer's gen [\" +\n current.generation + \"]\";\n- } catch (Exception ex) {\n- try {\n- closeOnTragicEvent(ex);\n- } catch (final Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeOnTragicEvent(ex);\n throw ex;\n }\n }", "filename": "server/src/main/java/org/elasticsearch/index/translog/Translog.java", "status": "modified" } ] }
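The shape of the refactoring, reduced to a self-contained sketch with hypothetical names (`IllegalStateException` standing in for Lucene's `AlreadyClosedException`, which is not on the classpath here): the tragic-close helper owns all of the suppression logic, so call sites shrink to catch, delegate, rethrow.

```java
final class TragicResource implements AutoCloseable {
    private Exception tragedy;

    void write(byte[] data) throws Exception {
        try {
            doWrite(data);
        } catch (final Exception ex) {
            closeOnTragicEvent(ex); // no nested try/catch needed at the call site any more
            throw ex;
        }
    }

    // Handles everything itself: an "already closed" failure is dropped because it
    // would carry the original exception as its cause, anything else is suppressed
    // into the exception that triggered the tragic close.
    private void closeOnTragicEvent(final Exception ex) {
        tragedy = ex;
        try {
            close();
        } catch (final IllegalStateException inner) {
            // already closed; intentionally not added as suppressed
        } catch (final Exception inner) {
            assert ex != inner.getCause();
            ex.addSuppressed(inner);
        }
    }

    private void doWrite(byte[] data) throws Exception { /* write to the channel */ }

    @Override
    public void close() { /* release the underlying channel */ }
}
```

Because the helper never lets its own cleanup failures escape, the callers' previous `try { closeOnTragicEvent(ex); } catch (Exception inner) { ex.addSuppressed(inner); }` blocks were dead code, which is exactly what the diff above removes.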
{ "body": "Today we close the translog write tragically if we experience any I/O exception on a write. These tragic closes lead to use closing the translog and failing the engine. Yet, there is one case that is missed which is when we touch the write channel during a read (checking if reading from the writer would put us past what has been flushed). This commit addresses this by closing the writer tragically if we encounter an I/O exception on the write channel while reading. This becomes interesting when we consider that this method is invoked from the engine through the translog as part of getting a document from the translog. This means we have to consider closing the translog here as well which will cascade up into us finally failing the engine.\r\n\r\nNote that there is no semantic change to, for example, primary/replica resync and recovery. These actions will take a snapshot of the translog which syncs the translog to disk. If an I/O exception occurs during the sync we already close the writer tragically and once we have synced we do not ever read past the position that was synced while taking the snapshot.\r\n\r\nCloses #29390", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-04-05T17:53:44Z" }, { "body": "This addresses the test failure in #29390 exactly because the test is asserting that the translog is closed after a tragic event occurred on the writer but because of the missed handling of an I/O exception on the write channel in the read method, the translog will not be closed by the random readOperation that was added to TranslogThread#run.", "created_at": "2018-04-05T17:55:03Z" }, { "body": "@ywelsch I pushed. I want to refactor the exception handling for `TranslogWriter#closeWithTragicEvent` and `Translog#closeOnTragicEvent` immediately after this PR. Can you take another look?", "created_at": "2018-04-06T12:50:00Z" } ], "number": 29401, "title": "Close translog writer if exception on write channel" }
{ "body": "This commit simplifies the exception handling in TranslogWriter#closeWithTragicEvent. When invoking this method, the inner close method could throw an exception which we always catch and suppress into the exception that led us to tragically close. This commit moves that repeated logic into closeWithTragicException and now callers simply need to catch, invoke closeWithTragicException, and rethrow. Note also that a catch block that was only catching `IOException` has been generalized to catch any exception for consistency with the remaining invocations of TranslogWriter#closeWithTragicEvent.\r\n\r\nRelates #29401", "number": 29412, "review_comments": [ { "body": "why not just `catch (final Exception e) {` here?", "created_at": "2018-04-06T17:26:03Z" }, { "body": "I want a compile-time error if a new checked exception crops here so that it's deliberately thought through.", "created_at": "2018-04-06T17:45:16Z" } ], "title": "Simplify TranslogWriter#closeWithTragicEvent" }
{ "commits": [ { "message": "Simplify TranslogWriter#closeWithTragicEvent\n\nThis commit simplifies the exception handling in\nTranslogWriter#closeWithTragicEvent. When invoking this method, the\ninner close method could throw an exception which we always catch and\nsuppress into the exception that led us to tragically close. This commit\nmoves that repeated logic into closeWithTragicException and now callers\nsimply need to catch, invoke closeWithTragicException, and rethrow." }, { "message": "Consistency" } ], "files": [ { "diff": "@@ -164,16 +164,20 @@ public Exception getTragicException() {\n return tragedy;\n }\n \n- private synchronized void closeWithTragicEvent(Exception exception) throws IOException {\n- assert exception != null;\n+ private synchronized void closeWithTragicEvent(final Exception ex) {\n+ assert ex != null;\n if (tragedy == null) {\n- tragedy = exception;\n- } else if (tragedy != exception) {\n+ tragedy = ex;\n+ } else if (tragedy != ex) {\n // it should be safe to call closeWithTragicEvents on multiple layers without\n // worrying about self suppression.\n- tragedy.addSuppressed(exception);\n+ tragedy.addSuppressed(ex);\n+ }\n+ try {\n+ close();\n+ } catch (final IOException | RuntimeException e) {\n+ ex.addSuppressed(e);\n }\n- close();\n }\n \n /**\n@@ -194,11 +198,7 @@ public synchronized Translog.Location add(final BytesReference data, final long\n try {\n data.writeTo(outputStream);\n } catch (final Exception ex) {\n- try {\n- closeWithTragicEvent(ex);\n- } catch (final Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ closeWithTragicEvent(ex);\n throw ex;\n }\n totalOffset += data.length();\n@@ -290,13 +290,9 @@ public TranslogReader closeIntoReader() throws IOException {\n synchronized (this) {\n try {\n sync(); // sync before we close..\n- } catch (IOException e) {\n- try {\n- closeWithTragicEvent(e);\n- } catch (Exception inner) {\n- e.addSuppressed(inner);\n- }\n- throw e;\n+ } catch (final Exception ex) {\n+ closeWithTragicEvent(ex);\n+ throw ex;\n }\n if (closed.compareAndSet(false, true)) {\n return new TranslogReader(getLastSyncedCheckpoint(), channel, path, getFirstOperationOffset());\n@@ -346,12 +342,8 @@ public boolean syncUpTo(long offset) throws IOException {\n try {\n outputStream.flush();\n checkpointToSync = getCheckpoint();\n- } catch (Exception ex) {\n- try {\n- closeWithTragicEvent(ex);\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeWithTragicEvent(ex);\n throw ex;\n }\n }\n@@ -360,12 +352,8 @@ public boolean syncUpTo(long offset) throws IOException {\n try {\n channel.force(false);\n writeCheckpoint(channelFactory, path.getParent(), checkpointToSync);\n- } catch (Exception ex) {\n- try {\n- closeWithTragicEvent(ex);\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeWithTragicEvent(ex);\n throw ex;\n }\n assert lastSyncedCheckpoint.offset <= checkpointToSync.offset :\n@@ -392,13 +380,9 @@ protected void readBytes(ByteBuffer targetBuffer, long position) throws IOExcept\n }\n }\n }\n- } catch (final IOException e) {\n- try {\n- closeWithTragicEvent(e);\n- } catch (final IOException inner) {\n- e.addSuppressed(inner);\n- }\n- throw e;\n+ } catch (final Exception ex) {\n+ closeWithTragicEvent(ex);\n+ throw ex;\n }\n // we don't have to have a lock here because we only write ahead to the file, so all writes has been complete\n // for the requested location.\n@@ -451,12 +435,8 @@ public synchronized void flush() throws 
IOException {\n try {\n ensureOpen();\n super.flush();\n- } catch (Exception ex) {\n- try {\n- closeWithTragicEvent(ex);\n- } catch (Exception inner) {\n- ex.addSuppressed(inner);\n- }\n+ } catch (final Exception ex) {\n+ closeWithTragicEvent(ex);\n throw ex;\n }\n }", "filename": "server/src/main/java/org/elasticsearch/index/translog/TranslogWriter.java", "status": "modified" } ] }
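The review exchange above concerns the deliberately narrow `catch (final IOException | RuntimeException e)` inside `closeWithTragicEvent`. A tiny illustration of that trade-off, using a hypothetical method rather than the translog code: with the multi-catch, adding a new checked exception to the called method breaks compilation and forces a conscious decision, whereas `catch (Exception e)` would silently swallow it.

```java
import java.io.IOException;

final class NarrowCatchExample {
    static void cleanup() throws IOException {
        // if this signature later grows "throws InterruptedException", the catch below
        // no longer covers it and onFailure fails to compile -- which is the point
    }

    static void onFailure(final Exception original) {
        try {
            cleanup();
        } catch (final IOException | RuntimeException e) {
            original.addSuppressed(e);
        }
    }
}
```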
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.2.3\r\n\r\n**Plugins installed**: [] D.N.A.\r\n\r\n**JVM version** (`java -version`): openjdk version \"1.8.0_161\"\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux jecstar-laptop 4.15.13-300.fc27.x86_64 #1 SMP Mon Mar 26 19:06:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**: The doEquals() method in the org.elasticsearch.index.query.QueryStringQueryBuilder is not working properly. When two different instances with different queries are compared by the equals method everything works as expected (equals() == false). \r\nWhen adding an equal timezone id to both instances of the QueryStringQueryBuilder, the equals() method will return true despite the query values are not equal. This is caused because parenthesis are missing around the statement at line 928. \r\n\r\n**Steps to reproduce**:\r\nFor a complete example see the attached gradle project.\r\n[querystringquerybug.tar.gz](https://github.com/elastic/elasticsearch/files/1881442/querystringquerybug.tar.gz)\r\n \r\n\r\n**Provide logs (if relevant)**: D.N.A.\r\n\r\n", "comments": [ { "body": "@jecstarinnovations This is indeed a bug. Thanks for reporting it.", "created_at": "2018-04-05T20:16:56Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-04-05T20:17:52Z" }, { "body": "@jecstarinnovations thanks for reporting this bug. I've raised https://github.com/elastic/elasticsearch/pull/29406 which should fix this", "created_at": "2018-04-06T07:59:46Z" }, { "body": "@jecstarinnovations thanks for raising this issue, I've merged a fix into the master branch and will back port to the 6.x branch shortly", "created_at": "2018-04-06T10:47:56Z" } ], "number": 29403, "title": "QueryStringQueryBuilder equals == true when query is not equal" }
{ "body": "This change fixes a bug where two `QueryStringQueryBuilder`s were found\r\nto be equal if they had the same timezone set even if the query string\r\nin the builders were different\r\n\r\nCloses #29403", "number": 29406, "review_comments": [ { "body": "The preceding line expects that `timeZone` might be `null`, so this line seems to risk a NPE?", "created_at": "2018-04-06T08:22:26Z" }, { "body": "Yeah, I noticed this when testing and have pushed a fix for it", "created_at": "2018-04-06T09:03:45Z" }, { "body": "nit: I find equals impls easier to read when they are symetric, eg. something like this should be correct?\r\n```\r\nObjects.equals(\r\n timeZone == null ? null : timeZone.getID(),\r\n other.timeZone == null ? null : otherTimeZone.getID())`", "created_at": "2018-04-06T09:06:48Z" }, { "body": "+1", "created_at": "2018-04-06T09:24:25Z" } ], "title": "Fixes query_string query equals timezone check" }
{ "commits": [ { "message": "Fixes query_string query equals timezone check\n\nThis change fixes a bug where two `QueryStringQueryBuilder`s were found\nto be equal if they had the same timezone set even if the query string\nin the builders were different\n\nCloses #29403" }, { "message": "Adds mutate function to QueryStringQueryBuilderTests" }, { "message": "iter" } ], "files": [ { "diff": "@@ -42,7 +42,6 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Collections;\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n@@ -358,7 +357,7 @@ public QueryStringQueryBuilder tieBreaker(float tieBreaker) {\n return this;\n }\n \n- public float tieBreaker() {\n+ public Float tieBreaker() {\n return this.tieBreaker;\n }\n \n@@ -389,6 +388,22 @@ public QueryStringQueryBuilder analyzer(String analyzer) {\n this.analyzer = analyzer;\n return this;\n }\n+ \n+ /**\n+ * The optional analyzer used to analyze the query string. Note, if a field has search analyzer\n+ * defined for it, then it will be used automatically. Defaults to the smart search analyzer.\n+ */\n+ public String analyzer() {\n+ return analyzer;\n+ }\n+\n+ /**\n+ * The optional analyzer used to analyze the query string for phrase searches. Note, if a field has search (quote) analyzer\n+ * defined for it, then it will be used automatically. Defaults to the smart search analyzer.\n+ */\n+ public String quoteAnalyzer() {\n+ return quoteAnalyzer;\n+ }\n \n /**\n * The optional analyzer used to analyze the query string for phrase searches. Note, if a field has search (quote) analyzer\n@@ -884,9 +899,10 @@ protected boolean doEquals(QueryStringQueryBuilder other) {\n Objects.equals(tieBreaker, other.tieBreaker) &&\n Objects.equals(rewrite, other.rewrite) &&\n Objects.equals(minimumShouldMatch, other.minimumShouldMatch) &&\n- Objects.equals(lenient, other.lenient) &&\n- timeZone == null ? other.timeZone == null : other.timeZone != null &&\n- Objects.equals(timeZone.getID(), other.timeZone.getID()) &&\n+ Objects.equals(lenient, other.lenient) && \n+ Objects.equals(\n+ timeZone == null ? null : timeZone.getID(), \n+ other.timeZone == null ? 
null : other.timeZone.getID()) &&\n Objects.equals(escape, other.escape) &&\n Objects.equals(maxDeterminizedStates, other.maxDeterminizedStates) &&\n Objects.equals(autoGenerateSynonymsPhraseQuery, other.autoGenerateSynonymsPhraseQuery) &&", "filename": "server/src/main/java/org/elasticsearch/index/query/QueryStringQueryBuilder.java", "status": "modified" }, { "diff": "@@ -66,7 +66,9 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.HashMap;\n import java.util.List;\n+import java.util.Map;\n \n import static org.elasticsearch.index.query.AbstractQueryBuilder.parseInnerQueryBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n@@ -172,6 +174,206 @@ protected QueryStringQueryBuilder doCreateTestQueryBuilder() {\n return queryStringQueryBuilder;\n }\n \n+ @Override\n+ public QueryStringQueryBuilder mutateInstance(QueryStringQueryBuilder instance) throws IOException {\n+ String query = instance.queryString();\n+ String defaultField = instance.defaultField();\n+ Map<String, Float> fields = instance.fields();\n+ Operator operator = instance.defaultOperator();\n+ Fuzziness fuzziness = instance.fuzziness();\n+ String analyzer = instance.analyzer();\n+ String quoteAnalyzer = instance.quoteAnalyzer();\n+ Boolean allowLeadingWildCard = instance.allowLeadingWildcard();\n+ Boolean analyzeWildcard = instance.analyzeWildcard();\n+ int maxDeterminizedStates = instance.maxDeterminizedStates();\n+ boolean enablePositionIncrements = instance.enablePositionIncrements();\n+ boolean escape = instance.escape();\n+ int phraseSlop = instance.phraseSlop();\n+ int fuzzyMaxExpansions = instance.fuzzyMaxExpansions();\n+ int fuzzyPrefixLength = instance.fuzzyPrefixLength();\n+ String fuzzyRewrite = instance.fuzzyRewrite();\n+ String rewrite = instance.rewrite();\n+ String quoteFieldSuffix = instance.quoteFieldSuffix();\n+ Float tieBreaker = instance.tieBreaker();\n+ String minimumShouldMatch = instance.minimumShouldMatch();\n+ String timeZone = instance.timeZone() == null ? null : instance.timeZone().getID();\n+ boolean autoGenerateSynonymsPhraseQuery = instance.autoGenerateSynonymsPhraseQuery();\n+ boolean fuzzyTranspositions = instance.fuzzyTranspositions();\n+\n+ switch (between(0, 23)) {\n+ case 0:\n+ query = query + \" foo\";\n+ break;\n+ case 1:\n+ if (defaultField == null) {\n+ defaultField = randomAlphaOfLengthBetween(1, 10);\n+ } else {\n+ defaultField = defaultField + randomAlphaOfLength(5);\n+ }\n+ break;\n+ case 2:\n+ fields = new HashMap<>(fields);\n+ fields.put(randomAlphaOfLength(10), 1.0f);\n+ break;\n+ case 3:\n+ operator = randomValueOtherThan(operator, () -> randomFrom(Operator.values()));\n+ break;\n+ case 4:\n+ fuzziness = randomValueOtherThan(fuzziness, () -> randomFrom(Fuzziness.AUTO, Fuzziness.ZERO, Fuzziness.ONE, Fuzziness.TWO));\n+ break;\n+ case 5:\n+ if (analyzer == null) {\n+ analyzer = randomAnalyzer();\n+ } else {\n+ analyzer = null;\n+ }\n+ break;\n+ case 6:\n+ if (quoteAnalyzer == null) {\n+ quoteAnalyzer = randomAnalyzer();\n+ } else {\n+ quoteAnalyzer = null;\n+ }\n+ break;\n+ case 7:\n+ if (allowLeadingWildCard == null) {\n+ allowLeadingWildCard = randomBoolean();\n+ } else {\n+ allowLeadingWildCard = randomBoolean() ? null : (allowLeadingWildCard == false);\n+ }\n+ break;\n+ case 8:\n+ if (analyzeWildcard == null) {\n+ analyzeWildcard = randomBoolean();\n+ } else {\n+ analyzeWildcard = randomBoolean() ? 
null : (analyzeWildcard == false);\n+ }\n+ break;\n+ case 9:\n+ maxDeterminizedStates += 5;\n+ break;\n+ case 10:\n+ enablePositionIncrements = (enablePositionIncrements == false);\n+ break;\n+ case 11:\n+ escape = (escape == false);\n+ break;\n+ case 12:\n+ phraseSlop += 5;\n+ break;\n+ case 13:\n+ fuzzyMaxExpansions += 5;\n+ break;\n+ case 14:\n+ fuzzyPrefixLength += 5;\n+ break;\n+ case 15:\n+ if (fuzzyRewrite == null) {\n+ fuzzyRewrite = getRandomRewriteMethod();\n+ } else {\n+ fuzzyRewrite = null;\n+ }\n+ break;\n+ case 16:\n+ if (rewrite == null) {\n+ rewrite = getRandomRewriteMethod();\n+ } else {\n+ rewrite = null;\n+ }\n+ break;\n+ case 17:\n+ if (quoteFieldSuffix == null) {\n+ quoteFieldSuffix = randomAlphaOfLengthBetween(1, 3);\n+ } else {\n+ quoteFieldSuffix = quoteFieldSuffix + randomAlphaOfLength(1);\n+ }\n+ break;\n+ case 18:\n+ if (tieBreaker == null) {\n+ tieBreaker = randomFloat();\n+ } else {\n+ tieBreaker += 0.05f;\n+ }\n+ break;\n+ case 19:\n+ if (minimumShouldMatch == null) {\n+ minimumShouldMatch = randomMinimumShouldMatch();\n+ } else {\n+ minimumShouldMatch = null;\n+ }\n+ break;\n+ case 20:\n+ if (timeZone == null) {\n+ timeZone = randomDateTimeZone().getID();\n+ } else {\n+ if (randomBoolean()) {\n+ timeZone = null;\n+ } else {\n+ timeZone = randomValueOtherThan(timeZone, () -> randomDateTimeZone().getID());\n+ }\n+ }\n+ break;\n+ case 21:\n+ autoGenerateSynonymsPhraseQuery = (autoGenerateSynonymsPhraseQuery == false);\n+ break;\n+ case 22:\n+ fuzzyTranspositions = (fuzzyTranspositions == false);\n+ break;\n+ case 23:\n+ return changeNameOrBoost(instance);\n+ default:\n+ throw new AssertionError(\"Illegal randomisation branch\");\n+ }\n+\n+ QueryStringQueryBuilder newInstance = new QueryStringQueryBuilder(query);\n+ if (defaultField != null) {\n+ newInstance.defaultField(defaultField);\n+ }\n+ newInstance.fields(fields);\n+ newInstance.defaultOperator(operator);\n+ newInstance.fuzziness(fuzziness);\n+ if (analyzer != null) {\n+ newInstance.analyzer(analyzer);\n+ }\n+ if (quoteAnalyzer != null) {\n+ newInstance.quoteAnalyzer(quoteAnalyzer);\n+ }\n+ if (allowLeadingWildCard != null) {\n+ newInstance.allowLeadingWildcard(allowLeadingWildCard);\n+ }\n+ if (analyzeWildcard != null) {\n+ newInstance.analyzeWildcard(analyzeWildcard);\n+ }\n+ newInstance.maxDeterminizedStates(maxDeterminizedStates);\n+ newInstance.enablePositionIncrements(enablePositionIncrements);\n+ newInstance.escape(escape);\n+ newInstance.phraseSlop(phraseSlop);\n+ newInstance.fuzzyMaxExpansions(fuzzyMaxExpansions);\n+ newInstance.fuzzyPrefixLength(fuzzyPrefixLength);\n+ if (fuzzyRewrite != null) {\n+ newInstance.fuzzyRewrite(fuzzyRewrite);\n+ }\n+ if (rewrite != null) {\n+ newInstance.rewrite(rewrite);\n+ }\n+ if (quoteFieldSuffix != null) {\n+ newInstance.quoteFieldSuffix(quoteFieldSuffix);\n+ }\n+ if (tieBreaker != null) {\n+ newInstance.tieBreaker(tieBreaker);\n+ }\n+ if (minimumShouldMatch != null) {\n+ newInstance.minimumShouldMatch(minimumShouldMatch);\n+ }\n+ if (timeZone != null) {\n+ newInstance.timeZone(timeZone);\n+ }\n+ newInstance.autoGenerateSynonymsPhraseQuery(autoGenerateSynonymsPhraseQuery);\n+ newInstance.fuzzyTranspositions(fuzzyTranspositions);\n+\n+ return newInstance;\n+ }\n+\n @Override\n protected void doAssertLuceneQuery(QueryStringQueryBuilder queryBuilder,\n Query query, SearchContext context) throws IOException {\n@@ -182,6 +384,16 @@ protected void doAssertLuceneQuery(QueryStringQueryBuilder queryBuilder,\n .or(instanceOf(MatchNoDocsQuery.class)));\n }\n \n+ // Tests 
fix for https://github.com/elastic/elasticsearch/issues/29403\n+ public void testTimezoneEquals() {\n+ QueryStringQueryBuilder builder1 = new QueryStringQueryBuilder(\"bar\");\n+ QueryStringQueryBuilder builder2 = new QueryStringQueryBuilder(\"foo\");\n+ assertNotEquals(builder1, builder2);\n+ builder1.timeZone(\"Europe/London\");\n+ builder2.timeZone(\"Europe/London\");\n+ assertNotEquals(builder1, builder2);\n+ }\n+\n public void testIllegalArguments() {\n expectThrows(IllegalArgumentException.class, () -> new QueryStringQueryBuilder((String) null));\n }", "filename": "server/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.test;\n \n import com.fasterxml.jackson.core.io.JsonStringEncoder;\n+\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n@@ -55,7 +56,6 @@\n import org.elasticsearch.common.xcontent.DeprecationHandler;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.ToXContent;\n-import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentGenerator;\n@@ -742,10 +742,14 @@ public void testEqualsAndHashcode() {\n for (int runs = 0; runs < NUMBER_OF_TESTQUERIES; runs++) {\n // TODO we only change name and boost, we should extend by any sub-test supplying a \"mutate\" method that randomly changes one\n // aspect of the object under test\n- checkEqualsAndHashCode(createTestQueryBuilder(), this::copyQuery, this::changeNameOrBoost);\n+ checkEqualsAndHashCode(createTestQueryBuilder(), this::copyQuery, this::mutateInstance);\n }\n }\n \n+ public QB mutateInstance(QB instance) throws IOException {\n+ return changeNameOrBoost(instance);\n+ }\n+\n /**\n * Generic test that checks that the <code>Strings.toString()</code> method\n * renders the XContent correctly.\n@@ -761,7 +765,7 @@ public void testValidOutput() throws IOException {\n }\n }\n \n- private QB changeNameOrBoost(QB original) throws IOException {\n+ protected QB changeNameOrBoost(QB original) throws IOException {\n QB secondQuery = copyQuery(original);\n if (randomBoolean()) {\n secondQuery.queryName(secondQuery.queryName() == null ? randomAlphaOfLengthBetween(1, 30) : secondQuery.queryName()", "filename": "test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java", "status": "modified" } ] }
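The root cause is a classic operator-precedence trap: `&&` binds tighter than `?:`, so without parentheses everything before the ternary became its condition and the later comparisons were silently moved into one of its branches. Here is a standalone illustration (not the builder itself) using just a query string and a time zone.

```java
import java.util.Objects;
import java.util.TimeZone;

final class EqualsPrecedencePitfall {

    // Parsed as: (Objects.equals(query, otherQuery) && tz == null)
    //                ? (otherTz == null)
    //                : (otherTz != null && Objects.equals(tz.getID(), otherTz.getID()))
    // so when the queries differ and both time zones are set, the whole expression
    // collapses to the time-zone comparison and still returns true.
    static boolean buggyEquals(String query, String otherQuery, TimeZone tz, TimeZone otherTz) {
        return Objects.equals(query, otherQuery) &&
            tz == null ? otherTz == null : otherTz != null &&
            Objects.equals(tz.getID(), otherTz.getID());
    }

    // Null-safe and symmetric, as suggested in the review.
    static boolean fixedEquals(String query, String otherQuery, TimeZone tz, TimeZone otherTz) {
        return Objects.equals(query, otherQuery) &&
            Objects.equals(tz == null ? null : tz.getID(),
                           otherTz == null ? null : otherTz.getID());
    }

    public static void main(String[] args) {
        TimeZone london = TimeZone.getTimeZone("Europe/London");
        System.out.println(buggyEquals("bar", "foo", london, london)); // true, despite different queries
        System.out.println(fixedEquals("bar", "foo", london, london)); // false
    }
}
```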
{ "body": "Today we have a silent batch mode in the install plugin command when standard input is closed or there is no tty. It appears that historically this was useful when running tests where we want to accept plugin permissions without having to acknowledge them. Now that we have an explicit batch mode flag, this use-case is removed. The motivation for removing this now is that there is another place where silent batch mode arises and that is when a user attempts to install a plugin inside a Docker container without keeping standard input open and attaching a tty. In this case, the install plugin command will treat the situation as a silent batch mode and therefore the user will never have the chance to acknowledge the additional permissions required by a plugin. This commit removes this silent batch mode in favor of using the --batch flag when running tests and requiring the user to take explicit action to acknowledge the additional permissions (either by leaving standard input open and attaching a tty, or by passing the --batch flags themselves).\r\n\r\nNote that with this change the user will now see a null pointer exception when they try to install a plugin in a Docker container without keeping standard input open and attaching a tty. This will be addressed in an immediate follow-up, but because the implications of that change are larger, they should be handled separately from this one.\r\n\r\n", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-04-03T20:45:42Z" }, { "body": "run packaging tests", "created_at": "2018-04-03T20:49:49Z" }, { "body": "retest this please ", "created_at": "2018-04-03T22:37:02Z" }, { "body": "Thanks @rjernst.", "created_at": "2018-04-04T01:47:09Z" } ], "number": 29359, "title": "Remove silent batch mode from install plugin" }
{ "body": "A previous change removed silent batch mode for the plugin installer. This commit adds a note to the migration docs regarding this change.\r\n\r\nRelates #29359 \r\n", "number": 29365, "review_comments": [], "title": "Add note to migration docs on silent batch mode" }
{ "commits": [ { "message": "Add note to migration docs on silent batch mode\n\nA previous change removed silent batch mode for the plugin\ninstaller. This commit adds a note to the migration docs regarding this\nchange." }, { "message": "Fix stray characters" }, { "message": "Merge branch '6.x' into silent-batch-mode-migration\n\n* 6.x:\n Move testMappingConflictRootCause to different class\n Enhance error for out of bounds byte size settings (#29338)" }, { "message": "Merge branch '6.x' into silent-batch-mode-migration\n\n* 6.x:\n Improve similarity integration. (#29187)\n Fix some query extraction bugs. (#29283)\n Fixed quote_field_suffix in query_string (#29332)\n TEST: Update negative byte size setting error msg\n Fix bwc in GeoDistanceQuery serialization (#29325)" } ], "files": [ { "diff": "@@ -17,4 +17,15 @@ See {plugins}/repository-gcs-client.html#repository-gcs-client[Google Cloud Stor\n save disk space. As a consequence, database files had to be loaded in memory. Now the default database files\n that are stored uncompressed as `.mmdb` files which allows to memory-map them and save heap memory. Any\n custom database files must also be stored uncompressed. Consequently, the `database_file` property in any\n-ingest pipelines that use the Geoip Processor must refer to the uncompressed database files as well.\n\\ No newline at end of file\n+ingest pipelines that use the Geoip Processor must refer to the uncompressed database files as well.\n+\n+==== Using the plugin installer without a TTY\n+\n+The Elasticsearch plugin installer (`elasticsearch-plugin install`) would\n+previously silently accept additional security permissions required by a plugin\n+if standard input was closed or there was no TTY attached (e.g., `docker exec\n+<container ID> elasticsearch-plugin install`). This silent accepting of\n+additional security permissions has been removed. Now, a user must deliberately\n+accept these permissions either by keeping standard input open and attaching a\n+TTY (i.e., using interactive mode to accept the permissions), or by passing the\n+`--batch` flag.", "filename": "docs/reference/migration/migrate_6_3.asciidoc", "status": "modified" } ] }
{ "body": "In #28350, we fixed an endless flushing loop which may happen on replicas by tightening the relation between the flush action and the periodically flush condition.\r\n\r\n1. The periodically flush condition is enabled only if it is disabled after a flush.\r\n\r\n2. If the periodically flush condition is enabled then a flush will actually happen regardless of Lucene state.\r\n\r\n(1) and (2) guarantee a flushing loop will be terminated. Sadly, the condition 1 can be violated in edge cases as we used two different algorithms to evaluate the current and future uncommitted translog size.\r\n\r\n- We use method `uncommittedSizeInBytes` to calculate current uncommitted size. It is the sum of translogs whose generation at least the minGen (determined by a given seqno). We pick a continuous range of translogs since the minGen to evaluate the current uncommitted size.\r\n\r\n- We use method `sizeOfGensAboveSeqNoInBytes` to calculate the future uncommitted size. It is the sum of translogs whose maxSeqNo at least the given seqNo. Here we don't pick a range but select translog one by one.\r\n\r\nSuppose we have 3 translogs `gen1={#1,#2}, gen2={}, gen3={#3} and seqno=#1`, `uncommittedSizeInBytes` is the sum of gen1, gen2, and gen3 while `sizeOfGensAboveSeqNoInBytes` is the sum of gen1 and gen3. Gen2 is excluded because its maxSeqno is still -1.\r\n\r\nThis commit removes both `sizeOfGensAboveSeqNoInBytes` and `uncommittedSizeInBytes` methods, then enforces an engine to use only `sizeInBytesByMinGen` method to evaluate the periodically flush condition.\r\n\r\nCloses #29097\r\nRelates ##28350", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-03-17T14:35:09Z" }, { "body": "@bleskes \r\n> I suggest removing the translog's uncommittedOps and uncommittedBytes and only expose the sizeInBytesByMinGen and a new equivalent totalOpsByMinGen.\r\n\r\nYes, I will definitely give this a try.\r\n> I also think we can have a stronger test - set the flush threshold to something smallish and the randomly index and flush (both based on a periodic check and just because). After each flush we should check that shouldPeriodicallyFlush returns false.\r\n\r\nOk, I will add a new test for this.", "created_at": "2018-03-19T13:53:01Z" }, { "body": "@bleskes I've removed both uncommittedOps and uncommittedBytes methods from translog and added a stress test for shouldPeriodicallyFlush. This test was failed around 25% without the patch. Please have a look, thank you!", "created_at": "2018-03-19T23:48:02Z" }, { "body": "@bleskes I've addressed your comments. Can you have another look? Thank you!", "created_at": "2018-03-20T22:36:06Z" }, { "body": "@bleskes Can you take another look?", "created_at": "2018-03-21T22:37:24Z" }, { "body": "> PS - as far as I can tell this can only happen if the translog generation file size limit is close or above to the flush threshold and we can end up here if one indexes faster then it takes to roll generations. If that's true, can you add this to the comment? This just too subtle but I can't see how to avoid it.\r\n\r\nWe can avoid if the default generation is not a factor of the flush threshold (eg. adding an empty translog + N * generations != flush). WDYT?", "created_at": "2018-03-22T12:53:30Z" }, { "body": "> We can avoid if the default generation is not a factor of the flush threshold (eg. adding an empty translog + N * generations != flush). WDYT?\r\n\r\nI'm not sure I follow. 
Can you please unpack this?", "created_at": "2018-03-22T12:56:36Z" }, { "body": "@bleskes What happened may be slightly different from your statement. I think an endless loop may have occurred when the uncommitted size is close to the flush threshold, the current generation is also close to the generation threshold, and a faster operation rolls a new generation, then a slower operation gets into an endless loop. If this is the case, the size of N translog files and an empty translog satisfies these conditions:\r\n\r\n- Size of N translog files <= the flush threshold\r\n- Size of N translog files + an empty translog file > the flush threshold\r\n\r\nAssuming that there was no manual flush, these conditions should not be satisfied at the same time if the generation size is not a factor of the flush threshold.", "created_at": "2018-03-22T13:42:19Z" }, { "body": "Discussed with Boaz on another channel. My last comment is not valid as it's based on the old code while Boaz's on the new code. I've updated comment in https://github.com/elastic/elasticsearch/pull/29125/commits/42402417bd6ae1b39533571bb4256167feeb17f0.", "created_at": "2018-03-22T16:38:57Z" }, { "body": "@bleskes Thank you very much for your helpful reviews.", "created_at": "2018-03-22T18:23:24Z" } ], "number": 29125, "title": "Harden periodically check to avoid endless flush loop" }
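A toy model of the discrepancy the PR description walks through, using plain records rather than the actual `Translog` code: one estimate sums a contiguous range of generations starting at the minimum generation for a sequence number, the other sums only generations whose `maxSeqNo` is at least that sequence number, which skips empty generations and can therefore report a smaller size.

```java
import java.util.List;

final class TranslogSizeEstimates {
    record Gen(long maxSeqNo, long sizeInBytes) {} // maxSeqNo == -1 means the generation is empty

    // "current" style: find the first generation containing an operation >= seqNo,
    // then count that generation and every later one as a contiguous range
    static long sizeByMinGen(List<Gen> gens, long seqNo) {
        int minGen = 0;
        while (minGen < gens.size() && gens.get(minGen).maxSeqNo() < seqNo) {
            minGen++;
        }
        long size = 0;
        for (int i = minGen; i < gens.size(); i++) {
            size += gens.get(i).sizeInBytes();
        }
        return size;
    }

    // "future" style: pick generations one by one, so an empty generation in the middle is skipped
    static long sizeOfGensAboveSeqNo(List<Gen> gens, long seqNo) {
        long size = 0;
        for (Gen gen : gens) {
            if (gen.maxSeqNo() >= seqNo) {
                size += gen.sizeInBytes();
            }
        }
        return size;
    }

    public static void main(String[] args) {
        // gen1 = {#1, #2}, gen2 = {}, gen3 = {#3}, seqNo = #1 -- the example from the PR description
        List<Gen> gens = List.of(new Gen(2, 100), new Gen(-1, 55), new Gen(3, 100));
        System.out.println(sizeByMinGen(gens, 1));         // 255: gen1 + gen2 + gen3
        System.out.println(sizeOfGensAboveSeqNo(gens, 1)); // 200: gen2 is excluded
    }
}
```

Because the two estimates can disagree (in the example, by the size of the empty generation), a flush could complete while the periodic-flush condition stayed true; the PR resolves this by making the engine evaluate the condition with the min-generation calculation only.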
{ "body": "Currently, a flush stats contains only the total flush which is the sum\r\nof manual flush (via API) and periodic flush (async triggered when the\r\nuncommitted translog size is exceeded the flush threshold). Sometimes,\r\nit's useful to know these two numbers independently. This commit tracks\r\nand returns the periodic flush count in a flush stats.\r\n\r\nRelates #29125", "number": 29360, "review_comments": [ { "body": "can we have a rest test that checks that this is there?", "created_at": "2018-04-04T12:35:57Z" }, { "body": "@s1monw \r\nI had a test but I could not verify the periodic value as the periodic flush is executed async. Do you have any suggestion for this? Or just check its presence (eg. 0 is ok)?", "created_at": "2018-04-04T12:48:55Z" }, { "body": "I used the analogue of `gt: { periodic: 0}` when I did something similar.", "created_at": "2018-04-04T13:14:24Z" }, { "body": "Thanks @DaveCTurner. Unfortunately, sometimes a period flush might still be executing (or just scheduled) and `periodic` is still 0.", "created_at": "2018-04-04T13:18:32Z" }, { "body": "Apologies, I meant `gte: { periodic: 0 }`", "created_at": "2018-04-04T13:20:13Z" }, { "body": "yeah ++", "created_at": "2018-04-04T14:51:29Z" }, { "body": "I will go this suggestion. Thanks @DaveCTurner and @s1monw.", "created_at": "2018-04-04T15:11:20Z" } ], "title": "Add periodic flush count to flush stats" }
{ "commits": [ { "message": "Add periodic flush count to flush stats\n\nCurrently, a flush stats contains only the total flush which is the sum\nof manual flush (via API) and periodic flush (async triggered when the\nuncommitted translog size is exceeded the flush threshold). Sometimes,\nit's useful to know these two numbers independently. This commit tracks\nand returns a periodic flush count in a flush stats." }, { "message": "Merge branch 'master' into periodic-flushstats" }, { "message": "add a rest test" }, { "message": "Merge branch 'master' into periodic-flushstats" } ], "files": [ { "diff": "@@ -21,3 +21,34 @@\n indices.stats: {level: shards}\n \n - is_true: indices.testing.shards.0.0.commit.user_data.sync_id\n+\n+---\n+\"Flush stats\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: periodic flush stats is introduced in 7.0\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ settings:\n+ number_of_shards: 1\n+ index.translog.flush_threshold_size: 160b\n+ - do:\n+ indices.flush:\n+ index: test\n+ - do:\n+ indices.stats: { index: test }\n+ - match: { indices.test.primaries.flush.periodic: 0 }\n+ - match: { indices.test.primaries.flush.total: 1 }\n+ - do:\n+ index:\n+ index: test\n+ type: doc\n+ id: 1\n+ body: { \"message\": \"a long message to make a periodic flush happen after this index operation\" }\n+ - do:\n+ indices.stats: { index: test }\n+ # periodic flush is async\n+ - gte: { indices.test.primaries.flush.periodic: 0 }\n+ - gte: { indices.test.primaries.flush.total: 1 }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.flush/10_basic.yml", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.flush;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n@@ -31,20 +32,22 @@\n public class FlushStats implements Streamable, ToXContentFragment {\n \n private long total;\n-\n+ private long periodic;\n private long totalTimeInMillis;\n \n public FlushStats() {\n \n }\n \n- public FlushStats(long total, long totalTimeInMillis) {\n+ public FlushStats(long total, long periodic, long totalTimeInMillis) {\n this.total = total;\n+ this.periodic = periodic;\n this.totalTimeInMillis = totalTimeInMillis;\n }\n \n- public void add(long total, long totalTimeInMillis) {\n+ public void add(long total, long periodic, long totalTimeInMillis) {\n this.total += total;\n+ this.periodic += periodic;\n this.totalTimeInMillis += totalTimeInMillis;\n }\n \n@@ -57,6 +60,7 @@ public void addTotals(FlushStats flushStats) {\n return;\n }\n this.total += flushStats.total;\n+ this.periodic += flushStats.periodic;\n this.totalTimeInMillis += flushStats.totalTimeInMillis;\n }\n \n@@ -67,6 +71,13 @@ public long getTotal() {\n return this.total;\n }\n \n+ /**\n+ * The number of flushes that were periodically triggered when translog exceeded the flush threshold.\n+ */\n+ public long getPeriodic() {\n+ return periodic;\n+ }\n+\n /**\n * The total time merges have been executed (in milliseconds).\n */\n@@ -85,6 +96,7 @@ public TimeValue getTotalTime() {\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(Fields.FLUSH);\n builder.field(Fields.TOTAL, total);\n+ builder.field(Fields.PERIODIC, periodic);\n builder.humanReadableField(Fields.TOTAL_TIME_IN_MILLIS, Fields.TOTAL_TIME, getTotalTime());\n builder.endObject();\n return 
builder;\n@@ -93,6 +105,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n static final class Fields {\n static final String FLUSH = \"flush\";\n static final String TOTAL = \"total\";\n+ static final String PERIODIC = \"periodic\";\n static final String TOTAL_TIME = \"total_time\";\n static final String TOTAL_TIME_IN_MILLIS = \"total_time_in_millis\";\n }\n@@ -101,11 +114,17 @@ static final class Fields {\n public void readFrom(StreamInput in) throws IOException {\n total = in.readVLong();\n totalTimeInMillis = in.readVLong();\n+ if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ periodic = in.readVLong();\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n out.writeVLong(total);\n out.writeVLong(totalTimeInMillis);\n+ if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ out.writeVLong(periodic);\n+ }\n }\n }", "filename": "server/src/main/java/org/elasticsearch/index/flush/FlushStats.java", "status": "modified" }, { "diff": "@@ -57,6 +57,7 @@\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.metrics.CounterMetric;\n import org.elasticsearch.common.metrics.MeanMetric;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -208,6 +209,7 @@ Runnable getGlobalCheckpointSyncer() {\n private final RecoveryStats recoveryStats = new RecoveryStats();\n private final MeanMetric refreshMetric = new MeanMetric();\n private final MeanMetric flushMetric = new MeanMetric();\n+ private final CounterMetric periodicFlushMetric = new CounterMetric();\n \n private final ShardEventListener shardEventListener = new ShardEventListener();\n \n@@ -827,7 +829,7 @@ public RefreshStats refreshStats() {\n }\n \n public FlushStats flushStats() {\n- return new FlushStats(flushMetric.count(), TimeUnit.NANOSECONDS.toMillis(flushMetric.sum()));\n+ return new FlushStats(flushMetric.count(), periodicFlushMetric.count(), TimeUnit.NANOSECONDS.toMillis(flushMetric.sum()));\n }\n \n public DocsStats docStats() {\n@@ -2344,6 +2346,7 @@ public void onFailure(final Exception e) {\n @Override\n protected void doRun() throws IOException {\n flush(new FlushRequest());\n+ periodicFlushMetric.inc();\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.index.shard;\n \n import org.apache.lucene.store.LockObtainFailedException;\n-import org.elasticsearch.core.internal.io.IOUtils;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n@@ -42,6 +41,7 @@\n import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.cluster.service.ClusterService;\n import org.elasticsearch.common.CheckedRunnable;\n+import org.elasticsearch.common.UUIDs;\n import org.elasticsearch.common.breaker.CircuitBreaker;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.lucene.uid.Versions;\n@@ -50,6 +50,7 @@\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.core.internal.io.IOUtils;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.env.NodeEnvironment;\n import 
org.elasticsearch.env.ShardLock;\n@@ -102,6 +103,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoSearchHits;\n+import static org.hamcrest.Matchers.allOf;\n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n@@ -347,6 +349,7 @@ public void testMaybeFlush() throws Exception {\n .setRefreshPolicy(randomBoolean() ? IMMEDIATE : NONE).get();\n assertBusy(() -> { // this is async\n assertFalse(shard.shouldPeriodicallyFlush());\n+ assertThat(shard.flushStats().getPeriodic(), greaterThan(0L));\n });\n assertEquals(0, translog.stats().getUncommittedOperations());\n translog.sync();\n@@ -444,8 +447,12 @@ public void testStressMaybeFlushOrRollTranslogGeneration() throws Exception {\n if (flush) {\n final FlushStats flushStats = shard.flushStats();\n final long total = flushStats.getTotal();\n+ final long periodic = flushStats.getPeriodic();\n client().prepareIndex(\"test\", \"test\", \"1\").setSource(\"{}\", XContentType.JSON).get();\n- check = () -> assertEquals(total + 1, shard.flushStats().getTotal());\n+ check = () -> {\n+ assertThat(shard.flushStats().getTotal(), equalTo(total + 1));\n+ assertThat(shard.flushStats().getPeriodic(), equalTo(periodic + 1));\n+ };\n } else {\n final long generation = shard.getEngine().getTranslog().currentFileGeneration();\n client().prepareIndex(\"test\", \"test\", \"1\").setSource(\"{}\", XContentType.JSON).get();\n@@ -461,6 +468,30 @@ public void testStressMaybeFlushOrRollTranslogGeneration() throws Exception {\n check.run();\n }\n \n+ public void testFlushStats() throws Exception {\n+ final IndexService indexService = createIndex(\"test\");\n+ ensureGreen();\n+ Settings settings = Settings.builder().put(\"index.translog.flush_threshold_size\", \"\" + between(200, 300) + \"b\").build();\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(settings).get();\n+ final int numDocs = between(10, 100);\n+ for (int i = 0; i < numDocs; i++) {\n+ client().prepareIndex(\"test\", \"doc\", Integer.toString(i)).setSource(\"{}\", XContentType.JSON).get();\n+ }\n+ // A flush stats may include the new total count but the old period count - assert eventually.\n+ assertBusy(() -> {\n+ final FlushStats flushStats = client().admin().indices().prepareStats(\"test\").clear().setFlush(true).get().getTotal().flush;\n+ assertThat(flushStats.getPeriodic(), allOf(equalTo(flushStats.getTotal()), greaterThan(0L)));\n+ });\n+ assertBusy(() -> assertThat(indexService.getShard(0).shouldPeriodicallyFlush(), equalTo(false)));\n+ settings = Settings.builder().put(\"index.translog.flush_threshold_size\", (String) null).build();\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(settings).get();\n+\n+ client().prepareIndex(\"test\", \"doc\", UUIDs.randomBase64UUID()).setSource(\"{}\", XContentType.JSON).get();\n+ client().admin().indices().prepareFlush(\"test\").setForce(randomBoolean()).setWaitIfOngoing(true).get();\n+ final FlushStats flushStats = client().admin().indices().prepareStats(\"test\").clear().setFlush(true).get().getTotal().flush;\n+ assertThat(flushStats.getTotal(), greaterThan(flushStats.getPeriodic()));\n+ }\n+\n public void testShardHasMemoryBufferOnTranslogRecover() throws Throwable {\n createIndex(\"test\");\n ensureGreen();", 
"filename": "server/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**\r\n6.2.3\r\n\r\n**Plugins installed**: []\r\nX-Pack\r\n\r\n**JVM version** (`java -version`):\r\n1.8.0_162\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nQuery string query with `quote_field_suffix` not using the correct field for search. Simple query string query seems to work correctly. Also, this works fine in version 5.6 and 6.0 alpha 2\r\n\r\n**Steps to reproduce**:\r\n\r\n```\r\nPUT testindex\r\n{\r\n \"settings\": {\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"english_exact\": {\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"lowercase\"\r\n ]\r\n }\r\n }\r\n },\r\n \"number_of_shards\": 1,\r\n \"number_of_replicas\": 0\r\n },\r\n \"mappings\": {\r\n \"type\": {\r\n \"properties\": {\r\n \"body\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"english\",\r\n \"fields\": {\r\n \"exact\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"english_exact\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\nPUT testindex/type/1\r\n{\r\n \"body\": \"Ski resort\"\r\n}\r\n\r\nPUT testindex/type/2\r\n{\r\n \"body\": \"A pair of skis\"\r\n}\r\n\r\nPUT testindex/type/3\r\n{\r\n \"body\": \"ski boots\"\r\n}\r\n\r\nPOST testindex/_refresh\r\n\r\nGET testindex/_search <-- works fine \r\n{\r\n \"query\": {\r\n \"simple_query_string\": {\r\n \"fields\": [ \"body\" ],\r\n \"quote_field_suffix\": \".exact\",\r\n \"query\": \"\\\"ski boot\\\"\"\r\n }\r\n }\r\n}\r\n\r\nGET testindex/_search\r\n{\r\n \"query\": {\r\n \"query_string\": {\r\n \"fields\": [ \"body\" ],\r\n \"quote_field_suffix\": \".exact\",\r\n \"query\": \"\\\"ski boot\\\"\"\r\n }\r\n }\r\n}\r\n\r\nGET testindex/_validate/query?rewrite \r\n{\r\n \"query\": {\r\n \"query_string\": {\r\n \"fields\": [ \"body\" ],\r\n \"quote_field_suffix\": \".exact\",\r\n \"query\": \"\\\"ski boot\\\"\"\r\n }\r\n }\r\n}\r\n```\r\n\r\nThe result of the rewrite seems to drop the `exact` multi-field.\r\n", "comments": [ { "body": "Hi Sherry, thank you for filing the bug. Is there an ETA on this? One of our feature delivery is dependent on this.\r\n\r\nIs there an alternate interim solution?", "created_at": "2018-03-30T21:58:34Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-30T22:12:21Z" }, { "body": "Does anyone know when this will be released officially?", "created_at": "2018-04-13T17:44:13Z" }, { "body": "@imranazad It should be included in 6.3.0, but as a matter of policy we do not provide release dates.", "created_at": "2018-04-13T17:54:02Z" }, { "body": "@jasontedor Thanks, I've downloaded the version (6.2.3) from here: https://www.elastic.co/downloads/elasticsearch\r\n\r\nBut the issue persists, I've noticed there are various references to 6.3.0 but I can't seem to figure out where to download it from. \r\n\r\nCan you please advise?", "created_at": "2018-04-13T22:11:03Z" }, { "body": "6.3.0 is not released yet; that’s why I say we do not provide release dates.", "created_at": "2018-04-13T23:12:04Z" }, { "body": "@jasontedor Ah I see, thanks I realised that afterwards. I'll keep an eye out.", "created_at": "2018-04-14T10:38:33Z" }, { "body": "You're welcome; sorry for the previous confusion.", "created_at": "2018-04-14T12:12:01Z" } ], "number": 29324, "title": "Query String Query with quote_field_suffix not using the quote_field_suffix field to search" }
{ "body": "This change fixes the handling of the `quote_field_suffix` option on `query_string`\r\n query. The expansion was not applied to default fields query.\r\n\r\nCloses #29324\r\n", "number": 29332, "review_comments": [ { "body": "When looking at how the field suffix is handled in SimpleQueryStringQuery#newPhraseQuery I saw there is a helper called QueryParserHelper.resolveMappingFields which might be reused here. At least the new unit tests pass when using that helper. It does a little more than the code here, so I'm not sure if that would be a good idea though.", "created_at": "2018-04-03T11:38:25Z" }, { "body": "Good call, I pushed https://github.com/elastic/elasticsearch/pull/29332/commits/0abf44c2e0dded87f69b1fdf67dc7820f13cb433", "created_at": "2018-04-04T06:32:47Z" }, { "body": "Thanks, thats what I tried locally. I wasn't sure it works in this case though because it also seems to resolve some regex patterns around wildcards etc... and I thought maybe this isn't wanted here, but glad it can be reused.", "created_at": "2018-04-04T07:45:36Z" } ], "title": "Fixed quote_field_suffix in query_string" }
{ "commits": [ { "message": "Fixed quote_field_suffix in query_string\n\nThis change fixes the handling of the `quote_field_suffix` option on `query_string`\n query. The expansion was not applied to default fields query.\n\nCloses #29324" }, { "message": "address review" }, { "message": "Merge branch 'master' into query_string_quote_field_suffix" }, { "message": "Merge branch 'master' into query_string_quote_field_suffix" } ], "files": [ { "diff": "@@ -66,6 +66,7 @@\n import static org.elasticsearch.common.lucene.search.Queries.newLenientFieldQuery;\n import static org.elasticsearch.common.lucene.search.Queries.newUnmappedFieldQuery;\n import static org.elasticsearch.index.search.QueryParserHelper.resolveMappingField;\n+import static org.elasticsearch.index.search.QueryParserHelper.resolveMappingFields;\n \n /**\n * A {@link XQueryParser} that uses the {@link MapperService} in order to build smarter\n@@ -264,6 +265,8 @@ private Map<String, Float> extractMultiFields(String field, boolean quoted) {\n // Filters unsupported fields if a pattern is requested\n // Filters metadata fields if all fields are requested\n return resolveMappingField(context, field, 1.0f, !allFields, !multiFields, quoted ? quoteFieldSuffix : null);\n+ } else if (quoted && quoteFieldSuffix != null) {\n+ return resolveMappingFields(context, fieldsAndWeights, quoteFieldSuffix);\n } else {\n return fieldsAndWeights;\n }", "filename": "server/src/main/java/org/elasticsearch/index/search/QueryStringQueryParser.java", "status": "modified" }, { "diff": "@@ -1040,6 +1040,37 @@ public void testQuoteAnalyzer() throws Exception {\n assertEquals(expectedQuery, query);\n }\n \n+ public void testQuoteFieldSuffix() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ QueryShardContext context = createShardContext();\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")),\n+ new QueryStringQueryBuilder(\"bar\")\n+ .quoteFieldSuffix(\"_2\")\n+ .field(STRING_FIELD_NAME)\n+ .doToQuery(context)\n+ );\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME_2, \"bar\")),\n+ new QueryStringQueryBuilder(\"\\\"bar\\\"\")\n+ .quoteFieldSuffix(\"_2\")\n+ .field(STRING_FIELD_NAME)\n+ .doToQuery(context)\n+ );\n+\n+ // Now check what happens if the quote field does not exist\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")),\n+ new QueryStringQueryBuilder(\"bar\")\n+ .quoteFieldSuffix(\".quote\")\n+ .field(STRING_FIELD_NAME)\n+ .doToQuery(context)\n+ );\n+ assertEquals(new TermQuery(new Term(STRING_FIELD_NAME, \"bar\")),\n+ new QueryStringQueryBuilder(\"\\\"bar\\\"\")\n+ .quoteFieldSuffix(\".quote\")\n+ .field(STRING_FIELD_NAME)\n+ .doToQuery(context)\n+ );\n+ }\n+\n public void testToFuzzyQuery() throws Exception {\n assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n ", "filename": "server/src/test/java/org/elasticsearch/index/query/QueryStringQueryBuilderTests.java", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nTested on both 5.6 and 6.2\r\n\r\n**Plugins installed**: [ None ]\r\n\r\n**JVM version** (`java -version`): 1.8.0_31-b13 \r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Using the Dockerized version from https://quay.io/repository/pires/docker-elasticsearch\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n(This is pretty much just a copy of my original description at: https://discuss.elastic.co/t/using-java-high-level-rest-client-does-not-auto-retry-bulk-requests/121724)\r\n\r\nIf the `BulkProcessor` is made to use the High Level Rest Client to issue requests, it is unable to issue retries even though it passes through the `Retry` handler's `canRetry` logic\r\n\r\n**Steps to reproduce**:\r\n\r\n1. Create an index in ES\r\n\r\n1. Configure the thread_pool to limit the bulk requests and make more likely a rejection. In my case I used a very small queue size of 2\r\n\r\n1. To demonstrate \"expected\" functionality, create a `BulkProcessor` which submits requests using `TransportClient`\r\n\r\n`BulkProcessor.builder(client, listener);`\r\n\r\n1. Submit requests which result in rejection. You will find that the resulting `BulkResponse` does not contain failures (unless `BackoffPolicy` was exhausted), however querying the /_cat/thread_pool will show rejections, and the document count should have went up based on the total submitted, indicating all documents eventually made it via retries. \r\n\r\n1. Create a BulkProcessor which submits requests using the High Level Rest client's client.bulkAsyncmethod:\r\n\r\n ```java\r\n BulkProcessor.Builder builder(BulkProcessor.Listener listener) {\r\n return BulkProcessor.builder(client::bulkAsync, listener);\r\n }\r\n\r\n1. Submit requests at a rate to create rejection \r\n\r\n1. Perform the same set of inserts, you will find that the `BulkResponse` contains failures and the individual `Failure` objects have an ElasticsearchException which contain \"type=es_rejected_execution_exception\"\r\n\r\n**Additional Notes**\r\nI think the \"root\" cause is that with the High Level Rest Client, the `ElasticsearchException` that is extracted is not one of the sub-types such as `EsRejectedExceptionException` (_this is actually documented behavior in the `fromXContent` method of ElasticsearchException_)\r\n\r\nI made a naive attempt to modify `fromXContent` to return the correct typed `ElasticsearchException`, but in its current form this results in a deadlock during retry attempts due (I think) to the synchronization that occurs in `BulkProcessor`. You can make it work by setting high enough concurrency but this is a workaround. \r\n\r\nProbably not relevant (except for anyone that might stumble on this same issue): We are using Apache Flink with an Elasticsearch sink. 
We identified this issue during attempts to upgrade from ES 5.6 to 6.2 to get additional features. However Flink's pending ES6 support is High Level Rest client based, and does not include TransportClient support for 6.2. It has code attempting to perform retries but it is never triggered due to the same issue with typed exceptions (and in fact, would deadlock in any case). \r\n\r\n", "comments": [ { "body": "Ouch, I was looking into using the High Level Rest Client as well but this sounds like blocker if it won't retry on errors. @javanna I see an \"adoptme\" label does it mean Elastic is not going to pro-actively look into this? Is there any recommended workaround?", "created_at": "2018-03-08T16:49:17Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-09T04:33:44Z" }, { "body": "> I see an \"adoptme\" label does it mean Elastic is not going to pro-actively look into this?\r\n\r\nno it just means that nobody is actively working on it today and somebody, either from Elastic or the community may adopt this issue. I will get to it soon-ish unless anybody else does earlier than me.", "created_at": "2018-03-12T15:42:23Z" }, { "body": "Hi all and @javanna, I made a PR for this. the problem is as @jdewald describes I think. Feel free to discuss.", "created_at": "2018-03-26T22:04:00Z" }, { "body": "which version(s?) of Elasticsearch will contain that fix?", "created_at": "2018-05-09T12:59:00Z" }, { "body": "@cjolif See the labels in the corresponding PR (#29329). The fix will be in 6.3.1 .", "created_at": "2018-05-09T18:38:08Z" } ], "number": 28885, "title": "Using Java High Level Rest client does not auto-retry Bulk requests" }
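To make the root cause described in the issue above concrete, here is a hedged sketch contrasting a class-based retry check (which breaks once the failure has gone through the REST layer's `toXContent`/`fromXContent` round trip) with a status-based check. The helper class and method names are illustrative only; the actual change is in the PR that follows.

```java
// Illustrative sketch: why an exception-class check fails for the high-level REST client.
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.rest.RestStatus;

final class RetryChecks {
    // Class-based: fine for the transport client, but after the REST round trip the
    // parsed cause is a generic ElasticsearchException, so this is false and no retry happens.
    static boolean retriableByClass(BulkItemResponse item) {
        Throwable rootCause = ExceptionsHelper.unwrapCause(item.getFailure().getCause());
        return rootCause instanceof EsRejectedExecutionException;
    }

    // Status-based: the HTTP status survives serialization, so a 429 (TOO_MANY_REQUESTS)
    // is recognizable for both the transport client and the REST high-level client.
    static boolean retriableByStatus(BulkItemResponse item) {
        return item.status() == RestStatus.TOO_MANY_REQUESTS;
    }
}
```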
{ "body": "Previously bulk's retry logic was based on the exception type (`EsRejectedExecutionException`) of the failed response, this changes it to be based on RestStatus (`RestStatus.TOO_MANY_REQUESTS`), in order to support rest hight level client. (more information can be found #29254).\r\n\r\nClose #28885", "number": 29329, "review_comments": [ { "body": "shall we consider making this hardcoded rather than an argument given that we always pass in the same value for it?", "created_at": "2018-04-03T12:10:42Z" }, { "body": "just to double check: we don't need to unwrap here anymore because the status of the root cause is propagated to its ancestor? ", "created_at": "2018-04-03T12:13:47Z" }, { "body": "can you leave the previous `rejectedExecutionExpected == false` please? we prefer this one for readability.", "created_at": "2018-04-03T12:14:49Z" }, { "body": "maybe we do not even need to unwrap it here anymore? could we just use getCause instead when throwing assertion error below?", "created_at": "2018-04-03T12:16:28Z" }, { "body": "this static block should go away. I don't see why it's needed.", "created_at": "2018-04-03T12:22:58Z" }, { "body": "the assertions seem more accurate here, thanks! Would you mind making the same change in the original test for the transport client? It should work there too right?", "created_at": "2018-04-03T12:32:50Z" }, { "body": "do you think that we should also have the last check based on the search API and the returned total hits? Or maybe now that we are using multi_get that step is not necessary?", "created_at": "2018-04-03T12:34:09Z" }, { "body": "can you rename this variable, this is not about search anymore, rather the result of multi_get", "created_at": "2018-04-03T12:34:32Z" }, { "body": "Yes !", "created_at": "2018-04-04T19:05:31Z" }, { "body": "Yeah, I think the `multi get request` we prepared when we indexed the documents is doing this, the same thing ? do a search (multi get) for all the indices in the end, to compare ? Seems in rest high level tests we are using rather multi get.", "created_at": "2018-04-04T19:15:52Z" }, { "body": "Yes, as it's only used here in bulk for this case.", "created_at": "2018-04-04T19:24:17Z" }, { "body": "Here we get the `status` field of `BulkItemResponse`'s `Failure`, it is a seperate field than `Exception cause` in `Failure` class. So I find it always has the good value (`RestStatus.TOO_MANY_REQUESTS`) ? because the exception type was changed only through `toXContent/fromXContent` of `BulkItemResponse`, but in it the `status` was already parsed seperatly. So the status should always be good ? (except after toXContent/fromXContent, the `BulkItemResponse` was transfered again using `readFrom/writeTo`, which I don't think it's the case ?)\r\n\r\nIf it's this, I changed `bulkItemResponse.getFailure().getStatus();` to `bulkItemResponse.status();`, because it's the same.", "created_at": "2018-04-04T19:50:10Z" }, { "body": "I think this is correct. I also don't follow why `readFrom/writeTo` Would cause issues, the exception type does change but the status stays the same right?", "created_at": "2018-04-09T08:56:23Z" }, { "body": "I see that this has not been addressed. Is that on purpose? I see that you have done this on the client version of the test so it should be fine here too.", "created_at": "2018-04-09T09:08:06Z" }, { "body": "Could you address this please?", "created_at": "2018-04-09T09:08:15Z" } ], "title": "Change bulk's retry condition to be based on RestStatus" }
{ "commits": [ { "message": "Change bulk's retry condition to be based on RestStatus\n\nPreviously bulk's retry logic was based on Exception type of the failed response, here we change it to be based on RestStatus, in order to support rest hight level's request." }, { "message": "changes according to comments" }, { "message": "Merge branch 'master' of https://github.com/elastic/elasticsearch into rest_highlevel_bulk_retry" }, { "message": "small changes" }, { "message": "still a change" }, { "message": "Merge branch 'master' into rest_highlevel_bulk_retry" }, { "message": "Address a test failure" }, { "message": "Merge branch 'master' into rest_highlevel_bulk_retry" }, { "message": "Merge branch 'master' into rest_highlevel_bulk_retry" } ], "files": [ { "diff": "@@ -0,0 +1,219 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.client;\n+\n+import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n+import org.elasticsearch.action.bulk.BackoffPolicy;\n+import org.elasticsearch.action.bulk.BulkItemResponse;\n+import org.elasticsearch.action.bulk.BulkProcessor;\n+import org.elasticsearch.action.bulk.BulkRequest;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.get.MultiGetRequest;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.rest.RestStatus;\n+\n+import java.util.Collections;\n+import java.util.Iterator;\n+import java.util.Map;\n+import java.util.Set;\n+import java.util.concurrent.ConcurrentHashMap;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.lessThan;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n+\n+public class BulkProcessorRetryIT extends ESRestHighLevelClientTestCase {\n+\n+ private static final String INDEX_NAME = \"index\";\n+ private static final String TYPE_NAME = \"type\";\n+\n+ private static BulkProcessor.Builder initBulkProcessorBuilder(BulkProcessor.Listener listener) {\n+ return BulkProcessor.builder(highLevelClient()::bulkAsync, listener);\n+ }\n+\n+ public void testBulkRejectionLoadWithoutBackoff() throws Exception {\n+ boolean rejectedExecutionExpected = true;\n+ executeBulkRejectionLoad(BackoffPolicy.noBackoff(), rejectedExecutionExpected);\n+ }\n+\n+ public void testBulkRejectionLoadWithBackoff() throws Throwable {\n+ boolean rejectedExecutionExpected = false;\n+ executeBulkRejectionLoad(BackoffPolicy.exponentialBackoff(), rejectedExecutionExpected);\n+ }\n+\n+ private void executeBulkRejectionLoad(BackoffPolicy backoffPolicy, boolean 
rejectedExecutionExpected) throws Exception {\n+ final CorrelatingBackoffPolicy internalPolicy = new CorrelatingBackoffPolicy(backoffPolicy);\n+ final int numberOfAsyncOps = randomIntBetween(600, 700);\n+ final CountDownLatch latch = new CountDownLatch(numberOfAsyncOps);\n+ final Set<Object> responses = Collections.newSetFromMap(new ConcurrentHashMap<>());\n+\n+ BulkProcessor bulkProcessor = initBulkProcessorBuilder(new BulkProcessor.Listener() {\n+ @Override\n+ public void beforeBulk(long executionId, BulkRequest request) {\n+ }\n+\n+ @Override\n+ public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {\n+ internalPolicy.logResponse(response);\n+ responses.add(response);\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void afterBulk(long executionId, BulkRequest request, Throwable failure) {\n+ responses.add(failure);\n+ latch.countDown();\n+ }\n+ }).setBulkActions(1)\n+ .setConcurrentRequests(randomIntBetween(0, 100))\n+ .setBackoffPolicy(internalPolicy)\n+ .build();\n+\n+ MultiGetRequest multiGetRequest = indexDocs(bulkProcessor, numberOfAsyncOps);\n+ latch.await(10, TimeUnit.SECONDS);\n+ bulkProcessor.close();\n+\n+ assertEquals(responses.size(), numberOfAsyncOps);\n+\n+ boolean rejectedAfterAllRetries = false;\n+ for (Object response : responses) {\n+ if (response instanceof BulkResponse) {\n+ BulkResponse bulkResponse = (BulkResponse) response;\n+ for (BulkItemResponse bulkItemResponse : bulkResponse.getItems()) {\n+ if (bulkItemResponse.isFailed()) {\n+ BulkItemResponse.Failure failure = bulkItemResponse.getFailure();\n+ if (failure.getStatus() == RestStatus.TOO_MANY_REQUESTS) {\n+ if (rejectedExecutionExpected == false) {\n+ Iterator<TimeValue> backoffState = internalPolicy.backoffStateFor(bulkResponse);\n+ assertNotNull(\"backoffState is null (indicates a bulk request got rejected without retry)\", backoffState);\n+ if (backoffState.hasNext()) {\n+ // we're not expecting that we overwhelmed it even once when we maxed out the number of retries\n+ throw new AssertionError(\"Got rejected although backoff policy would allow more retries\",\n+ failure.getCause());\n+ } else {\n+ rejectedAfterAllRetries = true;\n+ logger.debug(\"We maxed out the number of bulk retries and got rejected (this is ok).\");\n+ }\n+ }\n+ } else {\n+ throw new AssertionError(\"Unexpected failure with status: \" + failure.getStatus());\n+ }\n+ }\n+ }\n+ } else {\n+ Throwable t = (Throwable) response;\n+ // we're not expecting any other errors\n+ throw new AssertionError(\"Unexpected failure\", t);\n+ }\n+ }\n+\n+ highLevelClient().indices().refresh(new RefreshRequest());\n+ int multiGetResponsesCount = highLevelClient().multiGet(multiGetRequest).getResponses().length;\n+\n+ if (rejectedExecutionExpected) {\n+ assertThat(multiGetResponsesCount, lessThanOrEqualTo(numberOfAsyncOps));\n+ } else if (rejectedAfterAllRetries) {\n+ assertThat(multiGetResponsesCount, lessThan(numberOfAsyncOps));\n+ } else {\n+ assertThat(multiGetResponsesCount, equalTo(numberOfAsyncOps));\n+ }\n+\n+ }\n+\n+ private static MultiGetRequest indexDocs(BulkProcessor processor, int numDocs) {\n+ MultiGetRequest multiGetRequest = new MultiGetRequest();\n+ for (int i = 1; i <= numDocs; i++) {\n+ processor.add(new IndexRequest(INDEX_NAME, TYPE_NAME, Integer.toString(i))\n+ .source(XContentType.JSON, \"field\", randomRealisticUnicodeOfCodepointLengthBetween(1, 30)));\n+ multiGetRequest.add(INDEX_NAME, TYPE_NAME, Integer.toString(i));\n+ }\n+ return multiGetRequest;\n+ }\n+\n+ /**\n+ * Internal helper class to 
correlate backoff states with bulk responses. This is needed to check whether we maxed out the number\n+ * of retries but still got rejected (which is perfectly fine and can also happen from time to time under heavy load).\n+ *\n+ * This implementation relies on an implementation detail in Retry, namely that the bulk listener is notified on the same thread\n+ * as the last call to the backoff policy's iterator. The advantage is that this is non-invasive to the rest of the production code.\n+ */\n+ private static class CorrelatingBackoffPolicy extends BackoffPolicy {\n+ private final Map<BulkResponse, Iterator<TimeValue>> correlations = new ConcurrentHashMap<>();\n+ // this is intentionally *not* static final. We will only ever have one instance of this class per test case and want the\n+ // thread local to be eligible for garbage collection right after the test to avoid leaks.\n+ private final ThreadLocal<Iterator<TimeValue>> iterators = new ThreadLocal<>();\n+\n+ private final BackoffPolicy delegate;\n+\n+ private CorrelatingBackoffPolicy(BackoffPolicy delegate) {\n+ this.delegate = delegate;\n+ }\n+\n+ public Iterator<TimeValue> backoffStateFor(BulkResponse response) {\n+ return correlations.get(response);\n+ }\n+\n+ // Assumption: This method is called from the same thread as the last call to the internal iterator's #hasNext() / #next()\n+ // see also Retry.AbstractRetryHandler#onResponse().\n+ public void logResponse(BulkResponse response) {\n+ Iterator<TimeValue> iterator = iterators.get();\n+ // did we ever retry?\n+ if (iterator != null) {\n+ // we should correlate any iterator only once\n+ iterators.remove();\n+ correlations.put(response, iterator);\n+ }\n+ }\n+\n+ @Override\n+ public Iterator<TimeValue> iterator() {\n+ return new CorrelatingIterator(iterators, delegate.iterator());\n+ }\n+\n+ private static class CorrelatingIterator implements Iterator<TimeValue> {\n+ private final Iterator<TimeValue> delegate;\n+ private final ThreadLocal<Iterator<TimeValue>> iterators;\n+\n+ private CorrelatingIterator(ThreadLocal<Iterator<TimeValue>> iterators, Iterator<TimeValue> delegate) {\n+ this.iterators = iterators;\n+ this.delegate = delegate;\n+ }\n+\n+ @Override\n+ public boolean hasNext() {\n+ // update on every invocation as we might get rescheduled on a different thread. Unfortunately, there is a chance that\n+ // we pollute the thread local map with stale values. 
Due to the implementation of Retry and the life cycle of the\n+ // enclosing class CorrelatingBackoffPolicy this should not pose a major problem though.\n+ iterators.set(this);\n+ return delegate.hasNext();\n+ }\n+\n+ @Override\n+ public TimeValue next() {\n+ // update on every invocation\n+ iterators.set(this);\n+ return delegate.next();\n+ }\n+ }\n+ }\n+}", "filename": "client/rest-high-level/src/test/java/org/elasticsearch/client/BulkProcessorRetryIT.java", "status": "added" }, { "diff": "@@ -32,7 +32,6 @@\n import org.elasticsearch.action.bulk.BulkRequest;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.bulk.Retry;\n-import org.elasticsearch.index.reindex.ScrollableHitSource.SearchFailure;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.client.ParentTaskAssigningClient;\n@@ -41,14 +40,14 @@\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n-import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.mapper.IdFieldMapper;\n import org.elasticsearch.index.mapper.IndexFieldMapper;\n import org.elasticsearch.index.mapper.RoutingFieldMapper;\n import org.elasticsearch.index.mapper.SourceFieldMapper;\n import org.elasticsearch.index.mapper.TypeFieldMapper;\n import org.elasticsearch.index.mapper.VersionFieldMapper;\n+import org.elasticsearch.index.reindex.ScrollableHitSource.SearchFailure;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.Script;\n import org.elasticsearch.script.ScriptService;\n@@ -75,8 +74,8 @@\n import static java.util.Collections.emptyList;\n import static java.util.Collections.unmodifiableList;\n import static org.elasticsearch.action.bulk.BackoffPolicy.exponentialBackoff;\n-import static org.elasticsearch.index.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;\n import static org.elasticsearch.common.unit.TimeValue.timeValueNanos;\n+import static org.elasticsearch.index.reindex.AbstractBulkByScrollRequest.SIZE_ALL_MATCHES;\n import static org.elasticsearch.rest.RestStatus.CONFLICT;\n import static org.elasticsearch.search.sort.SortBuilders.fieldSort;\n \n@@ -139,7 +138,7 @@ public AbstractAsyncBulkByScrollAction(BulkByScrollTask task, Logger logger, Par\n this.mainRequest = mainRequest;\n this.listener = listener;\n BackoffPolicy backoffPolicy = buildBackoffPolicy();\n- bulkRetry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.wrap(backoffPolicy, worker::countBulkRetry), threadPool);\n+ bulkRetry = new Retry(BackoffPolicy.wrap(backoffPolicy, worker::countBulkRetry), threadPool);\n scrollSource = buildScrollableResultSource(backoffPolicy);\n scriptApplier = Objects.requireNonNull(buildScriptApplier(), \"script applier must not be null\");\n /*", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/AbstractAsyncBulkByScrollAction.java", "status": "modified" }, { "diff": "@@ -186,7 +186,7 @@ private void testCase(\n bulk.add(client().prepareIndex(\"source\", \"test\").setSource(\"foo\", \"bar \" + i));\n }\n \n- Retry retry = new Retry(EsRejectedExecutionException.class, BackoffPolicy.exponentialBackoff(), client().threadPool());\n+ Retry retry = new Retry(BackoffPolicy.exponentialBackoff(), client().threadPool());\n BulkResponse initialBulkResponse = 
retry.withBackoff(client()::bulk, bulk.request(), client().settings()).actionGet();\n assertFalse(initialBulkResponse.buildFailureMessage(), initialBulkResponse.hasFailures());\n client().admin().indices().prepareRefresh(\"source\").get();", "filename": "modules/reindex/src/test/java/org/elasticsearch/index/reindex/RetryTests.java", "status": "modified" }, { "diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.threadpool.Scheduler;\n \n import java.util.concurrent.CountDownLatch;\n@@ -49,7 +48,7 @@ public final class BulkRequestHandler {\n this.consumer = consumer;\n this.listener = listener;\n this.concurrentRequests = concurrentRequests;\n- this.retry = new Retry(EsRejectedExecutionException.class, backoffPolicy, scheduler);\n+ this.retry = new Retry(backoffPolicy, scheduler);\n this.semaphore = new Semaphore(concurrentRequests > 0 ? concurrentRequests : 1);\n }\n ", "filename": "server/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java", "status": "modified" }, { "diff": "@@ -19,13 +19,13 @@\n package org.elasticsearch.action.bulk;\n \n import org.apache.logging.log4j.Logger;\n-import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.PlainActionFuture;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n+import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.Scheduler;\n import org.elasticsearch.threadpool.ThreadPool;\n \n@@ -40,12 +40,10 @@\n * Encapsulates synchronous and asynchronous retry logic.\n */\n public class Retry {\n- private final Class<? extends Throwable> retryOnThrowable;\n private final BackoffPolicy backoffPolicy;\n private final Scheduler scheduler;\n \n- public Retry(Class<? extends Throwable> retryOnThrowable, BackoffPolicy backoffPolicy, Scheduler scheduler) {\n- this.retryOnThrowable = retryOnThrowable;\n+ public Retry(BackoffPolicy backoffPolicy, Scheduler scheduler) {\n this.backoffPolicy = backoffPolicy;\n this.scheduler = scheduler;\n }\n@@ -60,7 +58,7 @@ public Retry(Class<? extends Throwable> retryOnThrowable, BackoffPolicy backoffP\n */\n public void withBackoff(BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer, BulkRequest bulkRequest,\n ActionListener<BulkResponse> listener, Settings settings) {\n- RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, listener, settings, scheduler);\n+ RetryHandler r = new RetryHandler(backoffPolicy, consumer, listener, settings, scheduler);\n r.execute(bulkRequest);\n }\n \n@@ -81,12 +79,13 @@ public PlainActionFuture<BulkResponse> withBackoff(BiConsumer<BulkRequest, Actio\n }\n \n static class RetryHandler implements ActionListener<BulkResponse> {\n+ private static final RestStatus RETRY_STATUS = RestStatus.TOO_MANY_REQUESTS;\n+\n private final Logger logger;\n private final Scheduler scheduler;\n private final BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer;\n private final ActionListener<BulkResponse> listener;\n private final Iterator<TimeValue> backoff;\n- private final Class<? 
extends Throwable> retryOnThrowable;\n // Access only when holding a client-side lock, see also #addResponses()\n private final List<BulkItemResponse> responses = new ArrayList<>();\n private final long startTimestampNanos;\n@@ -95,10 +94,8 @@ static class RetryHandler implements ActionListener<BulkResponse> {\n private volatile BulkRequest currentBulkRequest;\n private volatile ScheduledFuture<?> scheduledRequestFuture;\n \n- RetryHandler(Class<? extends Throwable> retryOnThrowable, BackoffPolicy backoffPolicy,\n- BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer, ActionListener<BulkResponse> listener,\n- Settings settings, Scheduler scheduler) {\n- this.retryOnThrowable = retryOnThrowable;\n+ RetryHandler(BackoffPolicy backoffPolicy, BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer,\n+ ActionListener<BulkResponse> listener, Settings settings, Scheduler scheduler) {\n this.backoff = backoffPolicy.iterator();\n this.consumer = consumer;\n this.listener = listener;\n@@ -160,9 +157,8 @@ private boolean canRetry(BulkResponse bulkItemResponses) {\n }\n for (BulkItemResponse bulkItemResponse : bulkItemResponses) {\n if (bulkItemResponse.isFailed()) {\n- final Throwable cause = bulkItemResponse.getFailure().getCause();\n- final Throwable rootCause = ExceptionsHelper.unwrapCause(cause);\n- if (!rootCause.getClass().equals(retryOnThrowable)) {\n+ final RestStatus status = bulkItemResponse.status();\n+ if (status != RETRY_STATUS) {\n return false;\n }\n }", "filename": "server/src/main/java/org/elasticsearch/action/bulk/Retry.java", "status": "modified" }, { "diff": "@@ -18,15 +18,13 @@\n */\n package org.elasticsearch.action.bulk;\n \n-import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.test.ESIntegTestCase;\n-import org.hamcrest.Matcher;\n \n import java.util.Collections;\n import java.util.Iterator;\n@@ -38,6 +36,7 @@\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.lessThan;\n import static org.hamcrest.Matchers.lessThanOrEqualTo;\n \n @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.SUITE, numDataNodes = 2)\n@@ -108,26 +107,28 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure)\n assertThat(responses.size(), equalTo(numberOfAsyncOps));\n \n // validate all responses\n+ boolean rejectedAfterAllRetries = false;\n for (Object response : responses) {\n if (response instanceof BulkResponse) {\n BulkResponse bulkResponse = (BulkResponse) response;\n for (BulkItemResponse bulkItemResponse : bulkResponse.getItems()) {\n if (bulkItemResponse.isFailed()) {\n BulkItemResponse.Failure failure = bulkItemResponse.getFailure();\n- Throwable rootCause = ExceptionsHelper.unwrapCause(failure.getCause());\n- if (rootCause instanceof EsRejectedExecutionException) {\n+ if (failure.getStatus() == RestStatus.TOO_MANY_REQUESTS) {\n if (rejectedExecutionExpected == false) {\n Iterator<TimeValue> backoffState = internalPolicy.backoffStateFor(bulkResponse);\n assertNotNull(\"backoffState is null (indicates a 
bulk request got rejected without retry)\", backoffState);\n if (backoffState.hasNext()) {\n // we're not expecting that we overwhelmed it even once when we maxed out the number of retries\n- throw new AssertionError(\"Got rejected although backoff policy would allow more retries\", rootCause);\n+ throw new AssertionError(\"Got rejected although backoff policy would allow more retries\",\n+ failure.getCause());\n } else {\n+ rejectedAfterAllRetries = true;\n logger.debug(\"We maxed out the number of bulk retries and got rejected (this is ok).\");\n }\n }\n } else {\n- throw new AssertionError(\"Unexpected failure\", rootCause);\n+ throw new AssertionError(\"Unexpected failure status: \" + failure.getStatus());\n }\n }\n }\n@@ -140,18 +141,20 @@ public void afterBulk(long executionId, BulkRequest request, Throwable failure)\n \n client().admin().indices().refresh(new RefreshRequest()).get();\n \n- // validate we did not create any duplicates due to retries\n- Matcher<Long> searchResultCount;\n- // it is ok if we lost some index operations to rejected executions (which is possible even when backing off (although less likely)\n- searchResultCount = lessThanOrEqualTo((long) numberOfAsyncOps);\n-\n SearchResponse results = client()\n .prepareSearch(INDEX_NAME)\n .setTypes(TYPE_NAME)\n .setQuery(QueryBuilders.matchAllQuery())\n .setSize(0)\n .get();\n- assertThat(results.getHits().getTotalHits(), searchResultCount);\n+\n+ if (rejectedExecutionExpected) {\n+ assertThat((int) results.getHits().getTotalHits(), lessThanOrEqualTo(numberOfAsyncOps));\n+ } else if (rejectedAfterAllRetries) {\n+ assertThat((int) results.getHits().getTotalHits(), lessThan(numberOfAsyncOps));\n+ } else {\n+ assertThat((int) results.getHits().getTotalHits(), equalTo(numberOfAsyncOps));\n+ }\n }\n \n private static void indexDocs(BulkProcessor processor, int numDocs) {", "filename": "server/src/test/java/org/elasticsearch/action/bulk/BulkProcessorRetryIT.java", "status": "modified" }, { "diff": "@@ -84,7 +84,7 @@ public void testRetryBacksOff() throws Exception {\n BackoffPolicy backoff = BackoffPolicy.constantBackoff(DELAY, CALLS_TO_FAIL);\n \n BulkRequest bulkRequest = createBulkRequest();\n- BulkResponse response = new Retry(EsRejectedExecutionException.class, backoff, bulkClient.threadPool())\n+ BulkResponse response = new Retry(backoff, bulkClient.threadPool())\n .withBackoff(bulkClient::bulk, bulkRequest, bulkClient.settings())\n .actionGet();\n \n@@ -96,7 +96,7 @@ public void testRetryFailsAfterBackoff() throws Exception {\n BackoffPolicy backoff = BackoffPolicy.constantBackoff(DELAY, CALLS_TO_FAIL - 1);\n \n BulkRequest bulkRequest = createBulkRequest();\n- BulkResponse response = new Retry(EsRejectedExecutionException.class, backoff, bulkClient.threadPool())\n+ BulkResponse response = new Retry(backoff, bulkClient.threadPool())\n .withBackoff(bulkClient::bulk, bulkRequest, bulkClient.settings())\n .actionGet();\n \n@@ -109,7 +109,7 @@ public void testRetryWithListenerBacksOff() throws Exception {\n AssertingListener listener = new AssertingListener();\n \n BulkRequest bulkRequest = createBulkRequest();\n- Retry retry = new Retry(EsRejectedExecutionException.class, backoff, bulkClient.threadPool());\n+ Retry retry = new Retry(backoff, bulkClient.threadPool());\n retry.withBackoff(bulkClient::bulk, bulkRequest, listener, bulkClient.settings());\n \n listener.awaitCallbacksCalled();\n@@ -124,7 +124,7 @@ public void testRetryWithListenerFailsAfterBacksOff() throws Exception {\n AssertingListener listener = new 
AssertingListener();\n \n BulkRequest bulkRequest = createBulkRequest();\n- Retry retry = new Retry(EsRejectedExecutionException.class, backoff, bulkClient.threadPool());\n+ Retry retry = new Retry(backoff, bulkClient.threadPool());\n retry.withBackoff(bulkClient::bulk, bulkRequest, listener, bulkClient.settings());\n \n listener.awaitCallbacksCalled();", "filename": "server/src/test/java/org/elasticsearch/action/bulk/RetryTests.java", "status": "modified" } ] }
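As a usage-level summary of the API change in the diff above: the `Retry` constructor no longer takes an exception class, only a backoff policy and a scheduler, and the retry condition is fixed internally to `RestStatus.TOO_MANY_REQUESTS`. A hedged sketch, assuming an already wired `Client` and `ThreadPool`; the variables are placeholders, not real setup code.

```java
// Sketch of calling the simplified Retry API after this change.
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.bulk.Retry;
import org.elasticsearch.client.Client;
import org.elasticsearch.threadpool.ThreadPool;

final class RetryUsageSketch {
    static BulkResponse bulkWithBackoff(Client client, ThreadPool threadPool, BulkRequest bulkRequest) {
        // Before: new Retry(EsRejectedExecutionException.class, backoffPolicy, threadPool)
        // After:  the exception-class argument is gone.
        Retry retry = new Retry(BackoffPolicy.exponentialBackoff(), threadPool);
        return retry.withBackoff(client::bulk, bulkRequest, client.settings()).actionGet();
    }
}
```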
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch server version**: `{\"number\": \"5.6.6\",\"lucene_version\": \"6.6.1\"}`\r\n**Elasticsearch client version**: `6.2.1`\r\n\r\n**Plugins installed**: `[{\r\n\"name\": \"x-pack\",\r\n\"version\": \"5.6.6\",\r\n\"description\": \"Elasticsearch Expanded Pack Plugin\",\r\n\"classname\": \"org.elasticsearch.xpack.XPackPlugin\",\r\n\"has_native_controller\": true\r\n}]`\r\n\r\n**JVM version**: `1.8.0_151`\r\n\r\n**OS version**: Linux ubuntu 4.4.0-116-generic\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nWhen I include a geo distance query into the should clause of my master bool query and use this query to search for results, it would produce a `java.io.IOException: Unknown GeoDistance ordinal [2]`\r\n*Expected*: search to return with correct results. *Actual*: search produces an IOException.\r\n\r\n**Notes & Observations**:\r\n- The master bool query also contains other queries in its other clauses. However, the search with the same parameters works when the geo distance query is excluded.\r\n- The search is successful if I manually copy the master bool query string and paste it into my ElasticSearch Head browser plugin (and also curl I'm assuming).\r\n\r\n**Steps to reproduce**:\r\n\r\n```java\r\nBoolQueryBuilder queryBuilder = QueryBuilders.boolQuery();\r\nqueryBuilder.should(\r\n QueryBuilders.geoDistanceQuery(term).point(origin).distance(distance)\r\n);\r\n\r\nclient.prepareSearch(index)\r\n .setTypes(type)\r\n .setFrom(from)\r\n .setSize(size)\r\n .setQuery(queryBuilder);\r\n```\r\n\r\n**Stack trace**:\r\n[java.io.IOException Unknown GeoDistance ordinal [2].txt](https://github.com/elastic/elasticsearch/files/1861428/java.io.IOException.Unknown.GeoDistance.ordinal.2.txt)", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-30T07:04:36Z" }, { "body": "It looks like we accidently broke backward compatibility [here](https://github.com/elastic/elasticsearch/commit/b1a6b227e18cd71de78b77810a284b3a8cc39175#diff-07ff0f9821841e51d05e48d81d599582L113). Unfortunately, I cannot think of any temporary workaround besides using a client and a server with the same major version.", "created_at": "2018-03-30T15:43:20Z" }, { "body": "@mloho, thanks for the detailed report. It was fixed by #29325 and should be release in v6.3.0.", "created_at": "2018-04-06T14:54:48Z" } ], "number": 29301, "title": "IOException: Unknown GeoDistance ordinal [2]" }
{ "body": "Restores backward compatibility in GeoDistanceQueryBuilder serialization\r\nbetween 6.x and 5.6 after removal of optimize_bbox parameter.\r\n\r\nCloses #29301\r\n\r\nThis will only go to 6.x branch since optimize_bbox was removed in 6.x (#22876) and shouldn't affect 7.x -> 6.last communication. ", "number": 29325, "review_comments": [], "title": "Fix bwc in GeoDistanceQuery serialization" }
{ "commits": [ { "message": "Fix bwc in GeoDistanceQuery serialization\n\nRestores backward compatibility in GeoDistanceQueryBuilder serialization\nbetween 6.x and 5.6 after removal of optimize_bbox parameter.\n\nCloses #29301" }, { "message": "Cleanup the test" } ], "files": [ { "diff": "@@ -43,6 +43,7 @@ for (Version version : bwcVersions.wireCompatible) {\n numNodes = 2\n clusterName = 'rolling-upgrade'\n setting 'repositories.url.allowed_urls', 'http://snapshot.test*'\n+ setting 'node.attr.gen', 'old'\n if (version.onOrAfter('5.3.0')) {\n setting 'http.content_type.required', 'true'\n }\n@@ -64,6 +65,7 @@ for (Version version : bwcVersions.wireCompatible) {\n * just stopped's data directory. */\n dataDir = { nodeNumber -> oldClusterTest.nodes[1].dataDir }\n setting 'repositories.url.allowed_urls', 'http://snapshot.test*'\n+ setting 'node.attr.gen', 'new'\n }\n \n Task mixedClusterTestRunner = tasks.getByName(\"${baseName}#mixedClusterTestRunner\")\n@@ -83,6 +85,7 @@ for (Version version : bwcVersions.wireCompatible) {\n * just stopped's data directory. */\n dataDir = { nodeNumber -> oldClusterTest.nodes[0].dataDir}\n setting 'repositories.url.allowed_urls', 'http://snapshot.test*'\n+ setting 'node.attr.gen', 'new'\n }\n \n Task upgradedClusterTestRunner = tasks.getByName(\"${baseName}#upgradedClusterTestRunner\")", "filename": "qa/rolling-upgrade/build.gradle", "status": "modified" }, { "diff": "@@ -266,4 +266,48 @@ public void testRelocationWithConcurrentIndexing() throws Exception {\n }\n }\n \n+ public void testSearchGeoPoints() throws Exception {\n+ final String index = \"geo_index\";\n+ if (clusterType == CLUSTER_TYPE.OLD) {\n+ Settings.Builder settings = Settings.builder()\n+ .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1)\n+ .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1)\n+ // if the node with the replica is the first to be restarted, while a replica is still recovering\n+ // then delayed allocation will kick in. 
When the node comes back, the master will search for a copy\n+ // but the recovering copy will be seen as invalid and the cluster health won't return to GREEN\n+ // before timing out\n+ .put(INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), \"100ms\");\n+ createIndex(index, settings.build(), \"\\\"doc\\\": {\\\"properties\\\": {\\\"location\\\": {\\\"type\\\": \\\"geo_point\\\"}}}\");\n+ ensureGreen(index);\n+ } else if (clusterType == CLUSTER_TYPE.MIXED) {\n+ ensureGreen(index);\n+ String requestBody = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"should\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"geo_distance\\\": {\\n\" +\n+ \" \\\"distance\\\": \\\"1000km\\\",\\n\" +\n+ \" \\\"location\\\": {\\n\" +\n+ \" \\\"lat\\\": 40,\\n\" +\n+ \" \\\"lon\\\": -70\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" {\\\"match_all\\\": {}}\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ // we need to make sure that requests are routed from a new node to the old node so we are sending the request a few times\n+ for (int i = 0; i < 10; i++) {\n+ Response response = client().performRequest(\"GET\", index + \"/_search\",\n+ Collections.singletonMap(\"preference\", \"_only_nodes:gen:old\"), // Make sure we only send this request to old nodes\n+ new StringEntity(requestBody, ContentType.APPLICATION_JSON));\n+ assertOK(response);\n+ }\n+ }\n+ }\n+\n }", "filename": "qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/RecoveryIT.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.apache.lucene.search.IndexOrDocValuesQuery;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.Query;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParsingException;\n import org.elasticsearch.common.Strings;\n@@ -97,6 +98,10 @@ public GeoDistanceQueryBuilder(StreamInput in) throws IOException {\n distance = in.readDouble();\n validationMethod = GeoValidationMethod.readFromStream(in);\n center = in.readGeoPoint();\n+ if (in.getVersion().before(Version.V_6_0_0_alpha1)) {\n+ // optimize bounding box was removed in 6.0\n+ in.readOptionalString();\n+ }\n geoDistance = GeoDistance.readFromStream(in);\n ignoreUnmapped = in.readBoolean();\n }\n@@ -107,6 +112,10 @@ protected void doWriteTo(StreamOutput out) throws IOException {\n out.writeDouble(distance);\n validationMethod.writeTo(out);\n out.writeGeoPoint(center);\n+ if (out.getVersion().before(Version.V_6_0_0_alpha1)) {\n+ // optimize bounding box was removed in 6.0\n+ out.writeOptionalString(null);\n+ }\n geoDistance.writeTo(out);\n out.writeBoolean(ignoreUnmapped);\n }", "filename": "server/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java", "status": "modified" } ] }
{ "body": "Tested on 6.2.3:\r\n\r\nThis is a Range query on a date field:\r\n\r\n```\r\nDELETE test\r\nPUT test\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"mydate\": {\r\n \"type\": \"date\", \r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\nPUT test/_doc/1\r\n{\r\n \"mydate\" : \"2015-10-31 12:00:00\"\r\n}\r\nGET test/_search\r\n{\r\n \"query\": {\r\n \"range\": {\r\n \"mydate\": {\r\n \"gte\": \"2015-10-31 12:00:00\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis works well.\r\n\r\nThis is a Range query on a date_range field:\r\n\r\n```\r\nDELETE range_index\r\nPUT range_index\r\n{\r\n \"mappings\": {\r\n \"_doc\": {\r\n \"properties\": {\r\n \"time_frame\": {\r\n \"type\": \"date_range\", \r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\nPUT range_index/_doc/1\r\n{\r\n \"time_frame\" : { \r\n \"gte\" : \"2015-10-31 12:00:00\", \r\n \"lte\" : \"2015-11-01\"\r\n }\r\n}\r\nGET range_index/_search\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"time_frame\" : {\r\n \"gte\" : \"2015-10-31 12:00:00\",\r\n \"lte\" : \"2015-11-01\",\r\n \"relation\" : \"within\" \r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nThis is failing with:\r\n\r\n```\r\n {\r\n \"type\": \"parse_exception\",\r\n \"reason\": \"failed to parse date field [2015-10-31 12:00:00] with format [strict_date_optional_time||epoch_millis]\"\r\n }\r\n```\r\n\r\nWe can see that the format defined in the mapping is not used here. It should use `yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis` but it uses `strict_date_optional_time||epoch_millis`.\r\n\r\nIf we manually force the `format` to be the same as the `mapping`, then it works:\r\n\r\n```\r\nGET range_index/_search\r\n{\r\n \"query\" : {\r\n \"range\" : {\r\n \"time_frame\" : {\r\n \"gte\" : \"2015-10-31 12:00:00\",\r\n \"lte\" : \"2015-11-01\",\r\n \"format\": \"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||yyyy\", \r\n \"relation\" : \"within\" \r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI suggest that we try first to use the mapping defined for the field and fallback to the default one if needed.\r\n\r\ncc @melvynator ", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-29T02:46:33Z" } ], "number": 29282, "title": "range filter on date_range datatype should use the format defined in the mapping" }
{ "body": "If the date format is not forced in query, use the format in mapping before fallback to the default format.\r\n\r\nCloses #29282 \r\n\r\n\r\n", "number": 29310, "review_comments": [ { "body": "I'm not sure what is tested here, can you explain? \r\nAs far as I can see, `fieldType.rangeQuery` only redirects to `DATE#rangeQuery`, so other than checking if the parser gets picked up, this test is not doing much in my opinion, but I might miss something.\r\nI'd prefer it it would check that the from/to values are parsed correctly using the parser supplied by `fieldType.setDateTimeFormatter`. For that I think the from/to values should be String representations of dates that DEFAULT_DATE_TIME_FORMATTER would reject otherwise. I'd like to see a test that checks that using the DEFAULT_DATE_TIME_FORMATTER we get an error (expectThrows) and with the right parser the query contains the expected min/max values. Accessing the min/max values of the Lucene query might be a bit tricky on first inspection of the code, but I think most shape relations should produce LongRange queries which Query can be cast to to use the getters. ", "created_at": "2018-04-06T16:34:51Z" } ], "title": "Use date format in `date_range` mapping before fallback to default" }
{ "commits": [ { "message": "Use date format in mapping before fallback to default if not force format (#29282)" }, { "message": "Fix test" }, { "message": "Merge branch 'master' of github.com:elastic/elasticsearch into fix-issues/29282" }, { "message": "Fix test" }, { "message": "Merge branch 'master' into liketic/fix-issues/29282" }, { "message": "Update changelog" }, { "message": "Merge branch 'master' into fix-issues/29282" }, { "message": "Merge branch 'master' into fix-issues/29282" } ], "files": [ { "diff": "@@ -104,6 +104,8 @@ ones that the user is authorized to access in case field level security is enabl\n [float]\n === Bug Fixes\n \n+Use date format in `date_range` mapping before fallback to default ({pull}29310[#29310])\n+\n Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])\n \n Fixed prerelease version of elasticsearch in the `deb` package to sort before GA versions\n@@ -171,6 +173,8 @@ Added put index template API to the high level rest client ({pull}30400[#30400])\n [float]\n === Bug Fixes\n \n+Use date format in `date_range` mapping before fallback to default ({pull}29310[#29310])\n+\n Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])\n \n Do not ignore request analysis/similarity settings on index resize operations when the source index already contains such settings ({pull}30216[#30216])", "filename": "docs/CHANGELOG.asciidoc", "status": "modified" }, { "diff": "@@ -287,6 +287,9 @@ public Query termQuery(Object value, QueryShardContext context) {\n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper,\n ShapeRelation relation, DateTimeZone timeZone, DateMathParser parser, QueryShardContext context) {\n failIfNotIndexed();\n+ if (parser == null) {\n+ parser = dateMathParser();\n+ }\n return rangeType.rangeQuery(name(), hasDocValues(), lowerTerm, upperTerm, includeLower, includeUpper, relation,\n timeZone, parser, context);\n }", "filename": "server/src/main/java/org/elasticsearch/index/mapper/RangeFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.mapper;\n \n-import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.apache.lucene.document.DoubleRange;\n import org.apache.lucene.document.FloatRange;\n import org.apache.lucene.document.InetAddressPoint;\n@@ -31,13 +30,16 @@\n import org.apache.lucene.search.IndexOrDocValuesQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.geo.ShapeRelation;\n+import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.network.InetAddresses;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexSettings;\n+import org.elasticsearch.index.mapper.RangeFieldMapper.RangeFieldType;\n import org.elasticsearch.index.mapper.RangeFieldMapper.RangeType;\n import org.elasticsearch.index.query.QueryShardContext;\n import org.elasticsearch.test.IndexSettingsModule;\n@@ -55,49 +57,80 @@ public class RangeFieldTypeTests extends FieldTypeTestCase {\n \n @Before\n public void setupProperties() {\n- type = RandomPicks.randomFrom(random(), RangeType.values());\n+ type = randomFrom(RangeType.values());\n nowInMillis = randomNonNegativeLong();\n 
if (type == RangeType.DATE) {\n addModifier(new Modifier(\"format\", true) {\n @Override\n public void modify(MappedFieldType ft) {\n- ((RangeFieldMapper.RangeFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"basic_week_date\", Locale.ROOT));\n+ ((RangeFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"basic_week_date\", Locale.ROOT));\n }\n });\n addModifier(new Modifier(\"locale\", true) {\n @Override\n public void modify(MappedFieldType ft) {\n- ((RangeFieldMapper.RangeFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"date_optional_time\", Locale.CANADA));\n+ ((RangeFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"date_optional_time\", Locale.CANADA));\n }\n });\n }\n }\n \n @Override\n- protected RangeFieldMapper.RangeFieldType createDefaultFieldType() {\n- return new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n+ protected RangeFieldType createDefaultFieldType() {\n+ return new RangeFieldType(type, Version.CURRENT);\n }\n \n public void testRangeQuery() throws Exception {\n- Settings indexSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n- IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n- QueryShardContext context = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n- writableRegistry(), null, null, () -> nowInMillis, null);\n- RangeFieldMapper.RangeFieldType ft = new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n+ QueryShardContext context = createContext();\n+ RangeFieldType ft = new RangeFieldType(type, Version.CURRENT);\n ft.setName(FIELDNAME);\n ft.setIndexOptions(IndexOptions.DOCS);\n \n- ShapeRelation relation = RandomPicks.randomFrom(random(), ShapeRelation.values());\n- boolean includeLower = random().nextBoolean();\n- boolean includeUpper = random().nextBoolean();\n+ ShapeRelation relation = randomFrom(ShapeRelation.values());\n+ boolean includeLower = randomBoolean();\n+ boolean includeUpper = randomBoolean();\n Object from = nextFrom();\n Object to = nextTo(from);\n \n assertEquals(getExpectedRangeQuery(relation, from, to, includeLower, includeUpper),\n ft.rangeQuery(from, to, includeLower, includeUpper, relation, null, null, context));\n }\n \n+ private QueryShardContext createContext() {\n+ Settings indexSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n+ IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n+ return new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n+ writableRegistry(), null, null, () -> nowInMillis, null);\n+ }\n+\n+ public void testDateRangeQueryUsingMappingFormat() {\n+ QueryShardContext context = createContext();\n+ RangeFieldType fieldType = new RangeFieldType(RangeType.DATE, Version.CURRENT);\n+ fieldType.setName(FIELDNAME);\n+ fieldType.setIndexOptions(IndexOptions.DOCS);\n+ fieldType.setHasDocValues(false);\n+ ShapeRelation relation = randomFrom(ShapeRelation.values());\n+\n+ // dates will break the default format\n+ final String from = \"2016-15-06T15:29:50+08:00\";\n+ final String to = \"2016-16-06T15:29:50+08:00\";\n+\n+ ElasticsearchParseException ex = expectThrows(ElasticsearchParseException.class,\n+ () -> fieldType.rangeQuery(from, to, true, true, relation, null, null, context));\n+ assertEquals(\"failed to parse date field [2016-15-06T15:29:50+08:00] with format 
[strict_date_optional_time||epoch_millis]\",\n+ ex.getMessage());\n+\n+ // setting mapping format which is compatible with those dates\n+ final FormatDateTimeFormatter formatter = Joda.forPattern(\"yyyy-dd-MM'T'HH:mm:ssZZ\");\n+ assertEquals(1465975790000L, formatter.parser().parseMillis(from));\n+ assertEquals(1466062190000L, formatter.parser().parseMillis(to));\n+\n+ fieldType.setDateTimeFormatter(formatter);\n+ final Query query = fieldType.rangeQuery(from, to, true, true, relation, null, null, context);\n+ assertEquals(\"field:<ranges:[1465975790000 : 1466062190000]>\", query.toString());\n+ }\n+\n private Query getExpectedRangeQuery(ShapeRelation relation, Object from, Object to, boolean includeLower, boolean includeUpper) {\n switch (type) {\n case DATE:\n@@ -277,14 +310,10 @@ public void testParseIp() {\n assertEquals(InetAddresses.forString(\"::1\"), RangeFieldMapper.RangeType.IP.parse(new BytesRef(\"::1\"), randomBoolean()));\n }\n \n- public void testTermQuery() throws Exception, IllegalArgumentException {\n+ public void testTermQuery() throws Exception {\n // See https://github.com/elastic/elasticsearch/issues/25950\n- Settings indexSettings = Settings.builder()\n- .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n- IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(randomAlphaOfLengthBetween(1, 10), indexSettings);\n- QueryShardContext context = new QueryShardContext(0, idxSettings, null, null, null, null, null, xContentRegistry(),\n- writableRegistry(), null, null, () -> nowInMillis, null);\n- RangeFieldMapper.RangeFieldType ft = new RangeFieldMapper.RangeFieldType(type, Version.CURRENT);\n+ QueryShardContext context = createContext();\n+ RangeFieldType ft = new RangeFieldType(type, Version.CURRENT);\n ft.setName(FIELDNAME);\n ft.setIndexOptions(IndexOptions.DOCS);\n ", "filename": "server/src/test/java/org/elasticsearch/index/mapper/RangeFieldTypeTests.java", "status": "modified" } ] }
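For completeness, a hedged Java-side equivalent of the failing search from issue #29282; after this fix the `date_range` field's mapping format should be used for parsing, so no explicit `format` needs to be set on the query. The index and field names and the `client` variable come from the issue's reproduction and are assumptions for illustration.

```java
// Sketch of the range query on a date_range field, relying on the mapping's format.
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

final class DateRangeWithinExample {
    static SearchResponse searchWithin(Client client) {
        RangeQueryBuilder query = QueryBuilders.rangeQuery("time_frame")
            .gte("2015-10-31 12:00:00")   // parsed with the mapping's format after the fix
            .lte("2015-11-01")
            .relation("within");          // WITHIN / CONTAINS / INTERSECTS for range fields
        return client.prepareSearch("range_index").setQuery(query).get();
    }
}
```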
{ "body": "When I use the api to define an pipeline like this:\r\n`\r\n{\r\n \"processors\" : [\r\n {\r\n \"grok\" : {\r\n \"field\": \"message\",\r\n \"patterns\": [\"...\"],\r\n \"pattern_definitions\":{\r\n \t\"JAVACLASS\":\"\\\\s*%{JAVACLASS}\r\n }\r\n }\r\n }\r\n ]\r\n}`\r\n\r\nNote the \"pattern_definitions\" parameter, I use the old value of \"JAVACLASS\" pattern to overwrite JAVACLASS. It cause the master node shut down.\r\n\r\nMaybe it is not a correct operate to define a customer pattern like this, but master node shut down immediately is a problem.", "comments": [ { "body": "Hey @ivanjz93 I tried to reproduce it here locally, but here the put pipeline api did not fail.\r\nCan you provide all the exact commands to lead to the master node shutdown? Provide any errors you saw in the log? Also what version of ES are you using?", "created_at": "2018-03-27T07:38:04Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-27T07:38:22Z" }, { "body": "Hi, @martijnvg.\r\n\r\nThe version is 6.2.3.\r\n\r\nThe command like this:\r\n`PUT _ingest/pipeline/log-filebeat-pipeline\r\n{\r\n \"description\" : \"extract structured log information\",\r\n \"processors\" : [\r\n {\r\n \"grok\" : {\r\n \"field\": \"message\",\r\n \"patterns\": [\"%{TIMESTAMP_ISO8601:time} %{THREADNAME:thread_name} %{LOGLEVEL:level} %{JAVACLASS:class} %{JAVASTACKTRACEPART1:stracktrace} %{JAVALOGMESSAGE:content}\"],\r\n \"pattern_definitions\":{\r\n \t\"THREADNAME\":\"\\\\[%{WORD}\\\\]\\\\s*\",\r\n \t\"JAVACLASS\":\"\\\\s*%{JAVACLASS}\",\r\n \t\"JAVASTACKTRACEPART1\": \"%{JAVACLASS}\\\\.%{WORD}\\\\(%{JAVAFILE:file}:%{NUMBER:line}\\\\):\"\r\n }\r\n }\r\n }\r\n ]\r\n}`\r\nAnd the error log is (omit many same lines in the end)\r\n\r\n> java.nio.channels.ClosedChannelException: null\r\n\tat io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[?:?]\r\n[2018-03-26T11:58:00,909][INFO ][o.e.c.r.a.AllocationService] [es-1] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{es-2}{7GdKsmEUT9-g-h8KYwd3_g}{imnenS_cQ2aRO-V7mdJgCQ}{nn1-ha}{172.16.0.21:9300} transport disconnected]).\r\n[2018-03-26T11:57:59,791][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [es-1] fatal error in thread [Thread-6], exiting\r\njava.lang.StackOverflowError: null\r\n\tat java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3799) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$GroupHead.match(Pattern.java:4660) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Branch.match(Pattern.java:4606) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Branch.match(Pattern.java:4604) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Branch.match(Pattern.java:4604) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$BranchConn.match(Pattern.java:4570) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$GroupTail.match(Pattern.java:4719) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Curly.match0(Pattern.java:4281) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Curly.match(Pattern.java:4236) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$GroupHead.match(Pattern.java:4660) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Branch.match(Pattern.java:4606) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Branch.match(Pattern.java:4604) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3800) ~[?:1.8.0_161]\r\n\tat java.util.regex.Pattern$Start.match(Pattern.java:3463) ~[?:1.8.0_161]\r\n\tat java.util.regex.Matcher.search(Matcher.java:1248) ~[?:1.8.0_161]\r\n\tat 
java.util.regex.Matcher.find(Matcher.java:664) ~[?:1.8.0_161]\r\n\tat java.util.Formatter.parse(Formatter.java:2549) ~[?:1.8.0_161]\r\n\tat java.util.Formatter.format(Formatter.java:2501) ~[?:1.8.0_161]\r\n\tat java.util.Formatter.format(Formatter.java:2455) ~[?:1.8.0_161]\r\n\tat java.lang.String.format(String.java:2981) ~[?:1.8.0_161]\r\n\tat org.elasticsearch.ingest.common.Grok.toRegex(Grok.java:122) ~[?:?]\r\n\tat org.elasticsearch.ingest.common.Grok.toRegex(Grok.java:127) ~[?:?]\r\n\tat org.elasticsearch.ingest.common.Grok.toRegex(Grok.java:127) ~[?:?]\r\n\t... (the org.elasticsearch.ingest.common.Grok.toRegex(Grok.java:127) frame repeats until the stack overflows)", "created_at": "2018-03-27T07:53:13Z" }, { "body": "@ivanjz93 Thanks, my node died here too. I'll look into how to fix this.", "created_at": "2018-03-27T07:59:31Z" } ], "number": 29257, "title": "Grok pipeline definition parameter \"pattern_definitions\" cause the master node shut down" }
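The stack trace above is the key clue: `Grok.toRegex` expands each `%{NAME}` reference by substituting the pattern-bank entry and recursing on the result, so a definition that references its own name never terminates. A deliberately simplified illustration of that expansion loop follows; this is a toy, not the actual Grok code, and the depth cap exists only so the sketch fails fast instead of overflowing the stack the way the node did:

```java
import java.util.HashMap;
import java.util.Map;

public class CircularExpansionDemo {
    // Toy %{NAME} expansion: substitute the bank entry and recurse on the result.
    static String expand(Map<String, String> bank, String pattern, int depth) {
        if (depth > 20) {
            throw new IllegalStateException("expansion never terminates: circular pattern reference");
        }
        int start = pattern.indexOf("%{");
        if (start == -1) {
            return pattern; // nothing left to expand
        }
        int end = pattern.indexOf('}', start);
        String name = pattern.substring(start + 2, end);
        String expanded = pattern.substring(0, start) + bank.get(name) + pattern.substring(end + 1);
        return expand(bank, expanded, depth + 1);
    }

    public static void main(String[] args) {
        Map<String, String> bank = new HashMap<>();
        // Mirrors the pattern_definitions entry from the issue: JAVACLASS refers to itself.
        bank.put("JAVACLASS", "\\s*%{JAVACLASS}");
        expand(bank, "%{JAVACLASS}", 0); // throws instead of recursing forever
    }
}
```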
{ "body": "Otherwise the grok code throws a stackoverflow error.\r\n\r\nCloses #29257", "number": 29295, "review_comments": [ { "body": "if this is an attempt to catch more things... maybe add an example with type coercion as well?\r\n\r\n```\r\nbank.put(\"NAME\", \"!!!%{NAME:name:int}!!!\");\r\n```", "created_at": "2018-03-29T23:29:04Z" }, { "body": "I feel like `pattern bank` is an internal naming convention, and externally we just call it `patterns`. not sure it matters here though, since one can only assume we are talking about the same thing.", "created_at": "2018-04-03T14:57:32Z" }, { "body": "we should probably add some javadocs here so when anyone picks this up in the future, they will have some context. it is very easy to get into the details of all the curly braces and lose track of why we are doing this.", "created_at": "2018-04-03T14:58:56Z" }, { "body": "s/fo/for?\r\n\r\nmaybe this can be simplified to just say ```pattern [<pattern>] has circular references to other pattern definitions```?", "created_at": "2018-04-03T15:00:03Z" }, { "body": "nice! I like this. super helpful for keeping track", "created_at": "2018-04-03T15:01:25Z" } ], "title": "Don't allow referencing the pattern bank name in the pattern bank" }
{ "commits": [ { "message": "ingest: Don't allow circular referencing of named patterns in the grok processor.\n\nOtherwise the grok code throws a stackoverflow error.\n\nCloses #29257" } ], "files": [ { "diff": "@@ -34,8 +34,10 @@\n import java.io.InputStreamReader;\n import java.io.UncheckedIOException;\n import java.nio.charset.StandardCharsets;\n+import java.util.ArrayList;\n import java.util.HashMap;\n import java.util.Iterator;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Collections;\n@@ -74,8 +76,6 @@ public final class Grok {\n private final Map<String, String> patternBank;\n private final boolean namedCaptures;\n private final Regex compiledExpression;\n- private final String expression;\n-\n \n public Grok(Map<String, String> patternBank, String grokPattern) {\n this(patternBank, grokPattern, true);\n@@ -86,11 +86,59 @@ public Grok(Map<String, String> patternBank, String grokPattern) {\n this.patternBank = patternBank;\n this.namedCaptures = namedCaptures;\n \n- this.expression = toRegex(grokPattern);\n+ for (Map.Entry<String, String> entry : patternBank.entrySet()) {\n+ String name = entry.getKey();\n+ String pattern = entry.getValue();\n+ forbidCircularReferences(name, new ArrayList<>(), pattern);\n+ }\n+\n+ String expression = toRegex(grokPattern);\n byte[] expressionBytes = expression.getBytes(StandardCharsets.UTF_8);\n this.compiledExpression = new Regex(expressionBytes, 0, expressionBytes.length, Option.DEFAULT, UTF8Encoding.INSTANCE);\n }\n \n+ /**\n+ * Checks whether patterns reference each other in a circular manner and if so fail with an exception\n+ *\n+ * In a pattern, anything between <code>%{</code> and <code>}</code> or <code>:</code> is considered\n+ * a reference to another named pattern. 
This method will navigate to all these named patterns and\n+ * check for a circular reference.\n+ */\n+ private void forbidCircularReferences(String patternName, List<String> path, String pattern) {\n+ if (pattern.contains(\"%{\" + patternName + \"}\") || pattern.contains(\"%{\" + patternName + \":\")) {\n+ String message;\n+ if (path.isEmpty()) {\n+ message = \"circular reference in pattern [\" + patternName + \"][\" + pattern + \"]\";\n+ } else {\n+ message = \"circular reference in pattern [\" + path.remove(path.size() - 1) + \"][\" + pattern +\n+ \"] back to pattern [\" + patternName + \"]\";\n+ // add rest of the path:\n+ if (path.isEmpty() == false) {\n+ message += \" via patterns [\" + String.join(\"=>\", path) + \"]\";\n+ }\n+ }\n+ throw new IllegalArgumentException(message);\n+ }\n+\n+ for (int i = pattern.indexOf(\"%{\"); i != -1; i = pattern.indexOf(\"%{\", i + 1)) {\n+ int begin = i + 2;\n+ int brackedIndex = pattern.indexOf('}', begin);\n+ int columnIndex = pattern.indexOf(':', begin);\n+ int end;\n+ if (brackedIndex != -1 && columnIndex == -1) {\n+ end = brackedIndex;\n+ } else if (columnIndex != -1 && brackedIndex == -1) {\n+ end = columnIndex;\n+ } else if (brackedIndex != -1 && columnIndex != -1) {\n+ end = Math.min(brackedIndex, columnIndex);\n+ } else {\n+ throw new IllegalArgumentException(\"pattern [\" + pattern + \"] has circular references to other pattern definitions\");\n+ }\n+ String otherPatternName = pattern.substring(begin, end);\n+ path.add(otherPatternName);\n+ forbidCircularReferences(patternName, path, patternBank.get(otherPatternName));\n+ }\n+ }\n \n public String groupMatch(String name, Region region, String pattern) {\n try {\n@@ -125,10 +173,12 @@ public String toRegex(String grokPattern) {\n String patternName = groupMatch(PATTERN_GROUP, region, grokPattern);\n \n String pattern = patternBank.get(patternName);\n-\n if (pattern == null) {\n throw new IllegalArgumentException(\"Unable to find pattern [\" + patternName + \"] in Grok's pattern dictionary\");\n }\n+ if (pattern.contains(\"%{\" + patternName + \"}\") || pattern.contains(\"%{\" + patternName + \":\")) {\n+ throw new IllegalArgumentException(\"circular reference in pattern back [\" + patternName + \"]\");\n+ }\n \n String grokPart;\n if (namedCaptures && subName != null) {", "filename": "libs/grok/src/main/java/org/elasticsearch/grok/Grok.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n+import java.util.TreeMap;\n \n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -205,6 +206,65 @@ public void testNoNamedCaptures() {\n assertEquals(expected, actual);\n }\n \n+ public void testCircularReference() {\n+ Exception e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new HashMap<>();\n+ bank.put(\"NAME\", \"!!!%{NAME}!!!\");\n+ String pattern = \"%{NAME}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME][!!!%{NAME}!!!]\", e.getMessage());\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new HashMap<>();\n+ bank.put(\"NAME\", \"!!!%{NAME:name}!!!\");\n+ String pattern = \"%{NAME}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME][!!!%{NAME:name}!!!]\", e.getMessage());\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new HashMap<>();\n+ 
bank.put(\"NAME\", \"!!!%{NAME:name:int}!!!\");\n+ String pattern = \"%{NAME}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME][!!!%{NAME:name:int}!!!]\", e.getMessage());\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new TreeMap<>();\n+ bank.put(\"NAME1\", \"!!!%{NAME2}!!!\");\n+ bank.put(\"NAME2\", \"!!!%{NAME1}!!!\");\n+ String pattern = \"%{NAME1}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME2][!!!%{NAME1}!!!] back to pattern [NAME1]\", e.getMessage());\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new TreeMap<>();\n+ bank.put(\"NAME1\", \"!!!%{NAME2}!!!\");\n+ bank.put(\"NAME2\", \"!!!%{NAME3}!!!\");\n+ bank.put(\"NAME3\", \"!!!%{NAME1}!!!\");\n+ String pattern = \"%{NAME1}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME3][!!!%{NAME1}!!!] back to pattern [NAME1] via patterns [NAME2]\",\n+ e.getMessage());\n+\n+ e = expectThrows(IllegalArgumentException.class, () -> {\n+ Map<String, String> bank = new TreeMap<>();\n+ bank.put(\"NAME1\", \"!!!%{NAME2}!!!\");\n+ bank.put(\"NAME2\", \"!!!%{NAME3}!!!\");\n+ bank.put(\"NAME3\", \"!!!%{NAME4}!!!\");\n+ bank.put(\"NAME4\", \"!!!%{NAME5}!!!\");\n+ bank.put(\"NAME5\", \"!!!%{NAME1}!!!\");\n+ String pattern = \"%{NAME1}\";\n+ new Grok(bank, pattern, false);\n+ });\n+ assertEquals(\"circular reference in pattern [NAME5][!!!%{NAME1}!!!] back to pattern [NAME1] \" +\n+ \"via patterns [NAME2=>NAME3=>NAME4]\", e.getMessage());\n+ }\n+\n public void testBooleanCaptures() {\n Map<String, String> bank = new HashMap<>();\n ", "filename": "libs/grok/src/test/java/org/elasticsearch/grok/GrokTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version** (`bin/elasticsearch --version`): 6.1.2\r\n\r\n**Plugins installed**: []\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nThe `_update` API doesn't seem to reject a command that is wrong. Actually, this example comes from [The Definitive Guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/partial-updates.html) (which I know it's WIP), but still the ES API should reply back with an error for such a command:\r\n\r\n```\r\nPOST /website/blog/1/_update\r\n{\r\n \"script\": \"ctx.op = ctx._source.views == params.count ? 'delete' : 'none'\",\r\n \"params\": {\r\n \"count\": 1\r\n }\r\n}\r\n```\r\n\r\nThis returns:\r\n\r\n```\r\n{\r\n \"_index\": \"website\",\r\n \"_type\": \"blog\",\r\n \"_id\": \"1\",\r\n \"_version\": 2,\r\n \"result\": \"noop\",\r\n \"_shards\": {\r\n \"total\": 0,\r\n \"successful\": 0,\r\n \"failed\": 0\r\n }\r\n}\r\n```\r\n\r\nDebugging this with `Debug.explain(params)` shows that there is no `params` registered. Which, in fact, is fine since the syntax is incorrect: there shouldn't be any `params` definitions at the root level, but inside `script` itself. The correct syntax should be:\r\n\r\n```\r\nPOST /website/blog/1/_update\r\n{\r\n \"script\": {\r\n \"source\": \"ctx.op = ctx._source.views == params.count ? 'delete' : 'none'\",\r\n \"params\": {\r\n \"count\": 1\r\n }\r\n }\r\n}\r\n```", "comments": [ { "body": "cc @elastic/es-core-infra ", "created_at": "2018-03-01T12:46:57Z" }, { "body": "For anyone interested in fixing this, the lenient code is in `UpdateRequest.fromXContent`. The if/elseifs in the while loop need to handle unknown keys. Better yet would be to convert this to ObjectParser.", "created_at": "2018-03-08T21:02:35Z" }, { "body": "Is this issue still open? I'd like to pick this up.", "created_at": "2018-03-14T19:35:47Z" }, { "body": "@nikhilbarar Please feel free to open a PR for this (or any other issues marked with `adoptme`). Thanks for picking this up!", "created_at": "2018-03-15T06:54:22Z" } ], "number": 28740, "title": "_update API should check the syntax in case of scripted updates" }
{ "body": "Reject unknown field and switch ```fromXContent``` to ```ObjectParser```. \r\n\r\nCloses #28740", "number": 29293, "review_comments": [ { "body": "I *think* you can use `PARSER.declareStringArray(UpdateRequest::fields, FIELDS_FIELD)`. It doens't reject non-array things but I think that is fine. Another choice would be to simply pitch this. It has been deprecated for a year and a half (since 1764ec56b38) and we are only going to merge this to the master branch anyway so it is going to be released during a major version so we can make breaking changes like dropping a deprecated field.", "created_at": "2018-03-30T15:29:25Z" }, { "body": "Removing the stuff under the `else` actually makes us start parsing requests with arrays very strangely. I think it is worth keeping the `throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line + \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");` part in an if statement that checks for `START_ARRAY`. And probably adding a test with an array.", "created_at": "2018-04-02T13:55:34Z" }, { "body": "Hmmm. I *think* this is true by default. Can you check what happens if you drop the `.setFetchSource(true)` part?", "created_at": "2018-04-02T14:02:46Z" }, { "body": ":heart:", "created_at": "2018-04-02T14:02:54Z" }, { "body": "Usually I do\r\n```\r\nException e = expectThrows(DocumentMissingException.class, () ->\r\n the thing);\r\nassertEquals(\"whatever\", e.getMessage());\r\n```\r\n\r\nJust to make sure it was the *right* exception. In this case there is only one kind of message for `DocumentMissingException` so it is probably safe to do what you are doing, but it still might be nice anyway.", "created_at": "2018-04-02T14:04:36Z" }, { "body": "By default the field ```fetchSource``` is null in the class UpdateRequest:\r\n```\r\n private FetchSourceContext fetchSourceContext;\r\n```\r\nIf we drop this, we'll get a AssertionError.\r\n", "created_at": "2018-04-02T14:45:25Z" }, { "body": "I see it. I was tired when I was reading it. You are right to do what you did.", "created_at": "2018-04-02T15:04:37Z" }, { "body": "Thanks! I put the ```throw``` back and add a test.", "created_at": "2018-04-03T03:35:23Z" }, { "body": "So removing this line is going to cause problems in mixed clusters. Elasticsearch versions before 7.0.0 will continue to send this array. You have to read it and decide what to do with it. Something like:\r\n```\r\n if (false == in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {\r\n String[] fields = in.readOptionalStringArray();\r\n if (fields != null) {\r\n throw new IllegalArgumentException(\"[fields] is no longer supported\");\r\n }\r\n }\r\n```\r\n\r\nis probably fine to be honest.", "created_at": "2018-04-03T15:14:49Z" }, { "body": "You'll want to do something like:\r\n```\r\nif (false == out.getVersion().onOrAfter(Version.V_7_0_0_alpha_1) {\r\n out.writeOptionalStringArray(null);\r\n}\r\n```\r\n\r\nThat way 6.x can parse the request. We know there aren't any `fields` anyway because we couldn't have parsed them.", "created_at": "2018-04-03T15:16:11Z" }, { "body": "Can you also add a note about the change you update request?\r\n\r\nI'm not *sure* this is the right file for them but you may as well put them here for now and we'll move them if we decide there is a better spot.", "created_at": "2018-04-03T15:17:21Z" } ], "title": "Using ObjectParser in UpdateRequest" }
{ "commits": [ { "message": "Using ObjectParser in UpdateRequest (#28740)" }, { "message": "Merge branch 'master' into fix-issues/28740" }, { "message": "Address comments" }, { "message": "Drop parameter fields" }, { "message": "Fix test bwc issue" }, { "message": "Fix bwc issue caused by removing fields" }, { "message": "Merge branch 'master' into fix-issues/28740" }, { "message": "Resolve conflicts" } ], "files": [ { "diff": "@@ -27,7 +27,6 @@\n import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.client.node.NodeClient;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.shard.ShardId;\n@@ -68,17 +67,15 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n String defaultIndex = request.param(\"index\");\n String defaultType = request.param(\"type\");\n String defaultRouting = request.param(\"routing\");\n- String fieldsParam = request.param(\"fields\");\n String defaultPipeline = request.param(\"pipeline\");\n- String[] defaultFields = fieldsParam != null ? Strings.commaDelimitedListToStringArray(fieldsParam) : null;\n \n String waitForActiveShards = request.param(\"wait_for_active_shards\");\n if (waitForActiveShards != null) {\n bulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards));\n }\n bulkRequest.timeout(request.paramAsTime(\"timeout\", BulkShardRequest.DEFAULT_TIMEOUT));\n bulkRequest.setRefreshPolicy(request.param(\"refresh\"));\n- bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields,\n+ bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting,\n null, defaultPipeline, null, true, request.getXContentType());\n \n // short circuit the call to the transport layer", "filename": "client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java", "status": "modified" }, { "diff": "@@ -2,7 +2,7 @@\n === Breaking API changes in 7.0\n \n ==== Camel case and underscore parameters deprecated in 6.x have been removed\n-A number of duplicate parameters deprecated in 6.x have been removed from\n+A number of duplicate parameters deprecated in 6.x have been removed from\n Bulk request, Multi Get request, Term Vectors request, and More Like This Query\n requests.\n \n@@ -22,3 +22,7 @@ The following parameters starting with underscore have been removed:\n Instead of these removed parameters, use their non camel case equivalents without\n starting underscore, e.g. use `version_type` instead of `_version_type` or `versionType`.\n \n+\n+==== The parameter `fields` deprecated in 6.x has been removed from Bulk request \n+and Update request. 
The Update API returns `400 - Bad request` if request contains \n+unknown parameters (instead of ignored in the previous version).", "filename": "docs/reference/migration/migrate_7_0/api.asciidoc", "status": "modified" }, { "diff": "@@ -37,10 +37,6 @@\n \"type\" : \"string\",\n \"description\" : \"Default document type for items which don't provide one\"\n },\n- \"fields\": {\n- \"type\": \"list\",\n- \"description\" : \"Default comma-separated list of fields to return in the response for updates, can be overridden on each sub-request\"\n- },\n \"_source\": {\n \"type\" : \"list\",\n \"description\" : \"True or false to return the _source field or not, or default list of fields to return, can be overridden on each sub-request\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/bulk.json", "status": "modified" }, { "diff": "@@ -27,10 +27,6 @@\n \"type\": \"string\",\n \"description\": \"Sets the number of shard copies that must be active before proceeding with the update operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1)\"\n },\n- \"fields\": {\n- \"type\": \"list\",\n- \"description\": \"A comma-separated list of fields to return in the response\"\n- },\n \"_source\": {\n \"type\" : \"list\",\n \"description\" : \"True or false to return the _source field or not, or a list of fields to return\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/update.json", "status": "modified" }, { "diff": "@@ -299,7 +299,7 @@ public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nu\n */\n public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType,\n @Nullable String defaultPipeline, @Nullable Object payload, XContentType xContentType) throws Exception {\n- bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true, xContentType);\n+ bulkRequest.add(data, defaultIndex, defaultType, null, null, defaultPipeline, payload, true, xContentType);\n executeIfNeeded();\n return this;\n }", "filename": "server/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java", "status": "modified" }, { "diff": "@@ -36,8 +36,6 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.logging.DeprecationLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;\n@@ -66,8 +64,6 @@\n * @see org.elasticsearch.client.Client#bulk(BulkRequest)\n */\n public class BulkRequest extends ActionRequest implements CompositeIndicesRequest, WriteRequest<BulkRequest> {\n- private static final DeprecationLogger DEPRECATION_LOGGER =\n- new DeprecationLogger(Loggers.getLogger(BulkRequest.class));\n \n private static final int REQUEST_OVERHEAD = 50;\n \n@@ -80,7 +76,6 @@ public class BulkRequest extends ActionRequest implements CompositeIndicesReques\n private static final ParseField VERSION_TYPE = new ParseField(\"version_type\");\n private static final ParseField RETRY_ON_CONFLICT = new ParseField(\"retry_on_conflict\");\n private static final ParseField PIPELINE = new ParseField(\"pipeline\");\n- 
private static final ParseField FIELDS = new ParseField(\"fields\");\n private static final ParseField SOURCE = new ParseField(\"_source\");\n \n /**\n@@ -277,20 +272,21 @@ public BulkRequest add(byte[] data, int from, int length, @Nullable String defau\n */\n public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType,\n XContentType xContentType) throws IOException {\n- return add(data, defaultIndex, defaultType, null, null, null, null, null, true, xContentType);\n+ return add(data, defaultIndex, defaultType, null, null, null, null, true, xContentType);\n }\n \n /**\n * Adds a framed data in binary format\n */\n public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex,\n XContentType xContentType) throws IOException {\n- return add(data, defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex, xContentType);\n+ return add(data, defaultIndex, defaultType, null, null, null, null, allowExplicitIndex, xContentType);\n }\n \n- public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String\n- defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String\n- defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex, XContentType xContentType) throws IOException {\n+ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType,\n+ @Nullable String defaultRouting, @Nullable FetchSourceContext defaultFetchSourceContext,\n+ @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex,\n+ XContentType xContentType) throws IOException {\n XContent xContent = xContentType.xContent();\n int line = 0;\n int from = 0;\n@@ -333,7 +329,6 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n String id = null;\n String routing = defaultRouting;\n FetchSourceContext fetchSourceContext = defaultFetchSourceContext;\n- String[] fields = defaultFields;\n String opType = null;\n long version = Versions.MATCH_ANY;\n VersionType versionType = VersionType.INTERNAL;\n@@ -371,21 +366,14 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n retryOnConflict = parser.intValue();\n } else if (PIPELINE.match(currentFieldName, parser.getDeprecationHandler())) {\n pipeline = parser.text();\n- } else if (FIELDS.match(currentFieldName, parser.getDeprecationHandler())) {\n- throw new IllegalArgumentException(\"Action/metadata line [\" + line + \"] contains a simple value for parameter [fields] while a list is expected\");\n } else if (SOURCE.match(currentFieldName, parser.getDeprecationHandler())) {\n fetchSourceContext = FetchSourceContext.fromXContent(parser);\n } else {\n throw new IllegalArgumentException(\"Action/metadata line [\" + line + \"] contains an unknown parameter [\" + currentFieldName + \"]\");\n }\n } else if (token == XContentParser.Token.START_ARRAY) {\n- if (FIELDS.match(currentFieldName, parser.getDeprecationHandler())) {\n- DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n- List<Object> values = parser.list();\n- fields = values.toArray(new String[values.size()]);\n- } else {\n- throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line + \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");\n- }\n+ 
throw new IllegalArgumentException(\"Malformed action/metadata line [\" + line +\n+ \"], expected a simple value for field [\" + currentFieldName + \"] but found [\" + token + \"]\");\n } else if (token == XContentParser.Token.START_OBJECT && SOURCE.match(currentFieldName, parser.getDeprecationHandler())) {\n fetchSourceContext = FetchSourceContext.fromXContent(parser);\n } else if (token != XContentParser.Token.VALUE_NULL) {\n@@ -435,10 +423,6 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null\n if (fetchSourceContext != null) {\n updateRequest.fetchSource(fetchSourceContext);\n }\n- if (fields != null) {\n- updateRequest.fields(fields);\n- }\n-\n IndexRequest upsertRequest = updateRequest.upsertRequest();\n if (upsertRequest != null) {\n upsertRequest.version(version);", "filename": "server/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java", "status": "modified" }, { "diff": "@@ -291,8 +291,7 @@ static BulkItemResultHolder processUpdateResponse(final UpdateRequest updateRequ\n indexResponse.getId(), indexResponse.getSeqNo(), indexResponse.getPrimaryTerm(), indexResponse.getVersion(),\n indexResponse.getResult());\n \n- if ((updateRequest.fetchSource() != null && updateRequest.fetchSource().fetchSource()) ||\n- (updateRequest.fields() != null && updateRequest.fields().length > 0)) {\n+ if (updateRequest.fetchSource() != null && updateRequest.fetchSource().fetchSource()) {\n final BytesReference indexSourceAsBytes = updateIndexRequest.source();\n final Tuple<XContentType, Map<String, Object>> sourceAndContent =\n XContentHelper.convertToMap(indexSourceAsBytes, true, updateIndexRequest.getContentType());", "filename": "server/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -180,8 +180,7 @@ protected void shardOperation(final UpdateRequest request, final ActionListener<\n bulkAction.execute(toSingleItemBulkRequest(upsertRequest), wrapBulkResponse(\n ActionListener.<IndexResponse>wrap(response -> {\n UpdateResponse update = new UpdateResponse(response.getShardInfo(), response.getShardId(), response.getType(), response.getId(), response.getSeqNo(), response.getPrimaryTerm(), response.getVersion(), response.getResult());\n- if ((request.fetchSource() != null && request.fetchSource().fetchSource()) ||\n- (request.fields() != null && request.fields().length > 0)) {\n+ if (request.fetchSource() != null && request.fetchSource().fetchSource()) {\n Tuple<XContentType, Map<String, Object>> sourceAndContent =\n XContentHelper.convertToMap(upsertSourceBytes, true, upsertRequest.getContentType());\n update.setGetResult(UpdateHelper.extractGetResult(request, request.concreteIndex(), response.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), upsertSourceBytes));", "filename": "server/src/main/java/org/elasticsearch/action/update/TransportUpdateAction.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n-import org.elasticsearch.common.document.DocumentField;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.settings.Settings;\n@@ -49,7 +48,7 @@\n import org.elasticsearch.search.lookup.SourceLookup;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n+import java.util.Collections;\n import 
java.util.HashMap;\n import java.util.Map;\n import java.util.function.LongSupplier;\n@@ -292,61 +291,33 @@ private Map<String, Object> executeScript(Script script, Map<String, Object> ctx\n \n /**\n * Applies {@link UpdateRequest#fetchSource()} to the _source of the updated document to be returned in a update response.\n- * For BWC this function also extracts the {@link UpdateRequest#fields()} from the updated document to be returned in a update response\n */\n public static GetResult extractGetResult(final UpdateRequest request, String concreteIndex, long version,\n final Map<String, Object> source, XContentType sourceContentType,\n @Nullable final BytesReference sourceAsBytes) {\n- if ((request.fields() == null || request.fields().length == 0) &&\n- (request.fetchSource() == null || request.fetchSource().fetchSource() == false)) {\n+ if (request.fetchSource() == null || request.fetchSource().fetchSource() == false) {\n return null;\n }\n- SourceLookup sourceLookup = new SourceLookup();\n- sourceLookup.setSource(source);\n- boolean sourceRequested = false;\n- Map<String, DocumentField> fields = null;\n- if (request.fields() != null && request.fields().length > 0) {\n- for (String field : request.fields()) {\n- if (field.equals(\"_source\")) {\n- sourceRequested = true;\n- continue;\n- }\n- Object value = sourceLookup.extractValue(field);\n- if (value != null) {\n- if (fields == null) {\n- fields = new HashMap<>(2);\n- }\n- DocumentField documentField = fields.get(field);\n- if (documentField == null) {\n- documentField = new DocumentField(field, new ArrayList<>(2));\n- fields.put(field, documentField);\n- }\n- documentField.getValues().add(value);\n- }\n- }\n- }\n \n BytesReference sourceFilteredAsBytes = sourceAsBytes;\n- if (request.fetchSource() != null && request.fetchSource().fetchSource()) {\n- sourceRequested = true;\n- if (request.fetchSource().includes().length > 0 || request.fetchSource().excludes().length > 0) {\n- Object value = sourceLookup.filter(request.fetchSource());\n- try {\n- final int initialCapacity = Math.min(1024, sourceAsBytes.length());\n- BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity);\n- try (XContentBuilder builder = new XContentBuilder(sourceContentType.xContent(), streamOutput)) {\n- builder.value(value);\n- sourceFilteredAsBytes = BytesReference.bytes(builder);\n- }\n- } catch (IOException e) {\n- throw new ElasticsearchException(\"Error filtering source\", e);\n+ if (request.fetchSource().includes().length > 0 || request.fetchSource().excludes().length > 0) {\n+ SourceLookup sourceLookup = new SourceLookup();\n+ sourceLookup.setSource(source);\n+ Object value = sourceLookup.filter(request.fetchSource());\n+ try {\n+ final int initialCapacity = Math.min(1024, sourceAsBytes.length());\n+ BytesStreamOutput streamOutput = new BytesStreamOutput(initialCapacity);\n+ try (XContentBuilder builder = new XContentBuilder(sourceContentType.xContent(), streamOutput)) {\n+ builder.value(value);\n+ sourceFilteredAsBytes = BytesReference.bytes(builder);\n }\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"Error filtering source\", e);\n }\n }\n \n // TODO when using delete/none, we can still return the source as bytes by generating it (using the sourceContentType)\n- return new GetResult(concreteIndex, request.type(), request.id(), version, true,\n- sourceRequested ? 
sourceFilteredAsBytes : null, fields);\n+ return new GetResult(concreteIndex, request.type(), request.id(), version, true, sourceFilteredAsBytes, Collections.emptyMap());\n }\n \n public static class Result {", "filename": "server/src/main/java/org/elasticsearch/action/update/UpdateHelper.java", "status": "modified" }, { "diff": "@@ -19,8 +19,6 @@\n \n package org.elasticsearch.action.update;\n \n-import java.util.Arrays;\n-\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.DocWriteRequest;\n@@ -30,11 +28,14 @@\n import org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.action.support.single.instance.InstanceShardOperationRequest;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n+import org.elasticsearch.common.xcontent.ObjectParser;\n import org.elasticsearch.common.xcontent.ToXContentObject;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -48,15 +49,46 @@\n import org.elasticsearch.search.fetch.subphase.FetchSourceContext;\n \n import java.io.IOException;\n-import java.util.Collections;\n import java.util.HashMap;\n-import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.action.ValidateActions.addValidationError;\n \n public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest>\n implements DocWriteRequest<UpdateRequest>, WriteRequest<UpdateRequest>, ToXContentObject {\n+ private static ObjectParser<UpdateRequest, Void> PARSER;\n+\n+ private static final ParseField SCRIPT_FIELD = new ParseField(\"script\");\n+ private static final ParseField SCRIPTED_UPSERT_FIELD = new ParseField(\"scripted_upsert\");\n+ private static final ParseField UPSERT_FIELD = new ParseField(\"upsert\");\n+ private static final ParseField DOC_FIELD = new ParseField(\"doc\");\n+ private static final ParseField DOC_AS_UPSERT_FIELD = new ParseField(\"doc_as_upsert\");\n+ private static final ParseField DETECT_NOOP_FIELD = new ParseField(\"detect_noop\");\n+ private static final ParseField SOURCE_FIELD = new ParseField(\"_source\");\n+\n+ static {\n+ PARSER = new ObjectParser<>(UpdateRequest.class.getSimpleName());\n+ PARSER.declareField((request, script) -> request.script = script,\n+ (parser, context) -> Script.parse(parser), SCRIPT_FIELD, ObjectParser.ValueType.OBJECT_OR_STRING);\n+ PARSER.declareBoolean(UpdateRequest::scriptedUpsert, SCRIPTED_UPSERT_FIELD);\n+ PARSER.declareObject((request, builder) -> request.safeUpsertRequest().source(builder),\n+ (parser, context) -> {\n+ XContentBuilder builder = XContentFactory.contentBuilder(parser.contentType());\n+ builder.copyCurrentStructure(parser);\n+ return builder;\n+ }, UPSERT_FIELD);\n+ PARSER.declareObject((request, builder) -> request.safeDoc().source(builder),\n+ (parser, context) -> {\n+ XContentBuilder docBuilder = XContentFactory.contentBuilder(parser.contentType());\n+ docBuilder.copyCurrentStructure(parser);\n+ return docBuilder;\n+ }, DOC_FIELD);\n+ PARSER.declareBoolean(UpdateRequest::docAsUpsert, DOC_AS_UPSERT_FIELD);\n+ 
PARSER.declareBoolean(UpdateRequest::detectNoop, DETECT_NOOP_FIELD);\n+ PARSER.declareField(UpdateRequest::fetchSource,\n+ (parser, context) -> FetchSourceContext.fromXContent(parser), SOURCE_FIELD,\n+ ObjectParser.ValueType.OBJECT_ARRAY_BOOLEAN_OR_STRING);\n+ }\n \n private String type;\n private String id;\n@@ -66,7 +98,6 @@ public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest>\n @Nullable\n Script script;\n \n- private String[] fields;\n private FetchSourceContext fetchSourceContext;\n \n private long version = Versions.MATCH_ANY;\n@@ -365,16 +396,6 @@ public UpdateRequest script(String script, @Nullable String scriptLang, ScriptTy\n return this;\n }\n \n- /**\n- * Explicitly specify the fields that will be returned. By default, nothing is returned.\n- * @deprecated Use {@link UpdateRequest#fetchSource(String[], String[])} instead\n- */\n- @Deprecated\n- public UpdateRequest fields(String... fields) {\n- this.fields = fields;\n- return this;\n- }\n-\n /**\n * Indicate that _source should be returned with every hit, with an\n * \"include\" and/or \"exclude\" set which can include simple wildcard\n@@ -389,7 +410,9 @@ public UpdateRequest fields(String... fields) {\n */\n public UpdateRequest fetchSource(@Nullable String include, @Nullable String exclude) {\n FetchSourceContext context = this.fetchSourceContext == null ? FetchSourceContext.FETCH_SOURCE : this.fetchSourceContext;\n- this.fetchSourceContext = new FetchSourceContext(context.fetchSource(), new String[] {include}, new String[]{exclude});\n+ String[] includes = include == null ? Strings.EMPTY_ARRAY : new String[]{include};\n+ String[] excludes = exclude == null ? Strings.EMPTY_ARRAY : new String[]{exclude};\n+ this.fetchSourceContext = new FetchSourceContext(context.fetchSource(), includes, excludes);\n return this;\n }\n \n@@ -428,16 +451,6 @@ public UpdateRequest fetchSource(FetchSourceContext context) {\n return this;\n }\n \n-\n- /**\n- * Get the fields to be returned.\n- * @deprecated Use {@link UpdateRequest#fetchSource()} instead\n- */\n- @Deprecated\n- public String[] fields() {\n- return fields;\n- }\n-\n /**\n * Gets the {@link FetchSourceContext} which defines how the _source should\n * be fetched.\n@@ -707,49 +720,7 @@ public boolean detectNoop() {\n }\n \n public UpdateRequest fromXContent(XContentParser parser) throws IOException {\n- Script script = null;\n- XContentParser.Token token = parser.nextToken();\n- if (token == null) {\n- return this;\n- }\n- String currentFieldName = null;\n- while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- currentFieldName = parser.currentName();\n- } else if (\"script\".equals(currentFieldName)) {\n- script = Script.parse(parser);\n- } else if (\"scripted_upsert\".equals(currentFieldName)) {\n- scriptedUpsert = parser.booleanValue();\n- } else if (\"upsert\".equals(currentFieldName)) {\n- XContentBuilder builder = XContentFactory.contentBuilder(parser.contentType());\n- builder.copyCurrentStructure(parser);\n- safeUpsertRequest().source(builder);\n- } else if (\"doc\".equals(currentFieldName)) {\n- XContentBuilder docBuilder = XContentFactory.contentBuilder(parser.contentType());\n- docBuilder.copyCurrentStructure(parser);\n- safeDoc().source(docBuilder);\n- } else if (\"doc_as_upsert\".equals(currentFieldName)) {\n- docAsUpsert(parser.booleanValue());\n- } else if (\"detect_noop\".equals(currentFieldName)) {\n- detectNoop(parser.booleanValue());\n- } else if 
(\"fields\".equals(currentFieldName)) {\n- List<Object> fields = null;\n- if (token == XContentParser.Token.START_ARRAY) {\n- fields = (List) parser.list();\n- } else if (token.isValue()) {\n- fields = Collections.singletonList(parser.text());\n- }\n- if (fields != null) {\n- fields(fields.toArray(new String[fields.size()]));\n- }\n- } else if (\"_source\".equals(currentFieldName)) {\n- fetchSourceContext = FetchSourceContext.fromXContent(parser);\n- }\n- }\n- if (script != null) {\n- this.script = script;\n- }\n- return this;\n+ return PARSER.parse(parser, this, null);\n }\n \n public boolean docAsUpsert() {\n@@ -789,7 +760,12 @@ public void readFrom(StreamInput in) throws IOException {\n doc = new IndexRequest();\n doc.readFrom(in);\n }\n- fields = in.readOptionalStringArray();\n+ if (in.getVersion().before(Version.V_7_0_0_alpha1)) {\n+ String[] fields = in.readOptionalStringArray();\n+ if (fields != null) {\n+ throw new IllegalArgumentException(\"[fields] is no longer supported\");\n+ }\n+ }\n fetchSourceContext = in.readOptionalWriteable(FetchSourceContext::new);\n if (in.readBoolean()) {\n upsertRequest = new IndexRequest();\n@@ -812,7 +788,7 @@ public void writeTo(StreamOutput out) throws IOException {\n if (out.getVersion().before(Version.V_7_0_0_alpha1)) {\n out.writeOptionalString(null); // _parent\n }\n- \n+\n boolean hasScript = script != null;\n out.writeBoolean(hasScript);\n if (hasScript) {\n@@ -830,7 +806,9 @@ public void writeTo(StreamOutput out) throws IOException {\n doc.id(id);\n doc.writeTo(out);\n }\n- out.writeOptionalStringArray(fields);\n+ if (out.getVersion().before(Version.V_7_0_0_alpha1)) {\n+ out.writeOptionalStringArray(null);\n+ }\n out.writeOptionalWriteable(fetchSourceContext);\n if (upsertRequest == null) {\n out.writeBoolean(false);\n@@ -880,9 +858,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (detectNoop == false) {\n builder.field(\"detect_noop\", detectNoop);\n }\n- if (fields != null) {\n- builder.array(\"fields\", fields);\n- }\n if (fetchSourceContext != null) {\n builder.field(\"_source\", fetchSourceContext);\n }\n@@ -908,9 +883,6 @@ public String toString() {\n }\n res.append(\", scripted_upsert[\").append(scriptedUpsert).append(\"]\");\n res.append(\", detect_noop[\").append(detectNoop).append(\"]\");\n- if (fields != null) {\n- res.append(\", fields[\").append(Arrays.toString(fields)).append(\"]\");\n- }\n return res.append(\"}\").toString();\n }\n }", "filename": "server/src/main/java/org/elasticsearch/action/update/UpdateRequest.java", "status": "modified" }, { "diff": "@@ -26,20 +26,15 @@\n import org.elasticsearch.action.support.single.instance.InstanceShardOperationRequestBuilder;\n import org.elasticsearch.client.ElasticsearchClient;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.logging.DeprecationLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.index.VersionType;\n-import org.elasticsearch.rest.action.document.RestUpdateAction;\n import org.elasticsearch.script.Script;\n \n import java.util.Map;\n \n public class UpdateRequestBuilder extends InstanceShardOperationRequestBuilder<UpdateRequest, UpdateResponse, UpdateRequestBuilder>\n implements WriteRequestBuilder<UpdateRequestBuilder> {\n- private static final DeprecationLogger DEPRECATION_LOGGER =\n- new 
DeprecationLogger(Loggers.getLogger(RestUpdateAction.class));\n \n public UpdateRequestBuilder(ElasticsearchClient client, UpdateAction action) {\n super(client, action, new UpdateRequest());\n@@ -87,17 +82,6 @@ public UpdateRequestBuilder setScript(Script script) {\n return this;\n }\n \n- /**\n- * Explicitly specify the fields that will be returned. By default, nothing is returned.\n- * @deprecated Use {@link UpdateRequestBuilder#setFetchSource(String[], String[])} instead\n- */\n- @Deprecated\n- public UpdateRequestBuilder setFields(String... fields) {\n- DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n- request.fields(fields);\n- return this;\n- }\n-\n /**\n * Indicate that _source should be returned with every hit, with an\n * \"include\" and/or \"exclude\" set which can include simple wildcard", "filename": "server/src/main/java/org/elasticsearch/action/update/UpdateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -24,9 +24,6 @@\n import org.elasticsearch.action.support.ActiveShardCount;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.client.node.NodeClient;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.logging.DeprecationLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.rest.BaseRestHandler;\n import org.elasticsearch.rest.RestController;\n@@ -49,8 +46,6 @@\n * </pre>\n */\n public class RestBulkAction extends BaseRestHandler {\n- private static final DeprecationLogger DEPRECATION_LOGGER =\n- new DeprecationLogger(Loggers.getLogger(RestBulkAction.class));\n \n private final boolean allowExplicitIndex;\n \n@@ -79,19 +74,14 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n String defaultType = request.param(\"type\");\n String defaultRouting = request.param(\"routing\");\n FetchSourceContext defaultFetchSourceContext = FetchSourceContext.parseFromRestRequest(request);\n- String fieldsParam = request.param(\"fields\");\n- if (fieldsParam != null) {\n- DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n- }\n- String[] defaultFields = fieldsParam != null ? 
Strings.commaDelimitedListToStringArray(fieldsParam) : null;\n String defaultPipeline = request.param(\"pipeline\");\n String waitForActiveShards = request.param(\"wait_for_active_shards\");\n if (waitForActiveShards != null) {\n bulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards));\n }\n bulkRequest.timeout(request.paramAsTime(\"timeout\", BulkShardRequest.DEFAULT_TIMEOUT));\n bulkRequest.setRefreshPolicy(request.param(\"refresh\"));\n- bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields,\n+ bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting,\n defaultFetchSourceContext, defaultPipeline, null, allowExplicitIndex, request.getXContentType());\n \n return channel -> client.bulk(bulkRequest, new RestStatusToXContentListener<>(channel));", "filename": "server/src/main/java/org/elasticsearch/rest/action/document/RestBulkAction.java", "status": "modified" }, { "diff": "@@ -23,9 +23,6 @@\n import org.elasticsearch.action.support.ActiveShardCount;\n import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.client.node.NodeClient;\n-import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.logging.DeprecationLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.rest.BaseRestHandler;\n@@ -40,8 +37,6 @@\n import static org.elasticsearch.rest.RestRequest.Method.POST;\n \n public class RestUpdateAction extends BaseRestHandler {\n- private static final DeprecationLogger DEPRECATION_LOGGER =\n- new DeprecationLogger(Loggers.getLogger(RestUpdateAction.class));\n \n public RestUpdateAction(Settings settings, RestController controller) {\n super(settings);\n@@ -65,15 +60,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC\n }\n updateRequest.docAsUpsert(request.paramAsBoolean(\"doc_as_upsert\", updateRequest.docAsUpsert()));\n FetchSourceContext fetchSourceContext = FetchSourceContext.parseFromRestRequest(request);\n- String sField = request.param(\"fields\");\n- if (sField != null && fetchSourceContext != null) {\n- throw new IllegalArgumentException(\"[fields] and [_source] cannot be used in the same request\");\n- }\n- if (sField != null) {\n- DEPRECATION_LOGGER.deprecated(\"Deprecated field [fields] used, expected [_source] instead\");\n- String[] sFields = Strings.splitStringByCommaToArray(sField);\n- updateRequest.fields(sFields);\n- } else if (fetchSourceContext != null) {\n+ if (fetchSourceContext != null) {\n updateRequest.fetchSource(fetchSourceContext);\n }\n ", "filename": "server/src/main/java/org/elasticsearch/rest/action/document/RestUpdateAction.java", "status": "modified" }, { "diff": "@@ -94,33 +94,31 @@ public void testSimpleBulk4() throws Exception {\n BulkRequest bulkRequest = new BulkRequest();\n bulkRequest.add(bulkAction.getBytes(StandardCharsets.UTF_8), 0, bulkAction.length(), null, null, XContentType.JSON);\n assertThat(bulkRequest.numberOfActions(), equalTo(4));\n- assertThat(((UpdateRequest) bulkRequest.requests().get(0)).id(), equalTo(\"1\"));\n+ assertThat(bulkRequest.requests().get(0).id(), equalTo(\"1\"));\n assertThat(((UpdateRequest) bulkRequest.requests().get(0)).retryOnConflict(), equalTo(2));\n assertThat(((UpdateRequest) bulkRequest.requests().get(0)).doc().source().utf8ToString(), equalTo(\"{\\\"field\\\":\\\"value\\\"}\"));\n- assertThat(((UpdateRequest) 
bulkRequest.requests().get(1)).id(), equalTo(\"0\"));\n- assertThat(((UpdateRequest) bulkRequest.requests().get(1)).type(), equalTo(\"type1\"));\n- assertThat(((UpdateRequest) bulkRequest.requests().get(1)).index(), equalTo(\"index1\"));\n+ assertThat(bulkRequest.requests().get(1).id(), equalTo(\"0\"));\n+ assertThat(bulkRequest.requests().get(1).type(), equalTo(\"type1\"));\n+ assertThat(bulkRequest.requests().get(1).index(), equalTo(\"index1\"));\n Script script = ((UpdateRequest) bulkRequest.requests().get(1)).script();\n assertThat(script, notNullValue());\n assertThat(script.getIdOrCode(), equalTo(\"counter += param1\"));\n assertThat(script.getLang(), equalTo(\"javascript\"));\n Map<String, Object> scriptParams = script.getParams();\n assertThat(scriptParams, notNullValue());\n assertThat(scriptParams.size(), equalTo(1));\n- assertThat(((Integer) scriptParams.get(\"param1\")), equalTo(1));\n+ assertThat(scriptParams.get(\"param1\"), equalTo(1));\n assertThat(((UpdateRequest) bulkRequest.requests().get(1)).upsertRequest().source().utf8ToString(), equalTo(\"{\\\"counter\\\":1}\"));\n }\n \n public void testBulkAllowExplicitIndex() throws Exception {\n- String bulkAction = copyToStringFromClasspath(\"/org/elasticsearch/action/bulk/simple-bulk.json\");\n- try {\n- new BulkRequest().add(new BytesArray(bulkAction.getBytes(StandardCharsets.UTF_8)), null, null, false, XContentType.JSON);\n- fail();\n- } catch (Exception e) {\n-\n- }\n+ String bulkAction1 = copyToStringFromClasspath(\"/org/elasticsearch/action/bulk/simple-bulk.json\");\n+ Exception ex = expectThrows(Exception.class,\n+ () -> new BulkRequest().add(\n+ new BytesArray(bulkAction1.getBytes(StandardCharsets.UTF_8)), null, null, false, XContentType.JSON));\n+ assertEquals(\"explicit index in bulk is not allowed\", ex.getMessage());\n \n- bulkAction = copyToStringFromClasspath(\"/org/elasticsearch/action/bulk/simple-bulk5.json\");\n+ String bulkAction = copyToStringFromClasspath(\"/org/elasticsearch/action/bulk/simple-bulk5.json\");\n new BulkRequest().add(new BytesArray(bulkAction.getBytes(StandardCharsets.UTF_8)), \"test\", null, false, XContentType.JSON);\n }\n \n@@ -177,6 +175,16 @@ public void testSimpleBulk10() throws Exception {\n assertThat(bulkRequest.numberOfActions(), equalTo(9));\n }\n \n+ public void testBulkActionShouldNotContainArray() throws Exception {\n+ String bulkAction = \"{ \\\"index\\\":{\\\"_index\\\":[\\\"index1\\\", \\\"index2\\\"],\\\"_type\\\":\\\"type1\\\",\\\"_id\\\":\\\"1\\\"} }\\r\\n\"\n+ + \"{ \\\"field1\\\" : \\\"value1\\\" }\\r\\n\";\n+ BulkRequest bulkRequest = new BulkRequest();\n+ IllegalArgumentException exc = expectThrows(IllegalArgumentException.class,\n+ () -> bulkRequest.add(bulkAction.getBytes(StandardCharsets.UTF_8), 0, bulkAction.length(), null, null, XContentType.JSON));\n+ assertEquals(exc.getMessage(), \"Malformed action/metadata line [1]\" +\n+ \", expected a simple value for field [_index] but found [START_ARRAY]\");\n+ }\n+\n public void testBulkEmptyObject() throws Exception {\n String bulkIndexAction = \"{ \\\"index\\\":{\\\"_index\\\":\\\"test\\\",\\\"_type\\\":\\\"type1\\\",\\\"_id\\\":\\\"1\\\"} }\\r\\n\";\n String bulkIndexSource = \"{ \\\"field1\\\" : \\\"value1\\\" }\\r\\n\";\n@@ -299,7 +307,7 @@ public void testToValidateUpsertRequestAndVersionInBulkRequest() throws IOExcept\n out.write(xContentType.xContent().streamSeparator());\n try(XContentBuilder builder = XContentFactory.contentBuilder(xContentType, out)) {\n builder.startObject();\n- builder.field(\"doc\", 
\"{}\");\n+ builder.startObject(\"doc\").endObject();\n Map<String,Object> values = new HashMap<>();\n values.put(\"version\", 2L);\n values.put(\"_index\", \"index\");", "filename": "server/src/test/java/org/elasticsearch/action/bulk/BulkRequestTests.java", "status": "modified" }, { "diff": "@@ -260,13 +260,13 @@ public void testBulkUpdateMalformedScripts() throws Exception {\n assertThat(bulkResponse.getItems().length, equalTo(3));\n \n bulkResponse = client().prepareBulk()\n- .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"1\").setFields(\"field\")\n+ .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"1\").setFetchSource(\"field\", null)\n .setScript(new Script(\n ScriptType.INLINE, CustomScriptPlugin.NAME, \"throw script exception on unknown var\", Collections.emptyMap())))\n- .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"2\").setFields(\"field\")\n+ .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"2\").setFetchSource(\"field\", null)\n .setScript(new Script(\n ScriptType.INLINE, CustomScriptPlugin.NAME, \"ctx._source.field += 1\", Collections.emptyMap())))\n- .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"3\").setFields(\"field\")\n+ .add(client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(\"3\").setFetchSource(\"field\", null)\n .setScript(new Script(\n ScriptType.INLINE, CustomScriptPlugin.NAME, \"throw script exception on unknown var\", Collections.emptyMap())))\n .execute().actionGet();\n@@ -279,7 +279,7 @@ public void testBulkUpdateMalformedScripts() throws Exception {\n \n assertThat(bulkResponse.getItems()[1].getResponse().getId(), equalTo(\"2\"));\n assertThat(bulkResponse.getItems()[1].getResponse().getVersion(), equalTo(2L));\n- assertThat(((UpdateResponse) bulkResponse.getItems()[1].getResponse()).getGetResult().field(\"field\").getValue(), equalTo(2));\n+ assertThat(((UpdateResponse) bulkResponse.getItems()[1].getResponse()).getGetResult().sourceAsMap().get(\"field\"), equalTo(2));\n assertThat(bulkResponse.getItems()[1].getFailure(), nullValue());\n \n assertThat(bulkResponse.getItems()[2].getFailure().getId(), equalTo(\"3\"));\n@@ -303,7 +303,7 @@ public void testBulkUpdateLargerVolume() throws Exception {\n builder.add(\n client().prepareUpdate()\n .setIndex(\"test\").setType(\"type1\").setId(Integer.toString(i))\n- .setFields(\"counter\")\n+ .setFetchSource(\"counter\", null)\n .setScript(script)\n .setUpsert(jsonBuilder().startObject().field(\"counter\", 1).endObject()));\n }\n@@ -319,7 +319,7 @@ public void testBulkUpdateLargerVolume() throws Exception {\n assertThat(response.getItems()[i].getOpType(), equalTo(OpType.UPDATE));\n assertThat(response.getItems()[i].getResponse().getId(), equalTo(Integer.toString(i)));\n assertThat(response.getItems()[i].getResponse().getVersion(), equalTo(1L));\n- assertThat(((UpdateResponse) response.getItems()[i].getResponse()).getGetResult().field(\"counter\").getValue(), equalTo(1));\n+ assertThat(((UpdateResponse) response.getItems()[i].getResponse()).getGetResult().sourceAsMap().get(\"counter\"), equalTo(1));\n \n for (int j = 0; j < 5; j++) {\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", Integer.toString(i)).execute()\n@@ -333,7 +333,7 @@ public void testBulkUpdateLargerVolume() throws Exception {\n builder = client().prepareBulk();\n for (int i = 0; i < numDocs; i++) {\n UpdateRequestBuilder updateBuilder = 
client().prepareUpdate().setIndex(\"test\").setType(\"type1\").setId(Integer.toString(i))\n- .setFields(\"counter\");\n+ .setFetchSource(\"counter\", null);\n if (i % 2 == 0) {\n updateBuilder.setScript(script);\n } else {\n@@ -357,7 +357,7 @@ public void testBulkUpdateLargerVolume() throws Exception {\n assertThat(response.getItems()[i].getOpType(), equalTo(OpType.UPDATE));\n assertThat(response.getItems()[i].getResponse().getId(), equalTo(Integer.toString(i)));\n assertThat(response.getItems()[i].getResponse().getVersion(), equalTo(2L));\n- assertThat(((UpdateResponse) response.getItems()[i].getResponse()).getGetResult().field(\"counter\").getValue(), equalTo(2));\n+ assertThat(((UpdateResponse) response.getItems()[i].getResponse()).getGetResult().sourceAsMap().get(\"counter\"), equalTo(2));\n }\n \n builder = client().prepareBulk();", "filename": "server/src/test/java/org/elasticsearch/action/bulk/BulkWithUpdatesIT.java", "status": "modified" }, { "diff": "@@ -61,7 +61,6 @@\n import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;\n import static org.elasticsearch.script.MockScriptEngine.mockInlineScript;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXContentEquivalent;\n-import static org.hamcrest.Matchers.arrayContaining;\n import static org.hamcrest.Matchers.contains;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n@@ -277,17 +276,26 @@ public void testFromXContent() throws Exception {\n assertThat(((Map) doc.get(\"compound\")).get(\"field2\").toString(), equalTo(\"value2\"));\n }\n \n- // Related to issue 15338\n- public void testFieldsParsing() throws Exception {\n- UpdateRequest request = new UpdateRequest(\"test\", \"type1\", \"1\").fromXContent(\n- createParser(JsonXContent.jsonXContent, new BytesArray(\"{\\\"doc\\\": {\\\"field1\\\": \\\"value1\\\"}, \\\"fields\\\": \\\"_source\\\"}\")));\n- assertThat(request.doc().sourceAsMap().get(\"field1\").toString(), equalTo(\"value1\"));\n- assertThat(request.fields(), arrayContaining(\"_source\"));\n-\n- request = new UpdateRequest(\"test\", \"type2\", \"2\").fromXContent(createParser(JsonXContent.jsonXContent,\n- new BytesArray(\"{\\\"doc\\\": {\\\"field2\\\": \\\"value2\\\"}, \\\"fields\\\": [\\\"field1\\\", \\\"field2\\\"]}\")));\n- assertThat(request.doc().sourceAsMap().get(\"field2\").toString(), equalTo(\"value2\"));\n- assertThat(request.fields(), arrayContaining(\"field1\", \"field2\"));\n+ public void testUnknownFieldParsing() throws Exception {\n+ UpdateRequest request = new UpdateRequest(\"test\", \"type\", \"1\");\n+ XContentParser contentParser = createParser(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"unknown_field\", \"test\")\n+ .endObject());\n+\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> request.fromXContent(contentParser));\n+ assertEquals(\"[UpdateRequest] unknown field [unknown_field], parser not found\", ex.getMessage());\n+\n+ UpdateRequest request2 = new UpdateRequest(\"test\", \"type\", \"1\");\n+ XContentParser unknownObject = createParser(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"script\", \"ctx.op = ctx._source.views == params.count ? 
'delete' : 'none'\")\n+ .startObject(\"params\")\n+ .field(\"count\", 1)\n+ .endObject()\n+ .endObject());\n+ ex = expectThrows(IllegalArgumentException.class, () -> request2.fromXContent(unknownObject));\n+ assertEquals(\"[UpdateRequest] unknown field [params], parser not found\", ex.getMessage());\n }\n \n public void testFetchSourceParsing() throws Exception {\n@@ -444,13 +452,6 @@ public void testToAndFromXContent() throws IOException {\n BytesReference source = RandomObjects.randomSource(random(), xContentType);\n updateRequest.upsert(new IndexRequest().source(source, xContentType));\n }\n- if (randomBoolean()) {\n- String[] fields = new String[randomIntBetween(0, 5)];\n- for (int i = 0; i < fields.length; i++) {\n- fields[i] = randomAlphaOfLength(5);\n- }\n- updateRequest.fields(fields);\n- }\n if (randomBoolean()) {\n if (randomBoolean()) {\n updateRequest.fetchSource(randomBoolean());\n@@ -487,10 +488,8 @@ public void testToAndFromXContent() throws IOException {\n \n assertEquals(updateRequest.detectNoop(), parsedUpdateRequest.detectNoop());\n assertEquals(updateRequest.docAsUpsert(), parsedUpdateRequest.docAsUpsert());\n- assertEquals(updateRequest.docAsUpsert(), parsedUpdateRequest.docAsUpsert());\n assertEquals(updateRequest.script(), parsedUpdateRequest.script());\n assertEquals(updateRequest.scriptedUpsert(), parsedUpdateRequest.scriptedUpsert());\n- assertArrayEquals(updateRequest.fields(), parsedUpdateRequest.fields());\n assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource());\n \n BytesReference finalBytes = toXContent(parsedUpdateRequest, xContentType, humanReadable);", "filename": "server/src/test/java/org/elasticsearch/action/update/UpdateRequestTests.java", "status": "modified" }, { "diff": "@@ -225,7 +225,7 @@ public void testUpsertDoc() throws Exception {\n UpdateResponse updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\")\n .setDoc(XContentFactory.jsonBuilder().startObject().field(\"bar\", \"baz\").endObject())\n .setDocAsUpsert(true)\n- .setFields(\"_source\")\n+ .setFetchSource(true)\n .execute().actionGet();\n assertThat(updateResponse.getIndex(), equalTo(\"test\"));\n assertThat(updateResponse.getGetResult(), notNullValue());\n@@ -241,7 +241,7 @@ public void testNotUpsertDoc() throws Exception {\n assertThrows(client().prepareUpdate(indexOrAlias(), \"type1\", \"1\")\n .setDoc(XContentFactory.jsonBuilder().startObject().field(\"bar\", \"baz\").endObject())\n .setDocAsUpsert(false)\n- .setFields(\"_source\")\n+ .setFetchSource(true)\n .execute(), DocumentMissingException.class);\n }\n \n@@ -264,7 +264,7 @@ public void testUpsertFields() throws Exception {\n updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\")\n .setUpsert(XContentFactory.jsonBuilder().startObject().field(\"bar\", \"baz\").endObject())\n .setScript(new Script(ScriptType.INLINE, UPDATE_SCRIPTS, PUT_VALUES_SCRIPT, Collections.singletonMap(\"extra\", \"foo\")))\n- .setFields(\"_source\")\n+ .setFetchSource(true)\n .execute().actionGet();\n \n assertThat(updateResponse.getIndex(), equalTo(\"test\"));\n@@ -293,12 +293,9 @@ public void testUpdate() throws Exception {\n ensureGreen();\n \n Script fieldIncScript = new Script(ScriptType.INLINE, UPDATE_SCRIPTS, FIELD_INC_SCRIPT, Collections.singletonMap(\"field\", \"field\"));\n- try {\n- client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setScript(fieldIncScript).execute().actionGet();\n- fail();\n- } catch (DocumentMissingException e) {\n- // all is well\n- }\n+ DocumentMissingException 
ex = expectThrows(DocumentMissingException.class,\n+ () -> client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setScript(fieldIncScript).execute().actionGet());\n+ assertEquals(\"[type1][1]: document missing\", ex.getMessage());\n \n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", 1).execute().actionGet();\n \n@@ -353,19 +350,6 @@ public void testUpdate() throws Exception {\n assertThat(getResponse.isExists(), equalTo(false));\n }\n \n- // check fields parameter\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", 1).execute().actionGet();\n- updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\")\n- .setScript(fieldIncScript)\n- .setFields(\"field\")\n- .setFetchSource(true)\n- .execute().actionGet();\n- assertThat(updateResponse.getIndex(), equalTo(\"test\"));\n- assertThat(updateResponse.getGetResult(), notNullValue());\n- assertThat(updateResponse.getGetResult().getIndex(), equalTo(\"test\"));\n- assertThat(updateResponse.getGetResult().sourceRef(), notNullValue());\n- assertThat(updateResponse.getGetResult().field(\"field\").getValue(), notNullValue());\n-\n // check _source parameter\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", 1, \"field2\", 2).execute().actionGet();\n updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\")\n@@ -383,15 +367,15 @@ public void testUpdate() throws Exception {\n // check updates without script\n // add new field\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", 1).execute().actionGet();\n- updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"field2\", 2).endObject()).execute().actionGet();\n+ client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"field2\", 2).endObject()).execute().actionGet();\n for (int i = 0; i < 5; i++) {\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet();\n assertThat(getResponse.getSourceAsMap().get(\"field\").toString(), equalTo(\"1\"));\n assertThat(getResponse.getSourceAsMap().get(\"field2\").toString(), equalTo(\"2\"));\n }\n \n // change existing field\n- updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"field\", 3).endObject()).execute().actionGet();\n+ client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"field\", 3).endObject()).execute().actionGet();\n for (int i = 0; i < 5; i++) {\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet();\n assertThat(getResponse.getSourceAsMap().get(\"field\").toString(), equalTo(\"3\"));\n@@ -409,7 +393,7 @@ public void testUpdate() throws Exception {\n testMap.put(\"map1\", 8);\n \n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"map\", testMap).execute().actionGet();\n- updateResponse = client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"map\", testMap3).endObject()).execute().actionGet();\n+ client().prepareUpdate(indexOrAlias(), \"type1\", \"1\").setDoc(XContentFactory.jsonBuilder().startObject().field(\"map\", testMap3).endObject()).execute().actionGet();\n for (int i = 0; i < 5; i++) {\n GetResponse getResponse = client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet();\n Map map1 = (Map) 
getResponse.getSourceAsMap().get(\"map\");\n@@ -581,7 +565,7 @@ public void run() {\n assertThat(response.getId(), equalTo(Integer.toString(i)));\n assertThat(response.isExists(), equalTo(true));\n assertThat(response.getVersion(), equalTo((long) numberOfThreads));\n- assertThat((Integer) response.getSource().get(\"field\"), equalTo(numberOfThreads));\n+ assertThat(response.getSource().get(\"field\"), equalTo(numberOfThreads));\n }\n }\n ", "filename": "server/src/test/java/org/elasticsearch/update/UpdateIT.java", "status": "modified" }, { "diff": "@@ -248,7 +248,7 @@ private UpdateResponse update(Boolean detectNoop, long expectedVersion, XContent\n UpdateRequestBuilder updateRequest = client().prepareUpdate(\"test\", \"type1\", \"1\")\n .setDoc(xContentBuilder)\n .setDocAsUpsert(true)\n- .setFields(\"_source\");\n+ .setFetchSource(true);\n if (detectNoop != null) {\n updateRequest.setDetectNoop(detectNoop);\n }", "filename": "server/src/test/java/org/elasticsearch/update/UpdateNoopIT.java", "status": "modified" } ] }
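The diffs above replace the removed `fields` option on update and bulk requests with `_source` filtering. As a rough sketch (not taken verbatim from the PR), the migration on the transport client looks like the following; the index, type, id, and field names are placeholders:

```java
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentFactory;

import java.io.IOException;

class UpdateFetchSourceExample {
    // Fetch the updated partial source back via _source filtering instead of the removed [fields] option.
    static Object updateAndReadBack(Client client) throws IOException {
        UpdateResponse response = client.prepareUpdate("test", "type1", "1")
                .setDoc(XContentFactory.jsonBuilder().startObject().field("field2", 2).endObject())
                .setFetchSource("field2", null) // was: setFields("field2")
                .get();
        // The value now comes from the returned source, not from stored fields.
        return response.getGetResult().sourceAsMap().get("field2");
    }
}
```

As the test changes show, reading values from `getGetResult().sourceAsMap()` replaces the old `getGetResult().field(...).getValue()` accessor that relied on stored fields.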
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**:\r\n5.1.2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nHere is the search I want to run:\r\n\r\n```\r\nPOST stores_search/_search?\r\n{\r\n \"suggest\": {\r\n \"store-suggest\": {\r\n \"prefix\": \"Combs\",\r\n \"completion\": {\r\n \"field\": \"suggest\",\r\n \"fuzzy\": {\r\n \"fuzziness\": \"AUTO\",\r\n \"transpositions\": true\r\n },\r\n \"size\": 100,\r\n \"contexts\": {\r\n \"location\": [\r\n {\r\n \"lat\": 47.6062,\r\n \"lon\": -122.3321,\r\n \"precision\": \"100m\"\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI need to provide my precision in distance. The docs state this is possible:\r\n```\r\nThe precision of the geohash to encode the query geo point. This can be specified as a distance value (5m, 10km etc.), or as a raw geohash precision (1..12). Defaults to index time precision level.\r\n```\r\n\r\nBut instead of getting stores within that geo location, I get this error response:\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 57\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query_fetch\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"stores_search\",\r\n \"node\": \"Se4Iz_2cS4OGdz33kndgGg\",\r\n \"reason\": {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 57,\r\n \"caused_by\": {\r\n \"type\": \"number_format_exception\",\r\n \"reason\": \"For input string: \\\"1000mi\\\"\"\r\n }\r\n }\r\n }\r\n ],\r\n \"caused_by\": {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 57,\r\n \"caused_by\": {\r\n \"type\": \"number_format_exception\",\r\n \"reason\": \"For input string: \\\"1000mi\\\"\"\r\n }\r\n }\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\nIf i change precision to `1` (or any number in (1..12)), my search completes with no errors.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create a mapping as an example try:\r\n```\r\nPUT place\r\n{\r\n \"mappings\": {\r\n \"shops\" : {\r\n \"properties\" : {\r\n \"suggest\" : {\r\n \"type\" : \"completion\",\r\n \"contexts\": [\r\n { \r\n \"name\": \"place_type\",\r\n \"type\": \"category\",\r\n \"path\": \"cat\"\r\n },\r\n { \r\n \"name\": \"location\",\r\n \"type\": \"geo\",\r\n \"precision\": 4\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n 2. Put some documents in the new index, as an example:\r\n```\r\nPUT place/shops/1\r\n{\r\n \"suggest\": {\r\n \"input\": \"timmy's\",\r\n \"contexts\": {\r\n \"location\": [\r\n {\r\n \"lat\": 43.6624803,\r\n \"lon\": -79.3863353\r\n },\r\n {\r\n \"lat\": 43.6624718,\r\n \"lon\": -79.3873227\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n 3. 
Do a lil' query:\r\n```\r\nPOST place/_suggest?pretty\r\n{\r\n \"suggest\" : {\r\n \"prefix\" : \"tim\",\r\n \"completion\" : {\r\n \"field\" : \"suggest\",\r\n \"size\": 10,\r\n \"contexts\": {\r\n \"location\": [ \r\n {\r\n \"lat\": 43.6624803,\r\n \"lon\": -79.3863353,\r\n \"precision\": \"2m\"\r\n },\r\n {\r\n \"context\": {\r\n \"lat\": 43.6624803,\r\n \"lon\": -79.3863353\r\n },\r\n \"boost\": 2\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nresult:\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 62\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"place\",\r\n \"node\": \"Se4Iz_2cS4OGdz33kndgGg\",\r\n \"reason\": {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 62,\r\n \"caused_by\": {\r\n \"type\": \"number_format_exception\",\r\n \"reason\": \"For input string: \\\"2m\\\"\"\r\n }\r\n }\r\n }\r\n ],\r\n \"caused_by\": {\r\n \"type\": \"parsing_exception\",\r\n \"reason\": \"[geo] failed to parse field [precision]\",\r\n \"line\": 1,\r\n \"col\": 62,\r\n \"caused_by\": {\r\n \"type\": \"number_format_exception\",\r\n \"reason\": \"For input string: \\\"2m\\\"\"\r\n }\r\n }\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**:\r\n\r\n", "comments": [ { "body": "Any update on this? I am also facing the same issue.\r\nI am also trying to query in a similar fashion as @KellyBennett.\r\n\r\nFrom the logs it looks like the 'precision' field can only be numeric type and thus fails when a string is parsed into it. \r\n\r\nIs the documentation incorrect or am I providing a wrong query here. Any help will be greatly appreciated.", "created_at": "2017-08-29T08:09:15Z" }, { "body": "I'm also noticing this bug on 5.5 as well.", "created_at": "2018-01-13T00:15:22Z" }, { "body": "I am using ES version 6,2.0, but still getting the same parsing error for \"precision\" if value is \"5m\" etc. Works ok if integer.", "created_at": "2018-07-11T09:19:23Z" }, { "body": "@vandanachadha it was fixed by #29273 As you can see from labels on the issue it was fixed only in 6.3.0 and above. So, it is expected to get the same parsing error in 6.2.0, which was released 2 months before the issue was fixed.", "created_at": "2018-07-11T14:48:53Z" } ], "number": 24807, "title": "Error using miles for percision in Geo Location Context" }
{ "body": "Adds support for distance measure, such as \"4km\", \"5m\" in the precision\r\nfield of the geo location context in context suggesters.\r\n\r\nFixes #24807\r\n", "number": 29273, "review_comments": [ { "body": "Just for completeness, can we check that the current token pointer after parsing is as expected, e.g. by consuming the remaining tokens and asserting that they are closing elements or something like that?", "created_at": "2018-04-03T10:40:13Z" }, { "body": "Can you add a short javadoc?", "created_at": "2018-04-03T10:42:52Z" }, { "body": "Maybe mention that this \"precision\" is the geohash length?", "created_at": "2018-04-04T08:47:38Z" }, { "body": "you should call either parseInt or nodeIntegerValue, not both?", "created_at": "2018-04-04T08:49:26Z" }, { "body": "should we catch separately the call to checkPrecisionRange and geoHashLevelsForPrecision?", "created_at": "2018-04-04T08:51:03Z" } ], "title": "Allow using distance measure in the geo context precision" }
{ "commits": [ { "message": "Allow using distance measure in the geo context precision\n\nAdds support for distance measure, such as \"4km\", \"5m\" in the precision\nfield of the geo location context in context suggesters.\n\nFixes #24807" }, { "message": "Address @cbuescher's review comments" }, { "message": "Add javadoc" }, { "message": "Address @jpountz's review comments" }, { "message": "Merge remote-tracking branch 'elastic/master' into issue-24807-fix-precision-parsing-in-suggestions" } ], "files": [ { "diff": "@@ -24,10 +24,10 @@\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentParser.Token;\n+import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.index.fielddata.FieldData;\n import org.elasticsearch.index.fielddata.GeoPointValues;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n@@ -459,6 +459,51 @@ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point, fina\n }\n }\n \n+ /**\n+ * Parse a precision that can be expressed as an integer or a distance measure like \"1km\", \"10m\".\n+ *\n+ * The precision is expressed as a number between 1 and 12 and indicates the length of geohash\n+ * used to represent geo points.\n+ *\n+ * @param parser {@link XContentParser} to parse the value from\n+ * @return int representing precision\n+ */\n+ public static int parsePrecision(XContentParser parser) throws IOException, ElasticsearchParseException {\n+ XContentParser.Token token = parser.currentToken();\n+ if (token.equals(XContentParser.Token.VALUE_NUMBER)) {\n+ return XContentMapValues.nodeIntegerValue(parser.intValue());\n+ } else {\n+ String precision = parser.text();\n+ try {\n+ // we want to treat simple integer strings as precision levels, not distances\n+ return XContentMapValues.nodeIntegerValue(precision);\n+ } catch (NumberFormatException e) {\n+ // try to parse as a distance value\n+ final int parsedPrecision = GeoUtils.geoHashLevelsForPrecision(precision);\n+ try {\n+ return checkPrecisionRange(parsedPrecision);\n+ } catch (IllegalArgumentException e2) {\n+ // this happens when distance too small, so precision > 12. We'd like to see the original string\n+ throw new IllegalArgumentException(\"precision too high [\" + precision + \"]\", e2);\n+ }\n+ }\n+ }\n+ }\n+\n+ /**\n+ * Checks that the precision is within range supported by elasticsearch - between 1 and 12\n+ *\n+ * Returns the precision value if it is in the range and throws an IllegalArgumentException if it\n+ * is outside the range.\n+ */\n+ public static int checkPrecisionRange(int precision) {\n+ if ((precision < 1) || (precision > 12)) {\n+ throw new IllegalArgumentException(\"Invalid geohash aggregation precision of \" + precision\n+ + \". 
Must be between 1 and 12.\");\n+ }\n+ return precision;\n+ }\n+\n /** Returns the maximum distance/radius (in meters) from the point 'center' before overlapping */\n public static double maxRadialDistanceMeters(final double centerLat, final double centerLon) {\n if (Math.abs(centerLat) == MAX_LAT) {", "filename": "server/src/main/java/org/elasticsearch/common/geo/GeoUtils.java", "status": "modified" }, { "diff": "@@ -54,6 +54,8 @@\n import java.util.Map;\n import java.util.Objects;\n \n+import static org.elasticsearch.common.geo.GeoUtils.parsePrecision;\n+\n public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder<ValuesSource.GeoPoint, GeoGridAggregationBuilder>\n implements MultiBucketAggregationBuilder {\n public static final String NAME = \"geohash_grid\";\n@@ -64,29 +66,8 @@ public class GeoGridAggregationBuilder extends ValuesSourceAggregationBuilder<Va\n static {\n PARSER = new ObjectParser<>(GeoGridAggregationBuilder.NAME);\n ValuesSourceParserHelper.declareGeoFields(PARSER, false, false);\n- PARSER.declareField((parser, builder, context) -> {\n- XContentParser.Token token = parser.currentToken();\n- if (token.equals(XContentParser.Token.VALUE_NUMBER)) {\n- builder.precision(XContentMapValues.nodeIntegerValue(parser.intValue()));\n- } else {\n- String precision = parser.text();\n- try {\n- // we want to treat simple integer strings as precision levels, not distances\n- builder.precision(XContentMapValues.nodeIntegerValue(Integer.parseInt(precision)));\n- } catch (NumberFormatException e) {\n- // try to parse as a distance value\n- try {\n- builder.precision(GeoUtils.geoHashLevelsForPrecision(precision));\n- } catch (NumberFormatException e2) {\n- // can happen when distance unit is unknown, in this case we simply want to know the reason\n- throw e2;\n- } catch (IllegalArgumentException e3) {\n- // this happens when distance too small, so precision > 12. We'd like to see the original string\n- throw new IllegalArgumentException(\"precision too high [\" + precision + \"]\", e3);\n- }\n- }\n- }\n- }, GeoHashGridParams.FIELD_PRECISION, org.elasticsearch.common.xcontent.ObjectParser.ValueType.INT);\n+ PARSER.declareField((parser, builder, context) -> builder.precision(parsePrecision(parser)), GeoHashGridParams.FIELD_PRECISION,\n+ org.elasticsearch.common.xcontent.ObjectParser.ValueType.INT);\n PARSER.declareInt(GeoGridAggregationBuilder::size, GeoHashGridParams.FIELD_SIZE);\n PARSER.declareInt(GeoGridAggregationBuilder::shardSize, GeoHashGridParams.FIELD_SHARD_SIZE);\n }\n@@ -133,7 +114,7 @@ protected void innerWriteTo(StreamOutput out) throws IOException {\n }\n \n public GeoGridAggregationBuilder precision(int precision) {\n- this.precision = GeoHashGridParams.checkPrecision(precision);\n+ this.precision = GeoUtils.checkPrecisionRange(precision);\n return this;\n }\n ", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoGridAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -30,15 +30,6 @@ final class GeoHashGridParams {\n static final ParseField FIELD_SIZE = new ParseField(\"size\");\n static final ParseField FIELD_SHARD_SIZE = new ParseField(\"shard_size\");\n \n-\n- static int checkPrecision(int precision) {\n- if ((precision < 1) || (precision > 12)) {\n- throw new IllegalArgumentException(\"Invalid geohash aggregation precision of \" + precision\n- + \". 
Must be between 1 and 12.\");\n- }\n- return precision;\n- }\n-\n private GeoHashGridParams() {\n throw new AssertionError(\"No instances intended\");\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParams.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import java.util.List;\n import java.util.Objects;\n \n+import static org.elasticsearch.common.geo.GeoUtils.parsePrecision;\n import static org.elasticsearch.search.suggest.completion.context.GeoContextMapping.CONTEXT_BOOST;\n import static org.elasticsearch.search.suggest.completion.context.GeoContextMapping.CONTEXT_NEIGHBOURS;\n import static org.elasticsearch.search.suggest.completion.context.GeoContextMapping.CONTEXT_PRECISION;\n@@ -115,10 +116,10 @@ public static Builder builder() {\n static {\n GEO_CONTEXT_PARSER.declareField((parser, geoQueryContext, geoContextMapping) -> geoQueryContext.setGeoPoint(GeoUtils.parseGeoPoint(parser)), new ParseField(CONTEXT_VALUE), ObjectParser.ValueType.OBJECT);\n GEO_CONTEXT_PARSER.declareInt(GeoQueryContext.Builder::setBoost, new ParseField(CONTEXT_BOOST));\n- // TODO : add string support for precision for GeoUtils.geoHashLevelsForPrecision()\n- GEO_CONTEXT_PARSER.declareInt(GeoQueryContext.Builder::setPrecision, new ParseField(CONTEXT_PRECISION));\n- // TODO : add string array support for precision for GeoUtils.geoHashLevelsForPrecision()\n- GEO_CONTEXT_PARSER.declareIntArray(GeoQueryContext.Builder::setNeighbours, new ParseField(CONTEXT_NEIGHBOURS));\n+ GEO_CONTEXT_PARSER.declareField((parser, builder, context) -> builder.setPrecision(parsePrecision(parser)),\n+ new ParseField(CONTEXT_PRECISION), ObjectParser.ValueType.INT);\n+ GEO_CONTEXT_PARSER.declareFieldArray(GeoQueryContext.Builder::setNeighbours, (parser, builder) -> parsePrecision(parser),\n+ new ParseField(CONTEXT_NEIGHBOURS), ObjectParser.ValueType.INT_ARRAY);\n GEO_CONTEXT_PARSER.declareDouble(GeoQueryContext.Builder::setLat, new ParseField(\"lat\"));\n GEO_CONTEXT_PARSER.declareDouble(GeoQueryContext.Builder::setLon, new ParseField(\"lon\"));\n }", "filename": "server/src/main/java/org/elasticsearch/search/suggest/completion/context/GeoQueryContext.java", "status": "modified" }, { "diff": "@@ -0,0 +1,71 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.common.geo;\n+\n+import org.elasticsearch.common.CheckedConsumer;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+\n+public class GeoUtilTests extends ESTestCase {\n+\n+ public void testPrecisionParser() throws IOException {\n+ assertEquals(10, parsePrecision(builder -> builder.field(\"test\", 10)));\n+ assertEquals(10, parsePrecision(builder -> builder.field(\"test\", 10.2)));\n+ assertEquals(6, parsePrecision(builder -> builder.field(\"test\", \"6\")));\n+ assertEquals(7, parsePrecision(builder -> builder.field(\"test\", \"1km\")));\n+ assertEquals(7, parsePrecision(builder -> builder.field(\"test\", \"1.1km\")));\n+ }\n+\n+ public void testIncorrectPrecisionParser() {\n+ expectThrows(NumberFormatException.class, () -> parsePrecision(builder -> builder.field(\"test\", \"10.1.1.1\")));\n+ expectThrows(NumberFormatException.class, () -> parsePrecision(builder -> builder.field(\"test\", \"364.4smoots\")));\n+ assertEquals(\n+ \"precision too high [0.01mm]\",\n+ expectThrows(IllegalArgumentException.class, () -> parsePrecision(builder -> builder.field(\"test\", \"0.01mm\"))).getMessage()\n+ );\n+ }\n+\n+ /**\n+ * Invokes GeoUtils.parsePrecision parser on the value generated by tokenGenerator\n+ * <p>\n+ * The supplied tokenGenerator should generate a single field that contains the precision in\n+ * one of the supported formats or malformed precision value if error handling is tested. 
The\n+ * method return the parsed value or throws an exception, if precision value is malformed.\n+ */\n+ private int parsePrecision(CheckedConsumer<XContentBuilder, IOException> tokenGenerator) throws IOException {\n+ XContentBuilder builder = jsonBuilder().startObject();\n+ tokenGenerator.accept(builder);\n+ builder.endObject();\n+ XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder));\n+ assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); // {\n+ assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); // field name\n+ assertTrue(parser.nextToken().isValue()); // field value\n+ int precision = GeoUtils.parsePrecision(parser);\n+ assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); // }\n+ assertNull(parser.nextToken()); // no more tokens\n+ return precision;\n+ }\n+}", "filename": "server/src/test/java/org/elasticsearch/common/geo/GeoUtilTests.java", "status": "added" }, { "diff": "@@ -19,15 +19,20 @@\n \n package org.elasticsearch.search.suggest.completion;\n \n+import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.search.suggest.completion.context.GeoQueryContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.Collections;\n import java.util.List;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.equalTo;\n \n public class GeoQueryContextTests extends QueryContextTestCase<GeoQueryContext> {\n@@ -105,4 +110,36 @@ public void testIllegalArguments() {\n assertEquals(e.getMessage(), \"neighbour value must be between 1 and 12\");\n }\n }\n+\n+ public void testStringPrecision() throws IOException {\n+ XContentBuilder builder = jsonBuilder().startObject();\n+ {\n+ builder.startObject(\"context\").field(\"lat\", 23.654242).field(\"lon\", 90.047153).endObject();\n+ builder.field(\"boost\", 10);\n+ builder.field(\"precision\", 12);\n+ builder.array(\"neighbours\", 1, 2);\n+ }\n+ builder.endObject();\n+ XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder));\n+ parser.nextToken();\n+ GeoQueryContext queryContext = fromXContent(parser);\n+ assertEquals(10, queryContext.getBoost());\n+ assertEquals(12, queryContext.getPrecision());\n+ assertEquals(Arrays.asList(1, 2), queryContext.getNeighbours());\n+\n+ builder = jsonBuilder().startObject();\n+ {\n+ builder.startObject(\"context\").field(\"lat\", 23.654242).field(\"lon\", 90.047153).endObject();\n+ builder.field(\"boost\", 10);\n+ builder.field(\"precision\", \"12m\");\n+ builder.array(\"neighbours\", \"4km\", \"10km\");\n+ }\n+ builder.endObject();\n+ parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder));\n+ parser.nextToken();\n+ queryContext = fromXContent(parser);\n+ assertEquals(10, queryContext.getBoost());\n+ assertEquals(9, queryContext.getPrecision());\n+ assertEquals(Arrays.asList(6, 5), queryContext.getNeighbours());\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/suggest/completion/GeoQueryContextTests.java", "status": "modified" } ] }
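The change above routes both the geohash-grid aggregation and the geo context suggester through `GeoUtils.parsePrecision`, so a distance string resolves to a geohash level. Below is a minimal sketch of calling it directly, mirroring `GeoUtilTests`; the parser setup (registry, deprecation handler, and the exact `createParser` overload) is an assumption and may differ by Elasticsearch version:

```java
import org.elasticsearch.common.geo.GeoUtils;
import org.elasticsearch.common.xcontent.DeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;

import java.io.IOException;

class ParsePrecisionExample {
    // Per the test above, "1km" should map to geohash level 7, and anything finer
    // than level 12 is rejected with "precision too high".
    static int precisionFor(String value) throws IOException {
        String json = "{\"precision\": \"" + value + "\"}";
        try (XContentParser parser = JsonXContent.jsonXContent.createParser(
                NamedXContentRegistry.EMPTY, DeprecationHandler.THROW_UNSUPPORTED_OPERATION, json)) {
            parser.nextToken(); // START_OBJECT
            parser.nextToken(); // FIELD_NAME "precision"
            parser.nextToken(); // the value token that parsePrecision reads
            return GeoUtils.parsePrecision(parser);
        }
    }
}
```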
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`):\r\nTested on both 5.6 and 6.2\r\n\r\n**Plugins installed**: [ None ]\r\n\r\n**JVM version** (`java -version`): 1.8.0_31-b13 \r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Using the Dockerized version from https://quay.io/repository/pires/docker-elasticsearch\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n(This is pretty much just a copy of my original description at: https://discuss.elastic.co/t/using-java-high-level-rest-client-does-not-auto-retry-bulk-requests/121724)\r\n\r\nIf the `BulkProcessor` is made to use the High Level Rest Client to issue requests, it is unable to issue retries even though it passes through the `Retry` handler's `canRetry` logic\r\n\r\n**Steps to reproduce**:\r\n\r\n1. Create an index in ES\r\n\r\n1. Configure the thread_pool to limit the bulk requests and make more likely a rejection. In my case I used a very small queue size of 2\r\n\r\n1. To demonstrate \"expected\" functionality, create a `BulkProcessor` which submits requests using `TransportClient`\r\n\r\n`BulkProcessor.builder(client, listener);`\r\n\r\n1. Submit requests which result in rejection. You will find that the resulting `BulkResponse` does not contain failures (unless `BackoffPolicy` was exhausted), however querying the /_cat/thread_pool will show rejections, and the document count should have went up based on the total submitted, indicating all documents eventually made it via retries. \r\n\r\n1. Create a BulkProcessor which submits requests using the High Level Rest client's client.bulkAsyncmethod:\r\n\r\n ```java\r\n BulkProcessor.Builder builder(BulkProcessor.Listener listener) {\r\n return BulkProcessor.builder(client::bulkAsync, listener);\r\n }\r\n\r\n1. Submit requests at a rate to create rejection \r\n\r\n1. Perform the same set of inserts, you will find that the `BulkResponse` contains failures and the individual `Failure` objects have an ElasticsearchException which contain \"type=es_rejected_execution_exception\"\r\n\r\n**Additional Notes**\r\nI think the \"root\" cause is that with the High Level Rest Client, the `ElasticsearchException` that is extracted is not one of the sub-types such as `EsRejectedExceptionException` (_this is actually documented behavior in the `fromXContent` method of ElasticsearchException_)\r\n\r\nI made a naive attempt to modify `fromXContent` to return the correct typed `ElasticsearchException`, but in its current form this results in a deadlock during retry attempts due (I think) to the synchronization that occurs in `BulkProcessor`. You can make it work by setting high enough concurrency but this is a workaround. \r\n\r\nProbably not relevant (except for anyone that might stumble on this same issue): We are using Apache Flink with an Elasticsearch sink. 
We identified this issue during attempts to upgrade from ES 5.6 to 6.2 to get additional features. However, Flink's pending ES6 support is based on the High Level Rest Client and does not include TransportClient support for 6.2. It has code attempting to perform retries, but it is never triggered due to the same issue with typed exceptions (and in fact, it would deadlock in any case). \r\n\r\n", "comments": [ { "body": "Ouch, I was looking into using the High Level Rest Client as well, but this sounds like a blocker if it won't retry on errors. @javanna I see an \"adoptme\" label; does it mean Elastic is not going to proactively look into this? Is there any recommended workaround?", "created_at": "2018-03-08T16:49:17Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-09T04:33:44Z" }, { "body": "> I see an \"adoptme\" label; does it mean Elastic is not going to proactively look into this?\r\n\r\nNo, it just means that nobody is actively working on it today and somebody, either from Elastic or the community, may adopt this issue. I will get to it soon-ish unless anybody else does earlier than me.", "created_at": "2018-03-12T15:42:23Z" }, { "body": "Hi all and @javanna, I made a PR for this. The problem is as @jdewald describes, I think. Feel free to discuss.", "created_at": "2018-03-26T22:04:00Z" }, { "body": "Which version(s) of Elasticsearch will contain that fix?", "created_at": "2018-05-09T12:59:00Z" }, { "body": "@cjolif See the labels in the corresponding PR (#29329). The fix will be in 6.3.1.", "created_at": "2018-05-09T18:38:08Z" } ], "number": 28885, "title": "Using Java High Level Rest client does not auto-retry Bulk requests" }
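For reference, here is a self-contained sketch of the reporter's second setup: a `BulkProcessor` driven by the high-level REST client with an explicit backoff policy. The numbers are illustrative, and the `client::bulkAsync` reference assumes a 6.x-era client whose `bulkAsync(BulkRequest, ActionListener)` signature matches the builder's `BiConsumer` parameter:

```java
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;

class RestBulkProcessorExample {
    // Wires BulkProcessor to the high-level REST client, as in the report above.
    static BulkProcessor build(RestHighLevelClient client, BulkProcessor.Listener listener) {
        return BulkProcessor.builder(client::bulkAsync, listener)
                .setBulkActions(500)
                .setConcurrentRequests(1)
                // Retries only fire when a failure is recognised as a rejection,
                // which is exactly what breaks over REST per this issue.
                .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
                .build();
    }
}
```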
{ "body": "Currently when we parse BulkItemResponse to XContent and then parsed it back, the original Exception type has not been kept and always transfered to `ElasticsearchException`.\r\nBut seems this doesn't work very well in case of `EsRejectedExecutionException`, because the `BulkRequestHandler`'s retry logic relies on `EsRejectedExecutionException`(only retry on this type of Exception). So if it was parsed back to an `ElasticsearchException`, the bulk request through Rest high level client cannot be retried, leads to issues like #28885.\r\n\r\nThis change makes `EsRejectedExecutionException` can be parsed to and from XContent. I'm not really sure if it is the right way to solve the problem but I just open this PR to see and discuss.\r\n\r\nRelates to #28885", "number": 29254, "review_comments": [ { "body": "instead of parsing back this exception into its own type, which we don't do for a lot of other exceptions, why don't we adapt the `BulkProcessor` to retry based on the status returned with the exception? That should work for both transport client and REST client I think. I think that configuring the exception to be retried on is not necessary, as we only ever use it for rejections (or we could make the status configurable instead of the class), we should check if the root cause is an `ElasticsearchException` , if so cast and check the returned `status`.\r\n\r\nThis needs testing, for instance porting the existing `BulkProcessorRetryIT` to the rest-high-level tests and adapting it similar to what I am doing in #29263 for `BulkProcessorIT`.\r\n\r\nOne other thing that I noticed is that when `canRetry` returns `true`, we will retry all failed items from that response, but we don't check again the status code, nor the exception type. As a follow-up, we may want to fix that in `createBulkRequestForRetry`.", "created_at": "2018-03-28T09:04:22Z" }, { "body": "Hello @javanna, thx for having looked at it and the comments. I'm definitely agree with that. I also saw that the `status` was processed correctly (`RestStatus.TOO_MANY_REQUESTS`) as it is derived from Exception but parsed seperately, but I was just not sure it is preferred to change the bulk side directly or the rest hight level side when I was doing this. So I'll change it in this way soon.", "created_at": "2018-03-28T18:23:46Z" } ], "title": "Enable BulkItemResponse to parse back EsRejectedExecutionException through XContent" }
{ "commits": [], "files": [] }
{ "body": "https://github.com/elastic/elasticsearch/blob/master/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuFoldingTokenFilterFactory.java#L48\r\n\r\n\r\nhttps://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html", "comments": [ { "body": "Same is true for the expert settings under https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-collation.html\r\n\r\nShould we document these as snake case and deprecate the camelCase variants?", "created_at": "2017-01-26T20:19:04Z" }, { "body": "Hi @Mpdreamz Is this issue still there? Would you mind if I taking this issue?", "created_at": "2017-03-13T09:18:47Z" }, { "body": "Hi, I'm trying to get started with some low hanging fruit and I'd like to take this issue on if you don't mind.", "created_at": "2017-09-08T13:44:34Z" }, { "body": "@gmaddex I don't think anyone is working on it, go for it!", "created_at": "2017-09-08T14:26:29Z" }, { "body": "@Mpdreamz What's I need to do if I want to fix this? Thanks very much!", "created_at": "2017-10-01T01:15:21Z" }, { "body": "cc @elastic/es-search-aggs ", "created_at": "2018-03-14T13:51:04Z" } ], "number": 22823, "title": "IcuFoldingTokenFilter still accepts camelCase parameter" }
{ "body": "Depreate parameter unicodeSetFilter and replace it with unicode_set_filter for IcuFoldingTokenFilter. \r\n\r\nClose #22823", "number": 29215, "review_comments": [ { "body": "@romseygeek this line is causing checkstyle violations\r\n```\r\n> Task :plugins:analysis-icu:checkstyleMain FAILED\r\n[ant:checkstyle] [ERROR] /home/alpar/work/elastic/elasticsearch/plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuNormalizerTokenFilterFactory.java:41:19: 'static' modifier out of order with the JLS suggestions. [ModifierOrder]\r\n```\r\nI'm pushing a fix to master now. FYI", "created_at": "2018-11-01T12:42:07Z" }, { "body": "Master was merged in a few days ago and the checkstyle rules last changed 4 weeks ago, yet the PR build passed. ", "created_at": "2018-11-01T12:44:14Z" }, { "body": "I opened #35207 to fix these", "created_at": "2018-11-02T14:34:17Z" } ], "title": "Replace parameter unicodeSetFilter with unicode_set_filter " }
{ "commits": [ { "message": "Replace parameter unicodeSetFilter with unicode_set_filter (#22823)" }, { "message": "Merge branch 'master' of github.com:elastic/elasticsearch into fix-issues/22823" }, { "message": "Update to latest getLogger" }, { "message": "Replace unicodeSetFilter with unicode_set_filter in document" } ], "files": [ { "diff": "@@ -38,7 +38,7 @@ normalization can be specified with the `name` parameter, which accepts `nfc`,\n convert `nfc` to `nfd` or `nfkc` to `nfkd` respectively:\n \n Which letters are normalized can be controlled by specifying the\n-`unicodeSetFilter` parameter, which accepts a\n+`unicode_set_filter` parameter, which accepts a\n http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].\n \n Here are two examples, the default usage and a customised character filter:\n@@ -194,7 +194,7 @@ with the `name` parameter, which accepts `nfc`, `nfkc`, and `nfkc_cf`\n (default).\n \n Which letters are normalized can be controlled by specifying the\n-`unicodeSetFilter` parameter, which accepts a\n+`unicode_set_filter` parameter, which accepts a\n http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].\n \n You should probably prefer the <<analysis-icu-normalization-charfilter,Normalization character filter>>.\n@@ -273,7 +273,7 @@ The ICU folding token filter already does Unicode normalization, so there is\n no need to use Normalize character or token filter as well.\n \n Which letters are folded can be controlled by specifying the\n-`unicodeSetFilter` parameter, which accepts a\n+`unicode_set_filter` parameter, which accepts a\n http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet].\n \n The following example exempts Swedish characters from folding. It is important\n@@ -300,7 +300,7 @@ PUT icu_sample\n \"filter\": {\n \"swedish_folding\": {\n \"type\": \"icu_folding\",\n- \"unicodeSetFilter\": \"[^åäöÅÄÖ]\"\n+ \"unicode_set_filter\": \"[^åäöÅÄÖ]\"\n }\n }\n }", "filename": "docs/plugins/analysis-icu.asciidoc", "status": "modified" }, { "diff": "@@ -50,7 +50,7 @@ public class IcuFoldingTokenFilterFactory extends AbstractTokenFilterFactory imp\n \n public IcuFoldingTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {\n super(indexSettings, name, settings);\n- this.normalizer = IcuNormalizerTokenFilterFactory.wrapWithUnicodeSetFilter(ICU_FOLDING_NORMALIZER, settings);\n+ this.normalizer = IcuNormalizerTokenFilterFactory.wrapWithUnicodeSetFilter(indexSettings, ICU_FOLDING_NORMALIZER, settings);\n }\n \n @Override", "filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuFoldingTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ public IcuNormalizerCharFilterFactory(IndexSettings indexSettings, Environment e\n }\n Normalizer2 normalizer = Normalizer2.getInstance(\n null, method, \"compose\".equals(mode) ? 
Normalizer2.Mode.COMPOSE : Normalizer2.Mode.DECOMPOSE);\n- this.normalizer = IcuNormalizerTokenFilterFactory.wrapWithUnicodeSetFilter(normalizer, settings);\n+ this.normalizer = IcuNormalizerTokenFilterFactory.wrapWithUnicodeSetFilter(indexSettings, normalizer, settings);\n }\n \n @Override", "filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuNormalizerCharFilterFactory.java", "status": "modified" }, { "diff": "@@ -23,7 +23,10 @@\n import com.ibm.icu.text.Normalizer2;\n import com.ibm.icu.text.UnicodeSet;\n \n+import org.apache.logging.log4j.LogManager;\n import org.apache.lucene.analysis.TokenStream;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.logging.DeprecationLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.IndexSettings;\n@@ -35,14 +38,15 @@\n * <p>The {@code unicodeSetFilter} attribute can be used to provide the UniCodeSet for filtering.</p>\n */\n public class IcuNormalizerTokenFilterFactory extends AbstractTokenFilterFactory implements MultiTermAwareComponent {\n-\n+ private final static DeprecationLogger deprecationLogger =\n+ new DeprecationLogger(LogManager.getLogger(IcuNormalizerTokenFilterFactory.class));\n private final Normalizer2 normalizer;\n \n public IcuNormalizerTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {\n super(indexSettings, name, settings);\n String method = settings.get(\"name\", \"nfkc_cf\");\n Normalizer2 normalizer = Normalizer2.getInstance(null, method, Normalizer2.Mode.COMPOSE);\n- this.normalizer = wrapWithUnicodeSetFilter(normalizer, settings);\n+ this.normalizer = wrapWithUnicodeSetFilter(indexSettings, normalizer, settings);\n }\n \n @Override\n@@ -55,8 +59,17 @@ public Object getMultiTermComponent() {\n return this;\n }\n \n- static Normalizer2 wrapWithUnicodeSetFilter(final Normalizer2 normalizer, Settings settings) {\n+ static Normalizer2 wrapWithUnicodeSetFilter(final IndexSettings indexSettings,\n+ final Normalizer2 normalizer,\n+ final Settings settings) {\n String unicodeSetFilter = settings.get(\"unicodeSetFilter\");\n+ if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) {\n+ if (unicodeSetFilter != null) {\n+ deprecationLogger.deprecated(\"[unicodeSetFilter] has been deprecated in favor of [unicode_set_filter]\");\n+ } else {\n+ unicodeSetFilter = settings.get(\"unicode_set_filter\");\n+ }\n+ }\n if (unicodeSetFilter != null) {\n UnicodeSet unicodeSet = new UnicodeSet(unicodeSetFilter);\n ", "filename": "plugins/analysis-icu/src/main/java/org/elasticsearch/index/analysis/IcuNormalizerTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -48,6 +48,61 @@\n ---\n \"Normalization with a UnicodeSet Filter\":\n - do:\n+ indices.create:\n+ index: test\n+ body:\n+ settings:\n+ index:\n+ analysis:\n+ char_filter:\n+ charfilter_icu_normalizer:\n+ type: icu_normalizer\n+ unicode_set_filter: \"[^ß]\"\n+ filter:\n+ tokenfilter_icu_normalizer:\n+ type: icu_normalizer\n+ unicode_set_filter: \"[^ßB]\"\n+ tokenfilter_icu_folding:\n+ type: icu_folding\n+ unicode_set_filter: \"[^â]\"\n+ - do:\n+ indices.analyze:\n+ index: test\n+ body:\n+ char_filter: [\"charfilter_icu_normalizer\"]\n+ tokenizer: keyword\n+ text: charfilter Föo Bâr Ruß\n+ - length: { tokens: 1 }\n+ - match: { tokens.0.token: charfilter föo bâr ruß }\n+ - do:\n+ indices.analyze:\n+ index: test\n+ body:\n+ tokenizer: keyword\n+ filter: 
[\"tokenfilter_icu_normalizer\"]\n+ text: tokenfilter Föo Bâr Ruß\n+ - length: { tokens: 1 }\n+ - match: { tokens.0.token: tokenfilter föo Bâr ruß }\n+ - do:\n+ indices.analyze:\n+ index: test\n+ body:\n+ tokenizer: keyword\n+ filter: [\"tokenfilter_icu_folding\"]\n+ text: icufolding Föo Bâr Ruß\n+ - length: { tokens: 1 }\n+ - match: { tokens.0.token: icufolding foo bâr russ }\n+\n+---\n+\"Normalization with a CamcelCase UnicodeSet Filter\":\n+ - skip:\n+ version: \" - 6.99.99\"\n+ reason: unicodeSetFilter deprecated in 7.0.0, replaced by unicode_set_filter\n+ features: \"warnings\"\n+\n+ - do:\n+ warnings:\n+ - \"[unicodeSetFilter] has been deprecated in favor of [unicode_set_filter]\"\n indices.create:\n index: test\n body:", "filename": "plugins/analysis-icu/src/test/resources/rest-api-spec/test/analysis_icu/10_basic.yml", "status": "modified" } ] }
{ "body": "If you write an aggregation as\r\n\r\n```\r\n POST /vehicles/_search\r\n {\r\n\t \"aggs\": {\r\n \t \"myname\": {\r\n \t\t \"missing\": {\r\n \t\t \"field\": \"make.keyword\"\r\n \t\t }\r\n \t }\r\n\t }\r\n }\r\n```\r\n\r\nThen we say `Unknown BaseAggregationBuilder [missing]`. It'd be *much* more user friendly to say \"Unknown Aggregation [missing].\"\r\n", "comments": [ { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-09T04:09:34Z" }, { "body": "I agree, we should fix this.", "created_at": "2018-03-09T10:01:29Z" }, { "body": "Thanks for fixing my missing ping @hub-cap !", "created_at": "2018-03-09T18:57:57Z" }, { "body": "Is this issue fixed?", "created_at": "2018-03-14T19:42:11Z" }, { "body": "@nikhilbarar not yet. It will most likely be closed and associated to the Pull Request that implements the fix itself.", "created_at": "2018-03-14T20:55:58Z" }, { "body": "Hi @nik9000 When I trying to reproduce this on master, I got:\r\n```\r\n{\r\n \"took\": 2,\r\n \"timed_out\": false,\r\n \"_shards\": {\r\n \"total\": 5,\r\n \"successful\": 5,\r\n \"skipped\": 0,\r\n \"failed\": 0\r\n },\r\n \"hits\": {\r\n \"total\": 0,\r\n \"max_score\": null,\r\n \"hits\": [\r\n \r\n ],\r\n \r\n },\r\n \"aggregations\": {\r\n \"myname\": {\r\n \"doc_count\": 0\r\n }\r\n }\r\n}\r\n```\r\nHow can I reproduce this bug? Thanks in advance. ", "created_at": "2018-03-18T14:43:42Z" }, { "body": "Oops! I meant to use `missing` as an example of an aggregation that doesn't exist. But it does exist! Here is a better example:\r\n```\r\ncurl -HContent-Type:application/json -XPOST localhost:9200/vehicles/test?pretty -d'{\"test\":\"test\"}'\r\ncurl -HContent-Type:application/json -XPOST localhost:9200/vehicles/_search?pretty -d'{\r\n \"aggs\": {\r\n \"myname\": {\r\n \"agg_that_does_not_exist\": {\r\n \"field\": \"make.keyword\"\r\n }\r\n }\r\n }\r\n}'\r\n```", "created_at": "2018-03-19T15:44:20Z" }, { "body": "Fixed by #58255. #29201 has another fix that I think is actually better. Let's see if we can get that one in too!", "created_at": "2020-07-06T22:15:18Z" } ], "number": 28950, "title": "Bad aggregation name complains about \"unknown BaseAggregationBuilder\" instead of \"unknown Aggregation\"" }
{ "body": "Fix the exception message if we caught a ```UnknownNamedObjectException``` when looking up aggregation by type name. \r\n\r\nCloses #28950 ", "number": 29201, "review_comments": [ { "body": "I think it'd be nice to include the cause here just in case we need the stack trace for some reason.", "created_at": "2018-03-23T14:28:47Z" }, { "body": "This is actually a little tricky because its possible for this to throw an UnknownNamedObjectException but not because the aggregation is unknown but because another named object that the aggregation uses (like a SignificanceHeuristic or a MovingAvgModel) was not found. In that case this new exception message will be misleading.", "created_at": "2018-03-23T14:33:14Z" }, { "body": "Thanks @colings86 @nik9000 , how about add a check in the catch block like:\r\n```\r\nif (ex.getCategoryClass().equals(BaseAggregationBuilder.class.getName())) {\r\n throw new ParsingException(new XContentLocation(ex.getLineNumber(), ex.getColumnNumber()),\r\n \"Unknown Aggregation [\" + fieldName + \"]\", ex);\r\n} else {\r\n throw ex;\r\n}\r\n```\r\n?", "created_at": "2018-03-23T15:08:56Z" }, { "body": "@colings86 @nik9000 thoughts on the above proposal?", "created_at": "2018-05-07T12:46:07Z" }, { "body": "@liketic sorry its taken so long to respond, I had missed your message above (thanks @javanna for the ping). I wonder if there are other places where the category class name is not a friendly name to the end user? I wonder if we should make it so that a friendly name needs to be associated with each category class in the NamedXContentRegistry (and maybe in the NamedWriteableRegistry too)? In order to do this I guess we would need to change the registries so that each category class is explicitly registered (at the moment you can register a named XContent which is part of a new category class directly in one call) so there is a place to associate the category class and its friendly name. We would then need to reject any named XContent which is registered against an unknown category class. \r\n\r\n@nik9000 - what do you think of the above? Do you think we should go with this change anyway even if we intend to go with the above in the future?", "created_at": "2018-05-08T10:30:13Z" }, { "body": "@colings86 @nik9000 ping :)\r\n\r\nI'd be inclined to get the incrementally better error message merged, then investigate if we can do the more sophisticated \"associated name\" in a followup.", "created_at": "2019-02-28T21:28:01Z" }, { "body": "++ I'm fine with that approach", "created_at": "2019-03-01T08:12:04Z" }, { "body": "Great, thanks @colings86.\r\n\r\n@liketic would you be willing to update this PR to most recent master and add the check in the catch block as you suggested? I can take over the review and get this PR tested/merged. Thanks!", "created_at": "2019-03-04T15:38:11Z" }, { "body": "Could this be a `Class<?>`?", "created_at": "2020-07-06T22:12:50Z" } ], "title": "Fix message of aggregation not found " }
{ "commits": [ { "message": "Fix message of aggregation not found (#28950)" }, { "message": "Merge branch 'master' into fix-issues/28950" }, { "message": "Add class check" }, { "message": "Merge branch 'master' of github.com:elastic/elasticsearch into fix-issues/28950" }, { "message": "Merge master" } ], "files": [ { "diff": "@@ -19,17 +19,25 @@\n \n package org.elasticsearch.common.xcontent;\n \n+import static java.util.Objects.requireNonNull;\n+\n /**\n * Thrown when {@link NamedXContentRegistry} cannot locate a named object to\n * parse for a particular name\n */\n public class NamedObjectNotFoundException extends XContentParseException {\n+ private final String categoryClass;\n \n- public NamedObjectNotFoundException(String message) {\n- this(null, message);\n+ public NamedObjectNotFoundException(String message, Class<?> categoryClass) {\n+ this(null, message, categoryClass);\n }\n \n- public NamedObjectNotFoundException(XContentLocation location, String message) {\n+ public NamedObjectNotFoundException(XContentLocation location, String message, Class<?> categoryClass) {\n super(location, message);\n+ this.categoryClass = requireNonNull(categoryClass, \"categoryClass is required\").getName();\n+ }\n+\n+ public String getCategoryClass() {\n+ return categoryClass;\n }\n }", "filename": "libs/x-content/src/main/java/org/elasticsearch/common/xcontent/NamedObjectNotFoundException.java", "status": "modified" }, { "diff": "@@ -123,20 +123,20 @@ public <T, C> T parseNamedObject(Class<T> categoryClass, String name, XContentPa\n if (parsers == null) {\n if (registry.isEmpty()) {\n // The \"empty\" registry will never work so we throw a better exception as a hint.\n- throw new NamedObjectNotFoundException(\"named objects are not supported for this parser\");\n+ throw new NamedObjectNotFoundException(\"named objects are not supported for this parser\", categoryClass);\n }\n- throw new NamedObjectNotFoundException(\"unknown named object category [\" + categoryClass.getName() + \"]\");\n+ throw new NamedObjectNotFoundException(\"unknown named object category [\" + categoryClass.getName() + \"]\", categoryClass);\n }\n Entry entry = parsers.get(name);\n if (entry == null) {\n throw new NamedObjectNotFoundException(parser.getTokenLocation(), \"unable to parse \" + categoryClass.getSimpleName() +\n- \" with name [\" + name + \"]: parser not found\");\n+ \" with name [\" + name + \"]: parser not found\", categoryClass);\n }\n if (false == entry.name.match(name, parser.getDeprecationHandler())) {\n /* Note that this shouldn't happen because we already looked up the entry using the names but we need to call `match` anyway\n * because it is responsible for logging deprecation warnings. 
*/\n throw new NamedObjectNotFoundException(parser.getTokenLocation(),\n- \"unable to parse \" + categoryClass.getSimpleName() + \" with name [\" + name + \"]: parser didn't match\");\n+ \"unable to parse \" + categoryClass.getSimpleName() + \" with name [\" + name + \"]: parser didn't match\", categoryClass);\n }\n return categoryClass.cast(entry.parser.parse(parser, context));\n }", "filename": "libs/x-content/src/main/java/org/elasticsearch/common/xcontent/NamedXContentRegistry.java", "status": "modified" }, { "diff": "@@ -23,8 +23,10 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Writeable;\n+import org.elasticsearch.common.xcontent.NamedObjectNotFoundException;\n import org.elasticsearch.common.xcontent.ToXContentObject;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParseException;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryRewriteContext;\n import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;\n@@ -117,9 +119,16 @@ private static AggregatorFactories.Builder parseAggregators(XContentParser parse\n throw new ParsingException(parser.getTokenLocation(), \"Found two aggregation type definitions in [\"\n + aggregationName + \"]: [\" + aggBuilder.getType() + \"] and [\" + fieldName + \"]\");\n }\n-\n- aggBuilder = parser.namedObject(BaseAggregationBuilder.class, fieldName,\n- new AggParseContext(aggregationName));\n+ try {\n+ aggBuilder = parser.namedObject(BaseAggregationBuilder.class, fieldName,\n+ new AggParseContext(aggregationName));\n+ } catch (NamedObjectNotFoundException ex) {\n+ if (ex.getCategoryClass().equals(BaseAggregationBuilder.class.getName())) {\n+ throw new XContentParseException(ex.getLocation(), \"Unknown Aggregation [\" + fieldName + \"]\", ex);\n+ } else {\n+ throw ex;\n+ }\n+ }\n }\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Expected [\" + XContentParser.Token.START_OBJECT + \"] under [\"", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.xcontent.NamedXContentRegistry;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentParseException;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n@@ -228,6 +229,21 @@ public void testRewrite() throws Exception {\n assertSame(rewritten, secondRewritten);\n }\n \n+ public void testUnknownAggregation() throws Exception {\n+ XContentBuilder source = JsonXContent.contentBuilder()\n+ .startObject()\n+ .startObject(\"aggName\")\n+ .startObject(\"agg_that_does_not_exist\")\n+ .field(\"field\", \"make.keyword\")\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ XContentParser parser = createParser(source);\n+ assertSame(XContentParser.Token.START_OBJECT, parser.nextToken());\n+ XContentParseException ex = expectThrows(XContentParseException.class, () -> AggregatorFactories.parseAggregators(parser));\n+ assertEquals(\"[1:39] Unknown Aggregation [agg_that_does_not_exist]\", ex.getMessage());\n+ }\n+\n @Override\n protected NamedXContentRegistry xContentRegistry() {\n 
return xContentRegistry;", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/AggregatorFactoriesTests.java", "status": "modified" } ] }
{ "body": "When scripts are specified, either in json or via `new Script()`, they take a `params` Map. Scripted metrics use multiple scripts, and take a `params` map at the outer level which applies to all of the 4 scripts of a scripted metric agg. However, the params passed directly to a script are lost.\r\n\r\nI traced this to a flaw in ScriptedMetricAggregationBuilder. The original Script object for each script, eg mapScript, needs to be passed through (or at least the params from it) to the ScriptedMetricAggregatorFactory, and then before running each script, the relevant per script params need to be merged in (by creating a new map and added the per script params).\r\n", "comments": [ { "body": "/cc @elastic/es-search-aggs ", "created_at": "2018-02-25T21:36:10Z" }, { "body": "@rjernst What should the behavior be if script params and aggregation params are provided with the same name?\r\n\r\nI'm inclined to think that should cause an error at `ScriptedMetricAggregatorFactory` construction time. If conflicts must be silently allowed then I think the aggregation level params have to win (at least for `_agg` to ensure it keeps the same identity across all the scripts that are run -- the general case is to ensure that **all** aggregation level params have the same identity across scripts, which is directly in conflict with letting individual scripts override such params). That works too but could be confusing and a fail-fast check ensuring no conflicts seems more developer friendly.\r\n\r\nI have a start to a PR on this. Pretty sure I have it working but it needs some trivial cleanup plus refinement for this conflict case. Input is appreciated if you'd welcome a PR.", "created_at": "2018-03-18T17:15:54Z" }, { "body": "I think an error is the right thing. Right now, anything passed into params at the script level is ignored, so any errors that result of this change could only happen if a user is already either passing in params in both the script and one level up, or they are passing in `_agg`/`_aggs` at the script level, and they really should be notified that this is completely broken from what they are trying to do (and they can stop doing this before upgrading).\r\n\r\nLong term, I think `_agg` and `_aggs` need to be removed from params, and replaced with specific `ScriptContext`s for each of the 4 script types for a metric agg. This would allow making the variables local to each script (no more need to access via params) and at the same time possibly renaming to be more descriptive. But a PR to fix this existing bug in the current setup would be greatly appreciated.", "created_at": "2018-03-18T18:20:13Z" }, { "body": "Thanks, that makes sense. Agree that the aggregation status/results are really part of the context, not part of the parameters -- that does sound like a better design.\r\n\r\n`ScriptedMetricIT` has a few test cases where the same params are being passed in at both the aggregator and script level (looks like by accident) so we'll get easy test coverage of the new error case :)\r\n\r\nI will clean up my progress other than this new error case today and should be able to take care of the error case either this week or next weekend and submit the PR.", "created_at": "2018-03-18T18:39:37Z" }, { "body": "(Another option is we could treat _agg/_aggs as a special case to disallow passing at the script level, but given your long term change suggestion that is not really moving in the right direction in general. 
Even passing non-\"special\" params in at both the aggregate and script level seems more like a confusing mistake than an important use case.)", "created_at": "2018-03-18T18:41:52Z" }, { "body": "/cc @colings86 for your thoughts on the approach discussed here.", "created_at": "2018-03-18T21:26:10Z" }, { "body": "Agreed that an error would be the best approach for handling colliding keys in the params. I also agree that now we have script contexts it would be much better to expose the `_agg` and `_aggs` variables there rather than abusing the params for this.", "created_at": "2018-03-19T09:24:25Z" }, { "body": "@rjernst can this be closed?", "created_at": "2018-06-27T05:14:41Z" }, { "body": "Yes, closed by #29154.", "created_at": "2018-06-29T15:48:51Z" } ], "number": 28819, "title": "Scripted metric aggs ignore script level params" }
{ "body": "This PR attempts to fix issue #28819 by merging agg params and script params when building `ScriptedMetricAggregator`. Conflicting param names throw `IllegalArgumentException`.", "number": 29154, "review_comments": [ { "body": "testMapWithParams was previously actually working because the params were on the aggregation, not because the params were on the map script. This test may be extraneous now but I renamed it instead of deleting it since it does provide specific coverage for an explicit _agg param, even if that may be kind of a weird case. I wouldn't presume to delete it without an opinion from a dev more familiar with this code..", "created_at": "2018-03-20T05:51:28Z" }, { "body": "I considered merging the parameter lists at construction time (which would also move conflict detection to construction time) but hit some test failures in that case. I did not rigorously track down the problem because I wasn't sure that it was a good idea to evaluate the params list before the last minute anyway, in case they could be modified in any way between AggregatorFactory construction and calling createInternal(). This is a smaller behavior change.", "created_at": "2018-03-20T05:55:42Z" }, { "body": "Would it make sense to combine together the compiled script factories and the params into some kind of a tuple object rather than just adding more constructor parameters? I opted not to because they're different types and AFAIK passing them around paired with parameters isn't a common case.", "created_at": "2018-03-20T05:57:54Z" } ], "title": "Pass through script params in scripted metric agg" }
{ "commits": [ { "message": "Pass script level params into scripted metric aggs (#28819)\n\nNow params that are passed at the script level and at the aggregation level\nare merged and can both be used in the aggregation scripts. If there are\nany conflicts, aggregation level params will win. This may be followed\nby another change detecting that case and throwing an exception to\ndisallow such conflicts." }, { "message": "Disallow duplicate parameter names between scripted agg and script (#28819)\n\nIf a scripted metric aggregation has aggregation params and script params\nwhich have the same name, throw an IllegalArgumentException when merging\nthe parameter lists." }, { "message": "Merge branch 'master' into rationull-28819-merge-agg-and-script-params" } ], "files": [ { "diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n+import java.util.Collections;\n import java.util.Map;\n import java.util.Objects;\n \n@@ -198,20 +199,34 @@ protected ScriptedMetricAggregatorFactory doBuild(SearchContext context, Aggrega\n Builder subfactoriesBuilder) throws IOException {\n \n QueryShardContext queryShardContext = context.getQueryShardContext();\n+\n+ // Extract params from scripts and pass them along to ScriptedMetricAggregatorFactory, since it won't have\n+ // access to them for the scripts it's given precompiled.\n+\n ExecutableScript.Factory executableInitScript;\n+ Map<String, Object> initScriptParams;\n if (initScript != null) {\n executableInitScript = queryShardContext.getScriptService().compile(initScript, ExecutableScript.AGGS_CONTEXT);\n+ initScriptParams = initScript.getParams();\n } else {\n executableInitScript = p -> null;\n+ initScriptParams = Collections.emptyMap();\n }\n+\n SearchScript.Factory searchMapScript = queryShardContext.getScriptService().compile(mapScript, SearchScript.AGGS_CONTEXT);\n+ Map<String, Object> mapScriptParams = mapScript.getParams();\n+\n ExecutableScript.Factory executableCombineScript;\n+ Map<String, Object> combineScriptParams;\n if (combineScript != null) {\n- executableCombineScript =queryShardContext.getScriptService().compile(combineScript, ExecutableScript.AGGS_CONTEXT);\n+ executableCombineScript = queryShardContext.getScriptService().compile(combineScript, ExecutableScript.AGGS_CONTEXT);\n+ combineScriptParams = combineScript.getParams();\n } else {\n executableCombineScript = p -> null;\n+ combineScriptParams = Collections.emptyMap();\n }\n- return new ScriptedMetricAggregatorFactory(name, searchMapScript, executableInitScript, executableCombineScript, reduceScript,\n+ return new ScriptedMetricAggregatorFactory(name, searchMapScript, mapScriptParams, executableInitScript, initScriptParams,\n+ executableCombineScript, combineScriptParams, reduceScript,\n params, queryShardContext.lookup(), context, parent, subfactoriesBuilder, metaData);\n }\n ", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregationBuilder.java", "status": "modified" }, { "diff": "@@ -35,28 +35,35 @@\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n-import java.util.function.Function;\n \n public class ScriptedMetricAggregatorFactory extends AggregatorFactory<ScriptedMetricAggregatorFactory> {\n \n private final SearchScript.Factory mapScript;\n+ private final Map<String, Object> mapScriptParams;\n private final ExecutableScript.Factory combineScript;\n+ private final Map<String, Object> combineScriptParams;\n private final 
Script reduceScript;\n- private final Map<String, Object> params;\n+ private final Map<String, Object> aggParams;\n private final SearchLookup lookup;\n private final ExecutableScript.Factory initScript;\n+ private final Map<String, Object> initScriptParams;\n \n- public ScriptedMetricAggregatorFactory(String name, SearchScript.Factory mapScript, ExecutableScript.Factory initScript,\n- ExecutableScript.Factory combineScript, Script reduceScript, Map<String, Object> params,\n+ public ScriptedMetricAggregatorFactory(String name, SearchScript.Factory mapScript, Map<String, Object> mapScriptParams,\n+ ExecutableScript.Factory initScript, Map<String, Object> initScriptParams,\n+ ExecutableScript.Factory combineScript, Map<String, Object> combineScriptParams,\n+ Script reduceScript, Map<String, Object> aggParams,\n SearchLookup lookup, SearchContext context, AggregatorFactory<?> parent,\n AggregatorFactories.Builder subFactories, Map<String, Object> metaData) throws IOException {\n super(name, context, parent, subFactories, metaData);\n this.mapScript = mapScript;\n+ this.mapScriptParams = mapScriptParams;\n this.initScript = initScript;\n+ this.initScriptParams = initScriptParams;\n this.combineScript = combineScript;\n+ this.combineScriptParams = combineScriptParams;\n this.reduceScript = reduceScript;\n this.lookup = lookup;\n- this.params = params;\n+ this.aggParams = aggParams;\n }\n \n @Override\n@@ -65,26 +72,26 @@ public Aggregator createInternal(Aggregator parent, boolean collectsFromSingleBu\n if (collectsFromSingleBucket == false) {\n return asMultiBucketAggregator(this, context, parent);\n }\n- Map<String, Object> params = this.params;\n- if (params != null) {\n- params = deepCopyParams(params, context);\n+ Map<String, Object> aggParams = this.aggParams;\n+ if (aggParams != null) {\n+ aggParams = deepCopyParams(aggParams, context);\n } else {\n- params = new HashMap<>();\n+ aggParams = new HashMap<>();\n }\n- if (params.containsKey(\"_agg\") == false) {\n- params.put(\"_agg\", new HashMap<String, Object>());\n+ if (aggParams.containsKey(\"_agg\") == false) {\n+ aggParams.put(\"_agg\", new HashMap<String, Object>());\n }\n \n- final ExecutableScript initScript = this.initScript.newInstance(params);\n- final SearchScript.LeafFactory mapScript = this.mapScript.newFactory(params, lookup);\n- final ExecutableScript combineScript = this.combineScript.newInstance(params);\n+ final ExecutableScript initScript = this.initScript.newInstance(mergeParams(aggParams, initScriptParams));\n+ final SearchScript.LeafFactory mapScript = this.mapScript.newFactory(mergeParams(aggParams, mapScriptParams), lookup);\n+ final ExecutableScript combineScript = this.combineScript.newInstance(mergeParams(aggParams, combineScriptParams));\n \n final Script reduceScript = deepCopyScript(this.reduceScript, context);\n if (initScript != null) {\n initScript.run();\n }\n return new ScriptedMetricAggregator(name, mapScript,\n- combineScript, reduceScript, params, context, parent,\n+ combineScript, reduceScript, aggParams, context, parent,\n pipelineAggregators, metaData);\n }\n \n@@ -128,5 +135,18 @@ private static <T> T deepCopyParams(T original, SearchContext context) {\n return clone;\n }\n \n+ private static Map<String, Object> mergeParams(Map<String, Object> agg, Map<String, Object> script) {\n+ // Start with script params\n+ Map<String, Object> combined = new HashMap<>(script);\n \n+ // Add in agg params, throwing an exception if any conflicts are detected\n+ for (Map.Entry<String, Object> aggEntry : 
agg.entrySet()) {\n+ if (combined.putIfAbsent(aggEntry.getKey(), aggEntry.getValue()) != null) {\n+ throw new IllegalArgumentException(\"Parameter name \\\"\" + aggEntry.getKey() +\n+ \"\\\" used in both aggregation and script parameters\");\n+ }\n+ }\n+\n+ return combined;\n+ }\n }", "filename": "server/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorFactory.java", "status": "modified" }, { "diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.search.aggregations.metrics;\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n+import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.settings.Settings;\n@@ -62,6 +64,7 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.allOf;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n@@ -322,11 +325,11 @@ public void testMap() {\n assertThat(numShardsRun, greaterThan(0));\n }\n \n- public void testMapWithParams() {\n+ public void testExplicitAggParam() {\n Map<String, Object> params = new HashMap<>();\n params.put(\"_agg\", new ArrayList<>());\n \n- Script mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"_agg.add(1)\", params);\n+ Script mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"_agg.add(1)\", Collections.emptyMap());\n \n SearchResponse response = client().prepareSearch(\"idx\")\n .setQuery(matchAllQuery())\n@@ -361,17 +364,17 @@ public void testMapWithParams() {\n }\n \n public void testMapWithParamsAndImplicitAggMap() {\n- Map<String, Object> params = new HashMap<>();\n- // don't put any _agg map in params\n- params.put(\"param1\", \"12\");\n- params.put(\"param2\", 1);\n+ // Split the params up between the script and the aggregation.\n+ // Don't put any _agg map in params.\n+ Map<String, Object> scriptParams = Collections.singletonMap(\"param1\", \"12\");\n+ Map<String, Object> aggregationParams = Collections.singletonMap(\"param2\", 1);\n \n // The _agg hashmap will be available even if not declared in the params map\n- Script mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"_agg[param1] = param2\", params);\n+ Script mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"_agg[param1] = param2\", scriptParams);\n \n SearchResponse response = client().prepareSearch(\"idx\")\n .setQuery(matchAllQuery())\n- .addAggregation(scriptedMetric(\"scripted\").params(params).mapScript(mapScript))\n+ .addAggregation(scriptedMetric(\"scripted\").params(aggregationParams).mapScript(mapScript))\n .get();\n assertSearchResponse(response);\n assertThat(response.getHits().getTotalHits(), equalTo(numDocs));\n@@ -1001,4 +1004,16 @@ public void testDontCacheScripts() throws Exception {\n assertThat(client().admin().indices().prepareStats(\"cache_test_idx\").setRequestCache(true).get().getTotal().getRequestCache()\n .getMissCount(), equalTo(0L));\n }\n+\n+ public void testConflictingAggAndScriptParams() {\n+ Map<String, Object> params = Collections.singletonMap(\"param1\", \"12\");\n+ Script 
mapScript = new Script(ScriptType.INLINE, CustomScriptPlugin.NAME, \"_agg.add(1)\", params);\n+\n+ SearchRequestBuilder builder = client().prepareSearch(\"idx\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(scriptedMetric(\"scripted\").params(params).mapScript(mapScript));\n+\n+ SearchPhaseExecutionException ex = expectThrows(SearchPhaseExecutionException.class, builder::get);\n+ assertThat(ex.getCause().getMessage(), containsString(\"Parameter name \\\"param1\\\" used in both aggregation and script parameters\"));\n+ }\n }", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricIT.java", "status": "modified" }, { "diff": "@@ -64,8 +64,16 @@ public class ScriptedMetricAggregatorTests extends AggregatorTestCase {\n Collections.emptyMap());\n private static final Script COMBINE_SCRIPT_SCORE = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"combineScriptScore\",\n Collections.emptyMap());\n- private static final Map<String, Function<Map<String, Object>, Object>> SCRIPTS = new HashMap<>();\n \n+ private static final Script INIT_SCRIPT_PARAMS = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"initScriptParams\",\n+ Collections.singletonMap(\"initialValue\", 24));\n+ private static final Script MAP_SCRIPT_PARAMS = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"mapScriptParams\",\n+ Collections.singletonMap(\"itemValue\", 12));\n+ private static final Script COMBINE_SCRIPT_PARAMS = new Script(ScriptType.INLINE, MockScriptEngine.NAME, \"combineScriptParams\",\n+ Collections.singletonMap(\"divisor\", 4));\n+ private static final String CONFLICTING_PARAM_NAME = \"initialValue\";\n+\n+ private static final Map<String, Function<Map<String, Object>, Object>> SCRIPTS = new HashMap<>();\n \n @BeforeClass\n @SuppressWarnings(\"unchecked\")\n@@ -99,6 +107,26 @@ public static void initMockScripts() {\n Map<String, Object> agg = (Map<String, Object>) params.get(\"_agg\");\n return ((List<Double>) agg.get(\"collector\")).stream().mapToDouble(Double::doubleValue).sum();\n });\n+\n+ SCRIPTS.put(\"initScriptParams\", params -> {\n+ Map<String, Object> agg = (Map<String, Object>) params.get(\"_agg\");\n+ Integer initialValue = (Integer)params.get(\"initialValue\");\n+ ArrayList<Integer> collector = new ArrayList();\n+ collector.add(initialValue);\n+ agg.put(\"collector\", collector);\n+ return agg;\n+ });\n+ SCRIPTS.put(\"mapScriptParams\", params -> {\n+ Map<String, Object> agg = (Map<String, Object>) params.get(\"_agg\");\n+ Integer itemValue = (Integer) params.get(\"itemValue\");\n+ ((List<Integer>) agg.get(\"collector\")).add(itemValue);\n+ return agg;\n+ });\n+ SCRIPTS.put(\"combineScriptParams\", params -> {\n+ Map<String, Object> agg = (Map<String, Object>) params.get(\"_agg\");\n+ int divisor = ((Integer) params.get(\"divisor\"));\n+ return ((List<Integer>) agg.get(\"collector\")).stream().mapToInt(Integer::intValue).map(i -> i / divisor).sum();\n+ });\n }\n \n @SuppressWarnings(\"unchecked\")\n@@ -187,6 +215,48 @@ public void testScriptedMetricWithCombineAccessesScores() throws IOException {\n }\n }\n \n+ public void testScriptParamsPassedThrough() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ for (int i = 0; i < 100; i++) {\n+ indexWriter.addDocument(singleton(new SortedNumericDocValuesField(\"number\", i)));\n+ }\n+ }\n+\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ ScriptedMetricAggregationBuilder 
aggregationBuilder = new ScriptedMetricAggregationBuilder(AGG_NAME);\n+ aggregationBuilder.initScript(INIT_SCRIPT_PARAMS).mapScript(MAP_SCRIPT_PARAMS).combineScript(COMBINE_SCRIPT_PARAMS);\n+ ScriptedMetric scriptedMetric = search(newSearcher(indexReader, true, true), new MatchAllDocsQuery(), aggregationBuilder);\n+\n+ // The result value depends on the script params.\n+ assertEquals(306, scriptedMetric.aggregation());\n+ }\n+ }\n+ }\n+\n+ public void testConflictingAggAndScriptParams() throws IOException {\n+ try (Directory directory = newDirectory()) {\n+ try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {\n+ for (int i = 0; i < 100; i++) {\n+ indexWriter.addDocument(singleton(new SortedNumericDocValuesField(\"number\", i)));\n+ }\n+ }\n+\n+ try (IndexReader indexReader = DirectoryReader.open(directory)) {\n+ ScriptedMetricAggregationBuilder aggregationBuilder = new ScriptedMetricAggregationBuilder(AGG_NAME);\n+ Map<String, Object> aggParams = Collections.singletonMap(CONFLICTING_PARAM_NAME, \"blah\");\n+ aggregationBuilder.params(aggParams).initScript(INIT_SCRIPT_PARAMS).mapScript(MAP_SCRIPT_PARAMS).\n+ combineScript(COMBINE_SCRIPT_PARAMS);\n+\n+ IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () ->\n+ search(newSearcher(indexReader, true, true), new MatchAllDocsQuery(), aggregationBuilder)\n+ );\n+ assertEquals(\"Parameter name \\\"\" + CONFLICTING_PARAM_NAME + \"\\\" used in both aggregation and script parameters\",\n+ ex.getMessage());\n+ }\n+ }\n+ }\n+\n /**\n * We cannot use Mockito for mocking QueryShardContext in this case because\n * script-related methods (e.g. QueryShardContext#getLazyExecutableScript)", "filename": "server/src/test/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregatorTests.java", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 5.4.1\r\n\r\n**Plugins installed**: N/A\r\n\r\n**JVM version**: 1.8.0_72\r\n\r\n**OS version**: Windows 10\r\n\r\n**Description of the problem including expected versus actual behavior**: The documentation [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html) is misleading. Copying it below:\r\n```\r\nMultiple awareness attributes can be specified, in which case the combination of values from each attribute is considered to be a separate value.\r\n```\r\n\r\n**Steps to reproduce**:\r\n1. Set up an Elasticsearch cluster with three nodes, say, n1, n2 and n3 with the following configuration:\r\n\r\n- elasticsearch.yml of n1:\r\n```\r\ncluster.name: c2\r\nnode.name: n1\r\ncluster.routing.allocation.awareness.attributes: updateDomain, faultDomain\r\ncluster.routing.allocation.awareness.force.updateDomain.values: 0,1\r\ncluster.routing.allocation.awareness.force.faultDomain.values: 0,1,2\r\nnode.attr.updateDomain: 1\r\nnode.attr.faultDomain: 0\r\n```\r\n- elasticsearch.yml of n2:\r\n```\r\ncluster.name: c2\r\nnode.name: n2\r\ncluster.routing.allocation.awareness.attributes: updateDomain, faultDomain\r\ncluster.routing.allocation.awareness.force.updateDomain.values: 0,1\r\ncluster.routing.allocation.awareness.force.faultDomain.values: 0,1,2\r\nnode.attr.updateDomain: 0\r\nnode.attr.faultDomain: 1\r\n```\r\n- elasticsearch.yml of n3:\r\n```\r\ncluster.name: c2\r\nnode.name: n3\r\ncluster.routing.allocation.awareness.attributes: updateDomain, faultDomain\r\ncluster.routing.allocation.awareness.force.updateDomain.values: 0,1\r\ncluster.routing.allocation.awareness.force.faultDomain.values: 0,1,2\r\nnode.attr.updateDomain: 1\r\nnode.attr.faultDomain: 2\r\n```\r\n 2. Create a test index:\r\n```\r\nPUT idx1\r\n{\r\n \"settings\": {\r\n \"number_of_shards\": 1, \r\n \"number_of_replicas\": 1\r\n }\r\n}\r\n```\r\n 3. Shards should be allocated on either n1 and n2 or n2 and n3. Say, the primary is allocated on n1 and replica is allocated on n2. 
Try moving the replica to n3:\r\n```\r\nPOST _cluster/reroute?explain\r\n{\r\n \"commands\": [\r\n {\r\n \"move\": {\r\n \"index\": \"idx1\",\r\n \"shard\": 0,\r\n \"from_node\": \"n2\",\r\n \"to_node\": \"n3\"\r\n }\r\n }\r\n ]\r\n}\r\n```\r\n\r\nIt will fail with the below explanation:\r\n```\r\n{\r\n \"decider\": \"awareness\",\r\n \"decision\": \"NO\",\r\n \"explanation\": \"there are too many copies of the shard allocated to nodes with attribute [updateDomain], there are [2] total configured shard copies for this shard id and [2] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]\"\r\n}\r\n```\r\n\r\nWhat this means is that for multiple awareness attributes, each on its own is considered to be a unique value and not a combination of them. If combination was considered to be unique, then the cluster reroute should have worked but it did not. ", "comments": [ { "body": "Pinging @elastic/es-distributed", "created_at": "2018-03-16T11:44:29Z" }, { "body": "Thanks for the detailed instructions. A slightly simpler test is not to involve node `n2` at all, which is almost in the test suite:\r\n\r\n```diff\r\n--- a/server/src/test/java/org/elasticsearch/cluster/routing/allocation/AwarenessAllocationTests.java\r\n+++ b/server/src/test/java/org/elasticsearch/cluster/routing/allocation/AwarenessAllocationTests.java\r\n@@ -834,7 +834,7 @@ public class AwarenessAllocationTests extends ESAllocationTestCase {\r\n nodeAAttributes.put(\"rack\", \"c\");\r\n Map<String, String> nodeBAttributes = new HashMap<>();\r\n nodeBAttributes.put(\"zone\", \"b\");\r\n- nodeBAttributes.put(\"rack\", \"d\");\r\n+ nodeBAttributes.put(\"rack\", \"c\");\r\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\r\n .add(newNode(\"A-0\", nodeAAttributes))\r\n .add(newNode(\"B-0\", nodeBAttributes))\r\n```\r\n\r\nThis seems like the desired behaviour so I agree it's the docs that need fixing.", "created_at": "2018-03-16T16:39:22Z" } ], "number": 29105, "title": "Incorrect documentation about multiple awareness attributes" }
{ "body": "Today, the docs imply that if multiple attributes are specified the the\r\nwhole combination of values is considered as a single entity when\r\nperforming allocation. In fact, each attribute is considered separately. This\r\nchange fixes this discrepancy.\r\n\r\nIt also replaces the use of the term \"awareness zone\" with \"zone or domain\", and\r\nreformats some paragraphs to the right width.\r\n\r\nFixes #29105\r\n", "number": 29116, "review_comments": [], "title": "Update allocation awareness docs" }
{ "commits": [ { "message": "Update allocation awareness docs\n\nToday, the docs imply that if multiple attributes are specified the the\nwhole combination of values is considered as a single entity when\nperforming allocation. In fact, each attribute is considered separately. This\nchange fixes this discrepancy.\n\nIt also replaces the use of the term \"awareness zone\" with \"zone or domain\", and\nreformats some paragraphs to the right width.\n\nFixes #29105" }, { "message": "Missed a couple of refs to 'awareness zone'" } ], "files": [ { "diff": "@@ -2,8 +2,8 @@\n === Shard Allocation Awareness\n \n When running nodes on multiple VMs on the same physical server, on multiple\n-racks, or across multiple awareness zones, it is more likely that two nodes on\n-the same physical server, in the same rack, or in the same awareness zone will\n+racks, or across multiple zones or domains, it is more likely that two nodes on\n+the same physical server, in the same rack, or in the same zone or domain will\n crash at the same time, rather than two unrelated nodes crashing\n simultaneously.\n \n@@ -25,7 +25,7 @@ attribute called `rack_id` -- we could use any attribute name. For example:\n ----------------------\n <1> This setting could also be specified in the `elasticsearch.yml` config file.\n \n-Now, we need to setup _shard allocation awareness_ by telling Elasticsearch\n+Now, we need to set up _shard allocation awareness_ by telling Elasticsearch\n which attributes to use. This can be configured in the `elasticsearch.yml`\n file on *all* master-eligible nodes, or it can be set (and changed) with the\n <<cluster-update-settings,cluster-update-settings>> API.\n@@ -37,51 +37,51 @@ For our example, we'll set the value in the config file:\n cluster.routing.allocation.awareness.attributes: rack_id\n --------------------------------------------------------\n \n-With this config in place, let's say we start two nodes with `node.attr.rack_id`\n-set to `rack_one`, and we create an index with 5 primary shards and 1 replica\n-of each primary. All primaries and replicas are allocated across the two\n-nodes.\n+With this config in place, let's say we start two nodes with\n+`node.attr.rack_id` set to `rack_one`, and we create an index with 5 primary\n+shards and 1 replica of each primary. All primaries and replicas are\n+allocated across the two nodes.\n \n Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`,\n Elasticsearch will move shards across to the new nodes, ensuring (if possible)\n-that no two copies of the same shard will be in the same rack. However if `rack_two`\n-were to fail, taking down both of its nodes, Elasticsearch will still allocate the lost\n-shard copies to nodes in `rack_one`. \n+that no two copies of the same shard will be in the same rack. However if\n+`rack_two` were to fail, taking down both of its nodes, Elasticsearch will\n+still allocate the lost shard copies to nodes in `rack_one`. \n \n .Prefer local shards\n *********************************************\n \n When executing search or GET requests, with shard awareness enabled,\n Elasticsearch will prefer using local shards -- shards in the same awareness\n-group -- to execute the request. This is usually faster than crossing racks or\n-awareness zones.\n+group -- to execute the request. 
This is usually faster than crossing between\n+racks or across zone boundaries.\n \n *********************************************\n \n-Multiple awareness attributes can be specified, in which case the combination\n-of values from each attribute is considered to be a separate value.\n+Multiple awareness attributes can be specified, in which case each attribute\n+is considered separately when deciding where to allocate the shards.\n \n [source,yaml]\n -------------------------------------------------------------\n cluster.routing.allocation.awareness.attributes: rack_id,zone\n -------------------------------------------------------------\n \n-NOTE: When using awareness attributes, shards will not be allocated to\n-nodes that don't have values set for those attributes.\n+NOTE: When using awareness attributes, shards will not be allocated to nodes\n+that don't have values set for those attributes.\n \n-NOTE: Number of primary/replica of a shard allocated on a specific group\n-of nodes with the same awareness attribute value is determined by the number\n-of attribute values. When the number of nodes in groups is unbalanced and\n-there are many replicas, replica shards may be left unassigned.\n+NOTE: Number of primary/replica of a shard allocated on a specific group of\n+nodes with the same awareness attribute value is determined by the number of\n+attribute values. When the number of nodes in groups is unbalanced and there\n+are many replicas, replica shards may be left unassigned.\n \n [float]\n [[forced-awareness]]\n === Forced Awareness\n \n-Imagine that you have two awareness zones and enough hardware across the two\n-zones to host all of your primary and replica shards. But perhaps the\n-hardware in a single zone, while sufficient to host half the shards, would be\n-unable to host *ALL* the shards.\n+Imagine that you have two zones and enough hardware across the two zones to\n+host all of your primary and replica shards. But perhaps the hardware in a\n+single zone, while sufficient to host half the shards, would be unable to host\n+*ALL* the shards.\n \n With ordinary awareness, if one zone lost contact with the other zone,\n Elasticsearch would assign all of the missing replica shards to a single zone.\n@@ -91,9 +91,9 @@ remaining zone to be overloaded.\n Forced awareness solves this problem by *NEVER* allowing copies of the same\n shard to be allocated to the same zone.\n \n-For example, lets say we have an awareness attribute called `zone`, and\n-we know we are going to have two zones, `zone1` and `zone2`. Here is how\n-we can force awareness on a node:\n+For example, lets say we have an awareness attribute called `zone`, and we\n+know we are going to have two zones, `zone1` and `zone2`. Here is how we can\n+force awareness on a node:\n \n [source,yaml]\n -------------------------------------------------------------------\n@@ -102,10 +102,10 @@ cluster.routing.allocation.awareness.attributes: zone\n -------------------------------------------------------------------\n <1> We must list all possible values that the `zone` attribute can have.\n \n-Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an index\n-with 5 shards and 1 replica. The index will be created, but only the 5 primary\n-shards will be allocated (with no replicas). Only when we start more nodes\n-with `node.attr.zone` set to `zone2` will the replicas be allocated.\n+Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an\n+index with 5 shards and 1 replica. 
The index will be created, but only the 5\n+primary shards will be allocated (with no replicas). Only when we start more\n+nodes with `node.attr.zone` set to `zone2` will the replicas be allocated.\n \n The `cluster.routing.allocation.awareness.*` settings can all be updated\n dynamically on a live cluster with the", "filename": "docs/reference/modules/cluster/allocation_awareness.asciidoc", "status": "modified" } ] }
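The rewritten docs above end by noting that the `cluster.routing.allocation.awareness.*` settings can also be changed dynamically through the cluster-update-settings API. Here is a hedged sketch of doing that from the Java client API; the attribute values and the choice of persistent (rather than transient) settings are assumptions taken from the zone example in the docs.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class AwarenessSettingsSketch {
    // Dynamically enables zone awareness and forces the two known zone values,
    // matching the elasticsearch.yml example from the docs above.
    public static void enableZoneAwareness(Client client) {
        client.admin().cluster().prepareUpdateSettings()
            .setPersistentSettings(Settings.builder()
                .put("cluster.routing.allocation.awareness.attributes", "zone")
                .put("cluster.routing.allocation.awareness.force.zone.values", "zone1,zone2"))
            .get();
    }
}
```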
{ "body": "The setting `index.routing.allocation.enable` was [documented in v1.7](https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-update-settings.html) but lost in the refactoring in f123a53d7258349a171e47a35f4581899d8fa776 so has been undocumented since 2.0.\r\n\r\nIt might also be worth checking if any other settings were lost in the same change. I have not done so yet.\r\n\r\n/cc @elastic/es-distributed.", "comments": [ { "body": "The following settings appear on lines removed in f123a53 and not in the `docs/` folder of `master`:\r\n\r\n```\r\nindex.cache.filter.expire\r\nindex.cache.filter.max_size\r\nindex.gateway.snapshot_interval\r\nindex.gc_deletes\r\nindex.recovery.initial_shards\r\nindex.routing.allocation.disable_allocation\r\nindex.routing.allocation.disable_new_allocation\r\nindex.routing.allocation.disable_replica_allocation\r\nindex.routing.allocation.enable\r\nindex.routing.rebalance.enable\r\nindex.translog.disable_flush\r\nindex.translog.flush_threshold_ops\r\nindex.translog.flush_threshold_period\r\nindex.translog.fs.type\r\nindex.ttl.disable_purge\r\nindex.warmer.enabled\r\nindices.cache.filter.size\r\nindices.recovery.compress\r\nindices.recovery.concurrent_small_file_streams\r\nindices.recovery.concurrent_streams\r\nindices.recovery.file_chunk_size\r\nindices.recovery.translog_ops\r\nindices.recovery.translog_size\r\nindices.ttl.interval\r\n```\r\n\r\nOf these, the following still seem to be valid settings:\r\n\r\n```\r\nindex.gc_deletes\r\nindex.routing.allocation.enable\r\nindex.routing.rebalance.enable\r\nindex.ttl.disable_purge\r\nindex.warmer.enabled\r\n```\r\n\r\nThe following do not seem to be valid settings, but are still mentioned in a test:\r\n\r\n```\r\nindices.recovery.concurrent_streams\r\nindices.recovery.concurrent_small_file_streams\r\n```\r\n\r\nhttps://github.com/elastic/elasticsearch/blob/5904d936fabfa25b832b757461efa6b0d3e9acb9/server/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java#L104-L105", "created_at": "2018-03-15T12:09:37Z" }, { "body": "@Sue-Gallagher @debadair I'm happy to try and resurrect these lost docs, but I'm not sure _where_ in the docs tree they should go, as things seem to have moved around quite a lot since 1.7. Your guidance would be appreciated.", "created_at": "2018-03-15T16:41:42Z" }, { "body": "It looks like the missing settings should be added to https://www.elastic.co/guide/en/elasticsearch/reference/6.2/index-modules.html#index-modules-settings. \r\n\r\nIf you want to open a PR for that, that would be most awesome.", "created_at": "2018-03-28T23:29:50Z" }, { "body": "I reinstated docs for `index.gc_deletes`, `index.routing.allocation.enable` and `index.routing.rebalance.enable` here.\r\n\r\nSetting `index.ttl.disable_purge` is unused so I opened #29526 and #29527 to remove it.\r\n\r\nSetting `index.warmer.enabled` requires knowledge that I do not have so I opened #29529 about this one.", "created_at": "2018-04-16T10:23:09Z" } ], "number": 28781, "title": "Reinstate missing documentation" }
{ "body": "The settings `indices.recovery.concurrent_streams` and\r\n`indices.recovery.concurrent_small_file_streams` were removed in\r\nf5e4cd46164630e09f308ed78c512eea8bda8a05. This commit removes their last traces\r\nfrom the codebase.\r\n\r\nRelates #28781\r\n\r\nPinging @elastic/es-core-infra", "number": 29087, "review_comments": [ { "body": "s/Exceptino/Exception/ while you are here?", "created_at": "2018-03-15T13:38:39Z" }, { "body": "Sounds like a worthwhile correctino. Pushed pushed f4759cd8fc, fixing the send send files too.", "created_at": "2018-03-15T13:51:18Z" } ], "title": "Remove usages of obsolete settings" }
{ "commits": [ { "message": "Remove usages of obsolete settings\n\nThe settings `indices.recovery.concurrent_streams` and\n`indices.recovery.concurrent_small_file_streams` were removed in\nf5e4cd46164630e09f308ed78c512eea8bda8a05. This commit removes their last traces\nfrom the codebase.\n\nRelates #28781" }, { "message": "Fix test name" } ], "files": [ { "diff": "@@ -332,10 +332,8 @@ public void close() throws IOException {\n }\n \n \n- public void testHandleExceptinoOnSendSendFiles() throws Throwable {\n- Settings settings = Settings.builder().put(\"indices.recovery.concurrent_streams\", 1).\n- put(\"indices.recovery.concurrent_small_file_streams\", 1).build();\n- final RecoverySettings recoverySettings = new RecoverySettings(settings, service);\n+ public void testHandleExceptionOnSendFiles() throws Throwable {\n+ final RecoverySettings recoverySettings = new RecoverySettings(Settings.EMPTY, service);\n final StartRecoveryRequest request = getStartRecoveryRequest();\n Path tempDir = createTempDir();\n Store store = newStore(tempDir, false);", "filename": "server/src/test/java/org/elasticsearch/indices/recovery/RecoverySourceHandlerTests.java", "status": "modified" } ] }
{ "body": "As part of #28878 we discovered that the recovery API accepts a `detailed` parameter, also documented, that is unused. The parameter should rather be called `details` (see https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java#L918), although it's no longer accepted by the recovery API since we introduced parameters validation. Also the `detailed` flag is transported to the response, without being serialized nor printed out, where it should be removed.\r\n\r\nWe should make this work, by ~~either accepting the `details` param instead and updating docs and spec or~~ modifying `RecoveryState` to look for `detailed` instead of `details`. The detailed flag though doesn't need to be part of the response, and does not need to be part of the request either, given that `RestRequest` is provided as `Params` argument to the `toXContent` method.", "comments": [ { "body": "In a few rest apis `detailed` flag is used : \r\nhttps://github.com/elastic/elasticsearch/blob/e7d1e12675d72b5b297832b6bb45cbb3fed753d5/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestListTasksAction.java#L72\r\nAnd `details` is used as a filedname in few responses : \r\nhttps://github.com/elastic/elasticsearch/blob/e7d1e12675d72b5b297832b6bb45cbb3fed753d5/server/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java#L141\r\nhttps://github.com/elastic/elasticsearch/blob/e7d1e12675d72b5b297832b6bb45cbb3fed753d5/server/src/main/java/org/elasticsearch/rest/action/search/RestExplainAction.java#L121\r\nMaybe it would be more consistent to change the `details` to `detailed` ?", "created_at": "2018-03-06T13:21:03Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-09T04:32:20Z" }, { "body": "Pinging @elastic/es-distributed", "created_at": "2018-03-15T17:24:56Z" }, { "body": ">We should make this work, by either accepting the `details` param instead and updating docs and spec or modifying `RecoveryState` to look for `detailed` instead of `\r\n\r\nI think that it would be better to go the second way : use `detailed` as a url parameter as done in other requests ( basically go with the documentation - similar to what was done for `field_data` vs `fielddata` for clear cache api )\r\n\r\n@PnPie @javanna what do you think ?\r\n\r\n\r\n", "created_at": "2018-03-27T17:59:08Z" }, { "body": "@olcbean I am not sure, you would go that way because the param was accepted before as `detailed` although it had no effect? On the other hand it will have some effect now, I wonder if any user ever use that parameter.", "created_at": "2018-03-27T19:02:04Z" }, { "body": "Yes, and even if there were users using `detailed` flag before it wouldn't work right ? because in `RecoveryState` we parsed it as `details`.", "created_at": "2018-03-27T20:33:23Z" }, { "body": "@PnPie one could say there is a subtle difference between not doing anything and throwing error. Probably the latter is better than the former at this point, at least you know that something does not work upfront.", "created_at": "2018-03-28T08:03:55Z" }, { "body": "Sorry, I think that I was not really clear. Let me try to re-elaborate. 
\r\n\r\nWhat I meant is that at this case it could be better to go with the documentation and stick with the `detailed` ( as controversial as it may be, sometimes it could be better to go with the documentation over the concrete implementation ;) )\r\n\r\nWithout digging too deep, a quick search through the `resp-api-spec` shows that `detailed` parameter is used in several apis and `details` in none. \r\n\r\nMaybe making `detailed` work as expected would be more consistent. Am I missing something ?", "created_at": "2018-03-28T09:46:38Z" }, { "body": "thanks for the explanation @olcbean ! I am fine either way, @bleskes you reviewed the PR associated to this issue, do you have an opinion on this?", "created_at": "2018-03-28T10:03:46Z" }, { "body": "@olcbean Yes, I agree with that (keep it the same as others), and according to all we discussed, I'm more in favor of keeping it as `detailed`.", "created_at": "2018-03-28T18:38:30Z" }, { "body": "agreed @PnPie, I have updated the description of this issue accordingly.", "created_at": "2019-03-05T10:00:34Z" } ], "number": 28910, "title": "Recovery API supports `details` param not `detailed`" }
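The thread above settles on keeping `detailed` as the flag name, matching the other APIs, and wiring it through so the per-file `details` array actually appears in the response. A small sketch of exercising the flag through the low-level REST client follows; the host, port, and index name are assumptions (the index name simply echoes the YAML test added in the fix).

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RecoveryDetailedSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // The documented flag is `detailed`; before the fix the response
            // builder looked for `details`, so the per-file breakdown was
            // never rendered no matter what the caller sent.
            Request request = new Request("GET", "/test_3/_recovery");
            request.addParameter("detailed", "true");
            request.addParameter("human", "true");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```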
{ "body": "Properly forward the `detailed` parameter to show extra `details` in the recovery stats.\r\n\r\nClose #28910", "number": 29076, "review_comments": [ { "body": "can we remove this too?", "created_at": "2018-03-25T14:30:55Z" }, { "body": "when you remove this, watch out for BWC aspects - see https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java#L59 as an example.", "created_at": "2018-03-25T14:33:07Z" }, { "body": "wasn't this already removed in another PR? Maybe you need to merge master in?", "created_at": "2018-03-27T18:55:39Z" }, { "body": "The `details` variable in `RecoveryRequest` ? It's to pass the `details` flag in Recovery API ? can we remove it ?", "created_at": "2018-03-27T20:09:02Z" }, { "body": "Here I just changed the name of variable (to be the same as flag name), the value is always a boolean, so we don't need to do some bwc changes ?", "created_at": "2018-03-27T20:16:38Z" }, { "body": "Yes, I didn't merge master before.", "created_at": "2018-03-27T20:17:14Z" }, { "body": "undo this change", "created_at": "2019-03-19T13:47:49Z" }, { "body": "let's leave this field as is. \"details\" in the output is nicer, and we don't break existing parsers.", "created_at": "2019-03-19T13:49:35Z" }, { "body": "this is the only change that we need to do in this class", "created_at": "2019-03-19T13:51:12Z" }, { "body": "I would prefer a REST test to show that the `detailed` flag is honoured (see `rest-api-spec/test/cat.recovery/10_basic.yml`). This test here is not needed.", "created_at": "2019-03-19T13:54:37Z" }, { "body": "I think we will have to skip this test for versions that don't have this fix.", "created_at": "2019-06-18T09:00:28Z" } ], "title": "Make Recovery API support `detailed` params" }
{ "commits": [ { "message": "Make recovery API support only `detailed` params" }, { "message": "Merge branch 'master' into recovery_param" }, { "message": "update and add rest layer test" }, { "message": "restore changes for Strings.java" }, { "message": "Merge remote-tracking branch 'elastic/master' into pr/29076" }, { "message": "Fix docs recovery test" }, { "message": "Add BWC conditon for newly added test" } ], "files": [ { "diff": "@@ -249,7 +249,7 @@ Response:\n }\n --------------------------------------------------\n // TESTRESPONSE[s/\"source\" : \\{[^}]*\\}/\"source\" : $body.$_path/]\n-// TESTRESPONSE[s/\"details\" : \\[[^\\]]*\\]//]\n+// TESTRESPONSE[s/\"details\" : \\[[^\\]]*\\]/\"details\" : $body.$_path/]\n // TESTRESPONSE[s/: (\\-)?[0-9]+/: $body.$_path/]\n // TESTRESPONSE[s/: \"[^\"]*\"/: $body.$_path/]\n ////", "filename": "docs/reference/indices/recovery.asciidoc", "status": "modified" }, { "diff": "@@ -130,3 +130,28 @@\n index: [v*]\n \n - match: { $body: {} }\n+---\n+\"Indices recovery test with detailed parameter\":\n+ - skip:\n+ version: \" - 7.9.99\"\n+ reason: bug with detailed parameter fixed in 8.0\n+\n+ - do:\n+ indices.create:\n+ index: test_3\n+ body:\n+ settings:\n+ index:\n+ number_of_replicas: 0\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+ - do:\n+ indices.recovery:\n+ index: [test_3]\n+ human: true\n+ detailed: true\n+\n+ - match: { test_3.shards.0.index.files.details: [] }", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.recovery/10_basic.yml", "status": "modified" }, { "diff": "@@ -927,7 +927,7 @@ public synchronized XContentBuilder toXContent(XContentBuilder builder, Params p\n builder.field(Fields.REUSED, reusedFileCount());\n builder.field(Fields.RECOVERED, recoveredFileCount());\n builder.field(Fields.PERCENT, String.format(Locale.ROOT, \"%1.1f%%\", recoveredFilesPercent()));\n- if (params.paramAsBoolean(\"details\", false)) {\n+ if (params.paramAsBoolean(\"detailed\", false)) {\n builder.startArray(Fields.DETAILS);\n for (File file : fileDetails.values()) {\n file.toXContent(builder, params);", "filename": "server/src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version**: 1.7.5\n\n**JVM version**: 1.8.0_77\n\n**Description of the problem including expected versus actual behavior**:\n\nThe slowlogs are hidding and not logging the original IOException thrown in the [slowlog indexing](https://github.com/elastic/elasticsearch/blob/v1.7.5/src/main/java/org/elasticsearch/index/indexing/slowlog/ShardSlowLogIndexingService.java#L160-L168) and [searching](https://github.com/elastic/elasticsearch/blob/v1.7.5/src/main/java/org/elasticsearch/index/search/slowlog/ShardSlowLogSearchService.java#L203-L220) utility classes. The same seems to be happening for 2.3.x.\n\nAs such in slowlogs entries like the following are logged and there are no clues as to what the original request was and why it failed:\n\n```\n[2016-07-25 09:10:38,310][INFO ][index.indexing.slowlog.index] [some_node] [some_index][3] took[7.2s], took_millis[7200], type[some_type], id[12345], routing[12345], source[source[_failed_to_convert_]\n```\n", "comments": [ { "body": "This is a typical case of exception swallowing. For master, the code is in `IndexingSlowLog` line 193. We could easily change this catch to rethrow as an `UncheckedIOException`. However, that would propagate up and cause the indexing request to fail, even though the actual indexing already succeeded. So we could either do that, or log the exception in the slow log itself. I don't know which is better, I've marked this issue for discussion.\n", "created_at": "2016-07-25T15:48:18Z" }, { "body": "Maybe we should log it in the main log file (not the slow log) at warn level? Then we can keep the slow log the way it is. I don't expect this to be especially common so warn level ought to be fine. If it become common we can bump that log level to error as a work around and we can fix the issue that makes it common in the mean time.\n", "created_at": "2016-07-25T15:55:12Z" }, { "body": "I don't think slowlog should cause a request to fail, but we should totally not lose what the problem was and log all its details. The odd part of polluting the slowlog file would be that it may break parsers out there that rely on a fixed format? So I think I agree with Nik to try and log it in the main log file if possible.\n", "created_at": "2016-07-25T15:55:54Z" }, { "body": "Well, one problem here is rethrowing and putting into the general log means we will not have this indexing request in the slow log. I think that is kind of bogus? Propagating to fail the indexing request would have been painful yes, but we would know what the issue is and could fix much faster than it just being hidden away in another log file.\n", "created_at": "2016-07-25T15:58:15Z" }, { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-14T01:52:47Z" }, { "body": "Letting the exception percolate up here will not fail the indexing request. The slow logs are implemented as indexing operation listeners. The post indexing operation loop that invokes such listeners catches all exceptions and logs them at the warn level. After logging, these exceptions percolate no further. Therefore, we can choose to not write the entry to the slow log, but instead carry the exception up to the post index listener loop where it will be logged at the warn level. With it, we carry the entry that would have been entered into the slow log (modulo the failed source conversion). I opened #29043.", "created_at": "2018-03-14T02:05:13Z" } ], "number": 19573, "title": "Don't swallow the original IOException thrown at JSON parsing in slowlogs" }
{ "body": "When converting the source for an indexing request to JSON, the conversion can throw an I/O exception which we swallow and proceed with logging to the slow log. The cause of the I/O exception is lost. This commit changes this behavior and chooses to drop the entry from the slow logs and instead lets an exception percolate up to the indexing operation listener loop. Here, the exception will be caught and logged at the warn level.\r\n\r\nCloses #19573\r\n", "number": 29043, "review_comments": [], "title": "Do not swallow fail to convert exceptions" }
{ "commits": [ { "message": "Do not swallow fail to convert exceptions\n\nWhen converting the source for an indexing request to JSON, the\nconversion can throw an I/O exception which we swallow and proceed with\nlogging to the slow log. The cause of the I/O exception is lost. This\ncommit changes this behavior and chooses to drop the entry from the slow\nlogs and instead lets an exception percolate up to the indexing\noperation listener loop. Here, the exception will be caught and logged\nat the warn level." }, { "message": "Fix test" } ], "files": [ { "diff": "@@ -33,6 +33,8 @@\n import org.elasticsearch.index.shard.ShardId;\n \n import java.io.IOException;\n+import java.io.UncheckedIOException;\n+import java.util.Locale;\n import java.util.concurrent.TimeUnit;\n \n public final class IndexingSlowLog implements IndexingOperationListener {\n@@ -194,6 +196,12 @@ public String toString() {\n sb.append(\", source[\").append(Strings.cleanTruncate(source, maxSourceCharsToLog)).append(\"]\");\n } catch (IOException e) {\n sb.append(\", source[_failed_to_convert_[\").append(e.getMessage()).append(\"]]\");\n+ /*\n+ * We choose to fail to write to the slow log and instead let this percolate up to the post index listener loop where this\n+ * will be logged at the warn level.\n+ */\n+ final String message = String.format(Locale.ROOT, \"failed to convert source for slow log entry [%s]\", sb.toString());\n+ throw new UncheckedIOException(message, e);\n }\n return sb.toString();\n }", "filename": "server/src/main/java/org/elasticsearch/index/IndexingSlowLog.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index;\n \n+import com.fasterxml.jackson.core.JsonParseException;\n import org.apache.lucene.document.NumericDocValuesField;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -34,6 +35,7 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.io.IOException;\n+import java.io.UncheckedIOException;\n \n import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.hasToString;\n@@ -70,9 +72,15 @@ public void testSlowLogParsedDocumentPrinterSourceToLog() throws IOException {\n \"test\", null, null, source, XContentType.JSON, null);\n p = new SlowLogParsedDocumentPrinter(index, pd, 10, true, 3);\n \n- assertThat(p.toString(), containsString(\"_failed_to_convert_[Unrecognized token 'invalid':\"\n+ final UncheckedIOException e = expectThrows(UncheckedIOException.class, p::toString);\n+ assertThat(e, hasToString(containsString(\"_failed_to_convert_[Unrecognized token 'invalid':\"\n + \" was expecting ('true', 'false' or 'null')\\n\"\n- + \" at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper\"));\n+ + \" at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper\")));\n+ assertNotNull(e.getCause());\n+ assertThat(e.getCause(), instanceOf(JsonParseException.class));\n+ assertThat(e.getCause(), hasToString(containsString(\"Unrecognized token 'invalid':\"\n+ + \" was expecting ('true', 'false' or 'null')\\n\"\n+ + \" at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper\")));\n }\n \n public void testReformatSetting() {", "filename": "server/src/test/java/org/elasticsearch/index/IndexingSlowLogTests.java", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): \r\nVersion: 6.1.1, Build: bd92e7f/2017-12-17T20:23:25.338Z, JVM: 1.8.0_151\r\n\r\n**Plugins installed**: \r\n[\"x-pack\"]\r\n\r\n**JVM version** (`java -version`):\r\njava version \"1.8.0_151\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_151-b12)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system):\r\nDarwin MacPro.local 17.3.0 Darwin Kernel Version 17.3.0: Thu Nov 9 18:09:22 PST 2017; root:xnu-4570.31.3~1/RELEASE_X86_64 x86_64\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nI'm unable to use stored scripts to update by query. Elasticsearch is returning `\"lang cannot be specified for stored scripts` error.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Create a stored script\r\n\r\n```\r\nPOST _scripts/dummy_script\r\n{\r\n \"script\": {\r\n \"source\": \"ctx._source.my_field = 8\",\r\n \"lang\": \"painless\"\r\n }\r\n}\r\n```\r\n\r\nPlease note that \"lang\" parameter is obligatory here.\r\n\r\n 2. Index a test document:\r\n\r\n```\r\nPUT test/foo/1\r\n{\r\n \"my_field\": 10,\r\n \"user_id\": 12\r\n}\r\n```\r\n\r\n 3. Try to update by query:\r\n\r\n```\r\nPOST test/_update_by_query\r\n{\r\n \"query\": {\r\n \"term\": {\r\n \"user_id\": 12\r\n }\r\n },\r\n \"script\": {\r\n \"id\": \"dummy_script\"\r\n }\r\n}\r\n```\r\n\r\nResponse:\r\n\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"lang cannot be specified for stored scripts\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"lang cannot be specified for stored scripts\"\r\n },\r\n \"status\": 400\r\n}\r\n```\r\n\r\n**Provide logs (if relevant)**: No logs are emitted.\r\n\r\n", "comments": [ { "body": "Thanks for the report @luizgpsantos! I could reproduce this issue with the provided reproduction scenario on a fresh install of Elasticsearch 6.1.1.\r\n\r\nThis is probably a side-effect of #25610. 
@jdconrad can you please have a look?", "created_at": "2017-12-29T10:32:49Z" }, { "body": "Will take a look at this next week.", "created_at": "2017-12-29T17:13:51Z" }, { "body": "The current workaround that I use is to pass:\r\n```\r\n\"script\": {\r\n \"id\": \"dummy_script\",\r\n \"lang\": null,\r\n}\r\n```", "created_at": "2018-01-07T17:53:57Z" }, { "body": "Hello, I'm trying scripting & _update_by_query on my data but, with \"lang\": null on elastic cloud (6.1.1), I still get the same error (\"lang cannot be specified for stored scripts\").", "created_at": "2018-01-08T17:20:32Z" }, { "body": "Hello, just to tell that the workaround of jughead is working (probably was my fault on last tentative, sorry) and that I succeeded in using it in elasticsearch-py passing python \"lang\": None instead of keyword ‘null’ (see below for whom is interested). \r\nBye.\r\n\r\nes.put_script (id=\"dummy_script\", body={ \"script\": { \"lang\": \"painless\", \"source\": \"ctx._source.xxx = params.param1; …\" } } )\r\n\r\nes.update_by_query (index=dummy_index, body={\"script\": {\"id\": \"dummy_script\", \"lang\": None, \"params\": { \"param1\": value_yyy, …}}, \"query\": {\"term\": {… } } } )", "created_at": "2018-02-07T10:37:13Z" }, { "body": "Hello,\r\nI'm encountering the same bug using NEST and I cannot find a workaround since I don't handle the serialization myself.\r\nIt's blocking the upgrade of my product to ElasticSearch 6.X\r\nMatthias.", "created_at": "2018-02-09T14:32:04Z" }, { "body": "I looked at this with @jdconrad. The bug here is in `RestUpdateByQueryAction.parseScript`. It defaults lang to the default lang, but for stored scripts this should be null. This code should really use `Script.parse`.\r\n\r\n/cc @nik9000 ", "created_at": "2018-03-01T23:58:39Z" }, { "body": "I am using ES 6.2.3, \r\n\r\n```json\r\n{\r\n \"script\": {\r\n \"id\": \"merge-asset-profiles\",\r\n \"lang\": null,\r\n \"params\": {\r\n \"updated_at\": 1527048588\r\n }\r\n }\r\n}\r\n```\r\n\r\nis not working. I got\r\n```json\r\n{\r\n \"error\": {\r\n \"reason\": \"[script] lang doesn't support values of type: VALUE_NULL\",\r\n \"root_cause\": [\r\n {\r\n \"reason\": \"[script] lang doesn't support values of type: VALUE_NULL\",\r\n \"type\": \"parsing_exception\"\r\n }\r\n ],\r\n \"type\": \"parsing_exception\"\r\n },\r\n \"status\": 400\r\n}\r\n```", "created_at": "2018-05-23T04:07:16Z" } ], "number": 28002, "title": "Unable to use stored scripts in _update_by_query" }
{ "body": "This changes the parsing logic for stored scripts in update by query to match the parsing logic for scripts in general Elasticsearch. Ultimately, this should be changed to use Script.parseScript, but for now this is a much simpler fix to get our users up and going again.\r\n\r\nCloses #28002\r\n", "number": 29039, "review_comments": [], "title": "Fix Parsing Bug with Update By Query for Stored Scripts" }
{ "commits": [ { "message": "Fixes a parsing bug where stored scripts could not be used in update by\nquery for reindex.\n\nCloses #28002" }, { "message": "Merge branch 'master' into fix_update_stored" } ], "files": [ { "diff": "@@ -86,7 +86,7 @@ private static Script parseScript(Object config) {\n Map<String,Object> configMap = (Map<String, Object>) config;\n String script = null;\n ScriptType type = null;\n- String lang = DEFAULT_SCRIPT_LANG;\n+ String lang = null;\n Map<String, Object> params = Collections.emptyMap();\n for (Iterator<Map.Entry<String, Object>> itr = configMap.entrySet().iterator(); itr.hasNext();) {\n Map.Entry<String, Object> entry = itr.next();\n@@ -126,7 +126,15 @@ private static Script parseScript(Object config) {\n }\n assert type != null : \"if script is not null, type should definitely not be null\";\n \n- return new Script(type, lang, script, params);\n+ if (type == ScriptType.STORED) {\n+ if (lang != null) {\n+ throw new IllegalArgumentException(\"lang cannot be specified for stored scripts\");\n+ }\n+\n+ return new Script(type, null, script, null, params);\n+ } else {\n+ return new Script(type, lang == null ? DEFAULT_SCRIPT_LANG : lang, script, params);\n+ }\n } else {\n throw new IllegalArgumentException(\"Script value should be a String or a Map\");\n }", "filename": "modules/reindex/src/main/java/org/elasticsearch/index/reindex/RestUpdateByQueryAction.java", "status": "modified" }, { "diff": "@@ -340,3 +340,84 @@\n source: if (ctx._source.user == \"kimchy\") {ctx.op = \"index\"} else {ctx.op = \"junk\"}\n \n - match: { error.reason: 'Operation type [junk] not allowed, only [noop, index, delete] are allowed' }\n+\n+---\n+\"Update all docs with one deletion and one noop using a stored script\":\n+ - do:\n+ index:\n+ index: twitter\n+ type: tweet\n+ id: 1\n+ body: { \"level\": 9, \"last_updated\": \"2016-01-01T12:10:30Z\" }\n+ - do:\n+ index:\n+ index: twitter\n+ type: tweet\n+ id: 2\n+ body: { \"level\": 10, \"last_updated\": \"2016-01-01T12:10:30Z\" }\n+ - do:\n+ index:\n+ index: twitter\n+ type: tweet\n+ id: 3\n+ body: { \"level\": 11, \"last_updated\": \"2016-01-01T12:10:30Z\" }\n+ - do:\n+ index:\n+ index: twitter\n+ type: tweet\n+ id: 4\n+ body: { \"level\": 12, \"last_updated\": \"2016-01-01T12:10:30Z\" }\n+ - do:\n+ indices.refresh: {}\n+ - do:\n+ put_script:\n+ id: \"my_update_script\"\n+ body: { \"script\": {\"lang\": \"painless\",\n+ \"source\": \"int choice = ctx._source.level % 3;\n+ if (choice == 0) {\n+ ctx._source.last_updated = '2016-01-02T00:00:00Z';\n+ } else if (choice == 1) {\n+ ctx.op = 'noop';\n+ } else {\n+ ctx.op = 'delete';\n+ }\" } }\n+ - match: { acknowledged: true }\n+\n+ - do:\n+ update_by_query:\n+ refresh: true\n+ index: twitter\n+ body:\n+ script:\n+ id: \"my_update_script\"\n+\n+ - match: {updated: 2}\n+ - match: {deleted: 1}\n+ - match: {noops: 1}\n+\n+ - do:\n+ search:\n+ index: twitter\n+ body:\n+ query:\n+ match:\n+ last_updated: \"2016-01-02T00:00:00Z\"\n+ - match: { hits.total: 2 }\n+\n+ - do:\n+ search:\n+ index: twitter\n+ body:\n+ query:\n+ match:\n+ last_updated: \"2016-01-01T12:10:30Z\"\n+ - match: { hits.total: 1 }\n+\n+ - do:\n+ search:\n+ index: twitter\n+ body:\n+ query:\n+ term:\n+ level: 11\n+ - match: { hits.total: 0 }", "filename": "qa/smoke-test-reindex-with-all-modules/src/test/resources/rest-api-spec/test/update_by_query/10_script.yml", "status": "modified" } ] }
{ "body": "<!-- Bug report -->\r\n\r\n**Elasticsearch version**: 2.4.1 (docker)\r\n\r\n**Plugins installed**: [ license, marvel-agent ]\r\n\r\n**JVM version** (`java -version`): java-8-openjdk-amd64 (openjdk:8-jre docker)\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): Linux HOSTNAME 4.4.0-34-generic #53~14.04.1-Ubuntu SMP Wed Jul 27 16:56:40 UTC 2016 x86_64 GNU/Linux\r\n\r\n**Description of the problem including expected versus actual behavior**: Elasticsearch is periodically (and seemingly randomly) rejecting log messages due to a mapper parsing exception with a nested pattern syntax expression. See below for a sample log message. There is nothing special in our setup. We have separated master/client/data nodes. We have two index templates, one which matches all indexes \"*\" at priority 0 and one for each dedicated index, e.g. \"storage-*\" at priority 1. Each of those index templates defines some index properties (that don't overlap), as well as some mapping properties and dynamic mapping templates, e.g.:\r\n```\r\n{\r\n \"long_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"long\"\r\n },\r\n \"match_mapping_type\": \"long\",\r\n \"match\": \"*\"\r\n }\r\n}\r\n```\r\n\r\nThe issue only affects *some* logs in each index. The logs it affects don't have anything special in their content either. The error appears to be erroneous, but any insight would be greatly appreciated.\r\n\r\n**Steps to reproduce**:\r\n\r\n 1. Run elasticsearch cluster\r\n 2. Index data\r\n 3. See problem\r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2017-05-17 17:43:51,725][DEBUG][action.bulk ] [d-d08p03r05u05-t0g2] [storage-2017.05.17][0] failed to execute bulk item (index) index {[storage-2017.05.17][events][AVwXgkKowptZJ_-jLodQ], source[REDACTED_JSON_SOURCE]}\r\nMapperParsingException[failed to parse]; nested: PatternSyntaxException[Dangling meta character '*' near index 0\r\n```", "comments": [ { "body": "hey, can you provide a full stacktrace from your logs?", "created_at": "2017-05-18T07:59:46Z" }, { "body": "...plus the full index templates that you're using, and some examples of failed docs", "created_at": "2017-05-18T08:48:22Z" }, { "body": "Also, are you using an ingest pipeline?", "created_at": "2017-05-18T10:09:18Z" }, { "body": "With some values obscured, here's the full stack trace with the document data:\r\n\r\n```\r\n[2017-05-17 23:37:17,348][DEBUG][action.bulk ] [d-d08p03r05u05-t0g2] [storage-2017.05.17][0] failed to execute bulk item (index) index {[storage-2017.05.17][events][AVwYxdUg3Xm6BAlG1BiH], source[{\"@timestamp\":\"2017-05-17T23:37:17.216887+00:00\",\"syslog_host\":\"somehost\",\"syslog_program\":\"cleaner_tuna\",\"syslog_severity\":\"info\",\"syslog_facility\":\"local7\",\"syslog_tag\":\"cleaner_tuna[1]:\",\"syslog_region\":\"someregion\",\"noidx_rawmsg\":\"<190>2017-05-17T23:37:17.216887+00:00 somehost cleaner_tuna[1]: 
@cee:{\\\"egid\\\":999,\\\"eid\\\":999,\\\"env\\\":\\\"production\\\",\\\"host\\\":\\\"ephemeralhost\\\",\\\"intent\\\":{\\\"Intent\\\":{\\\"DeleteSnapshot\\\":{\\\"snapshot\\\":{\\\"allocation_id\\\":\\\"someallocationid\\\",\\\"backend\\\":{\\\"Details\\\":{\\\"Ceph\\\":{\\\"pool\\\":\\\"rbd\\\"}},\\\"cluster_id\\\":\\\"someclusterid\\\"},\\\"created_at\\\":\\\"2017-03-23T22:20:12Z\\\",\\\"id\\\":\\\"someid\\\",\\\"lifecycle_event_count\\\":1,\\\"name\\\":\\\"somename\\\",\\\"old_ceph_details\\\":{\\\"cluster\\\":\\\"somecluster\\\",\\\"pool\\\":\\\"rbd\\\"},\\\"parent_size_bytes\\\":107374182400,\\\"pending_events\\\":[{\\\"Event\\\":{\\\"SnapshotCreate\\\":{\\\"created_at\\\":\\\"2017-03-23T22:20:12Z\\\",\\\"name\\\":\\\"somename\\\"}},\\\"parent_id\\\":\\\"someparentid\\\",\\\"user_id\\\":117102}],\\\"user_id\\\":123456}}},\\\"claimed_at\\\":\\\"2017-05-17T23:37:17Z\\\",\\\"created_at\\\":\\\"2017-03-23T22:22:13Z\\\",\\\"delay_secs\\\":120,\\\"id\\\":\\\"someid\\\",\\\"owner\\\":{\\\"name\\\":\\\"somename\\\"}},\\\"level\\\":\\\"info\\\",\\\"msg\\\":\\\"releasing intent\\\",\\\"pid\\\":1,\\\"pname\\\":\\\"\\/cleanerd\\\",\\\"time\\\":\\\"2017-05-17T23:37:17Z\\\",\\\"version\\\":\\\"d8c41ea964\\\"}\", \"egid\": 999, \"eid\": 999, \"env\": \"production\", \"host\": \"cleaner-2795265912-tqeu6\", \"intent\": { \"Intent\": { \"DeleteSnapshot\": { \"snapshot\": { \"allocation_id\": \"someallocationid\", \"backend\": { \"Details\": { \"Ceph\": { \"pool\": \"rbd\" } }, \"cluster_id\": \"fra1-prod\" }, \"created_at\": \"2017-03-23T22:20:12Z\", \"id\": \"someid\", \"lifecycle_event_count\": 1, \"name\": \"volume-fra1-test-snapshot-1\", \"old_ceph_details\": { \"cluster\": \"fra1-prod\", \"pool\": \"rbd\" }, \"parent_size_bytes\": 107374182400, \"pending_events\": [ { \"Event\": { \"SnapshotCreate\": { \"created_at\": \"2017-03-23T22:20:12Z\", \"name\": \"somename\" } }, \"parent_id\": \"someparentid\", \"user_id\": 12345 } ], \"user_id\": 12345 } } }, \"claimed_at\": \"2017-05-17T23:37:17Z\", \"created_at\": \"2017-03-23T22:22:13Z\", \"delay_secs\": 120, \"id\": \"someid\", \"owner\": { \"name\": \"somename\" } }, \"level\": \"info\", \"msg\": \"releasing intent\", \"pid\": 1, \"pname\": \"\\/cleanerd\", \"time\": \"2017-05-17T23:37:17Z\", \"version\": \"d8c41ea964\" }]}\r\nMapperParsingException[failed to parse]; nested: PatternSyntaxException[Dangling meta character '*' near index 0\r\n*\r\n^];\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:156)\r\n\tat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)\r\n\tat org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:529)\r\n\tat org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:506)\r\n\tat org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)\r\n\tat org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)\r\n\tat org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:327)\r\n\tat org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:120)\r\n\tat org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)\r\n\tat 
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)\r\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)\r\n\tat org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n\tat java.lang.Thread.run(Thread.java:745)\r\nCaused by: java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0\r\n*\r\n^\r\n\tat java.util.regex.Pattern.error(Pattern.java:1955)\r\n\tat java.util.regex.Pattern.sequence(Pattern.java:2123)\r\n\tat java.util.regex.Pattern.expr(Pattern.java:1996)\r\n\tat java.util.regex.Pattern.compile(Pattern.java:1696)\r\n\tat java.util.regex.Pattern.<init>(Pattern.java:1351)\r\n\tat java.util.regex.Pattern.compile(Pattern.java:1028)\r\n\tat java.util.regex.Pattern.matches(Pattern.java:1133)\r\n\tat java.lang.String.matches(String.java:2121)\r\n\tat org.elasticsearch.index.mapper.object.DynamicTemplate.patternMatch(DynamicTemplate.java:163)\r\n\tat org.elasticsearch.index.mapper.object.DynamicTemplate.match(DynamicTemplate.java:131)\r\n\tat org.elasticsearch.index.mapper.object.RootObjectMapper.findTemplate(RootObjectMapper.java:263)\r\n\tat org.elasticsearch.index.mapper.object.RootObjectMapper.findTemplateBuilder(RootObjectMapper.java:248)\r\n\tat org.elasticsearch.index.mapper.object.RootObjectMapper.findTemplateBuilder(RootObjectMapper.java:244)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.createBuilderFromDynamicValue(DocumentParser.java:557)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseDynamicValue(DocumentParser.java:619)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:444)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:264)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:740)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:354)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:740)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:354)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:740)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:354)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)\r\n\tat 
org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(DocumentParser.java:740)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:354)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)\r\n\tat org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)\r\n\t... 18 more\r\n```\r\n\r\nOur setup is has queue-backed rsyslog instances writing bulk indexing requests to ES. The full index templates are as follows:\r\n\r\n```\r\n{\r\n \"template\": \"*\",\r\n \"order\": 0,\r\n \"settings\": {\r\n \"index\": {\r\n \"refresh_interval\": \"5s\"\r\n }\r\n },\r\n \"mappings\": {\r\n \"_default_\": {\r\n \"properties\": {\r\n \"geoip\": {\r\n \"properties\": {\r\n \"longitude\": {\r\n \"doc_values\": true,\r\n \"type\": \"float\"\r\n },\r\n \"latitude\": {\r\n \"doc_values\": true,\r\n \"type\": \"float\"\r\n },\r\n \"location\": {\r\n \"doc_values\": true,\r\n \"type\": \"geo_point\"\r\n },\r\n \"ip\": {\r\n \"doc_values\": true,\r\n \"type\": \"ip\"\r\n }\r\n },\r\n \"dynamic\": true,\r\n \"type\": \"object\"\r\n },\r\n \"@version\": {\r\n \"doc_values\": true,\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"@timestamp\": {\r\n \"doc_values\": true,\r\n \"type\": \"date\"\r\n }\r\n },\r\n \"dynamic_templates\": [\r\n {\r\n \"numeric_fields\": {\r\n \"mapping\": {\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"double\"\r\n },\r\n \"match\": \"(?i)(.*[_-])?(code|size|length|count|seconds?|sec|us|ms|s)\",\r\n \"unmatch\": \"grpc_code\",\r\n \"match_pattern\": \"regex\"\r\n }\r\n },\r\n {\r\n \"date_fields\": {\r\n \"mapping\": {\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"date\"\r\n },\r\n \"path_match\": \"(?i)(.*[_-])?(timestamp|time|at)\",\r\n \"match_pattern\": \"regex\"\r\n }\r\n },\r\n {\r\n \"large_text_fields\": {\r\n \"mapping\": {\r\n \"index\": \"no\",\r\n \"type\": \"string\"\r\n },\r\n \"path_match\": \"(?i)(noidx.*)\",\r\n \"match_pattern\": \"regex\"\r\n }\r\n },\r\n {\r\n \"message_field\": {\r\n \"mapping\": {\r\n \"omit_norms\": true,\r\n \"index\": \"analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"match_mapping_type\": \"string\",\r\n \"match\": \"message\"\r\n }\r\n },\r\n {\r\n \"id_field\": {\r\n \"mapping\": {\r\n \"fields\": {\r\n \"raw\": {\r\n \"ignore_above\": 256,\r\n \"doc_values\": true,\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"string\"\r\n }\r\n },\r\n \"omit_norms\": true,\r\n \"index\": \"analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"type\": \"string\",\r\n \"match\": \"*_id\"\r\n }\r\n },\r\n {\r\n \"process_id_field\": {\r\n \"mapping\": {\r\n \"fields\": {\r\n \"raw\": {\r\n \"ignore_above\": 256,\r\n \"doc_values\": true,\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"string\"\r\n }\r\n },\r\n \"omit_norms\": true,\r\n \"index\": \"analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"type\": \"string\",\r\n \"match\": \"pid\"\r\n }\r\n },\r\n {\r\n \"req_params_image_field\": {\r\n \"mapping\": {\r\n \"fields\": {\r\n \"raw\": {\r\n \"ignore_above\": 256,\r\n \"doc_values\": true,\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"string\"\r\n }\r\n },\r\n \"omit_norms\": true,\r\n \"index\": \"analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"type\": \"string\",\r\n \"match\": \"req_params_image\"\r\n }\r\n },\r\n {\r\n \"string_fields\": {\r\n \"mapping\": {\r\n \"fields\": {\r\n \"raw\": {\r\n \"ignore_above\": 
256,\r\n \"doc_values\": true,\r\n \"index\": \"not_analyzed\",\r\n \"type\": \"string\"\r\n }\r\n },\r\n \"omit_norms\": true,\r\n \"index\": \"analyzed\",\r\n \"type\": \"string\"\r\n },\r\n \"match_mapping_type\": \"string\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"float_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"float\"\r\n },\r\n \"match_mapping_type\": \"float\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"double_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"double\"\r\n },\r\n \"match_mapping_type\": \"double\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"byte_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"byte\"\r\n },\r\n \"match_mapping_type\": \"byte\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"short_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"short\"\r\n },\r\n \"match_mapping_type\": \"short\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"integer_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"integer\"\r\n },\r\n \"match_mapping_type\": \"integer\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"long_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"long\"\r\n },\r\n \"match_mapping_type\": \"long\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"date_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"date\"\r\n },\r\n \"match_mapping_type\": \"date\",\r\n \"match\": \"*\"\r\n }\r\n },\r\n {\r\n \"geo_point_fields\": {\r\n \"mapping\": {\r\n \"doc_values\": true,\r\n \"type\": \"geo_point\"\r\n },\r\n \"match_mapping_type\": \"geo_point\",\r\n \"match\": \"*\"\r\n }\r\n }\r\n ],\r\n \"_all\": {\r\n \"omit_norms\": true,\r\n \"enabled\": true\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\n```\r\n{\r\n \"template\": \"storage-*\",\r\n \"order\" : 1,\r\n \"settings\": {\r\n \"number_of_shards\" : \"1\",\r\n \"number_of_replicas\": \"2\"\r\n }\r\n}\r\n```", "created_at": "2017-05-18T17:49:58Z" }, { "body": "Do you have any examples of failed documents?", "created_at": "2017-05-24T11:03:09Z" }, { "body": "There was an example in the exception trace:\r\n```\r\n{\r\n \"@timestamp\": \"2017-05-17T23:37:17.216887+00:00\",\r\n \"syslog_host\": \"somehost\",\r\n \"syslog_program\": \"cleaner_tuna\",\r\n \"syslog_severity\": \"info\",\r\n \"syslog_facility\": \"local7\",\r\n \"syslog_tag\": \"cleaner_tuna[1]:\",\r\n \"syslog_region\": \"someregion\",\r\n \"noidx_rawmsg\": \"<190>2017-05-17T23:37:17.216887+00:00 somehost cleaner_tuna[1]: 
@cee:{\\\"egid\\\":999,\\\"eid\\\":999,\\\"env\\\":\\\"production\\\",\\\"host\\\":\\\"ephemeralhost\\\",\\\"intent\\\":{\\\"Intent\\\":{\\\"DeleteSnapshot\\\":{\\\"snapshot\\\":{\\\"allocation_id\\\":\\\"someallocationid\\\",\\\"backend\\\":{\\\"Details\\\":{\\\"Ceph\\\":{\\\"pool\\\":\\\"rbd\\\"}},\\\"cluster_id\\\":\\\"someclusterid\\\"},\\\"created_at\\\":\\\"2017-03-23T22:20:12Z\\\",\\\"id\\\":\\\"someid\\\",\\\"lifecycle_event_count\\\":1,\\\"name\\\":\\\"somename\\\",\\\"old_ceph_details\\\":{\\\"cluster\\\":\\\"somecluster\\\",\\\"pool\\\":\\\"rbd\\\"},\\\"parent_size_bytes\\\":107374182400,\\\"pending_events\\\":[{\\\"Event\\\":{\\\"SnapshotCreate\\\":{\\\"created_at\\\":\\\"2017-03-23T22:20:12Z\\\",\\\"name\\\":\\\"somename\\\"}},\\\"parent_id\\\":\\\"someparentid\\\",\\\"user_id\\\":117102}],\\\"user_id\\\":123456}}},\\\"claimed_at\\\":\\\"2017-05-17T23:37:17Z\\\",\\\"created_at\\\":\\\"2017-03-23T22:22:13Z\\\",\\\"delay_secs\\\":120,\\\"id\\\":\\\"someid\\\",\\\"owner\\\":{\\\"name\\\":\\\"somename\\\"}},\\\"level\\\":\\\"info\\\",\\\"msg\\\":\\\"releasing intent\\\",\\\"pid\\\":1,\\\"pname\\\":\\\"\\/cleanerd\\\",\\\"time\\\":\\\"2017-05-17T23:37:17Z\\\",\\\"version\\\":\\\"d8c41ea964\\\"}\",\r\n \"egid\": 999,\r\n \"eid\": 999,\r\n \"env\": \"production\",\r\n \"host\": \"cleaner-2795265912-tqeu6\",\r\n \"intent\": {\r\n \"Intent\": {\r\n \"DeleteSnapshot\": {\r\n \"snapshot\": {\r\n \"allocation_id\": \"someallocationid\",\r\n \"backend\": {\r\n \"Details\": {\r\n \"Ceph\": {\r\n \"pool\": \"rbd\"\r\n }\r\n },\r\n \"cluster_id\": \"fra1-prod\"\r\n },\r\n \"created_at\": \"2017-03-23T22:20:12Z\",\r\n \"id\": \"someid\",\r\n \"lifecycle_event_count\": 1,\r\n \"name\": \"somename\",\r\n \"old_ceph_details\": {\r\n \"cluster\": \"fra1-prod\",\r\n \"pool\": \"rbd\"\r\n },\r\n \"parent_size_bytes\": 107374182400,\r\n \"pending_events\": [\r\n {\r\n \"Event\": {\r\n \"SnapshotCreate\": {\r\n \"created_at\": \"2017-03-23T22:20:12Z\",\r\n \"name\": \"somename\"\r\n }\r\n },\r\n \"parent_id\": \"someparentid\",\r\n \"user_id\": 12345\r\n }\r\n ],\r\n \"user_id\": 12345\r\n }\r\n }\r\n },\r\n \"claimed_at\": \"2017-05-17T23:37:17Z\",\r\n \"created_at\": \"2017-03-23T22:22:13Z\",\r\n \"delay_secs\": 120,\r\n \"id\": \"someid\",\r\n \"owner\": {\r\n \"name\": \"somename\"\r\n }\r\n },\r\n \"level\": \"info\",\r\n \"msg\": \"releasing intent\",\r\n \"pid\": 1,\r\n \"pname\": \"\\/cleanerd\",\r\n \"time\": \"2017-05-17T23:37:17Z\",\r\n \"version\": \"d8c41ea964\"\r\n}\r\n```", "created_at": "2017-05-24T11:06:49Z" }, { "body": "sorry, didn't see that\r\n", "created_at": "2017-05-24T11:07:33Z" }, { "body": "Hmmm nothing obvious that I can see. Your regexes should use `[_\\-]` instead of `[_-]`, but I doubt that's the issue. It doesn't replicate for me, but you said it happens only sometimes. I wonder if a simple pattern like `*` is being parsed as a regex somewhere?\r\n\r\nHave you been able to narrow it down at all? eg removing parts of the template to see if the problem goes away? I'd also be interested to know if upgrading ES or Java helps.\r\n\r\nI don't have any ideas - I'll leave it for somebody else to investigate.", "created_at": "2017-05-24T11:35:04Z" }, { "body": "This is our production cluster ingesting about 2TB of log data per day, with ~40TB total data. We don't have a lot of freedom to play around with arbitrary tests to see if the issue goes away, when it's seemingly sporadic as is. 
We're already on the 2.4.1 series running in docker, so I could bump to 2.4.4 but I don't have any reason to believe that that would fix the underlying problem.\r\n\r\nAt this point, we don't have to operational capacity to migrate to 5.x, so I was hoping the issue would be pretty obvious from the stack trace.", "created_at": "2017-05-24T12:08:33Z" }, { "body": "I don't think 2.4.4 would fix the problem either. I suspect @clintongormley is right that a simple pattern is sometimes parsed as a regex even though it shouldn't. The fact that you mentioned that this issue is sporadic also leads me to thinking that it might be specific to one node of the cluster.\r\n\r\nI will close this issue and improve validation of the match pattern for now: #29013. Let's revisit if/when someone reports this issue with a newer version. We will have more information to understand what is happening.", "created_at": "2018-03-13T13:37:58Z" } ], "number": 24749, "title": "Dangling meta character '*' near index 0" }
{ "body": "Today you would only get these errors at index time.\r\n\r\nRelates #24749", "number": 29013, "review_comments": [], "title": "Validate regular expressions in dynamic templates." }
{ "commits": [ { "message": "Validate regular expressions in dynamic templates.\n\nToday you would only get these errors at index time.\n\nRelates #24749" }, { "message": "Update DynamicTemplateTests.java" } ], "files": [ { "diff": "@@ -216,7 +216,24 @@ public static DynamicTemplate parse(String name, Map<String, Object> conf,\n }\n }\n }\n- return new DynamicTemplate(name, pathMatch, pathUnmatch, match, unmatch, xcontentFieldType, MatchType.fromString(matchPattern), mapping);\n+\n+ final MatchType matchType = MatchType.fromString(matchPattern);\n+\n+ if (indexVersionCreated.onOrAfter(Version.V_6_3_0)) {\n+ // Validate that the pattern\n+ for (String regex : new String[] { pathMatch, match, pathUnmatch, unmatch }) {\n+ if (regex == null) {\n+ continue;\n+ }\n+ try {\n+ matchType.matches(regex, \"\");\n+ } catch (IllegalArgumentException e) {\n+ throw new IllegalArgumentException(\"Pattern [\" + regex + \"] of type [\" + matchType + \"] is invalid. Cannot create dynamic template [\" + name + \"].\", e);\n+ }\n+ }\n+ }\n+\n+ return new DynamicTemplate(name, pathMatch, pathUnmatch, match, unmatch, xcontentFieldType, matchType, mapping);\n }\n \n private final String name;", "filename": "server/src/main/java/org/elasticsearch/index/mapper/DynamicTemplate.java", "status": "modified" }, { "diff": "@@ -61,6 +61,19 @@ public void testParseUnknownMatchType() {\n e.getMessage());\n }\n \n+ public void testParseInvalidRegex() {\n+ for (String param : new String[] { \"path_match\", \"match\", \"path_unmatch\", \"unmatch\" }) {\n+ Map<String, Object> templateDef = new HashMap<>();\n+ templateDef.put(\"match\", \"foo\");\n+ templateDef.put(param, \"*a\");\n+ templateDef.put(\"match_pattern\", \"regex\");\n+ templateDef.put(\"mapping\", Collections.singletonMap(\"store\", true));\n+ IllegalArgumentException e = expectThrows(IllegalArgumentException.class,\n+ () -> DynamicTemplate.parse(\"my_template\", templateDef, Version.V_6_3_0));\n+ assertEquals(\"Pattern [*a] of type [regex] is invalid. Cannot create dynamic template [my_template].\", e.getMessage());\n+ }\n+ }\n+\n public void testMatchAllTemplate() {\n Map<String, Object> templateDef = new HashMap<>();\n templateDef.put(\"match_mapping_type\", \"*\");", "filename": "server/src/test/java/org/elasticsearch/index/mapper/DynamicTemplateTests.java", "status": "modified" } ] }
{ "body": "Causes issue if you try to run a cron job as the ES user. Issue on centos 6, package from the repo. \n\n```\n[centos@ip-10-233-237-10 ~]$ sudo su - elasticsearch -s /bin/bash\nsu: warning: cannot change directory to /home/elasticsearch: No such file or directory\n\n-bash-4.1$ grep elastic /etc/passwd\nelasticsearch:x:498:498:elasticsearch user:/home/elasticsearch:/sbin/nologin\n\n-bash-4.1$ ls -l /home/\ntotal 4\ndrwx------. 3 centos centos 4096 Nov 2 20:23 centos\n```\n", "comments": [ { "body": "The value of HOME used when creating the elasticsearch user should probably be set to something other than /home/elasticsearch that exists and has executable permissions for the elasticsearch user.\n\nIn the interim, you can just set HOME to something else, like `HOME=/` to get your cron jobs running again.\n", "created_at": "2015-11-05T00:44:23Z" }, { "body": "I just have the puppet role create /home/elasticseach. Its a packaging bug, creating a user with the wrong home directory set. Either create the home directory, or like you said, set home to /var/lib/elasticsearch or whatever.\n", "created_at": "2015-11-05T02:12:30Z" }, { "body": "On 2.0 it doesn't look like we're setting the home directory, no? https://github.com/elastic/elasticsearch/blob/2.0/distribution/src/main/packaging/scripts/preinst#L67\n\nIs this user not a hangover from a previous installation?\n", "created_at": "2015-11-08T23:10:08Z" }, { "body": "If we don't set the home directory, I think it defaults to whatever the base home directory is in /etc/default/useradd plus the username. So usually this ends up being `/home/username`, even for system users.\n", "created_at": "2015-11-08T23:15:34Z" }, { "body": "@clintongormley I don't think that the file has changed since 1.6 at least https://github.com/elastic/elasticsearch/blob/1.6/src/packaging/common/scripts/preinst#L67\n\nBefore that, it was using the specific scripts: https://github.com/elastic/elasticsearch/blob/1.4/src/rpm/scripts/preinstall\n\n---\n\nI went ahead and installed ES 1.4.3, 1.7.3, and ES 2.0.0 (in reverse order) onto a CentOS 7 VM via `yum`. None of them created a `/home/elasticsearch` entry. I cannot find where we may have created it at some point. Perhaps we never did?\n", "created_at": "2015-11-10T18:08:29Z" }, { "body": "> @clintongormley I don't think that the file has changed since 1.6 at least https://github.com/elastic/elasticsearch/blob/1.6/src/packaging/common/scripts/preinst#L67\n\nIn ES 1.6 an effort has been made to unify the behaviour of package & install scripts among the different Linux distributions and I think the change has been made at this time. Elasticsearch does not need any home directory to work so it is not created by the scripts. \n\nDepending of the distribution, `su -` does a bunch of things and among them it tries to change directory to user's home which does not exist... You're not forced to `su -` in order to run a cron job.\n", "created_at": "2015-11-10T21:52:33Z" }, { "body": "I don't know what the original version of ES was installed on that cluster, but somewhere along the line /home/elasticsearch was created. It was a cron job failing on a redeploy that kicked out the message. \n\nIf it's not expected to create it, then I guess it's not a bug. If I look at most other \"system type\" users in RHEL/CentOS they normally set the homedir to / in the passwd file, or in the case of some services that store a lot of data, their var dir. Debian seems to just create a /home/USER dir for non data storing services. 
\n\nTLDR; Seems weird to create a user with a home directory that doesn't exist. I'd vote either create the dir (to be consistent with debian), or point it to / to be consistent RHEL. \n", "created_at": "2015-11-10T22:43:03Z" }, { "body": "/home/elasticsearch is also useful to house .java.policy files as discussed here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting-security.html#_customising_the_classloader_whitelist\n", "created_at": "2016-07-07T00:54:25Z" }, { "body": "Just extend the `useradd` command by `-d /usr/share/elasticsearch`, please.\n", "created_at": "2016-09-21T10:38:19Z" }, { "body": "Guys are you planning first anniversary celebration because it is almost 1 year old!?\nI have checked latest version 5.0.0 rpm and it is still making mess in user configuration.\nIs this one liner fix so time consuming?\n\nThe bug #20599 which was closed as duplicate was actually better titled because the problem is not that home directory is not being created but because the correct one is undefined during user creation in rpm. JakeFromTheDark solution is correct. The other is similar but with /var/lib/elasticsearch set as home dir.\n\nPretty please with a sugar on top fix it :)\n", "created_at": "2016-10-27T08:12:08Z" }, { "body": "The issue is still prevalent in Elasticsearch 6.x where the `elasticsearch` user is created with a non-existing home directory `/home/elasticsearch`. \r\n\r\nConfirmed with package `elasticsearch-6.3.0-1.noarch` on CentOS 7. Please re-open this issue. ", "created_at": "2018-06-25T12:53:01Z" }, { "body": "@prupert It was an intentional decision to explicitly *not* create the home dir, as `elasticsearch` is a system user. Please don't run cron with the `elasticsearch` user. ", "created_at": "2018-06-25T14:29:03Z" }, { "body": "Although I'm not sure setting an invalid home directory is the _right way_ to prevent/nudge users not to run cron with the `elasticsearch` user, I do understand now why you wouldn't want that happening. \r\n\r\nThank you for your reply. I will discontinue using the `elasticsearch` user for our `curator` cron jobs. ", "created_at": "2018-06-25T14:44:33Z" } ], "number": 14453, "title": "ES 2.0 rpm does not create /home/elasticsearch" }
{ "body": "This commit adds setting the homedir for the elasticsearch user to the\r\nadduser command in the packaging preinstall script to a non-existent directory.\r\nWhile the elasticsearch user is a system user, creation of the user still adds\r\nthe default entry in /etc/passwd for the homedir. With this change the entry has\r\nan explicitly non-existent directory.\r\n\r\ncloses #14453", "number": 29007, "review_comments": [ { "body": "Why `/usr/share/elasticsearch`? Is `/var/lib/elasticsearch` more appropriate?", "created_at": "2018-03-14T05:53:41Z" }, { "body": "I chose `/usr/share/elasticsearch` because it is ES_HOME, and it is not expected to be written to. I'm open to being persuaded either way.", "created_at": "2018-03-14T09:18:26Z" } ], "title": "Packaging: Set elasticsearch user to have non-existent homedir" }
{ "commits": [ { "message": "Packaging: Set elasticsearch homedir\n\nThis commit adds setting the homedir for the elasticsearch user to the\nadduser command in the packaging preinstall script. While the\nelasticsearch user is a system user, it is sometimes conventient to have\nan existing homedir (even if it is not writeable). For example, running\ncron as the elasticsearch user will try to change dir to the homedir.\n\ncloses #14453" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "fix homedir option" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "change to nonexistent directory" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "Merge branch 'master' into es_homedir" }, { "message": "Merge branch 'master' into es_homedir" } ], "files": [ { "diff": "@@ -27,6 +27,7 @@ case \"$1\" in\n adduser --quiet \\\n --system \\\n --no-create-home \\\n+ --home /nonexistent \\\n --ingroup elasticsearch \\\n --disabled-password \\\n --shell /bin/false \\\n@@ -50,8 +51,9 @@ case \"$1\" in\n # Create elasticsearch user if not existing\n if ! id elasticsearch > /dev/null 2>&1 ; then\n echo -n \"Creating elasticsearch user...\"\n- useradd -r \\\n- -M \\\n+ useradd --system \\\n+ --no-create-home \\\n+ --home-dir /nonexistent \\\n --gid elasticsearch \\\n --shell /sbin/nologin \\\n --comment \"elasticsearch user\" \\", "filename": "distribution/packages/src/common/scripts/preinst", "status": "modified" }, { "diff": "@@ -88,6 +88,8 @@ verify_package_installation() {\n id elasticsearch\n \n getent group elasticsearch\n+ # homedir is set in /etc/passwd but to a non existent directory\n+ assert_file_not_exist $(getent passwd elasticsearch | cut -d: -f6)\n \n assert_file \"$ESHOME\" d root root 755\n assert_file \"$ESHOME/bin\" d root root 755", "filename": "qa/vagrant/src/test/resources/packaging/utils/packages.bash", "status": "modified" } ] }
{ "body": "GeoIP cannot clean up the database files because they are still being used by another test?\r\n\r\nbuild failure: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-windows-compatibility/1368/console\r\n\r\n```\r\n20:49:52 1> [2018-03-12T21:49:49,082][INFO ][o.e.i.g.GeoIpProcessorFactoryTests] [testBuildWithCountryDbAndAsnFields]: before test\r\n20:49:52 \r\n20:49:52 1> [2018-03-12T21:49:49,083][INFO ][o.e.i.g.GeoIpProcessorFactoryTests] [testBuildWithCountryDbAndAsnFields]: after test\r\n20:49:52 \r\n20:49:52 ERROR 0.00s J1 | GeoIpProcessorFactoryTests (suite) <<< FAILURES!\r\n20:49:52 \r\n20:49:52 2> NOTE: All tests run in this JVM: [GeoIpProcessorFactoryTests]\r\n20:49:52 \r\n20:49:52 > Throwable #1: java.io.IOException: Could not remove the following files (in the order of attempts):\r\n20:49:52 > C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-ASN.mmdb: java.nio.file.FileSystemException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-ASN.mmdb: The process cannot access the file because it is being used by another process.\r\n20:49:52 > C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-City.mmdb: java.nio.file.FileSystemException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-City.mmdb: The process cannot access the file because it is being used by another process.\r\n20:49:52 > C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-Country.mmdb: java.nio.file.FileSystemException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\\GeoLite2-Country.mmdb: The process cannot access the file because it is being used by another process.\r\n20:49:52 > C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip: java.nio.file.DirectoryNotEmptyException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\\ingest-geoip\r\n20:49:52 > 
C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002: java.nio.file.DirectoryNotEmptyException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\\tempDir-002\r\n20:49:52 > C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001: java.nio.file.DirectoryNotEmptyException: C:\\Users\\jenkins\\workspace\\elastic+elasticsearch+master+multijob-windows-compatibility\\plugins\\ingest-geoip\\build\\testrun\\test\\J1\\temp\\org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests_A72F0DB2EDA125D3-001\r\n20:49:52 > \tat __randomizedtesting.SeedInfo.seed([A72F0DB2EDA125D3]:0)\r\n20:49:52 > \tat org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)\r\n20:49:52 > \tat java.lang.Thread.run(Thread.java:748)\r\n20:49:52 Completed [2/3] on J1 in 6.90s, 12 tests, 1 error <<< FAILURES!\r\n20:49:52 \r\n20:49:55 Suite: org.elasticsearch.ingest.geoip.GeoIpProcessorTests\r\n20:49:55 Completed [3/3] on J2 in 10.06s, 12 tests\r\n20:49:55 \r\n20:49:55 Tests with failures:\r\n20:49:55 - org.elasticsearch.ingest.geoip.GeoIpProcessorFactoryTests (suite)```", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-12T23:02:51Z" }, { "body": "It is because these files can not be deleted while a mapped byte buffer that is backed by a memory mappings of these files has not been garbage collected. We should merely skip these tests on Windows.", "created_at": "2018-03-12T23:36:49Z" }, { "body": "cool.\r\n\r\nI will go ahead and add an `assumeTrue` in the right places in this test class as to skip this on Windows", "created_at": "2018-03-12T23:55:28Z" }, { "body": "Thanks @talevy.", "created_at": "2018-03-13T02:53:26Z" }, { "body": "Any suggestions on how to go about doing this?\r\n\r\nissue: If one sticks a `assumeFalse(..., Windows)` in the `BeforeClass` segment, all the tests within \r\nthe suite are skipped, which leads to the suite to fail.\r\n\r\nSince the problem is outside the scope of any one test, I can't think of anything cleaner than moving the `BeforeClass` and `AfterClass` methods to be called internally within each test method, and introducing a dummy test that will never be skipped.", "created_at": "2018-03-13T04:46:25Z" }, { "body": "I can take that one @talevy.", "created_at": "2018-03-13T07:30:39Z" } ], "number": 29001, "title": "[TEST] GeoIpProcessorFactoryTests fails handling files on Windows" }
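The "file is being used by another process" failures come from the JDK behaviour the comments describe: a `MappedByteBuffer` keeps its file mapping alive until it is garbage-collected, and on Windows a mapped file cannot be deleted while the mapping exists. The small standalone program below reproduces that behaviour; on Linux or macOS the delete is expected to succeed, on Windows it is expected to fail (the exact exception type can vary by platform and JDK).

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedFileDeleteSketch {

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mapped-delete-demo", ".bin");
        Files.write(file, new byte[1024]);

        MappedByteBuffer mapped;
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, Files.size(file));
        }
        // The channel is closed, but the mapping stays alive until `mapped` is GC'd.
        System.out.println("first mapped byte: " + mapped.get(0));

        try {
            Files.delete(file);
            System.out.println("deleted while mapped (typical Linux/macOS behaviour)");
        } catch (IOException e) {
            // Clean-up has to wait until the buffer is collected, which is why the
            // GeoIP factory tests skip on Windows instead of trying to delete.
            System.out.println("delete failed while mapped (typical Windows behaviour): " + e);
        }
    }
}
```

In the test changes that follow, the same condition is expressed with `Constants.WINDOWS` and `assumeFalse(...)` guards rather than attempting any Windows-specific cleanup.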
{ "body": "With this commit we skip all GeoIpProcessorFactoryTests on Windows.\r\nThese tests use a MappedByteBuffer which will keep its file mappings\r\nuntil it is garbage-collected. As a consequence, the corresponding\r\nfile appears to be still in use, Windows cannot delete it and the test\r\nwill fail in teardown.\r\n\r\nCloses #29001", "number": 29005, "review_comments": [], "title": "Skip GeoIpProcessorFactoryTests on Windows" }
{ "commits": [ { "message": "Skip GeoIpProcessorFactoryTests on Windows\n\nWith this commit we skip all GeoIpProcessorFactoryTests on Windows.\nThese tests use a MappedByteBuffer which will keep its file mappings\nuntil it is garbage-collected. As a consequence, the corresponding\nfile appears to be still in use, Windows cannot delete it and the test\nwill fail in teardown.\n\nCloses #29001" } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import com.maxmind.db.NoCache;\n import com.maxmind.db.NodeCache;\n+import org.apache.lucene.util.Constants;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Randomness;\n import org.elasticsearch.test.ESTestCase;\n@@ -51,6 +52,13 @@ public class GeoIpProcessorFactoryTests extends ESTestCase {\n \n @BeforeClass\n public static void loadDatabaseReaders() throws IOException {\n+ // Skip setup because Windows cannot cleanup these files properly. The reason is that they are using\n+ // a MappedByteBuffer which will keep the file mappings active until it is garbage-collected. As a consequence,\n+ // the corresponding file appears to be still in use and Windows cannot delete it.\n+ if (Constants.WINDOWS) {\n+ return;\n+ }\n+\n Path configDir = createTempDir();\n Path geoIpConfigDir = configDir.resolve(\"ingest-geoip\");\n Files.createDirectories(geoIpConfigDir);\n@@ -67,13 +75,23 @@ public static void loadDatabaseReaders() throws IOException {\n \n @AfterClass\n public static void closeDatabaseReaders() throws IOException {\n+ // Skip setup because Windows cannot cleanup these files properly. The reason is that they are using\n+ // a MappedByteBuffer which will keep the file mappings active until it is garbage-collected. 
As a consequence,\n+ // the corresponding file appears to be still in use and Windows cannot delete it.\n+ if (Constants.WINDOWS) {\n+ return;\n+ }\n+\n for (DatabaseReaderLazyLoader reader : databaseReaders.values()) {\n reader.close();\n }\n databaseReaders = null;\n }\n \n public void testBuildDefaults() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config = new HashMap<>();\n@@ -90,6 +108,9 @@ public void testBuildDefaults() throws Exception {\n }\n \n public void testSetIgnoreMissing() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config = new HashMap<>();\n@@ -107,6 +128,9 @@ public void testSetIgnoreMissing() throws Exception {\n }\n \n public void testCountryBuildDefaults() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config = new HashMap<>();\n@@ -125,6 +149,9 @@ public void testCountryBuildDefaults() throws Exception {\n }\n \n public void testAsnBuildDefaults() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config = new HashMap<>();\n@@ -143,6 +170,9 @@ public void testAsnBuildDefaults() throws Exception {\n }\n \n public void testBuildTargetField() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n Map<String, Object> config = new HashMap<>();\n config.put(\"field\", \"_field\");\n@@ -154,6 +184,9 @@ public void testBuildTargetField() throws Exception {\n }\n \n public void testBuildDbFile() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n 
Map<String, Object> config = new HashMap<>();\n config.put(\"field\", \"_field\");\n@@ -167,6 +200,9 @@ public void testBuildDbFile() throws Exception {\n }\n \n public void testBuildWithCountryDbAndAsnFields() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n Map<String, Object> config = new HashMap<>();\n config.put(\"field\", \"_field\");\n@@ -181,6 +217,9 @@ public void testBuildWithCountryDbAndAsnFields() throws Exception {\n }\n \n public void testBuildWithAsnDbAndCityFields() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n Map<String, Object> config = new HashMap<>();\n config.put(\"field\", \"_field\");\n@@ -195,6 +234,9 @@ public void testBuildWithAsnDbAndCityFields() throws Exception {\n }\n \n public void testBuildNonExistingDbFile() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config = new HashMap<>();\n@@ -205,6 +247,9 @@ public void testBuildNonExistingDbFile() throws Exception {\n }\n \n public void testBuildFields() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Set<GeoIpProcessor.Property> properties = EnumSet.noneOf(GeoIpProcessor.Property.class);\n@@ -229,6 +274,9 @@ public void testBuildFields() throws Exception {\n }\n \n public void testBuildIllegalFieldOption() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n GeoIpProcessor.Factory factory = new GeoIpProcessor.Factory(databaseReaders);\n \n Map<String, Object> config1 = new HashMap<>();\n@@ -246,6 +294,9 @@ public void testBuildIllegalFieldOption() throws Exception {\n }\n \n public void testLazyLoading() throws Exception {\n+ // This test uses a MappedByteBuffer which will keep the file mappings active until it is garbage-collected.\n+ // As a consequence, the corresponding file appears to be still in use and Windows cannot delete it.\n+ assumeFalse(\"windows deletion behavior is asinine\", Constants.WINDOWS);\n Path configDir = createTempDir();\n Path geoIpConfigDir = 
configDir.resolve(\"ingest-geoip\");\n Files.createDirectories(geoIpConfigDir);", "filename": "plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java", "status": "modified" } ] }
{ "body": "Hi,\n\nI'm not 100% sure about that but there might be an issue for Debian users who have already installed a beta/rc version.\n\nOn a Debian 8.6, I've been using the \"prerelease\" source list, and I've installed Elasticsearch, Logstash and Kibana (`5.0.0-rc1`).\n\nAn hour ago, after the 5.0.0 GA announcement, I've changed my source list to the \"normal\" repository URL, did an `apt update` and `apt upgrade`.\n\nOnly Logstash was available for upgrade.\n\nIn the packages list for the \"elastic\" source list, the packages were all present, but a version number issue was preventing APT to consider them as newer versions.\n\nI did `apt remove elasticsearch kibana && apt install elasticsearch kibana` then I got the correct (latest) versions.\n", "comments": [ { "body": "This is indeed a bug, and I'm sorry it has taken so long to get a fix. The problem was with the underlying library used for building the deb package. I've opened #29000 to fix this, which will apply for 7.0 prerelease versions when they are released.", "created_at": "2018-03-12T23:03:47Z" } ], "number": 21139, "title": "Upgrading from 5.0.0-rc1 to stable might be broken on Debian" }
{ "body": "This commit converts the deb package to use tildes in place of dash in\r\nthe internal package version. This is only relevant for prerelease\r\nversions of elasticsearch, and important for prerelease versions to \r\nsort before released versions. Previously, this was not possible due to\r\nproblems with the underlying library used by the ospackage plugin, but\r\nsince a recent upgrade, it now works.\r\n\r\ncloses #21139", "number": 29000, "review_comments": [], "title": "Build: Fix deb version to use tilde with prerelease versions" }
{ "commits": [ { "message": "Build: Fix deb version to use tilde with prerelease versions\n\nThis commit converts the deb package to use tildes in place of dash in\nthe internal package version. This is only relevant for prerelease\nversions of elasticsearch. Previously, this was not possible due to\nproblems with the underlying library used by the ospackage plugin, but\nsince a recent upgrade, it now works.\n\ncloses #21139" }, { "message": "Merge branch 'master' into deb_prerelease_version" }, { "message": "Merge branch 'master' into deb_prerelease_version" } ], "files": [ { "diff": "@@ -270,7 +270,7 @@ Closure commonDebConfig(boolean oss) {\n customFields['License'] = 'Elastic-License'\n }\n \n- version = project.version\n+ version = project.version.replace('-', '~')\n packageGroup 'web'\n requires 'bash'\n requires 'libc6'", "filename": "distribution/packages/build.gradle", "status": "modified" } ] }
{ "body": "<!--\r\n\r\n** Please read the guidelines below. **\r\n\r\nIssues that do not follow these guidelines are likely to be closed.\r\n\r\n1. GitHub is reserved for bug reports and feature requests. The best place to\r\n ask a general question is at the Elastic [forums](https://discuss.elastic.co).\r\n GitHub is not the place for general questions.\r\n\r\n2. Is this bug report or feature request for a supported OS? If not, it\r\n is likely to be closed. See https://www.elastic.co/support/matrix#show_os\r\n\r\n3. Please fill out EITHER the feature request block or the bug report block\r\n below, and delete the other block.\r\n\r\n-->\r\n\r\n<!-- Bug report -->\r\n\r\n**Elasticsearch version** (`bin/elasticsearch --version`): 6.1.3\r\n\r\n**Plugins installed**: [ingest-geoip,ingest-user-agent,repository-s3,x-pack]\r\n\r\n**JVM version** (`java -version`): \r\n```\r\njava version \"1.8.0_144\"\r\nJava(TM) SE Runtime Environment (build 1.8.0_144-b01)\r\nJava HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)\r\n```\r\n\r\n**OS version** (`uname -a` if on a Unix-like system): `Linux bd9a03495c76 4.4.0-66-generic #87~14.04.1-Ubuntu SMP Fri Mar 3 17:32:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux`\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nNode suffers an OOM error and the following message is logged:\r\n`[2018-03-09T04:18:01,808][ERROR][org.elasticsearch.bootstrap.ElasticsearchUncaughtExceptionHandler] fatal error in thread [elasticsearch[instance-0000000008][bulk][T#739]], exiting java.lang.OutOfMemoryError: Java heap space`\r\nHowever, the instance does not die and it keeps on throwing OOMs from time remaining totally unusable.\r\n\r\n**Steps to reproduce**:\r\n\r\n**Provide logs (if relevant)**:\r\nLots of similar logs are there in the last 24h:\r\n```\r\n[2018-03-09T11:21:55,312][ERROR][org.elasticsearch.index.engine.Engine] tragic event in index writer java.lang.OutOfMemoryError: Java heap space\r\n```\r\n\r\nThis is the full stacktrace of one of the errors:\r\n\r\nThis is the first OOM that was thrown and did not result on the instance dying:\r\n```\r\n[2018-03-08T12:31:30,448][ERROR][org.elasticsearch.bootstrap.ElasticsearchUncaughtExceptionHandler] fatal error in thread [Thread-35], exiting\r\njava.lang.OutOfMemoryError: Java heap space\r\n\tat io.netty.buffer.UnpooledHeapByteBuf.allocateArray(UnpooledHeapByteBuf.java:85) ~[?:?]\r\n\tat io.netty.buffer.UnpooledByteBufAllocator.allocateArray(UnpooledByteBufAllocator.java:147) ~[?:?]\r\n\tat io.netty.buffer.UnpooledHeapByteBuf.<init>(UnpooledHeapByteBuf.java:58) ~[?:?]\r\n\tat io.netty.buffer.UnpooledByteBufAllocator.<init>(UnpooledByteBufAllocator.java:142) ~[?:?]\r\n\tat io.netty.buffer.UnpooledByteBufAllocator.newHeapBuffer(UnpooledByteBufAllocator.java:64) ~[?:?]\r\n\tat io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:162) ~[?:?]\r\n\tat io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:153) ~[?:?]\r\n\tat io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:135) ~[?:?]\r\n\tat io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80) ~[?:?]\r\n\tat io.netty.channel.nio.AbstractNioByteChannel.read(AbstractNioByteChannel.java:122) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]\r\n\tat 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]\r\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:858) ~[?:?]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\n```\r\n\r\nOther, later, error I've seen that does not result on the instance dying:\r\n```\r\n[2018-03-09T06:34:19,412][WARN ][org.elasticsearch.cluster.action.shard.ShardStateAction] [filebeat-2018.03.06][3] received shard failed for shard id [[filebeat-2018.03.06][3]], allocation id [2-mEAryVT_2ElzJAYhZWcQ], primary term [0], message [shard failure, reason [lucene commit failed]], failure [AlreadyClosedException[this IndexWriter is closed]; nested: OutOfMemoryError[Java heap space]; ]\r\norg.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed\r\n\tat org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:896) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:910) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3377) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:2113) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:2106) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.engine.InternalEngine.flush(InternalEngine.java:1493) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.shard.IndexShard.flush(IndexShard.java:1024) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.performPreSyncedFlush(SyncedFlushService.java:414) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.access(SyncedFlushService.java:70) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.messageReceived(SyncedFlushService.java:696) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.messageReceived(SyncedFlushService.java:692) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:30) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.doRun(SecurityServerTransportInterceptor.java:258) ~[?:?]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.EsExecutors.execute(EsExecutors.java:135) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.lambda-zsh(SecurityServerTransportInterceptor.java:307) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener.onResponse(ActionListener.java:60) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda(ServerTransportFilter.java:165) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils.maybeRun(AuthorizationUtils.java:182) ~[?:?]\r\n\tat 
org.elasticsearch.xpack.security.authz.AuthorizationUtils.setRunAsRoles(AuthorizationUtils.java:176) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils.authorize(AuthorizationUtils.java:158) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda(ServerTransportFilter.java:167) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener.onResponse(ActionListener.java:60) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lambda(AuthenticationService.java:195) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lambda(AuthenticationService.java:228) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lookForExistingAuthentication(AuthenticationService.java:239) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.authenticateAsync(AuthenticationService.java:193) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.access-zsh(AuthenticationService.java:147) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:116) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:141) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.messageReceived(SecurityServerTransportInterceptor.java:314) ~[?:?]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.transport.TransportService.doRun(TransportService.java:652) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.lang.OutOfMemoryError: Java heap space\r\n```\r\n", "comments": [ { "body": "Pinging @elastic/es-core-infra", "created_at": "2018-03-09T17:25:48Z" }, { "body": "Relates to #21721. /cc @jasontedor @s1monw ", "created_at": "2018-03-09T17:30:06Z" }, { "body": "@nachogiljaldo Can you attach your complete log (in a gist)? This should not be possible. After the error message `fatal error in thread [Thread-35], exiting`, we call `Runtime.halt()`. This should cause the jvm to shutdown.", "created_at": "2018-03-09T17:37:46Z" }, { "body": "The first one is a bug and I will open a fix soon. The second one I do not believe without sufficiently strong evidence. 
The third one I am still investigating.", "created_at": "2018-03-09T17:55:46Z" }, { "body": ">This is the first OOM that was thrown and did not result on the instance dying:\r\n\r\n```\r\n[2018-03-08T12:31:30,448][ERROR][org.elasticsearch.bootstrap.ElasticsearchUncaughtExceptionHandler] fatal error in thread [Thread-35], exiting\r\njava.lang.OutOfMemoryError: Java heap space\r\n```\r\n\r\n@nachogiljaldo Would you please provide evidence that this one did not result in the node dying?", "created_at": "2018-03-09T18:00:08Z" }, { "body": "> Other, later, error I've seen that does not result on the instance dying:\r\n\r\n```\r\n[2018-03-09T06:34:19,412][WARN ][org.elasticsearch.cluster.action.shard.ShardStateAction] [filebeat-2018.03.06][3] received shard failed for shard id [[filebeat-2018.03.06][3]], allocation id [2-mEAryVT_2ElzJAYhZWcQ], primary term [0], message [shard failure, reason [lucene commit failed]], failure [AlreadyClosedException[this IndexWriter is closed]; nested: OutOfMemoryError[Java heap space]; ]\r\norg.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed\r\n\tat org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:896) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n.\r\n.\r\n.\r\nCaused by: java.lang.OutOfMemoryError: Java heap space\r\n```\r\n\r\n@nachogiljaldo Do you have any more to this stack trace?", "created_at": "2018-03-09T18:01:27Z" }, { "body": "@jasontedor / @rjernst \r\n> @nachogiljaldo Would you please provide evidence that this one did not result in the node dying?\r\n```\r\n[2018-03-09T17:49:16,336][WARN ][io.netty.channel.AbstractChannelHandlerContext] An exception 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:\r\njava.lang.OutOfMemoryError: Java heap space\r\njava.lang.OutOfMemoryError: Java heap space\r\njava.lang.OutOfMemoryError: Java heap space\r\njava.lang.OutOfMemoryError: Java heap space\r\norg.elasticsearch.ElasticsearchException: java.lang.OutOfMemoryError: Java heap space\r\nCaused by: java.lang.OutOfMemoryError: Java heap space\r\norg.elasticsearch.ElasticsearchException: java.lang.OutOfMemoryError: Java heap space\r\nCaused by: java.lang.OutOfMemoryError: Java heap space\r\nroot@aadf270ee991:/# ps -ef | grep -i java\r\nfoundus+ 929 16 28 Jan31 ? 10-12:49:11 /usr/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Des.allow_insecure_settings=true -XX:ParallelGCThreads=2 -XX:ConcGCThreads=1 -Xms409M -Xmx409M -Dio.netty.allocator.type=unpooled -Djava.nio.file.spi.DefaultFileSystemProvider=co.elastic.cloud.quotaawarefs.QuotaAwareFileSystemProvider -Dio.netty.recycler.maxCapacityPerThread=0 -Djava.security.policy=file:///app/config/gelf.policy -Des.cgroups.hierarchy.override=/ -Des.path.home=/elasticsearch -Des.path.conf=/app/config -cp /elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /app/es.pid -Epath.logs=/app/logs -Epath.data=/app/data\r\nroot 103337 103318 0 18:19 ? 
00:00:00 grep --color=auto -i java\r\n```\r\n\r\n> @nachogiljaldo Do you have any more to this stack trace?\r\nI have more examples of those if that's what you mean. If you ask if there's a stacktrace following it, there aren't.\r\n```\r\n[2018-03-09T06:34:19,412][WARN ][org.elasticsearch.cluster.action.shard.ShardStateAction] [filebeat-2018.03.06][3] received shard failed for shard id [[filebeat-2018.03.06][3]], allocation id [2-mEAryVT_2ElzJAYhZWcQ], primary term [0], message [shard failure, reason [lucene commit failed]], failure [AlreadyClosedException[this IndexWriter is closed]; nested: OutOfMemoryError[Java heap space]; ]\r\norg.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed\r\n\tat org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:896) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:910) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3377) ~[lucene-core-7.1.0.jar:7.1.0 84c90ad2c0218156c840e19a64d72b8a38550659 - ubuntu - 2017-10-13 16:12:42]\r\n\tat org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:2113) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:2106) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.engine.InternalEngine.flush(InternalEngine.java:1493) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.index.shard.IndexShard.flush(IndexShard.java:1024) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.performPreSyncedFlush(SyncedFlushService.java:414) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.access(SyncedFlushService.java:70) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.messageReceived(SyncedFlushService.java:696) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.indices.flush.SyncedFlushService.messageReceived(SyncedFlushService.java:692) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:30) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.doRun(SecurityServerTransportInterceptor.java:258) ~[?:?]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.EsExecutors.execute(EsExecutors.java:135) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.lambda-zsh(SecurityServerTransportInterceptor.java:307) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener.onResponse(ActionListener.java:60) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda(ServerTransportFilter.java:165) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils.maybeRun(AuthorizationUtils.java:182) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils.setRunAsRoles(AuthorizationUtils.java:176) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authz.AuthorizationUtils.authorize(AuthorizationUtils.java:158) 
~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.lambda(ServerTransportFilter.java:167) ~[?:?]\r\n\tat org.elasticsearch.action.ActionListener.onResponse(ActionListener.java:60) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lambda(AuthenticationService.java:195) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lambda(AuthenticationService.java:228) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.lookForExistingAuthentication(AuthenticationService.java:239) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.authenticateAsync(AuthenticationService.java:193) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.access-zsh(AuthenticationService.java:147) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:116) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.ServerTransportFilter.inbound(ServerTransportFilter.java:141) ~[?:?]\r\n\tat org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor.messageReceived(SecurityServerTransportInterceptor.java:314) ~[?:?]\r\n\tat org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.transport.TransportService.doRun(TransportService.java:652) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.3.jar:6.1.3]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]\r\n\tat java.util.concurrent.ThreadPoolExecutor.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]\r\nCaused by: java.lang.OutOfMemoryError: Java heap space\r\n```", "created_at": "2018-03-09T18:36:23Z" }, { "body": "Btw. this is printed to stdout: `Exception: java.lang.SecurityException thrown from the UncaughtExceptionHandler in thread \"Thread-428833\"`", "created_at": "2018-03-09T18:39:28Z" }, { "body": "@henrikno Is there a stack trace?", "created_at": "2018-03-09T18:41:07Z" }, { "body": "No, that's the weird thing. Just that line, several times, with different thread names.\r\n```\r\nException: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread \"java-sdk-http-connection-reaper\"\r\n\r\nException: java.lang.SecurityException thrown from the UncaughtExceptionHandler in thread \"Thread-233\"\r\n\r\nException: java.lang.SecurityException thrown from the UncaughtExceptionHandler in thread \"elasticsearch[instance-0000000008][generic][T#9274]\"\r\n```", "created_at": "2018-03-09T18:43:59Z" }, { "body": "Okay, no worries; I have a gut feeling what is occurring. I will try to reproduce and validate. This will knock out the first two of these. I will still need more information on the third one.", "created_at": "2018-03-09T18:53:46Z" }, { "body": "Okay, here is the problem. The change #27482 introduced a bug that causes a security manager exception when we attempt to exit after a fatal error. This was immediately detected and #27518 was introduced. The bug from #27482 was suppose to have never been released yet the backport from #27518 to 6.1 was missed, it never happened. 
This means that #27482 is in 6.1 without the corresponding fix. This bug preventing exit in situations like this occurs in 6.1 only, upgrading to 6.2 will address the issue.", "created_at": "2018-03-09T19:12:07Z" }, { "body": "I understand the situation with the third exception and will work on a fix.", "created_at": "2018-03-09T19:31:18Z" }, { "body": "I opened #28973.", "created_at": "2018-03-09T22:42:21Z" } ], "number": 28967, "title": "OutOfMemoryError not killing the node" }
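For context, the behaviour the reporters expected, a node that halts on its first fatal error, comes from a process-wide uncaught exception handler. The sketch below shows the general pattern in plain Java; it is not the ElasticsearchUncaughtExceptionHandler implementation, and the class name and messages are illustrative.

```java
public class DieOnFatalErrorSketch {
    public static void main(String[] args) throws InterruptedException {
        // Install a process-wide handler: log the failure, then halt the JVM
        // so a half-broken process does not limp along after an OutOfMemoryError.
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            System.err.println("fatal error in thread [" + thread.getName() + "], exiting");
            throwable.printStackTrace();
            if (throwable instanceof Error) {
                // halt() terminates immediately without running shutdown hooks.
                Runtime.getRuntime().halt(1);
            }
        });

        // Trigger an uncaught Error on a throwaway thread to see the handler fire.
        Thread t = new Thread(() -> { throw new OutOfMemoryError("simulated"); });
        t.start();
        t.join();
    }
}
```

In the 6.1 regression identified above, the handler itself ran into a SecurityException, so the halt never happened and the node kept logging OutOfMemoryErrors instead of dying.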
{ "body": "Today we check for a few cases where we should maybe die before failing the engine (e.g., when a merge fails). However, there are still other cases where a fatal error can be hidden from us (for example, a failed index writer commit). This commit modifies the mechanism for failing the engine to always check for a fatal error before failing the engine.\r\n\r\nRelates #27265, closes #28967", "number": 28973, "review_comments": [ { "body": "As long as you're using Optional here, I think this could be\r\n\r\n```java\r\nExceptionsHelper.maybeError(maybeFatal, logger).ifPresent(e -> {\r\n try {\r\n logger.error(maybeMessage, e);\r\n } finally {\r\n throw e;\r\n }});\r\n```\r\n\r\nUp to you if you want though", "created_at": "2018-03-09T22:33:52Z" }, { "body": "Thanks.", "created_at": "2018-03-09T23:20:19Z" }, { "body": "Extremely insignificant, but just for clarity: I think you mean \"does\" (indeed contain a fatal error) on this line.", "created_at": "2018-03-10T01:47:30Z" }, { "body": "Thank you @richardun, I appreciate the attention to detail. I will push a correction in the morning.", "created_at": "2018-03-10T02:04:59Z" } ], "title": "Maybe die before failing engine" }
{ "commits": [ { "message": "Maybe die before failing engine\n\nToday we check for a few cases where we should maybe die before failing\nthe engine (e.g., when a merge fails). However, there are still other\ncases where a fatal error can be hidden from us (for example, a failed\nindex writer commit). This commit modifies the mechanism for failing the\nengine to always check for a fatal error before failing the engine." }, { "message": "Simplify" }, { "message": "Null check" }, { "message": "Fix typo" } ], "files": [ { "diff": "@@ -847,11 +847,35 @@ public void forceMerge(boolean flush) throws IOException {\n */\n public abstract IndexCommitRef acquireSafeIndexCommit() throws EngineException;\n \n+ /**\n+ * If the specified throwable contains a fatal error in the throwable graph, such a fatal error will be thrown. Callers should ensure\n+ * that there are no catch statements that would catch an error in the stack as the fatal error here should go uncaught and be handled\n+ * by the uncaught exception handler that we install during bootstrap. If the specified throwable does indeed contain a fatal error, the\n+ * specified message will attempt to be logged before throwing the fatal error. If the specified throwable does not contain a fatal\n+ * error, this method is a no-op.\n+ *\n+ * @param maybeMessage the message to maybe log\n+ * @param maybeFatal the throwable that maybe contains a fatal error\n+ */\n+ @SuppressWarnings(\"finally\")\n+ private void maybeDie(final String maybeMessage, final Throwable maybeFatal) {\n+ ExceptionsHelper.maybeError(maybeFatal, logger).ifPresent(error -> {\n+ try {\n+ logger.error(maybeMessage, error);\n+ } finally {\n+ throw error;\n+ }\n+ });\n+ }\n+\n /**\n * fail engine due to some error. the engine will also be closed.\n * The underlying store is marked corrupted iff failure is caused by index corruption\n */\n public void failEngine(String reason, @Nullable Exception failure) {\n+ if (failure != null) {\n+ maybeDie(reason, failure);\n+ }\n if (failEngineLock.tryLock()) {\n store.incRef();\n try {", "filename": "server/src/main/java/org/elasticsearch/index/engine/Engine.java", "status": "modified" }, { "diff": "@@ -1730,7 +1730,6 @@ private boolean failOnTragicEvent(AlreadyClosedException ex) {\n // we need to fail the engine. it might have already been failed before\n // but we are double-checking it's failed and closed\n if (indexWriter.isOpen() == false && indexWriter.getTragicException() != null) {\n- maybeDie(\"tragic event in index writer\", indexWriter.getTragicException());\n failEngine(\"already closed by tragic event on the index writer\", (Exception) indexWriter.getTragicException());\n engineFailed = true;\n } else if (translog.isOpen() == false && translog.getTragicException() != null) {\n@@ -2080,34 +2079,12 @@ protected void doRun() throws Exception {\n * confidence that the call stack does not contain catch statements that would cause the error that might be thrown\n * here from being caught and never reaching the uncaught exception handler.\n */\n- maybeDie(\"fatal error while merging\", exc);\n- logger.error(\"failed to merge\", exc);\n failEngine(\"merge failed\", new MergePolicy.MergeException(exc, dir));\n }\n });\n }\n }\n \n- /**\n- * If the specified throwable is a fatal error, this throwable will be thrown. 
Callers should ensure that there are no catch statements\n- * that would catch an error in the stack as the fatal error here should go uncaught and be handled by the uncaught exception handler\n- * that we install during bootstrap. If the specified throwable is indeed a fatal error, the specified message will attempt to be logged\n- * before throwing the fatal error. If the specified throwable is not a fatal error, this method is a no-op.\n- *\n- * @param maybeMessage the message to maybe log\n- * @param maybeFatal the throwable that is maybe fatal\n- */\n- @SuppressWarnings(\"finally\")\n- private void maybeDie(final String maybeMessage, final Throwable maybeFatal) {\n- if (maybeFatal instanceof Error) {\n- try {\n- logger.error(maybeMessage, maybeFatal);\n- } finally {\n- throw (Error) maybeFatal;\n- }\n- }\n- }\n-\n /**\n * Commits the specified index writer.\n *", "filename": "server/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" } ] }
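The heart of the change is the check that hunts for a fatal error anywhere in the failure and rethrows it before the engine-failure bookkeeping runs. A self-contained approximation is below; the class is hypothetical and the simple cause-chain walk merely stands in for ExceptionsHelper.maybeError.

```java
import java.util.Optional;

public class MaybeDieSketch {

    // Simplified stand-in for ExceptionsHelper.maybeError(...): walk the cause
    // chain (with a small depth cap) looking for a fatal Error.
    static Optional<Error> maybeError(Throwable t) {
        int depth = 0;
        for (Throwable cur = t; cur != null && depth < 10; cur = cur.getCause(), depth++) {
            if (cur instanceof Error) {
                return Optional.of((Error) cur);
            }
        }
        return Optional.empty();
    }

    // Mirrors the shape of Engine#maybeDie in the diff: if a fatal error is
    // buried anywhere in the failure, try to log it, then rethrow so it reaches
    // the uncaught exception handler instead of the engine-failure path.
    @SuppressWarnings("finally")
    static void maybeDie(final String maybeMessage, final Throwable maybeFatal) {
        maybeError(maybeFatal).ifPresent(error -> {
            try {
                System.err.println(maybeMessage + ": " + error);
            } finally {
                throw error;
            }
        });
    }

    public static void main(String[] args) {
        Exception failure = new RuntimeException("lucene commit failed",
                new OutOfMemoryError("Java heap space"));
        maybeDie("tragic event in index writer", failure); // rethrows the OutOfMemoryError
    }
}
```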
{ "body": "The stored scripts API today accepts malformed requests instead of throwing an exception. For instance:\r\n\r\n```\r\nPOST _scripts/foo\r\n{\r\n \"source\": \"return\"\r\n}\r\n\r\nGET _scripts/foo\r\n```\r\n\r\nreturns\r\n\r\n```\r\n{\r\n \"_id\": \"foo\",\r\n \"found\": true,\r\n \"script\": {\r\n \"lang\": \"mustache\",\r\n \"source\": \"\"\"{\"source\":\"return\"}\"\"\"\r\n }\r\n}\r\n```", "comments": [ { "body": "You need to tell it to compile against a script context. Pass the `context` parameter (url) to the put stored script api with a context name, eg `template` if you are using a mustache script.", "created_at": "2017-12-02T22:59:50Z" }, { "body": "The [documentation](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-scripting-using.html#modules-scripting-stored-scripts) for this API doesn't mention contexts, so I can't figure out exactly how this is supposed to be used, but it seems like my original example above should throw an exception, no?", "created_at": "2017-12-05T11:25:57Z" }, { "body": "No, your example does not compile the script, it only stores it. We discussed making context required in previous issues before adding script contexts, but you thought that was too cumbersome for users. :) . It sounds like we just need to update the documentation.", "created_at": "2017-12-05T17:34:09Z" }, { "body": "My apologies, I misread the issue here. The problem is due to having support for old templates. This was where the entire template was passed. Essentially there is a heuristic that looks for \"template\", looks for \"script\", and if it doesn't find those, then it takes the entire body as the script source.", "created_at": "2017-12-05T17:46:57Z" }, { "body": "Hi all, I'd like to start tinkering with the code and this seemed like a good issue to start with, but to me it seems this loose script definition might have been intended.\r\n\r\nAs @rjernst said, if \"script\" and \"template\" are not found, an object with the provided JSON contents is created in \r\n`StoredScriptSource#parse(BytesReference content, XContentType xContentType)` so I'm not sure what the desired fix is.\r\n\r\nDo you think the object should be restricted to a more specific template? If not, should this issue be closed?", "created_at": "2017-12-12T13:26:52Z" }, { "body": "@topalavlad The fix here is to deprecate/remove support for this leniency. It was my original intention to remove this for 6.0, but i did not get to deprecating it in time in 5.x. Additionally, \"template\" should also be deprecated/removed, but that is a separate issue (and possibly a bit more involved, I don't remember all the details from the last time I looked).", "created_at": "2017-12-12T20:48:12Z" }, { "body": "Cool, I'll try to remove the part which accepts anything and see which tests will fail", "created_at": "2017-12-13T00:22:39Z" }, { "body": "Hi, any updates on that issue? 
\r\n\r\nIn addition to much mentioned here, **any** script created will be stored successfully: \r\n\r\n```\r\nPOST _scripts/foo\r\n{\r\n \"script\": {\r\n \"source\": \"fsafsfsfsfsafasf\",\r\n \"lang\": \"painless\"\r\n }\r\n}\r\n```\r\nwill return:\r\n```\r\n{\r\n \"acknowledged\": true\r\n}\r\n```\r\nand `GET _scripts/foo`\r\nwill return: \r\n```\r\n{\r\n \"_id\": \"foo\",\r\n \"found\": true,\r\n \"script\": {\r\n \"lang\": \"painless\",\r\n \"source\": \"fsafsfsfsfsafasf\"\r\n }\r\n}\r\n```", "created_at": "2018-02-01T14:57:19Z" }, { "body": "We should probably discuss whether or not context should be required for stored scripts again now that they're completed. ", "created_at": "2018-03-12T17:58:38Z" }, { "body": "Discussion with @rjernst -- we may be able to compile against each existing context when a stored script is entered to ensure that at least one context will successfully compile. As a follow up possibly even marking which contexts are good for use with the stored script.", "created_at": "2018-03-16T22:36:52Z" }, { "body": "This is one of the goals of https://github.com/elastic/elasticsearch/issues/53702 the issues in https://github.com/elastic/elasticsearch/issues/54677 need to be resolved first to move forward.", "created_at": "2020-04-02T19:00:34Z" }, { "body": "This has been open for quite a while, and we haven't made much progress on this due to focus in other areas. For now I'm going to close this as something we aren't planning on implementing. We can re-open it later if needed.", "created_at": "2024-05-25T02:58:47Z" } ], "number": 27612, "title": "Stored scripts API accepting malformed requests" }
{ "body": "The stored scripts API today accepts malformed requests instead of throwing an exception. \r\nThis PR deprecates accepting malformed put stored script requests (requests not using the official script format).\r\n\r\nRelates to #27612\r\n", "number": 28939, "review_comments": [], "title": "Deprecate accepting malformed requests in stored script API" }
{ "commits": [ { "message": "removed support for random scripts. Fixes #27612" }, { "message": "Merge branch 'master' of https://github.com/elastic/elasticsearch into 27612" }, { "message": "deprecate template context and no context for stored scripts" }, { "message": "fixed tests" }, { "message": "Fixed deprecation message." }, { "message": "Merge branch 'master' of https://github.com/elastic/elasticsearch into 27612" }, { "message": "added warnings to mustache lang tests" } ], "files": [ { "diff": "@@ -198,6 +198,7 @@ public void testIndexedTemplateClient() throws Exception {\n \n getResponse = client().admin().cluster().prepareGetStoredScript(\"testTemplate\").get();\n assertNull(getResponse.getSource());\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n public void testIndexedTemplate() throws Exception {\n@@ -267,6 +268,7 @@ public void testIndexedTemplate() throws Exception {\n .setScript(\"2\").setScriptType(ScriptType.STORED).setScriptParams(templateParams)\n .get();\n assertHitCount(searchResponse.getResponse(), 1);\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n // Relates to #10397\n@@ -311,6 +313,7 @@ public void testIndexedTemplateOverwrite() throws Exception {\n .get();\n assertHitCount(searchResponse.getResponse(), 1);\n }\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n public void testIndexedTemplateWithArray() throws Exception {\n@@ -339,6 +342,7 @@ public void testIndexedTemplateWithArray() throws Exception {\n .setScript(\"4\").setScriptType(ScriptType.STORED).setScriptParams(arrayTemplateParams)\n .get();\n assertHitCount(searchResponse.getResponse(), 5);\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n }", "filename": "modules/lang-mustache/src/test/java/org/elasticsearch/script/mustache/SearchTemplateIT.java", "status": "modified" }, { "diff": "@@ -74,6 +74,11 @@ public class StoredScriptSource extends AbstractDiffable<StoredScriptSource> imp\n */\n public static final ParseField TEMPLATE_PARSE_FIELD = new ParseField(\"template\");\n \n+ /**\n+ * Standard {@link ParseField} for query on the inner field.\n+ */\n+ public static final ParseField TEMPLATE_NO_WRAPPER_PARSE_FIELD = new ParseField(\"query\");\n+\n /**\n * Standard {@link ParseField} for lang on the inner level.\n */\n@@ -189,6 +194,26 @@ private StoredScriptSource build(boolean ignoreEmpty) {\n PARSER.declareField(Builder::setOptions, XContentParser::mapStrings, OPTIONS_PARSE_FIELD, ValueType.OBJECT);\n }\n \n+ private static StoredScriptSource parseRemaining(Token token, XContentParser parser) throws IOException {\n+ try (XContentBuilder builder = XContentFactory.jsonBuilder()) {\n+ if (token != Token.START_OBJECT) {\n+ builder.startObject();\n+ builder.copyCurrentStructure(parser);\n+ builder.endObject();\n+ } else {\n+ builder.copyCurrentStructure(parser);\n+ }\n+\n+ String source = Strings.toString(builder);\n+\n+ if (source == null || source.isEmpty()) {\n+ DEPRECATION_LOGGER.deprecated(\"empty templates should no longer be used\");\n+ }\n+\n+ return new StoredScriptSource(Script.DEFAULT_TEMPLATE_LANG, source, Collections.emptyMap());\n+ }\n+ }\n+\n /**\n * This will parse XContent into a {@link StoredScriptSource}. 
The following formats can be parsed:\n *\n@@ -304,38 +329,28 @@ public static StoredScriptSource parse(BytesReference content, XContentType xCon\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"unexpected token [\" + token + \"], expected [{, <source>]\");\n }\n- } else {\n- if (TEMPLATE_PARSE_FIELD.getPreferredName().equals(name)) {\n- token = parser.nextToken();\n-\n- if (token == Token.VALUE_STRING) {\n- String source = parser.text();\n-\n- if (source == null || source.isEmpty()) {\n- DEPRECATION_LOGGER.deprecated(\"empty templates should no longer be used\");\n- }\n-\n- return new StoredScriptSource(Script.DEFAULT_TEMPLATE_LANG, source, Collections.emptyMap());\n- }\n- }\n+ } else if (TEMPLATE_PARSE_FIELD.getPreferredName().equals(name)) {\n \n- try (XContentBuilder builder = XContentFactory.jsonBuilder()) {\n- if (token != Token.START_OBJECT) {\n- builder.startObject();\n- builder.copyCurrentStructure(parser);\n- builder.endObject();\n- } else {\n- builder.copyCurrentStructure(parser);\n- }\n+ DEPRECATION_LOGGER.deprecated(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n \n- String source = Strings.toString(builder);\n+ token = parser.nextToken();\n+ if (token == Token.VALUE_STRING) {\n+ String source = parser.text();\n \n if (source == null || source.isEmpty()) {\n DEPRECATION_LOGGER.deprecated(\"empty templates should no longer be used\");\n }\n \n return new StoredScriptSource(Script.DEFAULT_TEMPLATE_LANG, source, Collections.emptyMap());\n+ } else {\n+ return parseRemaining(token, parser);\n }\n+ } else if (TEMPLATE_NO_WRAPPER_PARSE_FIELD.getPreferredName().equals(name)) {\n+ DEPRECATION_LOGGER.deprecated(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n+ return parseRemaining(token, parser);\n+ } else {\n+ DEPRECATION_LOGGER.deprecated(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n+ return parseRemaining(token, parser);\n }\n } catch (IOException ioe) {\n throw new UncheckedIOException(ioe);", "filename": "server/src/main/java/org/elasticsearch/script/StoredScriptSource.java", "status": "modified" }, { "diff": "@@ -81,10 +81,12 @@ public void testGetScript() throws Exception {\n XContentBuilder sourceBuilder = XContentFactory.jsonBuilder();\n sourceBuilder.startObject().startObject(\"template\").field(\"field\", \"value\").endObject().endObject();\n builder.storeScript(\"template\", StoredScriptSource.parse(BytesReference.bytes(sourceBuilder), sourceBuilder.contentType()));\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n \n sourceBuilder = XContentFactory.jsonBuilder();\n sourceBuilder.startObject().field(\"template\", \"value\").endObject();\n builder.storeScript(\"template_field\", StoredScriptSource.parse(BytesReference.bytes(sourceBuilder), sourceBuilder.contentType()));\n+ assertWarnings(\"the template context is now deprecated. 
Specify templates in a \\\"script\\\" element.\");\n \n sourceBuilder = XContentFactory.jsonBuilder();\n sourceBuilder.startObject().startObject(\"script\").field(\"lang\", \"_lang\").field(\"source\", \"_source\").endObject().endObject();\n@@ -99,14 +101,19 @@ public void testGetScript() throws Exception {\n public void testDiff() throws Exception {\n ScriptMetaData.Builder builder = new ScriptMetaData.Builder(null);\n builder.storeScript(\"1\", StoredScriptSource.parse(new BytesArray(\"{\\\"foo\\\":\\\"abc\\\"}\"), XContentType.JSON));\n+ assertWarnings(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n builder.storeScript(\"2\", StoredScriptSource.parse(new BytesArray(\"{\\\"foo\\\":\\\"def\\\"}\"), XContentType.JSON));\n+ assertWarnings(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n builder.storeScript(\"3\", StoredScriptSource.parse(new BytesArray(\"{\\\"foo\\\":\\\"ghi\\\"}\"), XContentType.JSON));\n+ assertWarnings(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n ScriptMetaData scriptMetaData1 = builder.build();\n \n builder = new ScriptMetaData.Builder(scriptMetaData1);\n builder.storeScript(\"2\", StoredScriptSource.parse(new BytesArray(\"{\\\"foo\\\":\\\"changed\\\"}\"), XContentType.JSON));\n+ assertWarnings(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n builder.deleteScript(\"3\");\n builder.storeScript(\"4\", StoredScriptSource.parse(new BytesArray(\"{\\\"foo\\\":\\\"jkl\\\"}\"), XContentType.JSON));\n+ assertWarnings(\"scripts should not be stored without a context. Specify them in a \\\"script\\\" element.\");\n ScriptMetaData scriptMetaData2 = builder.build();\n \n ScriptMetaData.ScriptMetadataDiff diff = (ScriptMetaData.ScriptMetadataDiff) scriptMetaData2.diff(scriptMetaData1);", "filename": "server/src/test/java/org/elasticsearch/script/ScriptMetaDataTests.java", "status": "modified" }, { "diff": "@@ -50,7 +50,9 @@ protected StoredScriptSource createTestInstance() {\n if (randomBoolean()) {\n options.put(Script.CONTENT_TYPE_OPTION, xContentType.mediaType());\n }\n- return StoredScriptSource.parse(BytesReference.bytes(template), xContentType);\n+ StoredScriptSource source = StoredScriptSource.parse(BytesReference.bytes(template), xContentType);\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n+ return source;\n } catch (IOException e) {\n throw new AssertionError(\"Failed to create test instance\", e);\n }", "filename": "server/src/test/java/org/elasticsearch/script/StoredScriptSourceTests.java", "status": "modified" }, { "diff": "@@ -74,6 +74,7 @@ public void testSourceParsing() throws Exception {\n StoredScriptSource source = new StoredScriptSource(\"mustache\", \"code\", Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n // complex template with wrapper template object\n@@ -89,6 +90,7 @@ public void testSourceParsing() throws Exception {\n StoredScriptSource source = new StoredScriptSource(\"mustache\", code, Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n+ assertWarnings(\"the template context is now deprecated. 
Specify templates in a \\\"script\\\" element.\");\n }\n \n // complex template with no wrapper object\n@@ -104,6 +106,7 @@ public void testSourceParsing() throws Exception {\n StoredScriptSource source = new StoredScriptSource(\"mustache\", code, Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n+ assertWarnings(\"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\");\n }\n \n // complex template using script as the field name\n@@ -223,7 +226,10 @@ public void testEmptyTemplateDeprecations() throws IOException {\n StoredScriptSource source = new StoredScriptSource(Script.DEFAULT_TEMPLATE_LANG, \"\", Collections.emptyMap());\n \n assertThat(parsed, equalTo(source));\n- assertWarnings(\"empty templates should no longer be used\");\n+ assertWarnings(\n+ \"the template context is now deprecated. Specify templates in a \\\"script\\\" element.\",\n+ \"empty templates should no longer be used\"\n+ );\n }\n \n try (XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON)) {", "filename": "server/src/test/java/org/elasticsearch/script/StoredScriptTests.java", "status": "modified" } ] }
{ "body": "**Elasticsearch version** : 6.0.0 - 6.2.2\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\n\r\nA multi_match query with type cross_fields ignores tie_breaker since https://github.com/elastic/elasticsearch/pull/25115. If this is intentional, the documentation should be adapted. If not, the following patch seems to fix it without any test failures:\r\n\r\n```\r\ndiff --git a/server/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java b/server/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java\r\nindex 8a85c67b68..91c8f2471c 100644\r\n--- a/server/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java\r\n+++ b/server/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java\r\n@@ -82,7 +82,7 @@ public class MultiMatchQuery extends MatchQuery {\r\n queryBuilder = new QueryBuilder(tieBreaker);\r\n break;\r\n case CROSS_FIELDS:\r\n- queryBuilder = new CrossFieldsQueryBuilder();\r\n+ queryBuilder = new CrossFieldsQueryBuilder(tieBreaker);\r\n break;\r\n default:\r\n throw new IllegalStateException(\"No such type: \" + type);\r\n@@ -152,8 +152,8 @@ public class MultiMatchQuery extends MatchQuery {\r\n final class CrossFieldsQueryBuilder extends QueryBuilder {\r\n private FieldAndFieldType[] blendedFields;\r\n \r\n- CrossFieldsQueryBuilder() {\r\n- super(0.0f);\r\n+ CrossFieldsQueryBuilder(float tieBreaker) {\r\n+ super(tieBreaker);\r\n }\r\n \r\n @Override\r\n```\r\n\r\nI can make a proper pull request if necessary and if this is the correct fix.\r\n\r\nThe problem was also reported in https://discuss.elastic.co/t/sum-score-in-multi-match/122547\r\n\r\n**Steps to reproduce**:\r\n\r\nThe following script should output \"correct\" as the last line.\r\n\r\n```\r\ncurl -XDELETE localhost:9200/test?ignore_unavailable=true\r\n\r\ncurl -XPOST 'localhost:9200/test/type1?pretty' -H 'Content-Type: application/json' -d'\r\n {\r\n \"title\": \"Great Gatsby\",\r\n \"description\": \"Great Gatsby is the book about ...\"\r\n }\r\n'\r\n\r\nsleep 1\r\n\r\ncurl --silent -XGET 'localhost:9200/test/_search?pretty' -H 'Content-Type: application/json' -d'\r\n {\r\n \"explain\": true, \r\n \"query\": {\r\n \"multi_match\": {\r\n \"query\": \"Great Gatsby\",\r\n \"type\": \"cross_fields\", \r\n \"tie_breaker\": 1, \r\n \"fields\": [\"title^0.9\", \"description^0.4\"]\r\n }\r\n }\r\n }\r\n' | (grep -q 'max of:' && echo wrong || echo correct)\r\n```\r\n", "comments": [ { "body": "Thanks @xabbu42 , I opened https://github.com/elastic/elasticsearch/pull/28935 for a quick fix that should be included in the next minor release (6.3).", "created_at": "2018-03-08T10:48:49Z" }, { "body": "Pinging @elastic/es-search-aggs", "created_at": "2018-03-09T04:30:15Z" } ], "number": 28933, "title": "multi_match type cross_fields ignores tie_breaker" }
{ "body": "This commit restores the handling of tiebreaker for multi_match\r\ncross fields query. This functionality was lost during a refactoring\r\nof the multi_match query (#25115).\r\n\r\nFixes #28933", "number": 28935, "review_comments": [], "title": "Restore tiebreaker for cross fields query" }
{ "commits": [ { "message": "Restore tiebreaker for cross fields query\n\nThis commit restores the handling of tiebreaker for multi_match\ncross fields query. This functionality was lost during a refactoring\nof the multi_match query (#25115).\n\nFixes #28933" } ], "files": [ { "diff": "@@ -82,7 +82,7 @@ public Query parse(MultiMatchQueryBuilder.Type type, Map<String, Float> fieldNam\n queryBuilder = new QueryBuilder(tieBreaker);\n break;\n case CROSS_FIELDS:\n- queryBuilder = new CrossFieldsQueryBuilder();\n+ queryBuilder = new CrossFieldsQueryBuilder(tieBreaker);\n break;\n default:\n throw new IllegalStateException(\"No such type: \" + type);\n@@ -152,8 +152,8 @@ public Query blendPhrase(PhraseQuery query, MappedFieldType type) {\n final class CrossFieldsQueryBuilder extends QueryBuilder {\n private FieldAndFieldType[] blendedFields;\n \n- CrossFieldsQueryBuilder() {\n- super(0.0f);\n+ CrossFieldsQueryBuilder(float tiebreaker) {\n+ super(tiebreaker);\n }\n \n @Override\n@@ -239,7 +239,7 @@ public Query blendPhrase(PhraseQuery query, MappedFieldType type) {\n /**\n * We build phrase queries for multi-word synonyms when {@link QueryBuilder#autoGenerateSynonymsPhraseQuery} is true.\n */\n- return MultiMatchQuery.blendPhrase(query, blendedFields);\n+ return MultiMatchQuery.blendPhrase(query, tieBreaker, blendedFields);\n }\n }\n \n@@ -307,7 +307,7 @@ static Query blendTerms(QueryShardContext context, BytesRef[] values, Float comm\n * Expand a {@link PhraseQuery} to multiple fields that share the same analyzer.\n * Returns a {@link DisjunctionMaxQuery} with a disjunction for each expanded field.\n */\n- static Query blendPhrase(PhraseQuery query, FieldAndFieldType... fields) {\n+ static Query blendPhrase(PhraseQuery query, float tiebreaker, FieldAndFieldType... fields) {\n List<Query> disjunctions = new ArrayList<>();\n for (FieldAndFieldType field : fields) {\n int[] positions = query.getPositions();\n@@ -322,7 +322,7 @@ static Query blendPhrase(PhraseQuery query, FieldAndFieldType... 
fields) {\n }\n disjunctions.add(q);\n }\n- return new DisjunctionMaxQuery(disjunctions, 0.0f);\n+ return new DisjunctionMaxQuery(disjunctions, tiebreaker);\n }\n \n @Override", "filename": "server/src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java", "status": "modified" }, { "diff": "@@ -95,17 +95,24 @@ public void testCrossFieldMultiMatchQuery() throws IOException {\n QueryShardContext queryShardContext = indexService.newQueryShardContext(\n randomInt(20), null, () -> { throw new UnsupportedOperationException(); }, null);\n queryShardContext.setAllowUnmappedFields(true);\n- Query parsedQuery = multiMatchQuery(\"banon\").field(\"name.first\", 2).field(\"name.last\", 3).field(\"foobar\").type(MultiMatchQueryBuilder.Type.CROSS_FIELDS).toQuery(queryShardContext);\n- try (Engine.Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n- Query rewrittenQuery = searcher.searcher().rewrite(parsedQuery);\n- Query tq1 = new BoostQuery(new TermQuery(new Term(\"name.first\", \"banon\")), 2);\n- Query tq2 = new BoostQuery(new TermQuery(new Term(\"name.last\", \"banon\")), 3);\n- Query expected = new DisjunctionMaxQuery(\n- Arrays.asList(\n- new MatchNoDocsQuery(\"unknown field foobar\"),\n- new DisjunctionMaxQuery(Arrays.asList(tq2, tq1), 0f)\n- ), 0f);\n- assertEquals(expected, rewrittenQuery);\n+ for (float tieBreaker : new float[] {0.0f, 0.5f}) {\n+ Query parsedQuery = multiMatchQuery(\"banon\")\n+ .field(\"name.first\", 2)\n+ .field(\"name.last\", 3).field(\"foobar\")\n+ .type(MultiMatchQueryBuilder.Type.CROSS_FIELDS)\n+ .tieBreaker(tieBreaker)\n+ .toQuery(queryShardContext);\n+ try (Engine.Searcher searcher = indexService.getShard(0).acquireSearcher(\"test\")) {\n+ Query rewrittenQuery = searcher.searcher().rewrite(parsedQuery);\n+ Query tq1 = new BoostQuery(new TermQuery(new Term(\"name.first\", \"banon\")), 2);\n+ Query tq2 = new BoostQuery(new TermQuery(new Term(\"name.last\", \"banon\")), 3);\n+ Query expected = new DisjunctionMaxQuery(\n+ Arrays.asList(\n+ new MatchNoDocsQuery(\"unknown field foobar\"),\n+ new DisjunctionMaxQuery(Arrays.asList(tq2, tq1), tieBreaker)\n+ ), tieBreaker);\n+ assertEquals(expected, rewrittenQuery);\n+ }\n }\n }\n ", "filename": "server/src/test/java/org/elasticsearch/index/search/MultiMatchQueryTests.java", "status": "modified" } ] }
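A brief illustration of what the restored `tie_breaker` changes, for readers who do not want to trace the Lucene query tree: a disjunction-max query combines per-field scores as the best score plus `tie_breaker` times the remaining scores. The sketch below is a standalone, hypothetical example (the class and method names are invented); it is not Elasticsearch or Lucene code, only the combining arithmetic under that assumption.

```java
// Hypothetical helper, not part of Elasticsearch or Lucene: shows how a
// dis-max score is combined from per-field scores for a given tie_breaker.
public final class DisMaxScoreDemo {

    // Standard dis-max combination: best score plus tieBreaker times the rest.
    static float disMaxScore(float tieBreaker, float... perFieldScores) {
        float max = 0.0f;
        float sum = 0.0f;
        for (float score : perFieldScores) {
            max = Math.max(max, score);
            sum += score;
        }
        return max + tieBreaker * (sum - max);
    }

    public static void main(String[] args) {
        // Made-up per-field scores for a two-field cross_fields query.
        float titleScore = 2.0f;
        float descriptionScore = 0.5f;
        System.out.println(disMaxScore(0.0f, titleScore, descriptionScore)); // 2.0: only the best field counts
        System.out.println(disMaxScore(1.0f, titleScore, descriptionScore)); // 2.5: scores are effectively summed
    }
}
```

With `tie_breaker = 0.0f` (the value previously hard-coded in `CrossFieldsQueryBuilder`) only the best-scoring field contributes, which matches the "max of:" explanation the reporter observed; with `tie_breaker = 1` the per-field scores are summed, which is the behavior the reproduce script checks for.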
{ "body": "elasticsearch-2.0.0.deb\n\nelasticsearch/distribution/src/main/packaging/scripts/preinst is plain wrong, because (at least on debian/ubuntu systems) the ES_ENV_FILE (/etc/default/elasticsearch) is not yet in place/configured and therefore this script now modifies the system in a manner, which might not be allowed per system/org policy!!!\n\nIf you wanna try, e.g. on ubuntu:\n\n```\napt-get purge elasticsearch\ndpkg --unpack elasticsearch-2.0.0.deb\nsed -r -e '/^#?ES_(USER|GROUP)=/ { s,^#,, ; s,=.*,=esearch, }' -i /etc/default/elasticsearch.dpkg-new\nsed -e '/rmdir/ s,$, || true,' -i /var/lib/dpkg/info/elasticsearch.postrm # yepp, buggy\ndpkg --configure elasticsearch\n```\n", "comments": [ { "body": "Hi @jelmd \n\nSorry to be obtuse, but could you talk me through the problem here? I'm not following.\n", "created_at": "2015-11-17T17:02:23Z" }, { "body": "Hi,\n\nHave a look at http://man7.org/linux/man-pages/man1/dpkg.1.html#ACTIONS , option -i : here you can see the single steps executed, when a package gets installed. Obviously the package's preinst script gets executed BEFORE the package puts /etc/default/elasticsearch into place (this happens in step 6) aka configuration). However, in line 19 of the preinst script you are trying to source in this file and thus you created a chicken and egg problem. \n\nLast but not least now it is not possible anymore, to do a `dpkg --unpack e*.deb`, fix the script//etc/default/elasticsearch before it gets installed, i.e. when one runs finally the `dpkg --configure e*` ...\n\nThe 1.x packages did it right, 2.0.0 packages are plain wrong.\n", "created_at": "2015-11-23T04:50:58Z" }, { "body": "@jelmd I took a glance at the file you mentioned, and it seems ok to me.\n\nThe way I read it, the env file is only loaded if it exists. \n\nIs the problem that the package would create a user 'elasticsearch' because the value ES_USER is unlikely to be modified by the operator at the time of `preinst` ?\n", "created_at": "2016-06-09T04:42:17Z" }, { "body": "I think this was fixed by #28918 (@jasontedor please correct me if I'm wrong). `preinst` no longer sources any files.", "created_at": "2018-03-13T07:20:44Z" }, { "body": "The problem was not fixed. Now it is even worse, because it uses hard coded values instead for no reason. ", "created_at": "2018-03-13T09:13:00Z" }, { "body": "@jelmd We want to listen to you and we are open to feedback, but I encourage you to understand that we do not make any changes for \"no reason\" and that you state reasoned objections to those changes (namely, a concrete problem that you're trying to solve that can not be solved because of those changes).", "created_at": "2018-03-13T10:15:12Z" }, { "body": "Hmmm, thought the issue was explained well - please scroll to the top of the page. Please tell me, what is missing/not understood.\r\nBTW: \"no reason\" refers to: if the SW would be properly assembled and related scripts properly written, there would be no reason to throw away required customizations like this in a BFH manner. ", "created_at": "2018-03-13T14:26:28Z" }, { "body": "@jelmd Yes, it's unclear. 
The initial problem statement is:\r\n\r\n> elasticsearch/distribution/src/main/packaging/scripts/preinst is plain wrong, because (at least on debian/ubuntu systems) the ES_ENV_FILE (/etc/default/elasticsearch) is not yet in place/configured and therefore this script now modifies the system in a manner, which might not be allowed per system/org policy!!!\r\n\r\nWe no longer source the environment file, therefore it does not matter whether or not it is in place. You say something about violating system policy, but it is not clear what is being violated, and how sourcing the environment file relates to that. Your next [comment](https://github.com/elastic/elasticsearch/issues/14630#issuecomment-158853711) elaborates only about the problems that arise from sourcing the environment file before it exists, which again does not matter if we no longer source the environment file. Then you [say](https://github.com/elastic/elasticsearch/issues/14630#issuecomment-372596456):\r\n\r\n> The problem was not fixed.\r\n\r\nwhich again, as far as I can discern from your problem statement is that we source the environment file before it exists. Except, no more with the change that was referenced.\r\n\r\nWhich brings me to: if this change did not solve the problem, I don't understand the problem which is why I ask you for a concrete problem statement that is not solved by these changes.", "created_at": "2018-03-15T02:54:01Z" } ], "number": 14630, "title": " elasticsearch/distribution/src/main/packaging/scripts/preinst is plain wrong" }
{ "body": "Previously we allowed a lot of customization of Elasticsearch during package installation (e.g., the username and group). This customization was achieved by sourcing the env script (e.g., /etc/sysconfig/elasticsearch) during installation. Since we no longer allow such flexibility, we do not need to source these env scripts during package installation and removal.\r\n\r\nRelates #14630\r\n", "number": 28918, "review_comments": [], "title": "Stop sourcing scripts during installation/removal" }
{ "commits": [ { "message": "Stop sourcing scripts during installation/removal\n\nPreviously we allowed a lot of customization of Elasticsearch during\npackage installation (e.g., the username and group). This customization\nwas achieved by sourcing the env script (e.g.,\n/etc/sysconfig/elasticsearch) during installation. Since we no longer\nallow such flexibility, we do not need to source these env scripts\nduring package installation and removal." }, { "message": "Fix removal of dirs" } ], "files": [ { "diff": "@@ -8,14 +8,6 @@\n # $1=0 : indicates a removal\n # $1=1 : indicates an upgrade\n \n-\n-\n-# Source the default env file\n-ES_ENV_FILE=\"${path.env}\"\n-if [ -f \"$ES_ENV_FILE\" ]; then\n- . \"$ES_ENV_FILE\"\n-fi\n-\n IS_UPGRADE=false\n \n case \"$1\" in", "filename": "distribution/packages/src/common/scripts/postinst", "status": "modified" }, { "diff": "@@ -9,9 +9,6 @@\n # $1=0 : indicates a removal\n # $1=1 : indicates an upgrade\n \n-\n-\n-SOURCE_ENV_FILE=true\n REMOVE_DIRS=false\n REMOVE_USER_AND_GROUP=false\n \n@@ -24,7 +21,6 @@ case \"$1\" in\n \n purge)\n REMOVE_USER_AND_GROUP=true\n- SOURCE_ENV_FILE=false\n ;;\n failed-upgrade|abort-install|abort-upgrade|disappear|upgrade|disappear)\n ;;\n@@ -45,49 +41,34 @@ case \"$1\" in\n ;;\n esac\n \n-# Sets the default values for elasticsearch variables used in this script\n-LOG_DIR=\"/var/log/elasticsearch\"\n-PLUGINS_DIR=\"/usr/share/elasticsearch/plugins\"\n-PID_DIR=\"/var/run/elasticsearch\"\n-DATA_DIR=\"/var/lib/elasticsearch\"\n-ES_PATH_CONF=\"/etc/elasticsearch\"\n-\n-# Source the default env file\n-if [ \"$SOURCE_ENV_FILE\" = \"true\" ]; then\n- ES_ENV_FILE=\"${path.env}\"\n- if [ -f \"$ES_ENV_FILE\" ]; then\n- . \"$ES_ENV_FILE\"\n- fi\n-fi\n-\n if [ \"$REMOVE_DIRS\" = \"true\" ]; then\n \n- if [ -d \"$LOG_DIR\" ]; then\n+ if [ -d /var/log/elasticsearch ]; then\n echo -n \"Deleting log directory...\"\n- rm -rf \"$LOG_DIR\"\n+ rm -rf /var/log/elasticsearch\n echo \" OK\"\n fi\n \n- if [ -d \"$PLUGINS_DIR\" ]; then\n+ if [ -d /usr/share/elasticsearch/plugins ]; then\n echo -n \"Deleting plugins directory...\"\n- rm -rf \"$PLUGINS_DIR\"\n+ rm -rf /usr/share/elasticsearch/plugins\n echo \" OK\"\n fi\n \n- if [ -d \"$PID_DIR\" ]; then\n+ if [ -d /var/run/elasticsearch ]; then\n echo -n \"Deleting PID directory...\"\n- rm -rf \"$PID_DIR\"\n+ rm -rf /var/run/elasticsearch\n echo \" OK\"\n fi\n \n # Delete the data directory if and only if empty\n- if [ -d \"$DATA_DIR\" ]; then\n- rmdir --ignore-fail-on-non-empty \"$DATA_DIR\"\n+ if [ -d /var/lib/elasticsearch ]; then\n+ rmdir --ignore-fail-on-non-empty /var/lib/elasticsearch\n fi\n \n # delete the conf directory if and only if empty\n- if [ -d \"$ES_PATH_CONF\" ]; then\n- rmdir --ignore-fail-on-non-empty \"$ES_PATH_CONF\"\n+ if [ -d /etc/elasticsearch ]; then\n+ rmdir --ignore-fail-on-non-empty /etc/elasticsearch\n fi\n \n fi", "filename": "distribution/packages/src/common/scripts/postrm", "status": "modified" }, { "diff": "@@ -9,14 +9,6 @@\n # $1=1 : indicates an new install\n # $1=2 : indicates an upgrade\n \n-\n-\n-# Source the default env file\n-ES_ENV_FILE=\"${path.env}\"\n-if [ -f \"$ES_ENV_FILE\" ]; then\n- . 
\"$ES_ENV_FILE\"\n-fi\n-\n case \"$1\" in\n \n # Debian ####################################################", "filename": "distribution/packages/src/common/scripts/preinst", "status": "modified" }, { "diff": "@@ -9,8 +9,6 @@\n # $1=0 : indicates a removal\n # $1=1 : indicates an upgrade\n \n-\n-\n STOP_REQUIRED=false\n REMOVE_SERVICE=false\n \n@@ -81,5 +79,4 @@ if [ \"$REMOVE_SERVICE\" = \"true\" ]; then\n fi\n fi\n \n-\n ${scripts.footer}", "filename": "distribution/packages/src/common/scripts/prerm", "status": "modified" } ] }
{ "body": "<!--\r\nGitHub is reserved for bug reports and feature requests. The best place\r\nto ask a general question is at the Elastic Discourse forums at\r\nhttps://discuss.elastic.co. If you are in fact posting a bug report or\r\na feature request, please include one and only one of the below blocks\r\nin your new issue. Note that whether you're filing a bug report or a\r\nfeature request, ensure that your submission is for an\r\n[OS that we support](https://www.elastic.co/support/matrix#show_os).\r\nBug reports on an OS that we do not support or feature requests\r\nspecific to an OS that we do not support will be closed.\r\n-->\r\n\r\n<!--\r\nIf you are filing a bug report, please remove the below feature\r\nrequest block and provide responses for all of the below items.\r\n-->\r\n\r\n**Elasticsearch version**: 5.0.1\r\n\r\n**Plugins installed**: []\r\n\r\n**JVM version**: OpenJDK 1.8.0_111\r\n\r\n**OS version**: Ubuntu 16.04.1 LTS\r\n\r\n**Description of the problem including expected versus actual behavior**:\r\nIf query parameters are passed and the request URL ends with `%`, no response is returned and Netty throws an exception (noticed in ES log)\r\n\r\n**Steps to reproduce**:\r\n 1. Do `GET /?%`\r\n 2. Notice that no response is returned.\r\n 3. Notice that Netty threw an exception by looking in ES's log. \r\n\r\n**Provide logs (if relevant)**:\r\n```\r\n[2016-12-05T14:30:07,049][WARN ][o.e.h.n.Netty4HttpServerTransport] [results-qa3] caught exception while handling client http traffic, closing connection [id: 0xd9952abd, L:/192.168.62.163:9201 - R:/10.4.20.252:52107]\r\njava.lang.IllegalArgumentException: unterminated escape sequence at end of string: %\r\n at org.elasticsearch.rest.RestUtils.decode(RestUtils.java:171) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.rest.RestUtils.decodeComponent(RestUtils.java:139) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.rest.RestUtils.decodeComponent(RestUtils.java:103) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.rest.RestUtils.decodeQueryString(RestUtils.java:77) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.rest.RestRequest.<init>(RestRequest.java:54) ~[elasticsearch-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.http.netty4.Netty4HttpRequest.<init>(Netty4HttpRequest.java:44) ~[transport-netty4-5.0.1.jar:5.0.1]\r\n at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:67) ~[transport-netty4-5.0.1.jar:5.0.1]\r\n at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:66) [transport-netty4-5.0.1.jar:5.0.1]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) 
[netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at org.elasticsearch.http.netty4.cors.Netty4CorsHandler.channelRead(Netty4CorsHandler.java:76) [transport-netty4-5.0.1.jar:5.0.1]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) [netty-codec-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at 
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:610) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:513) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:467) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:437) [netty-transport-4.1.5.Final.jar:4.1.5.Final]\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) [netty-common-4.1.5.Final.jar:4.1.5.Final]\r\n at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]\r\n```\r\n", "comments": [ { "body": "Yeah, you certainly don't expect a good response, but you expect some kind of response....", "created_at": "2016-12-05T14:55:23Z" }, { "body": "@nik9000 i would expect the same response i'm getting from `GET /%`:\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"unterminated escape sequence at end of string: %\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"unterminated escape sequence at end of string: %\"\r\n },\r\n \"status\": 400\r\n}\r\n```", "created_at": "2016-12-05T15:20:48Z" }, { "body": "> expect the same response i'm getting from `GET /%`\r\n\r\nPretty much, yes.", "created_at": "2016-12-05T15:26:57Z" }, { "body": "The difference is that `%` in `/%` is part of the path, and `%` in `/?%` is part of a parameter.", "created_at": "2016-12-05T15:32:42Z" }, { "body": "This bug turned out to be really serious in that there were leaked byte buffers in hand here. A lesson for future triaging.", "created_at": "2018-03-28T20:38:57Z" } ], "number": 21974, "title": "No response returned when URL contains query parameters and ends with %" }
{ "body": "Previously if the `path` contained incorrectly encoded sequence, an error was returned : \r\n```\r\ncurl -XGET http://localhost:9200/index%?pretty\r\n```\r\n```\r\n{\r\n \"error\" : {\r\n \"root_cause\" : [\r\n {\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"unterminated escape sequence at end of string: index%\"\r\n }\r\n ],\r\n \"type\" : \"illegal_argument_exception\",\r\n \"reason\" : \"unterminated escape sequence at end of string: index%\"\r\n },\r\n \"status\" : 400\r\n}\r\n```\r\nBut if the incorrect sequence was part of the `parameters`, **no** response was returned.\r\n```\r\ncurl -XGET http://localhost:9200/index?pretty%\r\n```\r\n```\r\ncurl: (52) Empty reply from server\r\n```\r\nThis PR analyze the `parameters` and returns an error when there is a wrong encoding sequence in the url params ( The errors are consistent with the ones returned for incorrect `path` )\r\n```\r\ncurl -XGET http://localhost:9200/index?pretty%\r\n```\r\n```\r\n{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"unterminated escape sequence at end of string: pretty%\"\r\n }\r\n ],\r\n \"type\": \"illegal_argument_exception\",\r\n \"reason\": \"unterminated escape sequence at end of string: pretty%\"\r\n },\r\n \"status\": 400\r\n}\r\n``` \r\nFixes : #21974", "number": 28909, "review_comments": [ { "body": "Adding this mutable field (and thus making `RestRequest` not immutable) is not attractive.", "created_at": "2018-03-06T16:11:25Z" }, { "body": "@jasontedor thanks!\r\n Have another look ?", "created_at": "2018-03-06T16:41:59Z" }, { "body": "This change still bothers me a lot. Again, I appreciate the motivation for it but the implementation that we are being forced into here leaves a lot to be desired (I am not faulting you for it, we are somewhat painted into a corner here). Another problem here is that now the object is a garbage state, we have no idea what the state of `params` is here yet we are allowing construction of this object to complete as if nothing happened. This is dangerous and I think that we should not do it.", "created_at": "2018-03-08T11:01:53Z" } ], "title": "Send REST response for incorrectly encoded url params" }
{ "commits": [ { "message": "Send REST response for incorrectly encoded url params" }, { "message": "addressing reviewers comments" } ], "files": [ { "diff": "@@ -77,7 +77,11 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except\n new Netty4HttpChannel(serverTransport, httpRequest, pipelinedRequest, detailedErrorsEnabled, threadContext);\n \n if (request.decoderResult().isSuccess()) {\n- serverTransport.dispatchRequest(httpRequest, channel);\n+ if (httpRequest.getException() != null) {\n+ serverTransport.dispatchBadRequest(httpRequest, channel, httpRequest.getException());\n+ } else {\n+ serverTransport.dispatchRequest(httpRequest, channel);\n+ }\n } else {\n assert request.decoderResult().isFailure();\n serverTransport.dispatchBadRequest(httpRequest, channel, request.decoderResult().cause());", "filename": "modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpRequestHandler.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@ public abstract class RestRequest implements ToXContent.Params {\n private final String rawPath;\n private final Set<String> consumedParams = new HashSet<>();\n private final SetOnce<XContentType> xContentType = new SetOnce<>();\n+ private final SetOnce<Exception> exception = new SetOnce<>();\n \n /**\n * Creates a new RestRequest\n@@ -74,12 +75,18 @@ public RestRequest(NamedXContentRegistry xContentRegistry, String uri, Map<Strin\n this.xContentRegistry = xContentRegistry;\n final Map<String, String> params = new HashMap<>();\n int pathEndPos = uri.indexOf('?');\n+ Exception parsingQueryStringException= null;\n if (pathEndPos < 0) {\n this.rawPath = uri;\n } else {\n this.rawPath = uri.substring(0, pathEndPos);\n- RestUtils.decodeQueryString(uri, pathEndPos + 1, params);\n+ try {\n+ RestUtils.decodeQueryString(uri, pathEndPos + 1, params);\n+ } catch (Exception e) {\n+ parsingQueryStringException = e;\n+ }\n }\n+ this.exception.set(parsingQueryStringException);\n this.params = params;\n this.headers = Collections.unmodifiableMap(headers);\n final List<String> contentType = getAllHeaderValues(\"Content-Type\");\n@@ -206,6 +213,10 @@ public SocketAddress getLocalAddress() {\n return null;\n }\n \n+ public Exception getException() {\n+ return exception.get();\n+ }\n+\n public final boolean hasParam(String key) {\n return params.containsKey(key);\n }", "filename": "server/src/main/java/org/elasticsearch/rest/RestRequest.java", "status": "modified" }, { "diff": "@@ -42,11 +42,13 @@\n import java.io.FileNotFoundException;\n import java.io.IOException;\n import java.util.Collections;\n+import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.ElasticsearchExceptionTests.assertDeepEquals;\n import static org.hamcrest.Matchers.contains;\n import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.not;\n import static org.hamcrest.Matchers.notNullValue;\n \n@@ -197,6 +199,58 @@ public BytesReference content() {\n assertThat(content, containsString(\"\\\"status\\\":\" + 400));\n }\n \n+ public void testResponseWhenParametersEncodingError() throws IOException {\n+ parametersEncodingError(\"path?param=value%\", \"unterminated escape sequence at end of string: value%\");\n+ parametersEncodingError(\"path?param%=value%\", \"unterminated escape sequence at end of string: param%\");\n+ parametersEncodingError(\"path?param=value%a\", \"partial escape sequence at end of string: value%a\");\n+ 
parametersEncodingError(\"path?param%a=value%a\", \"partial escape sequence at end of string: param%a\");\n+ parametersEncodingError(\"path?param%aZ=value%a\", \"invalid escape sequence `%aZ' at index 5 of: param%aZ\");\n+ }\n+\n+ private void parametersEncodingError(String uri, String errorString) throws IOException {\n+ final RestRequest request = new TestParamsRequest(NamedXContentRegistry.EMPTY, uri, Collections.emptyMap());\n+ assertThat(request.getException().getClass(), equalTo(IllegalArgumentException.class));\n+ assertThat(request.getException().getMessage(), equalTo(errorString));\n+\n+ final RestChannel channel = new DetailedExceptionRestChannel(request);\n+ final BytesRestResponse response = new BytesRestResponse(channel, request.getException());\n+ assertNotNull(response.content());\n+ final String content = response.content().utf8ToString();\n+ assertThat(content, containsString(\"\\\"type\\\":\\\"illegal_argument_exception\\\"\"));\n+ assertThat(content, containsString(\"\\\"reason\\\":\\\"\" + errorString + \"\\\"\"));\n+ assertThat(content, containsString(\"\\\"status\\\":\" + 400));\n+ }\n+\n+ private class TestParamsRequest extends RestRequest {\n+\n+ private final String uri;\n+\n+ TestParamsRequest(NamedXContentRegistry xContentRegistry, String uri, Map<String, List<String>> headers) {\n+ super(xContentRegistry, uri, headers);\n+ this.uri = uri;\n+ }\n+\n+ @Override\n+ public Method method() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String uri() {\n+ return uri;\n+ }\n+\n+ @Override\n+ public boolean hasContent() {\n+ return false;\n+ }\n+\n+ @Override\n+ public BytesReference content() {\n+ return null;\n+ }\n+ }\n+\n public void testResponseWhenInternalServerError() throws IOException {\n final RestRequest request = new FakeRestRequest();\n final RestChannel channel = new DetailedExceptionRestChannel(request);", "filename": "server/src/test/java/org/elasticsearch/rest/BytesRestResponseTests.java", "status": "modified" } ] }
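To make the new test expectations easier to follow, here is a minimal, self-contained sketch of a percent-decoder that fails in the same three ways the assertions above check for. It is not the actual `org.elasticsearch.rest.RestUtils` implementation (which decodes UTF-8 byte sequences and handles `+`); the class name and the simplified single-byte handling are assumptions made only for illustration.

```java
// Illustrative decoder only; the real logic lives in RestUtils. The three
// failure branches correspond to the messages asserted in the test:
//   "value%"   -> unterminated escape sequence (nothing after '%')
//   "value%a"  -> partial escape sequence (only one character after '%')
//   "param%aZ" -> invalid escape sequence (second character is not a hex digit)
public final class PercentDecodeDemo {

    static String decode(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c != '%') {
                out.append(c);
                continue;
            }
            if (i == s.length() - 1) {
                throw new IllegalArgumentException("unterminated escape sequence at end of string: " + s);
            }
            if (i == s.length() - 2) {
                throw new IllegalArgumentException("partial escape sequence at end of string: " + s);
            }
            int hi = Character.digit(s.charAt(i + 1), 16);
            int lo = Character.digit(s.charAt(i + 2), 16);
            if (hi == -1 || lo == -1) {
                throw new IllegalArgumentException(
                    "invalid escape sequence `" + s.substring(i, i + 3) + "' at index " + i + " of: " + s);
            }
            // Simplification: treats the escaped byte as a single char; the real
            // decoder accumulates bytes and decodes them as UTF-8.
            out.append((char) ((hi << 4) + lo));
            i += 2;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("param%20value")); // prints "param value"
        try {
            decode("value%");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // unterminated escape sequence at end of string: value%
        }
    }
}
```

Under this sketch, `"param%aZ"` fails at index 5 with the `%aZ` escape reported, which is exactly the shape of the message checked by `testResponseWhenParametersEncodingError`; before the PR, such a failure during query-string parsing escaped out of the `RestRequest` constructor and no HTTP response was sent at all.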