issue: dict, pr: dict, pr_details: dict
{ "body": "Ran into this error while regression testing master.\n\n```\nSuite: org.elasticsearch.index.query.BoostingQueryBuilderTests\n 2> REPRODUCE WITH: gradle :core:test -Dtests.seed=1817BD8B51E70A2 -Dtests.class=org.elasticsearch.index.query.BoostingQueryBuilderTests -Dtests.method=\"testToQuery\" -Des.logger.level=WARN -Dtests.security.manager=true -Dtests.locale=es_EC -Dtests.timezone=America/Martinique\nFAILURE 0.01s J1 | BoostingQueryBuilderTests.testToQuery <<< FAILURES!\n > Throwable #1: java.lang.AssertionError: \n > Expected: null\n > but: was <mapped_boolean:F^0.1>\n > at __randomizedtesting.SeedInfo.seed([1817BD8B51E70A2:F67A79E6C49DB548]:0)\n > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)\n > at org.elasticsearch.index.query.AbstractQueryTestCase.assertLuceneQuery(AbstractQueryTestCase.java:528)\n > at org.elasticsearch.index.query.AbstractQueryTestCase.testToQuery(AbstractQueryTestCase.java:476)\n > at java.lang.Thread.run(Thread.java:745)\n```\n", "comments": [ { "body": "Great catch, found the root cause for this. When we set up those query builder tests we randomly assign the `_name` field to a query. In rare cases we might accidentally pick the same name for subqueries and the main query under test, like in this case:\n\n```\n{\n \"boosting\" : {\n \"positive\" : { },\n \"negative\" : {\n \"term\" : {\n \"mapped_boolean\" : {\n \"value\" : false,\n \"boost\" : 0.1,\n \"_name\" : \"q\"\n }\n }\n },\n \"negative_boost\" : 0.6666667,\n \"boost\" : 1.0,\n \"_name\" : \"q\"\n }\n}\n```\n\nWhen storing the queries by name in the shard context namedQueries map, this can lead to conflicts. Will change the test setup so this cannot happen any longer.\n", "created_at": "2015-11-16T14:16:14Z" } ], "number": 14746, "title": "Reproducible - null Query in BoostingQueryBuilderTests throws assertion" }
{ "body": "In AbstractQueryTestCase we randomly add the `_name` property to some of the queries. There are exceptional cases where we assign the same name to two queries in the setup which leads to test failures later. Adding a counter to the base tests that gets appended to all random query names will avoid this name clashes.\n\nCloses #14746 \n", "number": 14775, "review_comments": [], "title": "Add unique id to query names to avoid naming conflicts" }
{ "commits": [ { "message": "Tests: Add unique id to query names to avoid naming conflicts\n\nIn AbstractQueryTestCase we randomly add the `_name` property to\nsome of the queries. While this generally works, there are exceptional\ncases where we assign the same name to two queries in the setup which\nleads to test failures later. This PR adds an increasing counter value\nto the base tests that gets appended to all random query names to\navoid this name clashes." } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;\n import com.fasterxml.jackson.core.JsonParseException;\n import com.fasterxml.jackson.core.io.JsonStringEncoder;\n+\n import org.apache.lucene.search.BoostQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n@@ -128,6 +129,7 @@ public abstract class AbstractQueryTestCase<QB extends AbstractQueryBuilder<QB>>\n private static IndicesQueriesRegistry indicesQueriesRegistry;\n private static QueryShardContext queryShardContext;\n private static IndexFieldDataService indexFieldDataService;\n+ private static int queryNameId = 0;\n \n \n protected static QueryShardContext queryShardContext() {\n@@ -316,12 +318,21 @@ protected final QB createTestQueryBuilder() {\n query.boost(2.0f / randomIntBetween(1, 20));\n }\n if (randomBoolean()) {\n- query.queryName(randomAsciiOfLengthBetween(1, 10));\n+ query.queryName(createUniqueRandomName());\n }\n }\n return query;\n }\n \n+ /**\n+ * make sure query names are unique by suffixing them with increasing counter\n+ */\n+ private static String createUniqueRandomName() {\n+ String queryName = randomAsciiOfLengthBetween(1, 10) + queryNameId;\n+ queryNameId++;\n+ return queryName;\n+ }\n+\n /**\n * Create the query that is being tested\n */", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" } ] }
{ "body": "If you try to use extended_bounds a request which contains the .kibana index the extended bounds are ignored. See the following sense script to reproduce (must be run on a cluster with a .kibana index to reproduce the bug):\n\n``` js\nPOST index-1/doc/1\n{\n \"@timestamp\": \"2015-01-01\"\n}\nPOST index-2/doc/1\n{\n \"@timestamp\": \"2013-01-01\"\n}\nPOST index-3/doc/1\n{\n \"@timestamp\": \"2011-01-01\"\n}\nPOST index-4/doc/1\n{\n \"@timestamp\": \"2009-01-01\"\n}\n\n# Correctly returns buckets from 1970 to 2015\nGET /index-*/_search\n{\n \"aggs\": {\n \"series\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"1y\",\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 1447407264268\n },\n \"min_doc_count\": 0\n }\n }\n },\n \"size\": 0\n}\n\n# Only return buckets between 2009 and 2015 (extended_bounds is ignored)\nGET /index-*,.kibana/_search\n{\n \"aggs\": {\n \"series\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"1y\",\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 1447407264268\n },\n \"min_doc_count\": 0\n }\n }\n },\n \"size\": 0\n}\n```\n", "comments": [], "number": 14735, "title": "extended_bounds does not play nicely with .kibana index" }
{ "body": "This fixes an issue where if the field for the aggregation was unmapped the extended bounds would get dropped and the resulting buckets would not cover the extended bounds requested.\n\nCloses #14735\n", "number": 14742, "review_comments": [], "title": "Pass extended bounds into HistogramAggregator when creating an unmapped aggregator" }
{ "commits": [ { "message": "Aggregations: Pass extended bounds into HistogramAggregator when creating an unmapped aggregator\n\nThis fixes an issue where if the field for the aggregation was unmapped the extended bounds would get dropped and the resulting buckets would not cover the extended bounds requested.\n\nCloses #14735" } ], "files": [ { "diff": "@@ -173,7 +173,7 @@ public long minDocCount() {\n @Override\n protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) throws IOException {\n- return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, null, null, config.formatter(),\n+ return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, extendedBounds, null, config.formatter(),\n histogramFactory, aggregationContext, parent, pipelineAggregators, metaData);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java", "status": "modified" }, { "diff": "@@ -777,7 +777,6 @@ public void testNoBucketsInHistogram() {\n .prepareSearch(\"idx\").setTypes(\"type\")\n .addAggregation(\n histogram(\"histo\").field(\"test\").interval(interval)\n- .extendedBounds(0L, (long) (interval * (numBuckets - 1)))\n .subAggregation(randomMetric(\"the_metric\", VALUE_FIELD))\n .subAggregation(movingAvg(\"movavg_counts\")\n .window(windowSize)\n@@ -801,7 +800,6 @@ public void testNoBucketsInHistogramWithPredict() {\n .prepareSearch(\"idx\").setTypes(\"type\")\n .addAggregation(\n histogram(\"histo\").field(\"test\").interval(interval)\n- .extendedBounds(0L, (long) (interval * (numBuckets - 1)))\n .subAggregation(randomMetric(\"the_metric\", VALUE_FIELD))\n .subAggregation(movingAvg(\"movavg_counts\")\n .window(windowSize)", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/moving/avg/MovAvgIT.java", "status": "modified" }, { "diff": "@@ -892,6 +892,39 @@ public void testPartiallyUnmapped() throws Exception {\n }\n }\n \n+ public void testPartiallyUnmappedWithExtendedBounds() throws Exception {\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\", \"idx_unmapped\")\n+ .addAggregation(\n+ histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(interval)\n+ .extendedBounds((long) -1 * 2 * interval, (long) valueCounts.length * interval)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? 
extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(numValueBuckets + 3));\n+\n+ Histogram.Bucket bucket = buckets.get(0);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) -1 * 2 * interval));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+\n+ bucket = buckets.get(1);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) -1 * interval));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+\n+ for (int i = 2; i < numValueBuckets + 2; ++i) {\n+ bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) (i - 2) * interval));\n+ assertThat(bucket.getDocCount(), equalTo(valueCounts[i - 2]));\n+ }\n+ }\n+\n public void testEmptyAggregation() throws Exception {\n SearchResponse searchResponse = client().prepareSearch(\"empty_bucket_idx\")\n .setQuery(matchAllQuery())", "filename": "plugins/lang-groovy/src/test/java/org/elasticsearch/messy/tests/HistogramTests.java", "status": "modified" } ] }
{ "body": "If you try to use extended_bounds a request which contains the .kibana index the extended bounds are ignored. See the following sense script to reproduce (must be run on a cluster with a .kibana index to reproduce the bug):\n\n``` js\nPOST index-1/doc/1\n{\n \"@timestamp\": \"2015-01-01\"\n}\nPOST index-2/doc/1\n{\n \"@timestamp\": \"2013-01-01\"\n}\nPOST index-3/doc/1\n{\n \"@timestamp\": \"2011-01-01\"\n}\nPOST index-4/doc/1\n{\n \"@timestamp\": \"2009-01-01\"\n}\n\n# Correctly returns buckets from 1970 to 2015\nGET /index-*/_search\n{\n \"aggs\": {\n \"series\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"1y\",\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 1447407264268\n },\n \"min_doc_count\": 0\n }\n }\n },\n \"size\": 0\n}\n\n# Only return buckets between 2009 and 2015 (extended_bounds is ignored)\nGET /index-*,.kibana/_search\n{\n \"aggs\": {\n \"series\": {\n \"date_histogram\": {\n \"field\": \"@timestamp\",\n \"interval\": \"1y\",\n \"extended_bounds\": {\n \"min\": 0,\n \"max\": 1447407264268\n },\n \"min_doc_count\": 0\n }\n }\n },\n \"size\": 0\n}\n```\n", "comments": [], "number": 14735, "title": "extended_bounds does not play nicely with .kibana index" }
{ "body": "This fixes an issue where if the field for the aggregation was unmapped the extended bounds would get dropped and the resulting buckets would not cover the extended bounds requested.\n\nCloses #14735\n", "number": 14736, "review_comments": [], "title": "Pass extended bounds into HistogramAggregator when creating an unmapped aggregator" }
{ "commits": [ { "message": "Aggregations: Pass extended bounds into HistogramAggregator when creating an unmapped aggregator\n\nThis fixes an issue where if the field for the aggregation was unmapped the extended bounds would get dropped and the resulting buckets would not cover the extended bounds requested.\n\nCloses #14735" } ], "files": [ { "diff": "@@ -173,7 +173,7 @@ public long minDocCount() {\n @Override\n protected Aggregator createUnmapped(AggregationContext aggregationContext, Aggregator parent, List<PipelineAggregator> pipelineAggregators,\n Map<String, Object> metaData) throws IOException {\n- return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, null, null, config.formatter(),\n+ return new HistogramAggregator(name, factories, rounding, order, keyed, minDocCount, extendedBounds, null, config.formatter(),\n histogramFactory, aggregationContext, parent, pipelineAggregators, metaData);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramAggregator.java", "status": "modified" }, { "diff": "@@ -892,6 +892,39 @@ public void testPartiallyUnmapped() throws Exception {\n }\n }\n \n+ public void testPartiallyUnmappedWithExtendedBounds() throws Exception {\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\", \"idx_unmapped\")\n+ .addAggregation(\n+ histogram(\"histo\").field(SINGLE_VALUED_FIELD_NAME).interval(interval)\n+ .extendedBounds((long) -1 * 2 * interval, (long) valueCounts.length * interval)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Histogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(numValueBuckets + 3));\n+\n+ Histogram.Bucket bucket = buckets.get(0);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) -1 * 2 * interval));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+\n+ bucket = buckets.get(1);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) -1 * interval));\n+ assertThat(bucket.getDocCount(), equalTo(0l));\n+\n+ for (int i = 2; i < numValueBuckets + 2; ++i) {\n+ bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(((Number) bucket.getKey()).longValue(), equalTo((long) (i - 2) * interval));\n+ assertThat(bucket.getDocCount(), equalTo(valueCounts[i - 2]));\n+ }\n+ }\n+\n public void testEmptyAggregation() throws Exception {\n SearchResponse searchResponse = client().prepareSearch(\"empty_bucket_idx\")\n .setQuery(matchAllQuery())", "filename": "plugins/lang-groovy/src/test/java/org/elasticsearch/messy/tests/HistogramTests.java", "status": "modified" } ] }
{ "body": "If you force your cluster to remain in a Red state on startup, the cluster health and cluster stats disagree on the status.\n\nE.g. Set `recover_after_nodes: 10` in a one-node cluster. Cluster never recovers because the number of nodes is not satisfied. `/_cluster/health` shows Red, `/_cluster/stats` shows Green\n\n``` bash\nGET _cluster/health\n\n{\n \"cluster_name\": \"elasticsearch_macbookair_zach\",\n \"status\": \"red\",\n \"timed_out\": false,\n \"number_of_nodes\": 1,\n \"number_of_data_nodes\": 1,\n \"active_primary_shards\": 0,\n \"active_shards\": 0,\n \"relocating_shards\": 0,\n \"initializing_shards\": 0,\n \"unassigned_shards\": 0\n}\n```\n\n``` bash\nGET _cluster/stats\n\n{\n \"timestamp\": 1408649084720,\n \"cluster_name\": \"elasticsearch_macbookair_zach\",\n \"status\": \"green\",\n \"indices\": {\n \"count\": 0,\n \"shards\": {},\n \"docs\": {\n \"count\": 0,\n \"deleted\": 0\n },\n ...\n\n}\n```\n", "comments": [ { "body": "@polyfractal I noticed the issue and reproduced it, what do you think the possible reason is? Is it the right way to find out where to judge the cluster status colour and see how it works?\n", "created_at": "2015-04-27T09:08:25Z" }, { "body": "This looks like a bug in cluster stats.\n", "created_at": "2015-04-27T09:16:54Z" } ], "number": 7390, "title": "Cluster/Health and Cluster/Stats disagree on status" }
{ "body": "The cluster health and cluster stats disagree on the status.\nAdd a extra validation step in `cluster/stats`.\n\nThis PR should fix #7390\n", "number": 14699, "review_comments": [], "title": "Add extra validation into `cluster/stats`" }
{ "commits": [ { "message": "Add extra validation into `cluster/stats`\n\nThe cluster health and cluster stats disagree on the status.\nAdd a extra validation step in `cluster/stats`.\n\ncloses #7390" } ], "files": [ { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.RoutingTableValidation;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -42,6 +43,7 @@\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.node.service.NodeService;\n+import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -134,6 +136,14 @@ protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq\n break;\n }\n }\n+\n+ RoutingTableValidation validation = clusterService.state().routingTable().validate(clusterService.state().metaData());\n+\n+ if (!validation.failures().isEmpty()) {\n+ clusterStatus = ClusterHealthStatus.RED;\n+ } else if (clusterService.state().blocks().hasGlobalBlock(RestStatus.SERVICE_UNAVAILABLE)) {\n+ clusterStatus = ClusterHealthStatus.RED;\n+ }\n }\n \n return new ClusterStatsNodeResponse(nodeInfo.getNode(), clusterStatus, nodeInfo, nodeStats, shardsStats.toArray(new ShardStats[shardsStats.size()]));", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java", "status": "modified" }, { "diff": "@@ -170,4 +170,17 @@ public void testAllocatedProcessors() throws Exception {\n ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get();\n assertThat(response.getNodesStats().getOs().getAllocatedProcessors(), equalTo(7));\n }\n+\n+ public void testClusterStatus() throws Exception {\n+ // stop all other nodes\n+ internalCluster().ensureAtMostNumDataNodes(0);\n+\n+ internalCluster().startNode(Settings.builder().put(\"gateway.recover_after_nodes\", 2).build());\n+ ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get();\n+ assertThat(response.getStatus(), equalTo(ClusterHealthStatus.RED));\n+\n+ internalCluster().ensureAtLeastNumDataNodes(3);\n+ response = client().admin().cluster().prepareClusterStats().get();\n+ assertThat(response.getStatus(), equalTo(ClusterHealthStatus.GREEN));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsIT.java", "status": "modified" } ] }
{ "body": "This is _cluster/health output after one of nodes restarted:\n\n```\n{\n \"cluster_name\" : \"<cluster_name>\",\n \"status\" : \"yellow\",\n \"timed_out\" : false,\n \"number_of_nodes\" : 14,\n \"number_of_data_nodes\" : 10,\n \"active_primary_shards\" : 1022,\n \"active_shards\" : 1839,\n \"relocating_shards\" : 2,\n \"initializing_shards\" : 0,\n \"unassigned_shards\" : 205,\n \"delayed_unassigned_shards\" : 0,\n \"number_of_pending_tasks\" : 0,\n \"number_of_in_flight_fetch\" : 0\n}\n```\n\nIt says cluster is in `yellow` status but it is relocating shards.\nThis cluster doesn't have custom `cluster.routing.allocation.allow_rebalance` configuration. So, it should be using `indices_all_active`.\n\nNo relocation should happen in this case?\n", "comments": [ { "body": "@masaruh all the shards have initialized, so there isn't a reason why relocation wouldn't be allowed. Also, just because the cluster is yellow doesn't mean that the particular index relocating isn't green?\n", "created_at": "2015-11-11T00:04:25Z" }, { "body": "All primaries have initialized but not all shards are, right?\n\nIt says:\n\n```\n /**\n * Re-balancing is allowed only once all shards on all indices are active. \n */\n INDICES_ALL_ACTIVE;\n```\n\nI expect relocation happens only when cluster state is green.\n", "created_at": "2015-11-11T01:58:22Z" }, { "body": "@masaruh ahh okay, I see what you are saying, my mistake for misinterpreting it :)\n", "created_at": "2015-11-11T02:03:20Z" } ], "number": 14670, "title": "Shard relocation happens while cluster is yellow" }
{ "body": "ClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ignored. This can cause unexpected behaviour\nwhen gateway allocator is still fetching shards or has marked shards as ignored\nsince their quorum is not met yet.\n\nCloses #14670\n", "number": 14678, "review_comments": [ { "body": "Should we rename the method `hasUnassignedOrIgnoredShards`?\n", "created_at": "2015-11-11T15:16:06Z" }, { "body": "I think this `drain` method might need to reset `ignoredPrimaries` to 0 also?\n", "created_at": "2015-11-11T15:22:08Z" }, { "body": "it doesn't reset the ignored list..\n", "created_at": "2015-11-11T19:15:48Z" }, { "body": "there is an `hasUnassigned` method already, so yeah, I'm +1 on being explicit here...\n", "created_at": "2015-11-11T19:17:04Z" }, { "body": "why remove the primaries?\n", "created_at": "2015-11-11T19:19:03Z" }, { "body": "-1 this is overloaded and has a specific meaning why would we pollute this with the impl detail of the ingore list?\n", "created_at": "2015-11-11T19:44:01Z" }, { "body": "it only drains the no-ignored ones\n", "created_at": "2015-11-11T19:44:30Z" }, { "body": "hmm I think this was unintentional\n", "created_at": "2015-11-11T19:46:08Z" }, { "body": "imho having `hasUnassigned` and `hasUnassignedShards` methods is very confusing.\n", "created_at": "2015-11-12T10:41:26Z" }, { "body": "that really sucks I didn't know we had that! I removed the undocumented `hasUnassigned` \n", "created_at": "2015-11-12T18:58:45Z" } ], "title": "Take ignored unallocated shards into account when making allocation decision" }
{ "commits": [ { "message": "Take ingored unallocated shards into account when makeing allocation decision\n\nClusterRebalanceAllocationDecider did not take unassigned shards into account\nthat are temporarily marked as ingored. This can cause unexpected behavior\nwhen gateway allocator is still fetching shards or has marked shareds as ignored\nsince their quorum is not met yet.\n\nCloses #14670" }, { "message": "apply review comments && remove UnassignedShards#clear()" }, { "message": "Remove RoutingNodes#hasUnassigned" } ], "files": [ { "diff": "@@ -167,7 +167,7 @@ public void onFailure(String source, Throwable t) {\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- if (oldState != newState && newState.getRoutingNodes().hasUnassigned()) {\n+ if (oldState != newState && newState.getRoutingNodes().unassigned().size() > 0) {\n logger.trace(\"unassigned shards after shard failures. scheduling a reroute.\");\n routingService.reroute(\"unassigned shards after shard failures, scheduling a reroute\");\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/action/shard/ShardStateAction.java", "status": "modified" }, { "diff": "@@ -183,13 +183,7 @@ public ImmutableOpenMap<String, ClusterState.Custom> customs() {\n return this.customs;\n }\n \n- public <T extends ClusterState.Custom> T custom(String type) {\n- return (T) customs.get(type);\n- }\n-\n- public boolean hasUnassigned() {\n- return !unassignedShards.isEmpty();\n- }\n+ public <T extends ClusterState.Custom> T custom(String type) { return (T) customs.get(type); }\n \n public UnassignedShards unassigned() {\n return this.unassignedShards;\n@@ -217,12 +211,22 @@ public ObjectIntHashMap<String> nodesPerAttributesCounts(String attributeName) {\n return nodesPerAttributesCounts;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned primaries even if the\n+ * primaries are marked as temporarily ignored.\n+ */\n public boolean hasUnassignedPrimaries() {\n- return unassignedShards.numPrimaries() > 0;\n+ return unassignedShards.getNumPrimaries() + unassignedShards.getNumIgnoredPrimaries() > 0;\n }\n \n+ /**\n+ * Returns <code>true</code> iff this {@link RoutingNodes} instance has any unassigned shards even if the\n+ * shards are marked as temporarily ignored.\n+ * @see UnassignedShards#isEmpty()\n+ * @see UnassignedShards#isIgnoredEmpty()\n+ */\n public boolean hasUnassignedShards() {\n- return !unassignedShards.isEmpty();\n+ return unassignedShards.isEmpty() == false || unassignedShards.isIgnoredEmpty() == false;\n }\n \n public boolean hasInactivePrimaries() {\n@@ -524,47 +528,47 @@ public static final class UnassignedShards implements Iterable<ShardRouting> {\n private final List<ShardRouting> ignored;\n \n private int primaries = 0;\n- private long transactionId = 0;\n- private final UnassignedShards source;\n- private final long sourceTransactionId;\n-\n- public UnassignedShards(UnassignedShards other) {\n- this.nodes = other.nodes;\n- source = other;\n- sourceTransactionId = other.transactionId;\n- unassigned = new ArrayList<>(other.unassigned);\n- ignored = new ArrayList<>(other.ignored);\n- primaries = other.primaries;\n- }\n+ private int ignoredPrimaries = 0;\n \n public UnassignedShards(RoutingNodes nodes) {\n this.nodes = nodes;\n unassigned = new ArrayList<>();\n ignored = new ArrayList<>();\n- source = null;\n- sourceTransactionId = -1;\n }\n \n public void add(ShardRouting shardRouting) {\n 
if(shardRouting.primary()) {\n primaries++;\n }\n unassigned.add(shardRouting);\n- transactionId++;\n }\n \n public void sort(Comparator<ShardRouting> comparator) {\n CollectionUtil.timSort(unassigned, comparator);\n }\n \n- public int size() {\n- return unassigned.size();\n- }\n+ /**\n+ * Returns the size of the non-ignored unassigned shards\n+ */\n+ public int size() { return unassigned.size(); }\n \n- public int numPrimaries() {\n+ /**\n+ * Returns the size of the temporarily marked as ignored unassigned shards\n+ */\n+ public int ignoredSize() { return ignored.size(); }\n+\n+ /**\n+ * Returns the number of non-ignored unassigned primaries\n+ */\n+ public int getNumPrimaries() {\n return primaries;\n }\n \n+ /**\n+ * Returns the number of temporarily marked as ignored unassigned primaries\n+ */\n+ public int getNumIgnoredPrimaries() { return ignoredPrimaries; }\n+\n @Override\n public UnassignedIterator iterator() {\n return new UnassignedIterator();\n@@ -580,12 +584,18 @@ public List<ShardRouting> ignored() {\n }\n \n /**\n- * Adds a shard to the ignore unassigned list. Should be used with caution, typically,\n+ * Marks a shard as temporarily ignored and adds it to the ignore unassigned list.\n+ * Should be used with caution, typically,\n * the correct usage is to removeAndIgnore from the iterator.\n+ * @see #ignored()\n+ * @see UnassignedIterator#removeAndIgnore()\n+ * @see #isIgnoredEmpty()\n */\n public void ignoreShard(ShardRouting shard) {\n+ if (shard.primary()) {\n+ ignoredPrimaries++;\n+ }\n ignored.add(shard);\n- transactionId++;\n }\n \n public class UnassignedIterator implements Iterator<ShardRouting> {\n@@ -618,6 +628,8 @@ public void initialize(String nodeId, long version, long expectedShardSize) {\n /**\n * Removes and ignores the unassigned shard (will be ignored for this run, but\n * will be added back to unassigned once the metadata is constructed again).\n+ * Typically this is used when an allocation decision prevents a shard from being allocated such\n+ * that subsequent consumers of this API won't try to allocate this shard again.\n */\n public void removeAndIgnore() {\n innerRemove();\n@@ -639,45 +651,37 @@ private void innerRemove() {\n if (current.primary()) {\n primaries--;\n }\n- transactionId++;\n }\n }\n \n+ /**\n+ * Returns <code>true</code> iff this collection contains one or more non-ignored unassigned shards.\n+ */\n public boolean isEmpty() {\n return unassigned.isEmpty();\n }\n \n- public void shuffle() {\n- Collections.shuffle(unassigned);\n- }\n-\n- public void clear() {\n- transactionId++;\n- unassigned.clear();\n- ignored.clear();\n- primaries = 0;\n- }\n-\n- public void transactionEnd(UnassignedShards shards) {\n- assert shards.source == this && shards.sourceTransactionId == transactionId :\n- \"Expected ID: \" + shards.sourceTransactionId + \" actual: \" + transactionId + \" Expected Source: \" + shards.source + \" actual: \" + this;\n- transactionId++;\n- this.unassigned.clear();\n- this.unassigned.addAll(shards.unassigned);\n- this.ignored.clear();\n- this.ignored.addAll(shards.ignored);\n- this.primaries = shards.primaries;\n+ /**\n+ * Returns <code>true</code> iff any unassigned shards are marked as temporarily ignored.\n+ * @see UnassignedShards#ignoreShard(ShardRouting)\n+ * @see UnassignedIterator#removeAndIgnore()\n+ */\n+ public boolean isIgnoredEmpty() {\n+ return ignored.isEmpty();\n }\n \n- public UnassignedShards transactionBegin() {\n- return new UnassignedShards(this);\n+ public void shuffle() {\n+ Collections.shuffle(unassigned);\n 
}\n \n+ /**\n+ * Drains all unassigned shards and returns it.\n+ * This method will not drain ignored shards.\n+ */\n public ShardRouting[] drain() {\n ShardRouting[] mutableShardRoutings = unassigned.toArray(new ShardRouting[unassigned.size()]);\n unassigned.clear();\n primaries = 0;\n- transactionId++;\n return mutableShardRoutings;\n }\n }\n@@ -698,10 +702,10 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n return true;\n }\n int unassignedPrimaryCount = 0;\n+ int unassignedIgnoredPrimaryCount = 0;\n int inactivePrimaryCount = 0;\n int inactiveShardCount = 0;\n int relocating = 0;\n- final Set<ShardId> seenShards = new HashSet<>();\n Map<String, Integer> indicesAndShards = new HashMap<>();\n for (RoutingNode node : routingNodes) {\n for (ShardRouting shard : node) {\n@@ -716,7 +720,6 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n if (shard.relocating()) {\n relocating++;\n }\n- seenShards.add(shard.shardId());\n Integer i = indicesAndShards.get(shard.index());\n if (i == null) {\n i = shard.id();\n@@ -751,11 +754,18 @@ public static boolean assertShardStats(RoutingNodes routingNodes) {\n if (shard.primary()) {\n unassignedPrimaryCount++;\n }\n- seenShards.add(shard.shardId());\n }\n \n- assert unassignedPrimaryCount == routingNodes.unassignedShards.numPrimaries() :\n- \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().numPrimaries() + \"]\";\n+ for (ShardRouting shard : routingNodes.unassigned().ignored()) {\n+ if (shard.primary()) {\n+ unassignedIgnoredPrimaryCount++;\n+ }\n+ }\n+\n+ assert unassignedPrimaryCount == routingNodes.unassignedShards.getNumPrimaries() :\n+ \"Unassigned primaries is [\" + unassignedPrimaryCount + \"] but RoutingNodes returned unassigned primaries [\" + routingNodes.unassigned().getNumPrimaries() + \"]\";\n+ assert unassignedIgnoredPrimaryCount == routingNodes.unassignedShards.getNumIgnoredPrimaries() :\n+ \"Unassigned ignored primaries is [\" + unassignedIgnoredPrimaryCount + \"] but RoutingNodes returned unassigned ignored primaries [\" + routingNodes.unassigned().getNumIgnoredPrimaries() + \"]\";\n assert inactivePrimaryCount == routingNodes.inactivePrimaryCount :\n \"Inactive Primary count [\" + inactivePrimaryCount + \"] but RoutingNodes returned inactive primaries [\" + routingNodes.inactivePrimaryCount + \"]\";\n assert inactiveShardCount == routingNodes.inactiveShardCount :", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingNodes.java", "status": "modified" }, { "diff": "@@ -176,7 +176,7 @@ private boolean reroute(RoutingAllocation allocation) {\n changed |= electPrimariesAndUnassignedDanglingReplicas(allocation);\n \n // now allocate all the unassigned to available nodes\n- if (allocation.routingNodes().hasUnassigned()) {\n+ if (allocation.routingNodes().unassigned().size() > 0) {\n changed |= shardsAllocators.allocateUnassigned(allocation);\n }\n \n@@ -232,7 +232,7 @@ private boolean moveShards(RoutingAllocation allocation) {\n private boolean electPrimariesAndUnassignedDanglingReplicas(RoutingAllocation allocation) {\n boolean changed = false;\n RoutingNodes routingNodes = allocation.routingNodes();\n- if (!routingNodes.hasUnassignedPrimaries()) {\n+ if (routingNodes.unassigned().getNumPrimaries() == 0) {\n // move out if we don't have unassigned primaries\n return changed;\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocationService.java", 
"status": "modified" }, { "diff": "@@ -353,7 +353,7 @@ private boolean balance(boolean onlyAssign) {\n logger.trace(\"Start assigning unassigned shards\");\n }\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n+ final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned();\n boolean changed = initialize(routingNodes, unassigned);\n if (onlyAssign == false && changed == false && allocation.deciders().canRebalance(allocation).type() == Type.YES) {\n NodeSorter sorter = newNodeSorter();\n@@ -433,7 +433,6 @@ private boolean balance(boolean onlyAssign) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n \n@@ -508,7 +507,7 @@ public boolean move(ShardRouting shard, RoutingNode node ) {\n if (logger.isTraceEnabled()) {\n logger.trace(\"Try moving shard [{}] from [{}]\", shard, node);\n }\n- final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n+ final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned();\n boolean changed = initialize(routingNodes, unassigned);\n if (!changed) {\n final ModelNode sourceNode = nodes.get(node.nodeId());\n@@ -544,7 +543,6 @@ public boolean move(ShardRouting shard, RoutingNode node ) {\n }\n }\n }\n- routingNodes.unassigned().transactionEnd(unassigned);\n return changed;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -51,15 +51,12 @@ public class ClusterRebalanceAllocationDecider extends AllocationDecider {\n public static final String NAME = \"cluster_rebalance\";\n \n public static final String CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE = \"cluster.routing.allocation.allow_rebalance\";\n- public static final Validator ALLOCATION_ALLOW_REBALANCE_VALIDATOR = new Validator() {\n- @Override\n- public String validate(String setting, String value, ClusterState clusterState) {\n- try {\n- ClusterRebalanceType.parseString(value);\n- return null;\n- } catch (IllegalArgumentException e) {\n- return \"the value of \" + setting + \" must be one of: [always, indices_primaries_active, indices_all_active]\";\n- }\n+ public static final Validator ALLOCATION_ALLOW_REBALANCE_VALIDATOR = (setting, value, clusterState) -> {\n+ try {\n+ ClusterRebalanceType.parseString(value);\n+ return null;\n+ } catch (IllegalArgumentException e) {\n+ return \"the value of \" + setting + \" must be one of: [always, indices_primaries_active, indices_all_active]\";\n }\n };\n \n@@ -153,7 +150,7 @@ public Decision canRebalance(RoutingAllocation allocation) {\n }\n if (type == ClusterRebalanceType.INDICES_ALL_ACTIVE) {\n // check if there are unassigned shards.\n- if ( allocation.routingNodes().hasUnassignedShards() ) {\n+ if (allocation.routingNodes().hasUnassignedShards() ) {\n return allocation.decision(Decision.NO, NAME, \"cluster has unassigned shards\");\n }\n // in case all indices are assigned, are there initializing shards which", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/ClusterRebalanceAllocationDecider.java", "status": "modified" }, { "diff": "@@ -73,7 +73,7 @@ public void testDelayedAllocationNodeLeavesAndComesBack() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().hasUnassigned(), equalTo(true));\n+ 
assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -119,7 +119,7 @@ public void testDelayedAllocationChangeWithSettingTo100ms() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));\n@@ -145,7 +145,7 @@ public void testDelayedAllocationChangeWithSettingTo0() throws Exception {\n assertBusy(new Runnable() {\n @Override\n public void run() {\n- assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(client().admin().cluster().prepareState().all().get().getState().getRoutingNodes().unassigned().size() > 0, equalTo(true));\n }\n });\n assertThat(client().admin().cluster().prepareHealth().get().getDelayedUnassignedShards(), equalTo(1));", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/DelayedAllocationIT.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ public void testNoDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n@@ -107,7 +107,7 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertFalse(\"no shards should be unassigned\", clusterState.getRoutingNodes().hasUnassigned());\n+ assertFalse(\"no shards should be unassigned\", clusterState.getRoutingNodes().unassigned().size() > 0);\n String nodeId = null;\n final List<ShardRouting> allShards = clusterState.getRoutingNodes().routingTable().allShards(\"test\");\n // we need to find the node with the replica otherwise we will not reroute\n@@ -153,7 +153,7 @@ public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Except\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = 
ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n ClusterState prevState = clusterState;\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java", "status": "modified" }, { "diff": "@@ -213,12 +213,12 @@ public void testNodeLeave() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n // verify that NODE_LEAVE is the reason for meta\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.NODE_LEFT));\n@@ -242,12 +242,12 @@ public void testFailedShard() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // fail shard\n ShardRouting shardToFail = clusterState.getRoutingNodes().shardsWithState(STARTED).get(0);\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyFailedShards(clusterState, Collections.singletonList(new FailedRerouteAllocation.FailedShard(shardToFail, \"test fail\", null)))).build();\n // verify the reason and details\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(true));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(true));\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1));\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo(), notNullValue());\n 
assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).get(0).unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.ALLOCATION_FAILED));\n@@ -305,7 +305,7 @@ public void testNumberOfDelayedUnassigned() throws Exception {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n@@ -330,7 +330,7 @@ public void testFindNextDelayedAllocation() {\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n // starting replicas\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n- assertThat(clusterState.getRoutingNodes().hasUnassigned(), equalTo(false));\n+ assertThat(clusterState.getRoutingNodes().unassigned().size() > 0, equalTo(false));\n // remove node2 and reroute\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(\"node2\")).build();\n clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/UnassignedInfoTests.java", "status": "modified" }, { "diff": "@@ -365,7 +365,7 @@ public void applyFailedShards(FailedRerouteAllocation allocation) {\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n boolean changed = !unassigned.isEmpty();\n- for (ShardRouting sr : unassigned) {\n+ for (ShardRouting sr : unassigned.drain()) {\n switch (sr.id()) {\n case 0:\n if (sr.primary()) {\n@@ -405,7 +405,6 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n }\n \n }\n- unassigned.clear();\n return changed;\n }\n }), EmptyClusterInfoService.INSTANCE);", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceConfigurationTests.java", "status": "modified" }, { "diff": "@@ -26,15 +26,16 @@\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ESAllocationTestCase;\n import org.elasticsearch.test.gateway.NoopGatewayAllocator;\n \n 
import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -628,6 +629,112 @@ public void testClusterAllActive3() {\n assertThat(routingNodes.node(\"node3\").isEmpty(), equalTo(true));\n }\n \n+ public void testRebalanceWithIgnoredUnassignedShards() {\n+ final AtomicBoolean allocateTest1 = new AtomicBoolean(false);\n+\n+ AllocationService strategy = createAllocationService(Settings.EMPTY, new NoopGatewayAllocator() {\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ if (allocateTest1.get() == false) {\n+ RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n+ RoutingNodes.UnassignedShards.UnassignedIterator iterator = unassigned.iterator();\n+ while (iterator.hasNext()) {\n+ ShardRouting next = iterator.next();\n+ if (\"test1\".equals(next.index())) {\n+ iterator.removeAndIgnore();\n+ }\n+\n+ }\n+ }\n+ return super.allocateUnassigned(allocation);\n+ }\n+ });\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"test1\").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .addAsNew(metaData.index(\"test1\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"start two nodes\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"start all the primary shards for test\");\n+ RoutingNodes routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test\", INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+\n+ logger.debug(\"now, start 1 more node, check that rebalancing will not happen since we unassigned shards\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n+ .put(newNode(\"node2\")))\n+ .build();\n+ logger.debug(\"reroute and check that nothing has changed\");\n+ RoutingAllocation.Result reroute = strategy.reroute(clusterState);\n+ assertFalse(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) 
{\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(UNASSIGNED));\n+ }\n+ logger.debug(\"now set allocateTest1 to true and reroute we should see the [test1] index initializing\");\n+ allocateTest1.set(true);\n+ reroute = strategy.reroute(clusterState);\n+ assertTrue(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"now start initializing shards and expect exactly one rebalance from node1 to node 2 sicne index [test] is all on node1\");\n+\n+ routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test1\", INITIALIZING)).routingTable();\n+\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ int numStarted = 0;\n+ int numRelocating = 0;\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ if (routingTable.index(\"test\").shard(i).primaryShard().state() == STARTED) {\n+ numStarted++;\n+ } else if (routingTable.index(\"test\").shard(i).primaryShard().state() == RELOCATING) {\n+ numRelocating++;\n+ }\n+ }\n+ assertEquals(numStarted, 1);\n+ assertEquals(numRelocating, 1);\n+\n+ }\n+\n public void testRebalanceWhileShardFetching() {\n final AtomicBoolean hasFetches = new AtomicBoolean(true);\n AllocationService strategy = createAllocationService(settingsBuilder().put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE,", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/ClusterRebalanceRoutingTests.java", "status": "modified" } ] }
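Because the record above is dominated by test changes, it may help to restate the decider change in isolation. This is a sketch only, not ClusterRebalanceAllocationDecider: the three plain counters stand in for the RoutingNodes bookkeeping, and only the indices_all_active case discussed in the issue is modelled.

```java
// Illustrative sketch only: the "indices_all_active" rebalance condition, with the
// fix that shards temporarily marked as ignored (e.g. while the gateway allocator
// is still fetching shard state or a quorum is not met) still count as unassigned.
class RebalanceDeciderSketch {
    int unassignedShards;        // shards still waiting for an allocation decision
    int ignoredUnassignedShards; // shards skipped for this allocation round
    int initializingShards;

    boolean canRebalance() {
        // before the fix only unassignedShards was consulted, so a yellow cluster
        // with ignored replicas could start relocating shards prematurely (#14670)
        boolean hasUnassigned = unassignedShards > 0 || ignoredUnassignedShards > 0;
        return hasUnassigned == false && initializingShards == 0;
    }

    public static void main(String[] args) {
        RebalanceDeciderSketch decider = new RebalanceDeciderSketch();
        decider.ignoredUnassignedShards = 205; // the unassigned replicas from the issue report
        System.out.println(decider.canRebalance()); // false - no relocation while shards are pending
    }
}
```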
{ "body": "When upgrading from 1.x to 2.0, a field that has only a `search_analyzer` specified will throw an exception and prevent the node from starting. Really, the field does have an `index_analyzer`, set to `default`. We should be able to migrate this mapping to a 2.0 compatible version.\n\nRelates to #14313\n", "comments": [ { "body": "Closed by https://github.com/elastic/elasticsearch/pull/14677\n", "created_at": "2015-11-18T14:05:42Z" } ], "number": 14383, "title": "Upgrading fields with only `search_analyzer` specified" }
{ "body": "Previously if an index was created before 2.0 using the default index_analyzer but specifying an explicit search_analyzer the index could not be started in 2.0 because we require 'analyzer' (formerly index_analyzer) to be set if you define an explicit search_analyzer. This change allows indexes created before 2.0 to use the default analyzer and specify an explicit search_analyzer. Indexes created on or after 2.0 are still required to explicitly set the analyzer if they want to explicitly set the search_analyzer.\n\nCloses #14383\n", "number": 14677, "review_comments": [ { "body": "Instead of duplicating this complicated logic twice, can you move the search analyzer check inside (ie we would only write search_analyzer if we at least wrote analyzer)?\n", "created_at": "2015-11-12T17:01:26Z" }, { "body": "Instead of relying on serialization, can the test look at the parsed mapping and see that the search analyzer is equal to keyword? And also check the index analyzer is the default?\n", "created_at": "2015-11-12T17:03:06Z" }, { "body": "But in that case we would not write search_analyzer in the case where the index_analyzer is `default` which would mean that the fix would not work (note the checks on this second if statement are checking the searchAnalyzer not the indexAnalyzer)\n", "created_at": "2015-11-12T17:05:16Z" }, { "body": "We should be writing out the settings in the \"new format\". There is no longer index_analyzer. So in the case of search_analyzer being set alone, when we serialize, we should write both analyzer and search_analyzer.\n", "created_at": "2015-11-12T17:21:27Z" }, { "body": "@rjernst I have changed this so if the `search_analyzer` is written we bypass the checks on whether to write the `analyzer` since we want to write the output in the \"new format\" where `search_analyzer` cannot be specified without `analyzer`. However, I can't see a way of simplifying the other checks since we need to check whether the search_analyzer is `default` or starts with `_` before we write it. Let me know if you can see a better way to write this.\n", "created_at": "2015-11-13T09:23:55Z" }, { "body": "You can simplify this to:\n\n```\nboolean writeSearchAnalyzer = // logic\nif (writeSearchAnalyzer || analyzer logic) {\n // write analyzer\n}\nif (writeSearchAnalyzer) {\n // write search_analyzer\n}\n```\n\nThis will also keep the same order (analyzer followed by search_analyzer) that we had before.\n", "created_at": "2015-11-13T10:23:18Z" } ], "title": "Mapping: Allows upgrade of indexes with only search_analyzer specified" }
{ "commits": [ { "message": "Mapping: Allows upgrade of indexes with only search_analyzer specified\n\nPreviously if an index was created before 2.0 using the default index_analyzer but specifying an explicit search_analyzer the index could not be started in 2.0 because we require 'analyzer' (formerly index_analyzer) to be set if you define an explicit search_analyzer. This change allows indexes created before 2.0 to use the default analyzer and specify an explicit search_analyzer. Indexes created on or after 2.0 are still required to explicitly set the analyzer if they want to explicitly set the search_analyzer.\n\nCloses #14383" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n import com.google.common.base.Function;\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Iterators;\n+\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.FieldType;\n import org.apache.lucene.index.IndexOptions;\n@@ -467,9 +468,17 @@ protected void doXContentAnalyzers(XContentBuilder builder, boolean includeDefau\n if (includeDefaults) {\n builder.field(\"analyzer\", \"default\");\n }\n- } else if (includeDefaults || fieldType().indexAnalyzer().name().startsWith(\"_\") == false && fieldType().indexAnalyzer().name().equals(\"default\") == false) {\n- builder.field(\"analyzer\", fieldType().indexAnalyzer().name());\n- if (fieldType().searchAnalyzer().name().equals(fieldType().indexAnalyzer().name()) == false) {\n+ } else {\n+ boolean writeSearchAnalyzer = includeDefaults || fieldType().searchAnalyzer().name().startsWith(\"_\") == false\n+ && fieldType().searchAnalyzer().name().equals(\"default\") == false\n+ && fieldType().searchAnalyzer().name().equals(fieldType().indexAnalyzer().name()) == false;\n+ // If we are going to write the search_analyzer then we need to write the analyzer as well since the search_analyzer \n+ // should not be specified without the analyzer\n+ if (writeSearchAnalyzer || includeDefaults || fieldType().indexAnalyzer().name().startsWith(\"_\") == false\n+ && fieldType().indexAnalyzer().name().equals(\"default\") == false) {\n+ builder.field(\"analyzer\", fieldType().indexAnalyzer().name());\n+ }\n+ if (writeSearchAnalyzer) {\n builder.field(\"search_analyzer\", fieldType().searchAnalyzer().name());\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/FieldMapper.java", "status": "modified" }, { "diff": "@@ -291,7 +291,13 @@ public static void parseField(FieldMapper.Builder builder, String name, Map<Stri\n \n if (indexAnalyzer == null) {\n if (searchAnalyzer != null) {\n- throw new MapperParsingException(\"analyzer on field [\" + name + \"] must be set when search_analyzer is set\");\n+ // If the index was created before 2.0 then we are trying to upgrade the mappings so use the default indexAnalyzer \n+ // instead of throwing an exception so the user is able to upgrade\n+ if (parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {\n+ indexAnalyzer = parserContext.analysisService().defaultIndexAnalyzer();\n+ } else {\n+ throw new MapperParsingException(\"analyzer on field [\" + name + \"] must be set when search_analyzer is set\");\n+ }\n }\n } else if (searchAnalyzer == null) {\n searchAnalyzer = indexAnalyzer;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java", "status": "modified" }, { "diff": "@@ -26,26 +26,29 @@\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.IndexableField;\n import 
org.apache.lucene.index.IndexableFieldType;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.mapper.ContentPath;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.index.mapper.Mapper;\n import org.elasticsearch.index.mapper.Mapper.BuilderContext;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.Version;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper.Builder;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.test.VersionUtils;\n import org.junit.Before;\n@@ -54,11 +57,11 @@\n import java.util.Arrays;\n import java.util.Map;\n \n-import static org.elasticsearch.index.mapper.core.StringFieldMapper.Builder;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n-import static org.hamcrest.Matchers.containsString;\n \n /**\n */\n@@ -567,4 +570,59 @@ public void testBackwardCompatible() throws Exception {\n \n assertThat(parser.parse(mapping).mapping().toString(), containsString(\"\\\"position_increment_gap\\\":10\"));\n }\n+\n+ /**\n+ * Test backward compatibility when a search_analyzer is specified without\n+ * an index_analyzer\n+ */\n+ public void testBackwardCompatibleSearchAnalyzerMigration() throws Exception {\n+\n+ Settings settings = Settings.settingsBuilder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED, VersionUtils.randomVersionBetween(random(), Version.V_1_0_0, Version.V_1_7_1))\n+ .build();\n+\n+ DocumentMapperParser parser = createIndex(\"backward_compatible_index\", settings).mapperService().documentMapperParser();\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\").startObject(\"field1\")\n+ .field(\"type\", \"string\").field(\"search_analyzer\", \"keyword\").endObject().endObject().endObject().endObject().string();\n+ parser.parse(mapping);\n+\n+ assertThat(parser.parse(mapping).mapping().toString(), containsString(\"\\\"search_analyzer\\\":\\\"keyword\\\"\"));\n+ assertThat(parser.parse(mapping).mapping().toString(), containsString(\"\\\"analyzer\\\":\\\"default\\\"\"));\n+ assertThat(parser.parse(mapping).mapping(), notNullValue());\n+ assertThat(parser.parse(mapping).mapping().root(), notNullValue());\n+ Mapper mapper = 
parser.parse(mapping).mapping().root().getMapper(\"field1\");\n+ assertThat(mapper, notNullValue());\n+ assertThat(mapper, instanceOf(StringFieldMapper.class));\n+ StringFieldMapper stringMapper = (StringFieldMapper) mapper;\n+ MappedFieldType fieldType = stringMapper.fieldType();\n+ assertThat(fieldType, notNullValue());\n+ assertThat(fieldType.indexAnalyzer(), notNullValue());\n+ assertThat(fieldType.indexAnalyzer().name(), equalTo(\"default\"));\n+ assertThat(fieldType.searchAnalyzer().name(), equalTo(\"keyword\"));\n+ }\n+\n+ /**\n+ * Test for indexes created on or after 2.0 an index analyzer must be\n+ * specified when declaring a search analyzer\n+ */\n+ public void testSearchAnalyzer() throws Exception {\n+\n+ try {\n+ Settings settings = Settings\n+ .settingsBuilder()\n+ .put(IndexMetaData.SETTING_VERSION_CREATED,\n+ VersionUtils.randomVersionBetween(random(), Version.V_2_0_0, Version.CURRENT)).build();\n+\n+ DocumentMapperParser parser = createIndex(\"index\", settings).mapperService().documentMapperParser();\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"field1\").field(\"type\", \"string\").field(\"search_analyzer\", \"keyword\").endObject().endObject().endObject()\n+ .endObject().string();\n+ parser.parse(mapping);\n+ fail(\"Expected a MapperParsingException\");\n+ } catch (MapperParsingException e) {\n+ assertThat(e.getMessage(), equalTo(\"analyzer on field [field1] must be set when search_analyzer is set\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.Beta1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.RC1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.RC2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.10.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.11.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.12.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.13.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.3.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.4.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.5.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.6.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.7.zip", "status": "modified" }, { "diff": "", "filename": 
"core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.8.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-0.90.9.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.Beta1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.Beta2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.RC1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.RC2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.1.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.2.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.0.3.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.1.0.zip", "status": "modified" }, { "diff": "", "filename": "core/src/test/resources/org/elasticsearch/bwcompat/index-1.1.1.zip", "status": "modified" } ] }
{ "body": "On Found we are testing the delete-by-query plugin in 2.0.0. I have enabled the plugin and when I issue a delete via _query, I get the following response.\n\n```\n{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"action [indices:data/read/search] is unauthorized for user []\"}],\"type\":\"security_exception\",\"reason\":\"action [indices:data/read/search] is unauthorized for user []\"},\"status\":403}\n```\n", "comments": [], "number": 14527, "title": "delete-by-query failing with Shield enabled" }
{ "body": "closes #14527\n", "number": 14658, "review_comments": [ { "body": "can you instead add a new constructor to SearchScrollRequest that accepts the original request like we do elsewhere?\n", "created_at": "2015-11-10T16:32:55Z" }, { "body": "same as above\n", "created_at": "2015-11-10T16:33:07Z" }, { "body": "Should I do it for `BulkRequest` too? (line 200)\n", "created_at": "2015-11-10T16:35:34Z" }, { "body": "yea I missed that, I think it makes sense to do it there too.\n", "created_at": "2015-11-10T16:40:22Z" }, { "body": "sure\n", "created_at": "2015-11-10T16:58:09Z" } ], "title": "Fix Delete-by-Query with Shield" }
{ "commits": [ { "message": "Fix Delete-by-Query with Shield\n\ncloses #14527" } ], "files": [ { "diff": "@@ -64,6 +64,17 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite\n \n private long sizeInBytes = 0;\n \n+ public BulkRequest() {\n+ }\n+\n+ /**\n+ * Creates a bulk request caused by some other request, which is provided as an\n+ * argument so that its headers and context can be copied to the new request\n+ */\n+ public BulkRequest(ActionRequest request) {\n+ super(request);\n+ }\n+\n /**\n * Adds a list of requests to be executed. Either index or delete requests.\n */", "filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java", "status": "modified" }, { "diff": "@@ -37,6 +37,17 @@ public class ClearScrollRequest extends ActionRequest<ClearScrollRequest> {\n \n private List<String> scrollIds;\n \n+ public ClearScrollRequest() {\n+ }\n+\n+ /**\n+ * Creates a clear scroll request caused by some other request, which is provided as an\n+ * argument so that its headers and context can be copied to the new request\n+ */\n+ public ClearScrollRequest(ActionRequest request) {\n+ super(request);\n+ }\n+\n public List<String> getScrollIds() {\n return scrollIds;\n }", "filename": "core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java", "status": "modified" }, { "diff": "@@ -46,6 +46,14 @@ public SearchScrollRequest(String scrollId) {\n this.scrollId = scrollId;\n }\n \n+ /**\n+ * Creates a scroll request caused by some other request, which is provided as an\n+ * argument so that its headers and context can be copied to the new request\n+ */\n+ public SearchScrollRequest(ActionRequest request) {\n+ super(request);\n+ }\n+\n @Override\n public ActionRequestValidationException validate() {\n ActionRequestValidationException validationException = null;", "filename": "core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java", "status": "modified" }, { "diff": "@@ -27,13 +27,7 @@\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.delete.DeleteResponse;\n-import org.elasticsearch.action.search.ClearScrollResponse;\n-import org.elasticsearch.action.search.SearchRequest;\n-import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.action.search.SearchScrollRequest;\n-import org.elasticsearch.action.search.ShardSearchFailure;\n-import org.elasticsearch.action.search.TransportSearchAction;\n-import org.elasticsearch.action.search.TransportSearchScrollAction;\n+import org.elasticsearch.action.search.*;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n import org.elasticsearch.client.Client;\n@@ -48,10 +42,7 @@\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicLong;\n \n@@ -109,8 +100,11 @@ public void start() {\n \n void executeScan() {\n try {\n- final SearchRequest scanRequest = new SearchRequest(request.indices()).types(request.types()).indicesOptions(request.indicesOptions());\n- scanRequest.scroll(request.scroll());\n+ final SearchRequest scanRequest = new SearchRequest(request)\n+ .indices(request.indices())\n+ .types(request.types())\n+ 
.indicesOptions(request.indicesOptions())\n+ .scroll(request.scroll());\n if (request.routing() != null) {\n scanRequest.routing(request.routing());\n }\n@@ -119,7 +113,8 @@ void executeScan() {\n fields.add(\"_routing\");\n fields.add(\"_parent\");\n SearchSourceBuilder source = new SearchSourceBuilder()\n-.query(request.query()).fields(fields)\n+ .query(request.query())\n+ .fields(fields)\n .sort(\"_doc\") // important for performance\n .fetchSource(false)\n .version(true);\n@@ -155,7 +150,7 @@ public void onFailure(Throwable e) {\n void executeScroll(final String scrollId) {\n try {\n logger.trace(\"executing scroll request [{}]\", scrollId);\n- scrollAction.execute(new SearchScrollRequest(scrollId).scroll(request.scroll()), new ActionListener<SearchResponse>() {\n+ scrollAction.execute(new SearchScrollRequest(request).scrollId(scrollId).scroll(request.scroll()), new ActionListener<SearchResponse>() {\n @Override\n public void onResponse(SearchResponse scrollResponse) {\n deleteHits(scrollId, scrollResponse);\n@@ -197,9 +192,9 @@ void deleteHits(String scrollId, SearchResponse scrollResponse) {\n }\n \n // Delete the scrolled documents using the Bulk API\n- BulkRequest bulkRequest = new BulkRequest();\n+ BulkRequest bulkRequest = new BulkRequest(request);\n for (SearchHit doc : docs) {\n- DeleteRequest delete = new DeleteRequest(doc.index(), doc.type(), doc.id()).version(doc.version());\n+ DeleteRequest delete = new DeleteRequest(request).index(doc.index()).type(doc.type()).id(doc.id()).version(doc.version());\n SearchHitField routing = doc.field(\"_routing\");\n if (routing != null) {\n delete.routing((String) routing.value());\n@@ -283,7 +278,9 @@ void finishHim(final String scrollId, boolean scrollTimedOut, Throwable failure)\n }\n \n if (Strings.hasText(scrollId)) {\n- client.prepareClearScroll().addScrollId(scrollId).execute(new ActionListener<ClearScrollResponse>() {\n+ ClearScrollRequest clearScrollRequest = new ClearScrollRequest(request);\n+ clearScrollRequest.addScrollId(scrollId);\n+ client.clearScroll(clearScrollRequest, new ActionListener<ClearScrollResponse>() {\n @Override\n public void onResponse(ClearScrollResponse clearScrollResponse) {\n logger.trace(\"scroll id [{}] cleared\", scrollId);", "filename": "plugins/delete-by-query/src/main/java/org/elasticsearch/action/deletebyquery/TransportDeleteByQueryAction.java", "status": "modified" } ] }
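A hedged sketch of the pattern the fix relies on: each derived request (scan, scroll, bulk) is constructed from the original delete-by-query request so that its headers and context, such as Shield credentials, are copied along instead of being dropped. The request classes and constructors match the 2.x APIs shown in the diff above; the wrapper class, method names, and the `original` parameter are illustrative.

```java
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchScrollRequest;

public class DerivedRequestExample {

    // Each sub-request is created from the original request so its headers/context
    // (e.g. Shield authentication) travel with it rather than being lost.
    static SearchRequest newScanRequest(ActionRequest original, String index) {
        return new SearchRequest(original).indices(index);
    }

    static SearchScrollRequest newScrollRequest(ActionRequest original, String scrollId) {
        return new SearchScrollRequest(original).scrollId(scrollId);
    }

    static BulkRequest newBulkRequest(ActionRequest original) {
        return new BulkRequest(original);
    }
}
```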
{ "body": "Currently, disruption rules apply only to the main bound address of a node. As consequence, integration tests that use Network Partitions must currently ensure that nodes are bound to a single local interface (e.g. 127.0.0.1 / IPv4) in order to correctly work. This is ugly as it requires setting `network.host` explicitly to `127.0.0.1` in all such tests.\n\nImplementation details:\nUnicastZenPing creates fake DiscoveryNode for each local network interface. If multiple interfaces are active, it creates one for IPv4 and one for IPv6 for each node. To add disruptions, MockTransportService provides methods that take as parameter a node to disrupt, but this only adds disruptions rules that match on the main bound address `DiscoveryNode.address()` of the node. This means that no rules exist to intercept connections trough other bound addresses.\n\nSolution:\nDisruption rules in MockTransportService should match all bound addresses of a node.\n\nAs future enhancement, the interface of MockTransportService / NetworkPartition could also be expanded so that disruptions can be specified directly on a per TransportAddress level. This provides more fine-granular testing possibilities (where connection over one interface works but the other not).\n", "comments": [ { "body": "Thanks for digging @ywelsch . Makes sense.\n", "created_at": "2015-11-09T17:12:12Z" } ], "number": 14625, "title": "Disruption rules in MockTransportService should match all bound addresses of a node" }
{ "body": "Currently, disruption rules apply only to the main bound address of a node. As consequence, integration tests that use Network Partitions must currently ensure that nodes are bound to a single local interface (e.g. 127.0.0.1 / IPv4) in order to correctly work. \n\nThis PR changes the disruption rules to work on all transport addresses that are bound by a node.\n\nRelates to #14625 \n", "number": 14653, "review_comments": [ { "body": "can we add java docs? this starts to be confusing... \n", "created_at": "2015-11-10T13:44:44Z" }, { "body": "do we also want to have the publish address just to be sure? that's what nodes typically use to communicate with each other.\n", "created_at": "2015-11-10T13:57:25Z" }, { "body": "same here - what about publish address?\n", "created_at": "2015-11-10T13:58:28Z" }, { "body": "can we document transportAddress, how we extract addresses from it and how it will affect the sendRequest command (which uses a disco node)?\n", "created_at": "2015-11-10T13:59:21Z" }, { "body": "publish address should normally be one of the bound transport addresses. If not, we run into some trouble in LookupTestTransport which uses publish address to determine transport. How about (instead of opening a can of worms) adding a check that requires publish address to be part of bound addresses, and fail otherwise?\n", "created_at": "2015-11-10T14:40:00Z" }, { "body": "I'm a bit reluctant because,\n\n> publish address should normally be one of the bound transport addresses. \n\nis not always true (the reason why we make this distinction is to allow setting it to something else). How about throwing it all in a set and let the addresses de-dup themselves? \n", "created_at": "2015-11-10T15:35:15Z" }, { "body": "I think the extraction here is redundant no - just pass the transport service... \n", "created_at": "2015-11-11T12:50:50Z" } ], "title": "Disruption rules in MockTransportService should match all bound addresses of a node." }
{ "commits": [ { "message": "[TEST] Use TransportService/TransportAddress instead of DiscoveryNode for disruption rules\n\nThe disruption rules are changed to work on all transport addresses that are bound by a node (not only publish address).\nThis is important as UnicastZenPing creates fake DiscoveryNode instances which match one of the bound addresses and not necessarily the publish address.\n\nCloses #14625\nCloses #14653" } ], "files": [ { "diff": "@@ -47,7 +47,6 @@ public void testMasterFailoverDuringIndexingWithMappingChanges() throws Throwabl\n .put(\"discovery.zen.join_timeout\", \"10s\") // still long to induce failures but to long so test won't time out\n .put(DiscoverySettings.PUBLISH_TIMEOUT, \"1s\") // <-- for hitting simulated network failures quickly\n .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, 2)\n- .put(\"transport.host\", \"127.0.0.1\") // only bind on one IF we use v4 here by default\n .build();\n \n internalCluster().startMasterOnlyNodesAsync(3, sharedSettings).get();", "filename": "core/src/test/java/org/elasticsearch/action/support/master/IndexingMasterFailoverIT.java", "status": "modified" }, { "diff": "@@ -207,7 +207,7 @@ public void testClusterInfoServiceInformationClearOnError() throws InterruptedEx\n final Set<String> blockedActions = newHashSet(NodesStatsAction.NAME, NodesStatsAction.NAME + \"[n]\", IndicesStatsAction.NAME, IndicesStatsAction.NAME + \"[n]\");\n // drop all outgoing stats requests to force a timeout.\n for (DiscoveryNode node : internalTestCluster.clusterService().state().getNodes()) {\n- mockTransportService.addDelegate(node, new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+ mockTransportService.addDelegate(internalTestCluster.getInstance(TransportService.class, node.getName()), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request,\n TransportRequestOptions options) throws IOException, TransportException {", "filename": "core/src/test/java/org/elasticsearch/cluster/ClusterInfoServiceIT.java", "status": "modified" }, { "diff": "@@ -163,9 +163,6 @@ private List<String> startCluster(int numberOfNodes, int minimumMasterNode, @Nul\n .put(\"discovery.zen.join_timeout\", \"10s\") // still long to induce failures but to long so test won't time out\n .put(DiscoverySettings.PUBLISH_TIMEOUT, \"1s\") // <-- for hitting simulated network failures quickly\n .put(\"http.enabled\", false) // just to make test quicker\n- .put(\"transport.host\", \"127.0.0.1\") // only bind on one IF we use v4 here by default\n- .put(\"transport.bind_host\", \"127.0.0.1\")\n- .put(\"transport.publish_host\", \"127.0.0.1\")\n .put(\"gateway.local.list_timeout\", \"10s\") // still long to induce failures but to long so test won't time out\n .build();\n \n@@ -844,23 +841,26 @@ public void testClusterJoinDespiteOfPublishingIssues() throws Exception {\n \n DiscoveryNodes discoveryNodes = internalCluster().getInstance(ClusterService.class, nonMasterNode).state().nodes();\n \n+ TransportService masterTranspotService = internalCluster().getInstance(TransportService.class, discoveryNodes.masterNode().getName());\n+\n logger.info(\"blocking requests from non master [{}] to master [{}]\", nonMasterNode, masterNode);\n MockTransportService nonMasterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, nonMasterNode);\n- 
nonMasterTransportService.addFailToSendNoConnectRule(discoveryNodes.masterNode());\n+ nonMasterTransportService.addFailToSendNoConnectRule(masterTranspotService);\n \n assertNoMaster(nonMasterNode);\n \n logger.info(\"blocking cluster state publishing from master [{}] to non master [{}]\", masterNode, nonMasterNode);\n MockTransportService masterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, masterNode);\n+ TransportService localTransportService = internalCluster().getInstance(TransportService.class, discoveryNodes.localNode().getName());\n if (randomBoolean()) {\n- masterTransportService.addFailToSendNoConnectRule(discoveryNodes.localNode(), PublishClusterStateAction.SEND_ACTION_NAME);\n+ masterTransportService.addFailToSendNoConnectRule(localTransportService, PublishClusterStateAction.SEND_ACTION_NAME);\n } else {\n- masterTransportService.addFailToSendNoConnectRule(discoveryNodes.localNode(), PublishClusterStateAction.COMMIT_ACTION_NAME);\n+ masterTransportService.addFailToSendNoConnectRule(localTransportService, PublishClusterStateAction.COMMIT_ACTION_NAME);\n }\n \n logger.info(\"allowing requests from non master [{}] to master [{}], waiting for two join request\", nonMasterNode, masterNode);\n final CountDownLatch countDownLatch = new CountDownLatch(2);\n- nonMasterTransportService.addDelegate(discoveryNodes.masterNode(), new MockTransportService.DelegateTransport(nonMasterTransportService.original()) {\n+ nonMasterTransportService.addDelegate(masterTranspotService, new MockTransportService.DelegateTransport(nonMasterTransportService.original()) {\n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n if (action.equals(MembershipAction.DISCOVERY_JOIN_ACTION_NAME)) {\n@@ -873,8 +873,8 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n countDownLatch.await();\n \n logger.info(\"waiting for cluster to reform\");\n- masterTransportService.clearRule(discoveryNodes.localNode());\n- nonMasterTransportService.clearRule(discoveryNodes.masterNode());\n+ masterTransportService.clearRule(localTransportService);\n+ nonMasterTransportService.clearRule(localTransportService);\n \n ensureStableCluster(2);\n \n@@ -924,9 +924,9 @@ public void testNodeNotReachableFromMaster() throws Exception {\n logger.info(\"blocking request from master [{}] to [{}]\", masterNode, nonMasterNode);\n MockTransportService masterTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, masterNode);\n if (randomBoolean()) {\n- masterTransportService.addUnresponsiveRule(internalCluster().getInstance(ClusterService.class, nonMasterNode).localNode());\n+ masterTransportService.addUnresponsiveRule(internalCluster().getInstance(TransportService.class, nonMasterNode));\n } else {\n- masterTransportService.addFailToSendNoConnectRule(internalCluster().getInstance(ClusterService.class, nonMasterNode).localNode());\n+ masterTransportService.addFailToSendNoConnectRule(internalCluster().getInstance(TransportService.class, nonMasterNode));\n }\n \n logger.info(\"waiting for [{}] to be removed from cluster\", nonMasterNode);", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" }, { "diff": "@@ -476,7 +476,7 @@ public void testPrimaryRelocationWhereRecoveryFails() throws Exception {\n final AtomicBoolean 
keepFailing = new AtomicBoolean(true);\n \n MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, node1));\n- mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, node3).localNode(),\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, node3),\n new MockTransportService.DelegateTransport(mockTransportService.original()) {\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java", "status": "modified" }, { "diff": "@@ -115,12 +115,12 @@ public void testNetworkPartitionDuringReplicaIndexOp() throws Exception {\n logger.info(\"--> preventing index/replica operations\");\n TransportService mockTransportService = internalCluster().getInstance(TransportService.class, primaryNode);\n ((MockTransportService) mockTransportService).addFailToSendNoConnectRule(\n- internalCluster().getInstance(Discovery.class, replicaNode).localNode(),\n+ internalCluster().getInstance(TransportService.class, replicaNode),\n singleton(IndexAction.NAME + \"[r]\")\n );\n mockTransportService = internalCluster().getInstance(TransportService.class, replicaNode);\n ((MockTransportService) mockTransportService).addFailToSendNoConnectRule(\n- internalCluster().getInstance(Discovery.class, primaryNode).localNode(),\n+ internalCluster().getInstance(TransportService.class, primaryNode),\n singleton(IndexAction.NAME + \"[r]\")\n );\n ", "filename": "core/src/test/java/org/elasticsearch/index/TransportIndexFailuresIT.java", "status": "modified" }, { "diff": "@@ -335,7 +335,7 @@ public void testCorruptionOnNetworkLayerFinalizingRecovery() throws ExecutionExc\n final CountDownLatch hasCorrupted = new CountDownLatch(1);\n for (NodeStats dataNode : dataNodeStats) {\n MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, dataNode.getNode().name()));\n- mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, unluckyNode.getNode().name()).localNode(), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, unluckyNode.getNode().name()), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n \n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n@@ -407,7 +407,7 @@ public void testCorruptionOnNetworkLayer() throws ExecutionException, Interrupte\n final boolean truncate = randomBoolean();\n for (NodeStats dataNode : dataNodeStats) {\n MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, dataNode.getNode().name()));\n- mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, unluckyNode.getNode().name()).localNode(), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, unluckyNode.getNode().name()), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n \n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {", "filename": 
"core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java", "status": "modified" }, { "diff": "@@ -83,7 +83,7 @@ public void testRetryDueToExceptionOnNetworkLayer() throws ExecutionException, I\n //create a transport service that throws a ConnectTransportException for one bulk request and therefore triggers a retry.\n for (NodeStats dataNode : nodeStats.getNodes()) {\n MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, dataNode.getNode().name()));\n- mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, unluckyNode.getNode().name()).localNode(), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, unluckyNode.getNode().name()), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n \n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {", "filename": "core/src/test/java/org/elasticsearch/index/store/ExceptionRetryIT.java", "status": "modified" }, { "diff": "@@ -580,12 +580,12 @@ public void testDisconnectsWhileRecovering() throws Exception {\n \n MockTransportService blueMockTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, blueNodeName);\n MockTransportService redMockTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, redNodeName);\n- DiscoveryNode redDiscoNode = internalCluster().getInstance(ClusterService.class, redNodeName).localNode();\n- DiscoveryNode blueDiscoNode = internalCluster().getInstance(ClusterService.class, blueNodeName).localNode();\n+ TransportService redTransportService = internalCluster().getInstance(TransportService.class, redNodeName);\n+ TransportService blueTransportService = internalCluster().getInstance(TransportService.class, blueNodeName);\n final CountDownLatch requestBlocked = new CountDownLatch(1);\n \n- blueMockTransportService.addDelegate(redDiscoNode, new RecoveryActionBlocker(dropRequests, recoveryActionToBlock, blueMockTransportService.original(), requestBlocked));\n- redMockTransportService.addDelegate(blueDiscoNode, new RecoveryActionBlocker(dropRequests, recoveryActionToBlock, redMockTransportService.original(), requestBlocked));\n+ blueMockTransportService.addDelegate(redTransportService, new RecoveryActionBlocker(dropRequests, recoveryActionToBlock, blueMockTransportService.original(), requestBlocked));\n+ redMockTransportService.addDelegate(blueTransportService, new RecoveryActionBlocker(dropRequests, recoveryActionToBlock, redMockTransportService.original(), requestBlocked));\n \n logger.info(\"--> starting recovery from blue to red\");\n client().admin().indices().prepareUpdateSettings(indexName).setSettings(", "filename": "core/src/test/java/org/elasticsearch/indices/recovery/IndexRecoveryIT.java", "status": "modified" }, { "diff": "@@ -195,10 +195,9 @@ public void testShardCleanupIfShardDeletionAfterRelocationFailedAndIndexDeleted(\n // add a transport delegate that will prevent the shard active request to succeed the first time after relocation has finished.\n // node_1 will then wait for the next cluster state change before it tries a next attempt to delet the shard.\n MockTransportService transportServiceNode_1 = (MockTransportService) 
internalCluster().getInstance(TransportService.class, node_1);\n- String node_2_id = internalCluster().getInstance(DiscoveryService.class, node_2).localNode().id();\n- DiscoveryNode node_2_disco = internalCluster().clusterService().state().getNodes().dataNodes().get(node_2_id);\n+ TransportService transportServiceNode_2 = internalCluster().getInstance(TransportService.class, node_2);\n final CountDownLatch shardActiveRequestSent = new CountDownLatch(1);\n- transportServiceNode_1.addDelegate(node_2_disco, new MockTransportService.DelegateTransport(transportServiceNode_1.original()) {\n+ transportServiceNode_1.addDelegate(transportServiceNode_2, new MockTransportService.DelegateTransport(transportServiceNode_1.original()) {\n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n if (action.equals(\"internal:index/shard/exists\") && shardActiveRequestSent.getCount() > 0) {", "filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java", "status": "modified" }, { "diff": "@@ -377,7 +377,7 @@ public void testCancellationCleansTempFiles() throws Exception {\n MockTransportService mockTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, p_node);\n for (DiscoveryNode node : clusterService.state().nodes()) {\n if (!node.equals(clusterService.localNode())) {\n- mockTransportService.addDelegate(node, new RecoveryCorruption(mockTransportService.original(), corruptionCount));\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, node.getName()), new RecoveryCorruption(mockTransportService.original(), corruptionCount));\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/recovery/RelocationIT.java", "status": "modified" }, { "diff": "@@ -121,7 +121,7 @@ public void testCancelRecoveryAndResume() throws Exception {\n final AtomicBoolean truncate = new AtomicBoolean(true);\n for (NodeStats dataNode : dataNodeStats) {\n MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, dataNode.getNode().name()));\n- mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, unluckyNode.getNode().name()).localNode(), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+ mockTransportService.addDelegate(internalCluster().getInstance(TransportService.class, unluckyNode.getNode().name()), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n \n @Override\n public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {", "filename": "core/src/test/java/org/elasticsearch/recovery/TruncatedRecoveryIT.java", "status": "modified" }, { "diff": "@@ -1046,7 +1046,7 @@ public void messageReceived(StringMessageRequest request, TransportChannel chann\n }\n });\n \n- serviceB.addFailToSendNoConnectRule(nodeA);\n+ serviceB.addFailToSendNoConnectRule(serviceA);\n \n TransportFuture<StringMessageResponse> res = serviceB.submitRequest(nodeA, \"sayHello\",\n new StringMessageRequest(\"moshe\"), new BaseTransportResponseHandler<StringMessageResponse>() {\n@@ -1104,7 +1104,7 @@ public void messageReceived(StringMessageRequest request, TransportChannel chann\n }\n });\n \n- serviceB.addUnresponsiveRule(nodeA);\n+ 
serviceB.addUnresponsiveRule(serviceA);\n \n TransportFuture<StringMessageResponse> res = serviceB.submitRequest(nodeA, \"sayHello\",\n new StringMessageRequest(\"moshe\"), TransportRequestOptions.options().withTimeout(100), new BaseTransportResponseHandler<StringMessageResponse>() {", "filename": "core/src/test/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java", "status": "modified" }, { "diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.test.disruption;\n \n-import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.transport.MockTransportService;\n \n@@ -78,10 +77,9 @@ public synchronized void startDisrupting() {\n }\n \n @Override\n- void applyDisruption(DiscoveryNode node1, MockTransportService transportService1,\n- DiscoveryNode node2, MockTransportService transportService2) {\n- transportService1.addUnresponsiveRule(node1, duration);\n- transportService1.addUnresponsiveRule(node2, duration);\n+ void applyDisruption(MockTransportService transportService1, MockTransportService transportService2) {\n+ transportService1.addUnresponsiveRule(transportService1, duration);\n+ transportService1.addUnresponsiveRule(transportService2, duration);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/disruption/NetworkDelaysPartition.java", "status": "modified" }, { "diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.test.disruption;\n \n-import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.transport.MockTransportService;\n \n@@ -46,10 +45,9 @@ protected String getPartitionDescription() {\n }\n \n @Override\n- void applyDisruption(DiscoveryNode node1, MockTransportService transportService1,\n- DiscoveryNode node2, MockTransportService transportService2) {\n- transportService1.addFailToSendNoConnectRule(node2);\n- transportService2.addFailToSendNoConnectRule(node1);\n+ void applyDisruption(MockTransportService transportService1, MockTransportService transportService2) {\n+ transportService1.addFailToSendNoConnectRule(transportService2);\n+ transportService2.addFailToSendNoConnectRule(transportService1);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/disruption/NetworkDisconnectPartition.java", "status": "modified" }, { "diff": "@@ -18,18 +18,15 @@\n */\n package org.elasticsearch.test.disruption;\n \n-import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n-import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.Collection;\n import java.util.Collections;\n import java.util.HashSet;\n-import java.util.List;\n import java.util.Random;\n import java.util.Set;\n \n@@ -140,7 +137,6 @@ public synchronized void applyToNode(String node, InternalTestCluster cluster) {\n @Override\n public synchronized void removeFromNode(String node, InternalTestCluster cluster) {\n MockTransportService transportService = (MockTransportService) cluster.getInstance(TransportService.class, node);\n- DiscoveryNode discoveryNode = discoveryNode(node);\n Set<String> otherSideNodes;\n if (nodesSideOne.contains(node)) {\n otherSideNodes = nodesSideTwo;\n@@ -153,8 +149,7 @@ public 
synchronized void removeFromNode(String node, InternalTestCluster cluster\n }\n for (String node2 : otherSideNodes) {\n MockTransportService transportService2 = (MockTransportService) cluster.getInstance(TransportService.class, node2);\n- DiscoveryNode discoveryNode2 = discoveryNode(node2);\n- removeDisruption(discoveryNode, transportService, discoveryNode2, transportService2);\n+ removeDisruption(transportService, transportService2);\n }\n }\n \n@@ -165,11 +160,6 @@ public synchronized void testClusterClosed() {\n \n protected abstract String getPartitionDescription();\n \n-\n- protected DiscoveryNode discoveryNode(String node) {\n- return cluster.getInstance(Discovery.class, node).localNode();\n- }\n-\n @Override\n public synchronized void startDisrupting() {\n if (nodesSideOne.size() == 0 || nodesSideTwo.size() == 0) {\n@@ -179,11 +169,9 @@ public synchronized void startDisrupting() {\n activeDisruption = true;\n for (String node1 : nodesSideOne) {\n MockTransportService transportService1 = (MockTransportService) cluster.getInstance(TransportService.class, node1);\n- DiscoveryNode discoveryNode1 = discoveryNode(node1);\n for (String node2 : nodesSideTwo) {\n- DiscoveryNode discoveryNode2 = discoveryNode(node2);\n MockTransportService transportService2 = (MockTransportService) cluster.getInstance(TransportService.class, node2);\n- applyDisruption(discoveryNode1, transportService1, discoveryNode2, transportService2);\n+ applyDisruption(transportService1, transportService2);\n }\n }\n }\n@@ -197,24 +185,20 @@ public synchronized void stopDisrupting() {\n logger.info(\"restoring partition between nodes {} & nodes {}\", nodesSideOne, nodesSideTwo);\n for (String node1 : nodesSideOne) {\n MockTransportService transportService1 = (MockTransportService) cluster.getInstance(TransportService.class, node1);\n- DiscoveryNode discoveryNode1 = discoveryNode(node1);\n for (String node2 : nodesSideTwo) {\n- DiscoveryNode discoveryNode2 = discoveryNode(node2);\n MockTransportService transportService2 = (MockTransportService) cluster.getInstance(TransportService.class, node2);\n- removeDisruption(discoveryNode1, transportService1, discoveryNode2, transportService2);\n+ removeDisruption(transportService1, transportService2);\n }\n }\n activeDisruption = false;\n }\n \n- abstract void applyDisruption(DiscoveryNode node1, MockTransportService transportService1,\n- DiscoveryNode node2, MockTransportService transportService2);\n+ abstract void applyDisruption(MockTransportService transportService1, MockTransportService transportService2);\n \n \n- protected void removeDisruption(DiscoveryNode node1, MockTransportService transportService1,\n- DiscoveryNode node2, MockTransportService transportService2) {\n- transportService1.clearRule(node2);\n- transportService2.clearRule(node1);\n+ protected void removeDisruption(MockTransportService transportService1, MockTransportService transportService2) {\n+ transportService1.clearRule(transportService2);\n+ transportService2.clearRule(transportService1);\n }\n \n }", "filename": "test-framework/src/main/java/org/elasticsearch/test/disruption/NetworkPartition.java", "status": "modified" }, { "diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.test.disruption;\n \n-import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.test.transport.MockTransportService;\n \n@@ -45,10 +44,9 @@ protected String getPartitionDescription() {\n }\n \n @Override\n- void applyDisruption(DiscoveryNode node1, 
MockTransportService transportService1,\n- DiscoveryNode node2, MockTransportService transportService2) {\n- transportService1.addUnresponsiveRule(node2);\n- transportService2.addUnresponsiveRule(node1);\n+ void applyDisruption(MockTransportService transportService1, MockTransportService transportService2) {\n+ transportService1.addUnresponsiveRule(transportService2);\n+ transportService2.addUnresponsiveRule(transportService1);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/disruption/NetworkUnresponsivePartition.java", "status": "modified" }, { "diff": "@@ -55,6 +55,14 @@\n \n /**\n * A mock transport service that allows to simulate different network topology failures.\n+ * Internally it maps TransportAddress objects to rules that inject failures.\n+ * Adding rules for a node is done by adding rules for all bound addresses of a node\n+ * (and the publish address, if different).\n+ * Matching requests to rules is based on the transport address associated with the\n+ * discovery node of the request, namely by DiscoveryNode.getAddress().\n+ * This address is usually the publish address of the node but can also be a different one\n+ * (for example, @see org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing, which constructs\n+ * fake DiscoveryNode instances where the publish address is one of the bound addresses).\n */\n public class MockTransportService extends TransportService {\n \n@@ -82,7 +90,14 @@ public Settings additionalSettings() {\n public MockTransportService(Settings settings, Transport transport, ThreadPool threadPool) {\n super(settings, new LookupTestTransport(transport), threadPool);\n this.original = transport;\n+ }\n \n+ public static TransportAddress[] extractTransportAddresses(TransportService transportService) {\n+ HashSet<TransportAddress> transportAddresses = new HashSet<>();\n+ BoundTransportAddress boundTransportAddress = transportService.boundAddress();\n+ transportAddresses.addAll(Arrays.asList(boundTransportAddress.boundAddresses()));\n+ transportAddresses.add(boundTransportAddress.publishAddress());\n+ return transportAddresses.toArray(new TransportAddress[transportAddresses.size()]);\n }\n \n /**\n@@ -93,10 +108,19 @@ public void clearAllRules() {\n }\n \n /**\n- * Clears the rule associated with the provided node.\n+ * Clears the rule associated with the provided transport service.\n+ */\n+ public void clearRule(TransportService transportService) {\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ clearRule(transportAddress);\n+ }\n+ }\n+\n+ /**\n+ * Clears the rule associated with the provided transport address.\n */\n- public void clearRule(DiscoveryNode node) {\n- transport().transports.remove(node.getAddress());\n+ public void clearRule(TransportAddress transportAddress) {\n+ transport().transports.remove(transportAddress);\n }\n \n /**\n@@ -110,8 +134,18 @@ public Transport original() {\n * Adds a rule that will cause every send request to fail, and each new connect since the rule\n * is added to fail as well.\n */\n- public void addFailToSendNoConnectRule(DiscoveryNode node) {\n- addDelegate(node, new DelegateTransport(original) {\n+ public void addFailToSendNoConnectRule(TransportService transportService) {\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ addFailToSendNoConnectRule(transportAddress);\n+ }\n+ }\n+\n+ /**\n+ * Adds a rule that will cause every send request to fail, and each new connect since the rule\n+ * is 
added to fail as well.\n+ */\n+ public void addFailToSendNoConnectRule(TransportAddress transportAddress) {\n+ addDelegate(transportAddress, new DelegateTransport(original) {\n @Override\n public void connectToNode(DiscoveryNode node) throws ConnectTransportException {\n throw new ConnectTransportException(node, \"DISCONNECT: simulated\");\n@@ -132,16 +166,32 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n /**\n * Adds a rule that will cause matching operations to throw ConnectTransportExceptions\n */\n- public void addFailToSendNoConnectRule(DiscoveryNode node, final String... blockedActions) {\n- addFailToSendNoConnectRule(node, new HashSet<>(Arrays.asList(blockedActions)));\n+ public void addFailToSendNoConnectRule(TransportService transportService, final String... blockedActions) {\n+ addFailToSendNoConnectRule(transportService, new HashSet<>(Arrays.asList(blockedActions)));\n+ }\n+\n+ /**\n+ * Adds a rule that will cause matching operations to throw ConnectTransportExceptions\n+ */\n+ public void addFailToSendNoConnectRule(TransportAddress transportAddress, final String... blockedActions) {\n+ addFailToSendNoConnectRule(transportAddress, new HashSet<>(Arrays.asList(blockedActions)));\n+ }\n+\n+ /**\n+ * Adds a rule that will cause matching operations to throw ConnectTransportExceptions\n+ */\n+ public void addFailToSendNoConnectRule(TransportService transportService, final Set<String> blockedActions) {\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ addFailToSendNoConnectRule(transportAddress, blockedActions);\n+ }\n }\n \n /**\n * Adds a rule that will cause matching operations to throw ConnectTransportExceptions\n */\n- public void addFailToSendNoConnectRule(DiscoveryNode node, final Set<String> blockedActions) {\n+ public void addFailToSendNoConnectRule(TransportAddress transportAddress, final Set<String> blockedActions) {\n \n- addDelegate(node, new DelegateTransport(original) {\n+ addDelegate(transportAddress, new DelegateTransport(original) {\n @Override\n public void connectToNode(DiscoveryNode node) throws ConnectTransportException {\n original.connectToNode(node);\n@@ -167,8 +217,18 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n * Adds a rule that will cause ignores each send request, simulating an unresponsive node\n * and failing to connect once the rule was added.\n */\n- public void addUnresponsiveRule(DiscoveryNode node) {\n- addDelegate(node, new DelegateTransport(original) {\n+ public void addUnresponsiveRule(TransportService transportService) {\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ addUnresponsiveRule(transportAddress);\n+ }\n+ }\n+\n+ /**\n+ * Adds a rule that will cause ignores each send request, simulating an unresponsive node\n+ * and failing to connect once the rule was added.\n+ */\n+ public void addUnresponsiveRule(TransportAddress transportAddress) {\n+ addDelegate(transportAddress, new DelegateTransport(original) {\n @Override\n public void connectToNode(DiscoveryNode node) throws ConnectTransportException {\n throw new ConnectTransportException(node, \"UNRESPONSIVE: simulated\");\n@@ -192,10 +252,22 @@ public void sendRequest(DiscoveryNode node, long requestId, String action, Trans\n *\n * @param duration the amount of time to delay sending and connecting.\n */\n- public void addUnresponsiveRule(DiscoveryNode node, final TimeValue duration) {\n+ public void 
addUnresponsiveRule(TransportService transportService, final TimeValue duration) {\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ addUnresponsiveRule(transportAddress, duration);\n+ }\n+ }\n+\n+ /**\n+ * Adds a rule that will cause ignores each send request, simulating an unresponsive node\n+ * and failing to connect once the rule was added.\n+ *\n+ * @param duration the amount of time to delay sending and connecting.\n+ */\n+ public void addUnresponsiveRule(TransportAddress transportAddress, final TimeValue duration) {\n final long startTime = System.currentTimeMillis();\n \n- addDelegate(node, new DelegateTransport(original) {\n+ addDelegate(transportAddress, new DelegateTransport(original) {\n \n TimeValue getDelay() {\n return new TimeValue(duration.millis() - (System.currentTimeMillis() - startTime));\n@@ -280,12 +352,25 @@ protected void doRun() throws IOException {\n }\n \n /**\n- * Adds a new delegate transport that is used for communication with the given node.\n+ * Adds a new delegate transport that is used for communication with the given transport service.\n+ *\n+ * @return <tt>true</tt> iff no other delegate was registered for any of the addresses bound by transport service, otherwise <tt>false</tt>\n+ */\n+ public boolean addDelegate(TransportService transportService, DelegateTransport transport) {\n+ boolean noRegistered = true;\n+ for (TransportAddress transportAddress : extractTransportAddresses(transportService)) {\n+ noRegistered &= addDelegate(transportAddress, transport);\n+ }\n+ return noRegistered;\n+ }\n+\n+ /**\n+ * Adds a new delegate transport that is used for communication with the given transport address.\n *\n- * @return <tt>true</tt> iff no other delegate was registered for this node before, otherwise <tt>false</tt>\n+ * @return <tt>true</tt> iff no other delegate was registered for this address before, otherwise <tt>false</tt>\n */\n- public boolean addDelegate(DiscoveryNode node, DelegateTransport transport) {\n- return transport().transports.put(node.getAddress(), transport) == null;\n+ public boolean addDelegate(TransportAddress transportAddress, DelegateTransport transport) {\n+ return transport().transports.put(transportAddress, transport) == null;\n }\n \n private LookupTestTransport transport() {", "filename": "test-framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java", "status": "modified" } ] }
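A hedged sketch of how a test registers a disruption rule after this change: rules are added against a `TransportService`, which covers every bound address plus the publish address, rather than against a single `DiscoveryNode`. The `MockTransportService` methods are the ones added in the diff above; the wrapper class and helper names are illustrative.

```java
import org.elasticsearch.test.transport.MockTransportService;
import org.elasticsearch.transport.TransportService;

public class DisruptionRuleExample {

    // Blocks every request from sourceService to any address bound by targetService,
    // not just the target's publish address.
    static void isolate(MockTransportService sourceService, TransportService targetService) {
        sourceService.addFailToSendNoConnectRule(targetService);
    }

    // Removes the rules for all of targetService's addresses again.
    static void heal(MockTransportService sourceService, TransportService targetService) {
        sourceService.clearRule(targetService);
    }
}
```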
{ "body": "This commit prevents running rebalance operations if the store allocator is\nstill fetching async shard / store data to prevent pre-mature rebalance decisions\nwhich need to be reverted once shard store data is available. This is typically happening\non rolling restarts which can make those restarts extremely painful.\n\nCloses #14387\n", "comments": [ { "body": "I tried to implement this with the least impact possible code wise to safely backport to 1.7\n", "created_at": "2015-11-06T19:26:34Z" }, { "body": "@bleskes I pushed a new commit\n", "created_at": "2015-11-08T11:43:47Z" }, { "body": "looks great. Left one minor comment in ReplicaShardAllocator\n", "created_at": "2015-11-09T11:28:27Z" }, { "body": "@bleskes pushed a new commit\n", "created_at": "2015-11-09T12:02:00Z" }, { "body": "LGTM. Thanks @s1monw \n", "created_at": "2015-11-09T12:02:47Z" }, { "body": "@ywelsch you mentioned an integration test for this you added, is the test much different to my last commit (https://github.com/s1monw/elasticsearch/commit/7b5e323ec0581ce53d407e1de6f84709776ebfe0) and if so do you wanna add it to this PR?\n", "created_at": "2015-11-09T21:29:00Z" }, { "body": "My test looked similar (no need to add more of the same). I did not rely on delayed shard allocation though but explicitly disabled allocation for the restart. Maybe the test can be randomised to pick one of these approaches and also chose a random number of shards?\n", "created_at": "2015-11-10T09:05:22Z" }, { "body": "> Maybe the test can be randomised to pick one of these approaches and also chose a random number of shards?\n\nI don't like the randominzation aspect here. I think the test should be really stable to just test what it is supposed to test. \n", "created_at": "2015-11-10T10:54:06Z" } ], "number": 14591, "title": "Only allow rebalance operations to run if all shard store data is available" }
{ "body": "This commit prevents running rebalance operations if the store allocator is\nstill fetching async shard / store data to prevent pre-mature rebalance decisions\nwhich need to be reverted once shard store data is available. This is typically happening\non rolling restarts which can make those restarts extremely painful.\n\nCloses #14387\n\nthis is the 1.7 backport of #14591 \n", "number": 14652, "review_comments": [ { "body": ":)\n", "created_at": "2015-11-10T12:59:15Z" } ], "title": "Only allow rebalance operations to run if all shard store data is available" }
{ "commits": [ { "message": "Only allow rebalance operations to run if all shard store data is available\n\nThis commit prevents running rebalance operations if the store allocator is\nstill fetching async shard / store data to prevent pre-mature rebalance decisions\nwhich need to be reverted once shard store data is available. This is typically happening\non rolling restarts which can make those restarts extremely painful.\n\nCloses #14387" } ], "files": [ { "diff": "@@ -116,6 +116,9 @@ public RoutingExplanations explanations() {\n \n private boolean debugDecision = false;\n \n+ private boolean hasPendingAsyncFetch = false;\n+\n+\n /**\n * Creates a new {@link RoutingAllocation}\n * \n@@ -244,4 +247,20 @@ public Decision decision(Decision decision, String deciderLabel, String reason,\n return decision;\n }\n }\n+\n+ /**\n+ * Returns <code>true</code> iff the current allocation run has not processed all of the in-flight or available\n+ * shard or store fetches. Otherwise <code>true</code>\n+ */\n+ public boolean hasPendingAsyncFetch() {\n+ return hasPendingAsyncFetch;\n+ }\n+\n+ /**\n+ * Sets a flag that signals that current allocation run has not processed all of the in-flight or available shard or store fetches.\n+ * This state is anti-viral and can be reset in on allocation run.\n+ */\n+ public void setHasPendingAsyncFetch() {\n+ this.hasPendingAsyncFetch = true;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/RoutingAllocation.java", "status": "modified" }, { "diff": "@@ -124,7 +124,8 @@ public void applyFailedShards(FailedRerouteAllocation allocation) { /* ONLY FOR\n \n @Override\n public boolean allocateUnassigned(RoutingAllocation allocation) {\n- return rebalance(allocation);\n+ final Balancer balancer = new Balancer(logger, allocation, weightFunction, threshold);\n+ return balancer.allocateUnassigned();\n }\n \n @Override\n@@ -342,6 +343,15 @@ private static boolean lessThan(float delta, float threshold) {\n return delta <= (threshold + 0.001f);\n }\n \n+ /**\n+ * Allocates all possible unassigned shards\n+ * @return <code>true</code> if the current configuration has been\n+ * changed, otherwise <code>false</code>\n+ */\n+ final boolean allocateUnassigned() {\n+ return balance(true);\n+ }\n+\n /**\n * Balances the nodes on the cluster model according to the weight\n * function. 
The configured threshold is the minimum delta between the\n@@ -357,16 +367,24 @@ private static boolean lessThan(float delta, float threshold) {\n * changed, otherwise <code>false</code>\n */\n public boolean balance() {\n+ return balance(false);\n+ }\n+\n+ private boolean balance(boolean onlyAssign) {\n if (this.nodes.isEmpty()) {\n /* with no nodes this is pointless */\n return false;\n }\n if (logger.isTraceEnabled()) {\n- logger.trace(\"Start balancing cluster\");\n+ if (onlyAssign) {\n+ logger.trace(\"Start balancing cluster\");\n+ } else {\n+ logger.trace(\"Start assigning unassigned shards\");\n+ }\n }\n final RoutingNodes.UnassignedShards unassigned = routingNodes.unassigned().transactionBegin();\n boolean changed = initialize(routingNodes, unassigned);\n- if (!changed && allocation.deciders().canRebalance(allocation).type() == Type.YES) {\n+ if (onlyAssign == false && changed == false && allocation.deciders().canRebalance(allocation).type() == Type.YES) {\n NodeSorter sorter = newNodeSorter();\n if (nodes.size() > 1) { /* skip if we only have one node */\n for (String index : buildWeightOrderedIndidces(Operation.BALANCE, sorter)) {", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -78,7 +78,19 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n \n @Override\n public boolean rebalance(RoutingAllocation allocation) {\n- return allocator.rebalance(allocation);\n+ if (allocation.hasPendingAsyncFetch() == false) {\n+ /*\n+ * see https://github.com/elastic/elasticsearch/issues/14387\n+ * if we allow rebalance operations while we are still fetching shard store data\n+ * we might end up with unnecessary rebalance operations which can be super confusion/frustrating\n+ * since once the fetches come back we might just move all the shards back again.\n+ * Therefore we only do a rebalance if we have fetched all information.\n+ */\n+ return allocator.rebalance(allocation);\n+ } else {\n+ logger.debug(\"skipping rebalance due to in-flight shard/store fetches\");\n+ return false;\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/ShardsAllocators.java", "status": "modified" }, { "diff": "@@ -186,6 +186,7 @@ protected Settings getIndexSettings(String index) {\n AsyncShardFetch.FetchResult<TransportNodesListGatewayStartedShards.NodeLocalGatewayStartedShards> shardState = fetch.fetchData(nodes, metaData, allocation.getIgnoreNodes(shard.shardId()));\n if (shardState.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard started state\", shard);\n+ allocation.setHasPendingAsyncFetch();\n unassignedIterator.remove();\n routingNodes.ignoredUnassigned().add(shard);\n continue;\n@@ -422,6 +423,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n AsyncShardFetch.FetchResult<TransportNodesListShardStoreMetaData.NodeStoreFilesMetaData> shardStores = fetch.fetchData(nodes, metaData, allocation.getIgnoreNodes(shard.shardId()));\n if (shardStores.hasData() == false) {\n logger.trace(\"{}: ignoring allocation, still fetching shard stores\", shard);\n+ allocation.setHasPendingAsyncFetch();\n unassignedIterator.remove();\n routingNodes.ignoredUnassigned().add(shard);\n continue; // still fetching", "filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java", "status": "modified" }, { "diff": "@@ -19,18 +19,23 @@\n \n package org.elasticsearch.cluster.routing.allocation;\n 
\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n+import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.gateway.none.NoneGatewayAllocator;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.junit.Test;\n \n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.hamcrest.Matchers.anyOf;\n@@ -629,4 +634,93 @@ public void testClusterAllActive3() {\n \n assertThat(routingNodes.node(\"node3\").isEmpty(), equalTo(true));\n }\n+\n+ public void testRebalanceWhileShardFetching() {\n+ final AtomicBoolean hasFetches = new AtomicBoolean(true);\n+ AllocationService strategy = createAllocationService(settingsBuilder().put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE,\n+ ClusterRebalanceAllocationDecider.ClusterRebalanceType.ALWAYS.toString()).build(), new NoneGatewayAllocator() {\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ if (hasFetches.get()) {\n+ allocation.setHasPendingAsyncFetch();\n+ }\n+ return super.allocateUnassigned(allocation);\n+ }\n+ });\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(2).numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"test1\").settings(settingsBuilder().put(FilterAllocationDecider.INDEX_ROUTING_EXCLUDE_GROUP + \"_id\", \"node1,node2\")).numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ // we use a second index here (test1) that never gets assigned otherwise allocateUnassinged is never called if we don't have unassigned shards.\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .addAsNew(metaData.index(\"test1\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"start two nodes\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(INITIALIZING));\n+ }\n+\n+ logger.debug(\"start all the primary shards for test\");\n+ RoutingNodes routingNodes = clusterState.getRoutingNodes();\n+ routingTable = strategy.applyStartedShards(clusterState, routingNodes.shardsWithState(\"test\", INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < 
routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+\n+ logger.debug(\"now, start 1 more node, check that rebalancing will not happen since we have shard sync going on\");\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes())\n+ .put(newNode(\"node2\")))\n+ .build();\n+ logger.debug(\"reroute and check that nothing has changed\");\n+ RoutingAllocation.Result reroute = strategy.reroute(clusterState);\n+ assertFalse(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test\").shard(i).primaryShard().state(), equalTo(STARTED));\n+ }\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(UNASSIGNED));\n+ }\n+ logger.debug(\"now set hasFetches to true and reroute we should now see exactly one relocating shard\");\n+ hasFetches.set(false);\n+ reroute = strategy.reroute(clusterState);\n+ assertTrue(reroute.changed());\n+ routingTable = reroute.routingTable();\n+ int numStarted = 0;\n+ int numRelocating = 0;\n+ for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n+\n+ assertThat(routingTable.index(\"test\").shard(i).shards().size(), equalTo(1));\n+ if (routingTable.index(\"test\").shard(i).primaryShard().state() == STARTED) {\n+ numStarted++;\n+ } else if (routingTable.index(\"test\").shard(i).primaryShard().state() == RELOCATING) {\n+ numRelocating++;\n+ }\n+ }\n+ for (int i = 0; i < routingTable.index(\"test1\").shards().size(); i++) {\n+ assertThat(routingTable.index(\"test1\").shard(i).shards().size(), equalTo(1));\n+ assertThat(routingTable.index(\"test1\").shard(i).primaryShard().state(), equalTo(UNASSIGNED));\n+ }\n+ assertEquals(numStarted, 1);\n+ assertEquals(numRelocating, 1);\n+\n+ }\n }\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ClusterRebalanceRoutingTests.java", "status": "modified" }, { "diff": "@@ -22,11 +22,18 @@\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n+import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n+import org.elasticsearch.action.admin.indices.recovery.ShardRecoveryResponse;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import 
org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.junit.Test;\n@@ -130,4 +137,40 @@ public void testFullRollingRestart() throws Exception {\n assertHitCount(client().prepareCount().setQuery(matchAllQuery()).get(), 2000l);\n }\n }\n+\n+ @Slow\n+ public void testNoRebalanceOnRollingRestart() throws Exception {\n+ // see https://github.com/elastic/elasticsearch/issues/14387\n+ internalCluster().startNode(ImmutableSettings.settingsBuilder().put(\"node.master\", true).put(\"node.data\", false).put(\"gateway.type\", \"local\").build());\n+ internalCluster().startNodesAsync(3, ImmutableSettings.settingsBuilder().put(\"node.master\", false).put(\"gateway.type\", \"local\").build()).get();\n+\n+ /**\n+ * We start 3 nodes and a dedicated master. Restart on of the data-nodes and ensure that we got no relocations.\n+ * Yet we have 6 shards 0 replica so that means if the restarting node comes back both other nodes are subject\n+ * to relocating to the restarting node since all had 2 shards and now one node has nothing allocated.\n+ * We have a fix for this to wait until we have allocated unallocated shards now so this shouldn't happen.\n+ */\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, \"6\").put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, \"0\").put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, TimeValue.timeValueMinutes(1))).get();\n+\n+ for (int i = 0; i < 100; i++) {\n+ client().prepareIndex(\"test\", \"type1\", Long.toString(i))\n+ .setSource(MapBuilder.<String, Object>newMapBuilder().put(\"test\", \"value\" + i).map()).execute().actionGet();\n+ }\n+ ensureGreen();\n+ ClusterState state = client().admin().cluster().prepareState().get().getState();\n+ RecoveryResponse recoveryResponse = client().admin().indices().prepareRecoveries(\"test\").get();\n+ for (ShardRecoveryResponse response : recoveryResponse.shardResponses().get(\"test\")) {\n+ RecoveryState recoveryState = response.recoveryState();\n+ assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode() + \"\\n\" + state.prettyPrint(), recoveryState.getType() != RecoveryState.Type.RELOCATION);\n+ }\n+ internalCluster().restartRandomDataNode();\n+ ensureGreen();\n+ ClusterState afterState = client().admin().cluster().prepareState().get().getState();\n+\n+ recoveryResponse = client().admin().indices().prepareRecoveries(\"test\").get();\n+ for (ShardRecoveryResponse response : recoveryResponse.shardResponses().get(\"test\")) {\n+ RecoveryState recoveryState = response.recoveryState();\n+ assertTrue(\"relocated from: \" + recoveryState.getSourceNode() + \" to: \" + recoveryState.getTargetNode()+ \"-- \\nbefore: \\n\" + state.prettyPrint() + \"\\nafter: \\n\" + afterState.prettyPrint(), recoveryState.getType() != RecoveryState.Type.RELOCATION);\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/recovery/FullRollingRestartTests.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.cluster.routing.MutableShardRouting;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;\n+import org.elasticsearch.cluster.routing.allocation.allocator.GatewayAllocator;\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocators;\n import 
org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;\n@@ -57,6 +59,12 @@ public static AllocationService createAllocationService(Settings settings) {\n return createAllocationService(settings, getRandom());\n }\n \n+ public static AllocationService createAllocationService(Settings settings, GatewayAllocator allocator) {\n+ return new AllocationService(settings,\n+ randomAllocationDeciders(settings, new NodeSettingsService(ImmutableSettings.Builder.EMPTY_SETTINGS), getRandom()),\n+ new ShardsAllocators(settings, allocator, new BalancedShardsAllocator(settings)), ClusterInfoService.EMPTY);\n+ }\n+\n public static AllocationService createAllocationService(Settings settings, Random random) {\n return new AllocationService(settings,\n randomAllocationDeciders(settings, new NodeSettingsService(ImmutableSettings.Builder.EMPTY_SETTINGS), random),", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchAllocationTestCase.java", "status": "modified" }, { "diff": "@@ -1251,7 +1251,7 @@ public void restartRandomNode(RestartCallback callback) throws Exception {\n * Restarts a random data node in the cluster\n */\n public void restartRandomDataNode() throws Exception {\n- restartRandomNode(EMPTY_CALLBACK);\n+ restartRandomDataNode(EMPTY_CALLBACK);\n }\n \n /**", "filename": "src/test/java/org/elasticsearch/test/InternalTestCluster.java", "status": "modified" } ] }
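One detail of the diff above that is easy to misread is the lifecycle of the pending-fetch flag: its Javadoc calls it "anti-viral", and as far as I can tell from the diff it can only be flipped on within a single allocation run, with the effective "reset" happening because each reroute starts from a fresh `RoutingAllocation`. The toy sketch below, with invented names, shows that one-way-per-round behaviour; it is my reading of the diff, not code taken from it.

```java
// Toy illustration of a one-way, per-round flag (names invented for the example).
// Within one round, any in-flight fetch sets the flag and it is never cleared;
// the "reset" happens simply because the next reroute builds a new round object.
final class AllocationRound {
    private boolean pendingFetch = false;

    void markPendingFetch() { pendingFetch = true; } // one-way: false -> true only
    boolean canRebalance() { return !pendingFetch; }
}

class AllocationRoundDemo {
    public static void main(String[] args) {
        AllocationRound round1 = new AllocationRound();
        round1.markPendingFetch();                 // some shard store data still in flight
        System.out.println(round1.canRebalance()); // false: rebalance skipped this round

        AllocationRound round2 = new AllocationRound(); // next reroute, fresh state
        System.out.println(round2.canRebalance());      // true: rebalance may run again
    }
}
```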
{ "body": "Hi,\n\nwe may have tried something stupid here:\n\n```\n$ curl 192.168.8.180:9200/_cat/nodes?h=name,version,jdk,ip,master,node.role\nrobocop 2.0.0 1.7.0_79 192.168.8.237 - d\nDet Røde Lyn 2.0.0 1.8.0_65 192.168.8.180 * -\nstuart 2.0.0 1.8.0_65 192.168.8.13 - d\nGulle 2.0.0 1.7.0_85 192.168.8.21 m d \n```\n\n\"Det Røde Lyn\" is a windows machine.\n\nall nodes have \"Gulle\" configured as unicast host\n\n```\ndiscovery.zen.ping.unicast.hosts: [\"192.168.8.21\"]\n```\n\nwhen Gulle shutdown the cluster broke down and the following shows up in the log of the stuart node. The logfile of the other nodes in the cluster dont show a similar NPE.\n\nWe have been unable to reproduce this, but thought you might be interested in this anyway.\n\n```\n[2015-11-06 13:28:07,710][INFO ][cluster.routing.allocation.decider] [stuart] updating [cluster.routing.allocation.enable] from [ALL] to [NONE]\n[2015-11-06 13:28:47,859][INFO ][discovery.zen ] [stuart] master_left [{Gulle}{F4pDMpoOSG6U0fBuxUzFWg}{192.168.8.21}{192.168.8.21:9300}], reason [transport disconnected]\n[2015-11-06 13:28:47,859][WARN ][discovery.zen ] [stuart] master left (reason = transport disconnected), current nodes: {{robocop}{LHRwN1SYTMC05oPnrnlhVg}{192.168.8.237}{192.168.8.237:9300}{rack=robo-desk, max_local_storage_nodes=1, master=false},{Det Røde Lyn}{DmoJNBpOR8m1WKBPQojwag}{192.168.8.180}{192.168.8.180:9300}{data=false},{stuart}{TYWtWR00Q6Kv8pG6Fi09Yw}{192.168.8.13}{192.168.8.13:9300}{master=false},}\n[2015-11-06 13:28:47,859][INFO ][cluster.service ] [stuart] removed {{Gulle}{F4pDMpoOSG6U0fBuxUzFWg}{192.168.8.21}{192.168.8.21:9300},}, reason: zen-disco-master_failed ({Gulle}{F4pDMpoOSG6U0fBuxUzFWg}{192.168.8.21}{192.168.8.21:9300})\n[2015-11-06 13:28:50,397][DEBUG][action.admin.cluster.state] [stuart] no known master node, scheduling a retry\n[2015-11-06 13:28:50,398][DEBUG][action.admin.cluster.state] [stuart] no known master node, scheduling a retry\n[2015-11-06 13:28:50,398][INFO ][rest.suppressed ] /_stats/docs,store Params: {metric=docs,store}\njava.lang.NullPointerException\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.onNodeFailure(TransportBroadcastByNodeAction.java:332)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.sendNodeRequest(TransportBroadcastByNodeAction.java:314)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.start(TransportBroadcastByNodeAction.java:284)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:217)\n at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:77)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)\n at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)\n at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)\n at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)\n at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)\n at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)\n at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1177)\n at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.stats(AbstractClient.java:1482)\n at 
org.elasticsearch.rest.action.admin.indices.stats.RestIndicesStatsAction.handleRequest(RestIndicesStatsAction.java:102)\n at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)\n at org.elasticsearch.rest.RestController.executeHandler(RestController.java:207)\n at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)\n at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)\n at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)\n at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:348)\n at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)\n at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at 
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-11-06 13:28:50,398][DEBUG][action.admin.indices.get ] [stuart] no known master node, scheduling a retry\n[2015-11-06 13:28:50,399][DEBUG][action.admin.cluster.health] [stuart] no known master node, scheduling a retry\n[2015-11-06 13:28:51,291][INFO ][cluster.service ] [stuart] detected_master {Det Røde Lyn}{DmoJNBpOR8m1WKBPQojwag}{192.168.8.180}{192.168.8.180:9300}{data=false}, reason: zen-disco-receive(from master [{Det Røde Lyn}{DmoJNBpOR8m1WKBPQojwag}{192.168.8.180}{192.168.8.180:9300}{data=false}])\n[2015-11-06 13:29:13,992][INFO ][cluster.service ] [stuart] added {{Gulle}{vDbnkAfsT96NnX_Q3cN1oA}{192.168.8.21}{192.168.8.21:9300},}, reason: zen-disco-receive(from master [{Det Røde Lyn}{DmoJNBpOR8m1WKBPQojwag}{192.168.8.180}{192.168.8.180:9300}{data=false}])\n[2015-11-06 13:29:55,312][INFO ][cluster.routing.allocation.decider] [stuart] updating [cluster.routing.allocation.enable] from [NONE] to [ALL]\n```\n", "comments": [ { "body": "hm, we were actually able to reproduce this.\nThis time, all remaining nodes showed the same NPE, while the new master node (Det Røde Lyn) actually showed it twice (approx. 10 seconds between the two)\n", "created_at": "2015-11-06T13:33:30Z" }, { "body": "This is happening because in between the master leaving the cluster and a new one getting assigned, the routing table won't get updated. As such, when the master leaves the cluster, the shards assigned to that node will not be reassigned. However, for any node that has detected the master as failed, it will update its local cluster state to no longer include the master in the list of nodes in the cluster. As such, the shards that were on the master will be in the cluster state but not associated with a node that is in the cluster state. When `sendNodeRequest` tries to look up the node to send the request to for such a shard there ends up being no such node. This leads to one NPE, which gets caught and cascades into a second NPE in the `onNodeFailure` handler. The correct course of action is to treat such shards as if it they are not available so that we never even attempt to send requests for them.\n\nThank you for the bug report. I'll be opening a pull request later to address this issue.\n", "created_at": "2015-11-06T13:46:28Z" }, { "body": "Awesome work! Thanks.\n", "created_at": "2015-11-09T08:31:00Z" } ], "number": 14584, "title": "NPE onNodeFailure " }
{ "body": "This commit addresses an issue in TransportBroadcastByNodeAction.\nNamely, if in between a master leaving the cluster and a new one being\nassigned, a request that relies on TransportBroadcastByNodeAction\n(e.g., an indices stats request) is issued,\nTransportBroadcastByNodeAction might attempt to send a request to a\nnode that is no longer in the local node’s cluster state.\n\nThe exact circumstances that lead to this are as follows. When the\nmaster leaves the cluster and another node’s master fault detection\ndetects this, the local node will update its local cluster state to no\nlonger include the master node. However, the routing table will not be\nupdated. This means that in the preparation for sending the requests in\nTransportBroadcastByNodeAction, we need to check that not only is a\nshard assigned, but also that it is assigned to a node that is still in\nthe local node’s cluster state.\n\nThis commit adds such a check to the constructor of\nTransportBroadcastByNodeAction. A new unit test is added that checks\nthat no request is sent to the master node in such a situation; this\ntest fails with a NullPointerException without the fix. Additionally,\nthe unit test TransportBroadcastByNodeActionTests#testResultAggregation\nis updated to also simulate a master failure. This updated test also\nfails prior to the fix.\n\nCloses #14584\n", "number": 14586, "review_comments": [ { "body": "maybe add a comment here too?\n", "created_at": "2015-11-06T17:03:40Z" }, { "body": "Added in 395fe444af79f6327cce534578dfb1ac3593d387.\n", "created_at": "2015-11-06T17:07:33Z" }, { "body": "final?\n", "created_at": "2015-11-06T17:07:57Z" }, { "body": "Done in 70ac92bccbf312a660fc739e7650aa201a6e4195.\n", "created_at": "2015-11-06T17:09:03Z" } ], "title": "Handle shards assigned to nodes that are not in the cluster state" }
{ "commits": [ { "message": "Handle shards assigned to nodes that are not in the cluster state\n\nThis commit addresses an issue in TransportBroadcastByNodeAction.\nNamely, if in between a master leaving the cluster and a new one being\nassigned, a request that relies on TransportBroadcastByNodeAction\n(e.g., an indices stats request) is issued,\nTransportBroadcastByNodeAction might attempt to send a request to a\nnode that is no longer in the local node’s cluster state.\n\nThe exact circumstances that lead to this are as follows. When the\nmaster leaves the cluster and another node’s master fault detection\ndetects this, the local node will update its local cluster state to no\nlonger include the master node. However, the routing table will not be\nupdated. This means that in the preparation for sending the requests in\nTransportBroadcastByNodeAction, we need to check that not only is a\nshard assigned, but also that it is assigned to a node that is still in\nthe local node’s cluster state.\n\nThis commit adds such a check to the constructor of\nTransportBroadcastByNodeAction. A new unit test is added that checks\nthat no request is sent to the master node in such a situation; this\ntest fails with a NullPointerException without the fix. Additionally,\nthe unit test TransportBroadcastByNodeActionTests#testResultAggregation\nis updated to also simulate a master failure. This updated test also\nfails prior to the fix.\n\nCloses #14584" } ], "files": [ { "diff": "@@ -228,7 +228,13 @@ protected AsyncAction(Request request, ActionListener<Response> listener) {\n nodeIds = new HashMap<>();\n \n for (ShardRouting shard : shardIt.asUnordered()) {\n- if (shard.assignedToNode()) {\n+ // send a request to the shard only if it is assigned to a node that is in the local node's cluster state\n+ // a scenario in which a shard can be assigned but to a node that is not in the local node's cluster state\n+ // is when the shard is assigned to the master node, the local node has detected the master as failed\n+ // and a new master has not yet been elected; in this situation the local node will have removed the\n+ // master node from the local cluster state, but the shards assigned to the master will still be in the\n+ // routing table as such\n+ if (shard.assignedToNode() && nodes.get(shard.currentNodeId()) != null) {\n String nodeId = shard.currentNodeId();\n if (!nodeIds.containsKey(nodeId)) {\n nodeIds.put(nodeId, new ArrayList<>());", "filename": "core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java", "status": "modified" }, { "diff": "@@ -289,6 +289,44 @@ public void testOneRequestIsSentToEachNodeHoldingAShard() {\n }\n }\n \n+ // simulate the master being removed from the cluster but before a new master is elected\n+ // as such, the shards assigned to the master will still show up in the cluster state as assigned to a node but\n+ // that node will not be in the local cluster state on any node that has detected the master as failing\n+ // in this case, such a shard should be treated as unassigned\n+ public void testRequestsAreNotSentToFailedMaster() {\n+ Request request = new Request(new String[]{TEST_INDEX});\n+ PlainActionFuture<Response> listener = new PlainActionFuture<>();\n+\n+ DiscoveryNode masterNode = clusterService.state().nodes().masterNode();\n+ DiscoveryNodes.Builder builder = DiscoveryNodes.builder(clusterService.state().getNodes());\n+ builder.remove(masterNode.id());\n+\n+ 
clusterService.setState(ClusterState.builder(clusterService.state()).nodes(builder));\n+\n+ action.new AsyncAction(request, listener).start();\n+\n+ Map<String, List<CapturingTransport.CapturedRequest>> capturedRequests = transport.capturedRequestsByTargetNode();\n+\n+ // the master should not be in the list of nodes that requests were sent to\n+ ShardsIterator shardIt = clusterService.state().routingTable().allShards(new String[]{TEST_INDEX});\n+ Set<String> set = new HashSet<>();\n+ for (ShardRouting shard : shardIt.asUnordered()) {\n+ if (shard.currentNodeId() != masterNode.id()) {\n+ set.add(shard.currentNodeId());\n+ }\n+ }\n+\n+ // check a request was sent to the right number of nodes\n+ assertEquals(set.size(), capturedRequests.size());\n+\n+ // check requests were sent to the right nodes\n+ assertEquals(set, capturedRequests.keySet());\n+ for (Map.Entry<String, List<CapturingTransport.CapturedRequest>> entry : capturedRequests.entrySet()) {\n+ // check one request was sent to each non-master node\n+ assertEquals(1, entry.getValue().size());\n+ }\n+ }\n+\n public void testOperationExecution() throws Exception {\n ShardsIterator shardIt = clusterService.state().routingTable().allShards(new String[]{TEST_INDEX});\n Set<ShardRouting> shards = new HashSet<>();\n@@ -340,6 +378,18 @@ public void testResultAggregation() throws ExecutionException, InterruptedExcept\n Request request = new Request(new String[]{TEST_INDEX});\n PlainActionFuture<Response> listener = new PlainActionFuture<>();\n \n+ // simulate removing the master\n+ final boolean simulateFailedMasterNode = rarely();\n+ DiscoveryNode failedMasterNode = null;\n+ if (simulateFailedMasterNode) {\n+ failedMasterNode = clusterService.state().nodes().masterNode();\n+ DiscoveryNodes.Builder builder = DiscoveryNodes.builder(clusterService.state().getNodes());\n+ builder.remove(failedMasterNode.id());\n+ builder.masterNodeId(null);\n+\n+ clusterService.setState(ClusterState.builder(clusterService.state()).nodes(builder));\n+ }\n+\n action.new AsyncAction(request, listener).start();\n Map<String, List<CapturingTransport.CapturedRequest>> capturedRequests = transport.capturedRequestsByTargetNode();\n transport.clear();\n@@ -382,6 +432,9 @@ public void testResultAggregation() throws ExecutionException, InterruptedExcept\n transport.handleResponse(requestId, nodeResponse);\n }\n }\n+ if (simulateFailedMasterNode) {\n+ totalShards += map.get(failedMasterNode.id()).size();\n+ }\n \n Response response = listener.get();\n assertEquals(\"total shards\", totalShards, response.getTotalShards());", "filename": "core/src/test/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeActionTests.java", "status": "modified" } ] }
{ "body": "If you create an index number_of_replicas to 0 and refresh_interval to -1 (using one API call) & add some documents. Then set number_number_of_replicas to 1 and refresh_interval to \"1s\" and add some more documents. The primary and replicas are out of sync, it looks like the replica doesn't have the new documents. See this gist for a repro:\n\nhttps://gist.github.com/msimos/869292265208852fbee4\n\nIf you run the match_all query multiple times you'll see the different documents returned. If you use preference=_primary then all documents are returned. If the query hits the replica, then only the first documents are returned when replicas were set to 0. Also using _cat/shards you can see the size difference between the shards.\n\nIt seems you can workaround this by doing sending this in two separate calls:\n\n```\nPUT /index/_settings\n{\n \"number_of_replicas\": 1\n}\n\nPUT /index/_settings\n{\n \"refresh_interval\": \"1s\"\n}\n```\n\nAlso seems the ordering matters, that \"number_of_replicas\": 1 is set before \"refresh_interval\": \"1s\". This was on Elasticsearch 1.7.2.\n", "comments": [ { "body": "Hi @msimos \n\nThanks for the report and the nice clear recreation. It is not that the shards are out of sync, but that the shard holding the first doc thinks that it doesn't need to refresh.\n\nThis is fixed in 2.0 already\n", "created_at": "2015-10-29T20:00:59Z" }, { "body": "A forced refresh (`/index2/_refresh`) resolves the issue, so somehow the replica is failing to refresh after 1s (but did in fact index the 2nd document).\n\nCuriously, if I pass `\"number_of_shards\": 1` when creating the index, the bug doesn't happen, but with 2 shards, it does. Hmm.\n", "created_at": "2015-10-30T14:54:34Z" }, { "body": "OK I think I see what's happening:\n\nThe new replica is created (because of `\"number_of_replicas\": 1`), but because this is async change, the additional `\"refresh_interval\": \"1s\"` in the same request is never applied to the newly created replica shards since they have not yet set up listeners (`IndexShard.ApplyRefreshSettings`) for settings changes.\n\nSo the replica shards still have the old `refresh_interval` (-1).\n\nReversing the two settings works because the newly created replica shards will inherit the latest index settings when they are created, including the change just made to `refresh_interval`.\n\nI'm not sure how to cleanly fix this. @bleskes?\n\nMaybe we could simply move `number_of_replicas` to the end of all settings updated in one request?\n", "created_at": "2015-10-30T15:34:47Z" }, { "body": "@mikemccand can you try and verify if that still happens on master / 3.0?\n", "created_at": "2015-10-30T15:36:45Z" }, { "body": "@s1monw I'll test master. @clintongormley already checked that 2.0 fixes it (I'm not sure why/how!) ...\n", "created_at": "2015-10-30T15:46:19Z" }, { "body": "@s1monw master doesn't have the bug either ... I'm not sure why.\n", "created_at": "2015-10-30T15:58:54Z" }, { "body": "I did some digging and this has to do with the fact that index shards are created with old index settings. IndexShard get's it's index setting using guice and the `@IndexSettings` annotation. This is supposed be supplied by [IndexSettingsProvider](https://github.com/elastic/elasticsearch/blob/1.7/src/main/java/org/elasticsearch/index/settings/IndexSettingsProvider.java) , which in turn asks `IndexSettingsService` for the latest setting. For some unknown reason, Guice decides to cache those settings and thus IndexShard gets a stale copy. 
This means that if Guice captures an old settings snapshot, the settings are then changed, and a shard is created afterwards, the shard will see the old settings. It does register a setting change listener, but as far as `IndexSettingsService` is concerned, no settings have changed since the last iteration. This doesn't happen in 2.0 because we don't inject the index settings into the shard anymore but rather ask for a live copy in the IndexShard constructor. Master is of course totally rewritten to use another mechanism.\n\nI've put a test reproducing this on https://gist.github.com/bleskes/1e336a206a52cb4f1b0f (based on 1.7 code).\n\nI think we can do one of the following:\n- Fix the Guice issue and make the provider be called every time (I did some looking and this looks like a rabbit hole - unless somebody knows what needs to be done)\n- Change 1.7 to do what 2.0 does and not inject the index settings into the IndexShard. This is a local fix and we might suffer other similar issues in the future, so if we go down this route I suggest we remove the index settings annotation and use this mechanism all over the place (in 2.x and 1.7).\n\nThoughts?\n", "created_at": "2015-11-01T10:06:16Z" }, { "body": "Option 1 sounds scary (I don't like rabbit holes). Option 2 seems like a biggish change to make on a bug fix branch? We could simply leave this bug unfixed in 1.x and encourage users who hit it to upgrade to 2.x?\n", "created_at": "2015-11-01T12:33:11Z" }, { "body": "> We could simply leave this bug unfixed in 1.x and encourage users who hit it to upgrade to 2.x?\n\nSince a workaround exists in the 1.x line, this seems to me to be a sensible option.\n", "created_at": "2015-11-01T15:09:56Z" }, { "body": "note that the workaround is likely to work but not guaranteed. Sometimes shard creation is throttled/disabled (see test) and then it doesn't.\n\nThere is the option of just doing the one-liner change in 1.7 to make IndexShard get fresh settings every time and only the bigger change (i.e., remove the provider and the annotation) in 2.x.\n", "created_at": "2015-11-01T15:13:17Z" }, { "body": "> @s1monw master doesn't have the bug either ... I'm not sure why.\n\n@mikemccand in master I trashed this Provider @bleskes is talking about entirely. New shards are guaranteed to see the latest settings.\n", "created_at": "2015-11-02T21:00:56Z" }, { "body": "Unless someone objects soon, I'll assume we're going with:\n\n1) A one-line fix to IndexShard in 1.7 to _not_ have injected index settings.\n2) Removal of the provider in 2.x.\n", "created_at": "2015-11-03T14:28:00Z" }, { "body": "> Unless someone objects soon, I'll assume we're going with:\n> \n> 1) A one-line fix to IndexShard in 1.7 to _not_ have injected index settings.\n> 2) Removal of the provider in 2.x.\n\n+1\n", "created_at": "2015-11-03T14:30:17Z" }, { "body": "> @mikemccand in master I trashed this Provider @bleskes is talking about entirely. \n\nOK wonderful!\n\n> Unless someone objects soon, I'll assume we're going with:\n> \n> 1) A one-line fix to IndexShard in 1.7 to not have injected index settings.\n> 2) Removal of the provider in 2.x.\n\n+1\n", "created_at": "2015-11-03T14:37:59Z" }, { "body": "I've implemented the fix for 1.7 (see PR #14522). I am about to work on removing the provider in 2.x.\n", "created_at": "2015-11-04T13:42:37Z" }, { "body": "This was closed by #14522.\n", "created_at": "2016-01-07T21:16:43Z" } ], "number": 14319, "title": "Shards out of sync when setting refresh_interval & number_of_replicas at the same time" }
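The failure mode discussed in the comments above (a component wired with a settings snapshot captured before the latest update) can be reproduced without Guice at all. The sketch below uses invented classes (`SettingsService`, `ShardBuiltFromSnapshot`) to show why a shard built from an eagerly cached copy keeps the old `refresh_interval`; it illustrates the mechanism only and is not the actual Elasticsearch code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the stale-snapshot problem: a component built from a settings copy taken
// at wiring time never sees updates applied before the component was created.
// All names here are invented for the illustration.
final class SettingsService {
    private volatile Map<String, String> current = new HashMap<>();

    Map<String, String> getSettings() { return current; }

    void update(Map<String, String> newSettings) { current = Map.copyOf(newSettings); }
}

final class ShardBuiltFromSnapshot {
    private final Map<String, String> settings; // frozen at construction/injection time

    ShardBuiltFromSnapshot(Map<String, String> snapshot) { this.settings = snapshot; }

    String refreshInterval() { return settings.getOrDefault("refresh_interval", "1s"); }
}

class StaleSettingsDemo {
    public static void main(String[] args) {
        SettingsService service = new SettingsService();
        service.update(Map.of("refresh_interval", "-1"));

        // A snapshot captured "too early", which is what an eagerly cached provider amounts to...
        Map<String, String> cachedSnapshot = service.getSettings();

        // ...then the settings change before the replica shard is actually created...
        service.update(Map.of("refresh_interval", "1s"));

        // ...and the newly created shard still sees the old value.
        ShardBuiltFromSnapshot shard = new ShardBuiltFromSnapshot(cachedSnapshot);
        System.out.println(shard.refreshInterval());                       // -1 (stale)
        System.out.println(service.getSettings().get("refresh_interval")); // 1s (live)
    }
}
```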
{ "body": "With this commit fresh index settings when creating index related\ncomponents. Previously a Guice provider has been used together with a\ncustom @IndexSettings annotation. This turned out to be problematic\nas Guice caches this value too aggressively.\n\nFixes #14319\n", "number": 14578, "review_comments": [ { "body": "why don't we keep `getIndexSettings` intead of `indexSettings()` I like the getter better?\n", "created_at": "2015-11-06T11:20:47Z" }, { "body": "@s1monw: `#indexSettings()` is the overridden method from the base class and I wanted to reduce API surface area because both methods did the same. If I should reintroduce the getter, I would implement it identically to `#indexSettings()`, i.e.:\n\n```\npublic Settings getIndexSettings() {\n return settingsService.getSettings();\n}\n```\n", "created_at": "2015-11-06T11:45:25Z" }, { "body": "oh I see ok - I think we should remove that at least in master since I think it's still there but shouldn't\n", "created_at": "2015-11-06T11:46:50Z" } ], "title": "Use fresh index settings instead of relying on @IndexSettings" }
{ "commits": [ { "message": "Use fresh index settings instead of relying on @IndexSettings\n\nWith this commit fresh index settings when creating index related\ncomponents. Previously a Guice provide has been together with a\ncustom @IndexSettings annotation. This turned out to be problematic\nas Guice caches this value too aggressively.\n\nFixes #14319" } ], "files": [ { "diff": "@@ -399,7 +399,7 @@ public ClusterState execute(final ClusterState currentState) throws Exception {\n // For example in MapperService we can't distinguish between a create index api call\n // and a put mapping api call, so we don't which type did exist before.\n // Also the order of the mappings may be backwards.\n- if (Version.indexCreated(indexService.getIndexSettings()).onOrAfter(Version.V_2_0_0_beta1) && newMapper.parentFieldMapper().active()) {\n+ if (Version.indexCreated(indexService.indexSettings()).onOrAfter(Version.V_2_0_0_beta1) && newMapper.parentFieldMapper().active()) {\n IndexMetaData indexMetaData = currentState.metaData().index(index);\n for (ObjectCursor<MappingMetaData> mapping : indexMetaData.getMappings().values()) {\n if (newMapper.parentFieldMapper().type().equals(mapping.value.type())) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java", "status": "modified" }, { "diff": "@@ -34,11 +34,9 @@\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.FileSystemUtils;\n-import org.elasticsearch.common.io.PathUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.store.FsDirectoryService;\n import org.elasticsearch.monitor.fs.FsInfo;\n@@ -294,7 +292,7 @@ private static String toString(Collection<String> items) {\n * @param shardId the id of the shard to delete to delete\n * @throws IOException if an IOException occurs\n */\n- public void deleteShardDirectorySafe(ShardId shardId, @IndexSettings Settings indexSettings) throws IOException {\n+ public void deleteShardDirectorySafe(ShardId shardId, Settings indexSettings) throws IOException {\n // This is to ensure someone doesn't use Settings.EMPTY\n assert indexSettings != Settings.EMPTY;\n final Path[] paths = availableShardPaths(shardId);\n@@ -311,7 +309,7 @@ public void deleteShardDirectorySafe(ShardId shardId, @IndexSettings Settings in\n *\n * @throws LockObtainFailedException if any of the locks could not be acquired\n */\n- public static void acquireFSLockForPaths(@IndexSettings Settings indexSettings, Path... shardPaths) throws IOException {\n+ public static void acquireFSLockForPaths(Settings indexSettings, Path... 
shardPaths) throws IOException {\n Lock[] locks = new Lock[shardPaths.length];\n Directory[] dirs = new Directory[shardPaths.length];\n try {\n@@ -345,7 +343,7 @@ public static void acquireFSLockForPaths(@IndexSettings Settings indexSettings,\n * @throws IOException if an IOException occurs\n * @throws ElasticsearchException if the write.lock is not acquirable\n */\n- public void deleteShardDirectoryUnderLock(ShardLock lock, @IndexSettings Settings indexSettings) throws IOException {\n+ public void deleteShardDirectoryUnderLock(ShardLock lock, Settings indexSettings) throws IOException {\n assert indexSettings != Settings.EMPTY;\n final ShardId shardId = lock.getShardId();\n assert isShardLocked(shardId) : \"shard \" + shardId + \" is not locked\";\n@@ -383,7 +381,7 @@ private boolean isShardLocked(ShardId id) {\n * @param indexSettings settings for the index being deleted\n * @throws Exception if any of the shards data directories can't be locked or deleted\n */\n- public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, @IndexSettings Settings indexSettings) throws IOException {\n+ public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, Settings indexSettings) throws IOException {\n // This is to ensure someone doesn't use Settings.EMPTY\n assert indexSettings != Settings.EMPTY;\n final List<ShardLock> locks = lockAllForIndex(index, indexSettings, lockTimeoutMS);\n@@ -401,7 +399,7 @@ public void deleteIndexDirectorySafe(Index index, long lockTimeoutMS, @IndexSett\n * @param index the index to delete\n * @param indexSettings settings for the index being deleted\n */\n- public void deleteIndexDirectoryUnderLock(Index index, @IndexSettings Settings indexSettings) throws IOException {\n+ public void deleteIndexDirectoryUnderLock(Index index, Settings indexSettings) throws IOException {\n // This is to ensure someone doesn't use Settings.EMPTY\n assert indexSettings != Settings.EMPTY;\n final Path[] indexPaths = indexPaths(index);\n@@ -424,7 +422,7 @@ public void deleteIndexDirectoryUnderLock(Index index, @IndexSettings Settings i\n * @return the {@link ShardLock} instances for this index.\n * @throws IOException if an IOException occurs.\n */\n- public List<ShardLock> lockAllForIndex(Index index, @IndexSettings Settings settings, long lockTimeoutMS) throws IOException {\n+ public List<ShardLock> lockAllForIndex(Index index, Settings settings, long lockTimeoutMS) throws IOException {\n final Integer numShards = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n if (numShards == null || numShards <= 0) {\n throw new IllegalArgumentException(\"settings must contain a non-null > 0 number of shards\");\n@@ -772,7 +770,7 @@ Settings getSettings() { // for testing\n * @param indexSettings settings for an index\n * @return true if the index has a custom data path\n */\n- public static boolean hasCustomDataPath(@IndexSettings Settings indexSettings) {\n+ public static boolean hasCustomDataPath(Settings indexSettings) {\n return indexSettings.get(IndexMetaData.SETTING_DATA_PATH) != null;\n }\n \n@@ -783,7 +781,7 @@ public static boolean hasCustomDataPath(@IndexSettings Settings indexSettings) {\n *\n * @param indexSettings settings for the index\n */\n- private Path resolveCustomLocation(@IndexSettings Settings indexSettings) {\n+ private Path resolveCustomLocation(Settings indexSettings) {\n assert indexSettings != Settings.EMPTY;\n String customDataDir = indexSettings.get(IndexMetaData.SETTING_DATA_PATH);\n if (customDataDir != null) {\n@@ -807,7 +805,7 @@ 
private Path resolveCustomLocation(@IndexSettings Settings indexSettings) {\n * @param indexSettings settings for the index\n * @param indexName index to resolve the path for\n */\n- private Path resolveCustomLocation(@IndexSettings Settings indexSettings, final String indexName) {\n+ private Path resolveCustomLocation(Settings indexSettings, final String indexName) {\n return resolveCustomLocation(indexSettings).resolve(indexName);\n }\n \n@@ -819,7 +817,7 @@ private Path resolveCustomLocation(@IndexSettings Settings indexSettings, final\n * @param indexSettings settings for the index\n * @param shardId shard to resolve the path to\n */\n- public Path resolveCustomLocation(@IndexSettings Settings indexSettings, final ShardId shardId) {\n+ public Path resolveCustomLocation(Settings indexSettings, final ShardId shardId) {\n return resolveCustomLocation(indexSettings, shardId.index().name()).resolve(Integer.toString(shardId.id()));\n }\n ", "filename": "core/src/main/java/org/elasticsearch/env/NodeEnvironment.java", "status": "modified" }, { "diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.cluster.routing.allocation.decider.Decision;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n import java.util.ArrayList;\n import java.util.Collections;\n@@ -263,7 +262,7 @@ public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n * Return {@code true} if the index is configured to allow shards to be\n * recovered on any node\n */\n- private boolean recoverOnAnyNode(@IndexSettings Settings idxSettings) {\n+ private boolean recoverOnAnyNode(Settings idxSettings) {\n return IndexMetaData.isOnSharedFilesystem(idxSettings) &&\n idxSettings.getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false);\n }", "filename": "core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java", "status": "modified" }, { "diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n /**\n *\n@@ -33,15 +32,15 @@ public abstract class AbstractIndexComponent implements IndexComponent {\n protected final ESLogger logger;\n protected final DeprecationLogger deprecationLogger;\n protected final Index index;\n- protected final Settings indexSettings;\n+ private final Settings indexSettings;\n \n /**\n * Constructs a new index component, with the index name and its settings.\n *\n * @param index The index name\n * @param indexSettings The index settings\n */\n- protected AbstractIndexComponent(Index index, @IndexSettings Settings indexSettings) {\n+ protected AbstractIndexComponent(Index index, Settings indexSettings) {\n this.index = index;\n this.indexSettings = indexSettings;\n this.logger = Loggers.getLogger(getClass(), indexSettings, index);\n@@ -57,7 +56,4 @@ public Settings indexSettings() {\n return indexSettings;\n }\n \n- public String nodeName() {\n- return indexSettings.get(\"name\", \"\");\n- }\n }\n\\ No newline at end of file", "filename": "core/src/main/java/org/elasticsearch/index/AbstractIndexComponent.java", "status": "modified" }, { "diff": "@@ -45,7 +45,6 @@\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n-import 
org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.shard.*;\n import org.elasticsearch.index.similarity.SimilarityService;\n@@ -79,8 +78,6 @@ public class IndexService extends AbstractIndexComponent implements IndexCompone\n \n private final Injector injector;\n \n- private final Settings indexSettings;\n-\n private final PluginsService pluginsService;\n \n private final InternalIndicesLifecycle indicesLifecycle;\n@@ -130,15 +127,14 @@ public Injector getInjector() {\n private final AtomicBoolean deleted = new AtomicBoolean(false);\n \n @Inject\n- public IndexService(Injector injector, Index index, @IndexSettings Settings indexSettings, NodeEnvironment nodeEnv,\n+ public IndexService(Injector injector, Index index, NodeEnvironment nodeEnv,\n AnalysisService analysisService, MapperService mapperService, IndexQueryParserService queryParserService,\n SimilarityService similarityService, IndexAliasesService aliasesService, IndexCache indexCache,\n IndexSettingsService settingsService,\n IndexFieldDataService indexFieldData, BitsetFilterCache bitSetFilterCache, IndicesService indicesServices) {\n \n- super(index, indexSettings);\n+ super(index, settingsService.getSettings());\n this.injector = injector;\n- this.indexSettings = indexSettings;\n this.analysisService = analysisService;\n this.mapperService = mapperService;\n this.queryParserService = queryParserService;\n@@ -274,7 +270,7 @@ public Injector shardInjectorSafe(int shardId) {\n }\n \n public String indexUUID() {\n- return indexSettings.get(IndexMetaData.SETTING_INDEX_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n+ return indexSettings().get(IndexMetaData.SETTING_INDEX_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n }\n \n // NOTE: O(numShards) cost, but numShards should be smallish?\n@@ -294,6 +290,7 @@ private long getAvgShardSizeInBytes() throws IOException {\n \n public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n final boolean primary = routing.primary();\n+ final Settings indexSettings = indexSettings();\n /*\n * TODO: we execute this in parallel but it's a synced method. Yet, we might\n * be able to serialize the execution via the cluster state in the future. 
for now we just\n@@ -422,6 +419,7 @@ public synchronized void removeShard(int shardId, String reason) {\n \n private void closeShardInjector(String reason, ShardId sId, Injector shardInjector, IndexShard indexShard) {\n final int shardId = sId.id();\n+ final Settings indexSettings = indexSettings();\n try {\n try {\n indicesLifecycle.beforeIndexShardClosed(sId, indexShard, indexSettings);\n@@ -495,6 +493,7 @@ private boolean closeInjectorOptionalResource(ShardId shardId, Injector shardInj\n \n private void onShardClose(ShardLock lock, boolean ownsShard) {\n if (deleted.get()) { // we remove that shards content if this index has been deleted\n+ final Settings indexSettings = indexSettings();\n try {\n if (ownsShard) {\n try {\n@@ -538,8 +537,9 @@ public void handle(ShardLock lock) {\n }\n }\n \n- public Settings getIndexSettings() {\n- return indexSettings;\n+ @Override\n+ public Settings indexSettings() {\n+ return settingsService.getSettings();\n }\n \n private static final class BitsetCacheListener implements BitsetFilterCache.Listener {", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -27,30 +27,26 @@\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.compress.CompressedXContent;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.query.ParsedQuery;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.indices.AliasFilterParsingException;\n import org.elasticsearch.indices.InvalidAliasNameException;\n \n import java.io.IOException;\n \n-/**\n- *\n- */\n public class IndexAliasesService extends AbstractIndexComponent {\n \n private final IndexQueryParserService indexQueryParser;\n private volatile ImmutableOpenMap<String, AliasMetaData> aliases = ImmutableOpenMap.of();\n \n @Inject\n- public IndexAliasesService(Index index, @IndexSettings Settings indexSettings, IndexQueryParserService indexQueryParser) {\n- super(index, indexSettings);\n+ public IndexAliasesService(Index index, IndexSettingsService indexSettingsService, IndexQueryParserService indexQueryParser) {\n+ super(index, indexSettingsService.getSettings());\n this.indexQueryParser = indexQueryParser;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/aliases/IndexAliasesService.java", "status": "modified" }, { "diff": "@@ -25,7 +25,7 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n * Factory for ASCIIFoldingFilter.\n@@ -34,8 +34,8 @@ public class ASCIIFoldingTokenFilterFactory extends AbstractTokenFilterFactory {\n private final boolean preserveOriginal;\n \n @Inject\n- public ASCIIFoldingTokenFilterFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ASCIIFoldingTokenFilterFactory(Index index, 
IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n preserveOriginal = settings.getAsBoolean(\"preserve_original\", false);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ASCIIFoldingTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -22,16 +22,12 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n-/**\n- *\n- */\n public abstract class AbstractCharFilterFactory extends AbstractIndexComponent implements CharFilterFactory {\n \n private final String name;\n \n- public AbstractCharFilterFactory(Index index, @IndexSettings Settings indexSettings, String name) {\n+ public AbstractCharFilterFactory(Index index, Settings indexSettings, String name) {\n super(index, indexSettings);\n this.name = name;\n }", "filename": "core/src/main/java/org/elasticsearch/index/analysis/AbstractCharFilterFactory.java", "status": "modified" }, { "diff": "@@ -24,11 +24,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n-/**\n- *\n- */\n public abstract class AbstractIndexAnalyzerProvider<T extends Analyzer> extends AbstractIndexComponent implements AnalyzerProvider<T> {\n \n private final String name;\n@@ -42,7 +38,7 @@ public abstract class AbstractIndexAnalyzerProvider<T extends Analyzer> extends\n * @param indexSettings The index settings\n * @param name The analyzer name\n */\n- public AbstractIndexAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, String name, Settings settings) {\n+ public AbstractIndexAnalyzerProvider(Index index, Settings indexSettings, String name, Settings settings) {\n super(index, indexSettings);\n this.name = name;\n this.version = Analysis.parseAnalysisVersion(indexSettings, settings, logger);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/AbstractIndexAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -23,18 +23,14 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n-/**\n- *\n- */\n public abstract class AbstractTokenFilterFactory extends AbstractIndexComponent implements TokenFilterFactory {\n \n private final String name;\n \n protected final Version version;\n \n- public AbstractTokenFilterFactory(Index index, @IndexSettings Settings indexSettings, String name, Settings settings) {\n+ public AbstractTokenFilterFactory(Index index, Settings indexSettings, String name, Settings settings) {\n super(index, indexSettings);\n this.name = name;\n this.version = Analysis.parseAnalysisVersion(indexSettings, settings, logger);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/AbstractTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n /**\n *\n@@ -35,7 +34,7 @@ public abstract class AbstractTokenizerFactory extends AbstractIndexComponent im\n protected 
final Version version;\n \n \n- public AbstractTokenizerFactory(Index index, @IndexSettings Settings indexSettings, String name, Settings settings) {\n+ public AbstractTokenizerFactory(Index index, Settings indexSettings, String name, Settings settings) {\n super(index, indexSettings);\n this.name = name;\n this.version = Analysis.parseAnalysisVersion(indexSettings, settings, logger);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/AbstractTokenizerFactory.java", "status": "modified" }, { "diff": "@@ -66,7 +66,6 @@\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n-import org.elasticsearch.index.settings.IndexSettings;\n \n import java.io.BufferedReader;\n import java.io.IOException;\n@@ -85,7 +84,7 @@\n */\n public class Analysis {\n \n- public static Version parseAnalysisVersion(@IndexSettings Settings indexSettings, Settings settings, ESLogger logger) {\n+ public static Version parseAnalysisVersion(Settings indexSettings, Settings settings, ESLogger logger) {\n // check for explicit version on the specific analyzer component\n String sVersion = settings.get(\"version\");\n if (sVersion != null) {", "filename": "core/src/main/java/org/elasticsearch/index/analysis/Analysis.java", "status": "modified" }, { "diff": "@@ -30,7 +30,7 @@\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.indices.analysis.IndicesAnalysisService;\n \n import java.io.Closeable;\n@@ -52,13 +52,24 @@ public class AnalysisService extends AbstractIndexComponent implements Closeable\n private final NamedAnalyzer defaultSearchAnalyzer;\n private final NamedAnalyzer defaultSearchQuoteAnalyzer;\n \n-\n public AnalysisService(Index index, Settings indexSettings) {\n this(index, indexSettings, null, null, null, null, null);\n }\n \n+\n @Inject\n- public AnalysisService(Index index, @IndexSettings Settings indexSettings, @Nullable IndicesAnalysisService indicesAnalysisService,\n+ public AnalysisService(Index index, IndexSettingsService indexSettingsService, @Nullable IndicesAnalysisService indicesAnalysisService,\n+ @Nullable Map<String, AnalyzerProviderFactory> analyzerFactoryFactories,\n+ @Nullable Map<String, TokenizerFactoryFactory> tokenizerFactoryFactories,\n+ @Nullable Map<String, CharFilterFactoryFactory> charFilterFactoryFactories,\n+ @Nullable Map<String, TokenFilterFactoryFactory> tokenFilterFactoryFactories) {\n+ this(index, indexSettingsService.getSettings(), indicesAnalysisService, analyzerFactoryFactories, tokenizerFactoryFactories,\n+ charFilterFactoryFactories, tokenFilterFactoryFactories);\n+\n+ }\n+\n+ //package private for testing\n+ AnalysisService(Index index, Settings indexSettings, @Nullable IndicesAnalysisService indicesAnalysisService,\n @Nullable Map<String, AnalyzerProviderFactory> analyzerFactoryFactories,\n @Nullable Map<String, TokenizerFactoryFactory> tokenizerFactoryFactories,\n @Nullable Map<String, CharFilterFactoryFactory> charFilterFactoryFactories,", "filename": "core/src/main/java/org/elasticsearch/index/analysis/AnalysisService.java", "status": "modified" }, { "diff": "@@ -24,16 +24,16 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import 
org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n * Factory for {@link ApostropheFilter}\n */\n public class ApostropheFilterFactory extends AbstractTokenFilterFactory {\n \n @Inject\n- public ApostropheFilterFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ApostropheFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ApostropheFilterFactory.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n *\n@@ -36,8 +36,8 @@ public class ArabicAnalyzerProvider extends AbstractIndexAnalyzerProvider<Arabic\n private final ArabicAnalyzer arabicAnalyzer;\n \n @Inject\n- public ArabicAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ArabicAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n arabicAnalyzer = new ArabicAnalyzer(Analysis.parseStopWords(env, settings, ArabicAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n arabicAnalyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ArabicAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -24,16 +24,16 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n *\n */\n public class ArabicNormalizationFilterFactory extends AbstractTokenFilterFactory {\n \n @Inject\n- public ArabicNormalizationFilterFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ArabicNormalizationFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ArabicNormalizationFilterFactory.java", "status": "modified" }, { "diff": "@@ -25,16 +25,16 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n *\n */\n public class ArabicStemTokenFilterFactory extends AbstractTokenFilterFactory {\n \n @Inject\n- public ArabicStemTokenFilterFactory(Index index, 
@IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ArabicStemTokenFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ArabicStemTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n *\n@@ -36,8 +36,8 @@ public class ArmenianAnalyzerProvider extends AbstractIndexAnalyzerProvider<Arme\n private final ArmenianAnalyzer analyzer;\n \n @Inject\n- public ArmenianAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ArmenianAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n analyzer = new ArmenianAnalyzer(Analysis.parseStopWords(env, settings, ArmenianAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ArmenianAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n *\n@@ -36,8 +36,8 @@ public class BasqueAnalyzerProvider extends AbstractIndexAnalyzerProvider<Basque\n private final BasqueAnalyzer analyzer;\n \n @Inject\n- public BasqueAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public BasqueAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n analyzer = new BasqueAnalyzer(Analysis.parseStopWords(env, settings, BasqueAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/BasqueAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -26,18 +26,15 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n-/**\n- *\n- */\n public class BrazilianAnalyzerProvider extends AbstractIndexAnalyzerProvider<BrazilianAnalyzer> {\n \n private final BrazilianAnalyzer analyzer;\n \n @Inject\n- public BrazilianAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted 
Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public BrazilianAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n analyzer = new BrazilianAnalyzer(Analysis.parseStopWords(env, settings, BrazilianAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/BrazilianAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -27,17 +27,15 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n-/**\n- *\n- */\n+import org.elasticsearch.index.settings.IndexSettingsService;\n+\n public class BrazilianStemTokenFilterFactory extends AbstractTokenFilterFactory {\n \n private final CharArraySet exclusions;\n \n @Inject\n- public BrazilianStemTokenFilterFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public BrazilianStemTokenFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n this.exclusions = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/analysis/BrazilianStemTokenFilterFactory.java", "status": "modified" }, { "diff": "@@ -26,18 +26,15 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n-/**\n- *\n- */\n public class BulgarianAnalyzerProvider extends AbstractIndexAnalyzerProvider<BulgarianAnalyzer> {\n \n private final BulgarianAnalyzer analyzer;\n \n @Inject\n- public BulgarianAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public BulgarianAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n analyzer = new BulgarianAnalyzer(Analysis.parseStopWords(env, settings, BulgarianAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/BulgarianAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -25,7 +25,7 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n import java.util.Arrays;\n import java.util.HashSet;\n@@ -51,8 +51,8 @@ public final class CJKBigramFilterFactory extends AbstractTokenFilterFactory {\n private final boolean outputUnigrams;\n \n @Inject\n- public CJKBigramFilterFactory(Index index, 
@IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public CJKBigramFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n outputUnigrams = settings.getAsBoolean(\"output_unigrams\", false);\n final String[] asArray = settings.getAsArray(\"ignored_scripts\");\n Set<String> scripts = new HashSet<>(Arrays.asList(\"han\", \"hiragana\", \"katakana\", \"hangul\"));", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CJKBigramFilterFactory.java", "status": "modified" }, { "diff": "@@ -24,12 +24,13 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n public final class CJKWidthFilterFactory extends AbstractTokenFilterFactory {\n \n @Inject\n- public CJKWidthFilterFactory(Index index, Settings indexSettings, String name, Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public CJKWidthFilterFactory(Index index, IndexSettingsService indexSettingsService, String name, Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CJKWidthFilterFactory.java", "status": "modified" }, { "diff": "@@ -26,18 +26,15 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n-/**\n- *\n- */\n public class CatalanAnalyzerProvider extends AbstractIndexAnalyzerProvider<CatalanAnalyzer> {\n \n private final CatalanAnalyzer analyzer;\n \n @Inject\n- public CatalanAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public CatalanAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n analyzer = new CatalanAnalyzer(Analysis.parseStopWords(env, settings, CatalanAnalyzer.getDefaultStopSet()),\n Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET));\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CatalanAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -24,7 +24,7 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n * Only for old indexes\n@@ -34,8 +34,8 @@ public class ChineseAnalyzerProvider extends AbstractIndexAnalyzerProvider<Stand\n private final StandardAnalyzer analyzer;\n \n @Inject\n- public ChineseAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ChineseAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, @Assisted String 
name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n // old index: best effort\n analyzer = new StandardAnalyzer();\n analyzer.setVersion(version);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ChineseAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -26,18 +26,15 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n-/**\n- *\n- */\n public class CjkAnalyzerProvider extends AbstractIndexAnalyzerProvider<CJKAnalyzer> {\n \n private final CJKAnalyzer analyzer;\n \n @Inject\n- public CjkAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public CjkAnalyzerProvider(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n CharArraySet stopWords = Analysis.parseStopWords(env, settings, CJKAnalyzer.getDefaultStopSet());\n \n analyzer = new CJKAnalyzer(stopWords);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CjkAnalyzerProvider.java", "status": "modified" }, { "diff": "@@ -24,16 +24,16 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n * Factory for {@link ClassicFilter}\n */\n public class ClassicFilterFactory extends AbstractTokenFilterFactory {\n \n @Inject\n- public ClassicFilterFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ClassicFilterFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ClassicFilterFactory.java", "status": "modified" }, { "diff": "@@ -26,7 +26,7 @@\n import org.elasticsearch.common.inject.assistedinject.Assisted;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n /**\n * Factory for {@link ClassicTokenizer}\n@@ -36,8 +36,8 @@ public class ClassicTokenizerFactory extends AbstractTokenizerFactory {\n private final int maxTokenLength;\n \n @Inject\n- public ClassicTokenizerFactory(Index index, @IndexSettings Settings indexSettings, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public ClassicTokenizerFactory(Index index, IndexSettingsService indexSettingsService, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n maxTokenLength = settings.getAsInt(\"max_token_length\", StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/analysis/ClassicTokenizerFactory.java", 
"status": "modified" }, { "diff": "@@ -28,11 +28,8 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.index.Index;\n-import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.settings.IndexSettingsService;\n \n-/**\n- *\n- */\n @AnalysisSettingsRequired\n public class CommonGramsTokenFilterFactory extends AbstractTokenFilterFactory {\n \n@@ -43,8 +40,8 @@ public class CommonGramsTokenFilterFactory extends AbstractTokenFilterFactory {\n private final boolean queryMode;\n \n @Inject\n- public CommonGramsTokenFilterFactory(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) {\n- super(index, indexSettings, name, settings);\n+ public CommonGramsTokenFilterFactory(Index index, IndexSettingsService indexSettingsService, Environment env, @Assisted String name, @Assisted Settings settings) {\n+ super(index, indexSettingsService.getSettings(), name, settings);\n this.ignoreCase = settings.getAsBoolean(\"ignore_case\", false);\n this.queryMode = settings.getAsBoolean(\"query_mode\", false);\n this.words = Analysis.parseCommonWords(env, settings, null, ignoreCase);", "filename": "core/src/main/java/org/elasticsearch/index/analysis/CommonGramsTokenFilterFactory.java", "status": "modified" } ] }
{ "body": "If you create an index number_of_replicas to 0 and refresh_interval to -1 (using one API call) & add some documents. Then set number_number_of_replicas to 1 and refresh_interval to \"1s\" and add some more documents. The primary and replicas are out of sync, it looks like the replica doesn't have the new documents. See this gist for a repro:\n\nhttps://gist.github.com/msimos/869292265208852fbee4\n\nIf you run the match_all query multiple times you'll see the different documents returned. If you use preference=_primary then all documents are returned. If the query hits the replica, then only the first documents are returned when replicas were set to 0. Also using _cat/shards you can see the size difference between the shards.\n\nIt seems you can workaround this by doing sending this in two separate calls:\n\n```\nPUT /index/_settings\n{\n \"number_of_replicas\": 1\n}\n\nPUT /index/_settings\n{\n \"refresh_interval\": \"1s\"\n}\n```\n\nAlso seems the ordering matters, that \"number_of_replicas\": 1 is set before \"refresh_interval\": \"1s\". This was on Elasticsearch 1.7.2.\n", "comments": [ { "body": "Hi @msimos \n\nThanks for the report and the nice clear recreation. It is not that the shards are out of sync, but that the shard holding the first doc thinks that it doesn't need to refresh.\n\nThis is fixed in 2.0 already\n", "created_at": "2015-10-29T20:00:59Z" }, { "body": "A forced refresh (`/index2/_refresh`) resolves the issue, so somehow the replica is failing to refresh after 1s (but did in fact index the 2nd document).\n\nCuriously, if I pass `\"number_of_shards\": 1` when creating the index, the bug doesn't happen, but with 2 shards, it does. Hmm.\n", "created_at": "2015-10-30T14:54:34Z" }, { "body": "OK I think I see what's happening:\n\nThe new replica is created (because of `\"number_of_replicas\": 1`), but because this is async change, the additional `\"refresh_interval\": \"1s\"` in the same request is never applied to the newly created replica shards since they have not yet set up listeners (`IndexShard.ApplyRefreshSettings`) for settings changes.\n\nSo the replica shards still have the old `refresh_interval` (-1).\n\nReversing the two settings works because the newly created replica shards will inherit the latest index settings when they are created, including the change just made to `refresh_interval`.\n\nI'm not sure how to cleanly fix this. @bleskes?\n\nMaybe we could simply move `number_of_replicas` to the end of all settings updated in one request?\n", "created_at": "2015-10-30T15:34:47Z" }, { "body": "@mikemccand can you try and verify if that still happens on master / 3.0?\n", "created_at": "2015-10-30T15:36:45Z" }, { "body": "@s1monw I'll test master. @clintongormley already checked that 2.0 fixes it (I'm not sure why/how!) ...\n", "created_at": "2015-10-30T15:46:19Z" }, { "body": "@s1monw master doesn't have the bug either ... I'm not sure why.\n", "created_at": "2015-10-30T15:58:54Z" }, { "body": "I did some digging and this has to do with the fact that index shards are created with old index settings. IndexShard get's it's index setting using guice and the `@IndexSettings` annotation. This is supposed be supplied by [IndexSettingsProvider](https://github.com/elastic/elasticsearch/blob/1.7/src/main/java/org/elasticsearch/index/settings/IndexSettingsProvider.java) , which in turn asks `IndexSettingsService` for the latest setting. For some unknown reason, Guice decides to cache those settings and thus IndexShard gets a stale copy. 
This means that if Guice captures old settings, the settings are changed, and then a shard is created, the shard will see the old settings. It does register a settings change listener, but as far as `IndexSettingsService` is concerned, no settings have changed since the last iteration. This doesn't happen in 2.0 because we don't inject the index settings into the shard anymore but rather ask for a live copy in the IndexShard constructor. Master is of course totally rewritten to use another mechanism.\n\nI've put a test reproducing this on https://gist.github.com/bleskes/1e336a206a52cb4f1b0f . (based on 1.7 code)\n\nI think we can do one of the following:\n- Fix the Guice issue and make the provider be called every time (I did some looking and this looks like a rabbit hole - unless somebody knows what needs to be done)\n- Change 1.7 to do what 2.0 does and not inject the index settings into the IndexShard. This is a local fix and we might suffer other similar issues in the future, so if we go down this route I suggest we remove the index settings annotation and use this mechanism all over the place (in 2.x and 1.7).\n\nThoughts?\n", "created_at": "2015-11-01T10:06:16Z" }, { "body": "Option 1 sounds scary (I don't like rabbit holes). Option 2 seems like a biggish change to make on a bug fix branch? We could simply leave this bug unfixed in 1.x and encourage users who hit it to upgrade to 2.x?\n", "created_at": "2015-11-01T12:33:11Z" }, { "body": "> We could simply leave this bug unfixed in 1.x and encourage users who hit it to upgrade to 2.x?\n\nSince a workaround exists in the 1.x line, this seems to me to be a sensible option.\n", "created_at": "2015-11-01T15:09:56Z" }, { "body": "note that the workaround is likely to work but not guaranteed. Sometimes shard creation is throttled/disabled (see test) and then it doesn't.\n\nThere is the option of just doing the one-liner change in 1.7 to make the index shard get fresh settings every time and only the bigger change (i.e., remove the provider and the annotation) in 2.x.\n", "created_at": "2015-11-01T15:13:17Z" }, { "body": "> @s1monw master doesn't have the bug either ... I'm not sure why.\n\n@mikemccand in master I trashed this Provider @bleskes is talking about entirely. New shards are guaranteed to see the latest settings.\n", "created_at": "2015-11-02T21:00:56Z" }, { "body": "Unless someone objects soon, I'll assume we're going with:\n\n1) A one-line fix to the index shard in 1.7 to _not_ have injected index settings.\n2) Removal of the provider in 2.x.\n", "created_at": "2015-11-03T14:28:00Z" }, { "body": "> Unless someone objects soon, I'll assume we're going with:\n> \n> 1) A one-line fix to the index shard in 1.7 to _not_ have injected index settings.\n> 2) Removal of the provider in 2.x.\n\n+1\n", "created_at": "2015-11-03T14:30:17Z" }, { "body": "> @mikemccand in master I trashed this Provider @bleskes is talking about entirely. \n\nOK wonderful!\n\n> Unless someone objects soon, I'll assume we're going with:\n> \n> 1) A one-line fix to the index shard in 1.7 to not have injected index settings.\n> 2) Removal of the provider in 2.x.\n\n+1\n", "created_at": "2015-11-03T14:37:59Z" }, { "body": "I've implemented the fix for 1.7 (see PR #14522). I am about to work on removing the provider in 2.x.\n", "created_at": "2015-11-04T13:42:37Z" }, { "body": "This was closed by #14522.\n", "created_at": "2016-01-07T21:16:43Z" } ], "number": 14319, "title": "Shards out of sync when setting refresh_interval & number_of_replicas at the same time" }
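The root cause described in the comments, reduced to a runnable sketch: a component built from a cached settings snapshot keeps the old `refresh_interval`, while one that asks the live service sees the later update. The class names below are hypothetical stand-ins for `IndexSettingsService` and the Guice provider, not the real Elasticsearch code.

```java
import java.util.Map;

// Hypothetical demo of the stale-snapshot failure mode: the cached view keeps
// refresh_interval = -1 even after the live settings are updated to 1s.
public class StaleSettingsSketch {

    // Stand-in for a live settings service: swaps in a new map on every update.
    static class LiveSettings {
        private volatile Map<String, String> current = Map.of();

        Map<String, String> get() { return current; }

        void update(Map<String, String> updated) { current = updated; }
    }

    public static void main(String[] args) {
        LiveSettings service = new LiveSettings();
        service.update(Map.of("index.refresh_interval", "-1"));

        // A provider result that gets cached once, as the comments suspect Guice does here.
        Map<String, String> cachedSnapshot = service.get();

        // The settings update request arrives (refresh_interval -> "1s").
        service.update(Map.of("index.refresh_interval", "1s"));

        // A shard built from the cached snapshot still sees the stale value ...
        System.out.println("snapshot: " + cachedSnapshot.get("index.refresh_interval")); // -1
        // ... while a shard that asks the service directly sees the fresh one.
        System.out.println("live:     " + service.get().get("index.refresh_interval"));  // 1s
    }
}
```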
{ "body": "With this commit fresh index settings are used when creating a shard. This\nworks around a problem with Guice regarding too aggressive caching.\n\nFixes #14319\n", "number": 14522, "review_comments": [], "title": "Use fresh index settings for shards" }
{ "commits": [ { "message": "Use fresh index settings for shards\n\nWith this commit fresh index settings are used when creating a shard. This\nworks around a problem with Guice regarding too aggressive caching.\n\nFixes #14319" } ], "files": [ { "diff": "@@ -97,7 +97,6 @@\n import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.index.search.stats.SearchStats;\n import org.elasticsearch.index.search.stats.ShardSearchService;\n-import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.store.Store;\n@@ -201,13 +200,13 @@ public class IndexShard extends AbstractIndexShardComponent {\n private final IndexShardOperationCounter indexShardOperationCounter;\n \n @Inject\n- public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, MergeSchedulerProvider mergeScheduler, Translog translog,\n+ public IndexShard(ShardId shardId, IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, MergeSchedulerProvider mergeScheduler, Translog translog,\n ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService, ShardIndexingService indexingService, ShardGetService getService, ShardSearchService searchService, ShardIndexWarmerService shardWarmerService,\n ShardFilterCache shardFilterCache, ShardFieldData shardFieldData, PercolatorQueriesRegistry percolatorQueriesRegistry, ShardPercolateService shardPercolateService, CodecService codecService,\n ShardTermVectorService termVectorService, IndexFieldDataService indexFieldDataService, IndexService indexService, ShardSuggestService shardSuggestService, ShardQueryCache shardQueryCache, ShardFixedBitSetFilterCache shardFixedBitSetFilterCache,\n @Nullable IndicesWarmer warmer, SnapshotDeletionPolicy deletionPolicy, AnalysisService analysisService, SimilarityService similarityService, MergePolicyProvider mergePolicyProvider, EngineFactory factory,\n ClusterService clusterService) {\n- super(shardId, indexSettings);\n+ super(shardId, indexSettingsService.getSettings());\n Preconditions.checkNotNull(store, \"Store must be provided to the index shard\");\n Preconditions.checkNotNull(deletionPolicy, \"Snapshot deletion policy must be provided to the index shard\");\n Preconditions.checkNotNull(translog, \"Translog must be provided to the index shard\");", "filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.aliases.IndexAliasesService;\n import org.elasticsearch.index.analysis.AnalysisService;\n@@ -46,7 +45,6 @@\n import org.elasticsearch.index.percolator.stats.ShardPercolateService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.search.stats.ShardSearchService;\n-import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.similarity.SimilarityService;\n 
import org.elasticsearch.index.store.Store;\n@@ -69,7 +67,7 @@ public final class ShadowIndexShard extends IndexShard {\n private final Object mutex = new Object();\n \n @Inject\n- public ShadowIndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexSettingsService indexSettingsService,\n+ public ShadowIndexShard(ShardId shardId, IndexSettingsService indexSettingsService,\n IndicesLifecycle indicesLifecycle, Store store, MergeSchedulerProvider mergeScheduler, Translog translog,\n ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService,\n IndexCache indexCache, IndexAliasesService indexAliasesService, ShardIndexingService indexingService,\n@@ -81,7 +79,7 @@ public ShadowIndexShard(ShardId shardId, @IndexSettings Settings indexSettings,\n ShardFixedBitSetFilterCache shardFixedBitSetFilterCache, AnalysisService analysisService,\n @Nullable IndicesWarmer warmer, SnapshotDeletionPolicy deletionPolicy, SimilarityService similarityService,\n MergePolicyProvider mergePolicyProvider, EngineFactory factory, ClusterService clusterService) {\n- super(shardId, indexSettings, indexSettingsService, indicesLifecycle, store, mergeScheduler,\n+ super(shardId, indexSettingsService, indicesLifecycle, store, mergeScheduler,\n translog, threadPool, mapperService, queryParserService, indexCache, indexAliasesService,\n indexingService, getService, searchService, shardWarmerService, shardFilterCache,\n shardFieldData, percolatorQueriesRegistry, shardPercolateService, codecService,", "filename": "src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java", "status": "modified" } ] }
{ "body": "The issue I focus on is described in #14010, #14011. Reproduced it using an integration test and I think I found a program flow leading to this issue.\n\nAssume cluster where setting for delayed allocation is 1 minute. Now a node goes down. Shard becomes unassigned. In `RoutingService.clusterChanged()` (master node), `nextDelaySetting` is 1 minute (= smallest delayed allocation setting). The variable `registeredNextDelaySetting` is initially set to `Long.Max_VALUE`, hence we schedule a new reroute in a minute. The variable `registeredNextDelaySetting` is set to a minute. After half a minute, a second node goes down. A shard of the second node becomes unassigned. In this case, `registeredNextDelaySetting` is still 1 minute, and `nextDelaySetting` is also 1 minute. So nothing happens in `RoutingService.clusterChanged()`. The previous scheduled reroute gets executed at some point. This goes as follows: First, the variable `registeredNextDelaySetting` is again set to `Long.MAX_VALUE` and then we submit a cluster update task to do the reroute. Lets assume that routingResult.changed() yields true (but only because it is set to true in ReplicaShardAllocator due to the second shard still being delayed). Now, after the reroute is successfully applied, `InternalClusterService` calls back to `RoutingService.clusterChanged()`. Here we check if we were the reason for the cluster change event. If yes (and that's the case), we do not schedule a delayed allocation until the next cluster change event. This means that if no new cluster change event happens, the check never gets reevaluated for the remaining delayed shards.\n", "comments": [ { "body": "Let me see if I understand this:\n- 00:00 Nodes goes down, 1 shard unassigned (shard A)\n- 00:00 Routing service sees a 1 minute delay in assigning the shard and schedules a reroute in 1 minute\n- 00:30 Another node goes down, 1 additional shard unassigned (shard B)\n- 00:30 Routing service sees a 1 minute configured delay, but does nothing\n- 01:00 Scheduled reroute is executed, shard A is assigned (B still unassigned because it has 30 seconds of \"delay\" left)\n- 01:05 New cluster state from the reroute, but ignored by RoutingService because it came from routing service\n\n> we do not schedule a delayed allocation until the next cluster change event. This means that if no new cluster change event happens, the check never gets reevaluated for the remaining delayed shards.\n\nHowever, assuming shard A is actually assigned there will be a new cluster change event (a shard started event), so that's why this doesn't look like it occurs very frequently.\n\nI think perhaps we can remove the \n\n``` java\n if (event.source().startsWith(CLUSTER_UPDATE_TASK_SOURCE)) {\n // that's us, ignore this event\n return;\n }\n```\n\nFrom the cluster changed event, what do you think?\n", "created_at": "2015-11-02T17:34:17Z" } ], "number": 14445, "title": "Delayed allocation can miss a reroute" }
{ "body": "Closes #14445 \nRelates to #14010 and #14011\n\nImplementation note:\n\nThe smallest test case I could come up with follows #14445 :\n- Create 4 data nodes (and 1 master node, to make the test simpler).\n- Create 2 indices with delayed shard allocation set to 10 seconds. Each index has 2 shards (1 primary and 1 replica). Balancer ensures that each data node gets a shard, either a primary or a replica shard.\n- Shutdown one node that has a replica shard of first index.\n- Delayed allocation kicks in (a task is scheduled to reroute in 10 seconds).\n- Wait a second\n- Shutdown second node that has a replica shard of second index (no new task is scheduled to reroute in 10 seconds).\n- Make replica shard of first index obsolete by setting number_of_replicas to 0 (the task that was scheduled to reroute is still being scheduled).\n- Wait until 3 shards (1 for first index and 2 for second index) are available. Wait at least 30 minutes to give a chance to delayed allocation of replica shard of second index to kick in.\n\nSame as in #14010, the issue can be \"fixed\" by waiting until delayed routing of first shard has kicked in and manually calling reroute. In the given test this works by inserting the following lines before the last line in the test:\n\n```\nlogger.info(\"--> waiting after delayed shard assignment should kick in for test1\");\nThread.sleep(15000);\nlogger.info(\"--> reroute\");\nclient().admin().cluster().prepareReroute().execute().actionGet();\n```\n", "number": 14494, "review_comments": [ { "body": "Is there anything your missing in TestClusterService?\n", "created_at": "2015-11-11T14:09:37Z" }, { "body": "can we call these indices \"short_delay\" and \"long_delay\" ? I think it will be easier to read.\n", "created_at": "2015-11-12T08:27:46Z" }, { "body": "discussed and agreed to remove this now that we have the a unit test.\n", "created_at": "2015-11-12T08:36:19Z" } ], "title": "Delayed allocation can miss a reroute" }
{ "commits": [ { "message": "Fix missing reroute in case of multiple delayed shards\n\nAfter a delayed reroute of a shard, RoutingService misses to schedule a new delayed reroute of other delayed shards.\n\nCloses #14494\nCloses #14010\nCloses #14445" } ], "files": [ { "diff": "@@ -110,10 +110,6 @@ public final void reroute(String reason) {\n \n @Override\n public void clusterChanged(ClusterChangedEvent event) {\n- if (event.source().startsWith(CLUSTER_UPDATE_TASK_SOURCE)) {\n- // that's us, ignore this event\n- return;\n- }\n if (event.state().nodes().localNodeMaster()) {\n // figure out when the next unassigned allocation need to happen from now. If this is larger or equal\n // then the last time we checked and scheduled, we are guaranteed to have a reroute until then, so no need", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java", "status": "modified" }, { "diff": "@@ -23,22 +23,35 @@\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.EmptyClusterInfoService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocators;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.gateway.GatewayAllocator;\n+import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.test.ESAllocationTestCase;\n+import org.elasticsearch.test.cluster.TestClusterService;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n+import static java.util.Collections.singletonMap;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -138,6 +151,105 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ /**\n+ * This tests that a new delayed reroute is scheduled right after a delayed reroute was run\n+ */\n+ public void testDelayedUnassignedScheduleRerouteAfterDelayedReroute() throws Exception {\n+ final ThreadPool testThreadPool = new ThreadPool(getTestName());\n+\n+ try {\n+ DelayedShardsMockGatewayAllocator mockGatewayAllocator = new DelayedShardsMockGatewayAllocator();\n+ AllocationService allocation = new AllocationService(Settings.Builder.EMPTY_SETTINGS,\n+ randomAllocationDeciders(Settings.Builder.EMPTY_SETTINGS, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), getRandom()),\n+ new ShardsAllocators(Settings.Builder.EMPTY_SETTINGS, mockGatewayAllocator), 
EmptyClusterInfoService.INSTANCE);\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"short_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"long_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10s\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"short_delay\")).addAsNew(metaData.index(\"long_delay\")).build())\n+ .nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"node0\", singletonMap(\"data\", Boolean.FALSE.toString()))).localNodeId(\"node0\").masterNodeId(\"node0\")\n+ .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n+ // allocate shards\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // start primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ // start replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(\"all shards should be started\", clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n+\n+ // find replica of short_delay\n+ ShardRouting shortDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"short_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ shortDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(shortDelayReplica);\n+\n+ // find replica of long_delay\n+ ShardRouting longDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"long_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ longDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(longDelayReplica);\n+\n+ // remove node of shortDelayReplica and node of longDelayReplica and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(shortDelayReplica.currentNodeId()).remove(longDelayReplica.currentNodeId())).build();\n+ // make sure both replicas are marked as delayed (i.e. 
not reallocated)\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(shortDelayReplica, longDelayReplica));\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+\n+ // check that shortDelayReplica and longDelayReplica have been marked unassigned\n+ RoutingNodes.UnassignedShards unassigned = clusterState.getRoutingNodes().unassigned();\n+ assertEquals(2, unassigned.size());\n+ // update shortDelayReplica and longDelayReplica variables with new shard routing\n+ ShardRouting shortDelayUnassignedReplica = null;\n+ ShardRouting longDelayUnassignedReplica = null;\n+ for (ShardRouting shr : unassigned) {\n+ if (shr.getIndex().equals(\"short_delay\")) {\n+ shortDelayUnassignedReplica = shr;\n+ } else {\n+ longDelayUnassignedReplica = shr;\n+ }\n+ }\n+ assertTrue(shortDelayReplica.isSameShard(shortDelayUnassignedReplica));\n+ assertTrue(longDelayReplica.isSameShard(longDelayUnassignedReplica));\n+\n+ // manually trigger a clusterChanged event on routingService\n+ ClusterState newState = clusterState;\n+ // create fake cluster service\n+ TestClusterService clusterService = new TestClusterService(newState, testThreadPool);\n+ // create routing service, also registers listener on cluster service\n+ RoutingService routingService = new RoutingService(Settings.EMPTY, testThreadPool, clusterService, allocation);\n+ routingService.start(); // just so performReroute does not prematurely return\n+ // ensure routing service has proper timestamp before triggering\n+ routingService.setUnassignedShardsAllocatedTimestamp(shortDelayUnassignedReplica.unassignedInfo().getTimestampInMillis() + randomIntBetween(0, 50));\n+ // next (delayed) reroute should only delay longDelayReplica/longDelayUnassignedReplica\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(longDelayUnassignedReplica));\n+ // register listener on cluster state so we know when cluster state has been changed\n+ CountDownLatch latch = new CountDownLatch(1);\n+ clusterService.addLast(event -> latch.countDown());\n+ // instead of clusterService calling clusterChanged, we call it directly here\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ // cluster service should have updated state and called routingService with clusterChanged\n+ latch.await();\n+ // verify the registration has been set to the delay of longDelayReplica/longDelayUnassignedReplica\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(10000L));\n+ } finally {\n+ terminate(testThreadPool);\n+ }\n+ }\n+\n public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n AllocationService allocation = createAllocationService();\n MetaData metaData = MetaData.builder()\n@@ -197,4 +309,46 @@ protected void performReroute(String reason) {\n rerouted.set(true);\n }\n }\n+\n+ /**\n+ * Mocks behavior in ReplicaShardAllocator to remove delayed shards from list of unassigned shards so they don't get reassigned yet.\n+ * It does not implement the full logic but shards that are to be delayed need to be explicitly set using the method setShardsToDelay(...).\n+ */\n+ private static class DelayedShardsMockGatewayAllocator extends GatewayAllocator {\n+ volatile List<ShardRouting> delayedShards = Collections.emptyList();\n+\n+ public DelayedShardsMockGatewayAllocator() {\n+ super(Settings.EMPTY, null, null);\n+ }\n+\n+ @Override\n+ public void applyStartedShards(StartedRerouteAllocation allocation) {}\n+\n+ @Override\n+ public void 
applyFailedShards(FailedRerouteAllocation allocation) {}\n+\n+ /**\n+ * Explicitly set which shards should be delayed in the next allocateUnassigned calls\n+ */\n+ public void setShardsToDelay(List<ShardRouting> delayedShards) {\n+ this.delayedShards = delayedShards;\n+ }\n+\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ final RoutingNodes routingNodes = allocation.routingNodes();\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n+ boolean changed = false;\n+ while (unassignedIterator.hasNext()) {\n+ ShardRouting shard = unassignedIterator.next();\n+ for (ShardRouting shardToDelay : delayedShards) {\n+ if (shard.isSameShard(shardToDelay)) {\n+ changed = true;\n+ unassignedIterator.removeAndIgnore();\n+ }\n+ }\n+ }\n+ return changed;\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@\n public class TestClusterService implements ClusterService {\n \n volatile ClusterState state;\n- private final Collection<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n+ private final List<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n private final Queue<NotifyTimeout> onGoingTimeouts = ConcurrentCollections.newQueue();\n private final ThreadPool threadPool;\n private final ESLogger logger = Loggers.getLogger(getClass(), Settings.EMPTY);\n@@ -135,7 +135,7 @@ public OperationRouting operationRouting() {\n \n @Override\n public void addFirst(ClusterStateListener listener) {\n- throw new UnsupportedOperationException();\n+ listeners.add(0, listener);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/cluster/TestClusterService.java", "status": "modified" } ] }
{ "body": "This test uses a similar setup as 14010 (also using ES 1.7.2), the only difference is that for this use case, only 1 of the attributes (updateDomain) is set to be forced, the other attribute (faultDomain) is not set to be forced.\n\nWith this configuration, you will see from the video that the first issue I ran into is 14010 where I have to prod it by forcing a cluster reroute in order for it to complete its allocation. Except that even after prodding, one of the shard is not at the right place. If I use a manual reroute command to move it, it actually gets moved successfully and does not violate any routing rules. Given that it does not violate any routing rules, it seems like there is another bug here where it is not moving the shard to the right place automatically as part of awareness.\n\nRepro video:\nhttps://drive.google.com/file/d/0B1rxJ0dAZbQvd3p5Y3F0QzItMWs/view?usp=sharing\n\nNode setup:\nhttps://docs.google.com/document/d/1ktYSzaZh_FlhloSextzCNh3HYYS0OEE8wPU4jhHGPTo/edit?usp=sharing\n", "comments": [ { "body": "Also see #14010\n", "created_at": "2015-10-08T13:11:16Z" }, { "body": "The issue here is that the AwarenessAllocationDecider uses the total number of shards (which is 1 primary + 2 replicas = 3) as basis to calculate balancedness in faultDomain. However, one of the replicas cannot be allocated due to forced awareness in updateDomain. But this unassigned replica is still used as basis to calculate balancedness in faultDomain.\n\nI'm not sure what we could change here. Any thoughts @bleskes @dakrone?\n", "created_at": "2015-12-01T18:19:15Z" }, { "body": "agreed this will be hard to solve. Basically we need to capture the idea that the third copy can not be assigned under the current setup and this shouldn't be taken into account when balancing on the allocation awareness attribute. My only idea here is to separate allocation deciders into \"hard\" and \"soft\" group. The hard one should be run first and should mark any shard copy that is \"not assignable\" as such. The soft group can the ignore those shards when making decisions. It will require more thoughts and review of current deciders to see how they fit. \n\nNote: #8178 is another example the result of mixing allocation awareness with a hard rule.\n", "created_at": "2015-12-14T21:04:25Z" }, { "body": "This is an interesting idea, but since its opening we have not seen enough feedback that it is something we should work on. We prefer to close this issue as a clear indication that we are not going to work on this at this time. We are always open to reconsidering this in the future based on compelling feedback; despite this issue being closed please feel free to leave feedback (including +1s).", "created_at": "2018-03-20T09:59:42Z" } ], "number": 14011, "title": "Allocation not complete when only 1 of 2 awareness attributes is set to forced" }
{ "body": "Closes #14445 \nRelates to #14010 and #14011\n\nImplementation note:\n\nThe smallest test case I could come up with follows #14445 :\n- Create 4 data nodes (and 1 master node, to make the test simpler).\n- Create 2 indices with delayed shard allocation set to 10 seconds. Each index has 2 shards (1 primary and 1 replica). Balancer ensures that each data node gets a shard, either a primary or a replica shard.\n- Shutdown one node that has a replica shard of first index.\n- Delayed allocation kicks in (a task is scheduled to reroute in 10 seconds).\n- Wait a second\n- Shutdown second node that has a replica shard of second index (no new task is scheduled to reroute in 10 seconds).\n- Make replica shard of first index obsolete by setting number_of_replicas to 0 (the task that was scheduled to reroute is still being scheduled).\n- Wait until 3 shards (1 for first index and 2 for second index) are available. Wait at least 30 minutes to give a chance to delayed allocation of replica shard of second index to kick in.\n\nSame as in #14010, the issue can be \"fixed\" by waiting until delayed routing of first shard has kicked in and manually calling reroute. In the given test this works by inserting the following lines before the last line in the test:\n\n```\nlogger.info(\"--> waiting after delayed shard assignment should kick in for test1\");\nThread.sleep(15000);\nlogger.info(\"--> reroute\");\nclient().admin().cluster().prepareReroute().execute().actionGet();\n```\n", "number": 14494, "review_comments": [ { "body": "Is there anything your missing in TestClusterService?\n", "created_at": "2015-11-11T14:09:37Z" }, { "body": "can we call these indices \"short_delay\" and \"long_delay\" ? I think it will be easier to read.\n", "created_at": "2015-11-12T08:27:46Z" }, { "body": "discussed and agreed to remove this now that we have the a unit test.\n", "created_at": "2015-11-12T08:36:19Z" } ], "title": "Delayed allocation can miss a reroute" }
{ "commits": [ { "message": "Fix missing reroute in case of multiple delayed shards\n\nAfter a delayed reroute of a shard, RoutingService misses to schedule a new delayed reroute of other delayed shards.\n\nCloses #14494\nCloses #14010\nCloses #14445" } ], "files": [ { "diff": "@@ -110,10 +110,6 @@ public final void reroute(String reason) {\n \n @Override\n public void clusterChanged(ClusterChangedEvent event) {\n- if (event.source().startsWith(CLUSTER_UPDATE_TASK_SOURCE)) {\n- // that's us, ignore this event\n- return;\n- }\n if (event.state().nodes().localNodeMaster()) {\n // figure out when the next unassigned allocation need to happen from now. If this is larger or equal\n // then the last time we checked and scheduled, we are guaranteed to have a reroute until then, so no need", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java", "status": "modified" }, { "diff": "@@ -23,22 +23,35 @@\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.EmptyClusterInfoService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocators;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.gateway.GatewayAllocator;\n+import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.test.ESAllocationTestCase;\n+import org.elasticsearch.test.cluster.TestClusterService;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n+import static java.util.Collections.singletonMap;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -138,6 +151,105 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ /**\n+ * This tests that a new delayed reroute is scheduled right after a delayed reroute was run\n+ */\n+ public void testDelayedUnassignedScheduleRerouteAfterDelayedReroute() throws Exception {\n+ final ThreadPool testThreadPool = new ThreadPool(getTestName());\n+\n+ try {\n+ DelayedShardsMockGatewayAllocator mockGatewayAllocator = new DelayedShardsMockGatewayAllocator();\n+ AllocationService allocation = new AllocationService(Settings.Builder.EMPTY_SETTINGS,\n+ randomAllocationDeciders(Settings.Builder.EMPTY_SETTINGS, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), getRandom()),\n+ new ShardsAllocators(Settings.Builder.EMPTY_SETTINGS, mockGatewayAllocator), 
EmptyClusterInfoService.INSTANCE);\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"short_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"long_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10s\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"short_delay\")).addAsNew(metaData.index(\"long_delay\")).build())\n+ .nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"node0\", singletonMap(\"data\", Boolean.FALSE.toString()))).localNodeId(\"node0\").masterNodeId(\"node0\")\n+ .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n+ // allocate shards\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // start primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ // start replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(\"all shards should be started\", clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n+\n+ // find replica of short_delay\n+ ShardRouting shortDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"short_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ shortDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(shortDelayReplica);\n+\n+ // find replica of long_delay\n+ ShardRouting longDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"long_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ longDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(longDelayReplica);\n+\n+ // remove node of shortDelayReplica and node of longDelayReplica and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(shortDelayReplica.currentNodeId()).remove(longDelayReplica.currentNodeId())).build();\n+ // make sure both replicas are marked as delayed (i.e. 
not reallocated)\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(shortDelayReplica, longDelayReplica));\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+\n+ // check that shortDelayReplica and longDelayReplica have been marked unassigned\n+ RoutingNodes.UnassignedShards unassigned = clusterState.getRoutingNodes().unassigned();\n+ assertEquals(2, unassigned.size());\n+ // update shortDelayReplica and longDelayReplica variables with new shard routing\n+ ShardRouting shortDelayUnassignedReplica = null;\n+ ShardRouting longDelayUnassignedReplica = null;\n+ for (ShardRouting shr : unassigned) {\n+ if (shr.getIndex().equals(\"short_delay\")) {\n+ shortDelayUnassignedReplica = shr;\n+ } else {\n+ longDelayUnassignedReplica = shr;\n+ }\n+ }\n+ assertTrue(shortDelayReplica.isSameShard(shortDelayUnassignedReplica));\n+ assertTrue(longDelayReplica.isSameShard(longDelayUnassignedReplica));\n+\n+ // manually trigger a clusterChanged event on routingService\n+ ClusterState newState = clusterState;\n+ // create fake cluster service\n+ TestClusterService clusterService = new TestClusterService(newState, testThreadPool);\n+ // create routing service, also registers listener on cluster service\n+ RoutingService routingService = new RoutingService(Settings.EMPTY, testThreadPool, clusterService, allocation);\n+ routingService.start(); // just so performReroute does not prematurely return\n+ // ensure routing service has proper timestamp before triggering\n+ routingService.setUnassignedShardsAllocatedTimestamp(shortDelayUnassignedReplica.unassignedInfo().getTimestampInMillis() + randomIntBetween(0, 50));\n+ // next (delayed) reroute should only delay longDelayReplica/longDelayUnassignedReplica\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(longDelayUnassignedReplica));\n+ // register listener on cluster state so we know when cluster state has been changed\n+ CountDownLatch latch = new CountDownLatch(1);\n+ clusterService.addLast(event -> latch.countDown());\n+ // instead of clusterService calling clusterChanged, we call it directly here\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ // cluster service should have updated state and called routingService with clusterChanged\n+ latch.await();\n+ // verify the registration has been set to the delay of longDelayReplica/longDelayUnassignedReplica\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(10000L));\n+ } finally {\n+ terminate(testThreadPool);\n+ }\n+ }\n+\n public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n AllocationService allocation = createAllocationService();\n MetaData metaData = MetaData.builder()\n@@ -197,4 +309,46 @@ protected void performReroute(String reason) {\n rerouted.set(true);\n }\n }\n+\n+ /**\n+ * Mocks behavior in ReplicaShardAllocator to remove delayed shards from list of unassigned shards so they don't get reassigned yet.\n+ * It does not implement the full logic but shards that are to be delayed need to be explicitly set using the method setShardsToDelay(...).\n+ */\n+ private static class DelayedShardsMockGatewayAllocator extends GatewayAllocator {\n+ volatile List<ShardRouting> delayedShards = Collections.emptyList();\n+\n+ public DelayedShardsMockGatewayAllocator() {\n+ super(Settings.EMPTY, null, null);\n+ }\n+\n+ @Override\n+ public void applyStartedShards(StartedRerouteAllocation allocation) {}\n+\n+ @Override\n+ public void 
applyFailedShards(FailedRerouteAllocation allocation) {}\n+\n+ /**\n+ * Explicitly set which shards should be delayed in the next allocateUnassigned calls\n+ */\n+ public void setShardsToDelay(List<ShardRouting> delayedShards) {\n+ this.delayedShards = delayedShards;\n+ }\n+\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ final RoutingNodes routingNodes = allocation.routingNodes();\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n+ boolean changed = false;\n+ while (unassignedIterator.hasNext()) {\n+ ShardRouting shard = unassignedIterator.next();\n+ for (ShardRouting shardToDelay : delayedShards) {\n+ if (shard.isSameShard(shardToDelay)) {\n+ changed = true;\n+ unassignedIterator.removeAndIgnore();\n+ }\n+ }\n+ }\n+ return changed;\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@\n public class TestClusterService implements ClusterService {\n \n volatile ClusterState state;\n- private final Collection<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n+ private final List<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n private final Queue<NotifyTimeout> onGoingTimeouts = ConcurrentCollections.newQueue();\n private final ThreadPool threadPool;\n private final ESLogger logger = Loggers.getLogger(getClass(), Settings.EMPTY);\n@@ -135,7 +135,7 @@ public OperationRouting operationRouting() {\n \n @Override\n public void addFirst(ClusterStateListener listener) {\n- throw new UnsupportedOperationException();\n+ listeners.add(0, listener);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/cluster/TestClusterService.java", "status": "modified" } ] }
{ "body": "It is difficult to write out a full repro in words, so I recorded a video of the repro which will help.\n\nThe test uses the latest 1.7.2 release.\n\nIn short, 6 nodes in cluster, 1 index with 4 shards and 2 replicas (3 copies). \nEach node has 2 awareness attributes (updateDomain and faultDomain) set (both forced). 3 nodes are in 1 updateDomain, the other 3 are in the other updateDomain. And these nodes are also in different faultDomains. Test has delayed allocation set to 10s for quicker allocation. \n\nWhen an updateDomain is killed (3 nodes gone), the cluster shows partial allocation of shards - until a manual _cluster/reroute command is run (without post body) to prod it, or if a command is issued that updates the cluster state (eg. create an index). Once a manual reroute (that doesn't change anything) is run or the cluster state is updated, then the remaining shards are immediately allocated successfully based on the awareness settings.\n\nIf delayed allocation is turned off entirely, then everything works fine and there is no need to manually prod it to complete the rest of the allocation.\n\nNote that sometimes, with delayed allocation on, it does do the right thing, but if you retest a few times stopping and restarting the 3 nodes, you will see that it doesn't do so consistently.\n\nRepro video:\nhttps://drive.google.com/file/d/0B1rxJ0dAZbQvRUE0SlVxT2pOZFE/view?usp=sharing\n\nNode setup:\nhttps://docs.google.com/document/d/1J5FPSvIA5U41Ou1BNpEN9P7q2L8e7KMxM69IG4dGMkk/edit?usp=sharing\n", "comments": [ { "body": "Also see #14011\n", "created_at": "2015-10-08T13:11:29Z" } ], "number": 14010, "title": "Delayed allocation causing partial allocation of shards on allocation awareness" }
{ "body": "Closes #14445 \nRelates to #14010 and #14011\n\nImplementation note:\n\nThe smallest test case I could come up with follows #14445 :\n- Create 4 data nodes (and 1 master node, to make the test simpler).\n- Create 2 indices with delayed shard allocation set to 10 seconds. Each index has 2 shards (1 primary and 1 replica). Balancer ensures that each data node gets a shard, either a primary or a replica shard.\n- Shutdown one node that has a replica shard of first index.\n- Delayed allocation kicks in (a task is scheduled to reroute in 10 seconds).\n- Wait a second\n- Shutdown second node that has a replica shard of second index (no new task is scheduled to reroute in 10 seconds).\n- Make replica shard of first index obsolete by setting number_of_replicas to 0 (the task that was scheduled to reroute is still being scheduled).\n- Wait until 3 shards (1 for first index and 2 for second index) are available. Wait at least 30 minutes to give a chance to delayed allocation of replica shard of second index to kick in.\n\nSame as in #14010, the issue can be \"fixed\" by waiting until delayed routing of first shard has kicked in and manually calling reroute. In the given test this works by inserting the following lines before the last line in the test:\n\n```\nlogger.info(\"--> waiting after delayed shard assignment should kick in for test1\");\nThread.sleep(15000);\nlogger.info(\"--> reroute\");\nclient().admin().cluster().prepareReroute().execute().actionGet();\n```\n", "number": 14494, "review_comments": [ { "body": "Is there anything your missing in TestClusterService?\n", "created_at": "2015-11-11T14:09:37Z" }, { "body": "can we call these indices \"short_delay\" and \"long_delay\" ? I think it will be easier to read.\n", "created_at": "2015-11-12T08:27:46Z" }, { "body": "discussed and agreed to remove this now that we have the a unit test.\n", "created_at": "2015-11-12T08:36:19Z" } ], "title": "Delayed allocation can miss a reroute" }
{ "commits": [ { "message": "Fix missing reroute in case of multiple delayed shards\n\nAfter a delayed reroute of a shard, RoutingService misses to schedule a new delayed reroute of other delayed shards.\n\nCloses #14494\nCloses #14010\nCloses #14445" } ], "files": [ { "diff": "@@ -110,10 +110,6 @@ public final void reroute(String reason) {\n \n @Override\n public void clusterChanged(ClusterChangedEvent event) {\n- if (event.source().startsWith(CLUSTER_UPDATE_TASK_SOURCE)) {\n- // that's us, ignore this event\n- return;\n- }\n if (event.state().nodes().localNodeMaster()) {\n // figure out when the next unassigned allocation need to happen from now. If this is larger or equal\n // then the last time we checked and scheduled, we are guaranteed to have a reroute until then, so no need", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/RoutingService.java", "status": "modified" }, { "diff": "@@ -23,22 +23,35 @@\n import org.elasticsearch.cluster.ClusterChangedEvent;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.EmptyClusterInfoService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;\n+import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocators;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.gateway.GatewayAllocator;\n+import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.test.ESAllocationTestCase;\n+import org.elasticsearch.test.cluster.TestClusterService;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.After;\n import org.junit.Before;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n+import static java.util.Collections.singletonMap;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -138,6 +151,105 @@ public void testDelayedUnassignedScheduleReroute() throws Exception {\n assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(Long.MAX_VALUE));\n }\n \n+ /**\n+ * This tests that a new delayed reroute is scheduled right after a delayed reroute was run\n+ */\n+ public void testDelayedUnassignedScheduleRerouteAfterDelayedReroute() throws Exception {\n+ final ThreadPool testThreadPool = new ThreadPool(getTestName());\n+\n+ try {\n+ DelayedShardsMockGatewayAllocator mockGatewayAllocator = new DelayedShardsMockGatewayAllocator();\n+ AllocationService allocation = new AllocationService(Settings.Builder.EMPTY_SETTINGS,\n+ randomAllocationDeciders(Settings.Builder.EMPTY_SETTINGS, new NodeSettingsService(Settings.Builder.EMPTY_SETTINGS), getRandom()),\n+ new ShardsAllocators(Settings.Builder.EMPTY_SETTINGS, mockGatewayAllocator), 
EmptyClusterInfoService.INSTANCE);\n+\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"short_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"100ms\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"long_delay\").settings(settings(Version.CURRENT).put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, \"10s\"))\n+ .numberOfShards(1).numberOfReplicas(1))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData)\n+ .routingTable(RoutingTable.builder().addAsNew(metaData.index(\"short_delay\")).addAsNew(metaData.index(\"long_delay\")).build())\n+ .nodes(DiscoveryNodes.builder()\n+ .put(newNode(\"node0\", singletonMap(\"data\", Boolean.FALSE.toString()))).localNodeId(\"node0\").masterNodeId(\"node0\")\n+ .put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n+ // allocate shards\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+ // start primaries\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ // start replicas\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING))).build();\n+ assertThat(\"all shards should be started\", clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n+\n+ // find replica of short_delay\n+ ShardRouting shortDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"short_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ shortDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(shortDelayReplica);\n+\n+ // find replica of long_delay\n+ ShardRouting longDelayReplica = null;\n+ for (ShardRouting shardRouting : clusterState.getRoutingNodes().routingTable().allShards(\"long_delay\")) {\n+ if (shardRouting.primary() == false) {\n+ longDelayReplica = shardRouting;\n+ break;\n+ }\n+ }\n+ assertNotNull(longDelayReplica);\n+\n+ // remove node of shortDelayReplica and node of longDelayReplica and reroute\n+ ClusterState prevState = clusterState;\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder(clusterState.nodes()).remove(shortDelayReplica.currentNodeId()).remove(longDelayReplica.currentNodeId())).build();\n+ // make sure both replicas are marked as delayed (i.e. 
not reallocated)\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(shortDelayReplica, longDelayReplica));\n+ clusterState = ClusterState.builder(clusterState).routingResult(allocation.reroute(clusterState)).build();\n+\n+ // check that shortDelayReplica and longDelayReplica have been marked unassigned\n+ RoutingNodes.UnassignedShards unassigned = clusterState.getRoutingNodes().unassigned();\n+ assertEquals(2, unassigned.size());\n+ // update shortDelayReplica and longDelayReplica variables with new shard routing\n+ ShardRouting shortDelayUnassignedReplica = null;\n+ ShardRouting longDelayUnassignedReplica = null;\n+ for (ShardRouting shr : unassigned) {\n+ if (shr.getIndex().equals(\"short_delay\")) {\n+ shortDelayUnassignedReplica = shr;\n+ } else {\n+ longDelayUnassignedReplica = shr;\n+ }\n+ }\n+ assertTrue(shortDelayReplica.isSameShard(shortDelayUnassignedReplica));\n+ assertTrue(longDelayReplica.isSameShard(longDelayUnassignedReplica));\n+\n+ // manually trigger a clusterChanged event on routingService\n+ ClusterState newState = clusterState;\n+ // create fake cluster service\n+ TestClusterService clusterService = new TestClusterService(newState, testThreadPool);\n+ // create routing service, also registers listener on cluster service\n+ RoutingService routingService = new RoutingService(Settings.EMPTY, testThreadPool, clusterService, allocation);\n+ routingService.start(); // just so performReroute does not prematurely return\n+ // ensure routing service has proper timestamp before triggering\n+ routingService.setUnassignedShardsAllocatedTimestamp(shortDelayUnassignedReplica.unassignedInfo().getTimestampInMillis() + randomIntBetween(0, 50));\n+ // next (delayed) reroute should only delay longDelayReplica/longDelayUnassignedReplica\n+ mockGatewayAllocator.setShardsToDelay(Arrays.asList(longDelayUnassignedReplica));\n+ // register listener on cluster state so we know when cluster state has been changed\n+ CountDownLatch latch = new CountDownLatch(1);\n+ clusterService.addLast(event -> latch.countDown());\n+ // instead of clusterService calling clusterChanged, we call it directly here\n+ routingService.clusterChanged(new ClusterChangedEvent(\"test\", newState, prevState));\n+ // cluster service should have updated state and called routingService with clusterChanged\n+ latch.await();\n+ // verify the registration has been set to the delay of longDelayReplica/longDelayUnassignedReplica\n+ assertThat(routingService.getRegisteredNextDelaySetting(), equalTo(10000L));\n+ } finally {\n+ terminate(testThreadPool);\n+ }\n+ }\n+\n public void testDelayedUnassignedDoesNotRerouteForNegativeDelays() throws Exception {\n AllocationService allocation = createAllocationService();\n MetaData metaData = MetaData.builder()\n@@ -197,4 +309,46 @@ protected void performReroute(String reason) {\n rerouted.set(true);\n }\n }\n+\n+ /**\n+ * Mocks behavior in ReplicaShardAllocator to remove delayed shards from list of unassigned shards so they don't get reassigned yet.\n+ * It does not implement the full logic but shards that are to be delayed need to be explicitly set using the method setShardsToDelay(...).\n+ */\n+ private static class DelayedShardsMockGatewayAllocator extends GatewayAllocator {\n+ volatile List<ShardRouting> delayedShards = Collections.emptyList();\n+\n+ public DelayedShardsMockGatewayAllocator() {\n+ super(Settings.EMPTY, null, null);\n+ }\n+\n+ @Override\n+ public void applyStartedShards(StartedRerouteAllocation allocation) {}\n+\n+ @Override\n+ public void 
applyFailedShards(FailedRerouteAllocation allocation) {}\n+\n+ /**\n+ * Explicitly set which shards should be delayed in the next allocateUnassigned calls\n+ */\n+ public void setShardsToDelay(List<ShardRouting> delayedShards) {\n+ this.delayedShards = delayedShards;\n+ }\n+\n+ @Override\n+ public boolean allocateUnassigned(RoutingAllocation allocation) {\n+ final RoutingNodes routingNodes = allocation.routingNodes();\n+ final RoutingNodes.UnassignedShards.UnassignedIterator unassignedIterator = routingNodes.unassigned().iterator();\n+ boolean changed = false;\n+ while (unassignedIterator.hasNext()) {\n+ ShardRouting shard = unassignedIterator.next();\n+ for (ShardRouting shardToDelay : delayedShards) {\n+ if (shard.isSameShard(shardToDelay)) {\n+ changed = true;\n+ unassignedIterator.removeAndIgnore();\n+ }\n+ }\n+ }\n+ return changed;\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/RoutingServiceTests.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@\n public class TestClusterService implements ClusterService {\n \n volatile ClusterState state;\n- private final Collection<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n+ private final List<ClusterStateListener> listeners = new CopyOnWriteArrayList<>();\n private final Queue<NotifyTimeout> onGoingTimeouts = ConcurrentCollections.newQueue();\n private final ThreadPool threadPool;\n private final ESLogger logger = Loggers.getLogger(getClass(), Settings.EMPTY);\n@@ -135,7 +135,7 @@ public OperationRouting operationRouting() {\n \n @Override\n public void addFirst(ClusterStateListener listener) {\n- throw new UnsupportedOperationException();\n+ listeners.add(0, listener);\n }\n \n @Override", "filename": "test-framework/src/main/java/org/elasticsearch/test/cluster/TestClusterService.java", "status": "modified" } ] }
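For readers following the test setup above, a minimal Java sketch of how the per-index delayed-allocation timeout is configured. The setting constant and the "10s" value are taken from the test diff; the update-settings call, the client, and the index name are illustrative assumptions rather than part of the PR.

```
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.common.settings.Settings;

public class DelayedAllocationSettingExample {
    // Sets the per-index delay that the RoutingService waits for before
    // reallocating shards of a node that has left the cluster.
    public static void setDelay(Client client) {
        client.admin().indices().prepareUpdateSettings("long_delay") // index name is hypothetical
            .setSettings(Settings.settingsBuilder()
                .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING, "10s")
                .build())
            .get();
    }
}
```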
{ "body": "Running elasticsearch 2.0.0 on ubuntu 14.04, marvel-agent is installed:\n\n```\n/usr/share/elasticsearch/bin/plugin list\nInstalled plugins in /usr/share/elasticsearch/plugins:\n - marvel-agent\n - license\n```\n\nHowever if I set marvel-agent in the list of mandatory plugins in elasticsearch.yml I get the following Exception:\n\n```\n[2015-10-29 16:53:52,138][INFO ][node ] [schinf-0000-prd1] version[2.0.0], pid[5239], build[de54438/2015-10-22T08:09:48Z]\n[2015-10-29 16:53:52,140][INFO ][node ] [schinf-0000-prd1] initializing ...\n[2015-10-29 16:53:52,404][ERROR][bootstrap ] Exception\nElasticsearchException[Missing mandatory plugins [marvel-agent]]\n at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:149)\n at org.elasticsearch.node.Node.<init>(Node.java:144)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n\nIf I don't set marvel-agent in plugin.mandatory, elasticsearch start correctly and list marvel-agent as installed.\n", "comments": [ { "body": "As a workaround, specify `plugin.mandatory: marvel`\n", "created_at": "2015-10-30T12:44:49Z" }, { "body": "The offending bug is located [here](https://github.com/elastic/elasticsearch/blob/v2.0.0/core/src/main/java/org/elasticsearch/plugins/PluginsService.java#L132) where it used the name defined in the plugin's class rather than the name defined in the plugin's properties file.\n\nThis is already fixed in 2.x, 2.1 and [master](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/plugins/PluginsService.java#L129) branch of elasticsearch. We don't need to rename anything in Marvel for now.\n\nI created #14479 to backport the fix in 2.0 too, in case 2.0.1 is released.\n", "created_at": "2015-11-03T12:15:21Z" }, { "body": "2.1.0 has been released. Closing\n", "created_at": "2016-01-29T20:22:03Z" } ], "number": 14357, "title": "Elasticsearch 2.0.0 don't start if marvel-agent is set in plugin.mandatory" }
{ "body": "Simple backport for 2.0 branch (already fixed in 2.1, 2.x and master)\n\ncloses #14357 \n", "number": 14479, "review_comments": [], "title": "Use plugin name from plugin's properties file rather than plugin's class" }
{ "commits": [ { "message": "Use plugin name from plugin's properties file rather than plugin's class\n\ncloses #14357" } ], "files": [ { "diff": "@@ -129,7 +129,7 @@ public PluginsService(Settings settings, Path pluginsDirectory) {\n for (Tuple<PluginInfo, Plugin> tuple : plugins) {\n PluginInfo info = tuple.v1();\n if (info.isJvm()) {\n- jvmPlugins.put(tuple.v2().name(), tuple.v2());\n+ jvmPlugins.put(info.getName(), tuple.v2());\n }\n if (info.isSite()) {\n sitePlugins.add(info.getName());", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginsService.java", "status": "modified" } ] }
{ "body": "To reproduce - create several snapshots, truncate one of the snapshot files and try getting the list of snapshots by running `curl -XGET localhost:9200/_snapshot/my_repo/_all`\n", "comments": [], "number": 13887, "title": "Single corrupted snapshot file shouldn't prevent listing all other snapshot in repository" }
{ "body": "Catch exception when reading corrupted snapshot.\n\nSingle corrupted snapshot file shouldn't prevent listing all other snapshot in repository.\ncloses #13887\n", "number": 14471, "review_comments": [ { "body": "Add Javadoc to the getter. I would also remove the `is***`prefix.\n\nExample Javadoc:\n\n```\n/**\n * @return Whether snapshots should be ignored when unavailable (corrupt or temporarily not fetchable)\n */\n```\n", "created_at": "2015-11-03T09:38:24Z" }, { "body": "Set to true to ignore unavailable snapshots.\n", "created_at": "2015-11-03T09:44:24Z" }, { "body": "see comment above\n", "created_at": "2015-11-03T09:44:56Z" }, { "body": "describe what the parameter is for\n", "created_at": "2015-11-03T09:45:16Z" }, { "body": "Might be better to use the default of the request (in this case this coincides, but explicit is better in case of refactoring):\n\ngetSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean(\"ignore_unavailable\", getSnapshotsRequest.ignoreUnavailable());\n", "created_at": "2015-11-03T09:48:11Z" }, { "body": "same comment above\n", "created_at": "2015-11-03T09:49:08Z" }, { "body": "please don't use String concatenation in logger calls, but positional parameters, for example `logger.warn(\"failed to get snapshot [{}]\", ex, snapshotId);`\n", "created_at": "2015-11-03T09:52:40Z" }, { "body": "Snapshot could not be read\n", "created_at": "2015-11-03T09:53:39Z" }, { "body": "I think this check does not add much (I would skip it)\n", "created_at": "2015-11-03T09:58:09Z" }, { "body": "This is in the wrong section of the documentation (The command above is GetRepositoriesRequest).\n", "created_at": "2015-11-03T10:01:58Z" }, { "body": "I would put something below\n\n```\nAll snapshots currently stored in the repository can be listed using the following command:\n\nGET /_snapshot/my_backup/_all\n```\n\nThe command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unvailable` can be used to return all snapshots that are currently available.\n", "created_at": "2015-11-03T10:08:07Z" }, { "body": "Same as in GetSnapshotRequest, replace this by \"Set to true to ignore unavailable snapshots\"\n", "created_at": "2015-11-04T09:19:25Z" }, { "body": "This new parameter should be documented in API for /_cat/snapshots (file cat.snapshots.json)\n", "created_at": "2015-11-04T09:22:18Z" }, { "body": "clean-up imports, these new imports seem to be unused.\n", "created_at": "2015-11-04T09:24:51Z" } ], "title": "Add ignore_unavailable parameter to skip unavailable snapshot" }
{ "commits": [ { "message": "Catch exception when reading corrupted snapshot.\n\nSingle corrupted snapshot file shouldn't prevent listing all other\nsnapshot in repository." } ], "files": [ { "diff": "@@ -41,6 +41,8 @@ public class GetSnapshotsRequest extends MasterNodeRequest<GetSnapshotsRequest>\n \n private String[] snapshots = Strings.EMPTY_ARRAY;\n \n+ private boolean ignoreUnavailable;\n+\n public GetSnapshotsRequest() {\n }\n \n@@ -112,17 +114,35 @@ public GetSnapshotsRequest snapshots(String[] snapshots) {\n return this;\n }\n \n+ /**\n+ * Set to true to ignore unavailable snapshots\n+ *\n+ * @return this request\n+ */\n+ public GetSnapshotsRequest ignoreUnavailable(boolean ignoreUnavailable) {\n+ this.ignoreUnavailable = ignoreUnavailable;\n+ return this;\n+ }\n+ /**\n+ * @return Whether snapshots should be ignored when unavailable (corrupt or temporarily not fetchable)\n+ */\n+ public boolean ignoreUnavailable() {\n+ return ignoreUnavailable;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n repository = in.readString();\n snapshots = in.readStringArray();\n+ ignoreUnavailable = in.readBoolean();\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeString(repository);\n out.writeStringArray(snapshots);\n+ out.writeBoolean(ignoreUnavailable);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java", "status": "modified" }, { "diff": "@@ -84,4 +84,16 @@ public GetSnapshotsRequestBuilder addSnapshots(String... snapshots) {\n request.snapshots(ArrayUtils.concat(request.snapshots(), snapshots));\n return this;\n }\n+\n+ /**\n+ * Makes the request ignore unavailable snapshots\n+ *\n+ * @param ignoreUnavailable true to ignore unavailable snapshots.\n+ * @return this builder\n+ */\n+ public GetSnapshotsRequestBuilder setIgnoreUnavailable(boolean ignoreUnavailable) {\n+ request.ignoreUnavailable(ignoreUnavailable);\n+ return this;\n+ }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java", "status": "modified" }, { "diff": "@@ -74,7 +74,7 @@ protected void masterOperation(final GetSnapshotsRequest request, ClusterState s\n try {\n List<SnapshotInfo> snapshotInfoBuilder = new ArrayList<>();\n if (isAllSnapshots(request.snapshots())) {\n- List<Snapshot> snapshots = snapshotsService.snapshots(request.repository());\n+ List<Snapshot> snapshots = snapshotsService.snapshots(request.repository(), request.ignoreUnavailable());\n for (Snapshot snapshot : snapshots) {\n snapshotInfoBuilder.add(new SnapshotInfo(snapshot));\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -47,7 +47,10 @@ public RestGetSnapshotsAction(Settings settings, RestController controller, Clie\n public void handleRequest(final RestRequest request, final RestChannel channel, final Client client) {\n String repository = request.param(\"repository\");\n String[] snapshots = request.paramAsStringArray(\"snapshot\", Strings.EMPTY_ARRAY);\n+\n GetSnapshotsRequest getSnapshotsRequest = getSnapshotsRequest(repository).snapshots(snapshots);\n+ getSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean(\"ignore_unavailable\", getSnapshotsRequest.ignoreUnavailable()));\n+\n getSnapshotsRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", 
getSnapshotsRequest.masterNodeTimeout()));\n client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener<GetSnapshotsResponse>(channel));\n }", "filename": "core/src/main/java/org/elasticsearch/rest/action/admin/cluster/snapshots/get/RestGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -54,9 +54,12 @@ public RestSnapshotAction(Settings settings, RestController controller, Client c\n \n @Override\n protected void doRequest(final RestRequest request, RestChannel channel, Client client) {\n- GetSnapshotsRequest getSnapshotsRequest = new GetSnapshotsRequest();\n- getSnapshotsRequest.repository(request.param(\"repository\"));\n- getSnapshotsRequest.snapshots(new String[] { GetSnapshotsRequest.ALL_SNAPSHOTS });\n+ GetSnapshotsRequest getSnapshotsRequest = new GetSnapshotsRequest()\n+ .repository(request.param(\"repository\"))\n+ .snapshots(new String[]{GetSnapshotsRequest.ALL_SNAPSHOTS});\n+\n+ getSnapshotsRequest.ignoreUnavailable(request.paramAsBoolean(\"ignore_unavailable\", getSnapshotsRequest.ignoreUnavailable()));\n+\n getSnapshotsRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", getSnapshotsRequest.masterNodeTimeout()));\n \n client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestResponseListener<GetSnapshotsResponse>(channel) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestSnapshotAction.java", "status": "modified" }, { "diff": "@@ -141,7 +141,7 @@ public Snapshot snapshot(SnapshotId snapshotId) {\n * @param repositoryName repository name\n * @return list of snapshots\n */\n- public List<Snapshot> snapshots(String repositoryName) {\n+ public List<Snapshot> snapshots(String repositoryName, boolean ignoreUnavailable) {\n Set<Snapshot> snapshotSet = new HashSet<>();\n List<SnapshotsInProgress.Entry> entries = currentSnapshots(repositoryName, null);\n for (SnapshotsInProgress.Entry entry : entries) {\n@@ -150,8 +150,17 @@ public List<Snapshot> snapshots(String repositoryName) {\n Repository repository = repositoriesService.repository(repositoryName);\n List<SnapshotId> snapshotIds = repository.snapshots();\n for (SnapshotId snapshotId : snapshotIds) {\n- snapshotSet.add(repository.readSnapshot(snapshotId));\n+ try {\n+ snapshotSet.add(repository.readSnapshot(snapshotId));\n+ } catch (Exception ex) {\n+ if (ignoreUnavailable) {\n+ logger.warn(\"failed to get snapshot [{}]\", ex, snapshotId);\n+ } else {\n+ throw new SnapshotException(snapshotId, \"Snapshot could not be read\", ex);\n+ }\n+ }\n }\n+\n ArrayList<Snapshot> snapshotList = new ArrayList<>(snapshotSet);\n CollectionUtil.timSort(snapshotList);\n return Collections.unmodifiableList(snapshotList);", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;\n import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotIndexShardStage;\n@@ -54,6 +55,7 @@\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import 
org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n@@ -77,6 +79,7 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n+import static org.elasticsearch.client.Requests.getSnapshotsRequest;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -2047,4 +2050,53 @@ public void testSnapshotName() throws Exception {\n assertThat(ex.getMessage(), containsString(\"Invalid snapshot name\"));\n }\n }\n+\n+ public void testListCorruptedSnapshot() throws Exception {\n+ Client client = client();\n+ Path repo = randomRepoPath();\n+ logger.info(\"--> creating repository at \" + repo.toAbsolutePath());\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.settingsBuilder()\n+ .put(\"location\", repo)\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test-idx-1\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-2\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-3\", \"doc\").setSource(\"foo\", \"bar\"));\n+\n+ logger.info(\"--> creating 2 snapshots\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> truncate snapshot file to make it unreadable\");\n+ Path snapshotPath = repo.resolve(\"snap-test-snap-2.dat\");\n+ try(SeekableByteChannel outChan = Files.newByteChannel(snapshotPath, StandardOpenOption.WRITE)) {\n+ outChan.truncate(randomInt(10));\n+ }\n+\n+ logger.info(\"--> get snapshots request should return both snapshots\");\n+ List<SnapshotInfo> snapshotInfos = client.admin().cluster()\n+ .prepareGetSnapshots(\"test-repo\")\n+ .setIgnoreUnavailable(true).get().getSnapshots();\n+\n+ assertThat(snapshotInfos.size(), equalTo(1));\n+ assertThat(snapshotInfos.get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(snapshotInfos.get(0).name(), equalTo(\"test-snap-1\"));\n+\n+ try {\n+ client.admin().cluster().prepareGetSnapshots(\"test-repo\").setIgnoreUnavailable(false).get().getSnapshots();\n+ } catch (SnapshotException ex) {\n+ assertThat(ex.snapshot().getRepository(), 
equalTo(\"test-repo\"));\n+ assertThat(ex.snapshot().getSnapshot(), equalTo(\"test-snap-2\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" }, { "diff": "@@ -259,6 +259,9 @@ GET /_snapshot/my_backup/_all\n -----------------------------------\n // AUTOSENSE\n \n+The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unvailable` can be used to\n+return all snapshots that are currently available.\n+\n A currently running snapshot can be retrieved using the following command:\n \n [source,sh]", "filename": "docs/reference/modules/snapshots.asciidoc", "status": "modified" }, { "diff": "@@ -12,6 +12,11 @@\n }\n },\n \"params\": {\n+ \"ignore_unavailable\": {\n+ \"type\": \"boolean\",\n+ \"description\": \"Set to true to ignore unavailable snapshots\",\n+ \"default\": false\n+ },\n \"master_timeout\": {\n \"type\" : \"time\",\n \"description\" : \"Explicit operation timeout for connection to master node\"", "filename": "rest-api-spec/src/main/resources/rest-api-spec/api/cat.snapshots.json", "status": "modified" } ] }
{ "body": "The setup:\n\n```\ncurl -XPUT http://127.0.0.1:9200/my_index -d '{\n \"mappings\": {\n \"my_type\": {\n \"properties\": {\n \"my_loc\": { \"type\": \"geo_point\" },\n \"my_num\": { \"type\": \"integer\" }\n }\n }\n }\n}'\n\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/1 -d '{ \"my_loc\": [0,0] }'\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/2 -d '{ \"my_loc\": [1.25,1.25] }'\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/3 -d '{ \"my_loc\": [1.5,1.5] }'\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/4 -d '{ \"my_loc\": [1.75,1.75] }'\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/5 -d '{ \"my_loc\": [2,2] }'\ncurl -XPUT http://127.0.0.1:9200/my_index/my_type/6 -d '{ \"my_num\": 10 }'\n```\n\nES is perfectly happy to convert over pure-numeric query data contained in quotes (e.g. \"7\"):\n\n```\ncurl -XGET http://127.0.0.1:9200/my_index/my_type/_search?pretty -d '{\n \"filter\": {\n \"range\": {\n \"my_num\": {\n \"gte\": \"7\"\n }\n }\n }\n}'\n```\n\nReturning the expected doc (my_index/my_type/6). This also \"just works\" with numeric aggs that I've tested (e.g. range aggs).\n\nHowever, this behavior is inconsistent when specifying the \"precision\" variable in a geohash_grid agg:\n\n```\ncurl -XGET http://127.0.0.1:9200/my_index/my_type/_search?pretty -d '{\n \"aggs\": {\n \"my_agg\": {\n \"geohash_grid\": {\n \"field\": \"my_loc\",\n \"precision\": \"3\"\n }\n }\n },\n \"size\":0\n}'\n```\n\nThis ignores the precision of 3 and reverts to the default of 5 (unless \"3\" is removed from quotes)\n", "comments": [], "number": 13132, "title": "Inconsistent handling of string numerics in geo aggregation \"precision\" parameter" }
{ "body": "Originally, only numeric values were allowed for parameters of the\n'geohash_grid' aggregation in contrast to other places in the REST \nAPI. \n\nWith this commit we also allow that parameters are enclosed in quotes (i.e.\nas JSON strings). Additionally, with this commit the valid range for\n'precision' is enforced for the Java API and the REST API (the latter was\npreviously missing the check).\n\nCloses #13132\n", "number": 14440, "review_comments": [ { "body": "I tend to find the '==false' way easier to read\n", "created_at": "2015-11-03T08:35:34Z" }, { "body": "since you are refactoring this part of the parser, let's use ParseField, which will handle undercase/camelcase for us?\n", "created_at": "2015-11-03T08:36:19Z" }, { "body": "Why not reuse TestSearchContext directly?\n", "created_at": "2015-11-03T08:37:30Z" }, { "body": "it doesn't look like we access this file from outside the geogrid package, can this class be made pkg-private?\n", "created_at": "2015-11-03T08:39:36Z" }, { "body": "I think part of the bug is also that we don't have an `else` statement here where we throw an exception. Which means that if you provide a parameter with eg. a boolean value (for which we don't check above), it will be silently ignored. Can you add such an `else` statement (you can look eg. at MissingParser for an example).\n", "created_at": "2015-11-03T08:43:00Z" }, { "body": "I am used to the !x notation but I am ok with reverting that.\n", "created_at": "2015-11-03T08:44:14Z" }, { "body": "That was an oversight. You're right. I've corrected it.\n", "created_at": "2015-11-03T08:46:49Z" }, { "body": "I've added the check you've suggested.\n", "created_at": "2015-11-03T09:31:08Z" }, { "body": "You're right. Nice catch.\n", "created_at": "2015-11-03T09:31:18Z" }, { "body": "Didn't know about it. I've used it now.\n", "created_at": "2015-11-03T09:31:53Z" }, { "body": "Should we just use `parser.isValue()` here instead? I think we do this in other places that we are coercing the value into a number\n", "created_at": "2015-11-04T07:28:43Z" }, { "body": "We cannot use `#isValue()` here because it would also return `true` on other types such as booleans which we want to exclude.\n", "created_at": "2015-11-04T07:37:08Z" } ], "title": "Geo: Allow numeric parameters enclosed in quotes for 'geohash_grid' aggregation" }
{ "commits": [ { "message": "Geo: Allow numeric parameters enclosed in quotes for 'geohash_grid' aggregation\n\nOriginally, only numeric values were allowed for parameters of the\n'geohash_grid' aggregation in contrast to other places in the REST\nAPI.\n\nWith this commit we also allow that parameters are enclosed in quotes (i.e.\nas JSON strings). Additionally, with this commit the valid range for\n'precision' is enforced for the Java API and the REST API (the latter was\npreviously missing the check).\n\nCloses #13132" } ], "files": [ { "diff": "@@ -30,8 +30,8 @@ public class GeoHashGridBuilder extends AggregationBuilder<GeoHashGridBuilder> {\n \n \n private String field;\n- private int precision = GeoHashGridParser.DEFAULT_PRECISION;\n- private int requiredSize = GeoHashGridParser.DEFAULT_MAX_NUM_CELLS;\n+ private int precision = GeoHashGridParams.DEFAULT_PRECISION;\n+ private int requiredSize = GeoHashGridParams.DEFAULT_MAX_NUM_CELLS;\n private int shardSize = 0;\n \n /**\n@@ -54,11 +54,7 @@ public GeoHashGridBuilder field(String field) {\n * precision, the more fine-grained this aggregation will be.\n */\n public GeoHashGridBuilder precision(int precision) {\n- if ((precision < 1) || (precision > 12)) {\n- throw new IllegalArgumentException(\"Invalid geohash aggregation precision of \" + precision\n- + \"must be between 1 and 12\");\n- }\n- this.precision = precision;\n+ this.precision = GeoHashGridParams.checkPrecision(precision);\n return this;\n }\n \n@@ -85,14 +81,14 @@ protected XContentBuilder internalXContent(XContentBuilder builder, Params param\n if (field != null) {\n builder.field(\"field\", field);\n }\n- if (precision != GeoHashGridParser.DEFAULT_PRECISION) {\n- builder.field(\"precision\", precision);\n+ if (precision != GeoHashGridParams.DEFAULT_PRECISION) {\n+ builder.field(GeoHashGridParams.FIELD_PRECISION.getPreferredName(), precision);\n }\n- if (requiredSize != GeoHashGridParser.DEFAULT_MAX_NUM_CELLS) {\n- builder.field(\"size\", requiredSize);\n+ if (requiredSize != GeoHashGridParams.DEFAULT_MAX_NUM_CELLS) {\n+ builder.field(GeoHashGridParams.FIELD_SIZE.getPreferredName(), requiredSize);\n }\n if (shardSize != 0) {\n- builder.field(\"shard_size\", shardSize);\n+ builder.field(GeoHashGridParams.FIELD_SHARD_SIZE.getPreferredName(), shardSize);\n }\n \n return builder.endObject();", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridBuilder.java", "status": "modified" }, { "diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.aggregations.bucket.geogrid;\n+\n+import org.elasticsearch.common.ParseField;\n+\n+/**\n+ * Encapsulates relevant parameter defaults and validations for the geo hash grid aggregation.\n+ */\n+final class GeoHashGridParams {\n+ /* default values */\n+ public static final int DEFAULT_PRECISION = 5;\n+ public static final int DEFAULT_MAX_NUM_CELLS = 10000;\n+\n+ /* recognized field names in JSON */\n+ public static final ParseField FIELD_PRECISION = new ParseField(\"precision\");\n+ public static final ParseField FIELD_SIZE = new ParseField(\"size\");\n+ public static final ParseField FIELD_SHARD_SIZE = new ParseField(\"shard_size\");\n+\n+\n+ public static int checkPrecision(int precision) {\n+ if ((precision < 1) || (precision > 12)) {\n+ throw new IllegalArgumentException(\"Invalid geohash aggregation precision of \" + precision\n+ + \". Must be between 1 and 12.\");\n+ }\n+ return precision;\n+ }\n+\n+ private GeoHashGridParams() {\n+ throw new AssertionError(\"No instances intended\");\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParams.java", "status": "added" }, { "diff": "@@ -21,13 +21,15 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.SortedNumericDocValues;\n import org.apache.lucene.util.GeoHashUtils;\n+import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.index.fielddata.SortingNumericDocValues;\n import org.elasticsearch.index.query.GeoBoundingBoxQueryBuilder;\n+import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n@@ -58,16 +60,13 @@ public String type() {\n return InternalGeoHashGrid.TYPE.name();\n }\n \n- public static final int DEFAULT_PRECISION = 5;\n- public static final int DEFAULT_MAX_NUM_CELLS = 10000;\n-\n @Override\n public AggregatorFactory parse(String aggregationName, XContentParser parser, SearchContext context) throws IOException {\n \n ValuesSourceParser vsParser = ValuesSourceParser.geoPoint(aggregationName, InternalGeoHashGrid.TYPE, context).build();\n \n- int precision = DEFAULT_PRECISION;\n- int requiredSize = DEFAULT_MAX_NUM_CELLS;\n+ int precision = GeoHashGridParams.DEFAULT_PRECISION;\n+ int requiredSize = GeoHashGridParams.DEFAULT_MAX_NUM_CELLS;\n int shardSize = -1;\n \n XContentParser.Token token;\n@@ -77,14 +76,18 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n currentFieldName = parser.currentName();\n } else if (vsParser.token(currentFieldName, token, parser)) {\n continue;\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- if (\"precision\".equals(currentFieldName)) {\n- precision = parser.intValue();\n- } else if (\"size\".equals(currentFieldName)) {\n+ } else if (token == XContentParser.Token.VALUE_NUMBER ||\n+ token == XContentParser.Token.VALUE_STRING) { //Be lenient and also allow numbers enclosed in quotes\n+ if 
(context.parseFieldMatcher().match(currentFieldName, GeoHashGridParams.FIELD_PRECISION)) {\n+ precision = GeoHashGridParams.checkPrecision(parser.intValue());\n+ } else if (context.parseFieldMatcher().match(currentFieldName, GeoHashGridParams.FIELD_SIZE)) {\n requiredSize = parser.intValue();\n- } else if (\"shard_size\".equals(currentFieldName) || \"shardSize\".equals(currentFieldName)) {\n+ } else if (context.parseFieldMatcher().match(currentFieldName, GeoHashGridParams.FIELD_SHARD_SIZE)) {\n shardSize = parser.intValue();\n }\n+ } else if (token != XContentParser.Token.START_OBJECT) {\n+ throw new SearchParseException(context, \"Unexpected token \" + token + \" in [\" + aggregationName + \"].\",\n+ parser.getTokenLocation());\n }\n }\n \n@@ -112,9 +115,9 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n \n static class GeoGridFactory extends ValuesSourceAggregatorFactory<ValuesSource.GeoPoint> {\n \n- private int precision;\n- private int requiredSize;\n- private int shardSize;\n+ private final int precision;\n+ private final int requiredSize;\n+ private final int shardSize;\n \n public GeoGridFactory(String name, ValuesSourceConfig<ValuesSource.GeoPoint> config, int precision, int requiredSize, int shardSize) {\n super(name, InternalGeoHashGrid.TYPE.name(), config);", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParser.java", "status": "modified" }, { "diff": "@@ -0,0 +1,84 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.aggregations.bucket.geogrid;\n+\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.search.SearchParseException;\n+import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.TestSearchContext;\n+\n+public class GeoHashGridParserTests extends ESTestCase {\n+ public void testParseValidFromInts() throws Exception {\n+ SearchContext searchContext = new TestSearchContext();\n+ int precision = randomIntBetween(1, 12);\n+ XContentParser stParser = JsonXContent.jsonXContent.createParser(\n+ \"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":\" + precision + \", \\\"size\\\": 500, \\\"shard_size\\\": 550}\");\n+ GeoHashGridParser parser = new GeoHashGridParser();\n+ // can create a factory\n+ assertNotNull(parser.parse(\"geohash_grid\", stParser, searchContext));\n+ }\n+\n+ public void testParseValidFromStrings() throws Exception {\n+ SearchContext searchContext = new TestSearchContext();\n+ int precision = randomIntBetween(1, 12);\n+ XContentParser stParser = JsonXContent.jsonXContent.createParser(\n+ \"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":\\\"\" + precision + \"\\\", \\\"size\\\": \\\"500\\\", \\\"shard_size\\\": \\\"550\\\"}\");\n+ GeoHashGridParser parser = new GeoHashGridParser();\n+ // can create a factory\n+ assertNotNull(parser.parse(\"geohash_grid\", stParser, searchContext));\n+ }\n+\n+ public void testParseErrorOnNonIntPrecision() throws Exception {\n+ SearchContext searchContext = new TestSearchContext();\n+ XContentParser stParser = JsonXContent.jsonXContent.createParser(\"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":\\\"2.0\\\"}\");\n+ GeoHashGridParser parser = new GeoHashGridParser();\n+ try {\n+ parser.parse(\"geohash_grid\", stParser, searchContext);\n+ fail();\n+ } catch (NumberFormatException ex) {\n+ assertEquals(\"For input string: \\\"2.0\\\"\", ex.getMessage());\n+ }\n+ }\n+\n+ public void testParseErrorOnBooleanPrecision() throws Exception {\n+ SearchContext searchContext = new TestSearchContext();\n+ XContentParser stParser = JsonXContent.jsonXContent.createParser(\"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":false}\");\n+ GeoHashGridParser parser = new GeoHashGridParser();\n+ try {\n+ parser.parse(\"geohash_grid\", stParser, searchContext);\n+ fail();\n+ } catch (SearchParseException ex) {\n+ assertEquals(\"Unexpected token VALUE_BOOLEAN in [geohash_grid].\", ex.getMessage());\n+ }\n+ }\n+\n+ public void testParseErrorOnPrecisionOutOfRange() throws Exception {\n+ SearchContext searchContext = new TestSearchContext();\n+ XContentParser stParser = JsonXContent.jsonXContent.createParser(\"{\\\"field\\\":\\\"my_loc\\\", \\\"precision\\\":\\\"13\\\"}\");\n+ GeoHashGridParser parser = new GeoHashGridParser();\n+ try {\n+ parser.parse(\"geohash_grid\", stParser, searchContext);\n+ fail();\n+ } catch (IllegalArgumentException ex) {\n+ assertEquals(\"Invalid geohash aggregation precision of 13. Must be between 1 and 12.\", ex.getMessage());\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParserTests.java", "status": "added" } ] }
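The pr_details above centralize the precision bounds check in `GeoHashGridParams.checkPrecision` and make the REST parser accept numbers given as quoted JSON strings. The following is a standalone sketch of that validation pattern using only the JDK; the method name mirrors the diff, but it is an illustration rather than the Elasticsearch class itself.

```java
// Standalone sketch of the precision range check added in GeoHashGridParams.
// Mirrors the helper shown in the diff above; not the actual Elasticsearch code.
public class GeoHashPrecisionCheckSketch {

    // Geohash cells support precisions 1..12, so anything outside that range is rejected early.
    static int checkPrecision(int precision) {
        if ((precision < 1) || (precision > 12)) {
            throw new IllegalArgumentException("Invalid geohash aggregation precision of " + precision
                    + ". Must be between 1 and 12.");
        }
        return precision;
    }

    public static void main(String[] args) {
        // A quoted value such as "5" in the request body can be parsed to an int before validation,
        // which is how the parser stays lenient about numbers enclosed in quotes.
        System.out.println(checkPrecision(Integer.parseInt("5")));   // prints 5
        try {
            checkPrecision(13);                                      // out of range: 1..12
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```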
{ "body": "This fix is related to https://github.com/elastic/elasticsearch/issues/7295\n\nCurrently a ClusterState update is issued for every index when deleting multiple indices, this causes timeout errors in most cases when deleting a large amount of indices. We changed the behavior so only one clusterState update (maximum two if some indices are locked ) is issued when deleting multiple indices. This makes large delete index operations much more faster and more fail-safe.\n", "comments": [ { "body": "@s1monw could you review this please?\n", "created_at": "2015-05-25T16:15:51Z" }, { "body": "hey, this has not been forgotten... I wonder if we do need to do some higher level refactorings to this area of the code. I will look into it next week and come back to this.\n", "created_at": "2015-06-26T09:26:15Z" }, { "body": "@bogensberger I finally found the time to look closer at this. It always bugged me that we had a list of semaphores to acquire and maintain across the lifetime of the delete. This is super error prone and was the main reason why we didn't move forward here sooner. I spend the day looking into why it is needed and what the reasons were why it was added (in 0.17.0) ;) ie. 200 years ago... anyway I am sure we fixed all the problems that lead to the addition of these semaphores in other PRs in 1.x already so we can remove the locking and just go ahead without the locks. I opened https://github.com/elastic/elasticsearch/pull/14159 to get rid of it which I will port for 2.2 once it's in would you mind bringing this up to date? \n", "created_at": "2015-10-16T18:42:37Z" }, { "body": "@s1monw yes, i will do.\n", "created_at": "2015-10-19T09:21:20Z" }, { "body": "@bogensberger I pushed it to master... please go ahead!\n", "created_at": "2015-10-19T11:55:38Z" }, { "body": "@s1monw thanks, i've updated the PR\n", "created_at": "2015-10-20T15:18:48Z" }, { "body": "it looks good to me @javanna can you please take a look at this as well\n", "created_at": "2015-10-23T10:57:51Z" }, { "body": "I had a quick look at this and left a few minor comments, looks very good though. I might be getting confused, but I think this one makes #11189 obsolete, and it's a very good change, long overdue. @bogensberger do you have time to address those minor comments so we can pull this in?\n", "created_at": "2015-10-23T14:13:59Z" }, { "body": "@bogensberger if you don't have time to apply the changes I can do it... but I'd love if you could do it :)\n", "created_at": "2015-10-27T10:44:48Z" }, { "body": "@s1monw i will do it! \n", "created_at": "2015-10-27T10:46:36Z" }, { "body": "@bogensberger awesome thanks!\n", "created_at": "2015-10-27T10:49:37Z" }, { "body": "@javanna @s1monw i'v done the changes and also squashed them. \n", "created_at": "2015-10-27T13:08:32Z" }, { "body": "looks great thanks @bogensberger !\n", "created_at": "2015-10-27T13:21:17Z" }, { "body": "merged into master - I will port to 2.x after it's seen some CI cycles thanks @bogensberger \n", "created_at": "2015-10-27T13:50:06Z" } ], "number": 11258, "title": "Bulk cluster state updates on index deletion" }
{ "body": "This commit fixes a regression introduced with #12058 as we are not deduplicating concrete indices after indices resolution anymore. This causes failures with the delete index api when providing the same index name multiple times in the request, or aliases/wildcard expressions that end up pointing to the same concrete index. The bug was revealed after merging #11258 as we delete indices in batch rather than one by one. The master node will expect too many acknowledgements based on the number of indices that it's trying to delete, hence the request will never be acknowledged by all nodes.\n", "number": 14316, "review_comments": [ { "body": "ohoh :) can we make it final?\n", "created_at": "2015-10-27T17:19:57Z" } ], "title": "Deduplicate concrete indices after indices resolution" }
{ "commits": [ { "message": "Deduplicate concrete indices after indices resolution\n\nThis commit fixes a regression introduced with #12058. This causes failures with the delete index api when providing the same index name multiple times in the request, or aliases/wildcard expressions that end up pointing to the same concrete index. The bug was revealed after merging #11258 as we delete indices in batch rather than one by one. The master node will expect too many acknowledgements based on the number of indices that it's trying to delete, hence the request will never be acknowledged by all nodes.\n\nCloses #14316" } ], "files": [ { "diff": "@@ -76,7 +76,7 @@ public String[] concreteIndices(ClusterState state, IndicesRequest request) {\n }\n \n /**\n- * Translates the provided index expression into actual concrete indices.\n+ * Translates the provided index expression into actual concrete indices, properly deduplicated.\n *\n * @param state the cluster state containing all the data to resolve to expressions to concrete indices\n * @param options defines how the aliases or indices need to be resolved to concrete indices\n@@ -94,7 +94,7 @@ public String[] concreteIndices(ClusterState state, IndicesOptions options, Stri\n }\n \n /**\n- * Translates the provided index expression into actual concrete indices.\n+ * Translates the provided index expression into actual concrete indices, properly deduplicated.\n *\n * @param state the cluster state containing all the data to resolve to expressions to concrete indices\n * @param options defines how the aliases or indices need to be resolved to concrete indices\n@@ -141,7 +141,7 @@ String[] concreteIndices(Context context, String... indexExpressions) {\n }\n }\n \n- List<String> concreteIndices = new ArrayList<>(expressions.size());\n+ final Set<String> concreteIndices = new HashSet<>(expressions.size());\n for (String expression : expressions) {\n AliasOrIndex aliasOrIndex = metaData.getAliasAndIndexLookup().get(expression);\n if (aliasOrIndex == null) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java", "status": "modified" }, { "diff": "@@ -847,6 +847,19 @@ public void testIndexOptionsFailClosedIndicesAndAliases() {\n assertThat(results, arrayContainingInAnyOrder(\"foo1-closed\", \"foo2-closed\", \"foo3\"));\n }\n \n+ public void testDedupConcreteIndices() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"index1\").putAlias(AliasMetaData.builder(\"alias1\")));\n+ ClusterState state = ClusterState.builder(new ClusterName(\"_name\")).metaData(mdBuilder).build();\n+ IndicesOptions[] indicesOptions = new IndicesOptions[]{ IndicesOptions.strictExpandOpen(), IndicesOptions.strictExpand(),\n+ IndicesOptions.lenientExpandOpen(), IndicesOptions.strictExpandOpenAndForbidClosed()};\n+ for (IndicesOptions options : indicesOptions) {\n+ IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options);\n+ String[] results = indexNameExpressionResolver.concreteIndices(context, \"index1\", \"index1\", \"alias1\");\n+ assertThat(results, equalTo(new String[]{\"index1\"}));\n+ }\n+ }\n+\n private MetaData metaDataBuilder(String... indices) {\n MetaData.Builder mdBuilder = MetaData.builder();\n for (String concreteIndex : indices) {", "filename": "core/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java", "status": "modified" } ] }
{ "body": "The delete-by-query plugin wraps all queries in a `query` element, so the parsed queries are wrapperd in a QueryWrapperFilter in the end. This does not change the matching documents but prevents me from removing the `query` query which is deprecated (which takes a query and turns it into a filter).\n", "comments": [], "number": 13326, "title": "The delete-by-query plugin wraps all queries in a `query` element" }
{ "body": "We have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.\n\nWith the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the former. It ends up working properly given that we have a registered (deprecated) query called \"query\", which used to allow to wrap a filter into a query, but this has the following downsides:\n1) prevents us from removing the deprecated \"query\" query\n2) we end up supporting a top level query that is not wrapped within a query element (pre 1.0 syntax iirc that shouldn't be supported anymore)\n\nThis commit finally removes the \"query\" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format, those have been fixed too.\n\nCloses #13326\n", "number": 14304, "review_comments": [ { "body": "s/``bool`/`bool`/\n", "created_at": "2015-10-27T12:33:21Z" }, { "body": "s/deprecated/merged/ ?\n", "created_at": "2015-10-27T12:33:46Z" } ], "title": "Remove \"query\" query and fix related parsing bugs" }
{ "commits": [ { "message": "Remove \"query\" query and fix related parsing bugs\n\nWe have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.\n\nWith the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the former. It ends up working properly given that we have a registered (deprecated) query called \"query\", which used to allow to wrap a filter into a query, but this has the following downsides:\n1) prevents us from removing the deprecated \"query\" query\n2) we end up supporting a top level query that is not wrapped within a query element (pre 1.0 syntax iirc that shouldn't be supported anymore)\n\nThis commit finally removes the \"query\" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format, those have been fixed too.\n\nCloses #13326\nCloses #14304" } ], "files": [ { "diff": "@@ -179,7 +179,7 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n SearchContext.setCurrent(searchContext);\n try {\n if (request.source() != null && request.source().length() > 0) {\n- searchContext.parsedQuery(queryParserService.parseQuery(request.source()));\n+ searchContext.parsedQuery(queryParserService.parseTopLevelQuery(request.source()));\n }\n searchContext.preProcess();\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java", "status": "modified" }, { "diff": "@@ -121,7 +121,7 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId\n SearchContext.setCurrent(context);\n \n try {\n- context.parsedQuery(indexService.queryParserService().parseQuery(request.source()));\n+ context.parsedQuery(indexService.queryParserService().parseTopLevelQuery(request.source()));\n context.preProcess();\n int topLevelDocId = result.docIdAndVersion().docId + result.docIdAndVersion().context.docBase;\n Explanation explanation = context.searcher().explain(context.query(), topLevelDocId);", "filename": "core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java", "status": "modified" }, { "diff": "@@ -34,7 +34,6 @@\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.AbstractIndexComponent;\n import org.elasticsearch.index.IndexSettings;\n@@ -213,29 +212,20 @@ public Version getIndexCreatedVersion() {\n /**\n * Selectively parses a query from a top level query or query_binary json field from the specified source.\n */\n- public ParsedQuery parseQuery(BytesReference source) {\n+ public ParsedQuery parseTopLevelQuery(BytesReference source) {\n XContentParser parser = null;\n try {\n- parser = XContentHelper.createParser(source);\n- ParsedQuery parsedQuery = null;\n- for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- String fieldName = parser.currentName();\n- if (\"query\".equals(fieldName)) {\n- parsedQuery = parse(parser);\n- } else if (\"query_binary\".equals(fieldName) || 
\"queryBinary\".equals(fieldName)) {\n- byte[] querySource = parser.binaryValue();\n- XContentParser qSourceParser = XContentFactory.xContent(querySource).createParser(querySource);\n- parsedQuery = parse(qSourceParser);\n- } else {\n- throw new ParsingException(parser.getTokenLocation(), \"request does not support [\" + fieldName + \"]\");\n- }\n- }\n- }\n- if (parsedQuery == null) {\n- throw new ParsingException(parser.getTokenLocation(), \"Required query is missing\");\n+ parser = XContentFactory.xContent(source).createParser(source);\n+ QueryShardContext queryShardContext = cache.get();\n+ queryShardContext.reset(parser);\n+ queryShardContext.parseFieldMatcher(parseFieldMatcher);\n+ try {\n+ QueryBuilder<?> queryBuilder = queryShardContext.parseContext().parseTopLevelQueryBuilder();\n+ Query query = toQuery(queryBuilder, queryShardContext);\n+ return new ParsedQuery(query, queryShardContext.copyNamedQueries());\n+ } finally {\n+ queryShardContext.reset(null);\n }\n- return parsedQuery;\n } catch (ParsingException | QueryShardException e) {\n throw e;\n } catch (Throwable e) {", "filename": "core/src/main/java/org/elasticsearch/index/query/IndexQueryParserService.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.ParsingException;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n \n@@ -65,9 +66,47 @@ public boolean isDeprecatedSetting(String setting) {\n }\n \n /**\n- * @return a new QueryBuilder based on the current state of the parser\n+ * Parses a top level query including the query element that wraps it\n */\n- public QueryBuilder parseInnerQueryBuilder() throws IOException {\n+ public QueryBuilder<?> parseTopLevelQueryBuilder() {\n+ try {\n+ QueryBuilder<?> queryBuilder = null;\n+ for (XContentParser.Token token = parser.nextToken(); token != XContentParser.Token.END_OBJECT; token = parser.nextToken()) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ String fieldName = parser.currentName();\n+ if (\"query\".equals(fieldName)) {\n+ queryBuilder = parseInnerQueryBuilder();\n+ } else if (\"query_binary\".equals(fieldName) || \"queryBinary\".equals(fieldName)) {\n+ byte[] querySource = parser.binaryValue();\n+ XContentParser qSourceParser = XContentFactory.xContent(querySource).createParser(querySource);\n+ QueryParseContext queryParseContext = new QueryParseContext(indicesQueriesRegistry);\n+ queryParseContext.reset(qSourceParser);\n+ try {\n+ queryParseContext.parseFieldMatcher(parseFieldMatcher);\n+ queryBuilder = queryParseContext.parseInnerQueryBuilder();\n+ } finally {\n+ queryParseContext.reset(null);\n+ }\n+ } else {\n+ throw new ParsingException(parser.getTokenLocation(), \"request does not support [\" + parser.currentName() + \"]\");\n+ }\n+ }\n+ }\n+ if (queryBuilder == null) {\n+ throw new ParsingException(parser.getTokenLocation(), \"Required query is missing\");\n+ }\n+ return queryBuilder;\n+ } catch (ParsingException e) {\n+ throw e;\n+ } catch (Throwable e) {\n+ throw new ParsingException(parser == null ? 
null : parser.getTokenLocation(), \"Failed to parse\", e);\n+ }\n+ }\n+\n+ /**\n+ * Parses a query excluding the query element that wraps it\n+ */\n+ public QueryBuilder<?> parseInnerQueryBuilder() throws IOException {\n // move to START object\n XContentParser.Token token;\n if (parser.currentToken() != XContentParser.Token.START_OBJECT) {", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryParseContext.java", "status": "modified" }, { "diff": "@@ -24,11 +24,9 @@\n import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n import org.elasticsearch.common.geo.ShapesAvailability;\n import org.elasticsearch.common.inject.AbstractModule;\n-import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.ExtensionPoint;\n import org.elasticsearch.index.query.*;\n import org.elasticsearch.index.query.functionscore.FunctionScoreQueryParser;\n-import org.elasticsearch.index.query.MoreLikeThisQueryParser;\n import org.elasticsearch.index.termvectors.TermVectorsService;\n import org.elasticsearch.indices.analysis.HunspellService;\n import org.elasticsearch.indices.analysis.IndicesAnalysisService;\n@@ -105,7 +103,6 @@ private void registerBuiltinQueryParsers() {\n registerQueryParser(GeoBoundingBoxQueryParser.class);\n registerQueryParser(GeohashCellQuery.Parser.class);\n registerQueryParser(GeoPolygonQueryParser.class);\n- registerQueryParser(QueryFilterParser.class);\n registerQueryParser(ExistsQueryParser.class);\n registerQueryParser(MissingQueryParser.class);\n registerQueryParser(MatchNoneQueryParser.class);", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesModule.java", "status": "modified" }, { "diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n@@ -71,9 +70,7 @@ public void doRequest(final RestRequest request, final RestChannel channel, fina\n SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(0);\n countRequest.source(searchSourceBuilder);\n if (source != null) {\n- QueryParseContext context = new QueryParseContext(indicesQueriesRegistry);\n- context.parseFieldMatcher(parseFieldMatcher);\n- searchSourceBuilder.query(RestActions.getQueryContent(new BytesArray(source), context));\n+ searchSourceBuilder.query(RestActions.getQueryContent(new BytesArray(source), indicesQueriesRegistry, parseFieldMatcher));\n } else {\n QueryBuilder<?> queryBuilder = RestActions.urlParamsToQueryBuilder(request);\n if (queryBuilder != null) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.rest.*;\n import org.elasticsearch.rest.action.support.RestActions;\n@@ -68,9 +67,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n countRequest.source(searchSourceBuilder);\n if (RestActions.hasBodyContent(request)) {\n BytesReference restContent = 
RestActions.getRestContent(request);\n- QueryParseContext context = new QueryParseContext(indicesQueriesRegistry);\n- context.parseFieldMatcher(parseFieldMatcher);\n- searchSourceBuilder.query(RestActions.getQueryContent(restContent, context));\n+ searchSourceBuilder.query(RestActions.getQueryContent(restContent, indicesQueriesRegistry, parseFieldMatcher));\n } else {\n QueryBuilder<?> queryBuilder = RestActions.urlParamsToQueryBuilder(request);\n if (queryBuilder != null) {", "filename": "core/src/main/java/org/elasticsearch/rest/action/count/RestCountAction.java", "status": "modified" }, { "diff": "@@ -27,17 +27,8 @@\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.lucene.uid.Versions;\n-import org.elasticsearch.common.xcontent.ToXContent;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentBuilderString;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.query.Operator;\n-import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.index.query.QueryBuilders;\n-import org.elasticsearch.index.query.QueryParseContext;\n-import org.elasticsearch.index.query.QueryStringQueryBuilder;\n+import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.index.query.*;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n@@ -142,14 +133,12 @@ public static BytesReference getRestContent(RestRequest request) {\n return content;\n }\n \n- public static QueryBuilder<?> getQueryContent(BytesReference source, QueryParseContext context) {\n+ public static QueryBuilder<?> getQueryContent(BytesReference source, IndicesQueriesRegistry indicesQueriesRegistry, ParseFieldMatcher parseFieldMatcher) {\n+ QueryParseContext context = new QueryParseContext(indicesQueriesRegistry);\n try (XContentParser requestParser = XContentFactory.xContent(source).createParser(source)) {\n- // Save the parseFieldMatcher because its about to be trashed in the\n- // QueryParseContext\n- ParseFieldMatcher parseFieldMatcher = context.parseFieldMatcher();\n context.reset(requestParser);\n context.parseFieldMatcher(parseFieldMatcher);\n- return context.parseInnerQueryBuilder();\n+ return context.parseTopLevelQueryBuilder();\n } catch (IOException e) {\n throw new ElasticsearchException(\"failed to parse source\", e);\n } finally {", "filename": "core/src/main/java/org/elasticsearch/rest/action/support/RestActions.java", "status": "modified" }, { "diff": "@@ -724,8 +724,7 @@ public SearchSourceBuilder fromXContent(XContentParser parser, QueryParseContext\n } else if (context.parseFieldMatcher().match(currentFieldName, TRACK_SCORES_FIELD)) {\n builder.trackScores = parser.booleanValue();\n } else if (context.parseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- FetchSourceContext fetchSourceContext = FetchSourceContext.parse(parser, context);\n- builder.fetchSourceContext = fetchSourceContext;\n+ builder.fetchSourceContext = FetchSourceContext.parse(parser, context);\n } else if (context.parseFieldMatcher().match(currentFieldName, FIELDS_FIELD)) {\n List<String> fieldNames = new ArrayList<>();\n fieldNames.add(parser.text());\n@@ -742,8 +741,7 @@ public SearchSourceBuilder 
fromXContent(XContentParser parser, QueryParseContext\n } else if (context.parseFieldMatcher().match(currentFieldName, POST_FILTER_FIELD)) {\n builder.postQueryBuilder = context.parseInnerQueryBuilder();\n } else if (context.parseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- FetchSourceContext fetchSourceContext = FetchSourceContext.parse(parser, context);\n- builder.fetchSourceContext = fetchSourceContext;\n+ builder.fetchSourceContext = FetchSourceContext.parse(parser, context);\n } else if (context.parseFieldMatcher().match(currentFieldName, SCRIPT_FIELDS_FIELD)) {\n List<ScriptField> scriptFields = new ArrayList<>();\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n@@ -886,8 +884,7 @@ public SearchSourceBuilder fromXContent(XContentParser parser, QueryParseContext\n }\n builder.stats = stats;\n } else if (context.parseFieldMatcher().match(currentFieldName, _SOURCE_FIELD)) {\n- FetchSourceContext fetchSourceContext = FetchSourceContext.parse(parser, context);\n- builder.fetchSourceContext = fetchSourceContext;\n+ builder.fetchSourceContext = FetchSourceContext.parse(parser, context);\n } else {\n throw new ParsingException(parser.getTokenLocation(), \"Unknown key for a \" + token + \" in [\" + currentFieldName + \"].\",\n parser.getTokenLocation());", "filename": "core/src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -21,7 +21,6 @@\n \n import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;\n import com.fasterxml.jackson.core.io.JsonStringEncoder;\n-\n import org.apache.lucene.search.Query;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;\n@@ -73,12 +72,7 @@\n import org.elasticsearch.indices.analysis.IndicesAnalysisService;\n import org.elasticsearch.indices.breaker.CircuitBreakerService;\n import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n-import org.elasticsearch.script.MockScriptEngine;\n-import org.elasticsearch.script.ScriptContext;\n-import org.elasticsearch.script.ScriptContextRegistry;\n-import org.elasticsearch.script.ScriptEngineService;\n-import org.elasticsearch.script.ScriptModule;\n-import org.elasticsearch.script.ScriptService;\n+import org.elasticsearch.script.*;\n import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.ESTestCase;\n@@ -99,12 +93,7 @@\n import java.lang.reflect.InvocationHandler;\n import java.lang.reflect.Method;\n import java.lang.reflect.Proxy;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n import java.util.concurrent.ExecutionException;\n \n import static org.hamcrest.Matchers.equalTo;\n@@ -220,8 +209,8 @@ protected void configure() {\n new AbstractModule() {\n @Override\n protected void configure() {\n- IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(index, indexSettings, Collections.EMPTY_LIST);\n- SimilarityService service = new SimilarityService(idxSettings, Collections.EMPTY_MAP);\n+ IndexSettings idxSettings = IndexSettingsModule.newIndexSettings(index, indexSettings, Collections.emptyList());\n+ SimilarityService service = new SimilarityService(idxSettings, Collections.emptyMap());\n bind(SimilarityService.class).toInstance(service);\n BitsetFilterCache bitsetFilterCache = new 
BitsetFilterCache(idxSettings, new IndicesWarmer(idxSettings.getNodeSettings(), null));\n bind(BitsetFilterCache.class).toInstance(bitsetFilterCache);\n@@ -413,7 +402,7 @@ public void testToQuery() throws IOException {\n /**\n * Few queries allow you to set the boost and queryName on the java api, although the corresponding parser doesn't parse them as they are not supported.\n * This method allows to disable boost and queryName related tests for those queries. Those queries are easy to identify: their parsers\n- * don't parse `boost` and `_name` as they don't apply to the specific query: filter query, wrapper query and match_none\n+ * don't parse `boost` and `_name` as they don't apply to the specific query: wrapper query and match_none\n */\n protected boolean supportsBoostAndQueryName() {\n return true;", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -88,23 +88,12 @@ public void testTemplateInBody() throws IOException {\n }\n \n public void testTemplateInBodyWithSize() throws IOException {\n- String request = \"{\\n\" +\n- \" \\\"size\\\":0,\" +\n- \" \\\"query\\\": {\\n\" +\n- \" \\\"template\\\": {\\n\" +\n- \" \\\"query\\\": {\\\"match_{{template}}\\\": {}},\\n\" +\n- \" \\\"params\\\" : {\\n\" +\n- \" \\\"template\\\" : \\\"all\\\"\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \"}\";\n Map<String, Object> params = new HashMap<>();\n params.put(\"template\", \"all\");\n SearchResponse sr = client().prepareSearch()\n .setSource(\n new SearchSourceBuilder().size(0).query(\n- QueryBuilders.templateQuery(new Template(\"{ \\\"query\\\": { \\\"match_{{template}}\\\": {} } }\",\n+ QueryBuilders.templateQuery(new Template(\"{ \\\"match_{{template}}\\\": {} }\",\n ScriptType.INLINE, null, null, params)))).execute()\n .actionGet();\n assertNoFailures(sr);", "filename": "core/src/test/java/org/elasticsearch/index/query/TemplateQueryIT.java", "status": "modified" }, { "diff": "@@ -579,7 +579,6 @@ public void testCustomWeightFactorQueryBuilderWithFunctionScoreWithoutQueryGiven\n public void testFieldValueFactorFactorArray() throws IOException {\n // don't permit an array of factors\n String querySource = \"{\" +\n- \"\\\"query\\\": {\" +\n \" \\\"function_score\\\": {\" +\n \" \\\"query\\\": {\" +\n \" \\\"match\\\": {\\\"name\\\": \\\"foo\\\"}\" +\n@@ -593,7 +592,6 @@ public void testFieldValueFactorFactorArray() throws IOException {\n \" }\" +\n \" ]\" +\n \" }\" +\n- \" }\" +\n \"}\";\n try {\n parseQuery(querySource);", "filename": "core/src/test/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilderTests.java", "status": "modified" }, { "diff": "@@ -78,6 +78,16 @@ format can't read from version `3.0.0` and onwards. 
The new format allows for a\n scalable join between parent and child documents and the join data structures are stored on on disk\n data structures as opposed as before the join data structures were stored in the jvm heap space.\n \n+==== Deprecated queries removed\n+\n+The following deprecated queries have been removed:\n+* `filtered`: use `bool` query instead, which supports `filter` clauses too\n+* `and`: use `must` clauses in a `bool` query instead\n+* `or`: use should clauses in a `bool` query instead\n+* `limit`: use `terminate_after` parameter instead\n+* `fquery`: obsolete after filters and queries have been merged\n+* `query`: obsolete after filters and queries have been merged\n+\n ==== `score_type` has been removed\n \n The `score_type` option has been removed from the `has_child` and `has_parent` queries in favour of the `score_mode` option", "filename": "docs/reference/migration/migrate_3_0.asciidoc", "status": "modified" }, { "diff": "@@ -362,7 +362,7 @@ The filter cache has been renamed <<query-cache>>.\n [role=\"exclude\",id=\"query-dsl-filtered-query\"]\n === Filtered query\n \n-The `filtered` query is replaced in favour of the <<query-dsl-bool-query,bool>> query. Instead of\n+The `filtered` query is replaced by the <<query-dsl-bool-query,bool>> query. Instead of\n the following:\n \n [source,js]", "filename": "docs/reference/redirects.asciidoc", "status": "modified" }, { "diff": "@@ -20,16 +20,12 @@\n package org.elasticsearch.rest.action.deletebyquery;\n \n import org.elasticsearch.action.deletebyquery.DeleteByQueryRequest;\n-import org.elasticsearch.action.deletebyquery.DeleteByQueryResponse;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryBuilder;\n-import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.indices.query.IndicesQueriesRegistry;\n import org.elasticsearch.rest.BaseRestHandler;\n import org.elasticsearch.rest.RestChannel;\n@@ -67,29 +63,15 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n if (request.hasParam(\"timeout\")) {\n delete.timeout(request.paramAsTime(\"timeout\", null));\n }\n- if (request.hasContent()) {\n- XContentParser requestParser = XContentFactory.xContent(request.content()).createParser(request.content());\n- QueryParseContext context = new QueryParseContext(indicesQueriesRegistry);\n- context.reset(requestParser);\n- context.parseFieldMatcher(parseFieldMatcher);\n- final QueryBuilder<?> builder = context.parseInnerQueryBuilder();\n- delete.query(builder);\n+ if (RestActions.hasBodyContent(request)) {\n+ delete.query(RestActions.getQueryContent(RestActions.getRestContent(request), indicesQueriesRegistry, parseFieldMatcher));\n } else {\n- String source = request.param(\"source\");\n- if (source != null) {\n- XContentParser requestParser = XContentFactory.xContent(source).createParser(source);\n- QueryParseContext context = new QueryParseContext(indicesQueriesRegistry);\n- context.reset(requestParser);\n- final QueryBuilder<?> builder = context.parseInnerQueryBuilder();\n- delete.query(builder);\n- } else {\n- QueryBuilder<?> queryBuilder = RestActions.urlParamsToQueryBuilder(request);\n- if (queryBuilder != null) {\n- 
delete.query(queryBuilder);\n- }\n+ QueryBuilder<?> queryBuilder = RestActions.urlParamsToQueryBuilder(request);\n+ if (queryBuilder != null) {\n+ delete.query(queryBuilder);\n }\n }\n delete.types(Strings.splitStringByCommaToArray(request.param(\"type\")));\n- client.execute(INSTANCE, delete, new RestToXContentListener<DeleteByQueryResponse>(channel));\n+ client.execute(INSTANCE, delete, new RestToXContentListener<>(channel));\n }\n }", "filename": "plugins/delete-by-query/src/main/java/org/elasticsearch/rest/action/deletebyquery/RestDeleteByQueryAction.java", "status": "modified" }, { "diff": "@@ -1,5 +1,4 @@\n----\n-\"Basic delete_by_query\":\n+setup:\n - do:\n index:\n index: test_1\n@@ -24,6 +23,8 @@\n - do:\n indices.refresh: {}\n \n+---\n+\"Basic delete_by_query\":\n - do:\n delete_by_query:\n index: test_1\n@@ -40,3 +41,14 @@\n index: test_1\n \n - match: { count: 2 }\n+\n+---\n+\"Delete_by_query body without query element\":\n+ - do:\n+ catch: request\n+ delete_by_query:\n+ index: test_1\n+ body:\n+ match:\n+ foo: bar\n+", "filename": "plugins/delete-by-query/src/test/resources/rest-api-spec/test/delete_by_query/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,5 +1,4 @@\n----\n-\"count with body\":\n+setup:\n - do:\n indices.create:\n index: test\n@@ -14,6 +13,8 @@\n indices.refresh:\n index: [test]\n \n+---\n+\"count with body\":\n - do:\n count:\n index: test\n@@ -35,3 +36,13 @@\n foo: test\n \n - match: {count : 0}\n+\n+---\n+\"count body without query element\":\n+ - do:\n+ catch: request\n+ count:\n+ index: test\n+ body:\n+ match:\n+ foo: bar", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/count/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,5 +1,14 @@\n----\n-\"Basic explain\":\n+setup:\n+ - do:\n+ indices.create:\n+ index: test_1\n+ body:\n+ aliases:\n+ alias_1: {}\n+ - do:\n+ cluster.health:\n+ wait_for_status: yellow\n+\n - do:\n index:\n index: test_1\n@@ -10,6 +19,9 @@\n - do:\n indices.refresh: {}\n \n+---\n+\"Basic explain\":\n+\n - do:\n explain:\n index: test_1\n@@ -27,26 +39,6 @@\n \n ---\n \"Basic explain with alias\":\n- - do:\n- indices.create:\n- index: test_1\n- body:\n- aliases:\n- alias_1: {}\n-\n- - do:\n- cluster.health:\n- wait_for_status: yellow\n-\n- - do:\n- index:\n- index: test_1\n- type: test\n- id: id_1\n- body: { foo: bar, title: howdy }\n-\n- - do:\n- indices.refresh: {}\n \n - do:\n explain:\n@@ -63,3 +55,14 @@\n - match: { _type: test }\n - match: { _id: id_1 }\n \n+---\n+\"Explain body without query element\":\n+ - do:\n+ catch: request\n+ explain:\n+ index: test_1\n+ type: test\n+ id: id_1\n+ body:\n+ match_all: {}\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/explain/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,5 +1,4 @@\n----\n-\"Validate query api\":\n+setup:\n - do:\n indices.create:\n index: testing\n@@ -11,6 +10,8 @@\n cluster.health:\n wait_for_status: yellow\n \n+---\n+\"Validate query api\":\n - do:\n indices.validate_query:\n q: query string\n@@ -34,3 +35,11 @@\n - match: {explanations.0.index: 'testing'}\n - match: {explanations.0.explanation: '*:*'}\n \n+---\n+\"Validate body without query element\":\n+ - do:\n+ indices.validate_query:\n+ body:\n+ match_all: {}\n+\n+ - is_false: valid", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.validate_query/10_basic.yaml", "status": "modified" }, { "diff": "@@ -1,5 +1,4 @@\n----\n-\"Default index\":\n+setup:\n - do:\n indices.create:\n index: test_2\n@@ -24,6 +23,9 @@\n indices.refresh:\n 
index: [test_1, test_2]\n \n+---\n+\"Basic search\":\n+\n - do:\n search:\n index: _all\n@@ -62,3 +64,14 @@\n - match: {hits.hits.0._index: test_2 }\n - match: {hits.hits.0._type: test }\n - match: {hits.hits.0._id: \"42\" }\n+\n+---\n+\"Search body without query element\":\n+\n+ - do:\n+ catch: request\n+ search:\n+ body:\n+ match:\n+ foo: bar\n+", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/search/20_default_values.yaml", "status": "modified" } ] }
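The practical contract after this change is that request bodies for count, explain, validate query and delete-by-query must wrap the actual query in a top-level `query` (or `query_binary`) element; a bare query at the top level is rejected, as the added YAML tests show. The sketch below illustrates that contract on plain maps using only the JDK; it is a simplified stand-in, not the `QueryParseContext` implementation.

```java
import java.util.*;

// Simplified illustration of the "top level must contain a query element" contract.
// Plain maps stand in for the parsed JSON body; this is not Elasticsearch's parser.
public class TopLevelQuerySketch {

    static Object parseTopLevel(Map<String, Object> body) {
        if (body.containsKey("query")) {
            return body.get("query");          // the inner query is parsed from here
        }
        if (body.containsKey("query_binary")) {
            return body.get("query_binary");   // binary form is also accepted
        }
        throw new IllegalArgumentException("required query is missing, got keys " + body.keySet());
    }

    public static void main(String[] args) {
        Map<String, Object> wrapped = new HashMap<>();
        wrapped.put("query", Collections.singletonMap("match_all", Collections.emptyMap()));
        System.out.println(parseTopLevel(wrapped));              // accepted

        Map<String, Object> bare = Collections.singletonMap("match", Collections.singletonMap("foo", "bar"));
        try {
            parseTopLevel(bare);                                 // a bare query body is rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```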
{ "body": "The delete-by-query plugin wraps all queries in a `query` element, so the parsed queries are wrapperd in a QueryWrapperFilter in the end. This does not change the matching documents but prevents me from removing the `query` query which is deprecated (which takes a query and turns it into a filter).\n", "comments": [], "number": 13326, "title": "The delete-by-query plugin wraps all queries in a `query` element" }
{ "body": "The delete by query plugin used to set the provided body as the query of the SearchSourceBuilder, which means it was going to be wrapped into an additional query element. This ended up working properly only because we have a registered \"query\" query that makes all the parsing work anyway. That said, this has a side effect: we ended up supporting a query that is not wrapped into a query element on the REST layer, something that should not be supported. Also, we want to remove the deprecated \"query\" query from master as it is deprecated in 2.x, but it is not possible as long as we need it to properly parse the delete_by_query body.\n\nThis is what caused #13326 in the first place, but master has changed in the meantime and will need different changes.\n\nRelates to #13326\n", "number": 14302, "review_comments": [], "title": "Delete by query to not wrap the inner query into an additional query element" }
{ "commits": [ { "message": "Delete by query to not wrap the inner query into an additional query element\n\nThe delete by query plugin used to set the provided body as the query of the SearchSourceBuilder, which means it was going to be wrapped into an additional query element. This ended up working properly only because we have a registered \"query\" query that makes all the parsing work anyway. That said, this has a side effect: we ended up supporting a query that is not wrapped into a query element on the REST layer, something that should not be supported. Also, we want to remove the deprecated \"query\" query from master as it is deprecated in 2.x, but it is not possible as long as we need it to properly parse the delete_by_query body.\n\nThis is what caused #13326 in the first place, but master has changed in the meantime and will need different changes.\n\nRelates to #13326" } ], "files": [ { "diff": "@@ -107,8 +107,8 @@ void executeScan() {\n scanRequest.routing(request.routing());\n }\n \n+ scanRequest.source(request.source());\n SearchSourceBuilder source = new SearchSourceBuilder()\n- .query(request.source())\n .fields(\"_routing\", \"_parent\")\n .sort(\"_doc\") // important for performance\n .fetchSource(false)\n@@ -119,7 +119,7 @@ void executeScan() {\n if (request.timeout() != null) {\n source.timeout(request.timeout());\n }\n- scanRequest.source(source);\n+ scanRequest.extraSource(source);\n \n logger.trace(\"executing scan request\");\n searchAction.execute(scanRequest, new ActionListener<SearchResponse>() {\n@@ -302,10 +302,6 @@ boolean hasTimedOut() {\n return request.timeout() != null && (threadPool.estimatedTimeInMillis() >= (startTime + request.timeout().millis()));\n }\n \n- void addShardFailure(ShardOperationFailedException failure) {\n- addShardFailures(new ShardOperationFailedException[]{failure});\n- }\n-\n void addShardFailures(ShardOperationFailedException[] failures) {\n if (!CollectionUtils.isEmpty(failures)) {\n ShardOperationFailedException[] duplicates = new ShardOperationFailedException[shardFailures.length + failures.length];", "filename": "plugins/delete-by-query/src/main/java/org/elasticsearch/action/deletebyquery/TransportDeleteByQueryAction.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.action.search.ClearScrollResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.action.support.QuerySourceBuilder;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -85,7 +86,7 @@ public void testExecuteScan() {\n assertHitCount(client().prepareCount(\"test\").get(), numDocs);\n \n final long limit = randomIntBetween(0, numDocs);\n- DeleteByQueryRequest delete = new DeleteByQueryRequest().indices(\"test\").source(boolQuery().must(rangeQuery(\"num\").lte(limit)).buildAsBytes());\n+ DeleteByQueryRequest delete = new DeleteByQueryRequest().indices(\"test\").source(new QuerySourceBuilder().setQuery(boolQuery().must(rangeQuery(\"num\").lte(limit))));\n TestActionListener listener = new TestActionListener();\n \n newAsyncAction(delete, listener).executeScan();", "filename": "plugins/delete-by-query/src/test/java/org/elasticsearch/action/deletebyquery/TransportDeleteByQueryActionTests.java", "status": "modified" }, { "diff": "@@ -1,5 +1,4 @@\n----\n-\"Basic delete_by_query\":\n+setup:\n - do:\n index:\n index: test_1\n@@ -24,6 +23,8 @@\n - do:\n 
indices.refresh: {}\n \n+---\n+\"Basic delete_by_query\":\n - do:\n delete_by_query:\n index: test_1\n@@ -40,3 +41,14 @@\n index: test_1\n \n - match: { count: 2 }\n+\n+---\n+\"No query element delete_by_query\":\n+ - do:\n+ catch: request\n+ delete_by_query:\n+ index: test_1\n+ body:\n+ match:\n+ foo: bar\n+", "filename": "plugins/delete-by-query/src/test/resources/rest-api-spec/test/delete_by_query/10_basic.yaml", "status": "modified" } ] }
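On the Java API side, the updated test shows the intended way to pass a query to delete-by-query after this change: wrap it in a `QuerySourceBuilder` so it is emitted under a proper `query` element rather than as a bare query source. The fragment below mirrors that test line; it assumes the Elasticsearch core jar and the delete-by-query plugin classes are on the classpath, and the static imports of the `QueryBuilders` helpers are an assumption not shown in the diff hunk.

```java
// Mirrors the updated TransportDeleteByQueryActionTests: build the request with a
// QuerySourceBuilder so the bool/range query ends up under a "query" element.
// Assumes Elasticsearch core plus the delete-by-query plugin on the classpath.
import org.elasticsearch.action.deletebyquery.DeleteByQueryRequest;
import org.elasticsearch.action.support.QuerySourceBuilder;

import static org.elasticsearch.index.query.QueryBuilders.boolQuery;
import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;

public class DeleteByQueryRequestSketch {

    // Builds a delete-by-query request over the "test" index matching num <= limit.
    public static DeleteByQueryRequest buildRequest(long limit) {
        return new DeleteByQueryRequest()
                .indices("test")
                .source(new QuerySourceBuilder()
                        .setQuery(boolQuery().must(rangeQuery("num").lte(limit))));
    }
}
```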
{ "body": "```\nPlugin watcher not found. Run plugin --list to get list of installed plugins.\n```\n\nBut when you try to use `bin/plugin --list`:\n\n```\n$ bin/plugin --list\nERROR: unknown command [--list]. Use [-h] option to list available commands\n```\n\nIt should show `Run plugin list to get list of installed plugins`, or something of that nature.\n", "comments": [ { "body": "Beat me by 60 seconds :smile: \n", "created_at": "2015-10-26T18:05:18Z" } ], "number": 14287, "title": "bin/plugin tells you to use --list when plugin not found" }
{ "body": "Closes #14287\n", "number": 14288, "review_comments": [], "title": "Fix plugin list command error message" }
{ "commits": [ { "message": "Fix plugin list command error message\n\nCloses #14287" } ], "files": [ { "diff": "@@ -511,7 +511,7 @@ public void removePlugin(String name, Terminal terminal) throws IOException {\n if (removed) {\n terminal.println(\"Removed %s\", name);\n } else {\n- terminal.println(\"Plugin %s not found. Run plugin --list to get list of installed plugins.\", name);\n+ terminal.println(\"Plugin %s not found. Run \\\"plugin list\\\" to get list of installed plugins.\", name);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java", "status": "modified" } ] }
{ "body": "Here's the aggregation expressed in elasticsearch-dsl-py (which works fine in 1.x):\n\n```\ns.aggs \\\n .bucket('hour', 'terms', script= \\\n 'use(groovy.time.TimeCategory) { \\\n new Date(doc[\"timestamp\"].value).format(\"HH\") \\\n }', size=0, order={'_term': 'asc'}) \\\n .metric('bandwidth', 'sum', field='size') \\\n .bucket('action', 'terms', field='action', size=0) \\\n .metric('bandwidth', 'sum', field='size')\n```\n\nIn 2.0 we get this error:\n\n```\nRemoteTransportException[[Longshot][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: \nQueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: \nGroovyScriptExecutionException[failed to run inline script \n[use(groovy.time.TimeCategory) { new Date(doc[\"timestamp\"].value).format(\"HH\") }] using lang [groovy]]; nested: \nNoClassDefFoundError[Could not initialize class org.codehaus.groovy.vmplugin.v7.IndyInterface];\n```\n\nIf we can no longer use groovy.time.TimeCategory, is there an alternative way to get hour buckets coalesced across multiple days from timestamps? Thanks in advance!\n", "comments": [ { "body": "Can you check the log of the node for the cause of the `Could not initialize class org.codehaus.groovy.vmplugin.v7.IndyInterface` error? That's not the root error, rather something else failed to let that happen.\n", "created_at": "2015-10-23T22:27:17Z" }, { "body": "This is the full error from a recent run: https://gist.github.com/reflection/a62d981f9f1e499e7c8e\n", "created_at": "2015-10-23T22:38:13Z" }, { "body": "Hi @reflection, can you try to find the original run of the script? Once that kind of error happens, it repeats? _If_ you can restart the node, then run it to find the first occurrence (if it's difficult without restarting), then that would be good.\n", "created_at": "2015-10-24T01:12:07Z" }, { "body": "I think its a legit bug: see http://www.groovy-lang.org/indy.html\n\nEven though we have not enabled indy (https://github.com/elastic/elasticsearch/pull/8201), we ship an indy jar in 2.0, hence it still will be used for core groovy classes. \n\nTo fix this for 2.0, we'd need to either backport that change completely, or at least backport policy file changes from that PR. And of course a test that is like @reflection script that uses one of these classes so we know its working.\n", "created_at": "2015-10-24T02:53:47Z" }, { "body": "Would a workaround be to delete the indy jar for now?\n", "created_at": "2015-10-24T08:47:53Z" }, { "body": "no clue. first things first, we need a failing unit test.\n", "created_at": "2015-10-24T12:38:33Z" }, { "body": "i also don't think we should encourage users to manually mess with jars like that. we should just fix the bug....\n", "created_at": "2015-10-24T12:41:37Z" }, { "body": "Yeah, that's why I was trying to find the core error. Curious if it's a security manager error that trips up the `IndyInterface` (as we found with #8201) or if it's something else.\n", "created_at": "2015-10-24T15:56:54Z" }, { "body": "I'm sure its probably that. 
You probably get NoClassDefFound because there was an earlier security exception before when trying to do the static init, as the indy stuff will not work.\n\nSo I think we just need to have a test that uses standard groovy function to ensure invokeDynamic is working.\n", "created_at": "2015-10-24T16:35:38Z" }, { "body": "I just tried this in a script field in 2.0 RC1 and it did fail:\n\n``` http\nPUT /test/type/1\n{\n \"message\" : \"this is text\",\n \"timestamp\" : \"2015-10-24T17:12:00.000Z\"\n}\n\nGET /test/_search\n{\n \"script_fields\": {\n \"hours\": {\n \"script\": \"use(groovy.time.TimeCategory) { new Date(doc['timestamp'].value).format('HH') }\"\n }\n }\n}\n```\n\nThis generated the `ExceptionInInitializerError`:\n\n```\nCaused by: java.lang.ExceptionInInitializerError\n at org.codehaus.groovy.vmplugin.v7.Java7.invalidateCallSites(Java7.java:66)\n at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.newScope(GroovyCategorySupport.java:78)\n at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:109)\n at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.access$400(GroovyCategorySupport.java:68)\n at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:252)\n at org.codehaus.groovy.runtime.DefaultGroovyMethods.use(DefaultGroovyMethods.java:406)\n at org.codehaus.groovy.runtime.dgm$755.invoke(Unknown Source)\n at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoMetaMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:251)\n at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:59)\n at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:52)\n at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:154)\n at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:174)\n at 6500c935a9b77886d78797d577be9c2be71f03d0.run(6500c935a9b77886d78797d577be9c2be71f03d0:1)\n at org.elasticsearch.script.groovy.GroovyScriptEngineService$GroovyScript.run(GroovyScriptEngineService.java:248)\n ... 10 more\nCaused by: java.security.AccessControlException: access denied (\"java.util.PropertyPermission\" \"groovy.indy.logging\" \"read\")\n at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)\n at java.security.AccessController.checkPermission(AccessController.java:884)\n at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)\n at java.lang.SecurityManager.checkPropertyAccess(SecurityManager.java:1294)\n at java.lang.System.getProperty(System.java:717)\n at org.codehaus.groovy.vmplugin.v7.IndyInterface.<clinit>(IndyInterface.java:76)\n ... 24 more\n```\n\nCore caused by:\n\n```\n java.security.AccessControlException: access denied (\"java.util.PropertyPermission\" \"groovy.indy.logging\" \"read\")\n```\n", "created_at": "2015-10-24T17:19:39Z" }, { "body": "Related to https://github.com/apache/incubator-groovy/pull/119\n", "created_at": "2015-10-24T17:21:27Z" }, { "body": "To me the safest option for a 2.0.x fix would be:\n- add test to GroovySecurityIT\n- backport https://github.com/elastic/elasticsearch/pull/8201 in its entirety but just change default to false. 
it can be enabled in 2.1\n", "created_at": "2015-10-24T17:27:51Z" }, { "body": "@reflection can you tell me what JVM you are using?\n", "created_at": "2015-10-26T12:04:00Z" }, { "body": "this is fixed by #14283 for `2.0.1`. The first time the class is accessed we get a security exception and all subsequent tries get a shadowing `NoClassDefFoundError`.\n", "created_at": "2015-10-26T14:10:45Z" } ], "number": 14273, "title": "Groovy script in aggregation not working in 2.0" }
{ "body": "This relates to #14273\n", "number": 14283, "review_comments": [], "title": "Backport #8201 to 2.0 and disable by default" }
{ "commits": [ { "message": "Merge pull request #8201 from pickypg/feature/groovy-compile-indy-8184\n\nEnable indy (invokedynamic) compile flag for Groovy scripts by default" }, { "message": "Disable indy by default" }, { "message": "[TEST] Add test for loading groovy.time.TimeCategory" } ], "files": [ { "diff": "@@ -26,22 +26,29 @@\n import java.security.CodeSource;\n import java.security.Permission;\n import java.security.PermissionCollection;\n+import java.security.Permissions;\n import java.security.Policy;\n import java.security.ProtectionDomain;\n import java.security.URIParameter;\n+import java.util.PropertyPermission;\n \n /** custom policy for union of static and dynamic permissions */\n final class ESPolicy extends Policy {\n \n /** template policy file, the one used in tests */\n static final String POLICY_RESOURCE = \"security.policy\";\n+ /** limited policy for groovy scripts */\n+ static final String GROOVY_RESOURCE = \"groovy.policy\";\n \n final Policy template;\n+ final Policy groovy;\n final PermissionCollection dynamic;\n \n public ESPolicy(PermissionCollection dynamic) throws Exception {\n- URI uri = getClass().getResource(POLICY_RESOURCE).toURI();\n- this.template = Policy.getInstance(\"JavaPolicy\", new URIParameter(uri));\n+ URI policyUri = getClass().getResource(POLICY_RESOURCE).toURI();\n+ URI groovyUri = getClass().getResource(GROOVY_RESOURCE).toURI();\n+ this.template = Policy.getInstance(\"JavaPolicy\", new URIParameter(policyUri));\n+ this.groovy = Policy.getInstance(\"JavaPolicy\", new URIParameter(groovyUri));\n this.dynamic = dynamic;\n }\n \n@@ -54,9 +61,9 @@ public boolean implies(ProtectionDomain domain, Permission permission) {\n // location can be null... ??? nobody knows\n // https://bugs.openjdk.java.net/browse/JDK-8129972\n if (location != null) {\n- // run groovy scripts with no permissions\n+ // run groovy scripts with no permissions (except logging property)\n if (\"/groovy/script\".equals(location.getFile())) {\n- return false;\n+ return groovy.implies(domain, permission);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/bootstrap/ESPolicy.java", "status": "modified" }, { "diff": "@@ -57,19 +57,48 @@\n */\n public class GroovyScriptEngineService extends AbstractComponent implements ScriptEngineService {\n \n+ /**\n+ * The name of the scripting engine/language.\n+ */\n public static final String NAME = \"groovy\";\n+ /**\n+ * The setting to enable or disable <code>invokedynamic</code> instruction support in Java 7+.\n+ * <p>\n+ * Note: If this is disabled because <code>invokedynamic</code> is causing issues, then the Groovy\n+ * <code>indy</code> jar needs to be replaced by the non-<code>indy</code> variant of it on the classpath (e.g.,\n+ * <code>groovy-all-2.4.4-indy.jar</code> should be replaced by <code>groovy-all-2.4.4.jar</code>).\n+ * <p>\n+ * Defaults to {@code false} in 2.0 and will be {@code true} in 2.1.\n+ */\n+ public static final String GROOVY_INDY_ENABLED = \"script.groovy.indy\";\n+ /**\n+ * The name of the Groovy compiler setting to use associated with activating <code>invokedynamic</code> support.\n+ */\n+ public static final String GROOVY_INDY_SETTING_NAME = \"indy\";\n+\n private final GroovyClassLoader loader;\n \n @Inject\n public GroovyScriptEngineService(Settings settings) {\n super(settings);\n+\n ImportCustomizer imports = new ImportCustomizer();\n imports.addStarImports(\"org.joda.time\");\n imports.addStaticStars(\"java.lang.Math\");\n+\n CompilerConfiguration config = new CompilerConfiguration();\n+\n 
config.addCompilationCustomizers(imports);\n // Add BigDecimal -> Double transformer\n config.addCompilationCustomizers(new GroovyBigDecimalTransformer(CompilePhase.CONVERSION));\n+\n+ // Implicitly requires Java 7u60 or later to get valid support\n+ if (settings.getAsBoolean(GROOVY_INDY_ENABLED, false)) {\n+ // maintain any default optimizations\n+ config.getOptimizationOptions().put(GROOVY_INDY_SETTING_NAME, true);\n+ }\n+\n+ // Groovy class loader to isolate Groovy-land code\n this.loader = new GroovyClassLoader(getClass().getClassLoader(), config);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,31 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+ \n+/*\n+ * Limited security policy for groovy scripts.\n+ * This is what is needed for its invokeDynamic functionality to work.\n+ */\n+grant {\n+ \n+ // groovy IndyInterface bootstrap requires this property for indy logging\n+ permission java.util.PropertyPermission \"groovy.indy.logging\", \"read\";\n+ \n+ // needed IndyInterface selectMethod (setCallSiteTarget)\n+ permission java.lang.RuntimePermission \"getClassLoader\";\n+};", "filename": "core/src/main/resources/org/elasticsearch/bootstrap/groovy.policy", "status": "added" }, { "diff": "@@ -72,6 +72,8 @@ public void testEvilGroovyScripts() throws Exception {\n assertSuccess(\"def v = doc['foo'].value; def m = [:]; m.put(\\\\\\\"value\\\\\\\", v)\");\n // Times\n assertSuccess(\"def t = Instant.now().getMillis()\");\n+ // groovy time\n+ assertSuccess(\"use(groovy.time.TimeCategory) { new Date(123456789).format('HH') }\");\n // GroovyCollections\n assertSuccess(\"def n = [1,2,3]; GroovyCollections.max(n)\");\n ", "filename": "core/src/test/java/org/elasticsearch/script/GroovySecurityIT.java", "status": "modified" } ] }
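The `script.groovy.indy` flag added in this PR is read directly from node settings by the `GroovyScriptEngineService` constructor shown in the diff. A minimal sketch of exercising that flag, assuming only the setting key and constructor from the diff (the wrapper class and its name are illustrative):

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.script.groovy.GroovyScriptEngineService;

public class GroovyIndySketch {
    public static void main(String[] args) {
        // Opt in to invokedynamic compilation; the PR leaves it disabled by default in 2.0.
        Settings settings = Settings.settingsBuilder()
                .put(GroovyScriptEngineService.GROOVY_INDY_ENABLED, true) // "script.groovy.indy"
                .build();
        // The constructor reads the flag and sets the Groovy "indy" optimization option.
        GroovyScriptEngineService groovy = new GroovyScriptEngineService(settings);
    }
}
```

As the javadoc added above notes, if the flag is later disabled because `invokedynamic` causes issues, the indy Groovy jar also needs to be swapped for the non-indy variant on the classpath.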
{ "body": "DedicatedClusterSnapshotRestoreIT.restoreIndexWithMissingShards failed twice in the last 10 days with the following error:\n\n```\nExpected: <100L> but: was <87L>\n at __randomizedtesting.SeedInfo.seed([5C47F8A2C729A266:D663A36E857381AD]:0)\n at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)\n at org.junit.Assert.assertThat(Assert.java:865)\n at org.junit.Assert.assertThat(Assert.java:832)\n at org.elasticsearch.snapshots.DedicatedClusterSnapshotRestoreIT.restoreIndexWithMissingShards(DedicatedClusterSnapshotRestoreIT.java:525)\n```\n\nI didn't manage to reproduce the failure. Seems like after a restore not all expected documents can be found in the index using the count api:\n\nhttp://build-us-00.elastic.co/job/es_core_master_centos/8019\n", "comments": [ { "body": "This fails because of the change in https://github.com/elastic/elasticsearch/pull/13766/files#diff-e3d064cfd24df0c1f53af98e6f9bc66dR675 . I'll work on a fix but need to get some other things done first. If anyone interested in picking this up first , please ping me :)\n", "created_at": "2015-10-20T08:37:48Z" }, { "body": "@bleskes what's the reason for this failure can you comment with more details?\n", "created_at": "2015-10-23T07:27:12Z" }, { "body": "@s1monw see #14276 \n", "created_at": "2015-10-25T16:31:22Z" } ], "number": 14115, "title": "DedicatedClusterSnapshotRestoreIT.restoreIndexWithMissingShards rarely fails" }
{ "body": "#13766 removed the previous async trigger of shard recovery with the goal of making it easier to test. The async constructs were folded into IndicesClusterStateService. The change is good but it also made the state managment on the shard level async, while it was previously done on the cluster state thread, before triggering the recoveries (but it was hard to see that). This caused state related race conditions and some build failures (#14115). To fix, the shard state management is now pulled out of the recovery code and made explicit in IndicesClusterStateService, and runs on the cluster state update thread.\n\n Closes #14115\n", "number": 14276, "review_comments": [ { "body": "I know unrelated but maybe you find the time to add some javadocs to this method / methods?\n", "created_at": "2015-10-26T10:42:50Z" } ], "title": "Mark shard as recovering on the cluster state thread" }
{ "commits": [ { "message": "Recovery: mark shard as recovering on the cluster state thread\n\n#13766 removed the previous async trigger of shard recovery with the goal of making it easier to test. The async constructs were folded into IndicesClusterStateService. The change is good but it also made the state managment on the shard level async, while it was previously done on the cluster state thread, before triggering the recoveries (but it was hard to see that). This caused state related race conditions and some build failures (#14115). To fix, the shard state management is now pulled out of the recovery code and made explicit in IndicesClusterStateService, and runs on the cluster state update thread.\n\n Closes #14115" } ], "files": [ { "diff": "@@ -20,7 +20,10 @@\n package org.elasticsearch.index.shard;\n \n import org.apache.lucene.codecs.PostingsFormat;\n-import org.apache.lucene.index.*;\n+import org.apache.lucene.index.CheckIndex;\n+import org.apache.lucene.index.IndexCommit;\n+import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;\n+import org.apache.lucene.index.SnapshotDeletionPolicy;\n import org.apache.lucene.search.QueryCachingPolicy;\n import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;\n import org.apache.lucene.store.AlreadyClosedException;\n@@ -55,8 +58,8 @@\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.FutureUtils;\n import org.elasticsearch.gateway.MetaDataStateFormat;\n-import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.IndexServicesProvider;\n+import org.elasticsearch.index.IndexSettings;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.cache.IndexCache;\n import org.elasticsearch.index.cache.IndexCacheModule;\n@@ -83,8 +86,8 @@\n import org.elasticsearch.index.search.stats.ShardSearchStats;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n-import org.elasticsearch.index.store.Store.MetadataSnapshot;\n import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.Store.MetadataSnapshot;\n import org.elasticsearch.index.store.StoreFileMetaData;\n import org.elasticsearch.index.store.StoreStats;\n import org.elasticsearch.index.suggest.stats.ShardSuggestMetric;\n@@ -381,7 +384,7 @@ public void updateRoutingEntry(final ShardRouting newRouting, final boolean pers\n /**\n * Marks the shard as recovering based on a recovery state, fails with exception is recovering is not allowed to be set.\n */\n- public IndexShardState recovering(String reason, RecoveryState recoveryState) throws IndexShardStartedException,\n+ public IndexShardState markAsRecovering(String reason, RecoveryState recoveryState) throws IndexShardStartedException,\n IndexShardRelocatedException, IndexShardRecoveringException, IndexShardClosedException {\n synchronized (mutex) {\n if (state == IndexShardState.CLOSED) {\n@@ -1067,17 +1070,17 @@ public ShardPath shardPath() {\n return path;\n }\n \n- public boolean recoverFromStore(ShardRouting shard, DiscoveryNode localNode) {\n+ public boolean recoverFromStore(DiscoveryNode localNode) {\n // we are the first primary, recover from the gateway\n // if its post api allocation, the index should exists\n- assert shard.primary() : \"recover from store only makes sense if the shard is a primary shard\";\n- final boolean shouldExist = shard.allocatedPostIndexCreate();\n+ assert shardRouting.primary() : \"recover from 
store only makes sense if the shard is a primary shard\";\n+ final boolean shouldExist = shardRouting.allocatedPostIndexCreate();\n StoreRecovery storeRecovery = new StoreRecovery(shardId, logger);\n return storeRecovery.recoverFromStore(this, shouldExist, localNode);\n }\n \n- public boolean restoreFromRepository(ShardRouting shard, IndexShardRepository repository, DiscoveryNode locaNode) {\n- assert shard.primary() : \"recover from store only makes sense if the shard is a primary shard\";\n+ public boolean restoreFromRepository(IndexShardRepository repository, DiscoveryNode locaNode) {\n+ assert shardRouting.primary() : \"recover from store only makes sense if the shard is a primary shard\";\n StoreRecovery storeRecovery = new StoreRecovery(shardId, logger);\n return storeRecovery.recoverFromRepository(this, repository, locaNode);\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -72,13 +72,6 @@ boolean recoverFromStore(final IndexShard indexShard, final boolean indexShouldE\n if (indexShard.routingEntry().restoreSource() != null) {\n throw new IllegalStateException(\"can't recover - restore source is not null\");\n }\n- try {\n- final RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), indexShard.routingEntry().primary(), RecoveryState.Type.STORE, localNode, localNode);\n- indexShard.recovering(\"from store\", recoveryState);\n- } catch (IllegalIndexShardStateException e) {\n- // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n- return false;\n- }\n return executeRecovery(indexShard, () -> {\n logger.debug(\"starting recovery from store ...\");\n internalRecoverFromStore(indexShard, indexShouldExists);\n@@ -101,13 +94,6 @@ boolean recoverFromRepository(final IndexShard indexShard, IndexShardRepository\n if (shardRouting.restoreSource() == null) {\n throw new IllegalStateException(\"can't restore - restore source is null\");\n }\n- try {\n- final RecoveryState recoveryState = new RecoveryState(shardId, shardRouting.primary(), RecoveryState.Type.SNAPSHOT, shardRouting.restoreSource(), localNode);\n- indexShard.recovering(\"from snapshot\", recoveryState);\n- } catch (IllegalIndexShardStateException e) {\n- // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n- return false;\n- }\n return executeRecovery(indexShard, () -> {\n logger.debug(\"restoring from {} ...\", shardRouting.restoreSource());\n restore(indexShard, repository);", "filename": "core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java", "status": "modified" }, { "diff": "@@ -633,6 +633,8 @@ private void applyInitializingShard(final ClusterState state, final IndexMetaDat\n return;\n }\n \n+ final RestoreSource restoreSource = shardRouting.restoreSource();\n+\n if (isPeerRecovery(shardRouting)) {\n try {\n \n@@ -643,41 +645,53 @@ private void applyInitializingShard(final ClusterState state, final IndexMetaDat\n // the edge case where its mark as relocated, and we might need to roll it back...\n // For replicas: we are recovering a backup from a primary\n RecoveryState.Type type = shardRouting.primary() ? 
RecoveryState.Type.RELOCATION : RecoveryState.Type.REPLICA;\n+ RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), shardRouting.primary(), type, sourceNode, nodes.localNode());\n+ indexShard.markAsRecovering(\"from \" + sourceNode, recoveryState);\n recoveryTarget.startRecovery(indexShard, type, sourceNode, new PeerRecoveryListener(shardRouting, indexService, indexMetaData));\n } catch (Throwable e) {\n indexShard.failShard(\"corrupted preexisting index\", e);\n handleRecoveryFailure(indexService, shardRouting, true, e);\n }\n+ } else if (restoreSource == null) {\n+ assert indexShard.routingEntry().equals(shardRouting); // should have already be done before\n+ // recover from filesystem store\n+ final RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), shardRouting.primary(),\n+ RecoveryState.Type.STORE,\n+ nodes.localNode(), nodes.localNode());\n+ indexShard.markAsRecovering(\"from store\", recoveryState); // mark the shard as recovering on the cluster state thread\n+ threadPool.generic().execute(() -> {\n+ try {\n+ if (indexShard.recoverFromStore(nodes.localNode())) {\n+ shardStateAction.shardStarted(shardRouting, indexMetaData.getIndexUUID(), \"after recovery from store\");\n+ }\n+ } catch (Throwable t) {\n+ handleRecoveryFailure(indexService, shardRouting, true, t);\n+ }\n+\n+ });\n } else {\n- final DiscoveryNode localNode = clusterService.localNode();\n+ // recover from a restore\n+ final RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), shardRouting.primary(),\n+ RecoveryState.Type.SNAPSHOT, shardRouting.restoreSource(), nodes.localNode());\n+ indexShard.markAsRecovering(\"from snapshot\", recoveryState); // mark the shard as recovering on the cluster state thread\n threadPool.generic().execute(() -> {\n- final RestoreSource restoreSource = shardRouting.restoreSource();\n final ShardId sId = indexShard.shardId();\n try {\n- final boolean success;\n- if (restoreSource == null) {\n- // recover from filesystem store\n- success = indexShard.recoverFromStore(shardRouting, localNode);\n- } else {\n- // restore\n- final IndexShardRepository indexShardRepository = repositoriesService.indexShardRepository(restoreSource.snapshotId().getRepository());\n- try {\n- success = indexShard.restoreFromRepository(shardRouting, indexShardRepository, localNode);\n- } catch (Throwable t) {\n- if (Lucene.isCorruptionException(t)) {\n- restoreService.failRestore(restoreSource.snapshotId(), sId);\n- }\n- throw t;\n- }\n- if (success) {\n- restoreService.indexShardRestoreCompleted(restoreSource.snapshotId(), sId);\n- }\n+ final IndexShardRepository indexShardRepository = repositoriesService.indexShardRepository(restoreSource.snapshotId().getRepository());\n+ if (indexShard.restoreFromRepository(indexShardRepository, nodes.localNode())) {\n+ restoreService.indexShardRestoreCompleted(restoreSource.snapshotId(), sId);\n+ shardStateAction.shardStarted(shardRouting, indexMetaData.getIndexUUID(), \"after recovery from repository\");\n }\n- if (success) {\n- shardStateAction.shardStarted(shardRouting, indexMetaData.getIndexUUID(), \"after recovery from store\");\n+ } catch (Throwable first) {\n+ try {\n+ if (Lucene.isCorruptionException(first)) {\n+ restoreService.failRestore(restoreSource.snapshotId(), sId);\n+ }\n+ } catch (Throwable second) {\n+ first.addSuppressed(second);\n+ } finally {\n+ handleRecoveryFailure(indexService, shardRouting, true, first);\n }\n- } catch (Throwable e) {\n- handleRecoveryFailure(indexService, shardRouting, true, e);\n }\n 
});\n }", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -124,14 +124,6 @@ public boolean cancelRecoveriesForShard(ShardId shardId, String reason, @Nullabl\n }\n \n public void startRecovery(final IndexShard indexShard, final RecoveryState.Type recoveryType, final DiscoveryNode sourceNode, final RecoveryListener listener) {\n- try {\n- RecoveryState recoveryState = new RecoveryState(indexShard.shardId(), indexShard.routingEntry().primary(), recoveryType, sourceNode, clusterService.localNode());\n- indexShard.recovering(\"from \" + sourceNode, recoveryState);\n- } catch (IllegalIndexShardStateException e) {\n- // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n- logger.debug(\"{} ignore recovery. already in recovering process, {}\", indexShard.shardId(), e.getMessage());\n- return;\n- }\n // create a new recovery status, and process...\n final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, listener, recoverySettings.activityTimeout());\n threadPool.generic().execute(new RecoveryRunner(recoveryId));", "filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java", "status": "modified" }, { "diff": "@@ -22,32 +22,15 @@\n import com.carrotsearch.hppc.IntSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n-\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.support.IndicesOptions;\n-import org.elasticsearch.cluster.ClusterChangedEvent;\n-import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.ClusterStateListener;\n-import org.elasticsearch.cluster.ClusterStateUpdateTask;\n-import org.elasticsearch.cluster.RestoreInProgress;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.RestoreInProgress.ShardRestoreStatus;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n-import org.elasticsearch.cluster.metadata.AliasMetaData;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n-import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n-import org.elasticsearch.cluster.metadata.MetaData;\n-import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n-import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService;\n-import org.elasticsearch.cluster.metadata.RepositoriesMetaData;\n-import org.elasticsearch.cluster.metadata.SnapshotId;\n+import org.elasticsearch.cluster.metadata.*;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.cluster.routing.IndexRoutingTable;\n-import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n-import org.elasticsearch.cluster.routing.RestoreSource;\n-import org.elasticsearch.cluster.routing.RoutingTable;\n-import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.settings.ClusterDynamicSettings;\n@@ -70,35 +53,16 @@\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.repositories.Repository;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.EmptyTransportResponseHandler;\n-import 
org.elasticsearch.transport.TransportChannel;\n-import org.elasticsearch.transport.TransportRequest;\n-import org.elasticsearch.transport.TransportRequestHandler;\n-import org.elasticsearch.transport.TransportResponse;\n-import org.elasticsearch.transport.TransportService;\n+import org.elasticsearch.transport.*;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.Map.Entry;\n-import java.util.Set;\n import java.util.concurrent.BlockingQueue;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n import static java.util.Collections.unmodifiableSet;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_CREATION_DATE;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_INDEX_UUID;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_VERSION_CREATED;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_VERSION_MINIMUM_COMPATIBLE;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_VERSION_UPGRADED;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n import static org.elasticsearch.common.util.set.Sets.newHashSet;\n \n /**\n@@ -113,7 +77,7 @@\n * method.\n * <p>\n * Individual shards are getting restored as part of normal recovery process in\n- * {@link IndexShard#restoreFromRepository(ShardRouting, IndexShardRepository, DiscoveryNode)} )}\n+ * {@link IndexShard#restoreFromRepository(IndexShardRepository, DiscoveryNode)} )}\n * method, which detects that shard should be restored from snapshot rather than recovered from gateway by looking\n * at the {@link org.elasticsearch.cluster.routing.ShardRouting#restoreSource()} property.\n * <p>", "filename": "core/src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -20,13 +20,7 @@\n \n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.NumericDocValuesField;\n-import org.apache.lucene.index.CorruptIndexException;\n-import org.apache.lucene.index.DirectoryReader;\n-import org.apache.lucene.index.FieldFilterLeafReader;\n-import org.apache.lucene.index.FilterDirectoryReader;\n-import org.apache.lucene.index.IndexCommit;\n-import org.apache.lucene.index.LeafReader;\n-import org.apache.lucene.index.Term;\n+import org.apache.lucene.index.*;\n import org.apache.lucene.search.IndexSearcher;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.search.TopDocs;\n@@ -49,12 +43,7 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n-import org.elasticsearch.cluster.routing.RestoreSource;\n-import org.elasticsearch.cluster.routing.ShardRouting;\n-import org.elasticsearch.cluster.routing.ShardRoutingHelper;\n-import org.elasticsearch.cluster.routing.ShardRoutingState;\n-import org.elasticsearch.cluster.routing.TestShardRouting;\n-import org.elasticsearch.cluster.routing.UnassignedInfo;\n+import org.elasticsearch.cluster.routing.*;\n import 
org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n@@ -80,11 +69,7 @@\n import org.elasticsearch.index.flush.FlushStats;\n import org.elasticsearch.index.indexing.IndexingOperationListener;\n import org.elasticsearch.index.indexing.ShardIndexingService;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.mapper.Mapping;\n-import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.ParsedDocument;\n-import org.elasticsearch.index.mapper.Uid;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;\n@@ -110,16 +95,11 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n-import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_VERSION_CREATED;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.ToXContent.EMPTY_PARAMS;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -779,7 +759,8 @@ public void testRecoverFromStore() throws IOException {\n IndexShard newShard = test.createShard(0, routing);\n newShard.updateRoutingEntry(routing, false);\n DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n- assertTrue(newShard.recoverFromStore(routing, localNode));\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n+ assertTrue(newShard.recoverFromStore(localNode));\n routing = new ShardRouting(routing);\n ShardRoutingHelper.moveToStarted(routing);\n newShard.updateRoutingEntry(routing, true);\n@@ -809,21 +790,28 @@ public void testFailIfIndexNotPresentInRecoverFromStore() throws IOException {\n ShardRoutingHelper.reinit(routing);\n IndexShard newShard = test.createShard(0, routing);\n newShard.updateRoutingEntry(routing, false);\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n try {\n- newShard.recoverFromStore(routing, localNode);\n+ newShard.recoverFromStore(localNode);\n fail(\"index not there!\");\n } catch (IndexShardRecoveryException ex) {\n assertTrue(ex.getMessage().contains(\"failed to fetch index version after copying it over\"));\n }\n \n ShardRoutingHelper.moveToUnassigned(routing, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"because I say so\"));\n 
ShardRoutingHelper.initialize(routing, origRouting.currentNodeId());\n-\n- assertFalse(\"it's already recovering\", newShard.recoverFromStore(routing, localNode));\n+ assertTrue(\"it's already recovering, we should ignore new ones\", newShard.ignoreRecoveryAttempt());\n+ try {\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n+ fail(\"we are already recovering, can't mark again\");\n+ } catch (IllegalIndexShardStateException e) {\n+ // OK!\n+ }\n test.removeShard(0, \"I broken it\");\n newShard = test.createShard(0, routing);\n newShard.updateRoutingEntry(routing, false);\n- assertTrue(\"recover even if there is nothing to recover\", newShard.recoverFromStore(routing, localNode));\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n+ assertTrue(\"recover even if there is nothing to recover\", newShard.recoverFromStore(localNode));\n \n routing = new ShardRouting(routing);\n ShardRoutingHelper.moveToStarted(routing);\n@@ -859,7 +847,8 @@ public void testRestoreShard() throws IOException {\n \n test_target_shard.updateRoutingEntry(routing, false);\n DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n- assertTrue(test_target_shard.restoreFromRepository(routing, new IndexShardRepository() {\n+ test_target_shard.markAsRecovering(\"store\", new RecoveryState(routing.shardId(), routing.primary(), RecoveryState.Type.SNAPSHOT, routing.restoreSource(), localNode));\n+ assertTrue(test_target_shard.restoreFromRepository(new IndexShardRepository() {\n @Override\n public void snapshot(SnapshotId snapshotId, ShardId shardId, IndexCommit snapshotIndexCommit, IndexShardSnapshotStatus snapshotStatus) {\n }\n@@ -1031,7 +1020,8 @@ private final IndexShard reinitWithWrapper(IndexService indexService, IndexShard\n ShardRoutingHelper.reinit(routing);\n newShard.updateRoutingEntry(routing, false);\n DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n- assertTrue(newShard.recoverFromStore(routing, localNode));\n+ newShard.markAsRecovering(\"store\", new RecoveryState(newShard.shardId(), routing.primary(), RecoveryState.Type.STORE, localNode, localNode));\n+ assertTrue(newShard.recoverFromStore(localNode));\n routing = new ShardRouting(routing);\n ShardRoutingHelper.moveToStarted(routing);\n newShard.updateRoutingEntry(routing, true);", "filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java", "status": "modified" }, { "diff": "@@ -31,7 +31,9 @@\n import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n import java.util.Arrays;\n import java.util.concurrent.atomic.AtomicInteger;\n \n@@ -95,7 +97,9 @@ public void afterIndexShardDeleted(ShardId shardId, Settings indexSettings) {\n ShardRoutingHelper.initialize(newRouting, nodeId);\n IndexShard shard = index.createShard(0, newRouting);\n shard.updateRoutingEntry(newRouting, true);\n- shard.recoverFromStore(newRouting, new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT));\n+ final DiscoveryNode localNode = new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ 
shard.markAsRecovering(\"store\", new RecoveryState(shard.shardId(), newRouting.primary(), RecoveryState.Type.SNAPSHOT, newRouting.restoreSource(), localNode));\n+ shard.recoverFromStore(localNode);\n newRouting = new ShardRouting(newRouting);\n ShardRoutingHelper.moveToStarted(newRouting);\n shard.updateRoutingEntry(newRouting, true);", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java", "status": "modified" } ] }
{ "body": "Caught numerous times by our RandomGeoCollection test, the ShapeCollection.BoundingBox calculation in Spatial4j is wrong... this is a known issue that has not yet been corrected in Spatial4j https://github.com/spatial4j/spatial4j/issues/77 \n\nFor now Elasticsearch ShapeCollection corrects this, but the fix is not used internally by Spatial4j JtsGeometry. Therefore any Multi-geometry (e.g., multipoint, multilinestring, multipolygon) which creates a spatial4j ShapeCollection under the hood, suffers from this bug.\n\nThere is a near term solution that will be applied until we can fix the fundamental problem within Spatial4j.\n", "comments": [], "number": 9904, "title": "[GEO] Incorrect bounding box for Multi-geometry " }
{ "body": "A long time coming this upgrades to Spatial4J 0.5 which includes the fix for calculating a Multi-geometry bounding box. Duplicate code is removed\n\ncloses #9904\n", "number": 14269, "review_comments": [], "title": "Update to spatial4j 0.5 for correct Multi-Geometry" }
{ "commits": [ { "message": "Fix Multi-geometry bbox\n\nA long time coming this Upgrades to Spatial4J 0.5 which includes the fix for calculating a Multi-geometry bounding box." } ], "files": [ { "diff": "@@ -28,11 +28,7 @@\n import java.util.List;\n \n /**\n- * Overrides bounding box logic in ShapeCollection base class to comply with\n- * OGC OpenGIS Abstract Specification: An Object Model for Interoperable Geoprocessing.\n- *\n- * NOTE: This algorithm is O(N) and can possibly be improved O(log n) using an internal R*-Tree\n- * data structure for a collection of bounding boxes\n+ * Extends spatial4j ShapeCollection for points_only shape indexing support\n */\n public class XShapeCollection<S extends Shape> extends ShapeCollection<S> {\n \n@@ -49,42 +45,4 @@ public boolean pointsOnly() {\n public void setPointsOnly(boolean pointsOnly) {\n this.pointsOnly = pointsOnly;\n }\n-\n- @Override\n- protected Rectangle computeBoundingBox(Collection<? extends Shape> shapes, SpatialContext ctx) {\n- Rectangle retBox = shapes.iterator().next().getBoundingBox();\n- for (Shape geom : shapes) {\n- retBox = expandBBox(retBox, geom.getBoundingBox());\n- }\n- return retBox;\n- }\n-\n- /**\n- * Spatial4J shapes have no knowledge of directed edges. For this reason, a bounding box\n- * that wraps the dateline can have a min longitude that is mathematically &gt; than the\n- * Rectangles' minX value. This is an issue for geometric collections (e.g., MultiPolygon\n- * and ShapeCollection) Until geometry logic can be cleaned up in Spatial4J, ES provides\n- * the following expansion algorithm for GeometryCollections\n- */\n- private Rectangle expandBBox(Rectangle bbox, Rectangle expand) {\n- if (bbox.equals(expand) || bbox.equals(SpatialContext.GEO.getWorldBounds())) {\n- return bbox;\n- }\n-\n- double minX = bbox.getMinX();\n- double eMinX = expand.getMinX();\n- double maxX = bbox.getMaxX();\n- double eMaxX = expand.getMaxX();\n- double minY = bbox.getMinY();\n- double eMinY = expand.getMinY();\n- double maxY = bbox.getMaxY();\n- double eMaxY = expand.getMaxY();\n-\n- bbox.reset(Math.min(Math.min(minX, maxX), Math.min(eMinX, eMaxX)),\n- Math.max(Math.max(minX, maxX), Math.max(eMinX, eMaxX)),\n- Math.min(Math.min(minY, maxY), Math.min(eMinY, eMaxY)),\n- Math.max(Math.max(minY, maxY), Math.max(eMinY, eMaxY)));\n-\n- return bbox;\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/common/geo/XShapeCollection.java", "status": "modified" }, { "diff": "@@ -298,7 +298,6 @@ public void testShapeFetchingPath() throws Exception {\n assertHitCount(result, 1);\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/9904\")\n public void testShapeFilterWithRandomGeoCollection() throws Exception {\n // Create a random geometry collection.\n GeometryCollectionBuilder gcb = RandomShapeGenerator.createGeometryCollection(getRandom());", "filename": "core/src/test/java/org/elasticsearch/search/geo/GeoShapeIntegrationIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1 @@\n+6e16edaf6b1ba76db7f08c2f3723fce3b358ecc3\n\\ No newline at end of file", "filename": "distribution/licenses/spatial4j-0.5.jar.sha1", "status": "added" }, { "diff": "@@ -0,0 +1,15 @@\n+About This Content\n+\n+May 22, 2015\n+\n+License\n+\n+The Eclipse Foundation makes available all content in this plug-in (\"Content\"). Unless otherwise indicated below, the\n+Content is provided to you under the terms and conditions of the Apache License, Version 2.0. 
A copy of the Apache\n+License, Version 2.0 is available at http://www.apache.org/licenses/LICENSE-2.0.txt\n+\n+If you did not receive this Content directly from the Eclipse Foundation, the Content is being redistributed by another\n+party (\"Redistributor\") and different terms and conditions may apply to your use of any object code in the Content.\n+Check the Redistributor’s license that was provided with the Content. If no such license exists, contact the\n+Redistributor. Unless otherwise indicated below, the terms and conditions of the Apache License, Version 2.0 still apply\n+to any source code in the Content and such source code may be obtained at http://www.eclipse.org](http://www.eclipse.org.\n\\ No newline at end of file", "filename": "distribution/licenses/spatial4j-ABOUT.txt", "status": "added" }, { "diff": "@@ -1 +1,100 @@\n- \n+Eclipse Foundation Software User Agreement\n+\n+April 9, 2014\n+\n+Usage Of Content\n+\n+THE ECLIPSE FOUNDATION MAKES AVAILABLE SOFTWARE, DOCUMENTATION, INFORMATION AND/OR OTHER MATERIALS FOR OPEN SOURCE \n+PROJECTS (COLLECTIVELY \"CONTENT\"). USE OF THE CONTENT IS GOVERNED BY THE TERMS AND CONDITIONS OF THIS AGREEMENT AND/OR \n+THE TERMS AND CONDITIONS OF LICENSE AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW. BY USING THE CONTENT, YOU AGREE\n+THAT YOUR USE OF THE CONTENT IS GOVERNED BY THIS AGREEMENT AND/OR THE TERMS AND CONDITIONS OF ANY APPLICABLE LICENSE\n+AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW. IF YOU DO NOT AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT\n+AND THE TERMS AND CONDITIONS OF ANY APPLICABLE LICENSE AGREEMENTS OR NOTICES INDICATED OR REFERENCED BELOW, THEN YOU MAY\n+NOT USE THE CONTENT.\n+\n+Applicable Licenses\n+\n+Unless otherwise indicated, all Content made available by the Eclipse Foundation is provided to you under the terms and \n+conditions of the Eclipse Public License Version 1.0 (\"EPL\"). A copy of the EPL is provided with this Content and is\n+also available at http://www.eclipse.org/legal/epl-v10.html. For purposes of the EPL, \"Program\" will mean the Content.\n+\n+Content includes, but is not limited to, source code, object code, documentation and other files maintained in the \n+Eclipse Foundation source code repository (\"Repository\") in software modules (\"Modules\") and made available as\n+downloadable archives (\"Downloads\").\n+\n+* Content may be structured and packaged into modules to facilitate delivering, extending, and upgrading the Content. \n+ Typical modules may include plug-ins (\"Plug-ins\"), plug-in fragments (\"Fragments\"), and features (\"Features\").\n+* Each Plug-in or Fragment may be packaged as a sub-directory or JAR (Java™ ARchive) in a directory named \"plugins\".\n+* A Feature is a bundle of one or more Plug-ins and/or Fragments and associated material. Each Feature may be packaged\n+ as a sub-directory in a directory named \"features\". Within a Feature, files named \"feature.xml\" may contain a list \n+ of the names and version numbers of the Plug-ins and/or Fragments associated with that Feature.\n+* Features may also include other Features (\"Included Features\"). Within a Feature, files named \"feature.xml\" may \n+ contain a list of the names and version numbers of Included Features.\n+\n+The terms and conditions governing Plug-ins and Fragments should be contained in files named \"about.html\" (\"Abouts\"). \n+The terms and conditions governing Features and Included Features should be contained in files named \"license.html\" \n+(\"Feature Licenses\"). 
Abouts and Feature Licenses may be located in any directory of a Download or Module including, but\n+not limited to the following locations:\n+\n+* The top-level (root) directory\n+* Plug-in and Fragment directories\n+* Inside Plug-ins and Fragments packaged as JARs\n+* Sub-directories of the directory named \"src\" of certain Plug-ins\n+* Feature directories\n+\n+Note: if a Feature made available by the Eclipse Foundation is installed using the Provisioning Technology (as defined \n+below), you must agree to a license (\"Feature Update License\") during the installation process. If the Feature contains \n+Included Features, the Feature Update License should either provide you with the terms and conditions governing the \n+Included Features or inform you where you can locate them. Feature Update Licenses may be found in the \"license\" \n+property of files named \"feature.properties\" found within a Feature. Such Abouts, Feature Licenses, and Feature Update \n+Licenses contain the terms and conditions (or references to such terms and conditions) that govern your use of the \n+associated Content in that directory.\n+\n+THE ABOUTS, FEATURE LICENSES, AND FEATURE UPDATE LICENSES MAY REFER TO THE EPL OR OTHER LICENSE AGREEMENTS, NOTICES OR \n+TERMS AND CONDITIONS. SOME OF THESE OTHER LICENSE AGREEMENTS MAY INCLUDE (BUT ARE NOT LIMITED TO):\n+\n+* Eclipse Distribution License Version 1.0 (available at http://www.eclipse.org/licenses/edl-v10.html)\n+* Common Public License Version 1.0 (available at http://www.eclipse.org/legal/cpl-v10.html)\n+* Apache Software License 1.1 (available at http://www.apache.org/licenses/LICENSE)\n+* Apache Software License 2.0 (available at http://www.apache.org/licenses/LICENSE-2.0)\n+* Mozilla Public License Version 1.1 (available at http://www.mozilla.org/MPL/MPL-1.1.html)\n+\n+IT IS YOUR OBLIGATION TO READ AND ACCEPT ALL SUCH TERMS AND CONDITIONS PRIOR TO USE OF THE CONTENT. If no About, Feature\n+License, or Feature Update License is provided, please contact the Eclipse Foundation to determine what terms and \n+conditions govern that particular Content.\n+\n+### Use of Provisioning Technology\n+\n+The Eclipse Foundation makes available provisioning software, examples of which include, but are not limited to, p2 and \n+the Eclipse Update Manager (\"Provisioning Technology\") for the purpose of allowing users to install software,\n+documentation, information and/or other materials (collectively \"Installable Software\"). This capability is provided\n+with the intent of allowing such users to install, extend and update Eclipse-based products. Information about packaging\n+Installable Software is available at http://eclipse.org/equinox/p2/repository_packaging.html (\"Specification\").\n+\n+You may use Provisioning Technology to allow other parties to install Installable Software. You shall be responsible for\n+enabling the applicable license agreements relating to the Installable Software to be presented to, and accepted by, the\n+users of the Provisioning Technology in accordance with the Specification. By using Provisioning Technology in such a\n+manner and making it available in accordance with the Specification, you further acknowledge your agreement to, and the\n+acquisition of all necessary rights to permit the following:\n+\n+1. 
A series of actions may occur (\"Provisioning Process\") in which a user may execute the Provisioning Technology on a\n+ machine (\"Target Machine\") with the intent of installing, extending or updating the functionality of an \n+ Eclipse-based product.\n+2. During the Provisioning Process, the Provisioning Technology may cause third party Installable Software or a portion\n+ thereof to be accessed and copied to the Target Machine.\n+3. Pursuant to the Specification, you will provide to the user the terms and conditions that govern the use of the\n+ Installable Software (\"Installable Software Agreement\") and such Installable Software Agreement shall be accessed \n+ from the Target Machine in accordance with the Specification. Such Installable Software Agreement must inform the\n+ user of the terms and conditions that govern the Installable Software and must solicit acceptance by the end user in\n+ the manner prescribed in such Installable Software Agreement. Upon such indication of agreement by the user, the\n+ provisioning Technology will complete installation of the Installable Software.\n+\n+Cryptography\n+\n+Content may contain encryption software. The country in which you are currently may have restrictions on the import, \n+possession, and use, and/or re-export to another country, of encryption software. BEFORE using any encryption software,\n+please check the country's laws, regulations and policies concerning the import, possession, or use, and re-export of\n+encryption software, to see if this is permitted.\n+\n+Java and all Java-based trademarks are trademarks of Oracle Corporation in the United States, other countries,\n+or both.\n\\ No newline at end of file", "filename": "distribution/licenses/spatial4j-NOTICE.txt", "status": "modified" }, { "diff": "@@ -290,7 +290,7 @@\n <dependency>\n <groupId>com.spatial4j</groupId>\n <artifactId>spatial4j</artifactId>\n- <version>0.4.1</version>\n+ <version>0.5</version>\n </dependency>\n <dependency>\n <groupId>com.vividsolutions</groupId>", "filename": "pom.xml", "status": "modified" } ] }
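What the Spatial4J 0.5 upgrade buys, in concrete terms: a multi-geometry whose parts sit on either side of the dateline should now get a dateline-wrapping bounding box instead of one spanning most of the globe, which is the behavior the previously muted RandomGeoCollection test relies on. A small sketch, under the assumption that the 0.5 artifact keeps the `com.spatial4j.core` package layout used by the dependency added above:

```java
import com.spatial4j.core.context.SpatialContext;
import com.spatial4j.core.shape.Point;
import com.spatial4j.core.shape.Rectangle;
import com.spatial4j.core.shape.ShapeCollection;

import java.util.Arrays;

public class MultiGeometryBBoxSketch {
    public static void main(String[] args) {
        SpatialContext ctx = SpatialContext.GEO;
        // Two points straddling the dateline, i.e. a minimal "multipoint".
        Point east = ctx.makePoint(170, 0);
        Point west = ctx.makePoint(-170, 0);
        ShapeCollection<Point> multiPoint = new ShapeCollection<>(Arrays.asList(east, west), ctx);
        // With 0.5 the computed box should be the narrow, dateline-crossing one
        // rather than a box covering longitudes -170..170.
        Rectangle bbox = multiPoint.getBoundingBox();
        System.out.println(bbox);
    }
}
```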
{ "body": "The setting cluster.routing.allocation.cluster_concurrent_rebalance appears to be ignored when moving shard off a node that has been excluded from allocation with the setting cluster.routing.allocation.exclude._ip .\n\nES Version: 1.7.2\n\nRepeatedly experienced with the following steps:\nset cluster.routing.allocation.cluster_concurrent_rebalance to 1\nexclude node from allocation with cluster.routing.allocation.exclude._ip\n\nThis results in 80 shards rebalancing at a time until all shards are removed from the excluded node.\n", "comments": [ { "body": "Investigation required\n", "created_at": "2015-10-16T09:19:20Z" }, { "body": "I can confirm the issue. The root cause is that rebalancing constraints are not taken into consideration when shards that can no longer remain on a node need to be moved. AFAICS, it affects the following options:\n- cluster.routing.allocation.cluster_concurrent_rebalance\n- cluster.routing.allocation.allow_rebalance\n- cluster.routing.rebalance.enable\n- index.routing.rebalance.enable\n- rebalance_only_when_active\n", "created_at": "2015-10-22T16:05:01Z" } ], "number": 14057, "title": "Cluster concurrent rebalance ignored on node allocation exclusion" }
{ "body": "When a shard can no longer remain on a node (because disk is full or some exclude filters are set in place), it is moved to a different node. Currently, rebalancing constraints are not taken into consideration when this move takes place. An example for this is #14057: `cluster.routing.allocation.cluster_concurrent_rebalance` is set to 1 but 80 shards are moved off the node in one go.\n\nThis PR checks rebalancing constraints when shards are moved from a node they can no longer remain on. The constraints that are affected by this are the following:\n- cluster.routing.allocation.cluster_concurrent_rebalance\n- cluster.routing.allocation.allow_rebalance\n- cluster.routing.rebalance.enable\n- index.routing.rebalance.enable\n- rebalance_only_when_active\n\nCloses #14057\n", "number": 14259, "review_comments": [], "title": "Check rebalancing constraints when shards are moved from a node they can no longer remain on" }
{ "commits": [ { "message": "Check rebalancing constraints when shards are moved from a node they can no longer remain on\n\nCloses #14259\nCloses #14057" } ], "files": [ { "diff": "@@ -511,7 +511,9 @@ public boolean move(ShardRouting shard, RoutingNode node ) {\n continue;\n }\n RoutingNode target = routingNodes.node(currentNode.getNodeId());\n- Decision decision = allocation.deciders().canAllocate(shard, target, allocation);\n+ Decision allocationDecision = allocation.deciders().canAllocate(shard, target, allocation);\n+ Decision rebalanceDecision = allocation.deciders().canRebalance(shard, allocation);\n+ Decision decision = new Decision.Multi().add(allocationDecision).add(rebalanceDecision);\n if (decision.type() == Type.YES) { // TODO maybe we can respect throttling here too?\n sourceNode.removeShard(shard);\n ShardRouting targetRelocatingShard = routingNodes.relocate(shard, target.nodeId(), allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE));", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n@@ -36,6 +37,7 @@\n \n import static java.util.Collections.singletonMap;\n import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.STARTED;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.hamcrest.Matchers.equalTo;\n \n@@ -162,4 +164,69 @@ public void testIndexFilters() {\n assertThat(startedShard.currentNodeId(), Matchers.anyOf(equalTo(\"node1\"), equalTo(\"node4\")));\n }\n }\n+\n+ public void testRebalanceAfterShardsCannotRemainOnNode() {\n+ AllocationService strategy = createAllocationService(settingsBuilder().build());\n+\n+ logger.info(\"Building initial routing table\");\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test1\").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(0))\n+ .put(IndexMetaData.builder(\"test2\").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test1\"))\n+ .addAsNew(metaData.index(\"test2\"))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ logger.info(\"--> adding two nodes and performing rerouting\");\n+ DiscoveryNode node1 = newNode(\"node1\", singletonMap(\"tag1\", \"value1\"));\n+ DiscoveryNode node2 = newNode(\"node2\", singletonMap(\"tag1\", \"value2\"));\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(node1).put(node2)).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertThat(clusterState.getRoutingNodes().node(node1.getId()).numberOfShardsWithState(INITIALIZING), equalTo(2));\n+ 
assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(INITIALIZING), equalTo(2));\n+\n+ logger.info(\"--> start the shards (only primaries)\");\n+ routingTable = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ logger.info(\"--> make sure all shards are started\");\n+ assertThat(clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(4));\n+\n+ logger.info(\"--> disable allocation for node1 and reroute\");\n+ strategy = createAllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", \"1\")\n+ .put(\"cluster.routing.allocation.exclude.tag1\", \"value1\")\n+ .build());\n+\n+ logger.info(\"--> move shards from node1 to node2\");\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ logger.info(\"--> check that concurrent rebalance only allows 1 shard to move\");\n+ assertThat(clusterState.getRoutingNodes().node(node1.getId()).numberOfShardsWithState(STARTED), equalTo(1));\n+ assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(INITIALIZING), equalTo(1));\n+ assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(STARTED), equalTo(2));\n+\n+ logger.info(\"--> start the shards (only primaries)\");\n+ routingTable = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ logger.info(\"--> move second shard from node1 to node2\");\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(INITIALIZING), equalTo(1));\n+ assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(STARTED), equalTo(3));\n+\n+ logger.info(\"--> start the shards (only primaries)\");\n+ routingTable = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertThat(clusterState.getRoutingNodes().node(node2.getId()).numberOfShardsWithState(STARTED), equalTo(4));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/FilterRoutingTests.java", "status": "modified" } ] }
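The scenario from #14057 boils down to two settings being active at once: an allocation exclude filter that forces shards off a node, and a cap on concurrent rebalances that was previously ignored for those forced moves. A minimal sketch of composing exactly the settings the new test uses (the node attribute `tag1`/`value1` comes from the test; a real cluster would typically use `_ip` or `_name` instead):

```java
import org.elasticsearch.common.settings.Settings;

public class RebalanceConstraintSketch {
    public static void main(String[] args) {
        Settings settings = Settings.settingsBuilder()
                // Allow only one shard relocation due to rebalancing at a time.
                .put("cluster.routing.allocation.cluster_concurrent_rebalance", "1")
                // Exclude nodes carrying this attribute value, forcing their shards to move away.
                .put("cluster.routing.allocation.exclude.tag1", "value1")
                .build();
        // Before this PR, moves triggered by the exclude filter skipped canRebalance();
        // with the fix, the concurrent_rebalance cap throttles them to one at a time.
        System.out.println(settings.getAsMap());
    }
}
```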
{ "body": "Using a 2.0.0-snapshot build, I see odd logging behavior for the low disk watermark. \n\nThe free-disk calculation is working fine, but the logging for this is quite odd. After starting a single-node cluster with 5 shards (each with 1 shard, and 1 unassigned replica), I see several log messages about low disk watermark all at once. 30 seconds later, I see another group of low disk watermark messages, but each time there are more messages than previously. First time was 4 identical log lines, then 7, then 10, then 13.\n\nI was able to reproduce several times after restarting the nodes. I inserted newlines into the excerpted logs below. This could become a real problem if the trend continues. \n\n```\n[2015-10-19 09:21:12,202][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:12,202][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:12,203][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:12,203][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n\n\n[2015-10-19 09:21:42,201][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:42,201][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:42,201][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:42,201][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:42,202][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:21:42,202][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this 
node\n[2015-10-19 09:21:42,202][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n\n\n[2015-10-19 09:22:12,206][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,207][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,208][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,208][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:12,208][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n\n\n[2015-10-19 09:22:42,211][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,211][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on 
[XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,211][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,212][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:22:42,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n\n\n[2015-10-19 09:23:12,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this 
node\n[2015-10-19 09:23:12,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,213][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,214][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on 
[XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:12,215][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.3gb[14.7%], replicas will not be assigned to this node\n\n\n[2015-10-19 09:23:42,217][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,217][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,217][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,217][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,217][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this 
node\n[2015-10-19 09:23:42,218][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,219][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n[2015-10-19 09:23:42,220][INFO ][cluster.routing.allocation.decider] [Solara] low disk watermark [85%] exceeded on [XpVtZKlVQFu80lnBEGdh9A][Solara][/skearns/es/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 34.4gb[14.7%], replicas will not be assigned to this node\n```\n", "comments": [ { "body": "@dakrone could you take a look at this please?\n", "created_at": "2015-10-20T11:52:13Z" }, { "body": "@skearns64 When I try to reproduce this, I am getting only a single logging message (extra spacing added by me):\n\n```\n~/scratch/elasticsearch-2.0.0-SNAPSHOT λ bin/elasticsearch\n[2015-10-20 06:33:54,293][INFO ][node ] [Aftershock] version[2.0.0-SNAPSHOT], pid[4608], build[e1a7cc2/2015-10-20T12:29:09Z]\n[2015-10-20 06:33:54,293][INFO ][node ] [Aftershock] initializing ...\n[2015-10-20 06:33:54,355][INFO ][plugins ] [Aftershock] loaded [], sites []\n[2015-10-20 06:33:54,441][INFO ][env ] [Aftershock] using [1] data paths, mounts [[/home (/dev/mapper/fedora_thulcandra-home)]], net usable_space [270.9gb], net total_space [396.1gb], spins? 
[no], types [ext4]\n[2015-10-20 06:33:55,850][INFO ][node ] [Aftershock] initialized\n[2015-10-20 06:33:55,851][INFO ][node ] [Aftershock] starting ...\n[2015-10-20 06:33:55,976][INFO ][transport ] [Aftershock] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}\n[2015-10-20 06:33:55,985][INFO ][discovery ] [Aftershock] elasticsearch/4jyUZlbuQJik6bNqb7bxuA\n[2015-10-20 06:33:59,021][INFO ][cluster.service ] [Aftershock] new_master {Aftershock}{4jyUZlbuQJik6bNqb7bxuA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)\n[2015-10-20 06:33:59,052][INFO ][http ] [Aftershock] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}\n[2015-10-20 06:33:59,053][INFO ][node ] [Aftershock] started\n[2015-10-20 06:33:59,072][INFO ][gateway ] [Aftershock] recovered [0] indices into cluster_state\n[2015-10-20 06:34:02,803][INFO ][cluster.metadata ] [Aftershock] [test] creating index, cause [api], templates [], shards [5]/[1], mappings []\n[2015-10-20 06:34:29,033][INFO ][cluster.routing.allocation.decider] [Aftershock] low disk watermark [25%] exceeded on [4jyUZlbuQJik6bNqb7bxuA][Aftershock][/home/hinmanm/scratch/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 270.9gb[68.3%], replicas will not be assigned to this node\n\n[2015-10-20 06:34:59,028][INFO ][cluster.routing.allocation.decider] [Aftershock] low disk watermark [25%] exceeded on [4jyUZlbuQJik6bNqb7bxuA][Aftershock][/home/hinmanm/scratch/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 270.9gb[68.3%], replicas will not be assigned to this node\n\n[2015-10-20 06:35:29,028][INFO ][cluster.routing.allocation.decider] [Aftershock] low disk watermark [25%] exceeded on [4jyUZlbuQJik6bNqb7bxuA][Aftershock][/home/hinmanm/scratch/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 270.9gb[68.3%], replicas will not be assigned to this node\n\n[2015-10-20 06:35:59,027][INFO ][cluster.routing.allocation.decider] [Aftershock] low disk watermark [25%] exceeded on [4jyUZlbuQJik6bNqb7bxuA][Aftershock][/home/hinmanm/scratch/elasticsearch-2.0.0-SNAPSHOT/data/elasticsearch/nodes/0] free: 270.9gb[68.3%], replicas will not be assigned to this node\n```\n\nYou mentioned using a 2.0.0-snapshot build, what SHA are you building? I am building from the 2.0 branch, e1a7cc2.\n", "created_at": "2015-10-20T12:38:26Z" }, { "body": "More info on this. The multiple logging lines is caused by `DiskThresholdDecider` not being bound as a singleton. Usually this wouldn't cause any side effects for Elasticsearch, however, Marvel injects the DTD into its `NodesStatsCollecter` with a `Provider<...>`, which was creating a new instance of `DiskThresholdDecider` every time it collected the stats. The additional instances were then logging, which is why the logging increased in number every time it ran.\n", "created_at": "2015-10-21T19:53:17Z" } ], "number": 14194, "title": "Low Disk Watermark Logging Oddness" }
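The root-cause comment above hinges on a general Guice behavior: a `Provider<T>` over a class that is not bound as a singleton constructs a fresh instance on every `get()`. Below is a minimal sketch of that effect, written against stock Google Guice with invented class names (Elasticsearch ships its own fork under `org.elasticsearch.common.inject`, but the scoping rules are the same); it is an illustration, not code from the project.

```java
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Provider;

public class ProviderScopeSketch {

    public static class Decider {
        public Decider() {
            // stands in for a decider registering itself as a logging listener
            System.out.println("new Decider instance created");
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(); // no explicit binding for Decider
        Provider<Decider> provider = injector.getProvider(Decider.class);

        // Each loop iteration stands in for one stats-collection cycle: with an
        // unscoped (just-in-time) binding, every get() constructs another instance,
        // which is why the duplicated log lines grow over time in the report above.
        for (int cycle = 0; cycle < 3; cycle++) {
            provider.get();
        }
    }
}
```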
{ "body": "The ExtensionPoint.ClassSet binds adds the extension classes to a a Multibinder and binds\nthe classes and calls the asEagerSingleton method on the multibinder. This does not actually\ncreate a singleton. Instead we first bind the class as a singleton and add then add the class\nto the multibinder.\n\nCloses #14194\n", "number": 14232, "review_comments": [], "title": "Properly bind ClassSet extensions as singletons" }
{ "commits": [ { "message": "properly bind ClassSet extensions as singletons\n\nThe ExtensionPoint.ClassSet binds adds the extension classes to a a Multibinder and binds\nthe classes and calls the asEagerSingleton method on the multibinder. This does not actually\ncreate a singleton. Instead we first bind the class as a singleton and add then add the class\nto the multibinder.\n\nCloses #14194" } ], "files": [ { "diff": "@@ -191,7 +191,8 @@ public final void registerExtension(Class<? extends T> extension) {\n protected final void bindExtensions(Binder binder) {\n Multibinder<T> allocationMultibinder = Multibinder.newSetBinder(binder, extensionClass);\n for (Class<? extends T> clazz : extensions) {\n- allocationMultibinder.addBinding().to(clazz).asEagerSingleton();\n+ binder.bind(clazz).asEagerSingleton();\n+ allocationMultibinder.addBinding().to(clazz);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java", "status": "modified" } ] }
{ "body": "If I have a plugin that registers a listener for `beforeIndexShardCreated` via the `indexModules` endpoint, as so:\n\n``` java\npublic class ListenerPlugin extends Plugin {\n /** ... other stuff here */\n @Override\n public Collection<Module> indexModules(Settings settings) {\n List<Module> modules = new ArrayList<>(super.indexModules(settings));\n modules.add(new ListenerModule());\n return modules;\n }\n}\n```\n\nAnd then does:\n\n``` java\npublic class Listener extends IndicesLifecycle.Listener {\n\n @Inject\n public Listener(Settings settings, IndicesLifecycle indicesLifecycle, Environment env) {\n indicesLifecycle.addListener(this);\n System.out.println(\"Listener created!\");\n }\n\n @Override\n public void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) {\n System.out.println(\"Entering beforeIndexShardCreated()\");\n\n System.out.println(\"Exiting beforeIndexShardCreated()\");\n }\n}\n```\n\nI expect that the listener's `beforeIndexShardCreated` method will only be called once for each newly created shard, however, in some cases, it is called twice.\n\nHere's how I reproduced this:\n- Install plugin on all ES nodes\n- Start 3 ES nodes\n- Create an index with 1 primary shard and 0 replicas\n\n``` json\nPOST /myindex\n{\n \"index\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 0,\n \"data_path\": \"/tmp/bar/foo\",\n \"shadow_replicas\": true\n }\n}\n```\n- Verify that the shard is created and I see:\n\nEntering beforeIndexShardCreated()\nExiting beforeIndexShardCreated()\n\n**Once**, in the logs (because 1 shard was created)\n- Relocate the shard from a node to another node via the `_cluster/reroute` API\n\nThen I see this in the logs:\n\n```\n[2015-09-01 15:52:16,468][DEBUG][org.elasticsearch.indices.cluster] [Thermo] [myindex][0] creating shard\nEntering beforeIndexShardCreated()\nExiting beforeIndexShardCreated()\nEntering beforeIndexShardCreated()\nExiting beforeIndexShardCreated()\n```\n\nThe listener is being called twice. Looking through the logs, I see the log line `Listener created!` **twice** on this particular node, once when the index was created but no shards were placed on this node, and another time when the actual shard is moved to the node.\n\nIt looks like we create class more than once per index, which seems like something we should try to avoid?\n", "comments": [ { "body": "@dakrone I tried to find out how this happens the only thing I could suspect is Guice causing two instances of Listener to be created. I would be very curious to see the code of ListenerModule.\n", "created_at": "2015-10-06T16:16:58Z" }, { "body": "@dakrone can you reproduce it and print the stacktrace for each of the invocations?\n", "created_at": "2015-10-08T08:50:23Z" }, { "body": "ok I can reproduce this problem. It's caused by the fact that the master creates the index once before we actually creating it on other nodes to ensure we can fully create it and don't throw an exception. 
If you print stacktraces it looks like this:\n\n```\n[2015-10-08 11:23:50,331][INFO ][org.elasticsearch.index ] [ListenerIT#testIt]: starting test\n[2015-10-08 11:23:50,604][WARN ][LISTENER ] Listener created!\njava.lang.RuntimeException\n at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:364)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:383)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-10-08 11:23:50,688][INFO ][org.elasticsearch.cluster.metadata] [node_s0] [test] creating index, cause [api], templates [random_index_template], shards [1]/[0], mappings [_default_]\n[2015-10-08 11:23:50,710][WARN ][LISTENER ] Listener created!\njava.lang.RuntimeException\n at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n at 
org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewIndices(IndicesClusterStateService.java:303)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:172)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:493)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\na short term workaround for this is to de-register the listener which it should do anyway like this:\n\n``` Java\n @Override\n public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n lifecycle.removeListener(this);\n }\n```\n\nBut we should really turn this into an index level component that we destroy once the index is destroyed. ie instead of `IndicesLifecycle` we would have an `IndexLifecycle` that we can trash after the index is destroyed\n", "created_at": "2015-10-08T09:27:33Z" }, { "body": "@dakrone I am going to push this out to 2.1 for now\n", "created_at": "2015-10-08T09:35:39Z" }, { "body": "That’s indeed why the listener is created twice, but does it explain why we call beforeIndexShardCreated twice- the master doesn’t create shards... That’s where I got stuck looking at this…\n\n> On 08 Oct 2015, at 11:27, Simon Willnauer notifications@github.com wrote:\n> \n> ok I can reproduce this problem. It's caused by the fact that the master creates the index once before we actually creating it on other nodes to ensure we can fully create it and don't throw an exception. 
If you print stacktraces it looks like this:\n> \n> [2015-10-08 11:23:50,331][INFO ][org.elasticsearch.index ] [ListenerIT#testIt]: starting test\n> [2015-10-08 11:23:50,604][WARN ][LISTENER ] Listener created!\n> java.lang.RuntimeException\n> at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n> at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n> at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n> at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n> at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n> at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n> at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n> at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n> at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n> at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n> at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n> at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:364)\n> at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:383)\n> at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n> at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n> at java.lang.Thread.run(Thread.java:745)\n> [2015-10-08 11:23:50,688][INFO ][org.elasticsearch.cluster.metadata] [node_s0] [test] creating index, cause [api], templates [random_index_template], shards [1]/[0], mappings [_default_]\n> [2015-10-08 11:23:50,710][WARN ][LISTENER ] Listener created!\n> java.lang.RuntimeException\n> at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n> at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n> at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n> at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n> at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n> at 
org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n> at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n> at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n> at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n> at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n> at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n> at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewIndices(IndicesClusterStateService.java:303)\n> at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:172)\n> at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:493)\n> at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n> at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n> at java.lang.Thread.run(Thread.java:745)\n> \n> a short term workaround for this is to de-register the listener which it should do anyway like this:\n> \n> ```\n> @Override\n> ```\n> \n> public void afterIndexClosed(Index index, @IndexSettings Settings\n> indexSettings) {\n> lifecycle\n> .removeListener(this\n> );\n> }\n> \n> But we should really turn this into an index level component that we destroy once the index is destroyed. ie instead of IndicesLifecycle we would have an IndexLifecycle that we can trash after the index is destroyed\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "created_at": "2015-10-08T09:36:23Z" }, { "body": "I also think it's not necessarily a bug, if a listener does this it will leak once an index is moved to another node entirely so it has to cleanup after himself.\n", "created_at": "2015-10-08T09:36:44Z" }, { "body": "> That’s indeed why the listener is created twice, but does it explain why we call beforeIndexShardCreated twice- the master doesn’t create shards... That’s where I got stuck looking at this…\n\nit does never unregister itself. 
so if you allocate a shard on the node that happens to be the master as well you will AGAIN register the listener (a new instance) once it's actually creating the shard.\n", "created_at": "2015-10-08T09:37:51Z" }, { "body": "here is an output of this test https://gist.github.com/s1monw/1a552a6abb7f63c51e6f\n\n```\n...\n[2015-10-08 11:39:40,526][WARN ][LISTENER ] Listener created!\njava.lang.RuntimeException\n at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:364)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:383)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-10-08 11:39:40,714][INFO ][org.elasticsearch.cluster.metadata] [node_s0] [test] creating index, cause [api], templates [random_index_template], shards [1]/[0], mappings [_default_]\n[2015-10-08 11:39:40,741][WARN ][LISTENER ] Listener created!\njava.lang.RuntimeException\n at org.elasticsearch.index.ListenerIT$MyTestListener.<init>(ListenerIT.java:67)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:422)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at 
org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:116)\n at org.elasticsearch.common.inject.InjectorImpl$5$1.call(InjectorImpl.java:828)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:880)\n at org.elasticsearch.common.inject.InjectorImpl$5.get(InjectorImpl.java:823)\n at org.elasticsearch.common.inject.InheritingState.makeAllBindingsToEagerSingletons(InheritingState.java:157)\n at org.elasticsearch.common.inject.InjectorImpl.readOnlyAllSingletons(InjectorImpl.java:909)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:59)\n at org.elasticsearch.indices.IndicesService.createIndex(IndicesService.java:347)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewIndices(IndicesClusterStateService.java:303)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:172)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:493)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n[2015-10-08 11:39:40,748][WARN ][LISTENER ] RUNNING beforeIndexShardCreated() on instance org.elasticsearch.index.ListenerIT$MyTestListener@25e3d50a \n[2015-10-08 11:39:40,748][WARN ][LISTENER ] RUNNING beforeIndexShardCreated() on instance org.elasticsearch.index.ListenerIT$MyTestListener@7555bbb0 \n[2015-10-08 11:39:40,879][WARN ][LISTENER ] RUNNING afterIndexShardCreated() on instance org.elasticsearch.index.ListenerIT$MyTestListener@25e3d50a\n[2015-10-08 11:39:40,880][WARN ][LISTENER ] RUNNING afterIndexShardCreated() on instance org.elasticsearch.index.ListenerIT$MyTestListener@7555bbb0\n```\n", "created_at": "2015-10-08T09:41:47Z" } ], "number": 13259, "title": "`beforeIndexShardCreated` listener for indexModule called twice" }
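Before the rework described in the next record landed, the short-term workaround quoted in the discussion above was for the listener to clean up after itself. A sketch of what that looks like as a complete class is shown below; the class name is invented, while the overridden signatures are taken from the snippets quoted in the issue.

```java
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.indices.IndicesLifecycle;

public class SelfUnregisteringListener extends IndicesLifecycle.Listener {

    private final IndicesLifecycle indicesLifecycle;

    @Inject
    public SelfUnregisteringListener(Settings settings, IndicesLifecycle indicesLifecycle) {
        this.indicesLifecycle = indicesLifecycle;
        indicesLifecycle.addListener(this);
    }

    @Override
    public void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) {
        // per-shard work goes here
    }

    @Override
    public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {
        // de-register so the instance created for the master-side "dry run" injector
        // does not keep receiving events after its index-level module goes away
        indicesLifecycle.removeListener(this);
    }
}
```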
{ "body": "Today IndicesLifecycle is a per-node class that allows to register\nlisteners at any time. It also requires to de-register if listeners\nare not needed anymore ie. if classes are created per-index / shard etc.\nThey also cause issues where listeners are registered more than once as in #13259\n\nThis commit removes the per-node class and replaces it with an well defined\nextension point that allows listeners to be registered at index creation time\nwithout the need to unregister since listeners are go out of scope if the index\ngoes out of scope. Yet, this still allows to share instances across indices as before\nbut without the risk of double registering them etc.\n\nAll data-structures used for event notifications are now immuatble and can only changes\non index creation time. This removes flexibility to some degree but increases maintainability\nof the interface and the code itself dramatically especially with the step by step removal of\nthe index level dependency injection.\n\nCloses #13259\n", "number": 14217, "review_comments": [ { "body": "Since we now use a logger with an index prefix , this will result in double index name logging `[index][index][0]` \n", "created_at": "2015-10-21T13:39:15Z" }, { "body": "same holds for other logs bellow...\n", "created_at": "2015-10-21T13:44:23Z" }, { "body": "my intellij thinks this is not used?\n", "created_at": "2015-10-21T13:46:40Z" }, { "body": "good!\n", "created_at": "2015-10-21T13:48:20Z" }, { "body": "These cause compilation errors for me...\n", "created_at": "2015-10-21T13:51:54Z" }, { "body": "why did you take this out of the try catch block? I'm fine with keeping this as is, but then the catch IOException clause is probably redundant:\n\n```\ncatch (IOException e) {\n ElasticsearchException ex = new ElasticsearchException(\"failed to create shard\", e);\n ex.setShard(shardId);\n throw ex;\n }\n```\n", "created_at": "2015-10-21T13:54:53Z" }, { "body": "can we move this back into the try? I'm worried that exceptions wouldn't release the shard lock .\n", "created_at": "2015-10-21T13:55:42Z" }, { "body": "This should be `eventListener.indexShardStateChanged(indexShard, null, indexShard.state(), \"shard created\");\n", "created_at": "2015-10-21T13:59:33Z" }, { "body": "I know that this is how it used to be but we add an explanation that this is called before the index is added to the cluster state? created is misleading. \n", "created_at": "2015-10-21T14:01:55Z" }, { "body": "buildInListeners -> buil_t_InListeners (and in other places..)\n", "created_at": "2015-10-21T14:03:27Z" }, { "body": "good!\n", "created_at": "2015-10-21T14:04:30Z" }, { "body": "awesome I will remove :)\n", "created_at": "2015-10-21T14:18:00Z" }, { "body": "copy paste :)\n", "created_at": "2015-10-21T14:18:37Z" }, { "body": "Can we move the settings here?\n", "created_at": "2015-10-21T14:30:05Z" } ], "title": "Replace IndicesLifecycle with a per-index IndexEventListener" }
{ "commits": [ { "message": "Replace IndicesLifecycle with a per-index IndexEventListener\n\nToday IndicesLifecycle is a per-node class that allows to register\nlisteners at any time. It also requires to de-register if listeners\nare not needed anymore ie. if classes are created per-index / shard etc.\nThey also cause issues where listeners are registered more than once as in #13259\n\nThis commit removes the per-node class and replaces it with an well defined\nextension point that allows listeners to be registered at index creation time\nwithout the need to unregister since listeners are go out of scope if the index\ngoes out of scope. Yet, this still allows to share instances across indices as before\nbut without the risk of double registering them etc.\n\nAll data-structures used for event notifications are now immuatble and can only changes\non index creation time. This removes flexibility to some degree but increases maintainability\nof the interface and the code itself dramatically especially with the step by step removal of\nthe index level dependency injection.\n\nCloses #13259" } ], "files": [ { "diff": "@@ -80,13 +80,7 @@\n import java.nio.file.DirectoryStream;\n import java.nio.file.Files;\n import java.nio.file.Path;\n-import java.util.ArrayList;\n-import java.util.Comparator;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Locale;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n \n@@ -313,7 +307,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n // Set up everything, now locally create the index to see that things are ok, and apply\n final IndexMetaData tmpImd = IndexMetaData.builder(request.index()).settings(actualIndexSettings).build();\n // create the index here (on the master) to validate it can be created, as well as adding the mapping\n- indicesService.createIndex(tmpImd);\n+ indicesService.createIndex(tmpImd, Collections.EMPTY_LIST);\n indexCreated = true;\n // now add the mappings\n IndexService indexService = indicesService.indexServiceSafe(request.index());\n@@ -387,7 +381,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n throw e;\n }\n \n- indexService.indicesLifecycle().beforeIndexAddedToCluster(new Index(request.index()),\n+ indexService.getIndexEventListener().beforeIndexAddedToCluster(new Index(request.index()),\n indexMetaData.getSettings());\n \n MetaData newMetaData = MetaData.builder(currentState.metaData())\n@@ -433,29 +427,6 @@ private Map<String, Object> parseMapping(String mappingSource) throws Exception\n }\n }\n \n- private void addMappings(Map<String, Map<String, Object>> mappings, Path mappingsDir) throws IOException {\n- try (DirectoryStream<Path> stream = Files.newDirectoryStream(mappingsDir)) {\n- for (Path mappingFile : stream) {\n- final String fileName = mappingFile.getFileName().toString();\n- if (FileSystemUtils.isHidden(mappingFile)) {\n- continue;\n- }\n- int lastDotIndex = fileName.lastIndexOf('.');\n- String mappingType = lastDotIndex != -1 ? 
mappingFile.getFileName().toString().substring(0, lastDotIndex) : mappingFile.getFileName().toString();\n- try (BufferedReader reader = Files.newBufferedReader(mappingFile, StandardCharsets.UTF_8)) {\n- String mappingSource = Streams.copyToString(reader);\n- if (mappings.containsKey(mappingType)) {\n- XContentHelper.mergeDefaults(mappings.get(mappingType), parseMapping(mappingSource));\n- } else {\n- mappings.put(mappingType, parseMapping(mappingSource));\n- }\n- } catch (Exception e) {\n- logger.warn(\"failed to read / parse mapping [\" + mappingType + \"] from location [\" + mappingFile + \"], ignoring...\", e);\n- }\n- }\n- }\n- }\n-\n private List<IndexTemplateMetaData> findTemplates(CreateIndexClusterStateUpdateRequest request, ClusterState state, IndexTemplateFilter indexTemplateFilter) throws IOException {\n List<IndexTemplateMetaData> templates = new ArrayList<>();\n for (ObjectCursor<IndexTemplateMetaData> cursor : state.metaData().templates().values()) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -36,10 +36,7 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.indices.IndicesService;\n \n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n \n /**\n * Service responsible for submitting add and remove aliases requests\n@@ -98,7 +95,7 @@ public ClusterState execute(final ClusterState currentState) {\n if (indexService == null) {\n // temporarily create the index and add mappings so we can parse the filter\n try {\n- indexService = indicesService.createIndex(indexMetaData);\n+ indexService = indicesService.createIndex(indexMetaData, Collections.EMPTY_LIST);\n if (indexMetaData.getMappings().containsKey(MapperService.DEFAULT_MAPPING)) {\n indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.getMappings().get(MapperService.DEFAULT_MAPPING).source(), false, false);\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java", "status": "modified" }, { "diff": "@@ -172,7 +172,7 @@ Tuple<ClusterState, List<MappingTask>> executeRefreshOrUpdate(final ClusterState\n IndexService indexService = indicesService.indexService(index);\n if (indexService == null) {\n // we need to create the index here, and add the current mapping to it, so we can merge\n- indexService = indicesService.createIndex(indexMetaData);\n+ indexService = indicesService.createIndex(indexMetaData, Collections.EMPTY_LIST);\n removeIndex = true;\n Set<String> typesToIntroduce = new HashSet<>();\n for (MappingTask task : tasks) {\n@@ -350,7 +350,7 @@ public ClusterState execute(final ClusterState currentState) throws Exception {\n continue;\n }\n final IndexMetaData indexMetaData = currentState.metaData().index(index);\n- IndexService indexService = indicesService.createIndex(indexMetaData);\n+ IndexService indexService = indicesService.createIndex(indexMetaData, Collections.EMPTY_LIST);\n indicesToClose.add(indexMetaData.getIndex());\n // make sure to add custom default mapping if exists\n if (indexMetaData.getMappings().containsKey(MapperService.DEFAULT_MAPPING)) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java", "status": "modified" }, { "diff": "@@ -22,24 +22,61 @@\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.inject.AbstractModule;\n import 
org.elasticsearch.common.inject.util.Providers;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.engine.EngineFactory;\n import org.elasticsearch.index.engine.InternalEngineFactory;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexSearcherWrapper;\n \n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n+\n /**\n *\n */\n public class IndexModule extends AbstractModule {\n \n private final IndexMetaData indexMetaData;\n+ private final Settings settings;\n // pkg private so tests can mock\n Class<? extends EngineFactory> engineFactoryImpl = InternalEngineFactory.class;\n Class<? extends IndexSearcherWrapper> indexSearcherWrapper = null;\n+ private final Set<IndexEventListener> indexEventListeners = new HashSet<>();\n+ private IndexEventListener listener;\n+\n \n- public IndexModule(IndexMetaData indexMetaData) {\n+ public IndexModule(Settings settings, IndexMetaData indexMetaData) {\n this.indexMetaData = indexMetaData;\n+ this.settings = settings;\n+ }\n+\n+ public Settings getIndexSettings() {\n+ return settings;\n+ }\n+\n+ public void addIndexEventListener(IndexEventListener listener) {\n+ if (this.listener != null) {\n+ throw new IllegalStateException(\"can't add listener after listeners are frozen\");\n+ }\n+ if (listener == null) {\n+ throw new IllegalArgumentException(\"listener must not be null\");\n+ }\n+ if (indexEventListeners.contains(listener)) {\n+ throw new IllegalArgumentException(\"listener already added\");\n+ }\n+\n+ this.indexEventListeners.add(listener);\n+ }\n+\n+ public IndexEventListener freeze() {\n+ // TODO somehow we need to make this pkg private...\n+ if (listener == null) {\n+ listener = new CompositeIndexEventListener(indexMetaData.getIndex(), settings, indexEventListeners);\n+ }\n+ return listener;\n }\n \n @Override\n@@ -50,12 +87,11 @@ protected void configure() {\n } else {\n bind(IndexSearcherWrapper.class).to(indexSearcherWrapper).asEagerSingleton();\n }\n+ bind(IndexEventListener.class).toInstance(freeze());\n bind(IndexMetaData.class).toInstance(indexMetaData);\n bind(IndexService.class).asEagerSingleton();\n bind(IndexServicesProvider.class).asEagerSingleton();\n bind(MapperService.class).asEagerSingleton();\n bind(IndexFieldDataService.class).asEagerSingleton();\n }\n-\n-\n }", "filename": "core/src/main/java/org/elasticsearch/index/IndexModule.java", "status": "modified" }, { "diff": "@@ -46,28 +46,17 @@\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n import org.elasticsearch.index.query.ParsedQuery;\n-import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.settings.IndexSettingsService;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.index.shard.ShadowIndexShard;\n-import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.index.shard.ShardNotFoundException;\n-import org.elasticsearch.index.shard.ShardPath;\n+import org.elasticsearch.index.shard.*;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.store.IndexStore;\n import org.elasticsearch.index.store.Store;\n-import org.elasticsearch.indices.AliasFilterParsingException;\n-import org.elasticsearch.indices.IndicesService;\n-import 
org.elasticsearch.indices.InternalIndicesLifecycle;\n-import org.elasticsearch.indices.InvalidAliasNameException;\n+import org.elasticsearch.indices.*;\n \n import java.io.Closeable;\n import java.io.IOException;\n import java.nio.file.Path;\n-import java.util.HashMap;\n-import java.util.Iterator;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n@@ -80,7 +69,7 @@\n */\n public class IndexService extends AbstractIndexComponent implements IndexComponent, Iterable<IndexShard> {\n \n- private final InternalIndicesLifecycle indicesLifecycle;\n+ private final IndexEventListener eventListener;\n private final AnalysisService analysisService;\n private final IndexFieldDataService indexFieldData;\n private final BitsetFilterCache bitsetFilterCache;\n@@ -102,15 +91,16 @@ public IndexService(Index index, IndexMetaData indexMetaData, NodeEnvironment no\n BitsetFilterCache bitSetFilterCache,\n IndicesService indicesServices,\n IndexServicesProvider indexServicesProvider,\n- IndexStore indexStore) {\n+ IndexStore indexStore,\n+ IndexEventListener eventListener) {\n super(index, settingsService.indexSettings());\n assert indexMetaData != null;\n this.analysisService = analysisService;\n this.indexFieldData = indexFieldData;\n this.settingsService = settingsService;\n this.bitsetFilterCache = bitSetFilterCache;\n this.indicesServices = indicesServices;\n- this.indicesLifecycle = (InternalIndicesLifecycle) indexServicesProvider.getIndicesLifecycle();\n+ this.eventListener = eventListener;\n this.nodeEnv = nodeEnv;\n this.indexServicesProvider = indexServicesProvider;\n this.indexStore = indexStore;\n@@ -123,8 +113,8 @@ public int numberOfShards() {\n return shards.size();\n }\n \n- public InternalIndicesLifecycle indicesLifecycle() {\n- return this.indicesLifecycle;\n+ public IndexEventListener getIndexEventListener() {\n+ return this.eventListener;\n }\n \n @Override\n@@ -225,7 +215,7 @@ private long getAvgShardSizeInBytes() throws IOException {\n }\n }\n \n- public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n+ public synchronized IndexShard createShard(int sShardId, ShardRouting routing) throws IOException {\n final boolean primary = routing.primary();\n /*\n * TODO: we execute this in parallel but it's a synced method. 
Yet, we might\n@@ -237,13 +227,12 @@ public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n }\n final Settings indexSettings = settingsService.getSettings();\n final ShardId shardId = new ShardId(index, sShardId);\n- ShardLock lock = null;\n boolean success = false;\n Store store = null;\n IndexShard indexShard = null;\n+ final ShardLock lock = nodeEnv.shardLock(shardId, TimeUnit.SECONDS.toMillis(5));\n try {\n- lock = nodeEnv.shardLock(shardId, TimeUnit.SECONDS.toMillis(5));\n- indicesLifecycle.beforeIndexShardCreated(shardId, indexSettings);\n+ eventListener.beforeIndexShardCreated(shardId, indexSettings);\n ShardPath path;\n try {\n path = ShardPath.loadShardPath(logger, nodeEnv, shardId, indexSettings);\n@@ -293,20 +282,16 @@ public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n indexShard = new IndexShard(shardId, indexSettings, path, store, indexServicesProvider);\n }\n \n- indicesLifecycle.indexShardStateChanged(indexShard, null, \"shard created\");\n- indicesLifecycle.afterIndexShardCreated(indexShard);\n+ eventListener.indexShardStateChanged(indexShard, null, indexShard.state(), \"shard created\");\n+ eventListener.afterIndexShardCreated(indexShard);\n settingsService.addListener(indexShard);\n shards = newMapBuilder(shards).put(shardId.id(), indexShard).immutableMap();\n success = true;\n return indexShard;\n- } catch (IOException e) {\n- ElasticsearchException ex = new ElasticsearchException(\"failed to create shard\", e);\n- ex.setShard(shardId);\n- throw ex;\n } finally {\n if (success == false) {\n IOUtils.closeWhileHandlingException(lock);\n- closeShard(\"initialization failed\", shardId, indexShard, store);\n+ closeShard(\"initialization failed\", shardId, indexShard, store, eventListener);\n }\n }\n }\n@@ -325,16 +310,16 @@ public synchronized void removeShard(int shardId, String reason) {\n HashMap<Integer, IndexShard> newShards = new HashMap<>(shards);\n indexShard = newShards.remove(shardId);\n shards = unmodifiableMap(newShards);\n- closeShard(reason, sId, indexShard, indexShard.store());\n+ closeShard(reason, sId, indexShard, indexShard.store(), indexShard.getIndexEventListener());\n logger.debug(\"[{}] closed (reason: [{}])\", shardId, reason);\n }\n \n- private void closeShard(String reason, ShardId sId, IndexShard indexShard, Store store) {\n+ private void closeShard(String reason, ShardId sId, IndexShard indexShard, Store store, IndexEventListener listener) {\n final int shardId = sId.id();\n final Settings indexSettings = settingsService.getSettings();\n try {\n try {\n- indicesLifecycle.beforeIndexShardClosed(sId, indexShard, indexSettings);\n+ listener.beforeIndexShardClosed(sId, indexShard, indexSettings);\n } finally {\n // this logic is tricky, we want to close the engine so we rollback the changes done to it\n // and close the shard so no operations are allowed to it\n@@ -349,7 +334,7 @@ private void closeShard(String reason, ShardId sId, IndexShard indexShard, Store\n }\n }\n // call this before we close the store, so we can release resources for it\n- indicesLifecycle.afterIndexShardClosed(sId, indexShard, indexSettings);\n+ listener.afterIndexShardClosed(sId, indexShard, indexSettings);\n }\n } finally {\n try {\n@@ -367,10 +352,10 @@ private void onShardClose(ShardLock lock, boolean ownsShard) {\n try {\n if (ownsShard) {\n try {\n- indicesLifecycle.beforeIndexShardDeleted(lock.getShardId(), indexSettings);\n+ eventListener.beforeIndexShardDeleted(lock.getShardId(), indexSettings);\n } finally {\n 
indicesServices.deleteShardStore(\"delete index\", lock, indexSettings);\n- indicesLifecycle.afterIndexShardDeleted(lock.getShardId(), indexSettings);\n+ eventListener.afterIndexShardDeleted(lock.getShardId(), indexSettings);\n }\n }\n } catch (IOException e) {", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -28,10 +28,10 @@\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.query.IndexQueryParserService;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexSearcherWrapper;\n import org.elasticsearch.index.similarity.SimilarityService;\n import org.elasticsearch.index.termvectors.TermVectorsService;\n-import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.IndicesWarmer;\n import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.memory.IndexingMemoryController;\n@@ -44,7 +44,6 @@\n */\n public final class IndexServicesProvider {\n \n- private final IndicesLifecycle indicesLifecycle;\n private final ThreadPool threadPool;\n private final MapperService mapperService;\n private final IndexQueryParserService queryParserService;\n@@ -59,10 +58,11 @@ public final class IndexServicesProvider {\n private final BigArrays bigArrays;\n private final IndexSearcherWrapper indexSearcherWrapper;\n private final IndexingMemoryController indexingMemoryController;\n+ private final IndexEventListener listener;\n \n @Inject\n- public IndexServicesProvider(IndicesLifecycle indicesLifecycle, ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndicesQueryCache indicesQueryCache, CodecService codecService, TermVectorsService termVectorsService, IndexFieldDataService indexFieldDataService, @Nullable IndicesWarmer warmer, SimilarityService similarityService, EngineFactory factory, BigArrays bigArrays, @Nullable IndexSearcherWrapper indexSearcherWrapper, IndexingMemoryController indexingMemoryController) {\n- this.indicesLifecycle = indicesLifecycle;\n+ public IndexServicesProvider(IndexEventListener listener, ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndicesQueryCache indicesQueryCache, CodecService codecService, TermVectorsService termVectorsService, IndexFieldDataService indexFieldDataService, @Nullable IndicesWarmer warmer, SimilarityService similarityService, EngineFactory factory, BigArrays bigArrays, @Nullable IndexSearcherWrapper indexSearcherWrapper, IndexingMemoryController indexingMemoryController) {\n+ this.listener = listener;\n this.threadPool = threadPool;\n this.mapperService = mapperService;\n this.queryParserService = queryParserService;\n@@ -79,10 +79,9 @@ public IndexServicesProvider(IndicesLifecycle indicesLifecycle, ThreadPool threa\n this.indexingMemoryController = indexingMemoryController;\n }\n \n- public IndicesLifecycle getIndicesLifecycle() {\n- return indicesLifecycle;\n+ public IndexEventListener getIndexEventListener() {\n+ return listener;\n }\n-\n public ThreadPool getThreadPool() {\n return threadPool;\n }", "filename": "core/src/main/java/org/elasticsearch/index/IndexServicesProvider.java", "status": "modified" }, { "diff": "@@ -0,0 +1,181 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.shard;\n+\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.IndexService;\n+\n+/**\n+ * An index event listener is the primary extension point for plugins and build-in services\n+ * to react / listen to per-index and per-shard events. These listeners are registered per-index\n+ * via {@link org.elasticsearch.index.IndexModule#addIndexEventListener(IndexEventListener)}. All listeners have the same\n+ * lifecycle as the {@link IndexService} they are created for.\n+ * <p>\n+ * An IndexEventListener can be used across multiple indices and shards since all callback methods receive sufficient\n+ * local state via their arguments. Yet, if an instance is shared across indices they might be called concurrently and should not\n+ * modify local state without sufficient synchronization.\n+ * </p>\n+ */\n+public interface IndexEventListener {\n+\n+ /**\n+ * Called when the shard routing has changed state.\n+ *\n+ * @param indexShard The index shard\n+ * @param oldRouting The old routing state (can be null)\n+ * @param newRouting The new routing state\n+ */\n+ default void shardRoutingChanged(IndexShard indexShard, @Nullable ShardRouting oldRouting, ShardRouting newRouting) {}\n+\n+ /**\n+ * Called after the index shard has been created.\n+ */\n+ default void afterIndexShardCreated(IndexShard indexShard) {}\n+\n+ /**\n+ * Called after the index shard has been started.\n+ */\n+ default void afterIndexShardStarted(IndexShard indexShard) {}\n+\n+ /**\n+ * Called before the index shard gets closed.\n+ *\n+ * @param indexShard The index shard\n+ */\n+ default void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard, Settings indexSettings) {}\n+\n+ /**\n+ * Called after the index shard has been closed.\n+ *\n+ * @param shardId The shard id\n+ */\n+ default void afterIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard, Settings indexSettings) {}\n+\n+\n+ /**\n+ * Called after a shard's {@link org.elasticsearch.index.shard.IndexShardState} changes.\n+ * The order of concurrent events is preserved. 
The execution must be lightweight.\n+ *\n+ * @param indexShard the shard the new state was applied to\n+ * @param previousState the previous index shard state if there was one, null otherwise\n+ * @param currentState the new shard state\n+ * @param reason the reason for the state change if there is one, null otherwise\n+ */\n+ default void indexShardStateChanged(IndexShard indexShard, @Nullable IndexShardState previousState, IndexShardState currentState, @Nullable String reason) {}\n+\n+ /**\n+ * Called when a shard is marked as inactive\n+ *\n+ * @param indexShard The shard that was marked inactive\n+ */\n+ default void onShardInactive(IndexShard indexShard) {}\n+\n+ /**\n+ * Called before the index gets created. Note that this is also called\n+ * when the index is created on data nodes\n+ */\n+ default void beforeIndexCreated(Index index, Settings indexSettings) {\n+\n+ }\n+\n+ /**\n+ * Called after the index has been created.\n+ */\n+ default void afterIndexCreated(IndexService indexService) {\n+\n+ }\n+\n+ /**\n+ * Called before the index shard gets created.\n+ */\n+ default void beforeIndexShardCreated(ShardId shardId, Settings indexSettings) {\n+ }\n+\n+\n+ /**\n+ * Called before the index get closed.\n+ *\n+ * @param indexService The index service\n+ */\n+ default void beforeIndexClosed(IndexService indexService) {\n+\n+ }\n+\n+ /**\n+ * Called after the index has been closed.\n+ *\n+ * @param index The index\n+ */\n+ default void afterIndexClosed(Index index, Settings indexSettings) {\n+\n+ }\n+\n+ /**\n+ * Called before the index shard gets deleted from disk\n+ * Note: this method is only executed on the first attempt of deleting the shard. Retries are will not invoke\n+ * this method.\n+ * @param shardId The shard id\n+ * @param indexSettings the shards index settings\n+ */\n+ default void beforeIndexShardDeleted(ShardId shardId, Settings indexSettings) {\n+ }\n+\n+ /**\n+ * Called after the index shard has been deleted from disk.\n+ *\n+ * Note: this method is only called if the deletion of the shard did finish without an exception\n+ *\n+ * @param shardId The shard id\n+ * @param indexSettings the shards index settings\n+ */\n+ default void afterIndexShardDeleted(ShardId shardId, Settings indexSettings) {\n+ }\n+\n+ /**\n+ * Called after the index has been deleted.\n+ * This listener method is invoked after {@link #afterIndexClosed(org.elasticsearch.index.Index, org.elasticsearch.common.settings.Settings)}\n+ * when an index is deleted\n+ *\n+ * @param index The index\n+ */\n+ default void afterIndexDeleted(Index index, Settings indexSettings) {\n+\n+ }\n+\n+ /**\n+ * Called before the index gets deleted.\n+ * This listener method is invoked after\n+ * {@link #beforeIndexClosed(org.elasticsearch.index.IndexService)} when an index is deleted\n+ *\n+ * @param indexService The index service\n+ */\n+ default void beforeIndexDeleted(IndexService indexService) {\n+\n+ }\n+\n+ /**\n+ * Called on the Master node only before the {@link IndexService} instances is created to simulate an index creation.\n+ * This happens right before the index and it's metadata is registered in the cluster state\n+ */\n+ default void beforeIndexAddedToCluster(Index index, Settings indexSettings) {\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexEventListener.java", "status": "added" }, { "diff": "@@ -98,7 +98,6 @@\n import org.elasticsearch.index.warmer.ShardIndexWarmerService;\n import org.elasticsearch.index.warmer.WarmerStats;\n import 
org.elasticsearch.indices.IndicesWarmer;\n-import org.elasticsearch.indices.InternalIndicesLifecycle;\n import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n import org.elasticsearch.indices.memory.IndexingMemoryController;\n import org.elasticsearch.indices.recovery.RecoveryFailedException;\n@@ -125,7 +124,6 @@ public class IndexShard extends AbstractIndexShardComponent implements IndexSett\n private final ThreadPool threadPool;\n private final MapperService mapperService;\n private final IndexCache indexCache;\n- private final InternalIndicesLifecycle indicesLifecycle;\n private final Store store;\n private final MergeSchedulerConfig mergeSchedulerConfig;\n private final ShardIndexingService indexingService;\n@@ -149,6 +147,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndexSett\n private final TranslogConfig translogConfig;\n private final MergePolicyConfig mergePolicyConfig;\n private final IndicesQueryCache indicesQueryCache;\n+ private final IndexEventListener indexEventListener;\n \n private TimeValue refreshInterval;\n \n@@ -206,8 +205,8 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, ShardP\n this.similarityService = provider.getSimilarityService();\n Objects.requireNonNull(store, \"Store must be provided to the index shard\");\n this.engineFactory = provider.getFactory();\n- this.indicesLifecycle = (InternalIndicesLifecycle) provider.getIndicesLifecycle();\n this.store = store;\n+ this.indexEventListener = provider.getIndexEventListener();\n this.mergeSchedulerConfig = new MergeSchedulerConfig(indexSettings);\n this.threadPool = provider.getThreadPool();\n this.mapperService = provider.getMapperService();\n@@ -367,12 +366,12 @@ public void updateRoutingEntry(final ShardRouting newRouting, final boolean pers\n }\n }\n if (movedToStarted) {\n- indicesLifecycle.afterIndexShardStarted(this);\n+ indexEventListener.afterIndexShardStarted(this);\n }\n }\n }\n this.shardRouting = newRouting;\n- indicesLifecycle.shardRoutingChanged(this, currentRouting, newRouting);\n+ indexEventListener.shardRoutingChanged(this, currentRouting, newRouting);\n } finally {\n if (persistState) {\n persistMetadata(newRouting, currentRouting);\n@@ -431,7 +430,7 @@ private IndexShardState changeState(IndexShardState newState, String reason) {\n logger.debug(\"state: [{}]->[{}], reason [{}]\", state, newState, reason);\n IndexShardState previousState = state;\n state = newState;\n- this.indicesLifecycle.indexShardStateChanged(this, previousState, reason);\n+ this.indexEventListener.indexShardStateChanged(this, previousState, newState, reason);\n return previousState;\n }\n \n@@ -1036,7 +1035,7 @@ public boolean checkIdle(long inactiveTimeNS) {\n if (wasActive) {\n updateBufferSize(IndexingMemoryController.INACTIVE_SHARD_INDEXING_BUFFER, IndexingMemoryController.INACTIVE_SHARD_TRANSLOG_BUFFER);\n logger.debug(\"shard is now inactive\");\n- indicesLifecycle.onShardInactive(this);\n+ indexEventListener.onShardInactive(this);\n }\n }\n \n@@ -1230,6 +1229,10 @@ public PercolateStats percolateStats() {\n return percolatorQueriesRegistry.stats();\n }\n \n+ public IndexEventListener getIndexEventListener() {\n+ return indexEventListener;\n+ }\n+\n class EngineRefresher implements Runnable {\n @Override\n public void run() {", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -52,15 +52,13 @@\n */\n public class IndicesModule extends AbstractModule {\n \n- private final Settings 
settings;\n \n private final ExtensionPoint.ClassSet<QueryParser> queryParsers\n = new ExtensionPoint.ClassSet<>(\"query_parser\", QueryParser.class);\n private final ExtensionPoint.InstanceMap<String, Dictionary> hunspellDictionaries\n = new ExtensionPoint.InstanceMap<>(\"hunspell_dictionary\", String.class, Dictionary.class);\n \n- public IndicesModule(Settings settings) {\n- this.settings = settings;\n+ public IndicesModule() {\n registerBuiltinQueryParsers();\n }\n \n@@ -130,7 +128,6 @@ protected void configure() {\n bindQueryParsersExtension();\n bindHunspellExtension();\n \n- bind(IndicesLifecycle.class).to(InternalIndicesLifecycle.class).asEagerSingleton();\n bind(IndicesService.class).asEagerSingleton();\n bind(RecoverySettings.class).asEagerSingleton();\n bind(RecoveryTarget.class).asEagerSingleton();", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesModule.java", "status": "modified" }, { "diff": "@@ -68,6 +68,7 @@\n import org.elasticsearch.index.settings.IndexSettingsModule;\n import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.similarity.SimilarityModule;\n import org.elasticsearch.index.store.IndexStore;\n@@ -86,10 +87,7 @@\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n-import java.util.concurrent.CountDownLatch;\n-import java.util.concurrent.ExecutorService;\n-import java.util.concurrent.Executors;\n-import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.*;\n import java.util.stream.Stream;\n \n import static java.util.Collections.emptyMap;\n@@ -106,9 +104,6 @@\n public class IndicesService extends AbstractLifecycleComponent<IndicesService> implements Iterable<IndexService> {\n \n public static final String INDICES_SHARDS_CLOSED_TIMEOUT = \"indices.shards_closed_timeout\";\n-\n- private final InternalIndicesLifecycle indicesLifecycle;\n-\n private final IndicesAnalysisService indicesAnalysisService;\n \n private final Injector injector;\n@@ -142,13 +137,11 @@ public Injector getInjector() {\n private final OldShardsStats oldShardsStats = new OldShardsStats();\n \n @Inject\n- public IndicesService(Settings settings, IndicesLifecycle indicesLifecycle, IndicesAnalysisService indicesAnalysisService, Injector injector, NodeEnvironment nodeEnv) {\n+ public IndicesService(Settings settings, IndicesAnalysisService indicesAnalysisService, Injector injector, PluginsService pluginsService, NodeEnvironment nodeEnv) {\n super(settings);\n- this.indicesLifecycle = (InternalIndicesLifecycle) indicesLifecycle;\n this.indicesAnalysisService = indicesAnalysisService;\n this.injector = injector;\n- this.pluginsService = injector.getInstance(PluginsService.class);\n- this.indicesLifecycle.addListener(oldShardsStats);\n+ this.pluginsService = pluginsService;\n this.nodeEnv = nodeEnv;\n this.shardsClosedTimeout = settings.getAsTime(INDICES_SHARDS_CLOSED_TIMEOUT, new TimeValue(1, TimeUnit.DAYS));\n }\n@@ -195,10 +188,6 @@ protected void doClose() {\n indicesAnalysisService);\n }\n \n- public IndicesLifecycle indicesLifecycle() {\n- return this.indicesLifecycle;\n- }\n-\n /**\n * Returns the node stats indices stats. 
The <tt>includePrevious</tt> flag controls\n * if old shards stats will be aggregated as well (only for relevant stats, such as\n@@ -305,7 +294,14 @@ public IndexService indexServiceSafe(String index) {\n return indexService;\n }\n \n- public synchronized IndexService createIndex(IndexMetaData indexMetaData) {\n+\n+ /**\n+ * Creates a new {@link IndexService} for the given metadata.\n+ * @param indexMetaData the index metadata to create the index for\n+ * @param builtInListeners a list of built-in lifecycle {@link IndexEventListener} that should should be used along side with the per-index listeners\n+ * @throws IndexAlreadyExistsException if the index already exists.\n+ */\n+ public synchronized IndexService createIndex(IndexMetaData indexMetaData, List<IndexEventListener> builtInListeners) {\n if (!lifecycle.started()) {\n throw new IllegalStateException(\"Can't create an index [\" + indexMetaData.getIndex() + \"], node is closed\");\n }\n@@ -314,9 +310,6 @@ public synchronized IndexService createIndex(IndexMetaData indexMetaData) {\n if (indices.containsKey(index.name())) {\n throw new IndexAlreadyExistsException(index);\n }\n-\n- indicesLifecycle.beforeIndexCreated(index, settings);\n-\n logger.debug(\"creating Index [{}], shards [{}]/[{}{}]\",\n indexMetaData.getIndex(),\n settings.get(SETTING_NUMBER_OF_SHARDS),\n@@ -335,12 +328,19 @@ public synchronized IndexService createIndex(IndexMetaData indexMetaData) {\n for (Module pluginModule : pluginsService.indexModules(indexSettings)) {\n modules.add(pluginModule);\n }\n+ final IndexModule indexModule = new IndexModule(settings, indexMetaData);\n+ for (IndexEventListener listener : builtInListeners) {\n+ indexModule.addIndexEventListener(listener);\n+ }\n+ indexModule.addIndexEventListener(oldShardsStats);\n modules.add(new IndexStoreModule(indexSettings));\n modules.add(new AnalysisModule(indexSettings, indicesAnalysisService));\n modules.add(new SimilarityModule(index, indexSettings));\n modules.add(new IndexCacheModule(indexSettings));\n- modules.add(new IndexModule(indexMetaData));\n+ modules.add(indexModule);\n pluginsService.processModules(modules);\n+ final IndexEventListener listener = indexModule.freeze();\n+ listener.beforeIndexCreated(index, settings);\n \n Injector indexInjector;\n try {\n@@ -352,9 +352,8 @@ public synchronized IndexService createIndex(IndexMetaData indexMetaData) {\n }\n \n IndexService indexService = indexInjector.getInstance(IndexService.class);\n-\n- indicesLifecycle.afterIndexCreated(indexService);\n-\n+ assert indexService.getIndexEventListener() == listener;\n+ listener.afterIndexCreated(indexService);\n indices = newMapBuilder(indices).put(index.name(), new IndexServiceInjectorPair(indexService, indexInjector)).immutableMap();\n return indexService;\n }\n@@ -373,6 +372,7 @@ private void removeIndex(String index, String reason, boolean delete) {\n try {\n final IndexService indexService;\n final Injector indexInjector;\n+ final IndexEventListener listener;\n synchronized (this) {\n if (indices.containsKey(index) == false) {\n return;\n@@ -384,11 +384,12 @@ private void removeIndex(String index, String reason, boolean delete) {\n indexService = remove.getIndexService();\n indexInjector = remove.getInjector();\n indices = unmodifiableMap(newIndices);\n+ listener = indexService.getIndexEventListener();\n }\n \n- indicesLifecycle.beforeIndexClosed(indexService);\n+ listener.beforeIndexClosed(indexService);\n if (delete) {\n- indicesLifecycle.beforeIndexDeleted(indexService);\n+ 
listener.beforeIndexDeleted(indexService);\n }\n Stream<Closeable> closeables = pluginsService.indexServices().stream().map(p -> indexInjector.getInstance(p));\n IOUtils.close(closeables::iterator);\n@@ -412,10 +413,10 @@ private void removeIndex(String index, String reason, boolean delete) {\n indexInjector.getInstance(IndexStore.class).close();\n \n logger.debug(\"[{}] closed... (reason [{}])\", index, reason);\n- indicesLifecycle.afterIndexClosed(indexService.index(), indexService.settingsService().getSettings());\n+ listener.afterIndexClosed(indexService.index(), indexService.settingsService().getSettings());\n if (delete) {\n final Settings indexSettings = indexService.getIndexSettings();\n- indicesLifecycle.afterIndexDeleted(indexService.index(), indexSettings);\n+ listener.afterIndexDeleted(indexService.index(), indexSettings);\n // now we are done - try to wipe data on disk if possible\n deleteIndexStore(reason, indexService.index(), indexSettings, false);\n }\n@@ -424,7 +425,7 @@ private void removeIndex(String index, String reason, boolean delete) {\n }\n }\n \n- static class OldShardsStats extends IndicesLifecycle.Listener {\n+ static class OldShardsStats implements IndexEventListener {\n \n final SearchStats searchStats = new SearchStats();\n final GetStats getStats = new GetStats();", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -51,17 +51,17 @@\n import org.elasticsearch.index.shard.*;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.indices.flush.SyncedFlushService;\n import org.elasticsearch.indices.recovery.RecoveryFailedException;\n+import org.elasticsearch.indices.recovery.RecoverySource;\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.repositories.RepositoriesService;\n+import org.elasticsearch.search.SearchService;\n import org.elasticsearch.snapshots.RestoreService;\n import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.util.ArrayList;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.ConcurrentMap;\n \n /**\n@@ -101,14 +101,16 @@ static class FailedShard {\n private final FailedShardHandler failedShardHandler = new FailedShardHandler();\n \n private final boolean sendRefreshMapping;\n+ private final List<IndexEventListener> buildInIndexListener;\n \n @Inject\n public IndicesClusterStateService(Settings settings, IndicesService indicesService, ClusterService clusterService,\n ThreadPool threadPool, RecoveryTarget recoveryTarget,\n ShardStateAction shardStateAction,\n NodeIndexDeletedAction nodeIndexDeletedAction,\n- NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, RestoreService restoreService) {\n+ NodeMappingRefreshAction nodeMappingRefreshAction, RepositoriesService repositoriesService, RestoreService restoreService, SearchService searchService, SyncedFlushService syncedFlushService, RecoverySource recoverySource) {\n super(settings);\n+ this.buildInIndexListener = Arrays.asList(recoverySource, recoveryTarget, searchService, syncedFlushService);\n this.indicesService = indicesService;\n this.clusterService = clusterService;\n this.threadPool = threadPool;\n@@ -299,7 +301,7 @@ private void applyNewIndices(final ClusterChangedEvent event) {\n logger.debug(\"[{}] creating 
index\", indexMetaData.getIndex());\n }\n try {\n- indicesService.createIndex(indexMetaData);\n+ indicesService.createIndex(indexMetaData, buildInIndexListener);\n } catch (Throwable e) {\n sendFailShard(shard, indexMetaData.getIndexUUID(), \"failed to create index\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -41,11 +41,11 @@\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardNotFoundException;\n import org.elasticsearch.indices.IndexClosedException;\n-import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.BaseTransportResponseHandler;\n@@ -63,7 +63,7 @@\n import java.util.Map;\n import java.util.concurrent.ConcurrentMap;\n \n-public class SyncedFlushService extends AbstractComponent {\n+public class SyncedFlushService extends AbstractComponent implements IndexEventListener {\n \n private static final String PRE_SYNCED_FLUSH_ACTION_NAME = \"internal:indices/flush/synced/pre\";\n private static final String SYNCED_FLUSH_ACTION_NAME = \"internal:indices/flush/synced/sync\";\n@@ -85,25 +85,24 @@ public SyncedFlushService(Settings settings, IndicesService indicesService, Clus\n transportService.registerRequestHandler(PRE_SYNCED_FLUSH_ACTION_NAME, PreSyncedFlushRequest::new, ThreadPool.Names.FLUSH, new PreSyncedFlushTransportHandler());\n transportService.registerRequestHandler(SYNCED_FLUSH_ACTION_NAME, SyncedFlushRequest::new, ThreadPool.Names.FLUSH, new SyncedFlushTransportHandler());\n transportService.registerRequestHandler(IN_FLIGHT_OPS_ACTION_NAME, InFlightOpsRequest::new, ThreadPool.Names.SAME, new InFlightOpCountTransportHandler());\n- indicesService.indicesLifecycle().addListener(new IndicesLifecycle.Listener() {\n- @Override\n- public void onShardInactive(final IndexShard indexShard) {\n- // we only want to call sync flush once, so only trigger it when we are on a primary\n- if (indexShard.routingEntry().primary()) {\n- attemptSyncedFlush(indexShard.shardId(), new ActionListener<ShardsSyncedFlushResult>() {\n- @Override\n- public void onResponse(ShardsSyncedFlushResult syncedFlushResult) {\n- logger.trace(\"{} sync flush on inactive shard returned successfully for sync_id: {}\", syncedFlushResult.getShardId(), syncedFlushResult.syncId());\n- }\n+ }\n \n- @Override\n- public void onFailure(Throwable e) {\n- logger.debug(\"{} sync flush on inactive shard failed\", e, indexShard.shardId());\n- }\n- });\n+ @Override\n+ public void onShardInactive(final IndexShard indexShard) {\n+ // we only want to call sync flush once, so only trigger it when we are on a primary\n+ if (indexShard.routingEntry().primary()) {\n+ attemptSyncedFlush(indexShard.shardId(), new ActionListener<ShardsSyncedFlushResult>() {\n+ @Override\n+ public void onResponse(ShardsSyncedFlushResult syncedFlushResult) {\n+ logger.trace(\"{} sync flush on inactive shard returned successfully for sync_id: {}\", syncedFlushResult.getShardId(), syncedFlushResult.syncId());\n }\n- }\n- });\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ logger.debug(\"{} sync flush on inactive shard 
failed\", e, indexShard.shardId());\n+ }\n+ });\n+ }\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/indices/flush/SyncedFlushService.java", "status": "modified" }, { "diff": "@@ -30,9 +30,9 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportChannel;\n@@ -50,7 +50,7 @@\n * The source recovery accepts recovery requests from other peer shards and start the recovery process from this\n * source shard to the target shard.\n */\n-public class RecoverySource extends AbstractComponent {\n+public class RecoverySource extends AbstractComponent implements IndexEventListener{\n \n public static class Actions {\n public static final String START_RECOVERY = \"internal:index/shard/recovery/start_recovery\";\n@@ -72,21 +72,18 @@ public RecoverySource(Settings settings, TransportService transportService, Indi\n this.transportService = transportService;\n this.indicesService = indicesService;\n this.clusterService = clusterService;\n- this.indicesService.indicesLifecycle().addListener(new IndicesLifecycle.Listener() {\n- @Override\n- public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard,\n- @IndexSettings Settings indexSettings) {\n- if (indexShard != null) {\n- ongoingRecoveries.cancel(indexShard, \"shard is closed\");\n- }\n- }\n- });\n-\n this.recoverySettings = recoverySettings;\n-\n transportService.registerRequestHandler(Actions.START_RECOVERY, StartRecoveryRequest::new, ThreadPool.Names.GENERIC, new StartRecoveryTransportRequestHandler());\n }\n \n+ @Override\n+ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard,\n+ @IndexSettings Settings indexSettings) {\n+ if (indexShard != null) {\n+ ongoingRecoveries.cancel(indexShard, \"shard is closed\");\n+ }\n+ }\n+\n private RecoveryResponse recover(final StartRecoveryRequest request) {\n final IndexService indexService = indicesService.indexServiceSafe(request.shardId().index().name());\n final IndexShard shard = indexService.getShard(request.shardId().id());", "filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java", "status": "modified" }, { "diff": "@@ -48,7 +48,7 @@\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.*;\n import org.elasticsearch.index.store.Store;\n-import org.elasticsearch.indices.IndicesLifecycle;\n+import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.*;\n \n@@ -67,7 +67,7 @@\n * Note, it can be safely assumed that there will only be a single recovery per shard (index+id) and\n * not several of them (since we don't allocate several shard replicas to the same node).\n */\n-public class RecoveryTarget extends AbstractComponent {\n+public class RecoveryTarget extends AbstractComponent implements IndexEventListener {\n \n public static class Actions {\n public static final String FILES_INFO = \"internal:index/shard/recovery/filesInfo\";\n@@ -88,8 +88,7 @@ public static class Actions {\n private final RecoveriesCollection onGoingRecoveries;\n \n 
@Inject\n- public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService,\n- IndicesLifecycle indicesLifecycle, RecoverySettings recoverySettings, ClusterService clusterService) {\n+ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService, RecoverySettings recoverySettings, ClusterService clusterService) {\n super(settings);\n this.threadPool = threadPool;\n this.transportService = transportService;\n@@ -103,16 +102,14 @@ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService\n transportService.registerRequestHandler(Actions.PREPARE_TRANSLOG, RecoveryPrepareForTranslogOperationsRequest::new, ThreadPool.Names.GENERIC, new PrepareForTranslogOperationsRequestHandler());\n transportService.registerRequestHandler(Actions.TRANSLOG_OPS, RecoveryTranslogOperationsRequest::new, ThreadPool.Names.GENERIC, new TranslogOperationsRequestHandler());\n transportService.registerRequestHandler(Actions.FINALIZE, RecoveryFinalizeRecoveryRequest::new, ThreadPool.Names.GENERIC, new FinalizeRecoveryRequestHandler());\n+ }\n \n- indicesLifecycle.addListener(new IndicesLifecycle.Listener() {\n- @Override\n- public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard,\n- @IndexSettings Settings indexSettings) {\n- if (indexShard != null) {\n- onGoingRecoveries.cancelRecoveriesForShard(shardId, \"shard closed\");\n- }\n- }\n- });\n+ @Override\n+ public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard,\n+ @IndexSettings Settings indexSettings) {\n+ if (indexShard != null) {\n+ onGoingRecoveries.cancelRecoveriesForShard(shardId, \"shard closed\");\n+ }\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java", "status": "modified" }, { "diff": "@@ -181,7 +181,7 @@ public Node(Settings preparedSettings) {\n if (settings.getAsBoolean(HTTP_ENABLED, true)) {\n modules.add(new HttpServerModule(settings));\n }\n- modules.add(new IndicesModule(settings));\n+ modules.add(new IndicesModule());\n modules.add(new SearchModule(settings));\n modules.add(new ActionModule(false));\n modules.add(new MonitorModule(settings));", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -65,8 +65,8 @@\n import org.elasticsearch.index.search.stats.ShardSearchStats;\n import org.elasticsearch.index.search.stats.StatsGroupsParseElement;\n import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.IndicesWarmer;\n import org.elasticsearch.indices.IndicesWarmer.TerminationHandle;\n@@ -119,7 +119,7 @@\n /**\n *\n */\n-public class SearchService extends AbstractLifecycleComponent<SearchService> {\n+public class SearchService extends AbstractLifecycleComponent<SearchService> implements IndexEventListener {\n \n public static final String NORMS_LOADING_KEY = \"index.norms.loading\";\n public static final String DEFAULT_KEEPALIVE_KEY = \"search.default_keep_alive\";\n@@ -173,27 +173,6 @@ public SearchService(Settings settings, NodeSettingsService nodeSettingsService,\n this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n- indicesService.indicesLifecycle().addListener(new IndicesLifecycle.Listener() {\n- 
@Override\n- public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n- // once an index is closed we can just clean up all the pending search context information\n- // to release memory and let references to the filesystem go etc.\n- IndexMetaData idxMeta = SearchService.this.clusterService.state().metaData().index(index.getName());\n- if (idxMeta != null && idxMeta.getState() == IndexMetaData.State.CLOSE) {\n- // we need to check if it's really closed\n- // since sometimes due to a relocation we already closed the shard and that causes the index to be closed\n- // if we then close all the contexts we can get some search failures along the way which are not expected.\n- // it's fine to keep the contexts open if the index is still \"alive\"\n- // unfortunately we don't have a clear way to signal today why an index is closed.\n- afterIndexDeleted(index, indexSettings);\n- }\n- }\n-\n- @Override\n- public void afterIndexDeleted(Index index, @IndexSettings Settings indexSettings) {\n- freeAllContextForIndex(index);\n- }\n- });\n this.indicesWarmer = indicesWarmer;\n this.scriptService = scriptService;\n this.pageCacheRecycler = pageCacheRecycler;\n@@ -235,6 +214,26 @@ public void onRefreshSettings(Settings settings) {\n }\n }\n \n+ @Override\n+ public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n+ // once an index is closed we can just clean up all the pending search context information\n+ // to release memory and let references to the filesystem go etc.\n+ IndexMetaData idxMeta = SearchService.this.clusterService.state().metaData().index(index.getName());\n+ if (idxMeta != null && idxMeta.getState() == IndexMetaData.State.CLOSE) {\n+ // we need to check if it's really closed\n+ // since sometimes due to a relocation we already closed the shard and that causes the index to be closed\n+ // if we then close all the contexts we can get some search failures along the way which are not expected.\n+ // it's fine to keep the contexts open if the index is still \"alive\"\n+ // unfortunately we don't have a clear way to signal today why an index is closed.\n+ afterIndexDeleted(index, indexSettings);\n+ }\n+ }\n+\n+ @Override\n+ public void afterIndexDeleted(Index index, @IndexSettings Settings indexSettings) {\n+ freeAllContextForIndex(index);\n+ }\n+\n protected void putContext(SearchContext context) {\n final SearchContext previous = activeContexts.put(context.id(), context);\n assert previous == null;", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -38,7 +38,7 @@\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n-import org.elasticsearch.test.store.MockFSDirectoryService;\n+import org.elasticsearch.test.store.MockFSIndexStore;\n \n import java.util.HashMap;\n import java.util.HashSet;\n@@ -148,7 +148,7 @@ public void testCorruptedShards() throws Exception {\n internalCluster().ensureAtLeastNumDataNodes(2);\n assertAcked(prepareCreate(index).setSettings(Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, \"5\")\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false)\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false)\n ));\n indexRandomData(index);\n ensureGreen(index);", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoreRequestIT.java", "status": "modified" }, { "diff": "@@ -38,6 +38,7 @@\n 
import org.elasticsearch.test.InternalTestCluster.RestartCallback;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.store.MockFSDirectoryService;\n+import org.elasticsearch.test.store.MockFSIndexStore;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n@@ -323,7 +324,7 @@ public void doAfterNodes(int numNodes, Client client) throws Exception {\n public void testReusePeerRecovery() throws Exception {\n final Settings settings = settingsBuilder()\n .put(\"action.admin.cluster.node.shutdown.delay\", \"10ms\")\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false)\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false)\n .put(\"gateway.recover_after_nodes\", 4)\n \n .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 4)", "filename": "core/src/test/java/org/elasticsearch/gateway/RecoveryFromGatewayIT.java", "status": "modified" }, { "diff": "@@ -27,31 +27,46 @@\n import org.elasticsearch.index.engine.EngineException;\n import org.elasticsearch.index.engine.EngineFactory;\n import org.elasticsearch.index.engine.InternalEngineFactory;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexSearcherWrapper;\n import org.elasticsearch.test.engine.MockEngineFactory;\n \n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n public class IndexModuleTests extends ModuleTestCase {\n \n public void testWrapperIsBound() {\n- IndexModule module = new IndexModule(IndexMetaData.PROTO);\n+ IndexModule module = new IndexModule(Settings.EMPTY, IndexMetaData.PROTO);\n assertInstanceBinding(module, IndexSearcherWrapper.class,(x) -> x == null);\n module.indexSearcherWrapper = Wrapper.class;\n assertBinding(module, IndexSearcherWrapper.class, Wrapper.class);\n }\n \n public void testEngineFactoryBound() {\n- IndexModule module = new IndexModule(IndexMetaData.PROTO);\n+ IndexModule module = new IndexModule(Settings.EMPTY,IndexMetaData.PROTO);\n assertBinding(module, EngineFactory.class, InternalEngineFactory.class);\n module.engineFactoryImpl = MockEngineFactory.class;\n assertBinding(module, EngineFactory.class, MockEngineFactory.class);\n }\n \n public void testOtherServiceBound() {\n+ final AtomicBoolean atomicBoolean = new AtomicBoolean(false);\n+ final IndexEventListener listener = new IndexEventListener() {\n+ @Override\n+ public void beforeIndexDeleted(IndexService indexService) {\n+ atomicBoolean.set(true);\n+ }\n+ };\n final IndexMetaData meta = IndexMetaData.builder(IndexMetaData.PROTO).index(\"foo\").build();\n- IndexModule module = new IndexModule(meta);\n+ IndexModule module = new IndexModule(Settings.EMPTY,meta);\n+ module.addIndexEventListener(listener);\n assertBinding(module, IndexService.class, IndexService.class);\n assertBinding(module, IndexServicesProvider.class, IndexServicesProvider.class);\n assertInstanceBinding(module, IndexMetaData.class, (x) -> x == meta);\n+ assertInstanceBinding(module, IndexEventListener.class, (x) -> {x.beforeIndexDeleted(null); return atomicBoolean.get();});\n }\n \n public static final class Wrapper extends IndexSearcherWrapper {", "filename": "core/src/test/java/org/elasticsearch/index/IndexModuleTests.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ public static AnalysisService 
createAnalysisServiceFromSettings(\n if (settings.get(IndexMetaData.SETTING_VERSION_CREATED) == null) {\n settings = Settings.builder().put(settings).put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build();\n }\n- IndicesModule indicesModule = new IndicesModule(settings) {\n+ IndicesModule indicesModule = new IndicesModule() {\n @Override\n public void configure() {\n // skip services", "filename": "core/src/test/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java", "status": "modified" }, { "diff": "@@ -173,7 +173,7 @@ public static void init() throws IOException {\n new EnvironmentModule(new Environment(settings)),\n new SettingsModule(settings),\n new ThreadPoolModule(new ThreadPool(settings)),\n- new IndicesModule(settings) {\n+ new IndicesModule() {\n @Override\n public void configure() {\n // skip services", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -86,7 +86,7 @@ public void setup() throws IOException {\n new EnvironmentModule(new Environment(settings)),\n new SettingsModule(settings),\n new ThreadPoolModule(new ThreadPool(settings)),\n- new IndicesModule(settings) {\n+ new IndicesModule() {\n @Override\n public void configure() {\n // skip services", "filename": "core/src/test/java/org/elasticsearch/index/query/TemplateQueryParserTests.java", "status": "modified" }, { "diff": "@@ -763,7 +763,7 @@ public void run() {\n assertEquals(total + 1, shard.flushStats().getTotal());\n }\n \n- public void testRecoverFromStore() {\n+ public void testRecoverFromStore() throws IOException {\n createIndex(\"test\");\n ensureGreen();\n IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n@@ -1039,7 +1039,7 @@ private final IndexShard reinitWithWrapper(IndexService indexService, IndexShard\n ShardRouting routing = new ShardRouting(shard.routingEntry());\n shard.close(\"simon says\", true);\n IndexServicesProvider indexServices = indexService.getIndexServices();\n- IndexServicesProvider newProvider = new IndexServicesProvider(indexServices.getIndicesLifecycle(), indexServices.getThreadPool(), indexServices.getMapperService(), indexServices.getQueryParserService(), indexServices.getIndexCache(), indexServices.getIndicesQueryCache(), indexServices.getCodecService(), indexServices.getTermVectorsService(), indexServices.getIndexFieldDataService(), indexServices.getWarmer(), indexServices.getSimilarityService(), indexServices.getFactory(), indexServices.getBigArrays(), wrapper, indexServices.getIndexingMemoryController());\n+ IndexServicesProvider newProvider = new IndexServicesProvider(indexServices.getIndexEventListener(), indexServices.getThreadPool(), indexServices.getMapperService(), indexServices.getQueryParserService(), indexServices.getIndexCache(), indexServices.getIndicesQueryCache(), indexServices.getCodecService(), indexServices.getTermVectorsService(), indexServices.getIndexFieldDataService(), indexServices.getWarmer(), indexServices.getSimilarityService(), indexServices.getFactory(), indexServices.getBigArrays(), wrapper, indexServices.getIndexingMemoryController());\n IndexShard newShard = new IndexShard(shard.shardId(), shard.indexSettings, shard.shardPath(), shard.store(), newProvider);\n ShardRoutingHelper.reinit(routing);\n newShard.updateRoutingEntry(routing, false);", "filename": "core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java", "status": "modified" }, { "diff": "@@ -30,7 +30,6 @@\n import 
org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n-import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -52,12 +51,7 @@\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.gateway.PrimaryShardAllocator;\n import org.elasticsearch.index.settings.IndexSettings;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.index.shard.IndexShardState;\n-import org.elasticsearch.index.shard.MergePolicyConfig;\n-import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.indices.IndicesLifecycle;\n-import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.index.shard.*;\n import org.elasticsearch.indices.recovery.RecoveryFileChunkRequest;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n@@ -67,7 +61,8 @@\n import org.elasticsearch.test.CorruptionUtils;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n-import org.elasticsearch.test.store.MockFSDirectoryService;\n+import org.elasticsearch.test.MockIndexEventListener;\n+import org.elasticsearch.test.store.MockFSIndexStore;\n import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportRequest;\n@@ -125,7 +120,7 @@ protected Settings nodeSettings(int nodeOrdinal) {\n \n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n- return pluginList(MockTransportService.TestPlugin.class);\n+ return pluginList(MockTransportService.TestPlugin.class, MockIndexEventListener.TestPlugin.class);\n }\n \n /**\n@@ -145,7 +140,7 @@ public void testCorruptFileAndRecover() throws ExecutionException, InterruptedEx\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, \"1\")\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, \"1\")\n .put(MergePolicyConfig.INDEX_MERGE_ENABLED, false)\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n .put(IndexShard.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files\n .put(\"indices.recovery.concurrent_streams\", 10)\n ));\n@@ -194,7 +189,7 @@ public void testCorruptFileAndRecover() throws ExecutionException, InterruptedEx\n */\n final CountDownLatch latch = new CountDownLatch(numShards * 3); // primary + 2 replicas\n final CopyOnWriteArrayList<Throwable> exception = new CopyOnWriteArrayList<>();\n- final IndicesLifecycle.Listener listener = new IndicesLifecycle.Listener() {\n+ final IndexEventListener listener = new IndexEventListener() {\n @Override\n public void afterIndexShardClosed(ShardId sid, @Nullable IndexShard indexShard, @IndexSettings Settings indexSettings) {\n if (indexShard != null) {\n@@ -225,16 +220,16 @@ public void afterIndexShardClosed(ShardId sid, @Nullable IndexShard indexShard,\n }\n };\n \n- for (IndicesService service : internalCluster().getDataNodeInstances(IndicesService.class)) {\n- service.indicesLifecycle().addListener(listener);\n+ for (MockIndexEventListener.TestEventListener eventListener : 
internalCluster().getDataNodeInstances(MockIndexEventListener.TestEventListener.class)) {\n+ eventListener.setNewDelegate(listener);\n }\n try {\n client().admin().indices().prepareDelete(\"test\").get();\n latch.await();\n assertThat(exception, empty());\n } finally {\n- for (IndicesService service : internalCluster().getDataNodeInstances(IndicesService.class)) {\n- service.indicesLifecycle().removeListener(listener);\n+ for (MockIndexEventListener.TestEventListener eventListener : internalCluster().getDataNodeInstances(MockIndexEventListener.TestEventListener.class)) {\n+ eventListener.setNewDelegate(null);\n }\n }\n }\n@@ -250,7 +245,7 @@ public void testCorruptPrimaryNoReplica() throws ExecutionException, Interrupted\n assertAcked(prepareCreate(\"test\").setSettings(Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, \"0\")\n .put(MergePolicyConfig.INDEX_MERGE_ENABLED, false)\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n .put(IndexShard.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files\n .put(\"indices.recovery.concurrent_streams\", 10)\n ));\n@@ -395,7 +390,7 @@ public void testCorruptionOnNetworkLayer() throws ExecutionException, Interrupte\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, \"0\")\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, between(1, 4)) // don't go crazy here it must recovery fast\n // This does corrupt files on the replica, so we can't check:\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false)\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false)\n .put(\"index.routing.allocation.include._name\", primariesNode.getNode().name())\n .put(EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE, EnableAllocationDecider.Rebalance.NONE)\n ));\n@@ -476,7 +471,7 @@ public void testCorruptFileThenSnapshotAndRestore() throws ExecutionException, I\n assertAcked(prepareCreate(\"test\").setSettings(Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, \"0\") // no replicas for this test\n .put(MergePolicyConfig.INDEX_MERGE_ENABLED, false)\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n .put(IndexShard.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files\n .put(\"indices.recovery.concurrent_streams\", 10)\n ));\n@@ -531,7 +526,7 @@ public void testReplicaCorruption() throws Exception {\n .put(PrimaryShardAllocator.INDEX_RECOVERY_INITIAL_SHARDS, \"one\")\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, cluster().numDataNodes() - 1)\n .put(MergePolicyConfig.INDEX_MERGE_ENABLED, false)\n- .put(MockFSDirectoryService.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n+ .put(MockFSIndexStore.CHECK_INDEX_ON_CLOSE, false) // no checkindex - we corrupt shards on purpose\n .put(IndexShard.INDEX_TRANSLOG_DISABLE_FLUSH, true) // no translog based flush - it might change the .liv / segments.N files\n .put(\"indices.recovery.concurrent_streams\", 10)\n ));", "filename": "core/src/test/java/org/elasticsearch/index/store/CorruptedFileIT.java", "status": "modified" }, { "diff": "@@ -31,14 +31,19 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.Index;\n 
import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n+import org.elasticsearch.test.MockIndexEventListener;\n+import org.elasticsearch.test.transport.MockTransportService;\n import org.hamcrest.Matchers;\n \n+import java.util.Collection;\n import java.util.List;\n import java.util.Map;\n import java.util.concurrent.ConcurrentHashMap;\n@@ -63,6 +68,12 @@\n \n @ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n public class IndicesLifecycleListenerIT extends ESIntegTestCase {\n+\n+ @Override\n+ protected Collection<Class<? extends Plugin>> nodePlugins() {\n+ return pluginList(MockIndexEventListener.TestPlugin.class);\n+ }\n+\n public void testBeforeIndexAddedToCluster() throws Exception {\n String node1 = internalCluster().startNode();\n String node2 = internalCluster().startNode();\n@@ -71,7 +82,7 @@ public void testBeforeIndexAddedToCluster() throws Exception {\n final AtomicInteger beforeAddedCount = new AtomicInteger(0);\n final AtomicInteger allCreatedCount = new AtomicInteger(0);\n \n- IndicesLifecycle.Listener listener = new IndicesLifecycle.Listener() {\n+ IndexEventListener listener = new IndexEventListener() {\n @Override\n public void beforeIndexAddedToCluster(Index index, @IndexSettings Settings indexSettings) {\n beforeAddedCount.incrementAndGet();\n@@ -86,9 +97,9 @@ public void beforeIndexCreated(Index index, @IndexSettings Settings indexSetting\n }\n };\n \n- internalCluster().getInstance(IndicesLifecycle.class, node1).addListener(listener);\n- internalCluster().getInstance(IndicesLifecycle.class, node2).addListener(listener);\n- internalCluster().getInstance(IndicesLifecycle.class, node3).addListener(listener);\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node1).setNewDelegate(listener);\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node2).setNewDelegate(listener);\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node3).setNewDelegate(listener);\n \n client().admin().indices().prepareCreate(\"test\")\n .setSettings(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 3, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1).get();\n@@ -115,7 +126,7 @@ public void testIndexShardFailedOnRelocation() throws Throwable {\n client().admin().indices().prepareCreate(\"index1\").setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0).get();\n ensureGreen(\"index1\");\n String node2 = internalCluster().startNode();\n- internalCluster().getInstance(IndicesLifecycle.class, node2).addListener(new IndexShardStateChangeListener() {\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node2).setNewDelegate(new IndexShardStateChangeListener() {\n @Override\n public void beforeIndexCreated(Index index, @IndexSettings Settings indexSettings) {\n throw new RuntimeException(\"FAIL\");\n@@ -134,7 +145,7 @@ public void testIndexStateShardChanged() throws Throwable {\n String node1 = internalCluster().startNode();\n IndexShardStateChangeListener stateChangeListenerNode1 = new IndexShardStateChangeListener();\n //add a listener that keeps track of the shard 
state changes\n- internalCluster().getInstance(IndicesLifecycle.class, node1).addListener(stateChangeListenerNode1);\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node1).setNewDelegate(stateChangeListenerNode1);\n \n //create an index that should fail\n try {\n@@ -165,7 +176,7 @@ public void testIndexStateShardChanged() throws Throwable {\n String node2 = internalCluster().startNode();\n IndexShardStateChangeListener stateChangeListenerNode2 = new IndexShardStateChangeListener();\n //add a listener that keeps track of the shard state changes\n- internalCluster().getInstance(IndicesLifecycle.class, node2).addListener(stateChangeListenerNode2);\n+ internalCluster().getInstance(MockIndexEventListener.TestEventListener.class, node2).setNewDelegate(stateChangeListenerNode2);\n //re-enable allocation\n assertAcked(client().admin().cluster().prepareUpdateSettings()\n .setPersistentSettings(builder().put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"all\")));\n@@ -226,7 +237,7 @@ private static void assertShardStatesMatch(final IndexShardStateChangeListener s\n stateChangeListener.shardStates.clear();\n }\n \n- private static class IndexShardStateChangeListener extends IndicesLifecycle.Listener {\n+ private static class IndexShardStateChangeListener implements IndexEventListener {\n //we keep track of all the states (ordered) a shard goes through\n final ConcurrentMap<ShardId, List<IndexShardState>> shardStates = new ConcurrentHashMap<>();\n Settings creationSettings = Settings.EMPTY;", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerIT.java", "status": "modified" }, { "diff": "@@ -18,31 +18,40 @@\n */\n package org.elasticsearch.indices;\n \n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.ShardRoutingHelper;\n+import org.elasticsearch.cluster.routing.UnassignedInfo;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.DummyTransportAddress;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.settings.IndexSettings;\n+import org.elasticsearch.index.shard.IndexEventListener;\n+import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n-\n+import java.util.Arrays;\n import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n \n public class IndicesLifecycleListenerSingleNodeTests extends ESSingleNodeTestCase {\n- @Override\n- protected boolean resetNodeAfterTest() {\n- return true;\n- }\n \n public void testCloseDeleteCallback() throws Throwable {\n- final AtomicInteger counter = new AtomicInteger(1);\n+ IndicesService indicesService = getInstanceFromNode(IndicesService.class);\n assertAcked(client().admin().indices().prepareCreate(\"test\")\n .setSettings(SETTING_NUMBER_OF_SHARDS, 1, SETTING_NUMBER_OF_REPLICAS, 0));\n ensureGreen();\n- getInstanceFromNode(IndicesLifecycle.class).addListener(new IndicesLifecycle.Listener() {\n+ IndexMetaData metaData = 
indicesService.indexService(\"test\").getMetaData();\n+ ShardRouting shardRouting = indicesService.indexService(\"test\").getShard(0).routingEntry();\n+ assertAcked(client().admin().indices().prepareDelete(\"test\").get());\n+ final AtomicInteger counter = new AtomicInteger(1);\n+ IndexEventListener countingListener = new IndexEventListener() {\n @Override\n public void afterIndexClosed(Index index, @IndexSettings Settings indexSettings) {\n assertEquals(counter.get(), 5);\n@@ -62,7 +71,7 @@ public void afterIndexDeleted(Index index, @IndexSettings Settings indexSettings\n }\n \n @Override\n- public void beforeIndexDeleted(IndexService indexService) {\n+ public void beforeIndexDeleted(IndexService indexService) {\n assertEquals(counter.get(), 2);\n counter.incrementAndGet();\n }\n@@ -78,8 +87,19 @@ public void afterIndexShardDeleted(ShardId shardId, Settings indexSettings) {\n assertEquals(counter.get(), 4);\n counter.incrementAndGet();\n }\n- });\n- assertAcked(client().admin().indices().prepareDelete(\"test\").get());\n+ };\n+ IndexService index = indicesService.createIndex(metaData, Arrays.asList(countingListener));\n+ ShardRouting newRouting = new ShardRouting(shardRouting);\n+ String nodeId = newRouting.currentNodeId();\n+ ShardRoutingHelper.moveToUnassigned(newRouting, new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, \"boom\"));\n+ ShardRoutingHelper.initialize(newRouting, nodeId);\n+ IndexShard shard = index.createShard(0, newRouting);\n+ shard.updateRoutingEntry(newRouting, true);\n+ shard.recoverFromStore(newRouting, new DiscoveryNode(\"foo\", DummyTransportAddress.INSTANCE, Version.CURRENT));\n+ newRouting = new ShardRouting(newRouting);\n+ ShardRoutingHelper.moveToStarted(newRouting);\n+ shard.updateRoutingEntry(newRouting, true);\n+ indicesService.deleteIndex(\"test\", \"simon says\");\n assertEquals(7, counter.get());\n }\n ", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesLifecycleListenerSingleNodeTests.java", "status": "modified" }, { "diff": "@@ -48,13 +48,13 @@ public QueryBuilder getBuilderPrototype() {\n }\n \n public void testRegisterQueryParser() {\n- IndicesModule module = new IndicesModule(Settings.EMPTY);\n+ IndicesModule module = new IndicesModule();\n module.registerQueryParser(FakeQueryParser.class);\n assertSetMultiBinding(module, QueryParser.class, FakeQueryParser.class);\n }\n \n public void testRegisterQueryParserDuplicate() {\n- IndicesModule module = new IndicesModule(Settings.EMPTY);\n+ IndicesModule module = new IndicesModule();\n try {\n module.registerQueryParser(TermQueryParser.class);\n } catch (IllegalArgumentException e) {\n@@ -63,7 +63,7 @@ public void testRegisterQueryParserDuplicate() {\n }\n \n public void testRegisterHunspellDictionary() throws Exception {\n- IndicesModule module = new IndicesModule(Settings.EMPTY);\n+ IndicesModule module = new IndicesModule();\n InputStream aff = getClass().getResourceAsStream(\"/indices/analyze/conf_dir/hunspell/en_US/en_US.aff\");\n InputStream dic = getClass().getResourceAsStream(\"/indices/analyze/conf_dir/hunspell/en_US/en_US.dic\");\n Dictionary dictionary = new Dictionary(aff, dic);\n@@ -72,7 +72,7 @@ public void testRegisterHunspellDictionary() throws Exception {\n }\n \n public void testRegisterHunspellDictionaryDuplicate() {\n- IndicesModule module = new IndicesModule(Settings.EMPTY);\n+ IndicesModule module = new IndicesModule();\n try {\n module.registerQueryParser(TermQueryParser.class);\n } catch (IllegalArgumentException e) {", "filename": 
"core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java", "status": "modified" }, { "diff": "@@ -40,10 +40,10 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.index.shard.IndexEventListener;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n-import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.recovery.RecoveryFileChunkRequest;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.plugins.Plugin;\n@@ -53,6 +53,7 @@\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n+import org.elasticsearch.test.MockIndexEventListener;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.transport.Transport;\n@@ -89,9 +90,11 @@\n public class RelocationIT extends ESIntegTestCase {\n private final TimeValue ACCEPTABLE_RELOCATION_TIME = new TimeValue(5, TimeUnit.MINUTES);\n \n+\n+\n @Override\n protected Collection<Class<? extends Plugin>> nodePlugins() {\n- return pluginList(MockTransportService.TestPlugin.class);\n+ return pluginList(MockTransportService.TestPlugin.class, MockIndexEventListener.TestPlugin.class);\n }\n \n public void testSimpleRelocationNoIndexing() {\n@@ -282,16 +285,16 @@ public void testRelocationWhileRefreshing() throws Exception {\n }\n \n final Semaphore postRecoveryShards = new Semaphore(0);\n-\n- for (IndicesLifecycle indicesLifecycle : internalCluster().getInstances(IndicesLifecycle.class)) {\n- indicesLifecycle.addListener(new IndicesLifecycle.Listener() {\n- @Override\n- public void indexShardStateChanged(IndexShard indexShard, @Nullable IndexShardState previousState, IndexShardState currentState, @Nullable String reason) {\n- if (currentState == IndexShardState.POST_RECOVERY) {\n- postRecoveryShards.release();\n- }\n+ final IndexEventListener listener = new IndexEventListener() {\n+ @Override\n+ public void indexShardStateChanged(IndexShard indexShard, @Nullable IndexShardState previousState, IndexShardState currentState, @Nullable String reason) {\n+ if (currentState == IndexShardState.POST_RECOVERY) {\n+ postRecoveryShards.release();\n }\n- });\n+ }\n+ };\n+ for (MockIndexEventListener.TestEventListener eventListener : internalCluster().getInstances(MockIndexEventListener.TestEventListener.class)) {\n+ eventListener.setNewDelegate(listener);\n }\n \n ", "filename": "core/src/test/java/org/elasticsearch/recovery/RelocationIT.java", "status": "modified" } ] }
{ "body": "Index name expressions like: `<logstash-{now/D}>` are broken up into: `<logstash-{now` and `D}>`. This shouldn't happen. This PR fixes this by preventing the `PathTrie` to split based on `/` if it is currently in between between `<` and `>` characters.\n\nPR for #13665\n", "comments": [ { "body": "left a question, LGTM otherwise\n", "created_at": "2015-09-23T07:56:02Z" } ], "number": 13691, "title": "Index name expressions should not be broken up" }
{ "body": "With #13691 we introduced some custom logic to make sure that date math expressions like `<logstash-{now/D}>` don't get broken up into two where the slash appears in the expression. That said the only correct way to provide such a date math expression as part of the uri would be to properly escape the '/' instead. This fix also introduced a regression, as it would make sure that unescaped '/' are handled only in the case of date math expressions, but it removed support for properly escaped slashes anywhere else. The solution is to keep supporting escaped slashes only and require client libraries to properly escape them.\n\nThis commit reverts 93ad6969669e65db6ba7f35c375109e3f186675b and makes sure that our REST tests runner supports escaping of path parts, which was more involving than expected as each single part of the path needs to be properly escaped. I am not too happy with the current solution but it's the best I could do for now, maybe not that concerning anyway given that it's just test code. I do find uri encoding quite frustrating in java.\n\nRelates to #13691\nRelates to #13665\n\nCloses #14177\n", "number": 14216, "review_comments": [ { "body": "Why do you use replaceAll instead of URLEncoder?\n", "created_at": "2015-10-21T09:03:33Z" }, { "body": "URLEncoder is good only for query_string params, not for url path. They have different escaping rules according to the spec. Also URI applies the right rules to the path, but given that you have to provide the whole path it won't escape slashes. This is frustrating...but I couldn't find any better solutions.\n", "created_at": "2015-10-21T09:09:40Z" }, { "body": "ok\n", "created_at": "2015-10-21T09:47:18Z" } ], "title": "Restore support for escaped '/' as part of document id" }
{ "commits": [ { "message": "Restore support for escaped '/' as part of document id\n\nWith #13691 we introduced some custom logic to make sure that date math expressions like <logstash-{now/D}> don't get broken up into two where the slash appears in the expression. That said the only correct way to provide such a date math expression as part of the uri would be to properly escape the '/' instead. This fix also introduced a regression, as it would make sure that unescaped '/' are handled only in the case of date math expressions, but it removed support for properly escaped slashes anywhere else. The solution is to keep supporting escaped slashes only and require client libraries to properly escape them.\n\nThis commit reverts 93ad696 and makes sure that our REST tests runner supports escaping of path parts, which was more involving than expected as each single part of the path needs to be properly escaped. I am not too happy with the current solution but it's the best I could do for now, maybe not that concerning anyway given that it's just test code. I do find uri encoding quite frustrating in java.\n\nRelates to #13691\nRelates to #13665\n\nCloses #14177\nCloses #14216" } ], "files": [ { "diff": "@@ -21,9 +21,7 @@\n \n import org.elasticsearch.common.Strings;\n \n-import java.util.ArrayList;\n import java.util.HashMap;\n-import java.util.List;\n import java.util.Map;\n \n import static java.util.Collections.emptyMap;\n@@ -34,26 +32,15 @@\n */\n public class PathTrie<T> {\n \n- public static interface Decoder {\n+ public interface Decoder {\n String decode(String value);\n }\n \n- public static final Decoder NO_DECODER = new Decoder() {\n- @Override\n- public String decode(String value) {\n- return value;\n- }\n- };\n-\n private final Decoder decoder;\n private final TrieNode root;\n private final char separator;\n private T rootValue;\n \n- public PathTrie() {\n- this('/', \"*\", NO_DECODER);\n- }\n-\n public PathTrie(Decoder decoder) {\n this('/', \"*\", decoder);\n }\n@@ -198,7 +185,7 @@ public T retrieve(String[] path, int index, Map<String, String> params) {\n \n private void put(Map<String, String> params, TrieNode node, String value) {\n if (params != null && node.isNamedWildcard()) {\n- params.put(node.namedWildcard(), value);\n+ params.put(node.namedWildcard(), decoder.decode(value));\n }\n }\n \n@@ -230,7 +217,7 @@ public T retrieve(String path, Map<String, String> params) {\n if (path.length() == 0) {\n return rootValue;\n }\n- String[] strings = splitPath(decoder.decode(path));\n+ String[] strings = Strings.splitStringToArray(path, separator);\n if (strings.length == 0) {\n return rootValue;\n }\n@@ -241,50 +228,4 @@ public T retrieve(String path, Map<String, String> params) {\n }\n return root.retrieve(strings, index, params);\n }\n-\n- /*\n- Splits up the url path up by '/' and is aware of\n- index name expressions that appear between '<' and '>'.\n- */\n- String[] splitPath(final String path) {\n- if (path == null || path.length() == 0) {\n- return Strings.EMPTY_ARRAY;\n- }\n- int count = 1;\n- boolean splitAllowed = true;\n- for (int i = 0; i < path.length(); i++) {\n- final char currentC = path.charAt(i);\n- if ('<' == currentC) {\n- splitAllowed = false;\n- } else if (currentC == '>') {\n- splitAllowed = true;\n- } else if (splitAllowed && currentC == separator) {\n- count++;\n- }\n- }\n-\n- final List<String> result = new ArrayList<>(count);\n- final StringBuilder builder = new StringBuilder();\n-\n- splitAllowed = true;\n- for (int i = 0; i < path.length(); i++) {\n- 
final char currentC = path.charAt(i);\n- if ('<' == currentC) {\n- splitAllowed = false;\n- } else if (currentC == '>') {\n- splitAllowed = true;\n- } else if (splitAllowed && currentC == separator) {\n- if (builder.length() > 0) {\n- result.add(builder.toString());\n- builder.setLength(0);\n- }\n- continue;\n- }\n- builder.append(currentC);\n- }\n- if (builder.length() > 0) {\n- result.add(builder.toString());\n- }\n- return result.toArray(new String[result.size()]);\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/common/path/PathTrie.java", "status": "modified" }, { "diff": "@@ -19,12 +19,12 @@\n \n package org.elasticsearch.common.path;\n \n+import org.elasticsearch.rest.support.RestUtils;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.HashMap;\n import java.util.Map;\n \n-import static org.hamcrest.Matchers.arrayContaining;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n \n@@ -33,8 +33,15 @@\n */\n public class PathTrieTests extends ESTestCase {\n \n+ public static final PathTrie.Decoder NO_DECODER = new PathTrie.Decoder() {\n+ @Override\n+ public String decode(String value) {\n+ return value;\n+ }\n+ };\n+ \n public void testPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/b/c\", \"walla\");\n trie.insert(\"a/d/g\", \"kuku\");\n trie.insert(\"x/b/c\", \"lala\");\n@@ -61,13 +68,13 @@ public void testPath() {\n }\n \n public void testEmptyPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/\", \"walla\");\n assertThat(trie.retrieve(\"\"), equalTo(\"walla\"));\n }\n \n public void testDifferentNamesOnDifferentPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/{type}\", \"test1\");\n trie.insert(\"/b/{name}\", \"test2\");\n \n@@ -81,7 +88,7 @@ public void testDifferentNamesOnDifferentPath() {\n }\n \n public void testSameNameOnDifferentPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/c/{name}\", \"test1\");\n trie.insert(\"/b/{name}\", \"test2\");\n \n@@ -95,7 +102,7 @@ public void testSameNameOnDifferentPath() {\n }\n \n public void testPreferNonWildcardExecution() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"{test}\", \"test1\");\n trie.insert(\"b\", \"test2\");\n trie.insert(\"{test}/a\", \"test3\");\n@@ -111,7 +118,7 @@ public void testPreferNonWildcardExecution() {\n }\n \n public void testSamePathConcreteResolution() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"{x}/{y}/{z}\", \"test1\");\n trie.insert(\"{x}/_y/{k}\", \"test2\");\n \n@@ -127,7 +134,7 @@ public void testSamePathConcreteResolution() {\n }\n \n public void testNamedWildcardAndLookupWithWildcard() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"x/{test}\", \"test1\");\n trie.insert(\"{test}/a\", \"test2\");\n trie.insert(\"/{test}\", \"test3\");\n@@ -155,24 +162,20 @@ public void testNamedWildcardAndLookupWithWildcard() {\n assertThat(params.get(\"test\"), equalTo(\"*\"));\n }\n \n- public void testSplitPath() {\n- PathTrie<String> trie = new PathTrie<>();\n- assertThat(trie.splitPath(\"/a/\"), arrayContaining(\"a\"));\n- 
assertThat(trie.splitPath(\"/a/b\"),arrayContaining(\"a\", \"b\"));\n- assertThat(trie.splitPath(\"/a/b/c\"), arrayContaining(\"a\", \"b\", \"c\"));\n- assertThat(trie.splitPath(\"/a/b/<c/d>\"), arrayContaining(\"a\", \"b\", \"<c/d>\"));\n- assertThat(trie.splitPath(\"/a/b/<c/d>/d\"), arrayContaining(\"a\", \"b\", \"<c/d>\", \"d\"));\n-\n- assertThat(trie.splitPath(\"/<logstash-{now}>/_search\"), arrayContaining(\"<logstash-{now}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/d}>/_search\"), arrayContaining(\"<logstash-{now/d}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM|UTC}}>/log/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM|UTC}}>\", \"log\", \"_search\"));\n-\n- assertThat(trie.splitPath(\"/<logstash-{now/M}>,<logstash-{now/M-1M}>/_search\"), arrayContaining(\"<logstash-{now/M}>,<logstash-{now/M-1M}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M}>,<logstash-{now/M-1M}>/_search\"), arrayContaining(\"<logstash-{now/M}>,<logstash-{now/M-1M}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>,<logstash-{now/M-1M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>,<logstash-{now/M-1M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM|UTC}}>,<logstash-{now/M-1M{YYYY.MM|UTC}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM|UTC}}>,<logstash-{now/M-1M{YYYY.MM|UTC}}>\", \"_search\"));\n+ //https://github.com/elastic/elasticsearch/issues/14177\n+ //https://github.com/elastic/elasticsearch/issues/13665\n+ public void testEscapedSlashWithinUrl() {\n+ PathTrie<String> pathTrie = new PathTrie<>(RestUtils.REST_DECODER);\n+ pathTrie.insert(\"/{index}/{type}/{id}\", \"test\");\n+ HashMap<String, String> params = new HashMap<>();\n+ assertThat(pathTrie.retrieve(\"/index/type/a%2Fe\", params), equalTo(\"test\"));\n+ assertThat(params.get(\"index\"), equalTo(\"index\"));\n+ assertThat(params.get(\"type\"), equalTo(\"type\"));\n+ assertThat(params.get(\"id\"), equalTo(\"a/e\"));\n+ params.clear();\n+ assertThat(pathTrie.retrieve(\"/<logstash-{now%2Fd}>/type/id\", params), equalTo(\"test\"));\n+ assertThat(params.get(\"index\"), equalTo(\"<logstash-{now/d}>\"));\n+ assertThat(params.get(\"type\"), equalTo(\"type\"));\n+ assertThat(params.get(\"id\"), equalTo(\"id\"));\n }\n-\n }", "filename": "core/src/test/java/org/elasticsearch/common/path/PathTrieTests.java", "status": "modified" }, { "diff": "@@ -98,7 +98,7 @@ public void testThatPathsAreNormalized() throws Exception {\n notFoundUris.add(\"/_plugin/dummy/%2e%2e/%2e%2e/%2e%2e/%2e%2e/index.html\");\n notFoundUris.add(\"/_plugin/dummy/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2findex.html\");\n notFoundUris.add(\"/_plugin/dummy/%2E%2E/%2E%2E/%2E%2E/%2E%2E/index.html\");\n- notFoundUris.add(\"/_plugin/dummy/..\\\\..\\\\..\\\\..\\\\..\\\\log4j.properties\");\n+ notFoundUris.add(\"/_plugin/dummy/..%5C..%5C..%5C..%5C..%5Clog4j.properties\");\n \n for (String uri : notFoundUris) {\n HttpResponse response = httpClient().path(uri).execute();", "filename": "core/src/test/java/org/elasticsearch/plugins/SitePluginIT.java", "status": "modified" }, { "diff": "@@ -230,8 +230,9 @@ private HttpRequestBuilder callApiBuilder(String apiName, 
Map<String, String> pa\n httpRequestBuilder.method(RandomizedTest.randomFrom(supportedMethods));\n }\n \n- //the http method is randomized (out of the available ones with the chosen api)\n- return httpRequestBuilder.path(RandomizedTest.randomFrom(restApi.getFinalPaths(pathParts)));\n+ //the rest path to use is randomized out of the matching ones (if more than one)\n+ RestPath restPath = RandomizedTest.randomFrom(restApi.getFinalPaths(pathParts));\n+ return httpRequestBuilder.pathParts(restPath.getPathParts());\n }\n \n private RestApi restApi(String apiName) {", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/RestClient.java", "status": "modified" }, { "diff": "@@ -0,0 +1,97 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.test.rest.client;\n+\n+import java.util.*;\n+\n+public class RestPath {\n+ private final List<PathPart> parts;\n+ private final List<String> placeholders;\n+\n+ public RestPath(List<String> parts) {\n+ List<PathPart> pathParts = new ArrayList<>(parts.size());\n+ for (String part : parts) {\n+ pathParts.add(new PathPart(part, false));\n+ }\n+ this.parts = pathParts;\n+ this.placeholders = Collections.emptyList();\n+ }\n+\n+ public RestPath(String path) {\n+ String[] pathParts = path.split(\"/\");\n+ List<String> placeholders = new ArrayList<>();\n+ List<PathPart> parts = new ArrayList<>();\n+ for (String pathPart : pathParts) {\n+ if (pathPart.length() > 0) {\n+ if (pathPart.startsWith(\"{\")) {\n+ if (pathPart.indexOf('}') != pathPart.length() - 1) {\n+ throw new IllegalArgumentException(\"more than one parameter found in the same path part: [\" + pathPart + \"]\");\n+ }\n+ String placeholder = pathPart.substring(1, pathPart.length() - 1);\n+ parts.add(new PathPart(placeholder, true));\n+ placeholders.add(placeholder);\n+ } else {\n+ parts.add(new PathPart(pathPart, false));\n+ }\n+ }\n+ }\n+ this.placeholders = placeholders;\n+ this.parts = parts;\n+ }\n+\n+ public String[] getPathParts() {\n+ String[] parts = new String[this.parts.size()];\n+ int i = 0;\n+ for (PathPart part : this.parts) {\n+ parts[i++] = part.pathPart;\n+ }\n+ return parts;\n+ }\n+\n+ public boolean matches(Set<String> params) {\n+ return placeholders.size() == params.size() && placeholders.containsAll(params);\n+ }\n+\n+ public RestPath replacePlaceholders(Map<String,String> params) {\n+ List<String> finalPathParts = new ArrayList<>(parts.size());\n+ for (PathPart pathPart : parts) {\n+ if (pathPart.isPlaceholder) {\n+ String value = params.get(pathPart.pathPart);\n+ if (value == null) {\n+ throw new IllegalArgumentException(\"parameter [\" + pathPart.pathPart + \"] missing\");\n+ }\n+ finalPathParts.add(value);\n+ } else {\n+ finalPathParts.add(pathPart.pathPart);\n+ }\n+ }\n+ return 
new RestPath(finalPathParts);\n+ }\n+\n+ private static class PathPart {\n+ private final boolean isPlaceholder;\n+ private final String pathPart;\n+\n+ private PathPart(String pathPart, boolean isPlaceholder) {\n+ this.isPlaceholder = isPlaceholder;\n+ this.pathPart = pathPart;\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/RestPath.java", "status": "added" }, { "diff": "@@ -86,14 +86,42 @@ public HttpRequestBuilder port(int port) {\n return this;\n }\n \n+ /**\n+ * Sets the path to send the request to. Url encoding needs to be applied by the caller.\n+ * Use {@link #pathParts(String...)} instead if the path needs to be encoded, part by part.\n+ */\n public HttpRequestBuilder path(String path) {\n this.path = path;\n return this;\n }\n \n+ /**\n+ * Sets the path by providing the different parts (without slashes), which will be properly encoded.\n+ */\n+ public HttpRequestBuilder pathParts(String... path) {\n+ //encode rules for path and query string parameters are different. We use URI to encode the path, and URLEncoder for each query string parameter (see addParam).\n+ //We need to encode each path part separately though, as each one might contain slashes that need to be escaped, which needs to be done manually.\n+ if (path.length == 0) {\n+ this.path = \"/\";\n+ return this;\n+ }\n+ StringBuilder finalPath = new StringBuilder();\n+ for (String pathPart : path) {\n+ try {\n+ finalPath.append('/');\n+ URI uri = new URI(null, null, null, -1, pathPart, null, null);\n+ //manually escape any slash that each part may contain\n+ finalPath.append(uri.getRawPath().replaceAll(\"/\", \"%2F\"));\n+ } catch(URISyntaxException e) {\n+ throw new RuntimeException(\"unable to build uri\", e);\n+ }\n+ }\n+ this.path = finalPath.toString();\n+ return this;\n+ }\n+\n public HttpRequestBuilder addParam(String name, String value) {\n try {\n- //manually url encode params, since URI does it only partially (e.g. '+' stays as is)\n this.params.put(name, URLEncoder.encode(value, \"utf-8\"));\n return this;\n } catch (UnsupportedEncodingException e) {\n@@ -181,19 +209,12 @@ private HttpUriRequest buildRequest() {\n }\n \n private URI buildUri() {\n- try {\n- //url encode rules for path and query params are different. We use URI to encode the path, but we manually encode each query param through URLEncoder.\n- URI uri = new URI(protocol, null, host, port, path, null, null);\n- //String concatenation FTW. If we use the nicer multi argument URI constructor query parameters will get only partially encoded\n- //(e.g. 
'+' will stay as is) hence when trying to properly encode params manually they will end up double encoded (+ becomes %252B instead of %2B).\n- StringBuilder uriBuilder = new StringBuilder(protocol).append(\"://\").append(host).append(\":\").append(port).append(uri.getRawPath());\n- if (params.size() > 0) {\n- uriBuilder.append(\"?\").append(params.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).collect(Collectors.joining(\"&\")));\n- }\n- return URI.create(uriBuilder.toString());\n- } catch(URISyntaxException e) {\n- throw new IllegalArgumentException(\"unable to build uri\", e);\n+ StringBuilder uriBuilder = new StringBuilder(protocol).append(\"://\").append(host).append(\":\").append(port).append(path);\n+ if (params.size() > 0) {\n+ uriBuilder.append(\"?\").append(params.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).collect(Collectors.joining(\"&\")));\n }\n+ //using this constructor no url encoding happens, as we did everything upfront in addParam and pathPart methods\n+ return URI.create(uriBuilder.toString());\n }\n \n private HttpEntityEnclosingRequestBase addOptionalBody(HttpEntityEnclosingRequestBase requestBase) {", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/http/HttpRequestBuilder.java", "status": "modified" }, { "diff": "@@ -20,14 +20,12 @@\n \n import org.apache.http.client.methods.HttpPost;\n import org.apache.http.client.methods.HttpPut;\n+import org.elasticsearch.test.rest.client.RestPath;\n \n import java.util.ArrayList;\n-import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n-import java.util.regex.Matcher;\n-import java.util.regex.Pattern;\n \n /**\n * Represents an elasticsearch REST endpoint (api)\n@@ -41,7 +39,7 @@ public class RestApi {\n private List<String> params = new ArrayList<>();\n private BODY body = BODY.NOT_SUPPORTED;\n \n- public static enum BODY {\n+ public enum BODY {\n NOT_SUPPORTED, OPTIONAL, REQUIRED\n }\n \n@@ -131,28 +129,18 @@ public boolean isBodyRequired() {\n * Finds the best matching rest path given the current parameters and replaces\n * placeholders with their corresponding values received as arguments\n */\n- public String[] getFinalPaths(Map<String, String> pathParams) {\n-\n+ public RestPath[] getFinalPaths(Map<String, String> pathParams) {\n List<RestPath> matchingRestPaths = findMatchingRestPaths(pathParams.keySet());\n if (matchingRestPaths == null || matchingRestPaths.isEmpty()) {\n throw new IllegalArgumentException(\"unable to find matching rest path for api [\" + name + \"] and path params \" + pathParams);\n }\n \n- String[] paths = new String[matchingRestPaths.size()];\n+ RestPath[] restPaths = new RestPath[matchingRestPaths.size()];\n for (int i = 0; i < matchingRestPaths.size(); i++) {\n RestPath restPath = matchingRestPaths.get(i);\n- String path = restPath.path;\n- for (Map.Entry<String, String> paramEntry : restPath.parts.entrySet()) {\n- // replace path placeholders with actual values\n- String value = pathParams.get(paramEntry.getValue());\n- if (value == null) {\n- throw new IllegalArgumentException(\"parameter [\" + paramEntry.getValue() + \"] missing\");\n- }\n- path = path.replace(paramEntry.getKey(), value);\n- }\n- paths[i] = path;\n+ restPaths[i] = restPath.replacePlaceholders(pathParams);\n }\n- return paths;\n+ return restPaths;\n }\n \n /**\n@@ -165,15 +153,11 @@ private List<RestPath> findMatchingRestPaths(Set<String> restParams) {\n \n List<RestPath> matchingRestPaths = new ArrayList<>();\n RestPath[] 
restPaths = buildRestPaths();\n-\n for (RestPath restPath : restPaths) {\n- if (restPath.parts.size() == restParams.size()) {\n- if (restPath.parts.values().containsAll(restParams)) {\n- matchingRestPaths.add(restPath);\n- }\n+ if (restPath.matches(restParams)) {\n+ matchingRestPaths.add(restPath);\n }\n }\n-\n return matchingRestPaths;\n }\n \n@@ -184,33 +168,4 @@ private RestPath[] buildRestPaths() {\n }\n return restPaths;\n }\n-\n- private static class RestPath {\n- private static final Pattern PLACEHOLDERS_PATTERN = Pattern.compile(\"(\\\\{(.*?)})\");\n-\n- final String path;\n- //contains param to replace (e.g. {index}) and param key to use for lookup in the current values map (e.g. index)\n- final Map<String, String> parts;\n-\n- RestPath(String path) {\n- this.path = path;\n- this.parts = extractParts(path);\n- }\n-\n- private static Map<String,String> extractParts(String input) {\n- Map<String, String> parts = new HashMap<>();\n- Matcher matcher = PLACEHOLDERS_PATTERN.matcher(input);\n- while (matcher.find()) {\n- //key is e.g. {index}\n- String key = input.substring(matcher.start(), matcher.end());\n- if (matcher.groupCount() != 2) {\n- throw new IllegalArgumentException(\"no lookup key found for param [\" + key + \"]\");\n- }\n- //to be replaced with current value found with key e.g. index\n- String value = matcher.group(2);\n- parts.put(key, value);\n- }\n- return parts;\n- }\n- }\n }", "filename": "core/src/test/java/org/elasticsearch/test/rest/spec/RestApi.java", "status": "modified" } ] }
{ "body": "I try to get a document using date math, but Elasticsearch returns a error.\n\nVersion: Elasticsearch 2.0.0-beta2\n\nPut Document\n\n$ curl -XPUT 'localhost:9200/logstash-2015.09.19/animals/1' -d '{\"name\":\"puppy\"}'\n{\"_index\":\"logstash-2015.09.19\",\"_type\":\"animals\",\"_id\":\"1\",\"_version\":1,\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\n\nGet Document\n\n$ curl -XGET 'localhost:9200/logstash-2015.09.19/animals/1'\n{\"_index\":\"logstash-2015.09.19\",\"_type\":\"animals\",\"_id\":\"1\",\"_version\":1,\"found\":true,\"_source\":{\"name\":\"puppy\"}}\n\nGet Document with Date Math\n\n$ curl -XGET 'localhost:9200/`<logstash-{now/d}`>/animals/1'\nNo handler found for uri `[/<logstash-now/d>/animals/1]` and method [GET]\n", "comments": [ { "body": "It's not supposed to work AFAIK. Did you see that somewhere in docs?\nYou should do that on the client side.\n\nMay be using an alias would help you?\n", "created_at": "2015-09-20T05:56:04Z" }, { "body": "This page includes the date math information.\nhttps://www.elastic.co/guide/en/elasticsearch/reference/master/date-math-index-names.html\n\nExample from page.\ncurl -XGET 'localhost:9200/<logstash-{now/d-2d}>/_search' {\n \"query\" : {\n ...\n }\n}\n\nI try this curl command and get a error.\ncurl -XGET 'localhost:9200/<logstash-{now/d}>/_search' -d '{\n \"query\" : {\n \"match\" : {}\n }\n}'\n\n{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"<logstash-now\",\"index\":\"<logstash-now\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"<logstash-now\",\"index\":\"<logstash-now\"},\"status\":404}\n\nThere is also this pull request.\n\nAdd date math support in index names #12209\nhttps://github.com/elastic/elasticsearch/pull/12209\n", "created_at": "2015-09-20T06:27:15Z" }, { "body": "Great. I missed that !\n", "created_at": "2015-09-20T08:46:40Z" }, { "body": "Thanks for reporting @sirdavidhuang, this is indeed a bug. The issue here is that the index name is cut of at the date rounding (`/)` and only `<logstash-{now` ends up being received as index name.\n", "created_at": "2015-09-21T07:08:59Z" }, { "body": "Hi @sirdavidhuang we had initially treated this as a bug and fixed it, but afterwards we found out that the fix introduced a regression (#14177), which made us take a step back. Any slash that should not be used as a path separator in a uri should be properly escaped, and it is wrong to try and make distinctions between the different slashes on the server side depending on what surrounds them, that heuristic is not going to fly (in fact it didn't :) )\n\nWe are going to revert the initial fix then, the solution to this problem is to escape the '/' in the url like this: `curl -XGET 'localhost:9200/<logstash-{now%2Fd}>/animals/1'`. That is going to work and didn't require any fix in the first place.\n", "created_at": "2015-10-20T20:02:13Z" } ], "number": 13665, "title": "Date Math Not Working for GET" }
{ "body": "With #13691 we introduced some custom logic to make sure that date math expressions like `<logstash-{now/D}>` don't get broken up into two where the slash appears in the expression. That said the only correct way to provide such a date math expression as part of the uri would be to properly escape the '/' instead. This fix also introduced a regression, as it would make sure that unescaped '/' are handled only in the case of date math expressions, but it removed support for properly escaped slashes anywhere else. The solution is to keep supporting escaped slashes only and require client libraries to properly escape them.\n\nThis commit reverts 93ad6969669e65db6ba7f35c375109e3f186675b and makes sure that our REST tests runner supports escaping of path parts, which was more involving than expected as each single part of the path needs to be properly escaped. I am not too happy with the current solution but it's the best I could do for now, maybe not that concerning anyway given that it's just test code. I do find uri encoding quite frustrating in java.\n\nRelates to #13691\nRelates to #13665\n\nCloses #14177\n", "number": 14216, "review_comments": [ { "body": "Why do you use replaceAll instead of URLEncoder?\n", "created_at": "2015-10-21T09:03:33Z" }, { "body": "URLEncoder is good only for query_string params, not for url path. They have different escaping rules according to the spec. Also URI applies the right rules to the path, but given that you have to provide the whole path it won't escape slashes. This is frustrating...but I couldn't find any better solutions.\n", "created_at": "2015-10-21T09:09:40Z" }, { "body": "ok\n", "created_at": "2015-10-21T09:47:18Z" } ], "title": "Restore support for escaped '/' as part of document id" }
{ "commits": [ { "message": "Restore support for escaped '/' as part of document id\n\nWith #13691 we introduced some custom logic to make sure that date math expressions like <logstash-{now/D}> don't get broken up into two where the slash appears in the expression. That said the only correct way to provide such a date math expression as part of the uri would be to properly escape the '/' instead. This fix also introduced a regression, as it would make sure that unescaped '/' are handled only in the case of date math expressions, but it removed support for properly escaped slashes anywhere else. The solution is to keep supporting escaped slashes only and require client libraries to properly escape them.\n\nThis commit reverts 93ad696 and makes sure that our REST tests runner supports escaping of path parts, which was more involving than expected as each single part of the path needs to be properly escaped. I am not too happy with the current solution but it's the best I could do for now, maybe not that concerning anyway given that it's just test code. I do find uri encoding quite frustrating in java.\n\nRelates to #13691\nRelates to #13665\n\nCloses #14177\nCloses #14216" } ], "files": [ { "diff": "@@ -21,9 +21,7 @@\n \n import org.elasticsearch.common.Strings;\n \n-import java.util.ArrayList;\n import java.util.HashMap;\n-import java.util.List;\n import java.util.Map;\n \n import static java.util.Collections.emptyMap;\n@@ -34,26 +32,15 @@\n */\n public class PathTrie<T> {\n \n- public static interface Decoder {\n+ public interface Decoder {\n String decode(String value);\n }\n \n- public static final Decoder NO_DECODER = new Decoder() {\n- @Override\n- public String decode(String value) {\n- return value;\n- }\n- };\n-\n private final Decoder decoder;\n private final TrieNode root;\n private final char separator;\n private T rootValue;\n \n- public PathTrie() {\n- this('/', \"*\", NO_DECODER);\n- }\n-\n public PathTrie(Decoder decoder) {\n this('/', \"*\", decoder);\n }\n@@ -198,7 +185,7 @@ public T retrieve(String[] path, int index, Map<String, String> params) {\n \n private void put(Map<String, String> params, TrieNode node, String value) {\n if (params != null && node.isNamedWildcard()) {\n- params.put(node.namedWildcard(), value);\n+ params.put(node.namedWildcard(), decoder.decode(value));\n }\n }\n \n@@ -230,7 +217,7 @@ public T retrieve(String path, Map<String, String> params) {\n if (path.length() == 0) {\n return rootValue;\n }\n- String[] strings = splitPath(decoder.decode(path));\n+ String[] strings = Strings.splitStringToArray(path, separator);\n if (strings.length == 0) {\n return rootValue;\n }\n@@ -241,50 +228,4 @@ public T retrieve(String path, Map<String, String> params) {\n }\n return root.retrieve(strings, index, params);\n }\n-\n- /*\n- Splits up the url path up by '/' and is aware of\n- index name expressions that appear between '<' and '>'.\n- */\n- String[] splitPath(final String path) {\n- if (path == null || path.length() == 0) {\n- return Strings.EMPTY_ARRAY;\n- }\n- int count = 1;\n- boolean splitAllowed = true;\n- for (int i = 0; i < path.length(); i++) {\n- final char currentC = path.charAt(i);\n- if ('<' == currentC) {\n- splitAllowed = false;\n- } else if (currentC == '>') {\n- splitAllowed = true;\n- } else if (splitAllowed && currentC == separator) {\n- count++;\n- }\n- }\n-\n- final List<String> result = new ArrayList<>(count);\n- final StringBuilder builder = new StringBuilder();\n-\n- splitAllowed = true;\n- for (int i = 0; i < path.length(); i++) {\n- 
final char currentC = path.charAt(i);\n- if ('<' == currentC) {\n- splitAllowed = false;\n- } else if (currentC == '>') {\n- splitAllowed = true;\n- } else if (splitAllowed && currentC == separator) {\n- if (builder.length() > 0) {\n- result.add(builder.toString());\n- builder.setLength(0);\n- }\n- continue;\n- }\n- builder.append(currentC);\n- }\n- if (builder.length() > 0) {\n- result.add(builder.toString());\n- }\n- return result.toArray(new String[result.size()]);\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/common/path/PathTrie.java", "status": "modified" }, { "diff": "@@ -19,12 +19,12 @@\n \n package org.elasticsearch.common.path;\n \n+import org.elasticsearch.rest.support.RestUtils;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.HashMap;\n import java.util.Map;\n \n-import static org.hamcrest.Matchers.arrayContaining;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n \n@@ -33,8 +33,15 @@\n */\n public class PathTrieTests extends ESTestCase {\n \n+ public static final PathTrie.Decoder NO_DECODER = new PathTrie.Decoder() {\n+ @Override\n+ public String decode(String value) {\n+ return value;\n+ }\n+ };\n+ \n public void testPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/b/c\", \"walla\");\n trie.insert(\"a/d/g\", \"kuku\");\n trie.insert(\"x/b/c\", \"lala\");\n@@ -61,13 +68,13 @@ public void testPath() {\n }\n \n public void testEmptyPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/\", \"walla\");\n assertThat(trie.retrieve(\"\"), equalTo(\"walla\"));\n }\n \n public void testDifferentNamesOnDifferentPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/{type}\", \"test1\");\n trie.insert(\"/b/{name}\", \"test2\");\n \n@@ -81,7 +88,7 @@ public void testDifferentNamesOnDifferentPath() {\n }\n \n public void testSameNameOnDifferentPath() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"/a/c/{name}\", \"test1\");\n trie.insert(\"/b/{name}\", \"test2\");\n \n@@ -95,7 +102,7 @@ public void testSameNameOnDifferentPath() {\n }\n \n public void testPreferNonWildcardExecution() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"{test}\", \"test1\");\n trie.insert(\"b\", \"test2\");\n trie.insert(\"{test}/a\", \"test3\");\n@@ -111,7 +118,7 @@ public void testPreferNonWildcardExecution() {\n }\n \n public void testSamePathConcreteResolution() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"{x}/{y}/{z}\", \"test1\");\n trie.insert(\"{x}/_y/{k}\", \"test2\");\n \n@@ -127,7 +134,7 @@ public void testSamePathConcreteResolution() {\n }\n \n public void testNamedWildcardAndLookupWithWildcard() {\n- PathTrie<String> trie = new PathTrie<>();\n+ PathTrie<String> trie = new PathTrie<>(NO_DECODER);\n trie.insert(\"x/{test}\", \"test1\");\n trie.insert(\"{test}/a\", \"test2\");\n trie.insert(\"/{test}\", \"test3\");\n@@ -155,24 +162,20 @@ public void testNamedWildcardAndLookupWithWildcard() {\n assertThat(params.get(\"test\"), equalTo(\"*\"));\n }\n \n- public void testSplitPath() {\n- PathTrie<String> trie = new PathTrie<>();\n- assertThat(trie.splitPath(\"/a/\"), arrayContaining(\"a\"));\n- 
assertThat(trie.splitPath(\"/a/b\"),arrayContaining(\"a\", \"b\"));\n- assertThat(trie.splitPath(\"/a/b/c\"), arrayContaining(\"a\", \"b\", \"c\"));\n- assertThat(trie.splitPath(\"/a/b/<c/d>\"), arrayContaining(\"a\", \"b\", \"<c/d>\"));\n- assertThat(trie.splitPath(\"/a/b/<c/d>/d\"), arrayContaining(\"a\", \"b\", \"<c/d>\", \"d\"));\n-\n- assertThat(trie.splitPath(\"/<logstash-{now}>/_search\"), arrayContaining(\"<logstash-{now}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/d}>/_search\"), arrayContaining(\"<logstash-{now/d}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM|UTC}}>/log/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM|UTC}}>\", \"log\", \"_search\"));\n-\n- assertThat(trie.splitPath(\"/<logstash-{now/M}>,<logstash-{now/M-1M}>/_search\"), arrayContaining(\"<logstash-{now/M}>,<logstash-{now/M-1M}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M}>,<logstash-{now/M-1M}>/_search\"), arrayContaining(\"<logstash-{now/M}>,<logstash-{now/M-1M}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM}}>,<logstash-{now/M-1M{YYYY.MM}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM}}>,<logstash-{now/M-1M{YYYY.MM}}>\", \"_search\"));\n- assertThat(trie.splitPath(\"/<logstash-{now/M{YYYY.MM|UTC}}>,<logstash-{now/M-1M{YYYY.MM|UTC}}>/_search\"), arrayContaining(\"<logstash-{now/M{YYYY.MM|UTC}}>,<logstash-{now/M-1M{YYYY.MM|UTC}}>\", \"_search\"));\n+ //https://github.com/elastic/elasticsearch/issues/14177\n+ //https://github.com/elastic/elasticsearch/issues/13665\n+ public void testEscapedSlashWithinUrl() {\n+ PathTrie<String> pathTrie = new PathTrie<>(RestUtils.REST_DECODER);\n+ pathTrie.insert(\"/{index}/{type}/{id}\", \"test\");\n+ HashMap<String, String> params = new HashMap<>();\n+ assertThat(pathTrie.retrieve(\"/index/type/a%2Fe\", params), equalTo(\"test\"));\n+ assertThat(params.get(\"index\"), equalTo(\"index\"));\n+ assertThat(params.get(\"type\"), equalTo(\"type\"));\n+ assertThat(params.get(\"id\"), equalTo(\"a/e\"));\n+ params.clear();\n+ assertThat(pathTrie.retrieve(\"/<logstash-{now%2Fd}>/type/id\", params), equalTo(\"test\"));\n+ assertThat(params.get(\"index\"), equalTo(\"<logstash-{now/d}>\"));\n+ assertThat(params.get(\"type\"), equalTo(\"type\"));\n+ assertThat(params.get(\"id\"), equalTo(\"id\"));\n }\n-\n }", "filename": "core/src/test/java/org/elasticsearch/common/path/PathTrieTests.java", "status": "modified" }, { "diff": "@@ -98,7 +98,7 @@ public void testThatPathsAreNormalized() throws Exception {\n notFoundUris.add(\"/_plugin/dummy/%2e%2e/%2e%2e/%2e%2e/%2e%2e/index.html\");\n notFoundUris.add(\"/_plugin/dummy/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2findex.html\");\n notFoundUris.add(\"/_plugin/dummy/%2E%2E/%2E%2E/%2E%2E/%2E%2E/index.html\");\n- notFoundUris.add(\"/_plugin/dummy/..\\\\..\\\\..\\\\..\\\\..\\\\log4j.properties\");\n+ notFoundUris.add(\"/_plugin/dummy/..%5C..%5C..%5C..%5C..%5Clog4j.properties\");\n \n for (String uri : notFoundUris) {\n HttpResponse response = httpClient().path(uri).execute();", "filename": "core/src/test/java/org/elasticsearch/plugins/SitePluginIT.java", "status": "modified" }, { "diff": "@@ -230,8 +230,9 @@ private HttpRequestBuilder callApiBuilder(String apiName, 
Map<String, String> pa\n httpRequestBuilder.method(RandomizedTest.randomFrom(supportedMethods));\n }\n \n- //the http method is randomized (out of the available ones with the chosen api)\n- return httpRequestBuilder.path(RandomizedTest.randomFrom(restApi.getFinalPaths(pathParts)));\n+ //the rest path to use is randomized out of the matching ones (if more than one)\n+ RestPath restPath = RandomizedTest.randomFrom(restApi.getFinalPaths(pathParts));\n+ return httpRequestBuilder.pathParts(restPath.getPathParts());\n }\n \n private RestApi restApi(String apiName) {", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/RestClient.java", "status": "modified" }, { "diff": "@@ -0,0 +1,97 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.test.rest.client;\n+\n+import java.util.*;\n+\n+public class RestPath {\n+ private final List<PathPart> parts;\n+ private final List<String> placeholders;\n+\n+ public RestPath(List<String> parts) {\n+ List<PathPart> pathParts = new ArrayList<>(parts.size());\n+ for (String part : parts) {\n+ pathParts.add(new PathPart(part, false));\n+ }\n+ this.parts = pathParts;\n+ this.placeholders = Collections.emptyList();\n+ }\n+\n+ public RestPath(String path) {\n+ String[] pathParts = path.split(\"/\");\n+ List<String> placeholders = new ArrayList<>();\n+ List<PathPart> parts = new ArrayList<>();\n+ for (String pathPart : pathParts) {\n+ if (pathPart.length() > 0) {\n+ if (pathPart.startsWith(\"{\")) {\n+ if (pathPart.indexOf('}') != pathPart.length() - 1) {\n+ throw new IllegalArgumentException(\"more than one parameter found in the same path part: [\" + pathPart + \"]\");\n+ }\n+ String placeholder = pathPart.substring(1, pathPart.length() - 1);\n+ parts.add(new PathPart(placeholder, true));\n+ placeholders.add(placeholder);\n+ } else {\n+ parts.add(new PathPart(pathPart, false));\n+ }\n+ }\n+ }\n+ this.placeholders = placeholders;\n+ this.parts = parts;\n+ }\n+\n+ public String[] getPathParts() {\n+ String[] parts = new String[this.parts.size()];\n+ int i = 0;\n+ for (PathPart part : this.parts) {\n+ parts[i++] = part.pathPart;\n+ }\n+ return parts;\n+ }\n+\n+ public boolean matches(Set<String> params) {\n+ return placeholders.size() == params.size() && placeholders.containsAll(params);\n+ }\n+\n+ public RestPath replacePlaceholders(Map<String,String> params) {\n+ List<String> finalPathParts = new ArrayList<>(parts.size());\n+ for (PathPart pathPart : parts) {\n+ if (pathPart.isPlaceholder) {\n+ String value = params.get(pathPart.pathPart);\n+ if (value == null) {\n+ throw new IllegalArgumentException(\"parameter [\" + pathPart.pathPart + \"] missing\");\n+ }\n+ finalPathParts.add(value);\n+ } else {\n+ finalPathParts.add(pathPart.pathPart);\n+ }\n+ }\n+ return 
new RestPath(finalPathParts);\n+ }\n+\n+ private static class PathPart {\n+ private final boolean isPlaceholder;\n+ private final String pathPart;\n+\n+ private PathPart(String pathPart, boolean isPlaceholder) {\n+ this.isPlaceholder = isPlaceholder;\n+ this.pathPart = pathPart;\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/RestPath.java", "status": "added" }, { "diff": "@@ -86,14 +86,42 @@ public HttpRequestBuilder port(int port) {\n return this;\n }\n \n+ /**\n+ * Sets the path to send the request to. Url encoding needs to be applied by the caller.\n+ * Use {@link #pathParts(String...)} instead if the path needs to be encoded, part by part.\n+ */\n public HttpRequestBuilder path(String path) {\n this.path = path;\n return this;\n }\n \n+ /**\n+ * Sets the path by providing the different parts (without slashes), which will be properly encoded.\n+ */\n+ public HttpRequestBuilder pathParts(String... path) {\n+ //encode rules for path and query string parameters are different. We use URI to encode the path, and URLEncoder for each query string parameter (see addParam).\n+ //We need to encode each path part separately though, as each one might contain slashes that need to be escaped, which needs to be done manually.\n+ if (path.length == 0) {\n+ this.path = \"/\";\n+ return this;\n+ }\n+ StringBuilder finalPath = new StringBuilder();\n+ for (String pathPart : path) {\n+ try {\n+ finalPath.append('/');\n+ URI uri = new URI(null, null, null, -1, pathPart, null, null);\n+ //manually escape any slash that each part may contain\n+ finalPath.append(uri.getRawPath().replaceAll(\"/\", \"%2F\"));\n+ } catch(URISyntaxException e) {\n+ throw new RuntimeException(\"unable to build uri\", e);\n+ }\n+ }\n+ this.path = finalPath.toString();\n+ return this;\n+ }\n+\n public HttpRequestBuilder addParam(String name, String value) {\n try {\n- //manually url encode params, since URI does it only partially (e.g. '+' stays as is)\n this.params.put(name, URLEncoder.encode(value, \"utf-8\"));\n return this;\n } catch (UnsupportedEncodingException e) {\n@@ -181,19 +209,12 @@ private HttpUriRequest buildRequest() {\n }\n \n private URI buildUri() {\n- try {\n- //url encode rules for path and query params are different. We use URI to encode the path, but we manually encode each query param through URLEncoder.\n- URI uri = new URI(protocol, null, host, port, path, null, null);\n- //String concatenation FTW. If we use the nicer multi argument URI constructor query parameters will get only partially encoded\n- //(e.g. 
'+' will stay as is) hence when trying to properly encode params manually they will end up double encoded (+ becomes %252B instead of %2B).\n- StringBuilder uriBuilder = new StringBuilder(protocol).append(\"://\").append(host).append(\":\").append(port).append(uri.getRawPath());\n- if (params.size() > 0) {\n- uriBuilder.append(\"?\").append(params.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).collect(Collectors.joining(\"&\")));\n- }\n- return URI.create(uriBuilder.toString());\n- } catch(URISyntaxException e) {\n- throw new IllegalArgumentException(\"unable to build uri\", e);\n+ StringBuilder uriBuilder = new StringBuilder(protocol).append(\"://\").append(host).append(\":\").append(port).append(path);\n+ if (params.size() > 0) {\n+ uriBuilder.append(\"?\").append(params.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).collect(Collectors.joining(\"&\")));\n }\n+ //using this constructor no url encoding happens, as we did everything upfront in addParam and pathPart methods\n+ return URI.create(uriBuilder.toString());\n }\n \n private HttpEntityEnclosingRequestBase addOptionalBody(HttpEntityEnclosingRequestBase requestBase) {", "filename": "core/src/test/java/org/elasticsearch/test/rest/client/http/HttpRequestBuilder.java", "status": "modified" }, { "diff": "@@ -20,14 +20,12 @@\n \n import org.apache.http.client.methods.HttpPost;\n import org.apache.http.client.methods.HttpPut;\n+import org.elasticsearch.test.rest.client.RestPath;\n \n import java.util.ArrayList;\n-import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n-import java.util.regex.Matcher;\n-import java.util.regex.Pattern;\n \n /**\n * Represents an elasticsearch REST endpoint (api)\n@@ -41,7 +39,7 @@ public class RestApi {\n private List<String> params = new ArrayList<>();\n private BODY body = BODY.NOT_SUPPORTED;\n \n- public static enum BODY {\n+ public enum BODY {\n NOT_SUPPORTED, OPTIONAL, REQUIRED\n }\n \n@@ -131,28 +129,18 @@ public boolean isBodyRequired() {\n * Finds the best matching rest path given the current parameters and replaces\n * placeholders with their corresponding values received as arguments\n */\n- public String[] getFinalPaths(Map<String, String> pathParams) {\n-\n+ public RestPath[] getFinalPaths(Map<String, String> pathParams) {\n List<RestPath> matchingRestPaths = findMatchingRestPaths(pathParams.keySet());\n if (matchingRestPaths == null || matchingRestPaths.isEmpty()) {\n throw new IllegalArgumentException(\"unable to find matching rest path for api [\" + name + \"] and path params \" + pathParams);\n }\n \n- String[] paths = new String[matchingRestPaths.size()];\n+ RestPath[] restPaths = new RestPath[matchingRestPaths.size()];\n for (int i = 0; i < matchingRestPaths.size(); i++) {\n RestPath restPath = matchingRestPaths.get(i);\n- String path = restPath.path;\n- for (Map.Entry<String, String> paramEntry : restPath.parts.entrySet()) {\n- // replace path placeholders with actual values\n- String value = pathParams.get(paramEntry.getValue());\n- if (value == null) {\n- throw new IllegalArgumentException(\"parameter [\" + paramEntry.getValue() + \"] missing\");\n- }\n- path = path.replace(paramEntry.getKey(), value);\n- }\n- paths[i] = path;\n+ restPaths[i] = restPath.replacePlaceholders(pathParams);\n }\n- return paths;\n+ return restPaths;\n }\n \n /**\n@@ -165,15 +153,11 @@ private List<RestPath> findMatchingRestPaths(Set<String> restParams) {\n \n List<RestPath> matchingRestPaths = new ArrayList<>();\n RestPath[] 
restPaths = buildRestPaths();\n-\n for (RestPath restPath : restPaths) {\n- if (restPath.parts.size() == restParams.size()) {\n- if (restPath.parts.values().containsAll(restParams)) {\n- matchingRestPaths.add(restPath);\n- }\n+ if (restPath.matches(restParams)) {\n+ matchingRestPaths.add(restPath);\n }\n }\n-\n return matchingRestPaths;\n }\n \n@@ -184,33 +168,4 @@ private RestPath[] buildRestPaths() {\n }\n return restPaths;\n }\n-\n- private static class RestPath {\n- private static final Pattern PLACEHOLDERS_PATTERN = Pattern.compile(\"(\\\\{(.*?)})\");\n-\n- final String path;\n- //contains param to replace (e.g. {index}) and param key to use for lookup in the current values map (e.g. index)\n- final Map<String, String> parts;\n-\n- RestPath(String path) {\n- this.path = path;\n- this.parts = extractParts(path);\n- }\n-\n- private static Map<String,String> extractParts(String input) {\n- Map<String, String> parts = new HashMap<>();\n- Matcher matcher = PLACEHOLDERS_PATTERN.matcher(input);\n- while (matcher.find()) {\n- //key is e.g. {index}\n- String key = input.substring(matcher.start(), matcher.end());\n- if (matcher.groupCount() != 2) {\n- throw new IllegalArgumentException(\"no lookup key found for param [\" + key + \"]\");\n- }\n- //to be replaced with current value found with key e.g. index\n- String value = matcher.group(2);\n- parts.put(key, value);\n- }\n- return parts;\n- }\n- }\n }", "filename": "core/src/test/java/org/elasticsearch/test/rest/spec/RestApi.java", "status": "modified" } ] }
{ "body": "When installing a plugin in ES 1.7, there is a check to verify that the plugin doesn't already exist, and the plugin installation aborts if it does: \n\n```\nbin/plugin install elasticsearch/license/latest\n-> Installing elasticsearch/license/latest...\nTrying http://download.elasticsearch.org/elasticsearch/license/license-latest.zip...\nDownloading ..............................DONE\nInstalled elasticsearch/license/latest into /path/to/elasticsearch-1.7.0/plugins/license\n\nbin/plugin install elasticsearch/license/latest\n-> Installing elasticsearch/license/latest...\nFailed to install elasticsearch/license/latest, reason: plugin directory /path/to/elasticsearch-1.7.0/plugins/license already exists. To update the plugin, uninstall it first using --remove elasticsearch/license/latest command\n\n```\n\nIn 2.0, this check no longer appears to take place, and the plugin installation appears to be attempted and fails: \n\n```\nbin/plugin install license\n-> Installing license...\nTrying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.0.0-rc1/license-2.0.0-rc1.zip ...\nDownloading ......DONE\nVerifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.0.0-rc1/license-2.0.0-rc1.zip checksums if available ...\nDownloading .DONE\nInstalled license into /path/to/elasticsearch-2.0.0-rc1/plugins/license\n\nbin/plugin install license\n-> Installing license...\nTrying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.0.0-rc1/license-2.0.0-rc1.zip ...\nDownloading ......DONE\nVerifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.0.0-rc1/license-2.0.0-rc1.zip checksums if available ...\nDownloading .DONE\nERROR: java.lang.IllegalStateException: jar hell!\nclass: org.elasticsearch.license.plugin.rest.RestGetLicenseAction\njar1: /path/to/elasticsearch-2.0.0-rc1/plugins/license/license-2.0.0-rc1.jar\njar2: /var/folders/1r/t56v_5710ygd6klpsbsrmbsr0000gn/T/4690718986021044046/license/license-2.0.0-rc1.jar\n```\n", "comments": [ { "body": "Its actually not really trying to install the plugin again, its just that the jar hell check happens before the \"already installed check\". Thanks @skearns64 \n", "created_at": "2015-10-20T12:53:38Z" }, { "body": "The issue is unique to this license plugin: it hits this problem because it uses the deprecated `isolated=false`. This kind of thing is not really surprising, its why we deprecated it...\n", "created_at": "2015-10-20T13:14:08Z" } ], "number": 14205, "title": "Plugin Installation No Longer Checks for Already Installed Plugin" }
{ "body": "In the case of a plugin using the deprecated `isolated=false` functionality\nthis will cause confusion otherwise.\n\nCloses #14205\n", "number": 14207, "review_comments": [], "title": "Check \"plugin already installed\" before jar hell check." }
{ "commits": [ { "message": "check \"plugin already installed\" before jar hell check.\n\nIn the case of a plugin using the deprecated `isolated=false` functionality\nthis will cause confusion otherwise.\n\nCloses #14205" } ], "files": [ { "diff": "@@ -220,18 +220,18 @@ private void extract(PluginHandle pluginHandle, Terminal terminal, Path pluginFi\n PluginInfo info = PluginInfo.readFromProperties(root);\n terminal.println(VERBOSE, \"%s\", info);\n \n- // check for jar hell before any copying\n- if (info.isJvm()) {\n- jarHellCheck(root, info.isIsolated());\n- }\n-\n // update name in handle based on 'name' property found in descriptor file\n pluginHandle = new PluginHandle(info.getName(), pluginHandle.version, pluginHandle.user);\n final Path extractLocation = pluginHandle.extractedDir(environment);\n if (Files.exists(extractLocation)) {\n throw new IOException(\"plugin directory \" + extractLocation.toAbsolutePath() + \" already exists. To update the plugin, uninstall it first using 'remove \" + pluginHandle.name + \"' command\");\n }\n \n+ // check for jar hell before any copying\n+ if (info.isJvm()) {\n+ jarHellCheck(root, info.isIsolated());\n+ }\n+\n // read optional security policy (extra permissions)\n // if it exists, confirm or warn the user\n Path policy = root.resolve(PluginInfo.ES_PLUGIN_POLICY);", "filename": "core/src/main/java/org/elasticsearch/plugins/PluginManager.java", "status": "modified" }, { "diff": "@@ -57,13 +57,15 @@\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.nio.file.SimpleFileVisitor;\n+import java.nio.file.StandardOpenOption;\n import java.nio.file.attribute.BasicFileAttributes;\n import java.nio.file.attribute.PosixFileAttributeView;\n import java.nio.file.attribute.PosixFileAttributes;\n import java.nio.file.attribute.PosixFilePermission;\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Locale;\n+import java.util.jar.JarOutputStream;\n import java.util.zip.ZipEntry;\n import java.util.zip.ZipOutputStream;\n \n@@ -407,6 +409,45 @@ public void testInstallPlugin() throws IOException {\n assertThatPluginIsListed(pluginName);\n }\n \n+ /**\n+ * @deprecated support for this is not going to stick around, seriously.\n+ */\n+ @Deprecated\n+ public void testAlreadyInstalledNotIsolated() throws Exception {\n+ String pluginName = \"fake-plugin\";\n+ Path pluginDir = createTempDir().resolve(pluginName);\n+ Files.createDirectories(pluginDir);\n+ // create a jar file in the plugin\n+ Path pluginJar = pluginDir.resolve(\"fake-plugin.jar\");\n+ try (ZipOutputStream out = new JarOutputStream(Files.newOutputStream(pluginJar, StandardOpenOption.CREATE))) {\n+ out.putNextEntry(new ZipEntry(\"foo.class\"));\n+ out.closeEntry();\n+ }\n+ String pluginUrl = createPlugin(pluginDir,\n+ \"description\", \"fake desc\",\n+ \"name\", pluginName,\n+ \"version\", \"1.0\",\n+ \"elasticsearch.version\", Version.CURRENT.toString(),\n+ \"java.version\", System.getProperty(\"java.specification.version\"),\n+ \"isolated\", \"false\",\n+ \"jvm\", \"true\",\n+ \"classname\", \"FakePlugin\");\n+\n+ // install\n+ ExitStatus status = new PluginManagerCliParser(terminal).execute(args(\"install \" + pluginUrl));\n+ assertEquals(\"unexpected exit status: output: \" + terminal.getTerminalOutput(), ExitStatus.OK, status);\n+\n+ // install again\n+ status = new PluginManagerCliParser(terminal).execute(args(\"install \" + pluginUrl));\n+ List<String> output = terminal.getTerminalOutput();\n+ assertEquals(\"unexpected exit status: output: \" + output, 
ExitStatus.IO_ERROR, status);\n+ boolean foundExpectedMessage = false;\n+ for (String line : output) {\n+ foundExpectedMessage |= line.contains(\"already exists\");\n+ }\n+ assertTrue(foundExpectedMessage);\n+ }\n+\n public void testInstallSitePluginVerbose() throws IOException {\n String pluginName = \"fake-plugin\";\n Path pluginDir = createTempDir().resolve(pluginName);", "filename": "core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java", "status": "modified" } ] }
{ "body": "```\n# With a one-word query and minimum_should_match=-50%, adding extra non-matching fields should not matter.\n# Tested on v1.7.2.\n\n# delete and re-create the index\ncurl -XDELETE localhost:9200/test\ncurl -XPUT localhost:9200/test\n\necho \n\n # insert a document\ncurl -XPUT 'http://localhost:9200/test/test/1' -d '\n { \"title\": \"test document\"}\n '\ncurl -XPOST 'http://localhost:9200/test/_refresh'\n\necho \n\n # this correctly finds the document (f1 is a non-existent field)\n curl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho \n\n# this incorrectly does not find the document (f1 and f2 are non-existent fields)\ncurl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho\n```\n", "comments": [ { "body": "This does look like a bug. The min_should_match is being applied at the wrong level:\n\n```\nGET /test/test/_validate/query?explain\n{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n}\n```\n\nReturns an explanation of:\n\n```\n+((f1:test title:test f2:test)~2)\n```\n\nWhile the `query_string` and `multi_match` equivalents return:\n\n```\n+(title:test | f1:test | f2:test)\n```\n", "created_at": "2015-10-02T15:53:05Z" }, { "body": "I had a look and saw that `simple_query_string` iterates over all fields for each token in the query string and combines them all in boolean query `should` clauses, and we apply the `minimum_should_match` on the whole result. \n\n@clintongormley As far as I understand you, we should parse the query string for each field separately, apply the `minimum_should_match` there and then combine the result in an overall Boolean query. This however raised another question for me. Suppose we have two terms like `\"query\" : \"test document\"` instead, then currently we we get:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nIf we would instead create the query per field individually we would get something like\n\n```\n((title:test title:document)~1 (f1:test f1:document)~1 (f2:test f2:document)~1)\n```\n\nWhile treating the query string for each field individually looks like the right behaviour in this case, I wonder if this will break other cases. wdyt?\n", "created_at": "2015-10-14T10:09:12Z" }, { "body": "@cbuescher the `query_string` query takes the same approach as your first output, ie:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nI think the bug is maybe a bit more subtle. A query across 3 fields for two terms with min should match 80% results in:\n\n```\nbool:\n min_should_match: 80% (==1)\n should:\n bool:\n should: [ f1:term1, f2:term1, f3:term1]\n bool:\n should: [ f1:term2, f2:term2, f3:term2]\n```\n\nhowever with only one term it is producing:\n\n```\nbool:\n min_should_match: 80% (==2) \n should: [ f1:term1, f2:term1, f3:term1]\n```\n\nIn other words, min should match is being applied to the wrong `bool` query. 
Instead, even the one term case should be wrapped in another `bool` query, and the min should match should be applied at that level.\n", "created_at": "2015-10-14T11:31:18Z" }, { "body": "@clintongormley Yes, I think thats what I meant. I'm working on a PR that applies the `minimum_should_match` to sub-queries that only target one field. That way your examples above would change to something like\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==1)\n should: [ f1:term1, f1:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f2:term1, f2:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f3:term1, f3:term2]\n```\n\nand for one term\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==0)\n should: [ f1:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f2:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f3:term1]\n```\n\nIn the later case we already additionally simplify one-term bool queries to TermQueries.\n", "created_at": "2015-10-14T12:03:12Z" }, { "body": "@cbuescher I think that is incorrect. The simple query string query (like the query string query) is term-centric rather than field-centric. In other words, min should match should be applied to the number of terms (regardless of which field the term is in).\n\nI'm guessing that there is an \"optimization\" for the one term case where the field-level bool clause is not wrapped in an outer bool clause. Then the min should match is applied at the field level instead of at the term level, resulting in the wrong calculation.\n", "created_at": "2015-10-15T11:22:45Z" }, { "body": "That guess seems right, there is an optimization in lucenes\nSimpleQueryParser for boolean queries with 0 or 1 clauses that seems to be\nthe problem. I think we can overwrite that.\n\nOn Thu, Oct 15, 2015 at 1:23 PM, Clinton Gormley notifications@github.com\nwrote:\n\n> @cbuescher https://github.com/cbuescher I think that is incorrect. The\n> simple query string query (like the query string query) is term-centric\n> rather than field-centric. In other words, min should match should be\n> applied to the number of terms (regardless of which field the term is in).\n> \n> I'm guessing that there is an \"optimization\" for the one term case where\n> the field-level bool clause is not wrapped in an outer bool clause. Then\n> the min should match is applied at the field level instead of at the term\n> level, resulting in the wrong calculation.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/13884#issuecomment-148358711\n> .\n\n## \n\nChristoph Büscher\n", "created_at": "2015-10-15T12:49:18Z" }, { "body": "If this is in Lucene, perhaps it should be fixed there?\n\n@jdconrad what do you think?\n", "created_at": "2015-10-15T15:04:19Z" }, { "body": "@clintongormley I don't think SimpleQueryParser#simplify() is at the root of this anymore. The problem seems to be that SimpleQueryParser parses term by term-centric, but only starts wrapping the resulting queries when combining more than two of them. For one search term and two fields I get a Boolean query with two TermQuery clauses (without enclosing Boolean query), for two terms and one field I get an enclosing Boolean query with two Boolean query subclauses. 
I'm not sure yet how this can be distiguished from outside of the Lucene parser without inspecting the query, and if a solution like that holds for more complicated cases.\n", "created_at": "2015-10-15T15:15:15Z" }, { "body": "Althought it would be nice if Lucene SimpleQueryParse would output a Boolquery with one should-clause and three nested Boolqueries for the 1-term/multi-field case, I think we can detect this case and do the wrapping in the additional Boolquery in the SimpleQueryStringBuilder. I just opened a PR.\n", "created_at": "2015-10-19T08:58:58Z" }, { "body": "The SQP wasn't really designed around multi-field terms, but needed to have it added afterwards for use as a default field which is why the min-should-match never gets applied down at that the level. I don't know if the correct behavior is to make it work on multi-fields. I'll have to give that some thought given that it really is as @cbuescher described as term-centric, and it sort of supposed to be disguised from the user. One thing that will make this easier to fix, though, I believe is #4707, since it will flatten the parse tree a bit.\n", "created_at": "2015-10-19T17:15:49Z" }, { "body": "@jdconrad thanks for explaining, in the meantime I opened #14186 which basically tries to distinguish the one vs. multi-field cases and tries wraps the resulting query one more time to get a correct min-should-match. Please leave comment there if my current approach will colide the plans regarding #4707.\n", "created_at": "2015-10-20T09:10:12Z" }, { "body": "Reopening this issue since the fix proposed in #14186 was too fragile. Discussed with @javanna and @jpountz, at this point we think the options are either fixing this in lucenes SimpleQueryParser so that we can apply minimum_should_match correctly on the ES side or remove this option from `simple_query_string` entirely because it cannot properly supported.\n", "created_at": "2015-11-03T11:01:39Z" }, { "body": "Trying to sum up this issue so far: \n- the number of should-clauses returned by `SimpleQueryParser` is not 1 for one search term and multiple fields, so we cannot apply `minimum_should_match` correctly in `SimpleQueryStringBuilder`. e.g. for `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` SimpleQueryParser returns a BooleanQuery with three should-clauses. As soon as we add more search terms, the number of should-clauses is the same as the number of search terms, e.g. `\"query\" : \"term1 term2\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` returns a BooleanQuery with two subclauses, one per term.\n- it is difficult to determine the number of terms from the query string upfront, because the tokenization depends on the analyzer used, so we really need `SimpleQueryParser#parse()` for this.\n- it is hard to determine the correct number of terms from the returned lucene query without making assumptions about the inner structure of the query (which is subject to change, reason for #14186 beeing reverted). e.g. currently `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` and `\"query\" : \"term1 term2 term3\", \"fields\" : [ \"f1\" ]` will return a BooleanQuery with same structure (three should-clauses, each containing a TermQuery). \n", "created_at": "2015-11-03T11:57:40Z" }, { "body": "@cbuescher this issue is fixed by https://github.com/elastic/elasticsearch/pull/16155. 
\n@rmuir has pointed out a nice way to distinguish between a single word query with multiple fields against a multi word query with a single field: we just have to check if the coord are disabled on the top level BooleanQuery, the simple query parser disables the coord when the boolean query for multiple fields is built.\nThough I tested the single word with multiple fields case only, if you think of other issues please reopen this ticket or open a new one ;).\n", "created_at": "2016-02-04T18:11:51Z" }, { "body": "@jimferenczi thats great, I just checked this with a test which is close to the problem desciption here. I'm not sure if this adds anything to your tests, but just in case I justed opened #16465 which adds this as an integration test for SimpleQueryStringQuery. Maybe you can take a look and tell me if it makes sense to add those as well.\n", "created_at": "2016-02-04T21:08:13Z" }, { "body": "@cbuescher thanks, a unit test in SimpleQueryStringBuilderTest could be useful as well. The integ test does not check the minimum should match that is applied (or not) to the boolean query. \n", "created_at": "2016-02-05T08:55:03Z" } ], "number": 13884, "title": "Bug with simple_query_string, minimum_should_match, and multiple fields." }
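To make the term-centric vs. field-centric distinction discussed in #13884 above concrete, here is a small standalone Lucene sketch (illustration only, not Elasticsearch or Lucene source; the class name and the hard-coded clause counts are invented for the example). It builds the two BooleanQuery shapes by hand and shows where a minimum-number-of-should-clauses requirement ends up in each case:

```
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class MinimumShouldMatchLevelDemo {

    // One analyzed term expanded over several fields: (f1:term f2:term ...), all SHOULD clauses.
    static BooleanQuery.Builder expandOverFields(String term, String... fields) {
        BooleanQuery.Builder builder = new BooleanQuery.Builder();
        for (String field : fields) {
            builder.add(new TermQuery(new Term(field, term)), Occur.SHOULD);
        }
        return builder;
    }

    public static void main(String[] args) {
        // Field-centric (the buggy shape): the requirement sits on the per-field clauses, so a
        // "-50%" spec resolved against 3 clauses demands that 2 of the 3 fields contain the term.
        BooleanQuery.Builder fieldCentric = expandOverFields("test", "title", "f1", "f2");
        fieldCentric.setMinimumNumberShouldMatch(2);
        System.out.println(fieldCentric.build()); // prints something like (title:test f1:test f2:test)~2

        // Term-centric (the intended shape): the per-field expansion is wrapped once more, so the
        // outer query has one SHOULD clause per term and the requirement is resolved against terms.
        BooleanQuery.Builder termCentric = new BooleanQuery.Builder();
        termCentric.add(expandOverFields("test", "title", "f1", "f2").build(), Occur.SHOULD);
        termCentric.setMinimumNumberShouldMatch(1);
        System.out.println(termCentric.build()); // prints something like ((title:test f1:test f2:test))~1
    }
}
```

With the flat shape the `~2` is resolved against the three field clauses, which matches the `+((f1:test title:test f2:test)~2)` explain output quoted in the discussion above; with the wrapped shape the requirement is resolved against the single term clause.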
{ "body": "Currently a `simple_query_string` query with one term and multiple fields\ngets parsed to a BooleanQuery where the number of clauses is determined\nby the number of fields, which lead to wrong calculation of `minimum_should_match`.\n\nThis PR adds checks to detect this case and wrap the resulting BooleanQuery into\nanother BooleanQuery with just one should-clause, so `minimum_should_match`\ncalculation is corrected.\n\nIn order to differentiate between the case where one term is queried across\nmultiple fields and the case where multiple terms are queried on one field,\nwe override a simplification step in Lucenes SimpleQueryParser that reduces\na one-clause BooleanQuery to the clause itself.\n\nCloses #13884\n", "number": 14186, "review_comments": [ { "body": "I'd recommend using the same syntax Lucene does:\n\n```\nbq.clauses().iterator().next().getQuery()\n```\n\nJust to follow their conventions\n", "created_at": "2015-10-26T17:17:55Z" }, { "body": "Also, this doesn't really seem to make sense to me, since it could potentially do different things depending on the order of the clauses in the query (if it's a bool holding clauses like [TermQuery BooleanQuery] versus a bool holding [BooleanQuery TermQuery]). Am I misunderstanding something?\n", "created_at": "2015-10-26T17:19:38Z" }, { "body": "If there isn't more than a single bq clause, should we just ignore applying this at all anyway?\n", "created_at": "2015-10-26T17:21:05Z" }, { "body": "Should add `.trim()` here also?\n", "created_at": "2015-10-26T17:23:25Z" }, { "body": "The problem in #13884 is that for one query term & multiple fields, `minimum_should_match` gets calculated based on the number of fields instead of just the one term like it should. As far as I can see this seems to be because SimpleQueryParser only starts combining the BooleanQueries that are produced for each term into new BooleanQueries for more than one term/subquery. \n\nWhat I'm trying to do here is applying this additional wrapping for the one-term case. The problem now is, how to distiguish the one-term/multi-field case from the multi-term/one-field case. Before this change, Lucenes SimpleQueryParser#simplify() performs some rewrite of one-clause subqueries, so the resulting queries for those two cases have the same structure:\n\n```\nfield1: \"T1 T2 T3\" => (field1:T1 field1:T2 field1:T3) \nfield1, field2, field3: \"T1\" => (field1:T1 field2:T1 field3:T1)\n```\n\nThis PR removes this simplification, so now each term/subquery should be a BooleanQuery in itself:\n\n```\nfield1: \"T1 T2 T3\" => ( (field1:T1) (field1:T2) (field1:T3) )\nfield1, field2, field3: \"T1\" => (field1:T1 field2:T1 field3:T1)\n```\n\nNow I can detect the \"problematic\" case (one term, multiple fields), since it will have leaf TermQueries on the second level of the query tree. That's the case where we need one extra wrapping into an additional BooleanQuery, so the `minimum_should_match` gets calculated correctly.\n", "created_at": "2015-10-26T23:18:40Z" }, { "body": "There might be more than one bq clause here, e.g. if we query `\"t1 t2\"` for two fields `f1`,`f2`, then the resulting query will be `( (f1:t1 f2:t1) (f1:t2 f2:t2) )` (one clause per term), so we should apply `minimum_should_match`. I think this could go into an `else` branch of the previous check, though.\n", "created_at": "2015-10-26T23:23:49Z" } ], "title": "Fix `minimum should match` in `simple_query_string` for single term and multiple fields" }
{ "commits": [ { "message": "Query DSL: Fix `minimum should match` in `simple_query_string` for single term and multiple fields\n\nCurrently a `simple_query_string` query with one term and multiple fields\ngets parsed to a BooleanQuery where the number of clauses is determined\nby the number of fields, which lead to wrong calculation of `minimum_should_match`.\n\nThis PR adds checks to detect this case and wrap the resulting BooleanQuery into\nanother BooleanQuery with just one should-clause, so `minimum_should_match`\ncalculation is corrected.\n\nIn order to differentiate between the case where one term is queried across\nmultiple fields and the case where multiple terms are queried on one field,\nwe override a simplification step in Lucenes SimpleQueryParser that reduces\na one-clause BooleanQuery to the clause itself.\n\nCloses #13884" } ], "files": [ { "diff": "@@ -70,7 +70,7 @@ public Query newDefaultQuery(String text) {\n rethrowUnlessLenient(e);\n }\n }\n- return super.simplify(bq.build());\n+ return simplify(bq.build());\n }\n \n /**\n@@ -93,7 +93,7 @@ public Query newFuzzyQuery(String text, int fuzziness) {\n rethrowUnlessLenient(e);\n }\n }\n- return super.simplify(bq.build());\n+ return simplify(bq.build());\n }\n \n @Override\n@@ -111,7 +111,7 @@ public Query newPhraseQuery(String text, int slop) {\n rethrowUnlessLenient(e);\n }\n }\n- return super.simplify(bq.build());\n+ return simplify(bq.build());\n }\n \n /**\n@@ -140,7 +140,19 @@ public Query newPrefixQuery(String text) {\n return rethrowUnlessLenient(e);\n }\n }\n- return super.simplify(bq.build());\n+ return simplify(bq.build());\n+ }\n+\n+ /**\n+ * Override of lucenes SimpleQueryParser that doesn't simplify for the 1-clause case.\n+ */\n+ @Override\n+ protected Query simplify(BooleanQuery bq) {\n+ if (bq.clauses().isEmpty()) {\n+ return null;\n+ } else {\n+ return bq;\n+ }\n }\n \n /**\n@@ -295,7 +307,7 @@ public boolean equals(Object obj) {\n // For further reasoning see\n // https://issues.apache.org/jira/browse/LUCENE-4021\n return (Objects.equals(locale.toLanguageTag(), other.locale.toLanguageTag())\n- && Objects.equals(lowercaseExpandedTerms, other.lowercaseExpandedTerms) \n+ && Objects.equals(lowercaseExpandedTerms, other.lowercaseExpandedTerms)\n && Objects.equals(lenient, other.lenient)\n && Objects.equals(analyzeWildcard, other.analyzeWildcard));\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -20,6 +20,8 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.search.BooleanClause;\n+import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Strings;\n@@ -286,7 +288,16 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n \n Query query = sqp.parse(queryText);\n if (minimumShouldMatch != null && query instanceof BooleanQuery) {\n- query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch);\n+ BooleanQuery booleanQuery = (BooleanQuery) query;\n+ // treat special case for one term query and more than one field\n+ // we need to wrap this in additional BooleanQuery so minimum_should_match is applied correctly\n+ if (booleanQuery.clauses().size() > 1\n+ && ((booleanQuery.clauses().iterator().next().getQuery() instanceof BooleanQuery) == false)) {\n+ BooleanQuery.Builder builder = new 
BooleanQuery.Builder();\n+ builder.add(new BooleanClause(booleanQuery, Occur.SHOULD));\n+ booleanQuery = builder.build();\n+ }\n+ query = Queries.applyMinimumShouldMatch(booleanQuery, minimumShouldMatch);\n }\n return query;\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java", "status": "modified" }, { "diff": "@@ -456,7 +456,7 @@ protected QB assertSerialization(QB testQuery) throws IOException {\n testQuery.writeTo(output);\n try (StreamInput in = new NamedWriteableAwareStreamInput(StreamInput.wrap(output.bytes()), namedWriteableRegistry)) {\n QueryBuilder<?> prototype = queryParser(testQuery.getName()).getBuilderPrototype();\n- QueryBuilder deserializedQuery = prototype.readFrom(in);\n+ QueryBuilder<?> deserializedQuery = prototype.readFrom(in);\n assertEquals(deserializedQuery, testQuery);\n assertEquals(deserializedQuery.hashCode(), testQuery.hashCode());\n assertNotSame(deserializedQuery, testQuery);", "filename": "core/src/test/java/org/elasticsearch/index/query/AbstractQueryTestCase.java", "status": "modified" }, { "diff": "@@ -27,28 +27,38 @@\n import org.apache.lucene.search.TermQuery;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n-\n import java.io.IOException;\n import java.util.HashMap;\n import java.util.HashSet;\n import java.util.Iterator;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Map.Entry;\n import java.util.Set;\n+import java.util.TreeMap;\n \n import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.notNullValue;\n \n public class SimpleQueryStringBuilderTests extends AbstractQueryTestCase<SimpleQueryStringBuilder> {\n \n+ private String[] queryTerms;\n+\n @Override\n protected SimpleQueryStringBuilder doCreateTestQueryBuilder() {\n- SimpleQueryStringBuilder result = new SimpleQueryStringBuilder(randomAsciiOfLengthBetween(1, 10));\n+ int numberOfTerms = randomIntBetween(1, 5);\n+ queryTerms = new String[numberOfTerms];\n+ StringBuilder queryString = new StringBuilder();\n+ for (int i = 0; i < numberOfTerms; i++) {\n+ queryTerms[i] = randomAsciiOfLengthBetween(1, 10);\n+ queryString.append(queryTerms[i] + \" \");\n+ }\n+ SimpleQueryStringBuilder result = new SimpleQueryStringBuilder(queryString.toString().trim());\n if (randomBoolean()) {\n result.analyzeWildcard(randomBoolean());\n }\n@@ -72,9 +82,13 @@ protected SimpleQueryStringBuilder doCreateTestQueryBuilder() {\n }\n if (randomBoolean()) {\n Set<SimpleQueryStringFlag> flagSet = new HashSet<>();\n+ if (numberOfTerms > 1) {\n+ flagSet.add(SimpleQueryStringFlag.WHITESPACE);\n+ }\n int size = randomIntBetween(0, SimpleQueryStringFlag.values().length);\n for (int i = 0; i < size; i++) {\n- flagSet.add(randomFrom(SimpleQueryStringFlag.values()));\n+ SimpleQueryStringFlag randomFlag = randomFrom(SimpleQueryStringFlag.values());\n+ flagSet.add(randomFlag);\n }\n if (flagSet.size() > 0) {\n result.flags(flagSet.toArray(new SimpleQueryStringFlag[flagSet.size()]));\n@@ -85,13 +99,12 @@ protected SimpleQueryStringBuilder doCreateTestQueryBuilder() {\n Map<String, Float> fields = new HashMap<>();\n for (int i = 0; i < fieldCount; i++) {\n if (randomBoolean()) {\n- 
fields.put(randomAsciiOfLengthBetween(1, 10), AbstractQueryBuilder.DEFAULT_BOOST);\n+ fields.put(\"f\" + i + \"_\" + randomAsciiOfLengthBetween(1, 10), AbstractQueryBuilder.DEFAULT_BOOST);\n } else {\n- fields.put(randomBoolean() ? STRING_FIELD_NAME : randomAsciiOfLengthBetween(1, 10), 2.0f / randomIntBetween(1, 20));\n+ fields.put(randomBoolean() ? STRING_FIELD_NAME : \"f\" + i + \"_\" + randomAsciiOfLengthBetween(1, 10), 2.0f / randomIntBetween(1, 20));\n }\n }\n result.fields(fields);\n-\n return result;\n }\n \n@@ -256,8 +269,8 @@ public void testDefaultFieldParsing() throws IOException {\n // no strict field resolution (version before V_1_4_0_Beta1)\n if (getCurrentTypes().length > 0 || shardContext.indexQueryParserService().getIndexCreatedVersion().before(Version.V_1_4_0_Beta1)) {\n Query luceneQuery = queryBuilder.toQuery(shardContext);\n- assertThat(luceneQuery, instanceOf(TermQuery.class));\n- TermQuery termQuery = (TermQuery) luceneQuery;\n+ assertThat(luceneQuery, instanceOf(BooleanQuery.class));\n+ TermQuery termQuery = (TermQuery) ((BooleanQuery) luceneQuery).clauses().get(0).getQuery();\n assertThat(termQuery.getTerm(), equalTo(new Term(MetaData.ALL, query)));\n }\n }\n@@ -275,7 +288,7 @@ protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query\n \n if (\"\".equals(queryBuilder.value())) {\n assertTrue(\"Query should have been MatchNoDocsQuery but was \" + query.getClass().getName(), query instanceof MatchNoDocsQuery);\n- } else if (queryBuilder.fields().size() > 1) {\n+ } else {\n assertTrue(\"Query should have been BooleanQuery but was \" + query.getClass().getName(), query instanceof BooleanQuery);\n \n BooleanQuery boolQuery = (BooleanQuery) query;\n@@ -288,32 +301,42 @@ protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query\n }\n }\n \n- assertThat(boolQuery.clauses().size(), equalTo(queryBuilder.fields().size()));\n- Iterator<String> fields = queryBuilder.fields().keySet().iterator();\n- for (BooleanClause booleanClause : boolQuery) {\n- assertThat(booleanClause.getQuery(), instanceOf(TermQuery.class));\n- TermQuery termQuery = (TermQuery) booleanClause.getQuery();\n- assertThat(termQuery.getTerm().field(), equalTo(fields.next()));\n- assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(queryBuilder.value().toLowerCase(Locale.ROOT)));\n+ assertThat(boolQuery.clauses().size(), equalTo(queryTerms.length));\n+ Map<String, Float> expectedFields = new TreeMap<String, Float>(queryBuilder.fields());\n+ if (expectedFields.size() == 0) {\n+ expectedFields.put(MetaData.ALL, AbstractQueryBuilder.DEFAULT_BOOST);\n }\n-\n- if (queryBuilder.minimumShouldMatch() != null) {\n- assertThat(boolQuery.getMinimumNumberShouldMatch(), greaterThan(0));\n+ for (int i = 0; i < queryTerms.length; i++) {\n+ BooleanClause booleanClause = boolQuery.clauses().get(i);\n+ Iterator<Entry<String, Float>> fieldsIter = expectedFields.entrySet().iterator();\n+\n+ if (queryTerms.length == 1 && expectedFields.size() == 1) {\n+ assertThat(booleanClause.getQuery(), instanceOf(TermQuery.class));\n+ TermQuery termQuery = (TermQuery) booleanClause.getQuery();\n+ Entry<String, Float> entry = fieldsIter.next();\n+ assertThat(termQuery.getTerm().field(), equalTo(entry.getKey()));\n+ assertThat(termQuery.getBoost(), equalTo(entry.getValue()));\n+ assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(queryTerms[i].toLowerCase(Locale.ROOT)));\n+ } else {\n+ assertThat(booleanClause.getQuery(), instanceOf(BooleanQuery.class));\n+ for 
(BooleanClause clause : ((BooleanQuery) booleanClause.getQuery()).clauses()) {\n+ TermQuery termQuery = (TermQuery) clause.getQuery();\n+ Entry<String, Float> entry = fieldsIter.next();\n+ assertThat(termQuery.getTerm().field(), equalTo(entry.getKey()));\n+ assertThat(termQuery.getBoost(), equalTo(entry.getValue()));\n+ assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(queryTerms[i].toLowerCase(Locale.ROOT)));\n+ }\n+ }\n }\n- } else if (queryBuilder.fields().size() <= 1) {\n- assertTrue(\"Query should have been TermQuery but was \" + query.getClass().getName(), query instanceof TermQuery);\n \n- TermQuery termQuery = (TermQuery) query;\n- String field;\n- if (queryBuilder.fields().size() == 0) {\n- field = MetaData.ALL;\n- } else {\n- field = queryBuilder.fields().keySet().iterator().next();\n+ if (queryBuilder.minimumShouldMatch() != null) {\n+ int optionalClauses = queryTerms.length;\n+ if (queryBuilder.defaultOperator().equals(Operator.AND) && queryTerms.length > 1) {\n+ optionalClauses = 0;\n+ }\n+ int expectedMinimumShouldMatch = Queries.calculateMinShouldMatch(optionalClauses, queryBuilder.minimumShouldMatch());\n+ assertEquals(expectedMinimumShouldMatch, boolQuery.getMinimumNumberShouldMatch());\n }\n- assertThat(termQuery.getTerm().field(), equalTo(field));\n- assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(queryBuilder.value().toLowerCase(Locale.ROOT)));\n- } else {\n- fail(\"Encountered lucene query type we do not have a validation implementation for in our \" + SimpleQueryStringBuilderTests.class.getSimpleName());\n }\n }\n \n@@ -339,15 +362,18 @@ public void testToQueryBoost() throws IOException {\n SimpleQueryStringBuilder simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test\");\n simpleQueryStringBuilder.field(STRING_FIELD_NAME, 5);\n Query query = simpleQueryStringBuilder.toQuery(shardContext);\n- assertThat(query, instanceOf(TermQuery.class));\n- assertThat(query.getBoost(), equalTo(5f));\n+ assertThat(query, instanceOf(BooleanQuery.class));\n+ TermQuery wrappedQuery = (TermQuery) ((BooleanQuery) query).clauses().get(0).getQuery();\n+ assertThat(wrappedQuery.getBoost(), equalTo(5f));\n \n simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test\");\n simpleQueryStringBuilder.field(STRING_FIELD_NAME, 5);\n simpleQueryStringBuilder.boost(2);\n query = simpleQueryStringBuilder.toQuery(shardContext);\n- assertThat(query, instanceOf(TermQuery.class));\n- assertThat(query.getBoost(), equalTo(10f));\n+ assertThat(query.getBoost(), equalTo(2f));\n+ assertThat(query, instanceOf(BooleanQuery.class));\n+ wrappedQuery = (TermQuery) ((BooleanQuery) query).clauses().get(0).getQuery();\n+ assertThat(wrappedQuery.getBoost(), equalTo(5f));\n }\n \n public void testNegativeFlags() throws IOException {\n@@ -359,4 +385,39 @@ public void testNegativeFlags() throws IOException {\n otherBuilder.flags(-1);\n assertThat(builder, equalTo(otherBuilder));\n }\n+\n+ public void testMinimumShouldMatch() throws IOException {\n+ QueryShardContext shardContext = createShardContext();\n+ int numberOfTerms = randomIntBetween(1, 4);\n+ int numberOfFields = randomIntBetween(1, 4);\n+ StringBuilder queryString = new StringBuilder();\n+ for (int i = 0; i < numberOfTerms; i++) {\n+ queryString.append(\"t\" + i + \" \");\n+ }\n+ SimpleQueryStringBuilder simpleQueryStringBuilder = new SimpleQueryStringBuilder(queryString.toString().trim());\n+ if (randomBoolean()) {\n+ simpleQueryStringBuilder.defaultOperator(Operator.AND);\n+ }\n+ for (int i = 0; i < 
numberOfFields; i++) {\n+ simpleQueryStringBuilder.field(\"f\" + i);\n+ }\n+ int percent = randomIntBetween(1, 100);\n+ simpleQueryStringBuilder.minimumShouldMatch(percent + \"%\");\n+ BooleanQuery query = (BooleanQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+\n+ assertEquals(\"query should have one should clause per term\", numberOfTerms, query.clauses().size());\n+ int expectedMinimumShouldMatch = numberOfTerms * percent / 100;\n+ if (simpleQueryStringBuilder.defaultOperator().equals(Operator.AND) && numberOfTerms > 1) {\n+ expectedMinimumShouldMatch = 0;\n+ }\n+\n+ assertEquals(expectedMinimumShouldMatch, query.getMinimumNumberShouldMatch());\n+ for (BooleanClause clause : query.clauses()) {\n+ if (numberOfFields == 1 && numberOfTerms == 1) {\n+ assertTrue(clause.getQuery() instanceof TermQuery);\n+ } else {\n+ assertEquals(numberOfFields, ((BooleanQuery) clause.getQuery()).clauses().size());\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java", "status": "modified" }, { "diff": "@@ -109,7 +109,6 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"body\", \"foo bar\"),\n client().prepareIndex(\"test\", \"type1\", \"4\").setSource(\"body\", \"foo baz bar\"));\n \n-\n logger.info(\"--> query 1\");\n SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 2l);\n@@ -120,7 +119,13 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n assertHitCount(searchResponse, 2l);\n assertSearchHits(searchResponse, \"3\", \"4\");\n \n- logger.info(\"--> query 3\");\n+ logger.info(\"--> query 3\"); // test case from #13884\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo\")\n+ .field(\"body\").field(\"body2\").field(\"body3\").minimumShouldMatch(\"-50%\")).get();\n+ assertHitCount(searchResponse, 3l);\n+ assertSearchHits(searchResponse, \"1\", \"3\", \"4\");\n+\n+ logger.info(\"--> query 4\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body\").field(\"body2\").minimumShouldMatch(\"70%\")).get();\n assertHitCount(searchResponse, 2l);\n assertSearchHits(searchResponse, \"3\", \"4\");\n@@ -131,17 +136,17 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n client().prepareIndex(\"test\", \"type1\", \"7\").setSource(\"body2\", \"foo bar\", \"other\", \"foo\"),\n client().prepareIndex(\"test\", \"type1\", \"8\").setSource(\"body2\", \"foo baz bar\", \"other\", \"foo\"));\n \n- logger.info(\"--> query 4\");\n+ logger.info(\"--> query 5\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").field(\"body\").field(\"body2\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 4l);\n assertSearchHits(searchResponse, \"3\", \"4\", \"7\", \"8\");\n \n- logger.info(\"--> query 5\");\n+ logger.info(\"--> query 6\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar\").minimumShouldMatch(\"2\")).get();\n assertHitCount(searchResponse, 5l);\n assertSearchHits(searchResponse, \"3\", \"4\", \"6\", \"7\", \"8\");\n \n- logger.info(\"--> query 6\");\n+ logger.info(\"--> query 7\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar 
baz\").field(\"body2\").field(\"other\").minimumShouldMatch(\"70%\")).get();\n assertHitCount(searchResponse, 3l);\n assertSearchHits(searchResponse, \"6\", \"7\", \"8\");", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
{ "body": "When running the RPM installation of Elasticsearch 1.7.3 (and checking the newer versions of it, likely any release), if you try to give it too much memory, then it claims to succeed to start and it also returns a `0` exit code:\n\n``` shell\n[pickypg@localhost init.d]$ sudo ./elasticsearch start\nStarting elasticsearch: [ OK ]\nOpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f1abfd30000, 68632248320, 0) failed; error='Cannot allocate memory' (errno=12)\n#\n# There is insufficient memory for the Java Runtime Environment to continue.\n# Native memory allocation (malloc) failed to allocate 68632248320 bytes for committing reserved memory.\n# An error report file with more information is saved as:\n# /tmp/jvm-14062/hs_error.log\n[pickypg@localhost init.d]$ echo $?\n0\n```\n\nAs noted by @inqueue, if you pass in `-x` to the init.d script, then you will find that the daemon starter appears to be to blame, which \"succeeds\".\n", "comments": [ { "body": "FWIW, systemd behaves correctly:\n\n```\n# grep HEAP_SIZE= /etc/sysconfig/elasticsearch\n# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g\nES_HEAP_SIZE=80g\n\n# systemctl start elasticsearch.service\n# ^start^status\nsystemctl status elasticsearch.service\nelasticsearch.service - Elasticsearch\n Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)\n Active: failed (Result: exit-code) since Fri 2015-10-16 11:45:51 EDT; 9s ago\n Docs: http://www.elastic.co\n Process: 11224 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.config=$CONF_FILE -Des.default.path.conf=$CONF_DIR (code=exited, status=1/FAILURE)\n Main PID: 11224 (code=exited, status=1/FAILURE)\n\nOct 16 11:45:51 es-mon.zmb.moc systemd[1]: Starting Elasticsearch...\nOct 16 11:45:51 es-mon.zmb.moc systemd[1]: Started Elasticsearch.\nOct 16 11:45:51 es-mon.zmb.moc elasticsearch[11224]: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_me...=12)\nOct 16 11:45:51 es-mon.zmb.moc systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE\nOct 16 11:45:51 es-mon.zmb.moc systemd[1]: Unit elasticsearch.service entered failed state.\nHint: Some lines were ellipsized, use -l to show in full.\n```\n", "created_at": "2015-10-16T15:47:51Z" }, { "body": "With an RPM built and installed from #14170, we now see the following behavior on a Centos 6.7 system:\n\n```\n$ cat /etc/redhat-release\nCentOS release 6.7 (Final)\n$ sudo service elasticsearch start\nStarting elasticsearch: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fda5a660000, 85724889088, 0) failed; error='Cannot allocate memory' (errno=12)\n#\n# There is insufficient memory for the Java Runtime Environment to continue.\n# Native memory allocation (mmap) failed to map 85724889088 bytes for committing reserved memory.\n# An error report file with more information is saved as:\n# /tmp/hs_err_pid12673.log\n [FAILED]\n$ echo $?\n1\n```\n\ninstead of the behavior reported in this issue of printing `[ OK ]` and returning exit status 0.\n", "created_at": "2015-10-16T21:11:31Z" } ], "number": 14163, "title": "RPM init.d exits with 0 when JVM fails to start due to not enough memory" }
{ "body": "This commit fixes an issue where when starting Elasticsearch in\ndaemonized mode, a failed startup would not cause a non-zero exit code\nto be returned. This can prevent the SysV init system from detecting\nstartup failures.\n\nCloses #14163\n", "number": 14170, "review_comments": [], "title": "Startup script exit status should catch daemonized startup failures" }
{ "commits": [ { "message": "Startup script exit status should catch daemonized startup failures\n\nThis commit fixes an issue where when starting Elasticsearch in\ndaemonized mode, a failed startup would not cause a non-zero exit code\nto be returned. This can prevent the SysV init system from detecting\nstartup failures.\n\nCloses #14163" } ], "files": [ { "diff": "@@ -64,6 +64,7 @@ export ES_HEAP_NEWSIZE\n export ES_DIRECT_SIZE\n export ES_JAVA_OPTS\n export ES_GC_LOG_FILE\n+export ES_STARTUP_SLEEP_TIME\n export JAVA_HOME\n \n lockfile=/var/lock/subsys/$prog", "filename": "distribution/rpm/src/main/packaging/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -50,6 +50,9 @@\n #ES_USER=${packaging.elasticsearch.user}\n #ES_GROUP=${packaging.elasticsearch.group}\n \n+# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process\n+ES_STARTUP_SLEEP_TIME=${packaging.elasticsearch.startup.sleep.time}\n+\n ################################\n # System properties\n ################################", "filename": "distribution/src/main/packaging/env/elasticsearch", "status": "modified" }, { "diff": "@@ -19,6 +19,9 @@ packaging.os.max.open.files=65535\n # Maximum number of VMA (Virtual Memory Areas) a process can own\n packaging.os.max.map.count=262144\n \n+# Default number of seconds to wait before checking if Elasticsearch started successfully as a daemon process\n+packaging.elasticsearch.startup.sleep.time=5\n+\n # Simple marker to check that properties are correctly overridden\n packaging.type=tar.gz\n ", "filename": "distribution/src/main/packaging/packaging.properties", "status": "modified" }, { "diff": "@@ -142,6 +142,16 @@ if [ -z \"$daemonized\" ] ; then\n else\n exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n org.elasticsearch.bootstrap.Elasticsearch start \"$@\" <&- &\n+ retval=$?\n+ pid=$!\n+ [ $retval -eq 0 ] || exit $retval\n+ if [ ! -z \"$ES_STARTUP_SLEEP_TIME\" ]; then\n+ sleep $ES_STARTUP_SLEEP_TIME\n+ fi\n+ if ! ps -p $pid > /dev/null ; then\n+ exit 1\n+ fi\n+ exit 0\n fi\n \n exit $?", "filename": "distribution/src/main/resources/bin/elasticsearch", "status": "modified" } ] }
{ "body": "I tried to delete an index, and less than a second later, another machine attempted to perform an index. That recreated the index before the deletion propagated cleanly, leaving the cluster inconsistent. Example log output: https://gist.github.com/1187311\n", "comments": [], "number": 1296, "title": "Rapidly concurrent deleting/creating an index leaves index inconsistent" }
{ "body": "MetaDataSerivce tried to protect concurrent index creation/deletion\nfrom resulting in inconsistent indices. This was originally added a\nlong time ago via #1296 which seems to be caused by several problems\nthat we fixed already in 2.0 or even in late 1.x version. Indices where\nrecreated without being deleted and shards where deleted while being used\nwhich is now prevented on several levels. We can safely remove the semaphores\nsince we are already serializing the events on the cluster state threads.\nThis commit also fixes some expception handling bugs exposed by the added test\n", "number": 14159, "review_comments": [ { "body": "is it correct to only catch IndexNotFound here? could we potentially miss other exceptions and forget to notify the listener?\n", "created_at": "2015-10-16T14:25:28Z" }, { "body": "heh yeah I mean we can catch exception but I really wonder if we should do this on the caller of the method? I mean this can be really critical if we miss anything here?\n", "created_at": "2015-10-16T14:28:09Z" }, { "body": "I am not sure. You mean that you would prefer not to have the catch at all, or to catch Exception instead ? :)\n", "created_at": "2015-10-16T14:33:20Z" }, { "body": "I rethrow the excpetion now since I only added it to get the debug logging there and we handle the exception on the caller side anyway so we should be good here...\n", "created_at": "2015-10-16T18:34:00Z" }, { "body": "sounds good to me\n", "created_at": "2015-10-16T21:11:17Z" }, { "body": "do we want to assertAcked here?\n", "created_at": "2015-10-19T09:37:43Z" }, { "body": "having this called with a null exception is a problem in it's own. I don't think we want to suppress it?\n", "created_at": "2015-10-19T09:38:37Z" }, { "body": "was it the intention to put this under synchronized? o.w. I don't see what the sync buy us - we use an atomic integer anyway. Also - isn't it surprising the test passes? I mean an index operation that was already started before the delete (with old version) can re create the index and inject old docs into it?\n", "created_at": "2015-10-19T09:49:17Z" }, { "body": "all I am asserting in this test is that we never reuse an old shard etc. to your not being so suprised this test failed a lot for different reasons if you have a better idea go ahead.\n", "created_at": "2015-10-19T09:54:03Z" }, { "body": "more bike shedding ahead... I will make the change\n", "created_at": "2015-10-19T09:54:46Z" }, { "body": "wait the sync here is essential we want to ensure that all in-flight index operations are done before we increment that's the entire deal here? the relevant part is below when we continue indexing\n", "created_at": "2015-10-19T11:27:00Z" } ], "title": "Remove MetaDataSerivce and it's semaphores" }
{ "commits": [ { "message": "Remove MetaDataSerivce and it's semaphores\n\nMetaDataSerivce tried to protect concurrent index creation/deletion\nfrom resulting in inconsistent indices. This was originally added a\nlong time ago via #1296 which seems to be caused by several problems\nthat we fixed already in 2.0 or even in late 1.x version. Indices where\nrecreated without being deleted and shards where deleted while being used\nwhich is now prevented on several levels. We can safely remove the semaphores\nsince we are already serializing the events on the cluster state threads.\nThis commit also fixes some expception handling bugs exposed by the added test" }, { "message": "apply review comments" }, { "message": "add several code comments and apply review comments" } ], "files": [ { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.cluster.metadata.MetaDataMappingService;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n@@ -67,25 +68,30 @@ protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterSta\n \n @Override\n protected void masterOperation(final PutMappingRequest request, final ClusterState state, final ActionListener<PutMappingResponse> listener) {\n- final String[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request);\n- PutMappingClusterStateUpdateRequest updateRequest = new PutMappingClusterStateUpdateRequest()\n- .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())\n- .indices(concreteIndices).type(request.type())\n- .updateAllTypes(request.updateAllTypes())\n- .source(request.source());\n+ try {\n+ final String[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request);\n+ PutMappingClusterStateUpdateRequest updateRequest = new PutMappingClusterStateUpdateRequest()\n+ .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())\n+ .indices(concreteIndices).type(request.type())\n+ .updateAllTypes(request.updateAllTypes())\n+ .source(request.source());\n \n- metaDataMappingService.putMapping(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {\n+ metaDataMappingService.putMapping(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {\n \n- @Override\n- public void onResponse(ClusterStateUpdateResponse response) {\n- listener.onResponse(new PutMappingResponse(response.isAcknowledged()));\n- }\n+ @Override\n+ public void onResponse(ClusterStateUpdateResponse response) {\n+ listener.onResponse(new PutMappingResponse(response.isAcknowledged()));\n+ }\n \n- @Override\n- public void onFailure(Throwable t) {\n- logger.debug(\"failed to put mappings on indices [{}], type [{}]\", t, concreteIndices, request.type());\n- listener.onFailure(t);\n- }\n- });\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.debug(\"failed to put mappings on indices [{}], type [{}]\", t, concreteIndices, request.type());\n+ listener.onFailure(t);\n+ }\n+ });\n+ } catch (IndexNotFoundException ex) {\n+ logger.debug(\"failed to put mappings on indices [{}], type [{}]\", ex, request.indices(), request.type());\n+ throw ex;\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.action.*;\n import 
org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.client.Requests;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.Nullable;\n@@ -35,6 +36,7 @@\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.VersionType;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n@@ -561,15 +563,24 @@ public VersionType versionType() {\n return this.versionType;\n }\n \n+ private Version getVersion(MetaData metaData, String concreteIndex) {\n+ // this can go away in 3.0 but is here now for easy backporting - since in 2.x we need the version on the timestamp stuff\n+ final IndexMetaData indexMetaData = metaData.getIndices().get(concreteIndex);\n+ if (indexMetaData == null) {\n+ throw new IndexNotFoundException(concreteIndex);\n+ }\n+ return Version.indexCreated(indexMetaData.getSettings());\n+ }\n+\n public void process(MetaData metaData, @Nullable MappingMetaData mappingMd, boolean allowIdGeneration, String concreteIndex) {\n // resolve the routing if needed\n routing(metaData.resolveIndexRouting(routing, index));\n+\n // resolve timestamp if provided externally\n if (timestamp != null) {\n- Version version = Version.indexCreated(metaData.getIndices().get(concreteIndex).getSettings());\n timestamp = MappingMetaData.Timestamp.parseStringTimestamp(timestamp,\n mappingMd != null ? mappingMd.timestamp().dateTimeFormatter() : TimestampFieldMapper.Defaults.DATE_TIME_FORMATTER,\n- version);\n+ getVersion(metaData, concreteIndex));\n }\n // extract values if needed\n if (mappingMd != null) {\n@@ -592,8 +603,7 @@ public void process(MetaData metaData, @Nullable MappingMetaData mappingMd, bool\n if (parseContext.shouldParseTimestamp()) {\n timestamp = parseContext.timestamp();\n if (timestamp != null) {\n- Version version = Version.indexCreated(metaData.getIndices().get(concreteIndex).getSettings());\n- timestamp = MappingMetaData.Timestamp.parseStringTimestamp(timestamp, mappingMd.timestamp().dateTimeFormatter(), version);\n+ timestamp = MappingMetaData.Timestamp.parseStringTimestamp(timestamp, mappingMd.timestamp().dateTimeFormatter(), getVersion(metaData, concreteIndex));\n }\n }\n } catch (MapperParsingException e) {\n@@ -642,8 +652,7 @@ public void process(MetaData metaData, @Nullable MappingMetaData mappingMd, bool\n if (defaultTimestamp.equals(TimestampFieldMapper.Defaults.DEFAULT_TIMESTAMP)) {\n timestamp = Long.toString(System.currentTimeMillis());\n } else {\n- Version version = Version.indexCreated(metaData.getIndices().get(concreteIndex).getSettings());\n- timestamp = MappingMetaData.Timestamp.parseStringTimestamp(defaultTimestamp, mappingMd.timestamp().dateTimeFormatter(), version);\n+ timestamp = MappingMetaData.Timestamp.parseStringTimestamp(defaultTimestamp, mappingMd.timestamp().dateTimeFormatter(), getVersion(metaData, concreteIndex));\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/index/IndexRequest.java", "status": "modified" }, { "diff": "@@ -34,7 +34,6 @@\n import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;\n import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;\n 
import org.elasticsearch.cluster.metadata.MetaDataMappingService;\n-import org.elasticsearch.cluster.metadata.MetaDataService;\n import org.elasticsearch.cluster.metadata.MetaDataUpdateSettingsService;\n import org.elasticsearch.cluster.node.DiscoveryNodeService;\n import org.elasticsearch.cluster.routing.OperationRouting;\n@@ -309,7 +308,6 @@ protected void configure() {\n bind(DiscoveryNodeService.class).asEagerSingleton();\n bind(ClusterService.class).to(InternalClusterService.class).asEagerSingleton();\n bind(OperationRouting.class).asEagerSingleton();\n- bind(MetaDataService.class).asEagerSingleton();\n bind(MetaDataCreateIndexService.class).asEagerSingleton();\n bind(MetaDataDeleteIndexService.class).asEagerSingleton();\n bind(MetaDataIndexStateService.class).asEagerSingleton();", "filename": "core/src/main/java/org/elasticsearch/cluster/ClusterModule.java", "status": "modified" }, { "diff": "@@ -106,32 +106,25 @@ public class MetaDataCreateIndexService extends AbstractComponent {\n public final static int MAX_INDEX_NAME_BYTES = 255;\n private static final DefaultIndexTemplateFilter DEFAULT_INDEX_TEMPLATE_FILTER = new DefaultIndexTemplateFilter();\n \n- private final ThreadPool threadPool;\n private final ClusterService clusterService;\n private final IndicesService indicesService;\n private final AllocationService allocationService;\n- private final MetaDataService metaDataService;\n private final Version version;\n private final AliasValidator aliasValidator;\n private final IndexTemplateFilter indexTemplateFilter;\n- private final NodeEnvironment nodeEnv;\n private final Environment env;\n \n @Inject\n- public MetaDataCreateIndexService(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- IndicesService indicesService, AllocationService allocationService, MetaDataService metaDataService,\n+ public MetaDataCreateIndexService(Settings settings, ClusterService clusterService,\n+ IndicesService indicesService, AllocationService allocationService,\n Version version, AliasValidator aliasValidator,\n- Set<IndexTemplateFilter> indexTemplateFilters, Environment env,\n- NodeEnvironment nodeEnv) {\n+ Set<IndexTemplateFilter> indexTemplateFilters, Environment env) {\n super(settings);\n- this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n this.allocationService = allocationService;\n- this.metaDataService = metaDataService;\n this.version = version;\n this.aliasValidator = aliasValidator;\n- this.nodeEnv = nodeEnv;\n this.env = env;\n \n if (indexTemplateFilters.isEmpty()) {\n@@ -147,29 +140,6 @@ public MetaDataCreateIndexService(Settings settings, ThreadPool threadPool, Clus\n }\n }\n \n- public void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n-\n- // we lock here, and not within the cluster service callback since we don't want to\n- // block the whole cluster state handling\n- final Semaphore mdLock = metaDataService.indexMetaDataLock(request.index());\n-\n- // quick check to see if we can acquire a lock, otherwise spawn to a thread pool\n- if (mdLock.tryAcquire()) {\n- createIndex(request, listener, mdLock);\n- return;\n- }\n- threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new ActionRunnable(listener) {\n- @Override\n- public void doRun() throws InterruptedException {\n- if (!mdLock.tryAcquire(request.masterNodeTimeout().nanos(), TimeUnit.NANOSECONDS)) {\n- listener.onFailure(new 
ProcessClusterEventTimeoutException(request.masterNodeTimeout(), \"acquire index lock\"));\n- return;\n- }\n- createIndex(request, listener, mdLock);\n- }\n- });\n- }\n-\n public void validateIndexName(String index, ClusterState state) {\n if (state.routingTable().hasIndex(index)) {\n throw new IndexAlreadyExistsException(new Index(index));\n@@ -209,8 +179,7 @@ public void validateIndexName(String index, ClusterState state) {\n }\n }\n \n- private void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener, final Semaphore mdLock) {\n-\n+ public void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {\n Settings.Builder updatedSettingsBuilder = Settings.settingsBuilder();\n updatedSettingsBuilder.put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX);\n request.settings(updatedSettingsBuilder.build());\n@@ -222,24 +191,6 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {\n return new ClusterStateUpdateResponse(acknowledged);\n }\n \n- @Override\n- public void onAllNodesAcked(@Nullable Throwable t) {\n- mdLock.release();\n- super.onAllNodesAcked(t);\n- }\n-\n- @Override\n- public void onAckTimeout() {\n- mdLock.release();\n- super.onAckTimeout();\n- }\n-\n- @Override\n- public void onFailure(String source, Throwable t) {\n- mdLock.release();\n- super.onFailure(source, t);\n- }\n-\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n boolean indexCreated = false;", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -56,50 +56,18 @@ public class MetaDataDeleteIndexService extends AbstractComponent {\n \n private final NodeIndexDeletedAction nodeIndexDeletedAction;\n \n- private final MetaDataService metaDataService;\n-\n @Inject\n public MetaDataDeleteIndexService(Settings settings, ThreadPool threadPool, ClusterService clusterService, AllocationService allocationService,\n- NodeIndexDeletedAction nodeIndexDeletedAction, MetaDataService metaDataService) {\n+ NodeIndexDeletedAction nodeIndexDeletedAction) {\n super(settings);\n this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.allocationService = allocationService;\n this.nodeIndexDeletedAction = nodeIndexDeletedAction;\n- this.metaDataService = metaDataService;\n }\n \n public void deleteIndex(final Request request, final Listener userListener) {\n- // we lock here, and not within the cluster service callback since we don't want to\n- // block the whole cluster state handling\n- final Semaphore mdLock = metaDataService.indexMetaDataLock(request.index);\n-\n- // quick check to see if we can acquire a lock, otherwise spawn to a thread pool\n- if (mdLock.tryAcquire()) {\n- deleteIndex(request, userListener, mdLock);\n- return;\n- }\n-\n- threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new Runnable() {\n- @Override\n- public void run() {\n- try {\n- if (!mdLock.tryAcquire(request.masterTimeout.nanos(), TimeUnit.NANOSECONDS)) {\n- userListener.onFailure(new ProcessClusterEventTimeoutException(request.masterTimeout, \"acquire index lock\"));\n- return;\n- }\n- } catch (InterruptedException e) {\n- userListener.onFailure(e);\n- return;\n- }\n-\n- deleteIndex(request, userListener, mdLock);\n- }\n- });\n- }\n-\n- private void deleteIndex(final Request request, final Listener userListener, Semaphore mdLock) {\n- 
final DeleteIndexListener listener = new DeleteIndexListener(mdLock, userListener);\n+ final DeleteIndexListener listener = new DeleteIndexListener(userListener);\n clusterService.submitStateUpdateTask(\"delete-index [\" + request.index + \"]\", Priority.URGENT, new ClusterStateUpdateTask() {\n \n @Override\n@@ -181,19 +149,16 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n class DeleteIndexListener implements Listener {\n \n private final AtomicBoolean notified = new AtomicBoolean();\n- private final Semaphore mdLock;\n private final Listener listener;\n volatile ScheduledFuture<?> future;\n \n- private DeleteIndexListener(Semaphore mdLock, Listener listener) {\n- this.mdLock = mdLock;\n+ private DeleteIndexListener(Listener listener) {\n this.listener = listener;\n }\n \n @Override\n public void onResponse(final Response response) {\n if (notified.compareAndSet(false, true)) {\n- mdLock.release();\n FutureUtils.cancel(future);\n listener.onResponse(response);\n }\n@@ -202,15 +167,14 @@ public void onResponse(final Response response) {\n @Override\n public void onFailure(Throwable t) {\n if (notified.compareAndSet(false, true)) {\n- mdLock.release();\n FutureUtils.cancel(future);\n listener.onFailure(t);\n }\n }\n }\n \n \n- public static interface Listener {\n+ public interface Listener {\n \n void onResponse(Response response);\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataDeleteIndexService.java", "status": "modified" }, { "diff": "@@ -19,18 +19,28 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;\n+import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.query.RangeQueryBuilder;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n import org.junit.Test;\n \n import java.util.HashMap;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -107,8 +117,8 @@ public void testDoubleAddMapping() throws Exception {\n public void testInvalidShardCountSettings() throws Exception {\n try {\n prepareCreate(\"test\").setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the primary shard count\");\n } catch (IllegalArgumentException e) {\n@@ -118,8 +128,8 @@ public void testInvalidShardCountSettings() throws Exception {\n \n try {\n 
prepareCreate(\"test\").setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the replica shard count\");\n } catch (IllegalArgumentException e) {\n@@ -129,9 +139,9 @@ public void testInvalidShardCountSettings() throws Exception {\n \n try {\n prepareCreate(\"test\").setSettings(Settings.builder()\n- .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n- .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n- .build())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n .get();\n fail(\"should have thrown an exception about the shard count\");\n } catch (IllegalArgumentException e) {\n@@ -196,4 +206,63 @@ public void testInvalidShardCountSettingsWithoutPrefix() throws Exception {\n }\n }\n \n+ public void testCreateAndDeleteIndexConcurrently() throws InterruptedException {\n+ createIndex(\"test\");\n+ final AtomicInteger indexVersion = new AtomicInteger(0);\n+ final Object indexVersionLock = new Object();\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ int numDocs = randomIntBetween(1, 10);\n+ for (int i = 0; i < numDocs; i++) {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"index_version\", indexVersion.get()).get();\n+ }\n+ synchronized (indexVersionLock) { // not necessarily needed here but for completeness we lock here too\n+ indexVersion.incrementAndGet();\n+ }\n+ client().admin().indices().prepareDelete(\"test\").execute(new ActionListener<DeleteIndexResponse>() { // this happens async!!!\n+ @Override\n+ public void onResponse(DeleteIndexResponse deleteIndexResponse) {\n+ Thread thread = new Thread() {\n+ public void run() {\n+ try {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"index_version\", indexVersion.get()).get(); // recreate that index\n+ synchronized (indexVersionLock) {\n+ // we sync here since we have to ensure that all indexing operations below for a given ID are done before we increment the\n+ // index version otherwise a doc that is in-flight could make it into an index that it was supposed to be deleted for and our assertion fail...\n+ indexVersion.incrementAndGet();\n+ }\n+ assertAcked(client().admin().indices().prepareDelete(\"test\").get()); // from here on all docs with index_version == 0|1 must be gone!!!! 
only 2 are ok;\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+ };\n+ thread.start();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ throw new RuntimeException(e);\n+ }\n+ }\n+ );\n+ numDocs = randomIntBetween(100, 200);\n+ for (int i = 0; i < numDocs; i++) {\n+ try {\n+ synchronized (indexVersionLock) {\n+ client().prepareIndex(\"test\", \"test\").setSource(\"index_version\", indexVersion.get()).get();\n+ }\n+ } catch (IndexNotFoundException inf) {\n+ // fine\n+ }\n+ }\n+ latch.await();\n+ refresh();\n+\n+ // we only really assert that we never reuse segments of old indices or anything like this here and that nothing fails with crazy exceptions\n+ SearchResponse expected = client().prepareSearch(\"test\").setIndicesOptions(IndicesOptions.lenientExpandOpen()).setQuery(new RangeQueryBuilder(\"index_version\").from(indexVersion.get(), true)).get();\n+ SearchResponse all = client().prepareSearch(\"test\").setIndicesOptions(IndicesOptions.lenientExpandOpen()).get();\n+ assertEquals(expected + \" vs. \" + all, expected.getHits().getTotalHits(), all.getHits().getTotalHits());\n+ logger.info(\"total: {}\", expected.getHits().getTotalHits());\n+ }\n+\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java", "status": "modified" }, { "diff": "@@ -77,12 +77,9 @@ private static List<Throwable> putTemplate(PutRequest request) {\n null,\n null,\n null,\n- null,\n- null,\n Version.CURRENT,\n null,\n new HashSet<>(),\n- null,\n null\n );\n MetaDataIndexTemplateService service = new MetaDataIndexTemplateService(Settings.EMPTY, null, createIndexService, null);", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/template/put/MetaDataIndexTemplateServiceTests.java", "status": "modified" } ] }
{ "body": "Hi @nknize, some thing I noticed when working on the Geo-related queries was that the implementation of GeoPoint.hashCode() and equals() does not seem to follow the usual contract that when two objects are equal(), they should return the same hashCode. \nFrom the current implementation (https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java#L133) it looks like when two points are very close (below the given lat/lon tolerance), they are considered equal, but they will return different hashCode() values. Since we rely on the GeoPoints hashCode/equals implementation in the geo queries, I'm curious about the reasons behind this and wondering if the class should be adapted so the contract holds.\n", "comments": [ { "body": "this sounds like a bug, there shouldn't be any reason for the equals/hashcode contract to be broken other than a bug, maybe the two methods simply got out of sync at some point :)\n", "created_at": "2015-10-13T11:54:50Z" }, { "body": "Thanks for the catch @cbuescher! @javanna you're absolutely right, the methods diverged when cutting over to the Lucene Geo encoding which required the tolerance. I'll open a PR to resync the hashCode method. \n", "created_at": "2015-10-14T22:36:34Z" } ], "number": 14083, "title": "[Geo] Question about Geopoint hashCode/equals contract" }
{ "body": "`Geopoint.equals` was modified to consider two points equal if they are within a threshold. This change was done to accept round-off error introduced from GeoHash encoding methods. This PR removes this trappy leniency from `GeoPoint.equals` and instead forces round-off error to be handled at the encoding source. It also fixes the broken contract between the `GeoPoint.hashCode` and `.equals` methods raised in #14083 \n\ncloses #14083\n", "number": 14124, "review_comments": [ { "body": "`TOLERANCE` is only used in tests I think it should be moved too no?\n", "created_at": "2015-10-17T18:55:31Z" }, { "body": "can we have a simple unittest that equals and hashcode don't violate it's contract? ie if `p.equals(q) then p.hashCode() == q.hashCode()`\n", "created_at": "2015-10-17T18:56:52Z" } ], "title": "Resync Geopoint hashCode/equals method" }
{ "commits": [ { "message": "Resync Geopoint hashCode/equals method\n\nGeopoint's equals method was modified to consider two points equal if they are within a threshold. This change was done to accept round-off error introduced from GeoHash encoding methods. This commit removes this trappy leniency from the GeoPoint equals method and instead forces round-off error to be handled at the encoding source." } ], "files": [ { "diff": "@@ -30,8 +30,7 @@ public final class GeoPoint {\n \n private double lat;\n private double lon;\n- private final static double TOLERANCE = XGeoUtils.TOLERANCE;\n- \n+\n public GeoPoint() {\n }\n \n@@ -126,14 +125,10 @@ public boolean equals(Object o) {\n if (this == o) return true;\n if (o == null || getClass() != o.getClass()) return false;\n \n- final GeoPoint geoPoint = (GeoPoint) o;\n- final double lonCompare = geoPoint.lon - lon;\n- final double latCompare = geoPoint.lat - lat;\n+ GeoPoint geoPoint = (GeoPoint) o;\n \n- if ((lonCompare < -TOLERANCE || lonCompare > TOLERANCE)\n- || (latCompare < -TOLERANCE || latCompare > TOLERANCE)) {\n- return false;\n- }\n+ if (Double.compare(geoPoint.lat, lat) != 0) return false;\n+ if (Double.compare(geoPoint.lon, lon) != 0) return false;\n \n return true;\n }", "filename": "core/src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.search.geo;\n \n-\n import org.apache.lucene.util.XGeoHashUtils;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.GeoPoint;\n@@ -28,6 +27,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.geo.RandomGeoGenerator;\n import org.junit.Test;\n \n import java.io.IOException;\n@@ -36,46 +36,74 @@\n \n \n public class GeoPointParsingTests extends ESTestCase {\n-\n- // mind geohash precision and error\n- private static final double ERROR = 0.00001d;\n+ static double TOLERANCE = 1E-5;\n \n @Test\n public void testGeoPointReset() throws IOException {\n double lat = 1 + randomDouble() * 89;\n double lon = 1 + randomDouble() * 179;\n \n GeoPoint point = new GeoPoint(0, 0);\n- assertCloseTo(point, 0, 0);\n-\n- assertCloseTo(point.reset(lat, lon), lat, lon);\n- assertCloseTo(point.reset(0, 0), 0, 0);\n- assertCloseTo(point.resetLat(lat), lat, 0);\n- assertCloseTo(point.resetLat(0), 0, 0);\n- assertCloseTo(point.resetLon(lon), 0, lon);\n- assertCloseTo(point.resetLon(0), 0, 0);\n+ GeoPoint point2 = new GeoPoint(0, 0);\n+ assertPointsEqual(point, point2);\n+\n+ assertPointsEqual(point.reset(lat, lon), point2.reset(lat, lon));\n+ assertPointsEqual(point.reset(0, 0), point2.reset(0, 0));\n+ assertPointsEqual(point.resetLat(lat), point2.reset(lat, 0));\n+ assertPointsEqual(point.resetLat(0), point2.reset(0, 0));\n+ assertPointsEqual(point.resetLon(lon), point2.reset(0, lon));\n+ assertPointsEqual(point.resetLon(0), point2.reset(0, 0));\n assertCloseTo(point.resetFromGeoHash(XGeoHashUtils.stringEncode(lon, lat)), lat, lon);\n- assertCloseTo(point.reset(0, 0), 0, 0);\n- assertCloseTo(point.resetFromString(Double.toString(lat) + \", \" + Double.toHexString(lon)), lat, lon);\n- assertCloseTo(point.reset(0, 0), 0, 0);\n+ assertPointsEqual(point.reset(0, 0), point2.reset(0, 0));\n+ assertPointsEqual(point.resetFromString(Double.toString(lat) + \", \" + Double.toHexString(lon)), point2.reset(lat, lon));\n+ 
assertPointsEqual(point.reset(0, 0), point2.reset(0, 0));\n }\n- \n+\n+ @Test\n+ public void testEqualsHashCodeContract() {\n+ // generate a random geopoint\n+ final GeoPoint x = RandomGeoGenerator.randomPoint(random());\n+ final GeoPoint y = new GeoPoint(x.lat(), x.lon());\n+ final GeoPoint z = new GeoPoint(y.lat(), y.lon());\n+ // GeoPoint doesn't care about coordinate system bounds, this simply validates inequality\n+ final GeoPoint a = new GeoPoint(x.lat() + randomIntBetween(1, 5), x.lon() + randomIntBetween(1, 5));\n+\n+ /** equality test */\n+ // reflexive\n+ assertTrue(x.equals(x));\n+ // symmetry\n+ assertTrue(x.equals(y));\n+ // transitivity\n+ assertTrue(y.equals(z));\n+ assertTrue(x.equals(z));\n+ // inequality\n+ assertFalse(x.equals(a));\n+\n+ /** hashCode test */\n+ // symmetry\n+ assertTrue(x.hashCode() == y.hashCode());\n+ // transitivity\n+ assertTrue(y.hashCode() == z.hashCode());\n+ assertTrue(x.hashCode() == z.hashCode());\n+ // inequality\n+ assertFalse(x.hashCode() == a.hashCode());\n+ }\n+\n @Test\n public void testGeoPointParsing() throws IOException {\n- double lat = randomDouble() * 180 - 90;\n- double lon = randomDouble() * 360 - 180;\n- \n- GeoPoint point = GeoUtils.parseGeoPoint(objectLatLon(lat, lon));\n- assertCloseTo(point, lat, lon);\n- \n- GeoUtils.parseGeoPoint(arrayLatLon(lat, lon), point);\n- assertCloseTo(point, lat, lon);\n-\n- GeoUtils.parseGeoPoint(geohash(lat, lon), point);\n- assertCloseTo(point, lat, lon);\n-\n- GeoUtils.parseGeoPoint(stringLatLon(lat, lon), point);\n- assertCloseTo(point, lat, lon);\n+ GeoPoint randomPt = RandomGeoGenerator.randomPoint(random());\n+\n+ GeoPoint point = GeoUtils.parseGeoPoint(objectLatLon(randomPt.lat(), randomPt.lon()));\n+ assertPointsEqual(point, randomPt);\n+\n+ GeoUtils.parseGeoPoint(arrayLatLon(randomPt.lat(), randomPt.lon()), point);\n+ assertPointsEqual(point, randomPt);\n+\n+ GeoUtils.parseGeoPoint(geohash(randomPt.lat(), randomPt.lon()), point);\n+ assertCloseTo(point, randomPt.lat(), randomPt.lon());\n+\n+ GeoUtils.parseGeoPoint(stringLatLon(randomPt.lat(), randomPt.lon()), point);\n+ assertCloseTo(point, randomPt.lat(), randomPt.lon());\n }\n \n // Based on issue5390\n@@ -98,7 +126,7 @@ public void testInvalidPointEmbeddedObject() throws IOException {\n public void testInvalidPointLatHashMix() throws IOException {\n XContentBuilder content = JsonXContent.contentBuilder();\n content.startObject();\n- content.field(\"lat\", 0).field(\"geohash\", XGeoHashUtils.stringEncode(0, 0));\n+ content.field(\"lat\", 0).field(\"geohash\", XGeoHashUtils.stringEncode(0d, 0d));\n content.endObject();\n \n XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n@@ -111,7 +139,7 @@ public void testInvalidPointLatHashMix() throws IOException {\n public void testInvalidPointLonHashMix() throws IOException {\n XContentBuilder content = JsonXContent.contentBuilder();\n content.startObject();\n- content.field(\"lon\", 0).field(\"geohash\", XGeoHashUtils.stringEncode(0, 0));\n+ content.field(\"lon\", 0).field(\"geohash\", XGeoHashUtils.stringEncode(0d, 0d));\n content.endObject();\n \n XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n@@ -166,10 +194,15 @@ private static XContentParser geohash(double lat, double lon) throws IOException\n parser.nextToken();\n return parser;\n }\n- \n- public static void assertCloseTo(GeoPoint point, double lat, double lon) {\n- assertThat(point.lat(), closeTo(lat, ERROR));\n- assertThat(point.lon(), closeTo(lon, ERROR));\n+\n+ public 
static void assertPointsEqual(final GeoPoint point1, final GeoPoint point2) {\n+ assertEquals(point1, point2);\n+ assertEquals(point1.hashCode(), point2.hashCode());\n+ }\n+\n+ public static void assertCloseTo(final GeoPoint point, final double lat, final double lon) {\n+ assertEquals(point.lat(), lat, TOLERANCE);\n+ assertEquals(point.lon(), lon, TOLERANCE);\n }\n \n }", "filename": "core/src/test/java/org/elasticsearch/index/search/geo/GeoPointParsingTests.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.closeTo;\n \n @ESIntegTestCase.SuiteScopeTestCase\n public class MissingValueIT extends ESIntegTestCase {\n@@ -198,7 +199,8 @@ public void testGeoCentroid() {\n SearchResponse response = client().prepareSearch(\"idx\").addAggregation(geoCentroid(\"centroid\").field(\"location\").missing(\"2,1\")).get();\n assertSearchResponse(response);\n GeoCentroid centroid = response.getAggregations().get(\"centroid\");\n- assertEquals(new GeoPoint(1.5, 1.5), centroid.centroid());\n+ GeoPoint point = new GeoPoint(1.5, 1.5);\n+ assertThat(point.lat(), closeTo(centroid.centroid().lat(), 1E-5));\n+ assertThat(point.lon(), closeTo(centroid.centroid().lon(), 1E-5));\n }\n-\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/MissingValueIT.java", "status": "modified" }, { "diff": "@@ -68,6 +68,7 @@ public abstract class AbstractGeoTestCase extends ESIntegTestCase {\n protected static GeoPoint singleTopLeft, singleBottomRight, multiTopLeft, multiBottomRight, singleCentroid, multiCentroid, unmappedCentroid;\n protected static ObjectIntMap<String> expectedDocCountsForGeoHash = null;\n protected static ObjectObjectMap<String, GeoPoint> expectedCentroidsForGeoHash = null;\n+ protected static final double GEOHASH_TOLERANCE = 1E-5D;\n \n @Override\n public void setupSuiteScopeCluster() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/AbstractGeoTestCase.java", "status": "modified" }, { "diff": "@@ -84,7 +84,8 @@ public void partiallyUnmapped() throws Exception {\n assertThat(geoCentroid, notNullValue());\n assertThat(geoCentroid.getName(), equalTo(aggName));\n GeoPoint centroid = geoCentroid.centroid();\n- assertThat(centroid, equalTo(singleCentroid));\n+ assertThat(centroid.lat(), closeTo(singleCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat(centroid.lon(), closeTo(singleCentroid.lon(), GEOHASH_TOLERANCE));\n }\n \n @Test\n@@ -99,7 +100,8 @@ public void singleValuedField() throws Exception {\n assertThat(geoCentroid, notNullValue());\n assertThat(geoCentroid.getName(), equalTo(aggName));\n GeoPoint centroid = geoCentroid.centroid();\n- assertThat(centroid, equalTo(singleCentroid));\n+ assertThat(centroid.lat(), closeTo(singleCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat(centroid.lon(), closeTo(singleCentroid.lon(), GEOHASH_TOLERANCE));\n }\n \n @Test\n@@ -122,10 +124,12 @@ public void singleValueField_getProperty() throws Exception {\n assertThat(geoCentroid.getName(), equalTo(aggName));\n assertThat((GeoCentroid) global.getProperty(aggName), sameInstance(geoCentroid));\n GeoPoint centroid = geoCentroid.centroid();\n- assertThat(centroid, equalTo(singleCentroid));\n- assertThat((GeoPoint) global.getProperty(aggName + \".value\"), 
equalTo(singleCentroid));\n- assertThat((double) global.getProperty(aggName + \".lat\"), closeTo(singleCentroid.lat(), 1e-5));\n- assertThat((double) global.getProperty(aggName + \".lon\"), closeTo(singleCentroid.lon(), 1e-5));\n+ assertThat(centroid.lat(), closeTo(singleCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat(centroid.lon(), closeTo(singleCentroid.lon(), GEOHASH_TOLERANCE));\n+ assertThat(((GeoPoint) global.getProperty(aggName + \".value\")).lat(), closeTo(singleCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat(((GeoPoint) global.getProperty(aggName + \".value\")).lon(), closeTo(singleCentroid.lon(), GEOHASH_TOLERANCE));\n+ assertThat((double) global.getProperty(aggName + \".lat\"), closeTo(singleCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat((double) global.getProperty(aggName + \".lon\"), closeTo(singleCentroid.lon(), GEOHASH_TOLERANCE));\n }\n \n @Test\n@@ -140,7 +144,8 @@ public void multiValuedField() throws Exception {\n assertThat(geoCentroid, notNullValue());\n assertThat(geoCentroid.getName(), equalTo(aggName));\n GeoPoint centroid = geoCentroid.centroid();\n- assertThat(centroid, equalTo(multiCentroid));\n+ assertThat(centroid.lat(), closeTo(multiCentroid.lat(), GEOHASH_TOLERANCE));\n+ assertThat(centroid.lon(), closeTo(multiCentroid.lon(), GEOHASH_TOLERANCE));\n }\n \n @Test\n@@ -160,7 +165,10 @@ public void singleValueFieldAsSubAggToGeohashGrid() throws Exception {\n String geohash = cell.getKeyAsString();\n GeoPoint expectedCentroid = expectedCentroidsForGeoHash.get(geohash);\n GeoCentroid centroidAgg = cell.getAggregations().get(aggName);\n- assertEquals(\"Geohash \" + geohash + \" has wrong centroid \", expectedCentroid, centroidAgg.centroid());\n+ assertThat(\"Geohash \" + geohash + \" has wrong centroid latitude \", expectedCentroid.lat(),\n+ closeTo(centroidAgg.centroid().lat(), GEOHASH_TOLERANCE));\n+ assertThat(\"Geohash \" + geohash + \" has wrong centroid longitude\", expectedCentroid.lon(),\n+ closeTo(centroidAgg.centroid().lon(), GEOHASH_TOLERANCE));\n }\n }\n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/metrics/GeoCentroidIT.java", "status": "modified" } ] }
{ "body": "```\n# With a one-word query and minimum_should_match=-50%, adding extra non-matching fields should not matter.\n# Tested on v1.7.2.\n\n# delete and re-create the index\ncurl -XDELETE localhost:9200/test\ncurl -XPUT localhost:9200/test\n\necho \n\n # insert a document\ncurl -XPUT 'http://localhost:9200/test/test/1' -d '\n { \"title\": \"test document\"}\n '\ncurl -XPOST 'http://localhost:9200/test/_refresh'\n\necho \n\n # this correctly finds the document (f1 is a non-existent field)\n curl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho \n\n# this incorrectly does not find the document (f1 and f2 are non-existent fields)\ncurl -XGET 'http://localhost:9200/test/test/_search' -d '{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n }\n'\n\necho\n```\n", "comments": [ { "body": "This does look like a bug. The min_should_match is being applied at the wrong level:\n\n```\nGET /test/test/_validate/query?explain\n{\n \"query\" : {\n \"simple_query_string\" : {\n \"fields\" : [ \"title\", \"f1\", \"f2\" ],\n \"query\" : \"test\",\n \"minimum_should_match\" : \"-50%\"\n }\n }\n}\n```\n\nReturns an explanation of:\n\n```\n+((f1:test title:test f2:test)~2)\n```\n\nWhile the `query_string` and `multi_match` equivalents return:\n\n```\n+(title:test | f1:test | f2:test)\n```\n", "created_at": "2015-10-02T15:53:05Z" }, { "body": "I had a look and saw that `simple_query_string` iterates over all fields for each token in the query string and combines them all in boolean query `should` clauses, and we apply the `minimum_should_match` on the whole result. \n\n@clintongormley As far as I understand you, we should parse the query string for each field separately, apply the `minimum_should_match` there and then combine the result in an overall Boolean query. This however raised another question for me. Suppose we have two terms like `\"query\" : \"test document\"` instead, then currently we we get:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nIf we would instead create the query per field individually we would get something like\n\n```\n((title:test title:document)~1 (f1:test f1:document)~1 (f2:test f2:document)~1)\n```\n\nWhile treating the query string for each field individually looks like the right behaviour in this case, I wonder if this will break other cases. wdyt?\n", "created_at": "2015-10-14T10:09:12Z" }, { "body": "@cbuescher the `query_string` query takes the same approach as your first output, ie:\n\n```\n((f1:test title:test f2:test) (f1:document title:document f2:document))~1\n```\n\nI think the bug is maybe a bit more subtle. A query across 3 fields for two terms with min should match 80% results in:\n\n```\nbool:\n min_should_match: 80% (==1)\n should:\n bool:\n should: [ f1:term1, f2:term1, f3:term1]\n bool:\n should: [ f1:term2, f2:term2, f3:term2]\n```\n\nhowever with only one term it is producing:\n\n```\nbool:\n min_should_match: 80% (==2) \n should: [ f1:term1, f2:term1, f3:term1]\n```\n\nIn other words, min should match is being applied to the wrong `bool` query. 
Instead, even the one term case should be wrapped in another `bool` query, and the min should match should be applied at that level.\n", "created_at": "2015-10-14T11:31:18Z" }, { "body": "@clintongormley Yes, I think thats what I meant. I'm working on a PR that applies the `minimum_should_match` to sub-queries that only target one field. That way your examples above would change to something like\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==1)\n should: [ f1:term1, f1:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f2:term1, f2:term2]\n bool:\n min_should_match: 80% (==1)\n should: [ f3:term1, f3:term2]\n```\n\nand for one term\n\n```\nbool: \n should:\n bool:\n min_should_match: 80% (==0)\n should: [ f1:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f2:term1]\n bool:\n min_should_match: 80% (==0)\n should: [ f3:term1]\n```\n\nIn the later case we already additionally simplify one-term bool queries to TermQueries.\n", "created_at": "2015-10-14T12:03:12Z" }, { "body": "@cbuescher I think that is incorrect. The simple query string query (like the query string query) is term-centric rather than field-centric. In other words, min should match should be applied to the number of terms (regardless of which field the term is in).\n\nI'm guessing that there is an \"optimization\" for the one term case where the field-level bool clause is not wrapped in an outer bool clause. Then the min should match is applied at the field level instead of at the term level, resulting in the wrong calculation.\n", "created_at": "2015-10-15T11:22:45Z" }, { "body": "That guess seems right, there is an optimization in lucenes\nSimpleQueryParser for boolean queries with 0 or 1 clauses that seems to be\nthe problem. I think we can overwrite that.\n\nOn Thu, Oct 15, 2015 at 1:23 PM, Clinton Gormley notifications@github.com\nwrote:\n\n> @cbuescher https://github.com/cbuescher I think that is incorrect. The\n> simple query string query (like the query string query) is term-centric\n> rather than field-centric. In other words, min should match should be\n> applied to the number of terms (regardless of which field the term is in).\n> \n> I'm guessing that there is an \"optimization\" for the one term case where\n> the field-level bool clause is not wrapped in an outer bool clause. Then\n> the min should match is applied at the field level instead of at the term\n> level, resulting in the wrong calculation.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/issues/13884#issuecomment-148358711\n> .\n\n## \n\nChristoph Büscher\n", "created_at": "2015-10-15T12:49:18Z" }, { "body": "If this is in Lucene, perhaps it should be fixed there?\n\n@jdconrad what do you think?\n", "created_at": "2015-10-15T15:04:19Z" }, { "body": "@clintongormley I don't think SimpleQueryParser#simplify() is at the root of this anymore. The problem seems to be that SimpleQueryParser parses term by term-centric, but only starts wrapping the resulting queries when combining more than two of them. For one search term and two fields I get a Boolean query with two TermQuery clauses (without enclosing Boolean query), for two terms and one field I get an enclosing Boolean query with two Boolean query subclauses. 
I'm not sure yet how this can be distiguished from outside of the Lucene parser without inspecting the query, and if a solution like that holds for more complicated cases.\n", "created_at": "2015-10-15T15:15:15Z" }, { "body": "Althought it would be nice if Lucene SimpleQueryParse would output a Boolquery with one should-clause and three nested Boolqueries for the 1-term/multi-field case, I think we can detect this case and do the wrapping in the additional Boolquery in the SimpleQueryStringBuilder. I just opened a PR.\n", "created_at": "2015-10-19T08:58:58Z" }, { "body": "The SQP wasn't really designed around multi-field terms, but needed to have it added afterwards for use as a default field which is why the min-should-match never gets applied down at that the level. I don't know if the correct behavior is to make it work on multi-fields. I'll have to give that some thought given that it really is as @cbuescher described as term-centric, and it sort of supposed to be disguised from the user. One thing that will make this easier to fix, though, I believe is #4707, since it will flatten the parse tree a bit.\n", "created_at": "2015-10-19T17:15:49Z" }, { "body": "@jdconrad thanks for explaining, in the meantime I opened #14186 which basically tries to distinguish the one vs. multi-field cases and tries wraps the resulting query one more time to get a correct min-should-match. Please leave comment there if my current approach will colide the plans regarding #4707.\n", "created_at": "2015-10-20T09:10:12Z" }, { "body": "Reopening this issue since the fix proposed in #14186 was too fragile. Discussed with @javanna and @jpountz, at this point we think the options are either fixing this in lucenes SimpleQueryParser so that we can apply minimum_should_match correctly on the ES side or remove this option from `simple_query_string` entirely because it cannot properly supported.\n", "created_at": "2015-11-03T11:01:39Z" }, { "body": "Trying to sum up this issue so far: \n- the number of should-clauses returned by `SimpleQueryParser` is not 1 for one search term and multiple fields, so we cannot apply `minimum_should_match` correctly in `SimpleQueryStringBuilder`. e.g. for `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` SimpleQueryParser returns a BooleanQuery with three should-clauses. As soon as we add more search terms, the number of should-clauses is the same as the number of search terms, e.g. `\"query\" : \"term1 term2\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` returns a BooleanQuery with two subclauses, one per term.\n- it is difficult to determine the number of terms from the query string upfront, because the tokenization depends on the analyzer used, so we really need `SimpleQueryParser#parse()` for this.\n- it is hard to determine the correct number of terms from the returned lucene query without making assumptions about the inner structure of the query (which is subject to change, reason for #14186 beeing reverted). e.g. currently `\"query\" : \"term1\", \"fields\" : [ \"f1\", \"f2\", \"f3\" ]` and `\"query\" : \"term1 term2 term3\", \"fields\" : [ \"f1\" ]` will return a BooleanQuery with same structure (three should-clauses, each containing a TermQuery). \n", "created_at": "2015-11-03T11:57:40Z" }, { "body": "@cbuescher this issue is fixed by https://github.com/elastic/elasticsearch/pull/16155. 
\n@rmuir has pointed out a nice way to distinguish between a single word query with multiple fields against a multi word query with a single field: we just have to check if the coord are disabled on the top level BooleanQuery, the simple query parser disables the coord when the boolean query for multiple fields is built.\nThough I tested the single word with multiple fields case only, if you think of other issues please reopen this ticket or open a new one ;).\n", "created_at": "2016-02-04T18:11:51Z" }, { "body": "@jimferenczi thats great, I just checked this with a test which is close to the problem desciption here. I'm not sure if this adds anything to your tests, but just in case I justed opened #16465 which adds this as an integration test for SimpleQueryStringQuery. Maybe you can take a look and tell me if it makes sense to add those as well.\n", "created_at": "2016-02-04T21:08:13Z" }, { "body": "@cbuescher thanks, a unit test in SimpleQueryStringBuilderTest could be useful as well. The integ test does not check the minimum should match that is applied (or not) to the boolean query. \n", "created_at": "2016-02-05T08:55:03Z" } ], "number": 13884, "title": "Bug with simple_query_string, minimum_should_match, and multiple fields." }
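The shape difference discussed in the comments above can be made concrete with plain Lucene queries. This is only an illustrative sketch (assuming a Lucene 5.x-style `BooleanQuery.Builder`, as used elsewhere in this codebase), not the parser code itself:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class MinShouldMatchShapes {

    public static void main(String[] args) {
        // Shape the issue describes for one term over three fields:
        // minimum_should_match is counted against the per-field expansions.
        BooleanQuery.Builder fieldCentric = new BooleanQuery.Builder();
        fieldCentric.add(new TermQuery(new Term("title", "test")), Occur.SHOULD);
        fieldCentric.add(new TermQuery(new Term("f1", "test")), Occur.SHOULD);
        fieldCentric.add(new TermQuery(new Term("f2", "test")), Occur.SHOULD);
        fieldCentric.setMinimumNumberShouldMatch(2); // "-50%" of 3 clauses => 2, the wrong unit
        System.out.println(fieldCentric.build());

        // Term-centric shape: the per-field expansion is one clause per term,
        // and minimum_should_match is counted against the number of terms.
        BooleanQuery.Builder perFieldExpansion = new BooleanQuery.Builder();
        perFieldExpansion.add(new TermQuery(new Term("title", "test")), Occur.SHOULD);
        perFieldExpansion.add(new TermQuery(new Term("f1", "test")), Occur.SHOULD);
        perFieldExpansion.add(new TermQuery(new Term("f2", "test")), Occur.SHOULD);
        Query termClause = perFieldExpansion.build();

        BooleanQuery.Builder termCentric = new BooleanQuery.Builder();
        termCentric.add(termClause, Occur.SHOULD);   // the single term "test"
        termCentric.setMinimumNumberShouldMatch(1);  // "-50%" of 1 term => 1
        System.out.println(termCentric.build());
    }
}
```

With `-50%`, three per-field clauses yield a required match count of 2, so a document containing the term only in `title` is rejected; counted against the single term, the requirement is 1 and the document matches.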
{ "body": "So far we apply the optional `minimum_should_match` parameter\nto the top-level query that resulted from parsing the query string\nincluding all of its specified fields which lead to wrong behaviour\nin cases like stated in #13884.\n\nInstead we now parse the query string for each field/weight pair\nindividually, then apply the `minimum_should_match` parameter to\nthose queries and combine the result into an overall boolean query\nwith should-clauses for each field.\n\nCloses #13884\n", "number": 14111, "review_comments": [], "title": "Fix SimpleQueryString `minimum_should_match` handling" }
{ "commits": [ { "message": "Fix SimpleQueryString `minimum_should_match` handling\n\nSo far we applied the optional `minimum_should_match` parameter\nto the top-level query that resulted from parsing the query string\nincluding all of its specified fields which lead to wrong behaviour\nin cases like stated in #13884.\n\nInstead we now parse the query string for each field/weight pair\nindividually, then apply the `minimum_should_match` parameter to\nthose queries and combine the result into an overall boolean query\nwith should-clauses for each field.\n\nCloses #13884" } ], "files": [ { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.analysis.Analyzer;\n+import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Strings;\n@@ -32,9 +33,12 @@\n import org.elasticsearch.index.query.SimpleQueryParser.Settings;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.HashMap;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Map.Entry;\n import java.util.Objects;\n import java.util.TreeMap;\n \n@@ -277,17 +281,31 @@ protected Query doToQuery(QueryShardContext context) throws IOException {\n throw new QueryShardException(context, \"[\" + SimpleQueryStringBuilder.NAME + \"] analyzer [\" + analyzer\n + \"] not found\");\n }\n+ }\n \n+ final List<Query> perFieldQueries = new ArrayList<Query>(resolvedFieldsAndWeights.size());\n+ for (Entry<String, Float> entry : resolvedFieldsAndWeights.entrySet()) {\n+ Map<String, Float> fieldAndWeight = new HashMap<String, Float>(1);\n+ fieldAndWeight.put(entry.getKey(), entry.getValue());\n+ SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, fieldAndWeight, flags, settings);\n+ sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur());\n+ Query query = sqp.parse(queryText);\n+ if (minimumShouldMatch != null && query instanceof BooleanQuery) {\n+ query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch);\n+ }\n+ perFieldQueries.add(query);\n }\n \n- SimpleQueryParser sqp = new SimpleQueryParser(luceneAnalyzer, resolvedFieldsAndWeights, flags, settings);\n- sqp.setDefaultOperator(defaultOperator.toBooleanClauseOccur());\n+ // if only one query, then simplify and return only that one\n+ if (perFieldQueries.size() == 1) {\n+ return perFieldQueries.get(0);\n+ }\n \n- Query query = sqp.parse(queryText);\n- if (minimumShouldMatch != null && query instanceof BooleanQuery) {\n- query = Queries.applyMinimumShouldMatch((BooleanQuery) query, minimumShouldMatch);\n+ final BooleanQuery.Builder booleanQuery = new BooleanQuery.Builder();\n+ for (Query query : perFieldQueries) {\n+ booleanQuery.add(query, BooleanClause.Occur.SHOULD);\n }\n- return query;\n+ return booleanQuery.build();\n }\n \n private static String resolveIndexName(String fieldName, QueryShardContext context) {", "filename": "core/src/main/java/org/elasticsearch/index/query/SimpleQueryStringBuilder.java", "status": "modified" }, { "diff": "@@ -272,10 +272,6 @@ protected void doAssertLuceneQuery(SimpleQueryStringBuilder queryBuilder, Query\n assertThat(termQuery.getTerm().field(), equalTo(fields.next()));\n assertThat(termQuery.getTerm().text().toLowerCase(Locale.ROOT), equalTo(queryBuilder.value().toLowerCase(Locale.ROOT)));\n }\n-\n- if (queryBuilder.minimumShouldMatch() != null) {\n- assertThat(boolQuery.getMinimumNumberShouldMatch(), 
greaterThan(0));\n- }\n } else if (queryBuilder.fields().size() <= 1) {\n assertTrue(\"Query should have been TermQuery but was \" + query.getClass().getName(), query instanceof TermQuery);\n \n@@ -299,7 +295,6 @@ protected void assertBoost(SimpleQueryStringBuilder queryBuilder, Query query) t\n //instead of trying to reparse the query and guess what the boost should be, we delegate boost checks to specific boost tests below\n }\n \n-\n private int shouldClauses(BooleanQuery query) {\n int result = 0;\n for (BooleanClause c : query.clauses()) {\n@@ -327,4 +322,61 @@ public void testToQueryBoost() throws IOException {\n assertThat(query, instanceOf(TermQuery.class));\n assertThat(query.getBoost(), equalTo(10f));\n }\n+\n+ @Test\n+ public void testMinimumShouldMatch() throws IOException {\n+ assumeTrue(\"test runs only when at least a type is registered\", getCurrentTypes().length > 0);\n+ QueryShardContext shardContext = createShardContext();\n+ SimpleQueryStringBuilder simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test\");\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n+ simpleQueryStringBuilder.minimumShouldMatch(\"70%\");\n+\n+ // one field and one term should be reduced to simple term query\n+ TermQuery termQuery = (TermQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+\n+ // one field and two terms should result in boolean query\n+ simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test me\");\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n+ simpleQueryStringBuilder.minimumShouldMatch(\"70%\");\n+\n+ BooleanQuery query = (BooleanQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+ assertEquals(\"expected two should clauses\", 2, query.clauses().size());\n+ assertEquals(\"minimum should match should be 1\", 1, query.getMinimumNumberShouldMatch());\n+\n+ // one field and three terms should result in boolean query\n+ simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test me too\");\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n+ simpleQueryStringBuilder.minimumShouldMatch(\"70%\");\n+\n+ query = (BooleanQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+ assertEquals(\"expected two should clauses\", 3, query.clauses().size());\n+ assertEquals(\"minimum should match should be 1\", 2, query.getMinimumNumberShouldMatch());\n+\n+ // two fields and three terms should result in boolean query with two clauses\n+ simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test me too\");\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n+ simpleQueryStringBuilder.field(\"f1\");\n+ simpleQueryStringBuilder.minimumShouldMatch(\"70%\");\n+\n+ query = (BooleanQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+ assertEquals(\"expected two should clauses\", 2, query.clauses().size());\n+ assertEquals(\"minimum should match of outer query should be 0\", 0, query.getMinimumNumberShouldMatch());\n+ for (BooleanClause clause : query.clauses()) {\n+ assertEquals(\"subclauses should have minimum_match 2\", 2, ((BooleanQuery) clause.getQuery()).getMinimumNumberShouldMatch());\n+ }\n+\n+ // three fields and one term should result in boolean query with three clauses\n+ simpleQueryStringBuilder = new SimpleQueryStringBuilder(\"test\");\n+ simpleQueryStringBuilder.field(STRING_FIELD_NAME);\n+ simpleQueryStringBuilder.field(\"f1\");\n+ simpleQueryStringBuilder.field(\"f2\");\n+ simpleQueryStringBuilder.minimumShouldMatch(\"70%\");\n+\n+ query = (BooleanQuery) simpleQueryStringBuilder.toQuery(shardContext);\n+ assertEquals(\"expected two 
should clauses\", 3, query.clauses().size());\n+ assertEquals(\"minimum should match of outer query should be 0\", 0, query.getMinimumNumberShouldMatch());\n+ for (BooleanClause clause : query.clauses()) {\n+ assertThat(\"subclauses should be simple term queries\", clause.getQuery(), instanceOf(TermQuery.class));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleQueryStringBuilderTests.java", "status": "modified" }, { "diff": "@@ -137,8 +137,30 @@ public void testSimpleQueryStringMinimumShouldMatch() throws Exception {\n \n logger.info(\"--> query 6\");\n searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo bar baz\").field(\"body2\").field(\"other\").minimumShouldMatch(\"70%\")).get();\n- assertHitCount(searchResponse, 3l);\n- assertSearchHits(searchResponse, \"6\", \"7\", \"8\");\n+ assertHitCount(searchResponse, 2l);\n+ assertSearchHits(searchResponse, \"7\", \"8\");\n+ }\n+\n+ /**\n+ * Test case from #13884, negative minimum_should_match and non existing fields\n+ */\n+ public void testNegativeMinimumShouldMatch() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen(\"test\");\n+ indexRandom(true, false,\n+ client().prepareIndex(\"test\", \"test\", \"1\").setSource(\"title\", \"foo bar\"));\n+\n+ logger.info(\"--> query 1\");\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo\")\n+ .field(\"title\").field(\"non-existing-f1\").minimumShouldMatch(\"-50%\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ logger.info(\"--> query 2\");\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryStringQuery(\"foo\")\n+ .field(\"title\").field(\"non-existing-f1\").field(\"non-existing-f2\").minimumShouldMatch(\"-50%\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n }\n \n @Test", "filename": "core/src/test/java/org/elasticsearch/search/query/SimpleQueryStringIT.java", "status": "modified" } ] }
{ "body": "The current implementation of `Cache#computeIfAbsent` can lead to deadlocks in situations where dependent key loading is occurring. This is the case because of the locks that are taken to ensure that the loader is invoked at most once per key. In particular, consider two threads `t1` and `t2` invoking this method for keys `k1` and `k2` which will both trigger dependent calls to `Cache#computeIfAbsent` for keys `kd1` and `kd2`. In cases when `k1` and `kd2`and in the same segment, and`k2`and `kd1` are in the same segment then:\r\n1. `t1` locks the segment for `k1`\r\n2. `t2` locks the segment for `k2`\r\n3. `t1` blocks waiting for the lock for the segment for `kd1`\r\n4. `t2` blocks waiting for the lock for the segment for `kd2`\r\n\r\nis a deadlock. This unfortunate situation surfaced in a failed [build](http://build-us-00.elastic.co/job/es_core_master_medium/2473/).\r\n\r\n```\r\n\"elasticsearch[node_s0][warmer][T#5]\" ID=19141 WAITING on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366 owned by \"elasticsearch[node_s0][warmer][T#4]\" ID=19139\r\n at sun.misc.Unsafe.park(Native Method)\r\n - waiting on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366\r\n at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)\r\n at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)\r\n at org.elasticsearch.common.util.concurrent.ReleasableLock.acquire(ReleasableLock.java:55)\r\n at org.elasticsearch.common.cache.Cache$CacheSegment.get(Cache.java:187)\r\n at org.elasticsearch.common.cache.Cache.get(Cache.java:279)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:300)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:150)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder.build(GlobalOrdinalsBuilder.java:52)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$27(IndicesFieldDataCache.java:179)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$366/28099691.load(Unknown Source)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:311)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:174)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:68)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:1019)\r\n at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:745)\r\n Locked synchronizers:\r\n - java.util.concurrent.ThreadPoolExecutor$Worker@41b4471c\r\n - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477\r\n\r\n\"elasticsearch[node_s0][warmer][T#4]\" ID=19139 WAITING on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477 owned by \"elasticsearch[node_s0][warmer][T#5]\" ID=19141\r\n at sun.misc.Unsafe.park(Native Method)\r\n - waiting on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477\r\n at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)\r\n at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)\r\n at org.elasticsearch.common.util.concurrent.ReleasableLock.acquire(ReleasableLock.java:55)\r\n at org.elasticsearch.common.cache.Cache$CacheSegment.get(Cache.java:187)\r\n at org.elasticsearch.common.cache.Cache.get(Cache.java:279)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:300)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:150)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder.build(GlobalOrdinalsBuilder.java:52)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$27(IndicesFieldDataCache.java:179)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$366/28099691.load(Unknown Source)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:311)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:174)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:68)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:1019)\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:745)\r\n Locked synchronizers:\r\n - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366\r\n - java.util.concurrent.ThreadPoolExecutor$Worker@42f5ef87\r\n```\r\n", "comments": [ { "body": "There are two pull requests open to address this issue. The first is #14068 which relaxes the constraint that `Cache#computeIfAbsent` be called at most once per key. 
The second is #14091 which changes the synchronization mechanism to be the key itself so that loading does not occur under the segment lock.\n\nOnly one of these two pull requests should be merged into master but given the [feedback](https://github.com/elastic/elasticsearch/pull/14068#issuecomment-147525189) on #14068 from @jpountz I wanted to explore a different approach for solving the deadlock issue.\n", "created_at": "2015-10-13T17:03:54Z" } ], "number": 14090, "title": "Cache#computeIfAbsent can lead to deadlocks" }
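Stripped of the cache machinery, the four numbered steps above are a classic lock-ordering deadlock. A standalone sketch of that pattern, with the segment locks modeled as plain `ReentrantLock`s and the timing forced with sleeps purely for illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderingDeadlock {

    // stand-ins for the two cache segment locks involved
    static final ReentrantLock segmentA = new ReentrantLock(); // holds k1 and kd2
    static final ReentrantLock segmentB = new ReentrantLock(); // holds k2 and kd1

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            segmentA.lock();                     // t1: load k1 under segment A's lock
            try {
                sleep(100);                      // give t2 time to grab segment B
                segmentB.lock();                 // t1: dependent load of kd1 needs segment B
                try { /* load kd1 */ } finally { segmentB.unlock(); }
            } finally {
                segmentA.unlock();
            }
        });
        Thread t2 = new Thread(() -> {
            segmentB.lock();                     // t2: load k2 under segment B's lock
            try {
                sleep(100);
                segmentA.lock();                 // t2: dependent load of kd2 needs segment A
                try { /* load kd2 */ } finally { segmentA.unlock(); }
            } finally {
                segmentB.unlock();
            }
        });
        t1.start();
        t2.start();
        // With the sleeps in place, each thread ends up blocked on the lock the other holds: a deadlock.
    }

    static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```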
{ "body": "This commit changes the behavior of Cache#computeIfAbsent to not invoke\nload for a key under the segment lock. Instead, the synchronization\nmechanism to ensure that load is invoked at most once per key is\nthrough the use of a future. Under the segment lock, we put a future in\nthe cache and through this ensure that load is invoked at most once per\nkey. This will not lead to the same deadlock situation as before\nbecause a dependent key load on the same thread can not be triggered\nwhile the segment lock is held.\n\nCloses #14090\n", "number": 14091, "review_comments": [ { "body": "can we avoid the weak reference? would it be enough to also clear a key from this map whenever we clear a key from the main map?\n", "created_at": "2015-10-13T17:55:24Z" }, { "body": "could it still deadlock if consumers of this cache also use the keys as locks?\n", "created_at": "2015-10-13T17:58:47Z" }, { "body": "Yes, but I don't think that we want an internal lock per key. Back to the drawing board. :)\n", "created_at": "2015-10-13T18:08:45Z" }, { "body": "I think it would help to still have some comments explaining the mechanism, how the loader is called outside of the lock, etc.\n", "created_at": "2015-10-13T23:04:57Z" }, { "body": "Added in 9fa471c48ea5b0074fd5b5ac9d9ab0a9d17fc903.\n", "created_at": "2015-10-14T01:15:11Z" } ], "title": "Avoid deadlocks in Cache#computeIfAbsent" }
{ "commits": [ { "message": "Avoid deadlocks in Cache#computeIfAbsent\n\nThis commit changes the behavior of Cache#computeIfAbsent to not invoke\nload for a key under the segment lock. Instead, the synchronization\nmechanism to ensure that load is invoked at most once per key is\nthrough the use of a future. Under the segment lock, we put a future in\nthe cache and through this ensure that load is invoked at most once per\nkey. This will not lead to the same deadlock situation as before\nbecause a dependent key load on the same thread can not be triggered\nwhile the segment lock is held.\n\nCloses #14090" } ], "files": [ { "diff": "@@ -23,7 +23,10 @@\n import org.elasticsearch.common.util.concurrent.ReleasableLock;\n \n import java.util.*;\n+import java.util.concurrent.CompletableFuture;\n import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.Future;\n+import java.util.concurrent.FutureTask;\n import java.util.concurrent.atomic.LongAdder;\n import java.util.concurrent.locks.ReadWriteLock;\n import java.util.concurrent.locks.ReentrantLock;\n@@ -172,7 +175,8 @@ private static class CacheSegment<K, V> {\n ReleasableLock readLock = new ReleasableLock(segmentLock.readLock());\n ReleasableLock writeLock = new ReleasableLock(segmentLock.writeLock());\n \n- Map<K, Entry<K, V>> map = new HashMap<>();\n+ Map<K, Future<Entry<K, V>>> map = new HashMap<>();\n+\n SegmentStats segmentStats = new SegmentStats();\n \n /**\n@@ -183,13 +187,19 @@ private static class CacheSegment<K, V> {\n * @return the entry if there was one, otherwise null\n */\n Entry<K, V> get(K key, long now) {\n- Entry<K, V> entry;\n+ Future<Entry<K, V>> future;\n+ Entry<K, V> entry = null;\n try (ReleasableLock ignored = readLock.acquire()) {\n- entry = map.get(key);\n+ future = map.get(key);\n }\n- if (entry != null) {\n+ if (future != null) {\n segmentStats.hit();\n- entry.accessTime = now;\n+ try {\n+ entry = future.get();\n+ entry.accessTime = now;\n+ } catch (ExecutionException | InterruptedException e) {\n+ throw new IllegalStateException(\"future should be a completedFuture for which get should not throw\", e);\n+ }\n } else {\n segmentStats.miss();\n }\n@@ -208,7 +218,12 @@ Tuple<Entry<K, V>, Entry<K, V>> put(K key, V value, long now) {\n Entry<K, V> entry = new Entry<>(key, value, now);\n Entry<K, V> existing;\n try (ReleasableLock ignored = writeLock.acquire()) {\n- existing = map.put(key, entry);\n+ try {\n+ Future<Entry<K, V>> future = map.put(key, CompletableFuture.completedFuture(entry));\n+ existing = future != null ? 
future.get() : null;\n+ } catch (ExecutionException | InterruptedException e) {\n+ throw new IllegalStateException(\"future should be a completedFuture for which get should not throw\", e);\n+ }\n }\n return Tuple.tuple(entry, existing);\n }\n@@ -220,12 +235,18 @@ Tuple<Entry<K, V>, Entry<K, V>> put(K key, V value, long now) {\n * @return the removed entry if there was one, otherwise null\n */\n Entry<K, V> remove(K key) {\n- Entry<K, V> entry;\n+ Future<Entry<K, V>> future;\n+ Entry<K, V> entry = null;\n try (ReleasableLock ignored = writeLock.acquire()) {\n- entry = map.remove(key);\n+ future = map.remove(key);\n }\n- if (entry != null) {\n+ if (future != null) {\n segmentStats.eviction();\n+ try {\n+ entry = future.get();\n+ } catch (ExecutionException | InterruptedException e) {\n+ throw new IllegalStateException(\"future should be a completedFuture for which get should not throw\", e);\n+ }\n }\n return entry;\n }\n@@ -287,7 +308,8 @@ private V get(K key, long now) {\n \n /**\n * If the specified key is not already associated with a value (or is mapped to null), attempts to compute its\n- * value using the given mapping function and enters it into this map unless null.\n+ * value using the given mapping function and enters it into this map unless null. The load method for a given key\n+ * will be invoked at most once.\n *\n * @param key the key whose associated value is to be returned or computed for if non-existant\n * @param loader the function to compute a value given a key\n@@ -299,25 +321,35 @@ public V computeIfAbsent(K key, CacheLoader<K, V> loader) throws ExecutionExcept\n long now = now();\n V value = get(key, now);\n if (value == null) {\n+ // we need to synchronize loading of a value for a given key; however, holding the segment lock while\n+ // invoking load can lead to deadlock against another thread due to dependent key loading; therefore, we\n+ // need a mechanism to ensure that load is invoked at most once, but we are not invoking load while holding\n+ // the segment lock; to do this, we atomically put a future in the map that can load the value, and then\n+ // get the value from this future on the thread that won the race to place the future into the segment map\n CacheSegment<K, V> segment = getCacheSegment(key);\n- // we synchronize against the segment lock; this is to avoid a scenario where another thread is inserting\n- // a value for the same key via put which would not be observed on this thread without a mechanism\n- // synchronizing the two threads; it is possible that the segment lock will be too expensive here (it blocks\n- // readers too!) 
so consider this as a possible place to optimize should contention be observed\n+ Future<Entry<K, V>> future;\n+ FutureTask<Entry<K, V>> task = new FutureTask<>(() -> new Entry<>(key, loader.load(key), now));\n try (ReleasableLock ignored = segment.writeLock.acquire()) {\n- value = get(key, now);\n- if (value == null) {\n- try {\n- value = loader.load(key);\n- } catch (Exception e) {\n- throw new ExecutionException(e);\n- }\n- if (value == null) {\n- throw new ExecutionException(new NullPointerException(\"loader returned a null value\"));\n- }\n- put(key, value, now);\n- }\n+ future = segment.map.putIfAbsent(key, task);\n+ }\n+ if (future == null) {\n+ future = task;\n+ task.run();\n+ }\n+\n+ Entry<K, V> entry;\n+ try {\n+ entry = future.get();\n+ } catch (InterruptedException e) {\n+ throw new ExecutionException(e);\n+ }\n+ if (entry.value == null) {\n+ throw new ExecutionException(new NullPointerException(\"loader returned a null value\"));\n+ }\n+ try (ReleasableLock ignored = lruLock.acquire()) {\n+ promote(entry, now);\n }\n+ value = entry.value;\n }\n return value;\n }", "filename": "core/src/main/java/org/elasticsearch/common/cache/Cache.java", "status": "modified" }, { "diff": "@@ -394,12 +394,12 @@ public void testNotificationOnInvalidateAll() {\n // randomly replace some entries, increasing the weight by 1 for each replacement, then count that the cache size\n // is correct\n public void testReplaceRecomputesSize() {\n- class Key {\n- private int key;\n+ class Value {\n+ private String value;\n private long weight;\n \n- public Key(int key, long weight) {\n- this.key = key;\n+ public Value(String value, long weight) {\n+ this.value = value;\n this.weight = weight;\n }\n \n@@ -408,28 +408,28 @@ public boolean equals(Object o) {\n if (this == o) return true;\n if (o == null || getClass() != o.getClass()) return false;\n \n- Key key1 = (Key) o;\n+ Value that = (Value) o;\n \n- return key == key1.key;\n+ return value == that.value;\n \n }\n \n @Override\n public int hashCode() {\n- return key;\n+ return value.hashCode();\n }\n }\n- Cache<Key, String> cache = CacheBuilder.<Key, String>builder().weigher((k, s) -> k.weight).build();\n+ Cache<Integer, Value> cache = CacheBuilder.<Integer, Value>builder().weigher((k, s) -> s.weight).build();\n for (int i = 0; i < numberOfEntries; i++) {\n- cache.put(new Key(i, 1), Integer.toString(i));\n+ cache.put(i, new Value(Integer.toString(i), 1));\n }\n assertEquals(numberOfEntries, cache.count());\n assertEquals(numberOfEntries, cache.weight());\n int replaced = 0;\n for (int i = 0; i < numberOfEntries; i++) {\n if (rarely()) {\n replaced++;\n- cache.put(new Key(i, 2), Integer.toString(i));\n+ cache.put(i, new Value(Integer.toString(i), 2));\n }\n }\n assertEquals(numberOfEntries, cache.count());", "filename": "core/src/test/java/org/elasticsearch/common/cache/CacheTests.java", "status": "modified" } ] }
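The commit above replaces loading under the segment lock with a future that is published atomically and then completed outside any lock. As a rough, self-contained illustration of that pattern — not the Elasticsearch `Cache` itself — here is a minimal sketch built on a plain `ConcurrentHashMap`; the class and method names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

// At most one loader invocation per key, with the loader running outside any
// map-wide lock so dependent key loads cannot deadlock against each other.
public class FutureBackedCache<K, V> {
    private final Map<K, Future<V>> map = new ConcurrentHashMap<>();

    public V computeIfAbsent(K key, Function<K, V> loader) throws ExecutionException {
        Future<V> future = map.get(key);
        if (future == null) {
            // Publish the task atomically; a losing thread gets the winner's
            // future instead of invoking the loader a second time.
            FutureTask<V> task = new FutureTask<>(() -> loader.apply(key));
            future = map.putIfAbsent(key, task);
            if (future == null) {
                future = task;
                task.run(); // the load happens here, with no cache lock held
            }
        }
        try {
            return future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ExecutionException(e);
        }
    }
}
```

The thread that wins the `putIfAbsent` race runs the task on its own thread, so a dependent `computeIfAbsent` call made from inside the loader re-enters the map without holding any lock that another loader could be waiting on.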
{ "body": "The current implementation of `Cache#computeIfAbsent` can lead to deadlocks in situations where dependent key loading is occurring. This is the case because of the locks that are taken to ensure that the loader is invoked at most once per key. In particular, consider two threads `t1` and `t2` invoking this method for keys `k1` and `k2` which will both trigger dependent calls to `Cache#computeIfAbsent` for keys `kd1` and `kd2`. In cases when `k1` and `kd2`and in the same segment, and`k2`and `kd1` are in the same segment then:\r\n1. `t1` locks the segment for `k1`\r\n2. `t2` locks the segment for `k2`\r\n3. `t1` blocks waiting for the lock for the segment for `kd1`\r\n4. `t2` blocks waiting for the lock for the segment for `kd2`\r\n\r\nis a deadlock. This unfortunate situation surfaced in a failed [build](http://build-us-00.elastic.co/job/es_core_master_medium/2473/).\r\n\r\n```\r\n\"elasticsearch[node_s0][warmer][T#5]\" ID=19141 WAITING on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366 owned by \"elasticsearch[node_s0][warmer][T#4]\" ID=19139\r\n at sun.misc.Unsafe.park(Native Method)\r\n - waiting on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366\r\n at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)\r\n at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)\r\n at org.elasticsearch.common.util.concurrent.ReleasableLock.acquire(ReleasableLock.java:55)\r\n at org.elasticsearch.common.cache.Cache$CacheSegment.get(Cache.java:187)\r\n at org.elasticsearch.common.cache.Cache.get(Cache.java:279)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:300)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:150)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder.build(GlobalOrdinalsBuilder.java:52)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$27(IndicesFieldDataCache.java:179)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$366/28099691.load(Unknown Source)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:311)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:174)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:68)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:1019)\r\n at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:745)\r\n Locked synchronizers:\r\n - java.util.concurrent.ThreadPoolExecutor$Worker@41b4471c\r\n - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477\r\n\r\n\"elasticsearch[node_s0][warmer][T#4]\" ID=19139 WAITING on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477 owned by \"elasticsearch[node_s0][warmer][T#5]\" ID=19141\r\n at sun.misc.Unsafe.park(Native Method)\r\n - waiting on java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@23fdb477\r\n at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)\r\n at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)\r\n at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)\r\n at org.elasticsearch.common.util.concurrent.ReleasableLock.acquire(ReleasableLock.java:55)\r\n at org.elasticsearch.common.cache.Cache$CacheSegment.get(Cache.java:187)\r\n at org.elasticsearch.common.cache.Cache.get(Cache.java:279)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:300)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:150)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexFieldData.load(AbstractIndexFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder.build(GlobalOrdinalsBuilder.java:52)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:80)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.localGlobalDirect(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$27(IndicesFieldDataCache.java:179)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$366/28099691.load(Unknown Source)\r\n at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:311)\r\n at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:174)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:68)\r\n at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:41)\r\n at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:1019)\r\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n at java.lang.Thread.run(Thread.java:745)\r\n Locked synchronizers:\r\n - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2255a366\r\n - java.util.concurrent.ThreadPoolExecutor$Worker@42f5ef87\r\n```\r\n", "comments": [ { "body": "There are two pull requests open to address this issue. The first is #14068 which relaxes the constraint that `Cache#computeIfAbsent` be called at most once per key. 
The second is #14091 which changes the synchronization mechanism to be the key itself so that loading does not occur under the segment lock.\n\nOnly one of these two pull requests should be merged into master but given the [feedback](https://github.com/elastic/elasticsearch/pull/14068#issuecomment-147525189) on #14068 from @jpountz I wanted to explore a different approach for solving the deadlock issue.\n", "created_at": "2015-10-13T17:03:54Z" } ], "number": 14090, "title": "Cache#computeIfAbsent can lead to deadlocks" }
{ "body": "Previously, Cache#computeIfAbsent was implemented to ensure that the\nloader for a key was invoked at most once. However, this can lead to\ndeadlocks in situations where dependent key loading is occurring\nbecause of the locks taken to ensure that the loader is invoked at once\nper key. This commit changes the behavior of Cache#computeIfAbsent to\nnot be atomic and therefore no longer ensures that the loader is\ninvoked at most once per key.\n\nCloses #14090 \n", "number": 14068, "review_comments": [], "title": "Cache#computeIfAbsent no longer ensures at-most-once invocation of load" }
{ "commits": [ { "message": "Cache#computeIfAbsent no longer ensures at-most-once invocation of load\n\nPreviously, Cache#computeIfAbsent was implemented to ensure that the\nloader for a key was invoked at most once. However, this can lead to\ndeadlocks in situations where dependent key loading is occurring\nbecause of the locks taken to ensure that the loader is invoked at once\nper key. This commit changes the behavior of Cache#computeIfAbsent to\nnot be atomic and therefore no longer ensures that the loader is\ninvoked at most once per key.\n\nCloses #14090" } ], "files": [ { "diff": "@@ -199,16 +199,27 @@ Entry<K, V> get(K key, long now) {\n /**\n * put an entry into the segment\n *\n- * @param key the key of the entry to add to the cache\n- * @param value the value of the entry to add to the cache\n- * @param now the access time of this entry\n- * @return a tuple of the new entry and the existing entry, if there was one otherwise null\n+ * @param key the key of the entry to add to the cache\n+ * @param value the value of the entry to add to the cache\n+ * @param now the access time of this entry\n+ * @param onlyIfAbsent whether or not to unconditionally put the association or only if one does not exist\n+ * @return a tuple of the entry to be promoted and the entry to removed or null if no such entry\n */\n- Tuple<Entry<K, V>, Entry<K, V>> put(K key, V value, long now) {\n+ Tuple<Entry<K, V>, Entry<K, V>> put(K key, V value, long now, boolean onlyIfAbsent) {\n Entry<K, V> entry = new Entry<>(key, value, now);\n Entry<K, V> existing;\n try (ReleasableLock ignored = writeLock.acquire()) {\n- existing = map.put(key, entry);\n+ if (!onlyIfAbsent) {\n+ existing = map.put(key, entry);\n+ } else {\n+ existing = map.get(key);\n+ if (existing == null) {\n+ map.put(key, entry);\n+ } else {\n+ entry = existing;\n+ existing = null;\n+ }\n+ }\n }\n return Tuple.tuple(entry, existing);\n }\n@@ -291,33 +302,22 @@ private V get(K key, long now) {\n *\n * @param key the key whose associated value is to be returned or computed for if non-existant\n * @param loader the function to compute a value given a key\n- * @return the current (existing or computed) value associated with the specified key, or null if the computed\n- * value is null\n+ * @return the current (existing or computed) value associated with the specified key\n * @throws ExecutionException thrown if loader throws an exception\n */\n public V computeIfAbsent(K key, CacheLoader<K, V> loader) throws ExecutionException {\n long now = now();\n V value = get(key, now);\n if (value == null) {\n- CacheSegment<K, V> segment = getCacheSegment(key);\n- // we synchronize against the segment lock; this is to avoid a scenario where another thread is inserting\n- // a value for the same key via put which would not be observed on this thread without a mechanism\n- // synchronizing the two threads; it is possible that the segment lock will be too expensive here (it blocks\n- // readers too!) 
so consider this as a possible place to optimize should contention be observed\n- try (ReleasableLock ignored = segment.writeLock.acquire()) {\n- value = get(key, now);\n- if (value == null) {\n- try {\n- value = loader.load(key);\n- } catch (Exception e) {\n- throw new ExecutionException(e);\n- }\n- if (value == null) {\n- throw new ExecutionException(new NullPointerException(\"loader returned a null value\"));\n- }\n- put(key, value, now);\n- }\n+ try {\n+ value = loader.load(key);\n+ } catch (Exception e) {\n+ throw new ExecutionException(e);\n+ }\n+ if (value == null) {\n+ throw new ExecutionException(new NullPointerException(\"loader returned a null value\"));\n }\n+ put(key, value, now, true);\n }\n return value;\n }\n@@ -331,12 +331,12 @@ public V computeIfAbsent(K key, CacheLoader<K, V> loader) throws ExecutionExcept\n */\n public void put(K key, V value) {\n long now = now();\n- put(key, value, now);\n+ put(key, value, now, false);\n }\n \n- private void put(K key, V value, long now) {\n+ private void put(K key, V value, long now, boolean onlyIfAbsent) {\n CacheSegment<K, V> segment = getCacheSegment(key);\n- Tuple<Entry<K, V>, Entry<K, V>> tuple = segment.put(key, value, now);\n+ Tuple<Entry<K, V>, Entry<K, V>> tuple = segment.put(key, value, now, onlyIfAbsent);\n boolean replaced = false;\n try (ReleasableLock ignored = lruLock.acquire()) {\n if (tuple.v2() != null && tuple.v2().state == State.EXISTING) {", "filename": "core/src/main/java/org/elasticsearch/common/cache/Cache.java", "status": "modified" }, { "diff": "@@ -460,38 +460,6 @@ public void testNotificationOnReplace() {\n assertEquals(replacements, notifications);\n }\n \n- public void testComputeIfAbsentCallsOnce() throws InterruptedException {\n- int numberOfThreads = randomIntBetween(2, 200);\n- final Cache<Integer, String> cache = CacheBuilder.<Integer, String>builder().build();\n- List<Thread> threads = new ArrayList<>();\n- AtomicReferenceArray flags = new AtomicReferenceArray(numberOfEntries);\n- for (int j = 0; j < numberOfEntries; j++) {\n- flags.set(j, false);\n- }\n- CountDownLatch latch = new CountDownLatch(1 + numberOfThreads);\n- for (int i = 0; i < numberOfThreads; i++) {\n- Thread thread = new Thread(() -> {\n- latch.countDown();\n- for (int j = 0; j < numberOfEntries; j++) {\n- try {\n- cache.computeIfAbsent(j, key -> {\n- assertTrue(flags.compareAndSet(key, false, true));\n- return Integer.toString(key);\n- });\n- } catch (ExecutionException e) {\n- throw new RuntimeException(e);\n- }\n- }\n- });\n- threads.add(thread);\n- thread.start();\n- }\n- latch.countDown();\n- for (Thread thread : threads) {\n- thread.join();\n- }\n- }\n-\n public void testComputeIfAbsentThrowsExceptionIfLoaderReturnsANullValue() {\n final Cache<Integer, String> cache = CacheBuilder.<Integer, String>builder().build();\n try {", "filename": "core/src/test/java/org/elasticsearch/common/cache/CacheTests.java", "status": "modified" } ] }
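For contrast, the relaxed semantics proposed in this alternative PR amount to dropping the synchronization around the loader entirely and letting a race decide which loaded value is kept. A minimal sketch of that behavior, again using a plain `ConcurrentHashMap` rather than the segmented cache (hypothetical class name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// No lock is held around load, so two racing threads may both invoke the
// loader for the same key; putIfAbsent keeps whichever value lands first.
public class RelaxedCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();

    public V computeIfAbsent(K key, Function<K, V> loader) {
        V value = map.get(key);
        if (value == null) {
            value = loader.apply(key);          // may run more than once per key
            V existing = map.putIfAbsent(key, value);
            if (existing != null) {
                value = existing;               // another thread won the race
            }
        }
        return value;
    }
}
```

This avoids the deadlock by construction, at the cost of potentially duplicated load work, which is why the future-based approach in #14091 was preferred.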
{ "body": "_This only applies to pre-2.0.0 ES!_\n\nWhile reviewing the changes in ES 2.x I noticed the removal of the `index_analyzer` in favor of just `analyzer` and `search_analyzer` in a mapping. \nWe actually use `index_analyzer` and `search_analyzer`, so I looked for documentation for pre-2.0.0 to tell me which order is used if I just replaced `index_analyzer` with `analyzer` - this way I can use a 2.0.0-compatible mapping already in the application, and decouple the actual update of the ES cluster from the application development cycle.\n\nFrom my reading of the code that isn't actually possible: pre-2.0.0 ES parses the three settings in whatever order they appear in the map of settings, so when I use `analyzer` and `search_analyzer` in those versions it can happen that the `search_analyzer` setting gets ignored (if the parser first finds `search_analyzer` and then `analyzer`).\n\nafcedb94 (from #9371) fixes that while removing the `index_analyzer` by only applying the settings after having processed the whole map.\n\nIt would be really helpful to backport just that part to pre-2.0.0, so that migration doesn't depend on changing both the application code and the ES version at the same time.\n", "comments": [ { "body": "Closed by https://github.com/elastic/elasticsearch/pull/14060\n", "created_at": "2015-10-14T07:39:51Z" } ], "number": 14023, "title": "[pre-2.0.0] Order of analyzer/index_analyzer/search_analyzer is unpredictable" }
{ "body": "Today in the 1.x series `search_analyzer` or `index_analyzer` will be\noverwritten by `analyzer` dependent on the hash map interation order. if\nthe `analyzer` key comes last all previously set analzyers are lost.\nThis commit removes the ambigutiy such that `search_analyzer` is always used\nif specified.\n\nCloses #14023 \n\nNote: This PR is against branch `1.7` and this issue is already fixed in `2.x` and `master`\n", "number": 14060, "review_comments": [ { "body": "Strictly speaking this is different than the previous code - if none of index analyzer , search analyzer and theAnalyzer is set, we reset the indexAnalyzer and searchAnalyzer on the builder to null. We didn't use to do it...\n", "created_at": "2015-10-11T21:01:15Z" }, { "body": "wait, aren't they null already?\n", "created_at": "2015-10-12T11:06:52Z" }, { "body": "As far as I can (brief check only) they are always null, but it wasn't part of the API to change properties that are not part of the json being parsed. Not a big deal..\n", "created_at": "2015-10-12T11:34:35Z" }, { "body": "You could make it the same with an `else if` instead of `else`:\n\n```\n} else if (theAnalyzer != null) {\n builder.searchAnalyzer(theAnalyzer);\n}\n```\n", "created_at": "2015-10-12T18:04:05Z" } ], "title": "Ensure more specific analyzer is used independent of the mapping order" }
{ "commits": [ { "message": "Ensure more specific analyzer is used independent of the mapping order\n\nToday in the 1.x series `search_analyzer` or `index_analyzer` will be\noverwritten by `analyzer` dependent on the hash map interation order. if\nthe `analyzer` key comes last all previously set analzyers are lost.\nThis commit removes the ambigutiy such that `search_analyzer` is always used\nif specified.\n\nCloses #14023" } ], "files": [ { "diff": "@@ -160,6 +160,9 @@ public static void parseNumberField(NumberFieldMapper.Builder builder, String na\n }\n \n public static void parseField(AbstractFieldMapper.Builder builder, String name, Map<String, Object> fieldNode, Mapper.TypeParser.ParserContext parserContext) {\n+ NamedAnalyzer theAnalyzer = null;\n+ NamedAnalyzer searchAnalyzer = null;\n+ NamedAnalyzer indexAnalyzer = null;\n for (Map.Entry<String, Object> entry : fieldNode.entrySet()) {\n final String propName = Strings.toUnderscoreCase(entry.getKey());\n final Object propNode = entry.getValue();\n@@ -212,20 +215,19 @@ public static void parseField(AbstractFieldMapper.Builder builder, String name,\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + propNode.toString() + \"] not found for field [\" + name + \"]\");\n }\n- builder.indexAnalyzer(analyzer);\n- builder.searchAnalyzer(analyzer);\n+ theAnalyzer = analyzer;\n } else if (propName.equals(\"index_analyzer\")) {\n NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + propNode.toString() + \"] not found for field [\" + name + \"]\");\n }\n- builder.indexAnalyzer(analyzer);\n+ indexAnalyzer = analyzer;\n } else if (propName.equals(\"search_analyzer\")) {\n NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + propNode.toString() + \"] not found for field [\" + name + \"]\");\n }\n- builder.searchAnalyzer(analyzer);\n+ searchAnalyzer = analyzer;\n } else if (propName.equals(\"include_in_all\")) {\n builder.includeInAll(nodeBooleanValue(propNode));\n } else if (propName.equals(\"postings_format\")) {\n@@ -243,6 +245,16 @@ public static void parseField(AbstractFieldMapper.Builder builder, String name,\n parseCopyFields(propNode, builder);\n }\n }\n+ if (indexAnalyzer != null) {\n+ builder.indexAnalyzer(indexAnalyzer);\n+ } else if (theAnalyzer != null) {\n+ builder.indexAnalyzer(theAnalyzer);\n+ }\n+ if (searchAnalyzer != null) {\n+ builder.searchAnalyzer(searchAnalyzer);\n+ } else if (theAnalyzer != null) {\n+ builder.searchAnalyzer(theAnalyzer);\n+ }\n }\n \n public static void parseMultiField(AbstractFieldMapper.Builder builder, String name, Mapper.TypeParser.ParserContext parserContext, String propName, Object propNode) {", "filename": "src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java", "status": "modified" }, { "diff": "@@ -0,0 +1,161 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.index.IndexService;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.elasticsearch.index.mapper.core.TypeParsers;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+/**\n+ */\n+public class FieldMapperTests extends ElasticsearchSingleNodeTest {\n+\n+ public void testRandomAnalyzerOrder() {\n+ IndexService test = createIndex(\"test\");\n+ MapperService mapperService = test.mapperService();\n+ DocumentMapperParser documentMapperParser = mapperService.documentMapperParser();\n+ Mapper.TypeParser.ParserContext parserContext = documentMapperParser.parserContext();\n+ StringFieldMapper.Builder builder = new StringFieldMapper.Builder(\"test\");\n+ Map<String, Object> map = new HashMap<>();\n+ int numValues = randomIntBetween(0, 10);\n+ for (int i = 0; i < numValues; i++) {\n+ map.put(\"random_key_\" + i, \"\" + i);\n+ }\n+ final boolean hasAnalyzer = randomBoolean();\n+ if (hasAnalyzer) {\n+ map.put(\"analyzer\", \"whitespace\");\n+ }\n+ final boolean hasIndexAnalyzer = randomBoolean();\n+ if (hasIndexAnalyzer) {\n+ map.put(\"index_analyzer\", \"stop\");\n+ }\n+ final boolean hasSearchAnalyzer = randomBoolean();\n+ if (hasSearchAnalyzer) {\n+ map.put(\"search_analyzer\", \"simple\");\n+ }\n+\n+ TypeParsers.parseField(builder, \"Test\", map, parserContext);\n+ StringFieldMapper build = builder.build(new Mapper.BuilderContext(ImmutableSettings.EMPTY, new ContentPath()));\n+ NamedAnalyzer index = (NamedAnalyzer) build.indexAnalyzer();\n+ NamedAnalyzer search = (NamedAnalyzer) build.searchAnalyzer();\n+ if (hasSearchAnalyzer) {\n+ assertEquals(search.name(), \"simple\");\n+ }\n+ if (hasIndexAnalyzer) {\n+ assertEquals(index.name(), \"stop\");\n+ }\n+ if (hasAnalyzer && hasIndexAnalyzer == false) {\n+ assertEquals(index.name(), \"whitespace\");\n+ }\n+ if (hasAnalyzer && hasSearchAnalyzer == false) {\n+ assertEquals(search.name(), \"whitespace\");\n+ }\n+ }\n+\n+ public void testAnalyzerOrderAllSet() {\n+ IndexService test = createIndex(\"test\");\n+ MapperService mapperService = test.mapperService();\n+ DocumentMapperParser documentMapperParser = mapperService.documentMapperParser();\n+ Mapper.TypeParser.ParserContext parserContext = documentMapperParser.parserContext();\n+ StringFieldMapper.Builder builder = new StringFieldMapper.Builder(\"test\");\n+ Map<String, Object> map = new HashMap<>();\n+ map.put(\"analyzer\", \"whitespace\");\n+ map.put(\"index_analyzer\", \"stop\");\n+ map.put(\"search_analyzer\", \"simple\");\n+\n+ TypeParsers.parseField(builder, \"Test\", map, parserContext);\n+ StringFieldMapper build = builder.build(new Mapper.BuilderContext(ImmutableSettings.EMPTY, new ContentPath()));\n+ 
NamedAnalyzer index = (NamedAnalyzer) build.indexAnalyzer();\n+ NamedAnalyzer search = (NamedAnalyzer) build.searchAnalyzer();\n+ assertEquals(search.name(), \"simple\");\n+ assertEquals(index.name(), \"stop\");\n+ }\n+\n+ public void testAnalyzerOrderUseDefaultForIndex() {\n+ IndexService test = createIndex(\"test\");\n+ MapperService mapperService = test.mapperService();\n+ DocumentMapperParser documentMapperParser = mapperService.documentMapperParser();\n+ Mapper.TypeParser.ParserContext parserContext = documentMapperParser.parserContext();\n+ StringFieldMapper.Builder builder = new StringFieldMapper.Builder(\"test\");\n+ Map<String, Object> map = new HashMap<>();\n+ int numValues = randomIntBetween(0, 10);\n+ for (int i = 0; i < numValues; i++) {\n+ map.put(\"random_key_\" + i, \"\" + i);\n+ }\n+ map.put(\"analyzer\", \"whitespace\");\n+ map.put(\"search_analyzer\", \"simple\");\n+\n+ TypeParsers.parseField(builder, \"Test\", map, parserContext);\n+ StringFieldMapper build = builder.build(new Mapper.BuilderContext(ImmutableSettings.EMPTY, new ContentPath()));\n+ NamedAnalyzer index = (NamedAnalyzer) build.indexAnalyzer();\n+ NamedAnalyzer search = (NamedAnalyzer) build.searchAnalyzer();\n+ assertEquals(search.name(), \"simple\");\n+ assertEquals(index.name(), \"whitespace\");\n+ }\n+\n+ public void testAnalyzerOrderUseDefaultForSearch() {\n+ IndexService test = createIndex(\"test\");\n+ MapperService mapperService = test.mapperService();\n+ DocumentMapperParser documentMapperParser = mapperService.documentMapperParser();\n+ Mapper.TypeParser.ParserContext parserContext = documentMapperParser.parserContext();\n+ StringFieldMapper.Builder builder = new StringFieldMapper.Builder(\"test\");\n+ Map<String, Object> map = new HashMap<>();\n+ int numValues = randomIntBetween(0, 10);\n+ for (int i = 0; i < numValues; i++) {\n+ map.put(\"random_key_\" + i, \"\" + i);\n+ }\n+ map.put(\"analyzer\", \"whitespace\");\n+ map.put(\"index_analyzer\", \"simple\");\n+\n+ TypeParsers.parseField(builder, \"Test\", map, parserContext);\n+ StringFieldMapper build = builder.build(new Mapper.BuilderContext(ImmutableSettings.EMPTY, new ContentPath()));\n+ NamedAnalyzer index = (NamedAnalyzer) build.indexAnalyzer();\n+ NamedAnalyzer search = (NamedAnalyzer) build.searchAnalyzer();\n+ assertEquals(search.name(), \"whitespace\");\n+ assertEquals(index.name(), \"simple\");\n+ }\n+\n+ public void testAnalyzerOrder() {\n+ IndexService test = createIndex(\"test\");\n+ MapperService mapperService = test.mapperService();\n+ DocumentMapperParser documentMapperParser = mapperService.documentMapperParser();\n+ Mapper.TypeParser.ParserContext parserContext = documentMapperParser.parserContext();\n+ StringFieldMapper.Builder builder = new StringFieldMapper.Builder(\"test\");\n+ Map<String, Object> map = new HashMap<>();\n+ int numValues = randomIntBetween(0, 10);\n+ for (int i = 0; i < numValues; i++) {\n+ map.put(\"random_key_\" + i, \"\" + i);\n+ }\n+ map.put(\"analyzer\", \"whitespace\");\n+\n+ TypeParsers.parseField(builder, \"Test\", map, parserContext);\n+ StringFieldMapper build = builder.build(new Mapper.BuilderContext(ImmutableSettings.EMPTY, new ContentPath()));\n+ NamedAnalyzer index = (NamedAnalyzer) build.indexAnalyzer();\n+ NamedAnalyzer search = (NamedAnalyzer) build.searchAnalyzer();\n+ assertEquals(search.name(), \"whitespace\");\n+ assertEquals(index.name(), \"whitespace\");\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/mapper/FieldMapperTests.java", "status": "added" } ] }
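The essence of the analyzer fix is to stop applying settings in map-iteration order and instead resolve the three keys after the whole field mapping has been read, with the specific setting taking precedence over the generic `analyzer`. A stripped-down sketch of that resolution rule, using plain strings and hypothetical names (`AnalyzerResolution`, `fieldSettings`) in place of the real `NamedAnalyzer` plumbing:

```java
import java.util.Map;

// Order-independent resolution: the specific settings win, and "analyzer"
// only fills in whichever side was not set explicitly.
final class AnalyzerResolution {
    final String indexAnalyzer;
    final String searchAnalyzer;

    AnalyzerResolution(Map<String, String> fieldSettings) {
        String analyzer = fieldSettings.get("analyzer");
        String index = fieldSettings.get("index_analyzer");
        String search = fieldSettings.get("search_analyzer");
        this.indexAnalyzer = index != null ? index : analyzer;
        this.searchAnalyzer = search != null ? search : analyzer;
    }
}
```

With this rule, a mapping that specifies both `analyzer` and `search_analyzer` always resolves to the same pair of analyzers regardless of the order in which the JSON keys were parsed.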
{ "body": "To reproduce - create several snapshots, truncate one of the snapshot files and try getting the list of snapshots by running `curl -XGET localhost:9200/_snapshot/my_repo/_all`\n", "comments": [], "number": 13887, "title": "Single corrupted snapshot file shouldn't prevent listing all other snapshot in repository" }
{ "body": "Single corrupted snapshot file shouldn't prevent listing all other\nsnapshot in repository.\n\ncloses #13887\n", "number": 14059, "review_comments": [], "title": "Catch exception when reading corrupted snapshot." }
{ "commits": [ { "message": "Catch exception when reading corrupted snapshot.\n\nSingle corrupted snapshot file shouldn't prevent listing all other\nsnapshot in repository.\n\ncloses #13887" } ], "files": [ { "diff": "@@ -49,6 +49,7 @@\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n+import org.elasticsearch.common.compress.NotXContentException;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -61,7 +62,9 @@\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.io.FileNotFoundException;\n import java.io.IOException;\n+import java.nio.file.NoSuchFileException;\n import java.util.ArrayList;\n import java.util.Arrays;\n import java.util.Collections;\n@@ -150,8 +153,14 @@ public List<Snapshot> snapshots(String repositoryName) {\n Repository repository = repositoriesService.repository(repositoryName);\n List<SnapshotId> snapshotIds = repository.snapshots();\n for (SnapshotId snapshotId : snapshotIds) {\n- snapshotSet.add(repository.readSnapshot(snapshotId));\n+ try {\n+ snapshotSet.add(repository.readSnapshot(snapshotId));\n+ } catch (Exception ex) {\n+ logger.warn(\"failed to get snapshot : \" + snapshotId, ex);\n+ snapshotSet.add(new Snapshot(snapshotId.getSnapshot(), new ArrayList(), 0, ex.getMessage(), 0, 0, new ArrayList()));\n+ }\n }\n+\n ArrayList<Snapshot> snapshotList = new ArrayList<>(snapshotSet);\n CollectionUtil.timSort(snapshotList);\n return Collections.unmodifiableList(snapshotList);", "filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;\n import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotIndexShardStage;\n@@ -53,6 +54,7 @@\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n@@ -77,6 +79,7 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n+import static org.elasticsearch.client.Requests.getSnapshotsRequest;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -2000,4 +2003,46 @@ public void snapshotNameTest() throws Exception {\n assertThat(ex.getMessage(), containsString(\"Invalid snapshot name\"));\n }\n }\n+\n+ public void testListCorruptedSnapshot() throws Exception 
{\n+ Client client = client();\n+ Path repo = randomRepoPath();\n+ logger.info(\"--> creating repository at \" + repo.toAbsolutePath());\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(Settings.settingsBuilder()\n+ .put(\"location\", repo)\n+ .put(\"chunk_size\", randomIntBetween(100, 1000), ByteSizeUnit.BYTES)));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test-idx-1\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-2\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-3\", \"doc\").setSource(\"foo\", \"bar\"));\n+\n+ logger.info(\"--> creating 2 snapshots\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> truncate snapshot file to make it unreadable\");\n+ Path snapshotPath = repo.resolve(\"snap-test-snap-2.dat\");\n+ try(SeekableByteChannel outChan = Files.newByteChannel(snapshotPath, StandardOpenOption.WRITE)) {\n+ outChan.truncate(randomInt(10));\n+ }\n+\n+ logger.info(\"--> get snapshots request should return both snapshots\");\n+ List<SnapshotInfo> snapshotInfos = client.admin().cluster().prepareGetSnapshots(\"test-repo\").get().getSnapshots();\n+\n+ assertThat(snapshotInfos.size(), equalTo(2));\n+ assertThat(snapshotInfos.get(0).state(), equalTo(SnapshotState.FAILED));\n+ assertThat(snapshotInfos.get(0).name(), equalTo(\"test-snap-2\"));\n+ assertThat(snapshotInfos.get(1).state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(snapshotInfos.get(1).name(), equalTo(\"test-snap-1\"));\n+ }\n }\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java", "status": "modified" } ] }
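The snapshot-listing change follows a simple per-item error-handling shape: read each snapshot inside its own try/catch, and substitute a failed placeholder entry instead of letting one corrupted file abort the whole listing. A generic sketch of that shape, with hypothetical helper names and no Elasticsearch types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// One unreadable entry becomes a failed placeholder; the rest of the listing
// is still returned to the caller.
final class SnapshotListing {
    static <ID, INFO> List<INFO> list(List<ID> ids,
                                      Function<ID, INFO> reader,
                                      Function<ID, INFO> failedPlaceholder) {
        List<INFO> result = new ArrayList<>();
        for (ID id : ids) {
            try {
                result.add(reader.apply(id));
            } catch (Exception e) {
                // log and keep going; every snapshot id still appears in the result
                result.add(failedPlaceholder.apply(id));
            }
        }
        return result;
    }
}
```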
{ "body": "If I create an index by indexing a document which contains a reserved field (eg `_source`, `_id`) then I get an exception. However, indexing the same document again works, eg:\n\n```\nDELETE t\n\nPUT t/t/1\n{\n \"_id\": 12345\n}\n```\n\nreturns:\n\n```\n {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Mapper for [_id] conflicts with existing mapping in other types:\\n[mapper [_id] cannot be changed from type [_id] to [long]]\"\n }\n```\n\nBut indexing into an existing index works:\n\n```\nPUT t/t/1\n{\n \"_id\": 12345\n}\n```\n\nThis means that the following bulk request would throw an exception for the first document, but accept any following documents:\n\n```\nDELETE t\nPOST t/t/_bulk\n{\"create\": {}}\n{\"_id\":12345}\n{\"create\": {}}\n{\"_id\":12345}\n```\n\nRelated to #10456\n", "comments": [ { "body": "Is it ever legal to pass a metadata field (_id,_version, _ttl etc) in the body of a doc? \n", "created_at": "2015-10-07T14:10:40Z" }, { "body": "No, not any more. \n", "created_at": "2015-10-07T16:38:42Z" }, { "body": "OK. Trying out a patch that checks MapperService.isMetadataField() for legal fieldnames in the doc body. Anything else we need to tighten up in naming?\n", "created_at": "2015-10-07T16:45:32Z" }, { "body": "I added a simple check in DocumentParser:\n\n```\nif (MapperService.isMetadataField(currentFieldName)) {\n throw new MapperParsingException(\"Reserved field name [\" + currentFieldName + \"] passed in document body\");\n}\n```\n\nThe tests that failed were mainly \"testIncludeInObjectBackcompat\".\n- org.elasticsearch.index.mapper.ttl.TTLMappingTests.testIncludeInObjectBackcompat\n- org.elasticsearch.index.mapper.parent.ParentMappingTests.testParentNotSet\n- org.elasticsearch.index.mapper.parent.ParentMappingTests.testParentSetInDocBackcompat\n- org.elasticsearch.index.mapper.all.SimpleAllMapperTests.testIncludeInObjectBackcompat\n- org.elasticsearch.index.mapper.routing.RoutingTypeMapperTests.testIncludeInObjectBackcompat\n- org.elasticsearch.index.mapper.timestamp.TimestampMappingTests.testIncludeInObjectBackcompat\n- org.elasticsearch.index.mapper.id.IdMappingTests.testIncludeInObjectBackcompat\n\nThey look to check metadata-fields-in-the-body are safely ignored i.e have a comment like this:\n\n```\n // _routing in a document never worked, so backcompat is ignoring the field\n```\n\nSo do we break backcompat and now no longer tolerate these metadata fields by throwing exceptions?\n", "created_at": "2015-10-07T17:04:23Z" }, { "body": "@markharwood No, we should not break backcompat. Here is the original PR that was supposed to block this: #11074. Prior to that PR, each meta field could override `includeInObject()` to return `true`, which would cause the mapper for that meta field to be added to the rest of the mappers for document types. Thus when encountering the field while parsing the document, it would find the metadata mapper, and parse. There was also logic inside the metadata mapper that allowed this parsing.\n\nFor backcompat, I kept the parsing logic in metadata mappers that supported it before. This can be removed in master, as well as the backcompat check to add select metadata fields to the regular mappers. What must be happening here is the metadata mappers are still getting added to the regular mappers map. This shouldn't be happening on a new index. 
I'm still investigating why that is happening...\n", "created_at": "2015-10-07T17:42:02Z" }, { "body": "One thing I notice is we should have an extra check similar to what you propose (but the impl of isMetadataField needs to be updated, because the list of metadata fields should no longer be static, because they can be added by plugins, as _size is now a plugin). So we should add your check, but guard with a backcompat check.\n", "created_at": "2015-10-07T17:46:28Z" }, { "body": "This will block the action earlier, since right now, from the \"good\" case in Clint's example, you can see the field was actually parsed and it is trying to add the field to the mappings. This is too late.\n", "created_at": "2015-10-07T17:47:27Z" }, { "body": "Note that fixing `isMetadataField` will require a little plumbing. This should not be a static method, which means passing down the metadata fields list when creating `GetResult`, and then doing the check before creating `GetField` (so get field should have a flag, instead of the current `isMetadataField()` which calls the static `MapperService` method.\n", "created_at": "2015-10-07T17:52:39Z" }, { "body": "I have a change I'm testing that is minimally invasive for backport to 2.0. I will make a followup to cleanup `isMetadataField` to work with metadata fields added by plugins, but that requires a larger change to how metadata fields are added.\n", "created_at": "2015-10-07T19:32:40Z" }, { "body": "Closed by #14003\n", "created_at": "2015-10-08T09:41:50Z" } ], "number": 13740, "title": "Indexing reserved fields inconsistent" }
{ "body": "We previously removed the ability to specify metadata fields inside\ndocuments in #11074, but the backcompat left leniency that allowed this\nto still occur. This change locks down parsing so any metadata field\nfound while parsing a document results in an exception. This only\naffects 2.0+ indexes; backcompat is maintained.\n\ncloses #13740\ncloses #3517\n", "number": 14003, "review_comments": [], "title": "Mappings: Enforce metadata fields are not passed in documents" }
{ "commits": [ { "message": "Mappings: Enforce metadata fields are not passed in documents\n\nWe previously removed the ability to specify metadata fields inside\ndocuments in #11074, but the backcompat left leniency that allowed this\nto still occur. This change locks down parsing so any metadata field\nfound while parsing a document results in an exception. This only\naffects 2.0+ indexes; backcompat is maintained.\n\ncloses #13740" } ], "files": [ { "diff": "@@ -122,7 +122,7 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n // entire type is disabled\n parser.skipChildren();\n } else if (emptyDoc == false) {\n- Mapper update = parseObject(context, mapping.root);\n+ Mapper update = parseObject(context, mapping.root, true);\n if (update != null) {\n context.addDynamicMappingsUpdate(update);\n }\n@@ -194,14 +194,18 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n return doc;\n }\n \n- static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper) throws IOException {\n+ static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper, boolean atRoot) throws IOException {\n if (mapper.isEnabled() == false) {\n context.parser().skipChildren();\n return null;\n }\n XContentParser parser = context.parser();\n \n String currentFieldName = parser.currentName();\n+ if (atRoot && MapperService.isMetadataField(currentFieldName) &&\n+ Version.indexCreated(context.indexSettings()).onOrAfter(Version.V_2_0_0_beta1)) {\n+ throw new MapperParsingException(\"Field [\" + currentFieldName + \"] is a metadata field and cannot be added inside a document. Use the index API request parameters.\");\n+ }\n XContentParser.Token token = parser.currentToken();\n if (token == XContentParser.Token.VALUE_NULL) {\n // the object is null (\"obj1\" : null), simply bail\n@@ -302,7 +306,7 @@ static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper) throw\n \n private static Mapper parseObjectOrField(ParseContext context, Mapper mapper) throws IOException {\n if (mapper instanceof ObjectMapper) {\n- return parseObject(context, (ObjectMapper) mapper);\n+ return parseObject(context, (ObjectMapper) mapper, false);\n } else {\n FieldMapper fieldMapper = (FieldMapper)mapper;\n Mapper update = fieldMapper.parse(context);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -197,7 +197,7 @@ private Mapper parse(DocumentMapper mapper, DocumentMapperParser parser, XConten\n ctx.reset(XContentHelper.createParser(source.source()), new ParseContext.Document(), source);\n assertEquals(XContentParser.Token.START_OBJECT, ctx.parser().nextToken());\n ctx.parser().nextToken();\n- return DocumentParser.parseObject(ctx, mapper.root());\n+ return DocumentParser.parseObject(ctx, mapper.root(), true);\n }\n \n public void testDynamicMappingsNotNeeded() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.mapper.ParsedDocument;\n+import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n@@ -453,4 +454,17 @@ public void 
testIncludeInObjectBackcompat() throws Exception {\n // the backcompat behavior is actually ignoring directly specifying _all\n assertFalse(field.getAllEntries().fields().iterator().hasNext());\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_all\", \"foo\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_all] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/all/SimpleAllMapperTests.java", "status": "modified" }, { "diff": "@@ -114,4 +114,17 @@ public void testIncludeInObjectBackcompat() throws Exception {\n // _id is not indexed so we need to check _uid\n assertEquals(Uid.createUid(\"type\", \"1\"), doc.rootDoc().get(UidFieldMapper.NAME));\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n+ .startObject().field(\"_id\", \"1\").endObject().bytes()).type(\"type\"));\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_id] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/id/IdMappingTests.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.mapper.Uid;\n@@ -32,21 +33,18 @@\n \n public class ParentMappingTests extends ESSingleNodeTestCase {\n \n- public void testParentNotSet() throws Exception {\n+ public void testParentSetInDocNotAllowed() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .endObject().endObject().string();\n DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n- ParsedDocument doc = docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n- .startObject()\n- .field(\"_parent\", \"1122\")\n- .field(\"x_field\", \"x_value\")\n- .endObject()\n- .bytes()).type(\"type\").id(\"1\"));\n-\n- // no _parent mapping, dynamically used as a string field\n- assertNull(doc.parent());\n- assertNotNull(doc.rootDoc().get(\"_parent\"));\n+ try {\n+ docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n+ .startObject().field(\"_parent\", \"1122\").endObject().bytes()).type(\"type\").id(\"1\"));\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException 
e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_parent] is a metadata field and cannot be added inside a document\"));\n+ }\n }\n \n public void testParentSetInDocBackcompat() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/parent/ParentMappingTests.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n@@ -113,7 +114,7 @@ public void testIncludeInObjectBackcompat() throws Exception {\n Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build();\n DocumentMapper docMapper = createIndex(\"test\", settings).mapperService().documentMapperParser().parse(mapping);\n \n- XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"_timestamp\", 2000000).endObject();\n+ XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"_routing\", \"foo\").endObject();\n MappingMetaData mappingMetaData = new MappingMetaData(docMapper);\n IndexRequest request = new IndexRequest(\"test\", \"type\", \"1\").source(doc);\n request.process(MetaData.builder().build(), mappingMetaData, true, \"test\");\n@@ -122,4 +123,17 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(request.routing());\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_routing\"));\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_routing\", \"foo\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_routing] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java", "status": "modified" }, { "diff": "@@ -769,6 +769,21 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_timestamp\"));\n }\n \n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true).field(\"default\", \"1970\").field(\"format\", \"YYYY\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_timestamp\", 2000000).endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_timestamp] is a 
metadata field and cannot be added inside a document\"));\n+ }\n+ }\n+\n public void testThatEpochCanBeIgnoredWithCustomFormat() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"_timestamp\").field(\"enabled\", true).field(\"format\", \"yyyyMMddHH\").endObject()", "filename": "core/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" }, { "diff": "@@ -310,6 +310,21 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_ttl\"));\n }\n \n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_ttl\").field(\"enabled\", true).endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_ttl\", \"2d\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_ttl] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n+\n private org.elasticsearch.common.xcontent.XContentBuilder getMappingWithTtlEnabled() throws IOException {\n return getMappingWithTtlEnabled(null);\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java", "status": "modified" } ] }
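The core of the metadata-field enforcement is a root-level name check gated on the index creation version. As a rough sketch only — the real code lives in `DocumentParser`, consults the registered metadata mappers via `MapperService.isMetadataField`, and throws `MapperParsingException` — the guard looks roughly like this, with a hypothetical class name and an illustrative (not exhaustive) field list:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class MetadataFieldGuard {
    // subset listed for illustration; the real check covers all registered
    // metadata mappers, including ones contributed by plugins
    private static final Set<String> METADATA_FIELDS = new HashSet<>(Arrays.asList(
            "_id", "_routing", "_parent", "_timestamp", "_ttl", "_all"));

    static void checkRootField(String fieldName, boolean indexCreatedOnOrAfter2x) {
        if (indexCreatedOnOrAfter2x && METADATA_FIELDS.contains(fieldName)) {
            throw new IllegalArgumentException("Field [" + fieldName
                    + "] is a metadata field and cannot be added inside a document. "
                    + "Use the index API request parameters.");
        }
    }
}
```

Pre-2.0 indices skip the check, which is how the back-compat tests for `_routing`, `_timestamp`, `_ttl` and friends continue to pass.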
{ "body": "## Expected Behavior\n\nNormally, if you try to index a document without an ID in the URI (e.g. a POST) but with an _id field in the document (and no explicit _id path mapping), it throws an error because the autogenerated ID does not match the provided _id field:\n\n``` bash\ncurl -XDELETE localhost:9200/testindex\ncurl -XPUT localhost:9200/testindex\ncurl -XPOST localhost:9200/testindex/testtype?pretty -d '{\"_id\":\"polyfractal\",\"key\":\"value\"}}}'\n```\n\n``` json\n{\n \"error\" : \"MapperParsingException[failed to parse [_id]]; nested: MapperParsingException[Provided id [O-kIgieVTRG9DpxHML7LkA] does not match the content one [polyfractal]]; \",\n \"status\" : 400\n}\n```\n## Broken Behavior\n\nHowever, if the _id field happens to be an object, Elasticsearch happily indexes the document:\n\n``` bash\ncurl -XDELETE localhost:9200/testindex\ncurl -XPUT localhost:9200/testindex\ncurl -XPOST \"localhost:9200/testindex/testtype\" -d '{\"key\":\"value\"}'\ncurl -XPOST \"localhost:9200/testindex/testtype\" -d '{\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}'\n```\n\n``` json\n{\"ok\":true,\"_index\":\"testindex\",\"_type\":\"testtype\",\"_id\":\"b2xEPk5tTfC-RLsCb1ZapA\",\"_version\":1}\n{\"ok\":true,\"_index\":\"testindex\",\"_type\":\"testtype\",\"_id\":\"BsTbRqaeTrKLIe0JoeHsWw\",\"_version\":1}\n```\n\nYou can GET it:\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/BsTbRqaeTrKLIe0JoeHsWw?pretty\n```\n\n``` json\n{\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"BsTbRqaeTrKLIe0JoeHsWw\",\n \"_version\" : 1,\n \"exists\" : true, \"_source\" : {\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}\n}\n```\n\nIt shows up with a match_all query:\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"match_all\":{}}}'\n```\n\n``` json\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"BsTbRqaeTrKLIe0JoeHsWw\",\n \"_score\" : 1.0, \"_source\" : {\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}\n }, {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"b2xEPk5tTfC-RLsCb1ZapA\",\n \"_score\" : 1.0, \"_source\" : {\"key\":\"value\"}\n } ]\n }\n}\n```\n\nBut doesn't show up when you search for exact values (or Match or any other search):\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"term\":{\"key\":\"value\"}}}'\n```\n\n``` json\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 0.30685282,\n \"hits\" : [ {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"b2xEPk5tTfC-RLsCb1ZapA\",\n \"_score\" : 0.30685282, \"_source\" : {\"key\":\"value\"}\n } ]\n }\n}\n```\n\nIf you ask ES why it doesn't show up, it says there are no matching terms:\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/BsTbRqaeTrKLIe0JoeHsWw/_explain?pretty -d '{\"query\":{\"term\":{\"key\":\"value\"}}}'\n```\n\n``` json\n{\n \"ok\" : true,\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"BsTbRqaeTrKLIe0JoeHsWw\",\n \"matched\" : false,\n \"explanation\" : {\n \"value\" : 0.0,\n \"description\" : \"no matching term\"\n }\n}\n```\n\nAnd finally, as a fun twist, you can set an explicit 
mapping to look inside the _id object. This works with regard to the ID (it extracts the appropriate ID), is GETable, match_all, etc. Search is still broken.\n\n``` bash\ncurl -XDELETE localhost:9200/testindex\ncurl -XPUT localhost:9200/testindex -d '{\n \"mappings\":{\n \"testtype\":{\n \"_id\" : {\n \"path\" : \"_id.name\"\n },\n \"properties\":{\n \"_id\":{\n \"type\":\"object\",\n \"properties\":{\n \"name\":{\n \"type\":\"string\"\n }\n }\n }\n }\n }\n }\n}'\n\ncurl -XPOST \"localhost:9200/testindex/testtype\" -d '{\"key\":\"value\"}'\ncurl -XPOST \"localhost:9200/testindex/testtype\" -d '{\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}'\ncurl -XGET localhost:9200/testindex/testtype/polyfractal?pretty\n```\n\n``` json\n{\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"polyfractal\",\n \"_version\" : 1,\n \"exists\" : true, \"_source\" : {\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}\n}\n```\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"match_all\":{}}}'\n```\n\n``` json\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"wsT9vaevTCW5EuKyr7nmUw\",\n \"_score\" : 1.0, \"_source\" : {\"key\":\"value\"}\n }, {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"polyfractal\",\n \"_score\" : 1.0, \"_source\" : {\"_id\":{\"name\":\"polyfractal\"},\"key\":\"value\"}}}\n } ]\n }\n}\n```\n\n``` bash\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"term\":{\"key\":\"value\"}}}'\n```\n\n``` json\n{\n \"took\" : 2,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 0.30685282,\n \"hits\" : [ {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"wsT9vaevTCW5EuKyr7nmUw\",\n \"_score\" : 0.30685282, \"_source\" : {\"key\":\"value\"}\n } ]\n }\n}\n```\n## Reference\n\nThis was surfaced by [Scott on the mailing list](https://groups.google.com/d/msg/elasticsearch/0at1uZBvN3k/xIatIxwVziwJ).\n", "comments": [ { "body": "It's a little bit more fun than that, even: you actually get _partial_ indexing!\n\n```\ncurl -XDELETE localhost:9200/testindex\ncurl -XPUT localhost:9200/testindex\ncurl -XPOST localhost:9200/testindex/testtype -d '{\"leftkey\":\"value\",\"_id\":{\"name\":\"polyfractal\"},\"rightkey\":\"value\"}}}'\ncurl -XPOST localhost:9200/_flush\n```\n\nNow search on the field _before_ the _id:\n\n```\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"term\":{\"leftkey\":\"value\"}}}'\n{\n \"took\" : 3,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 1,\n \"max_score\" : 0.30685282,\n \"hits\" : [ {\n \"_index\" : \"testindex\",\n \"_type\" : \"testtype\",\n \"_id\" : \"PalIN5CpSPKkGbhs4qNqaw\",\n \"_score\" : 0.30685282, \"_source\" : {\"leftkey\":\"value\",\"_id\":{\"name\":\"polyfractal\"},\"rightkey\":\"value\"}}}\n } ]\n }\n}\n```\n\nThere you go.\nBut search on the field _after_ the _id:\n\n```\ncurl -XGET localhost:9200/testindex/testtype/_search?pretty -d '{\"query\":{\"term\":{\"rightkey\":\"value\"}}}'\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" 
: 0\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n}\n```\n\nAnd you get nothing.\n", "created_at": "2015-02-12T23:41:53Z" }, { "body": "I am affected by this behavior too, monogo output the field like this \n\n```\n{ \"_id\":{\"$oid\":\"54d9e3bf30320c3335017e69\"}, \"@timestamp\":\"...\"}\n```\n\nactually I did not care about the \"_id\" field, but I care about the \"@timestamp\" field which is _silently_ not indexed. Here an example that shows the behavior:\nhttps://gist.github.com/andreaskern/01d1d292f7f146186ee5\n", "created_at": "2015-02-13T07:16:12Z" }, { "body": "In 2.0, the timestamp field would now be indexed correctly, as would `_id.$oid`. Wondering if we should allow users to index `_id` field inside the body at all? /cc @rjernst \n", "created_at": "2015-05-29T17:04:05Z" }, { "body": "The ability to specify _id within a document has already been removed for 2.0+ indexes. \n", "created_at": "2015-05-29T17:36:13Z" }, { "body": "@rjernst you removed the ability to specify the main doc _id in the body, but if the body contains an `_id` field then it creates a field called `_id` in the mapping, which can't be queried. \n\nWhat I'm asking is: should we just ignore the fact that this field is not accessible (as we do in master today) or should we actually throw an exception? I'm leaning towards ignoring, as users don't always have control over the docs they receive.\n", "created_at": "2015-05-29T18:47:59Z" }, { "body": "I would be in favor of throwing an exception. This would only be for 2.0+ indexes, and it is really just field name validation (disallowing fields colliding with meta fields). The mechanism would be the same, a user would not be able to explicitly add a field `_id` in the properties for a document type.\n", "created_at": "2015-05-31T11:42:49Z" }, { "body": "@rjernst it's a tricky one. eg mongo adds `{ \"_id\": { \"$oid\": \"....\" }}`, so actually the `_id.$oid` field IS queryable... should this still throw an exception?\n", "created_at": "2015-05-31T11:44:04Z" }, { "body": "IMO, yes.\n", "created_at": "2015-05-31T11:48:27Z" }, { "body": "With #8871, I don't think that would work, because _id is both a field mapper (the real meta field), and an object mapper.\n", "created_at": "2015-05-31T11:50:17Z" }, { "body": "@rjernst yep, makes sense\n", "created_at": "2015-05-31T12:06:17Z" }, { "body": "@rjernst this still works, even with #8871 merged in\n", "created_at": "2015-06-24T17:41:42Z" }, { "body": "Closed by #14003\n", "created_at": "2015-10-14T13:21:26Z" } ], "number": 3517, "title": "If _id field is an object, no error is thrown but doc is \"unsearchable\"" }
{ "body": "We previously removed the ability to specify metadata fields inside\ndocuments in #11074, but the backcompat left leniency that allowed this\nto still occur. This change locks down parsing so any metadata field\nfound while parsing a document results in an exception. This only\naffects 2.0+ indexes; backcompat is maintained.\n\ncloses #13740\ncloses #3517\n", "number": 14003, "review_comments": [], "title": "Mappings: Enforce metadata fields are not passed in documents" }
{ "commits": [ { "message": "Mappings: Enforce metadata fields are not passed in documents\n\nWe previously removed the ability to specify metadata fields inside\ndocuments in #11074, but the backcompat left leniency that allowed this\nto still occur. This change locks down parsing so any metadata field\nfound while parsing a document results in an exception. This only\naffects 2.0+ indexes; backcompat is maintained.\n\ncloses #13740" } ], "files": [ { "diff": "@@ -122,7 +122,7 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n // entire type is disabled\n parser.skipChildren();\n } else if (emptyDoc == false) {\n- Mapper update = parseObject(context, mapping.root);\n+ Mapper update = parseObject(context, mapping.root, true);\n if (update != null) {\n context.addDynamicMappingsUpdate(update);\n }\n@@ -194,14 +194,18 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n return doc;\n }\n \n- static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper) throws IOException {\n+ static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper, boolean atRoot) throws IOException {\n if (mapper.isEnabled() == false) {\n context.parser().skipChildren();\n return null;\n }\n XContentParser parser = context.parser();\n \n String currentFieldName = parser.currentName();\n+ if (atRoot && MapperService.isMetadataField(currentFieldName) &&\n+ Version.indexCreated(context.indexSettings()).onOrAfter(Version.V_2_0_0_beta1)) {\n+ throw new MapperParsingException(\"Field [\" + currentFieldName + \"] is a metadata field and cannot be added inside a document. Use the index API request parameters.\");\n+ }\n XContentParser.Token token = parser.currentToken();\n if (token == XContentParser.Token.VALUE_NULL) {\n // the object is null (\"obj1\" : null), simply bail\n@@ -302,7 +306,7 @@ static ObjectMapper parseObject(ParseContext context, ObjectMapper mapper) throw\n \n private static Mapper parseObjectOrField(ParseContext context, Mapper mapper) throws IOException {\n if (mapper instanceof ObjectMapper) {\n- return parseObject(context, (ObjectMapper) mapper);\n+ return parseObject(context, (ObjectMapper) mapper, false);\n } else {\n FieldMapper fieldMapper = (FieldMapper)mapper;\n Mapper update = fieldMapper.parse(context);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -197,7 +197,7 @@ private Mapper parse(DocumentMapper mapper, DocumentMapperParser parser, XConten\n ctx.reset(XContentHelper.createParser(source.source()), new ParseContext.Document(), source);\n assertEquals(XContentParser.Token.START_OBJECT, ctx.parser().nextToken());\n ctx.parser().nextToken();\n- return DocumentParser.parseObject(ctx, mapper.root());\n+ return DocumentParser.parseObject(ctx, mapper.root(), true);\n }\n \n public void testDynamicMappingsNotNeeded() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DynamicMappingTests.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.mapper.ParsedDocument;\n+import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.mapper.internal.AllFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n@@ -453,4 +454,17 @@ public void 
testIncludeInObjectBackcompat() throws Exception {\n // the backcompat behavior is actually ignoring directly specifying _all\n assertFalse(field.getAllEntries().fields().iterator().hasNext());\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_all\", \"foo\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_all] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/all/SimpleAllMapperTests.java", "status": "modified" }, { "diff": "@@ -114,4 +114,17 @@ public void testIncludeInObjectBackcompat() throws Exception {\n // _id is not indexed so we need to check _uid\n assertEquals(Uid.createUid(\"type\", \"1\"), doc.rootDoc().get(UidFieldMapper.NAME));\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n+ .startObject().field(\"_id\", \"1\").endObject().bytes()).type(\"type\"));\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_id] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/id/IdMappingTests.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.mapper.Uid;\n@@ -32,21 +33,18 @@\n \n public class ParentMappingTests extends ESSingleNodeTestCase {\n \n- public void testParentNotSet() throws Exception {\n+ public void testParentSetInDocNotAllowed() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .endObject().endObject().string();\n DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n- ParsedDocument doc = docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n- .startObject()\n- .field(\"_parent\", \"1122\")\n- .field(\"x_field\", \"x_value\")\n- .endObject()\n- .bytes()).type(\"type\").id(\"1\"));\n-\n- // no _parent mapping, dynamically used as a string field\n- assertNull(doc.parent());\n- assertNotNull(doc.rootDoc().get(\"_parent\"));\n+ try {\n+ docMapper.parse(SourceToParse.source(XContentFactory.jsonBuilder()\n+ .startObject().field(\"_parent\", \"1122\").endObject().bytes()).type(\"type\").id(\"1\"));\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException 
e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_parent] is a metadata field and cannot be added inside a document\"));\n+ }\n }\n \n public void testParentSetInDocBackcompat() throws Exception {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/parent/ParentMappingTests.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n@@ -113,7 +114,7 @@ public void testIncludeInObjectBackcompat() throws Exception {\n Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build();\n DocumentMapper docMapper = createIndex(\"test\", settings).mapperService().documentMapperParser().parse(mapping);\n \n- XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"_timestamp\", 2000000).endObject();\n+ XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"_routing\", \"foo\").endObject();\n MappingMetaData mappingMetaData = new MappingMetaData(docMapper);\n IndexRequest request = new IndexRequest(\"test\", \"type\", \"1\").source(doc);\n request.process(MetaData.builder().build(), mappingMetaData, true, \"test\");\n@@ -122,4 +123,17 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(request.routing());\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_routing\"));\n }\n+\n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_routing\", \"foo\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_routing] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/routing/RoutingTypeMapperTests.java", "status": "modified" }, { "diff": "@@ -769,6 +769,21 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_timestamp\"));\n }\n \n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true).field(\"default\", \"1970\").field(\"format\", \"YYYY\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_timestamp\", 2000000).endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_timestamp] is a 
metadata field and cannot be added inside a document\"));\n+ }\n+ }\n+\n public void testThatEpochCanBeIgnoredWithCustomFormat() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n .startObject(\"_timestamp\").field(\"enabled\", true).field(\"format\", \"yyyyMMddHH\").endObject()", "filename": "core/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" }, { "diff": "@@ -310,6 +310,21 @@ public void testIncludeInObjectBackcompat() throws Exception {\n assertNull(docMapper.parse(\"test\", \"type\", \"1\", doc.bytes()).rootDoc().get(\"_ttl\"));\n }\n \n+ public void testIncludeInObjectNotAllowed() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_ttl\").field(\"enabled\", true).endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ try {\n+ docMapper.parse(\"test\", \"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject().field(\"_ttl\", \"2d\").endObject().bytes());\n+ fail(\"Expected failure to parse metadata field\");\n+ } catch (MapperParsingException e) {\n+ assertTrue(e.getMessage(), e.getMessage().contains(\"Field [_ttl] is a metadata field and cannot be added inside a document\"));\n+ }\n+ }\n+\n private org.elasticsearch.common.xcontent.XContentBuilder getMappingWithTtlEnabled() throws IOException {\n return getMappingWithTtlEnabled(null);\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/ttl/TTLMappingTests.java", "status": "modified" } ] }
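For context on the change above: once metadata fields such as `_routing`, `_parent`, `_timestamp`, and `_ttl` are rejected inside the document body, the same values have to travel on the index request itself, as the new error message suggests. The sketch below illustrates that pattern with the 2.x Java client; the builder methods shown (`setRouting`, `setParent`, `setTimestamp`, `setTTL`) are assumed from that client API rather than taken from this PR, so treat it as a rough guide, not the PR's own code.

```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;

// Sketch only: supplying metadata through the index request instead of the
// document body, which the parser now rejects. Method names are assumed from
// the 2.x Java client API, not taken from this PR's diff.
public class IndexWithMetadata {
    public static IndexResponse index(Client client) {
        return client.prepareIndex("test", "type", "1")
                .setRouting("foo")           // was "_routing": "foo" in the document body
                .setParent("1122")           // was "_parent": "1122"
                .setTimestamp("2000000")     // was "_timestamp": 2000000
                .setTTL(172800000L)          // was "_ttl": "2d" (two days in milliseconds)
                .setSource("key", "value")   // the body now carries only regular fields
                .get();
    }
}
```

Over HTTP the same information would go in the `routing`, `parent`, `timestamp`, and `ttl` URL parameters of the index API; that too is an assumption from the 2.x documentation rather than something this diff shows.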
{ "body": "This has changed recently in master, we used to not replay translogs for shadow shards, however, I get the following exception:\n\n```\nRemoteTransportException[[Umbo][inet[/127.0.0.1:9301]][indices:monitor/stats[s]]]; nested: NotSerializableExceptionWrapper[shadow engines don't have translogs];\nCaused by: NotSerializableExceptionWrapper[shadow engines don't have translogs]\n at org.elasticsearch.index.engine.ShadowEngine.getTranslog(ShadowEngine.java:178)\n at org.elasticsearch.index.shard.IndexShard.translogStats(IndexShard.java:652)\n at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:173)\n at org.elasticsearch.action.admin.indices.stats.ShardStats.<init>(ShardStats.java:54)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:192)\n at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:56)\n at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:270)\n at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:266)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:299)\n at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nCaused by the exception thrown in `ShadowEngine` from:\n\n``` java\n@Override\npublic Translog getTranslog() {\n throw new UnsupportedOperationException(\"shadow engines don't have translogs\");\n}\n```\n", "comments": [], "number": 12730, "title": "Shadow shards try to access translog for shadow engine" }
{ "body": "ShadowEngine doesn't have a translog but instead throws an\nUOE when it's requested. ShadowIndexShard should not try to pull\nstats for the translog either and should return null instead.\n\nCloses #12730\n", "number": 14000, "review_comments": [], "title": "Don't pull translog from shadow engine" }
{ "commits": [ { "message": "Don't pull translog from shadow engine\n\nShadowEngine doesn't have a translog but instead throws an\nUOE when it's requested. ShadowIndexShard should not try to pull\nstats for the translog either and should return null instead.\n\nCloses #12730" }, { "message": "add test and fix another bug on the way" }, { "message": "test and fix TranslogStats - you wouldn't believe how hard it is to sum up two values" } ], "files": [ { "diff": "@@ -33,7 +33,6 @@\n import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeRequest;\n import org.elasticsearch.action.termvectors.TermVectorsRequest;\n import org.elasticsearch.action.termvectors.TermVectorsResponse;\n-import org.elasticsearch.bootstrap.Elasticsearch;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.ShardRouting;", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.index.merge.MergeStats;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.translog.TranslogStats;\n \n import java.io.IOException;\n \n@@ -82,4 +83,9 @@ public boolean shouldFlush() {\n public boolean allowsPrimaryPromotion() {\n return false;\n }\n+\n+ @Override\n+ public TranslogStats translogStats() {\n+ return null; // shadow engine has no translog\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/shard/ShadowIndexShard.java", "status": "modified" }, { "diff": "@@ -18,11 +18,10 @@\n */\n package org.elasticsearch.index.translog;\n \n+import org.elasticsearch.action.support.ToXContentToBytes;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n-import org.elasticsearch.common.unit.ByteSizeValue;\n-import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentBuilderString;\n \n@@ -31,17 +30,23 @@\n /**\n *\n */\n-public class TranslogStats implements ToXContent, Streamable {\n+public class TranslogStats extends ToXContentToBytes implements Streamable {\n \n- private long translogSizeInBytes = 0;\n- private int estimatedNumberOfOperations = -1;\n+ private long translogSizeInBytes;\n+ private int numberOfOperations;\n \n public TranslogStats() {\n }\n \n- public TranslogStats(int estimatedNumberOfOperations, long translogSizeInBytes) {\n+ public TranslogStats(int numberOfOperations, long translogSizeInBytes) {\n+ if (numberOfOperations < 0) {\n+ throw new IllegalArgumentException(\"numberOfOperations must be >= 0\");\n+ }\n+ if (translogSizeInBytes < 0) {\n+ throw new IllegalArgumentException(\"translogSizeInBytes must be >= 0\");\n+ }\n assert translogSizeInBytes >= 0 : \"translogSizeInBytes must be >= 0, got [\" + translogSizeInBytes + \"]\";\n- this.estimatedNumberOfOperations = estimatedNumberOfOperations;\n+ this.numberOfOperations = numberOfOperations;\n this.translogSizeInBytes = translogSizeInBytes;\n }\n \n@@ -50,22 +55,22 @@ public void add(TranslogStats translogStats) {\n return;\n }\n \n- this.estimatedNumberOfOperations += translogStats.estimatedNumberOfOperations;\n- this.translogSizeInBytes = +translogStats.translogSizeInBytes;\n+ this.numberOfOperations += translogStats.numberOfOperations;\n+ 
this.translogSizeInBytes += translogStats.translogSizeInBytes;\n }\n \n- public ByteSizeValue translogSizeInBytes() {\n- return new ByteSizeValue(translogSizeInBytes);\n+ public long getTranslogSizeInBytes() {\n+ return translogSizeInBytes;\n }\n \n public long estimatedNumberOfOperations() {\n- return estimatedNumberOfOperations;\n+ return numberOfOperations;\n }\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(Fields.TRANSLOG);\n- builder.field(Fields.OPERATIONS, estimatedNumberOfOperations);\n+ builder.field(Fields.OPERATIONS, numberOfOperations);\n builder.byteSizeField(Fields.SIZE_IN_BYTES, Fields.SIZE, translogSizeInBytes);\n builder.endObject();\n return builder;\n@@ -80,13 +85,13 @@ static final class Fields {\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n- estimatedNumberOfOperations = in.readVInt();\n+ numberOfOperations = in.readVInt();\n translogSizeInBytes = in.readVLong();\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n- out.writeVInt(estimatedNumberOfOperations);\n+ out.writeVInt(numberOfOperations);\n out.writeVLong(translogSizeInBytes);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/translog/TranslogStats.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n+import org.elasticsearch.action.admin.indices.stats.ShardStats;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n@@ -36,6 +38,7 @@\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShadowIndexShard;\n+import org.elasticsearch.index.translog.TranslogStats;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.plugins.Plugin;\n@@ -175,6 +178,7 @@ public void testIndexWithFewDocuments() throws Exception {\n Settings idxSettings = Settings.builder()\n .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 2)\n+ .put(IndexShard.INDEX_TRANSLOG_DISABLE_FLUSH, true)\n .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n@@ -188,6 +192,21 @@ public void testIndexWithFewDocuments() throws Exception {\n client().prepareIndex(IDX, \"doc\", \"1\").setSource(\"foo\", \"bar\").get();\n client().prepareIndex(IDX, \"doc\", \"2\").setSource(\"foo\", \"bar\").get();\n \n+ IndicesStatsResponse indicesStatsResponse = client().admin().indices().prepareStats(IDX).clear().setTranslog(true).get();\n+ assertEquals(2, indicesStatsResponse.getIndex(IDX).getPrimaries().getTranslog().estimatedNumberOfOperations());\n+ assertEquals(2, indicesStatsResponse.getIndex(IDX).getTotal().getTranslog().estimatedNumberOfOperations());\n+ for (IndicesService service : internalCluster().getInstances(IndicesService.class)) {\n+ IndexService indexService = service.indexService(IDX);\n+ if (indexService != null) 
{\n+ IndexShard shard = indexService.getShard(0);\n+ TranslogStats translogStats = shard.translogStats();\n+ assertTrue(translogStats != null || shard instanceof ShadowIndexShard);\n+ if (translogStats != null) {\n+ assertEquals(2, translogStats.estimatedNumberOfOperations());\n+ }\n+ }\n+ }\n+\n // Check that we can get doc 1 and 2, because we are doing realtime\n // gets and getting from the primary\n GetResponse gResp1 = client().prepareGet(IDX, \"doc\", \"1\").setRealtime(true).setFields(\"foo\").get();", "filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java", "status": "modified" }, { "diff": "@@ -965,4 +965,13 @@ public void run() {\n // (shadow engine is already shut down in the try-with-resources)\n IOUtils.close(srStore, pEngine, pStore);\n }\n+\n+ public void testNoTranslog() {\n+ try {\n+ replicaEngine.getTranslog();\n+ fail(\"shadow engine has no translog\");\n+ } catch (UnsupportedOperationException ex) {\n+ // all good\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n@@ -276,32 +277,63 @@ public void testStats() throws IOException {\n final long firstOperationPosition = translog.getFirstOperationPosition();\n TranslogStats stats = stats();\n assertThat(stats.estimatedNumberOfOperations(), equalTo(0l));\n- long lastSize = stats.translogSizeInBytes().bytes();\n+ long lastSize = stats.getTranslogSizeInBytes();\n assertThat((int) firstOperationPosition, greaterThan(CodecUtil.headerLength(TranslogWriter.TRANSLOG_CODEC)));\n assertThat(lastSize, equalTo(firstOperationPosition));\n-\n+ TranslogStats total = new TranslogStats();\n translog.add(new Translog.Index(\"test\", \"1\", new byte[]{1}));\n stats = stats();\n+ total.add(stats);\n assertThat(stats.estimatedNumberOfOperations(), equalTo(1l));\n- assertThat(stats.translogSizeInBytes().bytes(), greaterThan(lastSize));\n- lastSize = stats.translogSizeInBytes().bytes();\n+ assertThat(stats.getTranslogSizeInBytes(), greaterThan(lastSize));\n+ lastSize = stats.getTranslogSizeInBytes();\n \n translog.add(new Translog.Delete(newUid(\"2\")));\n stats = stats();\n+ total.add(stats);\n assertThat(stats.estimatedNumberOfOperations(), equalTo(2l));\n- assertThat(stats.translogSizeInBytes().bytes(), greaterThan(lastSize));\n- lastSize = stats.translogSizeInBytes().bytes();\n+ assertThat(stats.getTranslogSizeInBytes(), greaterThan(lastSize));\n+ lastSize = stats.getTranslogSizeInBytes();\n \n translog.add(new Translog.Delete(newUid(\"3\")));\n translog.prepareCommit();\n stats = stats();\n+ total.add(stats);\n assertThat(stats.estimatedNumberOfOperations(), equalTo(3l));\n- assertThat(stats.translogSizeInBytes().bytes(), greaterThan(lastSize));\n+ assertThat(stats.getTranslogSizeInBytes(), greaterThan(lastSize));\n \n translog.commit();\n stats = stats();\n+ total.add(stats);\n assertThat(stats.estimatedNumberOfOperations(), equalTo(0l));\n- assertThat(stats.translogSizeInBytes().bytes(), equalTo(firstOperationPosition));\n+ assertThat(stats.getTranslogSizeInBytes(), 
equalTo(firstOperationPosition));\n+ assertEquals(6, total.estimatedNumberOfOperations());\n+ assertEquals(431, total.getTranslogSizeInBytes());\n+\n+ BytesStreamOutput out = new BytesStreamOutput();\n+ total.writeTo(out);\n+ TranslogStats copy = new TranslogStats();\n+ copy.readFrom(StreamInput.wrap(out.bytes()));\n+\n+ assertEquals(6, copy.estimatedNumberOfOperations());\n+ assertEquals(431, copy.getTranslogSizeInBytes());\n+ assertEquals(\"\\\"translog\\\"{\\n\" +\n+ \" \\\"operations\\\" : 6,\\n\" +\n+ \" \\\"size_in_bytes\\\" : 431\\n\" +\n+ \"}\", copy.toString().trim());\n+\n+ try {\n+ new TranslogStats(1, -1);\n+ fail(\"must be positive\");\n+ } catch (IllegalArgumentException ex) {\n+ //all well\n+ }\n+ try {\n+ new TranslogStats(-1, 1);\n+ fail(\"must be positive\");\n+ } catch (IllegalArgumentException ex) {\n+ //all well\n+ }\n }\n \n @Test", "filename": "core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java", "status": "modified" } ] }
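A side note on the TranslogStats fix folded into this PR: the original `add()` wrote `this.translogSizeInBytes = +translogStats.translogSizeInBytes;`, which the compiler accepts because `+` is a unary plus, but which replaces the running total instead of accumulating it. The standalone sketch below (plain Java, not Elasticsearch code; class and field names are invented for illustration) reproduces the buggy and fixed variants side by side, along with the argument validation the PR adds.

```java
// Standalone illustration of the bug fixed in TranslogStats.add():
// "= +other" is an assignment with a unary plus and silently drops the
// running total, whereas "+= other" accumulates as intended.
public class StatsSum {
    private long sizeInBytes;
    private int operations;

    public StatsSum(int operations, long sizeInBytes) {
        if (operations < 0 || sizeInBytes < 0) {
            throw new IllegalArgumentException("stats must be >= 0"); // mirrors the added validation
        }
        this.operations = operations;
        this.sizeInBytes = sizeInBytes;
    }

    public void addBuggy(StatsSum other) {
        this.operations += other.operations;
        this.sizeInBytes = +other.sizeInBytes; // unary plus: overwrites the total
    }

    public void addFixed(StatsSum other) {
        this.operations += other.operations;
        this.sizeInBytes += other.sizeInBytes; // compound assignment: accumulates
    }

    public static void main(String[] args) {
        StatsSum buggy = new StatsSum(1, 100);
        buggy.addBuggy(new StatsSum(2, 50));
        StatsSum fixed = new StatsSum(1, 100);
        fixed.addFixed(new StatsSum(2, 50));
        System.out.println(buggy.sizeInBytes + " vs " + fixed.sizeInBytes); // prints "50 vs 150"
    }
}
```

Running `main` prints `50 vs 150`: exactly the silent under-counting that the new `assertEquals(431, total.getTranslogSizeInBytes())` assertion in the test now guards against.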
{ "body": "I don't think this is going to work:\n\n```\nNote, the scripts need to be in the classpath of elasticsearch. One simple way to do it is to create a directory under plugins (choose a descriptive name), and place the jar / classes files there. They will be automatically loaded.\n```\n\nWe should recommend something else, e.g. to put the jar in lib/, if they are supposed to be in the main classpath.\n", "comments": [ { "body": "Nice catch. I wonder if we should look by default in a specific user dir so upgrading elasticsearch would be easier?\n\nLoad scripts from plugins._scripts for example?\n", "created_at": "2015-09-26T02:51:11Z" }, { "body": "If we do that, then the fix is more complex than a doc one. I think we should do the simple doc fix for 2.0? what percentage of users are using native scripts?\n\nSeparately, I tend to agree it would be cleaner, there is something to be said for \"improving\" native script support by loading the user classes in a child loader (versus encouraging them to put them in lib/ which will be the main classpath). The downside is that if we do that, we should really add e.g. a qa scenario that tests it, since its more involved. \n\nI'm also hesitant to overhaul native script support, since scripting engine support is currently also undergoing refactoring in master, and we might want to do things completely differently there (e.g. perhaps native scripting is a plugin or something, i dont know, have not even looked at it).\n", "created_at": "2015-09-26T02:56:43Z" }, { "body": "I think we should just stick to a doc fix for now. At the moment, native scripts must now be loaded through a plugin. We can think about how we could make it easier for 2.1 or 2.2 (although I personally think a plugin is the right thing here), but that's the way it is for 2.0. \n", "created_at": "2015-09-26T03:42:48Z" }, { "body": "Agreed\n", "created_at": "2015-09-26T03:50:26Z" }, { "body": "+1. If they follow the current procedure in the native script documentation today, they will not be able to start up the ES node in 2.x (as expected). 
\n\n```\nException in thread \"main\" java.lang.IllegalStateException: Unable to initialize plugins\nLikely root cause: java.nio.file.NoSuchFileException: /ELK/elasticsearch-2.1.0-SNAPSHOT/plugins/test/plugin-descriptor.properties\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n at java.nio.file.Files.newByteChannel(Files.java:315)\n at java.nio.file.Files.newByteChannel(Files.java:361)\n at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:380)\n at java.nio.file.Files.newInputStream(Files.java:106)\n at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:86)\n at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:301)\n at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:107)\n at org.elasticsearch.node.Node.<init>(Node.java:148)\n at org.elasticsearch.node.Node.<init>(Node.java:129)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:168)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:268)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nRefer to the log for complete error details.\n```\n\nAs part of the doc resolution to this ticket, let's also include this in our breaking changes documentation (https://www.elastic.co/guide/en/elasticsearch/reference/2.0/_plugin_and_packaging_changes.html and/or https://www.elastic.co/guide/en/elasticsearch/reference/2.0/_scripting_changes.html). 
Thx!\n", "created_at": "2015-09-28T14:45:05Z" }, { "body": "has this been fixed\nWe upgraded to the ES2.0 and tried restarting it and\n\n```\n[2015-10-30 01:34:45,391][INFO ][node ] [Spitfire] version[2.0.0], pid[5824], build[de54438/2015-10-22T08:09:48Z]\n[2015-10-30 01:34:45,392][INFO ][node ] [Spitfire] initializing ...\n[2015-10-30 01:34:45,405][ERROR][bootstrap ] Exception\njava.lang.IllegalStateException: Unable to initialize plugins\n at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:115)\n at org.elasticsearch.node.Node.<init>(Node.java:144)\n at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)\n at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\nCaused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/plugins/_site/plugin-descriptor.properties\n at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n at java.nio.file.Files.newByteChannel(Files.java:361)\n at java.nio.file.Files.newByteChannel(Files.java:407)\n at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)\n at java.nio.file.Files.newInputStream(Files.java:152)\n at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:86)\n at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:306)\n at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:112)\n ... 5 more\n\n```\n", "created_at": "2015-10-30T01:48:45Z" }, { "body": "Did you create the descriptor file? https://www.elastic.co/guide/en/elasticsearch/plugins/current/plugin-authors.html#_plugin_descriptor_file\n", "created_at": "2015-10-30T03:23:43Z" }, { "body": "No nothing, but i already had few plugins\n", "created_at": "2015-10-30T03:42:04Z" }, { "body": "Weird that things work fine on my local machine,\nBut on my local machine when i do\n`sudo service elasticsearch status`\nI get \n`elasticsearch is not running`\n BUT\n'http://localhost:9200/' return \n\n``` json\n{\n \"status\" : 200,\n \"name\" : \"Orphan\",\n \"cluster_name\" : \"elasticsearch\",\n \"version\" : {\n \"number\" : \"1.4.4\",\n \"build_hash\" : \"c88f77ffc81301dfa9dfd81ca2232f09588bd512\",\n \"build_timestamp\" : \"2015-02-19T13:05:36Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.10.3\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n\n```\n", "created_at": "2015-10-30T03:45:19Z" }, { "body": "I don't understand. You wrote elasticsearch 2.0 but here it's 1.4.4.\n\nFeel free to ask your questions on discuss.elastic.co.\n", "created_at": "2015-10-30T08:09:08Z" }, { "body": "oh well i had to delete all previous node data to get the new version running..\nI wil post here later onces i configure mine and see if our online server does not give same error.\n", "created_at": "2015-10-30T10:46:48Z" } ], "number": 13811, "title": "native scripts docs are out of date" }
{ "body": "Closes #13811\n", "number": 13996, "review_comments": [ { "body": "you'll need a proper link here, because plugin-authors is in a separate \"book\", ie:\n\n```\n{plugins}/plugin-authors.html[documentation]\n```\n", "created_at": "2015-10-07T16:47:11Z" } ], "title": "Rewrite native script documentation" }
{ "commits": [ { "message": "[doc] Rewrite native script documentation\n\nCloses #13811" } ], "files": [ { "diff": "@@ -351,28 +351,86 @@ to `false`.\n [float]\n === Native (Java) Scripts\n \n-Even though `groovy` is pretty fast, this allows to register native Java based\n-scripts for faster execution.\n-\n-In order to allow for scripts, the `NativeScriptFactory` needs to be\n-implemented that constructs the script that will be executed. There are\n-two main types, one that extends `AbstractExecutableScript` and one that\n-extends `AbstractSearchScript` (probably the one most users will extend,\n-with additional helper classes in `AbstractLongSearchScript`,\n-`AbstractDoubleSearchScript`, and `AbstractFloatSearchScript`).\n-\n-Registering them can either be done by settings, for example:\n-`script.native.my.type` set to `sample.MyNativeScriptFactory` will\n-register a script named `my`. Another option is in a plugin, access\n-`ScriptModule` and call `registerScript` on it.\n-\n-Executing the script is done by specifying the `lang` as `native`, and\n-the name of the script as the `script`.\n-\n-Note, the scripts need to be in the classpath of elasticsearch. One\n-simple way to do it is to create a directory under plugins (choose a\n-descriptive name), and place the jar / classes files there. They will be\n-automatically loaded.\n+Sometimes `groovy` and `expressions` aren't enough. For those times you can\n+implement a native script.\n+\n+The best way to implement a native script is to write a plugin and install it.\n+The plugin {plugins}/plugin-authors.html[documentation] has more information on\n+how to write a plugin so that Elasticsearch will properly load it.\n+\n+To register the actual script you'll need to implement `NativeScriptFactory`\n+to construct the script. The actual script will extend either\n+`AbstractExecutableScript` or `AbstractSearchScript`. The second one is likely\n+the most useful and has several helpful subclasses you can extend like\n+`AbstractLongSearchScript`, `AbstractDoubleSearchScript`, and\n+`AbstractFloatSearchScript`. 
Finally, your plugin should register the native\n+script by declaring the `onModule(ScriptModule)` method.\n+\n+If you squashed the whole thing into one class it'd look like:\n+\n+[source,java]\n+--------------------------------------------------\n+public class MyNativeScriptPlugin extends Plugin {\n+ @Override\n+ public String name() {\n+ return \"my-native-script\";\n+ }\n+ @Override\n+ public String description() {\n+ return \"my native script that does something great\";\n+ }\n+ public void onModule(ScriptModule scriptModule) {\n+ scriptModule.registerScript(\"my_script\", MyNativeScriptFactory.class);\n+ }\n+\n+ public static class MyNativeScriptFactory implements NativeScriptFactory {\n+ @Override\n+ public ExecutableScript newScript(@Nullable Map<String, Object> params) {\n+ return new MyNativeScript();\n+ }\n+ @Override\n+ public boolean needsScores() {\n+ return false;\n+ }\n+ }\n+\n+ public static class MyNativeScript extends AbstractFloatSearchScript {\n+ @Override\n+ public float runAsFloat() {\n+ float a = (float) source().get(\"a\");\n+ float b = (float) source().get(\"b\");\n+ return a * b;\n+ }\n+ }\n+}\n+--------------------------------------------------\n+\n+You can execute the script by specifying its `lang` as `native`, and the name\n+of the script as the `id`:\n+\n+[source,js]\n+--------------------------------------------------\n+curl -XPOST localhost:9200/_search -d '{\n+ \"query\": {\n+ \"function_score\": {\n+ \"query\": {\n+ \"match\": {\n+ \"body\": \"foo\"\n+ }\n+ },\n+ \"functions\": [\n+ {\n+ \"script_score\": {\n+ \"id\": \"my_script\",\n+ \"lang\" : \"native\"\n+ }\n+ }\n+ ]\n+ }\n+ }\n+}'\n+--------------------------------------------------\n+\n \n [float]\n === Lucene Expressions Scripts", "filename": "docs/reference/modules/scripting.asciidoc", "status": "modified" } ] }
{ "body": "When running `plugin.bat` on Windows, passing Java options via the command line results in the following error:\n\n```\nc:\\elasticsearch\\bin>plugin \"-Des.plugins.staging=true\" install analysis-icu\nERROR: unknown command [-Des.plugins.staging=true]. Use [-h] option to list available commands\n```\n\nAdding `-Des.plugins.staging=true` directly to the script [here](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/plugin.bat#L14) works.\n\nThis might be because `plugin.bat` is missing the equivalent of [this in the bash script.](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/plugin#L69-L83).\n", "comments": [], "number": 13616, "title": "plugin.bat error parsing Java options" }
{ "body": "Fixes #13616\n", "number": 13989, "review_comments": [ { "body": "why do we need the distinction of -D and -D_=_ ? Can we not handle it like here https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch.bat ? I don't actually know a lot of bat so I am really just curious.\n", "created_at": "2015-10-07T10:02:18Z" }, { "body": "The idea is to model the same functionality as in the bash file:\n\nhttps://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/plugin#L83-L90\n\nThis means that properties can be passed as either `-Dmy.prop true` or `-Dmy.prop=true`.\n\nAs for why the parsing is not done using `FOR /...` as in elasticsearch.bat, I could not get that approach to work with quoted parameters, for example `plugin \"-Dmy.property=some value with spaces\" install foo`.\n", "created_at": "2015-10-07T10:14:08Z" }, { "body": "> This means that properties can be passed as either -Dmy.prop true or -Dmy.prop=true\n\nha! I did not even know we can do this! thanks for clarifying. But that means that it is different in `elasticsearch.bat` and also the bash elasticsearch script. I just checked, `-Dmy.prop true` does not work with them. I will open an issue for it.\n", "created_at": "2015-10-07T11:15:56Z" }, { "body": "That does not work with java. We should just stick with the way the jvm takes sysprops...\n\n```\n[14:04:26][~/Code/test]$ java -cp . Test\nnull\n[14:04:50][~/Code/test]$ java -cp . -Dfoo bar Test\nError: Could not find or load main class bar\n[14:05:03][~/Code/test]$ java -cp . -Dfoo=bar Test\nbar\n```\n", "created_at": "2015-10-07T21:06:20Z" } ], "title": "Parse Java system properties in plugin.bat" }
{ "commits": [ { "message": "Parse Java system properties in plugin.bat\n\nCloses #13616" } ], "files": [ { "diff": "@@ -1,6 +1,6 @@\n @echo off\n \n-SETLOCAL\n+SETLOCAL enabledelayedexpansion\n \n if NOT DEFINED JAVA_HOME goto err\n \n@@ -9,9 +9,46 @@ for %%I in (\"%SCRIPT_DIR%..\") do set ES_HOME=%%~dpfI\n \n TITLE Elasticsearch Plugin Manager ${project.version}\n \n+SET properties=\n+SET args=\n+\n+:loop\n+SET \"current=%~1\"\n+SHIFT\n+IF \"x!current!\" == \"x\" GOTO breakloop\n+\n+IF \"!current:~0,2%!\" == \"-D\" (\n+ ECHO \"!current!\" | FINDSTR /C:\"=\">nul && (\n+ :: current matches -D*=*\n+ IF \"x!properties!\" NEQ \"x\" (\n+ SET properties=!properties! \"!current!\"\n+ ) ELSE (\n+ SET properties=\"!current!\"\n+ )\n+ ) || (\n+ :: current matches -D*\n+ IF \"x!properties!\" NEQ \"x\" (\n+ SET properties=!properties! \"!current!=%~1\"\n+ ) ELSE (\n+ SET properties=\"!current!=%~1\"\n+ )\n+ SHIFT\n+ )\n+) ELSE (\n+ :: current matches *\n+ IF \"x!args!\" NEQ \"x\" (\n+ SET args=!args! \"!current!\"\n+ ) ELSE (\n+ SET args=\"!current!\"\n+ )\n+)\n+\n+GOTO loop\n+:breakloop\n+\n SET HOSTNAME=%COMPUTERNAME%\n \n-\"%JAVA_HOME%\\bin\\java\" -client -Des.path.home=\"%ES_HOME%\" -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginManagerCliParser\" %*\n+\"%JAVA_HOME%\\bin\\java\" -client -Des.path.home=\"%ES_HOME%\" !properties! -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginManagerCliParser\" !args!\n goto finally\n \n ", "filename": "distribution/src/main/resources/bin/plugin.bat", "status": "modified" } ] }
{ "body": "A mapping conflict such as:\n\n```\nPUT my_index\n{\n \"mappings\": {\n \"type_one\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\",\n \"analyzer\": \"standard\",\n \"search_analyzer\": \"whitespace\"\n }\n }\n },\n \"type_two\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n```\n\nthrows an exception like:\n\n```\n {\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"mapping [type_two]\",\n \"stack_trace\": \"MapperParsingException[mapping [type_two]]; ... 6 more\\n\"\n }\n ],\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"mapping [type_two]\",\n \"caused_by\": {\n \"type\": \"illegal_argument_exception\",\n \"reason\": \"Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different analyzer, mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types.]\",\n \"stack_trace\": \"java.lang....\\n\"\n },\n \"stack_trace\": \"MapperParsingException[mapping [type_two]]; nested: IllegalArgumentException[Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different analyzer, mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types.]];...\\n\"\n },\n \"status\": 400\n }\n```\n\nThe `error.root_cause.reason` should contain the text from `error.type.caused_by.reason`\n", "comments": [ { "body": "I'm not sure if this is a bug.\nThe root causes here block looks like correct. Inside of it, there could be a caused_by key-value pair like this:\n\n```\n \"root_cause\": [\n {\n \"type\": \"mapper_parsing_exception\",\n \"reason\": \"mapping [type_two]\",\n \"stack_trace\": \"MapperParsingException[mapping [type_two]]; ... 6 more\\n\"\n \"caused_by\": {.......} // skipped\n }\n\n```\n\nWe skip it on purpose, [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/rest/BytesRestResponse.java#L130)\n", "created_at": "2015-08-14T05:52:13Z" }, { "body": "I don't think it is a bug, but a choice that was made. The root_cause stops at a subclass of ElasticsearchException, I believe. Perhaps @s1monw can elaborate on why we don't go down to the original root cause (the IAE here)?\n", "created_at": "2015-08-14T06:01:45Z" }, { "body": "+1 on improving exception rendering to try to better expose the actual root cause of the issue\n", "created_at": "2015-08-14T13:27:46Z" } ], "number": 12839, "title": "Mapping conflict exception not bubbling up to root cause" }
{ "body": "Creates a new ElasticsearchException, ConflictingFieldTypesException, that\nit then throws on any mapping conflicts.\n\nThe error messages for mapping conflicts now look like:\n\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"mapper_parsing_exception\",\n \"reason\" : \"Failed to parse mapping [type_one]: Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\"\n } ],\n \"type\" : \"mapper_parsing_exception\",\n \"reason\" : \"Failed to parse mapping [type_one]: Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\",\n \"caused_by\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\"\n }\n },\n \"status\" : 400\n}\n```\n\nCloses #12839\n", "number": 13976, "review_comments": [], "title": "Make root_cause of field conflicts more obvious" }
{ "commits": [ { "message": "Make root_cause of field conflicts more obvious\n\nDoes so by improving the error message passed to MapperParsingException.\n\nThe error messages for mapping conflicts now look like:\n```\n{\n \"error\" : {\n \"root_cause\" : [ {\n \"type\" : \"mapper_parsing_exception\",\n \"reason\" : \"Failed to parse mapping [type_one]: Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\"\n } ],\n \"type\" : \"mapper_parsing_exception\",\n \"reason\" : \"Failed to parse mapping [type_one]: Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\",\n \"caused_by\" : {\n \"type\" : \"illegal_argument_exception\",\n \"reason\" : \"Mapper for [text] conflicts with existing mapping in other types:\\n[mapper [text] has different [analyzer], mapper [text] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types., mapper [text] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.]\"\n }\n },\n \"status\" : 400\n}\n```\n\nCloses #12839\n\nChange implementation\n\nRather than make a new exception this improves the error message of the old\nexception." } ], "files": [ { "diff": "@@ -324,7 +324,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n mapperService.merge(MapperService.DEFAULT_MAPPING, new CompressedXContent(XContentFactory.jsonBuilder().map(mappings.get(MapperService.DEFAULT_MAPPING)).string()), false, request.updateAllTypes());\n } catch (Exception e) {\n removalReason = \"failed on parsing default mapping on index creation\";\n- throw new MapperParsingException(\"mapping [\" + MapperService.DEFAULT_MAPPING + \"]\", e);\n+ throw new MapperParsingException(\"Failed to parse mapping [{}]: {}\", e, MapperService.DEFAULT_MAPPING, e.getMessage());\n }\n }\n for (Map.Entry<String, Map<String, Object>> entry : mappings.entrySet()) {\n@@ -336,7 +336,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n mapperService.merge(entry.getKey(), new CompressedXContent(XContentFactory.jsonBuilder().map(entry.getValue()).string()), true, request.updateAllTypes());\n } catch (Exception e) {\n removalReason = \"failed on parsing mappings on index creation\";\n- throw new MapperParsingException(\"mapping [\" + entry.getKey() + \"]\", e);\n+ throw new MapperParsingException(\"Failed to parse mapping [{}]: {}\", e, entry.getKey(), e.getMessage());\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -39,4 +39,8 @@ public MapperException(String message) {\n public MapperException(String message, Throwable cause) {\n super(message, cause);\n }\n+\n+ public MapperException(String message, Throwable cause, Object... 
args) {\n+ super(message, cause, args);\n+ }\n }\n\\ No newline at end of file", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperException.java", "status": "modified" }, { "diff": "@@ -41,6 +41,10 @@ public MapperParsingException(String message, Throwable cause) {\n super(message, cause);\n }\n \n+ public MapperParsingException(String message, Throwable cause, Object... args) {\n+ super(message, cause, args);\n+ }\n+\n @Override\n public RestStatus status() {\n return RestStatus.BAD_REQUEST;", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperParsingException.java", "status": "modified" }, { "diff": "@@ -19,11 +19,9 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n-import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;\n-import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.ClusterState;\n@@ -32,11 +30,11 @@\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.IndexNotFoundException;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.query.RangeQueryBuilder;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n-import org.junit.Test;\n \n import java.util.HashMap;\n import java.util.concurrent.CountDownLatch;\n@@ -45,13 +43,15 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBlocked;\n-import static org.hamcrest.Matchers.*;\n+import static org.hamcrest.Matchers.allOf;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.lessThanOrEqualTo;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n @ClusterScope(scope = Scope.TEST)\n public class CreateIndexIT extends ESIntegTestCase {\n-\n- @Test\n public void testCreationDate_Given() {\n prepareCreate(\"test\").setSettings(Settings.builder().put(IndexMetaData.SETTING_CREATION_DATE, 4l)).get();\n ClusterStateResponse response = client().admin().cluster().prepareState().get();\n@@ -67,7 +67,6 @@ public void testCreationDate_Given() {\n assertThat(index.getCreationDate(), equalTo(4l));\n }\n \n- @Test\n public void testCreationDate_Generated() {\n long timeBeforeRequest = System.currentTimeMillis();\n prepareCreate(\"test\").get();\n@@ -85,7 +84,6 @@ public void testCreationDate_Generated() {\n assertThat(index.getCreationDate(), allOf(lessThanOrEqualTo(timeAfterRequest), greaterThanOrEqualTo(timeBeforeRequest)));\n }\n \n- @Test\n public void testDoubleAddMapping() throws Exception {\n try {\n prepareCreate(\"test\")\n@@ -113,7 +111,6 @@ public void testDoubleAddMapping() throws Exception {\n }\n }\n \n- @Test\n public void testInvalidShardCountSettings() throws Exception {\n try {\n prepareCreate(\"test\").setSettings(Settings.builder()\n@@ -152,7 +149,6 @@ public 
void testInvalidShardCountSettings() throws Exception {\n }\n }\n \n- @Test\n public void testCreateIndexWithBlocks() {\n try {\n setClusterReadOnly(true);\n@@ -162,14 +158,12 @@ public void testCreateIndexWithBlocks() {\n }\n }\n \n- @Test\n public void testCreateIndexWithMetadataBlocks() {\n assertAcked(prepareCreate(\"test\").setSettings(Settings.builder().put(IndexMetaData.SETTING_BLOCKS_METADATA, true)));\n assertBlocked(client().admin().indices().prepareGetSettings(\"test\"), IndexMetaData.INDEX_METADATA_BLOCK);\n disableIndexBlock(\"test\", IndexMetaData.SETTING_BLOCKS_METADATA);\n }\n \n- @Test\n public void testInvalidShardCountSettingsWithoutPrefix() throws Exception {\n try {\n prepareCreate(\"test\").setSettings(Settings.builder()\n@@ -222,7 +216,8 @@ public void testCreateAndDeleteIndexConcurrently() throws InterruptedException {\n @Override\n public void onResponse(DeleteIndexResponse deleteIndexResponse) {\n Thread thread = new Thread() {\n- public void run() {\n+ @Override\n+ public void run() {\n try {\n client().prepareIndex(\"test\", \"test\").setSource(\"index_version\", indexVersion.get()).get(); // recreate that index\n synchronized (indexVersionLock) {\n@@ -265,4 +260,29 @@ public void onFailure(Throwable e) {\n logger.info(\"total: {}\", expected.getHits().getTotalHits());\n }\n \n+ /**\n+ * Asserts that the root cause of mapping conflicts is readable.\n+ */\n+ public void testMappingConflictRootCause() throws Exception {\n+ CreateIndexRequestBuilder b = prepareCreate(\"test\");\n+ b.addMapping(\"type1\", jsonBuilder().startObject().startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"string\")\n+ .field(\"analyzer\", \"standard\")\n+ .field(\"search_analyzer\", \"whitespace\")\n+ .endObject().endObject().endObject());\n+ b.addMapping(\"type2\", jsonBuilder().humanReadable(true).startObject().startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"string\")\n+ .endObject().endObject().endObject());\n+ try {\n+ b.get();\n+ } catch (MapperParsingException e) {\n+ StringBuilder messages = new StringBuilder();\n+ for (Exception rootCause: e.guessRootCauses()) {\n+ messages.append(rootCause.getMessage());\n+ }\n+ assertThat(messages.toString(), containsString(\"mapper [text] is used by multiple types\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java", "status": "modified" }, { "diff": "@@ -40,6 +40,7 @@\n import static org.elasticsearch.test.VersionUtils.randomVersionBetween;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoSearchHits;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.is;\n \n /**\n@@ -174,7 +175,8 @@ public void testDynamicDateDetectionIn2xDoesNotSupportEpochs() throws Exception\n createIndex(Version.CURRENT, mapping);\n fail(\"Expected a MapperParsingException, but did not happen\");\n } catch (MapperParsingException e) {\n- assertThat(e.getMessage(), is(\"mapping [\" + type + \"]\"));\n+ assertThat(e.getMessage(), containsString(\"Failed to parse mapping [\" + type + \"]\"));\n+ assertThat(e.getMessage(), containsString(\"Epoch [epoch_seconds] is not supported as dynamic date format\"));\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/date/DateBackwardsCompatibilityTests.java", "status": "modified" } ] }
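The record above improves the root cause of mapping conflicts by embedding the cause's message when the mapping exception is rethrown. A minimal, self-contained sketch of that wrapping pattern is shown below; the class name `WrappedParsingException` is hypothetical and only stands in for Elasticsearch's `MapperParsingException`, which additionally supports `{}` placeholder formatting through its `Object... args` constructor.

```java
// Illustrative sketch only: a generic wrapper that surfaces the cause's message
// in its own message, in the spirit of the PR record above. Names are hypothetical.
public class WrappedParsingException extends RuntimeException {

    public WrappedParsingException(String context, Throwable cause) {
        // Embed the cause's message so a client that only reads the top-level
        // "reason" still sees the actual conflict, not just "mapping [type]".
        super("Failed to parse mapping [" + context + "]: " + cause.getMessage(), cause);
    }

    public static void main(String[] args) {
        try {
            throw new IllegalArgumentException(
                    "Mapper for [text] conflicts with existing mapping in other types");
        } catch (IllegalArgumentException e) {
            WrappedParsingException wrapped = new WrappedParsingException("type_one", e);
            // Prints the parsing context and the underlying conflict on one line.
            System.out.println(wrapped.getMessage());
        }
    }
}
```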
{ "body": "This PR brings this spec more inline with other api's and minimizes the known route variables in the API\n\nSee: https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/rest/action/admin/indices/mapping/get/RestGetFieldMappingAction.java#L66\n", "comments": [ { "body": "LGTM\n", "created_at": "2015-10-02T16:48:37Z" }, { "body": "merged to master and cherry-picked to `2.0`. `2.1`, `2.x`.\n", "created_at": "2015-10-06T10:01:20Z" } ], "number": 13902, "title": "Get field mapping documented fields as field" }
{ "body": "After `field` has been renamed to `fields` in #13902\n", "number": 13962, "review_comments": [], "title": "Fix indices.get_field_mapping rest tests" }
{ "commits": [ { "message": "Fix indices.get_field_mapping rest tests\n\nAfter `field` has been renamed to `fields` in #13902" } ], "files": [ { "diff": "@@ -18,7 +18,7 @@ setup:\n \n - do:\n indices.get_field_mapping:\n- field: text\n+ fields: text\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n \n@@ -27,7 +27,7 @@ setup:\n - do:\n indices.get_field_mapping:\n index: test_index\n- field: text\n+ fields: text\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n \n@@ -38,7 +38,7 @@ setup:\n indices.get_field_mapping:\n index: test_index\n type: test_type\n- field: text\n+ fields: text\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n \n@@ -49,7 +49,7 @@ setup:\n indices.get_field_mapping:\n index: test_index\n type: test_type\n- field: [ text , text1 ]\n+ fields: [ text , text1 ]\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n - is_false: test_index.mappings.test_type.text1\n@@ -61,19 +61,19 @@ setup:\n indices.get_field_mapping:\n index: test_index\n type: test_type\n- field: text\n+ fields: text\n include_defaults: true\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n - match: {test_index.mappings.test_type.text.mapping.text.analyzer: default}\n \n ---\n-\"Get field mapping should work without index specifying type and field\": \n+\"Get field mapping should work without index specifying type and fields\":\n \n - do:\n indices.get_field_mapping:\n type: test_type\n- field: text\n+ fields: text\n \n - match: {test_index.mappings.test_type.text.mapping.text.type: string}\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_field_mapping/10_basic.yaml", "status": "modified" }, { "diff": "@@ -19,6 +19,6 @@\n indices.get_field_mapping:\n index: test_index\n type: test_type\n- field: not_existent\n+ fields: not_existent\n \n - match: { '': {}}", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_field_mapping/20_missing_field.yaml", "status": "modified" }, { "diff": "@@ -20,5 +20,5 @@\n indices.get_field_mapping:\n index: test_index\n type: not_test_type\n- field: text\n+ fields: text\n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_field_mapping/30_missing_type.yaml", "status": "modified" }, { "diff": "@@ -6,6 +6,6 @@\n indices.get_field_mapping:\n index: test_index\n type: type\n- field: field\n+ fields: field\n \n ", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_field_mapping/40_missing_index.yaml", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ setup:\n \n - do:\n indices.get_field_mapping:\n- field: \"*\"\n+ fields: \"*\"\n \n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.t2.full_name: t2 }\n@@ -63,7 +63,7 @@ setup:\n - do:\n indices.get_field_mapping:\n index: test_index\n- field: \"t*\"\n+ fields: \"t*\"\n \n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.t2.full_name: t2 }\n@@ -75,7 +75,7 @@ setup:\n - do:\n indices.get_field_mapping:\n index: test_index\n- field: \"*t1\"\n+ fields: \"*t1\"\n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.obj\\.t1.full_name: obj.t1 }\n - match: {test_index.mappings.test_type.obj\\.i_t1.full_name: obj.i_t1 }\n@@ -87,7 +87,7 @@ setup:\n - do:\n indices.get_field_mapping:\n index: test_index\n- field: \"obj.i_*\"\n+ fields: 
\"obj.i_*\"\n - match: {test_index.mappings.test_type.obj\\.i_t1.full_name: obj.i_t1 }\n - match: {test_index.mappings.test_type.obj\\.i_t3.full_name: obj.i_t3 }\n - length: {test_index.mappings.test_type: 2}\n@@ -99,7 +99,7 @@ setup:\n indices.get_field_mapping:\n index: _all\n type: _all\n- field: \"t*\"\n+ fields: \"t*\"\n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.t2.full_name: t2 }\n - length: {test_index.mappings.test_type: 2}\n@@ -114,7 +114,7 @@ setup:\n indices.get_field_mapping:\n index: '*'\n type: '*'\n- field: \"t*\"\n+ fields: \"t*\"\n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.t2.full_name: t2 }\n - length: {test_index.mappings.test_type: 2}\n@@ -129,7 +129,7 @@ setup:\n indices.get_field_mapping:\n index: 'test_index,test_index_2'\n type: 'test_type,test_type_2'\n- field: \"t*\"\n+ fields: \"t*\"\n - match: {test_index.mappings.test_type.t1.full_name: t1 }\n - match: {test_index.mappings.test_type.t2.full_name: t2 }\n - length: {test_index.mappings.test_type: 2}", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/indices.get_field_mapping/50_field_wildcards.yaml", "status": "modified" } ] }
{ "body": "today we don't verify that the actual checksum written to VerifyingIndexOutput\nis the actual checksum we are expecting.\n", "comments": [ { "body": "I pushed a new commit @jasontedor @dakrone @rmuir @mikemccand \n", "created_at": "2015-09-29T16:04:18Z" }, { "body": "@mikemccand @rmuir pushed a new commit\n", "created_at": "2015-09-30T08:13:31Z" }, { "body": "@mikemccand next iteration pushed\n", "created_at": "2015-09-30T11:15:31Z" }, { "body": "LGTM, thanks @s1monw!\n", "created_at": "2015-09-30T13:31:06Z" } ], "number": 13848, "title": "Verify actually written checksum in VerifyingIndexOutput" }
{ "body": "The fix in #13848 has an off by one issue where the first byte of the checksum\nwas never written. Unfortunately most tests shadowed the problem and the first\nbyte of the checksum seems to be very likely a 0 which causes only very rare\nfailures.\n\nRelates to #13896\nRelates to #13848\n", "number": 13923, "review_comments": [], "title": "Record all bytes of the checksum in VerifyingIndexOutput" }
{ "commits": [ { "message": "Record all bytes of the checksum in VerifyingIndexOutput\n\nThe fix in #13848 has an off by one issue where the first byte of the checksum\nwas never written. Unfortunately most tests shadowed the problem and the first\nbyte of the checksum seems to be very likely a 0 which causes only very rare\nfailures.\n\nRelates to #13896\nRelates to #13848" } ], "files": [ { "diff": "@@ -1286,14 +1286,15 @@ public void verify() throws IOException {\n @Override\n public void writeByte(byte b) throws IOException {\n final long writtenBytes = this.writtenBytes++;\n- if (writtenBytes == checksumPosition) {\n- readAndCompareChecksum();\n- } else if (writtenBytes > checksumPosition) { // we are writing parts of the checksum....\n+ if (writtenBytes >= checksumPosition) { // we are writing parts of the checksum....\n+ if (writtenBytes == checksumPosition) {\n+ readAndCompareChecksum();\n+ }\n final int index = Math.toIntExact(writtenBytes - checksumPosition);\n if (index < footerChecksum.length) {\n footerChecksum[index] = b;\n if (index == footerChecksum.length-1) {\n- verify();// we have recorded the entire checksum\n+ verify(); // we have recorded the entire checksum\n }\n } else {\n verify(); // fail if we write more than expected\n@@ -1315,24 +1316,14 @@ private void readAndCompareChecksum() throws IOException {\n @Override\n public void writeBytes(byte[] b, int offset, int length) throws IOException {\n if (writtenBytes + length > checksumPosition) {\n- if (actualChecksum == null) {\n- assert writtenBytes <= checksumPosition;\n- final int bytesToWrite = (int) (checksumPosition - writtenBytes);\n- out.writeBytes(b, offset, bytesToWrite);\n- readAndCompareChecksum();\n- offset += bytesToWrite;\n- length -= bytesToWrite;\n- writtenBytes += bytesToWrite;\n- }\n- for (int i = 0; i < length; i++) {\n+ for (int i = 0; i < length; i++) { // don't optimze writing the last block of bytes\n writeByte(b[offset+i]);\n }\n } else {\n out.writeBytes(b, offset, length);\n writtenBytes += length;\n }\n }\n-\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -208,11 +208,6 @@ public void testChecksumCorrupted() throws IOException {\n verifyingOutput.writeByte(checksumBytes.bytes[i]);\n }\n }\n- if (randomBoolean()) {\n- appendRandomData(verifyingOutput);\n- } else {\n- Store.verify(verifyingOutput);\n- }\n fail(\"should be a corrupted index\");\n } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) {\n // ok", "filename": "core/src/test/java/org/elasticsearch/index/store/StoreTests.java", "status": "modified" } ] }
{ "body": "Out of box, ES expects its stuff to be in particular places. We should not be appending to ES_CLASSPATH, allowing users to specify stuff there, like we do in elasticsearch.bin.sh\n\nIf the user sets it, its not going to work out of box.\n", "comments": [ { "body": "+1 to nuke this. Users shouldnt be adding jars to the classpath. \n", "created_at": "2015-09-26T04:05:28Z" } ], "number": 13812, "title": "nuke ES_CLASSPATH appending" }
{ "body": "Old ES 1.x startup scripts were buggy, adding empty classpath elements. But this can be horrible: it means \"CWD\" to java, and when starting from a service maybe that is /, and now JarHell is scanning your entire computer (like #13864). \n\nIt would be better to just fail hard on a bogus classpath, and hint at the possible cause `(outdated shell script from a previous version?)`\n\nAdditionally, users can still override ES_CLASSPATH, which caused the whole bug in the first place for the 1.x scripts, but we should fail on that too. Its not going to work and you will just get a securityexception. Instead we can tell the user how to do it better.\n\nMaybe we want to clean this up for earlier versions (e.g. 2.1 or maybe even 2.0) too, as some of it is \"our fault\" (the old broken scripts) and possible still \"our fault\" (maybe packaging is not upgrading them properly?)\n\nRelates to #13864\nCloses #13812\n", "number": 13880, "review_comments": [], "title": "Nuke ES_CLASSPATH appending, JarHell fail on empty classpath elements" }
{ "commits": [ { "message": "Fail hard on empty classpath elements.\n\nThis can happen easily, if somehow old 1.x shellscripts survive and try to launch 2.x code.\nI have the feeling this happens maybe because of packaging upgrades or something.\nEither way: we can just fail hard and clear in this situation, rather than the current situation\nwhere CWD might be /, and we might traverse the entire filesystem until we hit an error...\n\nRelates to #13864" }, { "message": "Nuke ES_CLASSPATH appending\n\nOut of box, ES expects its stuff to be in particular places. We should not be appending to ES_CLASSPATH, allowing users to specify stuff there, like we do in elasticsearch.bin.sh\n\nIf the user sets it, its not going to work out of box.\n\nCloses #13812" }, { "message": "windows is terrible" } ], "files": [ { "diff": "@@ -88,17 +88,35 @@ public static void checkJarHell() throws Exception {\n }\n \n /**\n- * Parses the classpath into a set of URLs\n+ * Parses the classpath into an array of URLs\n+ * @return array of URLs\n+ * @throws IllegalStateException if the classpath contains empty elements\n */\n- @SuppressForbidden(reason = \"resolves against CWD because that is how classpaths work\")\n public static URL[] parseClassPath() {\n- String elements[] = System.getProperty(\"java.class.path\").split(System.getProperty(\"path.separator\"));\n+ return parseClassPath(System.getProperty(\"java.class.path\"));\n+ }\n+\n+ /**\n+ * Parses the classpath into a set of URLs. For testing.\n+ * @param classPath classpath to parse (typically the system property {@code java.class.path})\n+ * @return array of URLs\n+ * @throws IllegalStateException if the classpath contains empty elements\n+ */\n+ @SuppressForbidden(reason = \"resolves against CWD because that is how classpaths work\")\n+ static URL[] parseClassPath(String classPath) {\n+ String elements[] = classPath.split(System.getProperty(\"path.separator\"));\n URL urlElements[] = new URL[elements.length];\n for (int i = 0; i < elements.length; i++) {\n String element = elements[i];\n- // empty classpath element behaves like CWD.\n+ // Technically empty classpath element behaves like CWD.\n+ // So below is the \"correct\" code, however in practice with ES, this is usually just a misconfiguration,\n+ // from old shell scripts left behind or something:\n+ // if (element.isEmpty()) {\n+ // element = System.getProperty(\"user.dir\");\n+ // }\n+ // Instead we just throw an exception, and keep it clean.\n if (element.isEmpty()) {\n- element = System.getProperty(\"user.dir\");\n+ throw new IllegalStateException(\"Classpath should not contain empty elements! (outdated shell script from a previous version?) 
classpath='\" + classPath + \"'\");\n }\n try {\n urlElements[i] = PathUtils.get(element).toUri().toURL();", "filename": "core/src/main/java/org/elasticsearch/bootstrap/JarHell.java", "status": "modified" }, { "diff": "@@ -275,4 +275,64 @@ public void testInvalidVersions() {\n }\n }\n }\n+\n+ // classpath testing is system specific, so we just write separate tests for *nix and windows cases\n+\n+ /**\n+ * Parse a simple classpath with two elements on unix\n+ */\n+ public void testParseClassPathUnix() throws Exception {\n+ assumeTrue(\"test is designed for unix-like systems only\", \":\".equals(System.getProperty(\"path.separator\")));\n+ assumeTrue(\"test is designed for unix-like systems only\", \"/\".equals(System.getProperty(\"file.separator\")));\n+\n+ Path element1 = createTempDir();\n+ Path element2 = createTempDir();\n+\n+ URL expected[] = { element1.toUri().toURL(), element2.toUri().toURL() };\n+ assertArrayEquals(expected, JarHell.parseClassPath(element1.toString() + \":\" + element2.toString()));\n+ }\n+\n+ /**\n+ * Make sure an old unix classpath with an empty element (implicitly CWD: i'm looking at you 1.x ES scripts) fails\n+ */\n+ public void testEmptyClassPathUnix() throws Exception {\n+ assumeTrue(\"test is designed for unix-like systems only\", \":\".equals(System.getProperty(\"path.separator\")));\n+ assumeTrue(\"test is designed for unix-like systems only\", \"/\".equals(System.getProperty(\"file.separator\")));\n+\n+ try {\n+ JarHell.parseClassPath(\":/element1:/element2\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalStateException expected) {\n+ assertTrue(expected.getMessage().contains(\"should not contain empty elements\"));\n+ }\n+ }\n+\n+ /**\n+ * Parse a simple classpath with two elements on windows\n+ */\n+ public void testParseClassPathWindows() throws Exception {\n+ assumeTrue(\"test is designed for windows-like systems only\", \";\".equals(System.getProperty(\"path.separator\")));\n+ assumeTrue(\"test is designed for windows-like systems only\", \"\\\\\".equals(System.getProperty(\"file.separator\")));\n+\n+ Path element1 = createTempDir();\n+ Path element2 = createTempDir();\n+\n+ URL expected[] = { element1.toUri().toURL(), element2.toUri().toURL() };\n+ assertArrayEquals(expected, JarHell.parseClassPath(element1.toString() + \";\" + element2.toString()));\n+ }\n+\n+ /**\n+ * Make sure an old windows classpath with an empty element (implicitly CWD: i'm looking at you 1.x ES scripts) fails\n+ */\n+ public void testEmptyClassPathWindows() throws Exception {\n+ assumeTrue(\"test is designed for windows-like systems only\", \";\".equals(System.getProperty(\"path.separator\")));\n+ assumeTrue(\"test is designed for windows-like systems only\", \"\\\\\".equals(System.getProperty(\"file.separator\")));\n+\n+ try {\n+ JarHell.parseClassPath(\";c:\\\\element1;c:\\\\element2\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalStateException expected) {\n+ assertTrue(expected.getMessage().contains(\"should not contain empty elements\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/bootstrap/JarHellTests.java", "status": "modified" }, { "diff": "@@ -91,10 +91,13 @@ set JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF-8\n REM Use our provided JNA always versus the system one\n set JAVA_OPTS=%JAVA_OPTS% -Djna.nosys=true\n \n-set CORE_CLASSPATH=%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*\n+REM check in case a user was using this mechanism\n if \"%ES_CLASSPATH%\" == \"\" (\n-set ES_CLASSPATH=%CORE_CLASSPATH%\n+set 
ES_CLASSPATH=%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*\n ) else (\n-set ES_CLASSPATH=%ES_CLASSPATH%;%CORE_CLASSPATH%\n+ECHO Error: Don't modify the classpath with ES_CLASSPATH, Best is to add 1>&2\n+ECHO additional elements via the plugin mechanism, or if code must really be 1>&2\n+ECHO added to the main classpath, add jars to lib\\, unsupported 1>&2\n+EXIT /B 1\n )\n set ES_PARAMS=-Delasticsearch -Des-foreground=yes -Des.path.home=\"%ES_HOME%\"", "filename": "distribution/src/main/resources/bin/elasticsearch.in.bat", "status": "modified" }, { "diff": "@@ -1,13 +1,17 @@\n #!/bin/sh\n \n-CORE_CLASSPATH=\"$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*\"\n-\n-if [ \"x$ES_CLASSPATH\" = \"x\" ]; then\n- ES_CLASSPATH=\"$CORE_CLASSPATH\"\n-else\n- ES_CLASSPATH=\"$ES_CLASSPATH:$CORE_CLASSPATH\"\n+# check in case a user was using this mechanism\n+if [ \"x$ES_CLASSPATH\" != \"x\" ]; then\n+ cat >&2 << EOF\n+Error: Don't modify the classpath with ES_CLASSPATH. Best is to add\n+additional elements via the plugin mechanism, or if code must really be\n+added to the main classpath, add jars to lib/ (unsupported).\n+EOF\n+ exit 1\n fi\n \n+ES_CLASSPATH=\"$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*\"\n+\n if [ \"x$ES_MIN_MEM\" = \"x\" ]; then\n ES_MIN_MEM=${packaging.elasticsearch.heap.min}\n fi", "filename": "distribution/src/main/resources/bin/elasticsearch.in.sh", "status": "modified" } ] }
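The classpath hardening in the record above boils down to rejecting empty classpath elements instead of letting the JVM interpret them as the working directory. The sketch below mirrors that intent; the names (`ClasspathCheck`, `parse`) and the `List<Path>` return type are simplified stand-ins for `JarHell.parseClassPath`, which uses the platform path separator and returns `URL[]`.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: split a classpath string and fail hard on empty
// elements rather than silently scanning from the current working directory.
public class ClasspathCheck {

    static List<Path> parse(String classPath, String separator) {
        List<Path> result = new ArrayList<>();
        for (String element : classPath.split(separator)) {
            if (element.isEmpty()) {
                // An empty element means "current working directory" to the JVM,
                // almost always left behind by an outdated shell script.
                throw new IllegalStateException(
                        "Classpath should not contain empty elements! classpath='" + classPath + "'");
            }
            result.add(Paths.get(element));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("/opt/es/lib/a.jar:/opt/es/lib/b.jar", ":"));
        try {
            parse(":/opt/es/lib/a.jar", ":");  // leading ':' yields an empty first element
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```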
{ "body": "This happens when using the standard analyzer as a pre and post filter after a direct candidate generator (ES 1.6). When this query was run in a multi-node environment, the error seemed to persist and required a rolling restart. For example: \n\n``` json\n{ \"suggest\" : {\n \"text\" : \"pizeo\",\n \"simple_phrase\" : {\n \"phrase\" : {\n \"analyzer\" : \"english\",\n \"field\" : \"synopses\",\n \"direct_generator\" : [ {\n \"field\" : \"synopses\",\n \"suggest_mode\" : \"always\",\n \"min_word_length\" : 1\n }, {\n \"field\" : \"synopses\",\n \"suggest_mode\" : \"always\",\n \"min_word_length\" : 1,\n \"pre_filter\" : \"standard\",\n \"post_filter\" : \"standard\"\n } ]\n }\n }\n }\n}\n```\n\nWill throw this error:\n\n``` json\n{\n \"took\": 21,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 4,\n \"successful\": 1,\n \"failed\": 3,\n \"failures\": [\n {\n \"index\": \"project\",\n \"shard\": 0,\n \"status\": 500,\n \"reason\": \"IllegalStateException[TokenStream contract violation: close() call missing]\"\n },\n {\n \"index\": \"project\",\n \"shard\": 2,\n \"status\": 500,\n \"reason\": \"IllegalStateException[TokenStream contract violation: close() call missing]\"\n },\n {\n \"index\": \"project\",\n \"shard\": 3,\n \"status\": 500,\n \"reason\": \"IllegalStateException[TokenStream contract violation: close() call missing]\"\n }\n ]\n },\n \"suggest\": {\n \"simple_phrase\": [\n {\n \"text\": \"pizo\",\n \"offset\": 0,\n \"length\": 4,\n \"options\": []\n }\n ]\n }\n}\n```\n\n[2015-06-30 10:16:05,888][DEBUG][action.search.type ] [Wiz Kid] [project][0], node[OhG1Vc65RPyjCcSlXDbCJQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7bd06ad7] lastShard [true]\njava.lang.IllegalStateException: TokenStream contract violation: close() call missing\n at org.apache.lucene.analysis.Tokenizer.setReader(Tokenizer.java:90)\n at org.apache.lucene.analysis.Analyzer$TokenStreamComponents.setReader(Analyzer.java:323)\n at org.apache.lucene.analysis.standard.StandardAnalyzer$1.setReader(StandardAnalyzer.java:133)\n at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:147)\n at org.elasticsearch.search.suggest.SuggestUtils.analyze(SuggestUtils.java:119)\n at org.elasticsearch.search.suggest.SuggestUtils.analyze(SuggestUtils.java:115)\n at org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.preFilter(DirectCandidateGenerator.java:133)\n at org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.frequency(DirectCandidateGenerator.java:92)\n at org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator$2.nextToken(DirectCandidateGenerator.java:155)\n at org.elasticsearch.search.suggest.SuggestUtils.analyze(SuggestUtils.java:130)\n at org.elasticsearch.search.suggest.SuggestUtils.analyze(SuggestUtils.java:122)\n at org.elasticsearch.search.suggest.SuggestUtils.analyze(SuggestUtils.java:115)\n at org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.postFilter(DirectCandidateGenerator.java:148)\n at org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.drawCandidates(DirectCandidateGenerator.java:122)\n at org.elasticsearch.search.suggest.phrase.MultiCandidateGeneratorWrapper.drawCandidates(MultiCandidateGeneratorWrapper.java:52)\n at org.elasticsearch.search.suggest.phrase.NoisyChannelSpellChecker.getCorrections(NoisyChannelSpellChecker.java:116)\n at org.elasticsearch.search.suggest.phrase.PhraseSuggester.innerExecute(PhraseSuggester.java:98)\n at 
org.elasticsearch.search.suggest.phrase.PhraseSuggester.innerExecute(PhraseSuggester.java:54)\n at org.elasticsearch.search.suggest.Suggester.execute(Suggester.java:43)\n at org.elasticsearch.search.suggest.SuggestPhase.execute(SuggestPhase.java:85)\n at org.elasticsearch.search.suggest.SuggestPhase.execute(SuggestPhase.java:74)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:170)\n at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:289)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:300)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n", "comments": [ { "body": "@areek please could you investigate\n", "created_at": "2015-07-01T07:37:57Z" }, { "body": "We ran into this issue as well (also 1.6).\n", "created_at": "2015-08-03T23:29:32Z" }, { "body": "I ran in this issue with 2.0-beta2. With a similar configuration of @jhariani.\nThe shards are random failing executing many times the same call.\nThe fails are related to preFilter call in o.e.search.suggest.phrase.DirectCandidateGenerator.preFilter(DirectCandidateGenerator.java:115)\n", "created_at": "2015-09-30T10:02:23Z" }, { "body": "This was supposed to have been fixed in d97afd52f3e743547277c8fea6fb02fbe2bdc1ff ... and that fix is in 2.0-beta2 ... @aparo can you post more details about the exception? 
Was there an initial exception before you hit the `close() call missing`?\n", "created_at": "2015-09-30T10:31:51Z" }, { "body": "We are still seeing this issue on elastic 1.7.1 when using the following generator in a phrase suggester\n\n```\n {\n \"field\" : \"title.reversed\",\n \"prefix_length\" : 2,\n \"pre_filter\" : \"reverser\",\n \"post_filter\" : \"reverser\"\n }\n```\n\nWhere reverser looks like this :\n\n```\n \"reverser\": {\n \"tokenizer\": \"standard\",\n \"filter\": [\n \"lowercase\",\n \"dutch_stopwords\",\n \"asciifolding\",\n \"reverse\"\n ]\n }\n```\n\nif I take out the pre_filter things it does not fail\n", "created_at": "2015-11-13T16:06:27Z" }, { "body": "@jelmerk the fix is in 2.0\n", "created_at": "2015-11-17T16:35:56Z" }, { "body": "Hi,\r\nI`ve got the same error on elasticsearch 6.2.4:\r\n\r\nIndex:\r\n`PUT phrase-suggester-test-index\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"number_of_shards\": 1,\r\n \"analysis\": {\r\n \"analyzer\": {\r\n \"trigram\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"standard\",\r\n \"shingle\"\r\n ]\r\n },\r\n \"reverse\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"standard\",\r\n \"reverse\"\r\n ]\r\n },\r\n \"wordnet-synonym-analyzer\": {\r\n \"type\": \"custom\",\r\n \"tokenizer\": \"standard\",\r\n \"filter\": [\r\n \"lowercase\",\r\n \"synonym\"\r\n ]\r\n }\r\n },\r\n \"filter\": {\r\n \"shingle\": {\r\n \"type\": \"shingle\",\r\n \"min_shingle_size\": 2,\r\n \"max_shingle_size\": 3\r\n },\r\n \"synonym\": {\r\n \"type\": \"synonym\",\r\n \"format\": \"wordnet\",\r\n \"synonyms_path\": \"analysis/wn_s.pl\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"mappings\": {\r\n \"test\": {\r\n \"properties\": {\r\n \"title\": {\r\n \"type\": \"text\",\r\n \"fields\": {\r\n \"trigram\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"trigram\"\r\n },\r\n \"reverse\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"reverse\"\r\n },\r\n \"synonym\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"wordnet-synonym-analyzer\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n }\r\n}`\r\n\r\nData:\r\n`POST phrase-suggester-test-index/test?refresh=true\r\n{\"title\": \"Captain America\"}\r\n`\r\n\r\nRequest:\r\n`POST phrase-suggester-test-index/_search\r\n{\r\n \"suggest\": {\r\n \"text\" : \"captain usa\",\r\n \"simple_phrase\" : {\r\n \"phrase\" : {\r\n \"field\" : \"title.synonym\",\r\n \"size\" : 1,\r\n \"direct_generator\" : [{\r\n \"field\" : \"title.synonym\",\r\n \"suggest_mode\" : \"always\",\r\n \"pre_filter\" : \"wordnet-synonym-analyzer\",\r\n \"post_filter\" : \"wordnet-synonym-analyzer\"\r\n }],\r\n \"collate\": {\r\n \"query\": { \r\n \"source\" : {\r\n \"match\": {\r\n \"{{field_name}}\" : \"{{suggestion}}\" \r\n }\r\n }\r\n },\r\n \"params\": {\"field_name\" : \"title\"}, \r\n \"prune\": true \r\n }\r\n }\r\n }\r\n }\r\n}`\r\n\r\nResponse:\r\n`{\r\n \"error\": {\r\n \"root_cause\": [\r\n {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"TokenStream contract violation: close() call missing\"\r\n }\r\n ],\r\n \"type\": \"search_phase_execution_exception\",\r\n \"reason\": \"all shards failed\",\r\n \"phase\": \"query\",\r\n \"grouped\": true,\r\n \"failed_shards\": [\r\n {\r\n \"shard\": 0,\r\n \"index\": \"phrase-suggester-test-index\",\r\n \"node\": \"SKe2nqf-QSuSialfTHt2jA\",\r\n \"reason\": {\r\n \"type\": \"illegal_state_exception\",\r\n \"reason\": \"TokenStream contract violation: close() call missing\"\r\n }\r\n }\r\n ]\r\n },\r\n \"status\": 
500\r\n}`\r\n\r\nLogs:\r\n`[2018-05-10T12:45:11,479][WARN ][r.suppressed ] path: /phrase-suggester-test-index/_search, params: {index=phrase-suggester-test-index}\r\norg.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:132) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:243) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:107) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.access$100(InitialSearchPhase.java:49) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase$2.lambda$onFailure$1(InitialSearchPhase.java:217) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.maybeFork(InitialSearchPhase.java:171) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:49) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.InitialSearchPhase$2.onFailure(InitialSearchPhase.java:217) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:73) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:51) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:527) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1098) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1191) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1175) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:66) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.SearchTransportService$6$1.onFailure(SearchTransportService.java:385) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService$2.onFailure(SearchService.java:324) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:318) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:312) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService$3.doRun(SearchService.java:1002) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:672) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) 
[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]\r\n\tat java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]\r\nCaused by: org.elasticsearch.ElasticsearchException$1: TokenStream contract violation: close() call missing\r\n\tat org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:619) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.SearchPhaseExecutionException.guessRootCauses(SearchPhaseExecutionException.java:170) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.SearchPhaseExecutionException.getCause(SearchPhaseExecutionException.java:111) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:139) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:122) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.impl.MutableLogEvent.getThrownProxy(MutableLogEvent.java:326) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:64) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:333) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:232) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:217) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:57) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:177) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat 
org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146) ~[log4j-core-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.spi.ExtendedLoggerWrapper.logMessage(ExtendedLoggerWrapper.java:217) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.elasticsearch.common.logging.PrefixLogger.logMessage(PrefixLogger.java:102) ~[elasticsearch-core-6.2.4.jar:6.2.4]\r\n\tat org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2116) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2100) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1989) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1851) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.apache.logging.log4j.spi.AbstractLogger.warn(AbstractLogger.java:2589) ~[log4j-api-2.9.1.jar:2.9.1]\r\n\tat org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:133) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.raisePhaseFailure(AbstractSearchAsyncAction.java:222) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\t... 
28 more\r\nCaused by: java.lang.IllegalStateException: TokenStream contract violation: close() call missing\r\n\tat org.apache.lucene.analysis.Tokenizer.setReader(Tokenizer.java:90) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]\r\n\tat org.apache.lucene.analysis.Analyzer$TokenStreamComponents.setReader(Analyzer.java:412) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]\r\n\tat org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:165) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]\r\n\tat org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.analyze(DirectCandidateGenerator.java:316) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.preFilter(DirectCandidateGenerator.java:149) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.frequency(DirectCandidateGenerator.java:110) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.MultiCandidateGeneratorWrapper.frequency(MultiCandidateGeneratorWrapper.java:46) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.CandidateGenerator.createCandidate(CandidateGenerator.java:40) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.NoisyChannelSpellChecker$1.nextToken(NoisyChannelSpellChecker.java:95) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator.analyze(DirectCandidateGenerator.java:333) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.NoisyChannelSpellChecker.getCorrections(NoisyChannelSpellChecker.java:65) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.PhraseSuggester.innerExecute(PhraseSuggester.java:96) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.phrase.PhraseSuggester.innerExecute(PhraseSuggester.java:52) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.Suggester.execute(Suggester.java:39) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.suggest.SuggestPhase.execute(SuggestPhase.java:64) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:95) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:307) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:340) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\tat org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:316) ~[elasticsearch-6.2.4.jar:6.2.4]\r\n\t... 9 more`\r\n", "created_at": "2018-05-10T09:52:30Z" } ], "number": 11947, "title": "Use of pre and post filters throws \"TokenStream contract violation: close() call missing\"" }
{ "body": "We have a couple places where we might fail to close a `TokenStream` on exception ...\n\nMaybe closes #11947\n", "number": 13870, "review_comments": [], "title": "Close TokenStream in finally clause" }
{ "commits": [ { "message": "close TokenStream in finally" }, { "message": "move close responsibility back down to SuggestUtils.analyze" }, { "message": "cutover more Analyzer.tokenStream to try-with-resources" }, { "message": "another try-with-resources" } ], "files": [ { "diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.automaton.RegExp;\n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -484,30 +485,31 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw\n if (!settings.analyzeWildcard()) {\n return super.getPrefixQuery(field, termStr);\n }\n+ List<String> tlist;\n // get Analyzer from superclass and tokenize the term\n- TokenStream source;\n+ TokenStream source = null;\n try {\n- source = getAnalyzer().tokenStream(field, termStr);\n- source.reset();\n- } catch (IOException e) {\n- return super.getPrefixQuery(field, termStr);\n- }\n- List<String> tlist = new ArrayList<>();\n- CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n-\n- while (true) {\n try {\n- if (!source.incrementToken()) break;\n+ source = getAnalyzer().tokenStream(field, termStr);\n+ source.reset();\n } catch (IOException e) {\n- break;\n+ return super.getPrefixQuery(field, termStr);\n }\n- tlist.add(termAtt.toString());\n- }\n+ tlist = new ArrayList<>();\n+ CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n \n- try {\n- source.close();\n- } catch (IOException e) {\n- // ignore\n+ while (true) {\n+ try {\n+ if (!source.incrementToken()) break;\n+ } catch (IOException e) {\n+ break;\n+ }\n+ tlist.add(termAtt.toString());\n+ }\n+ } finally {\n+ if (source != null) {\n+ IOUtils.closeWhileHandlingException(source);\n+ }\n }\n \n if (tlist.size() == 1) {\n@@ -617,8 +619,7 @@ private Query getPossiblyAnalyzedWildcardQuery(String field, String termStr) thr\n char c = termStr.charAt(i);\n if (c == '?' 
|| c == '*') {\n if (isWithinToken) {\n- try {\n- TokenStream source = getAnalyzer().tokenStream(field, tmp.toString());\n+ try (TokenStream source = getAnalyzer().tokenStream(field, tmp.toString())) {\n source.reset();\n CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n if (source.incrementToken()) {\n@@ -633,7 +634,6 @@ private Query getPossiblyAnalyzedWildcardQuery(String field, String termStr) thr\n // no tokens, just use what we have now\n aggStr.append(tmp);\n }\n- source.close();\n } catch (IOException e) {\n aggStr.append(tmp);\n }\n@@ -648,22 +648,22 @@ private Query getPossiblyAnalyzedWildcardQuery(String field, String termStr) thr\n }\n if (isWithinToken) {\n try {\n- TokenStream source = getAnalyzer().tokenStream(field, tmp.toString());\n- source.reset();\n- CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n- if (source.incrementToken()) {\n- String term = termAtt.toString();\n- if (term.length() == 0) {\n+ try (TokenStream source = getAnalyzer().tokenStream(field, tmp.toString())) {\n+ source.reset();\n+ CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);\n+ if (source.incrementToken()) {\n+ String term = termAtt.toString();\n+ if (term.length() == 0) {\n+ // no tokens, just use what we have now\n+ aggStr.append(tmp);\n+ } else {\n+ aggStr.append(term);\n+ }\n+ } else {\n // no tokens, just use what we have now\n aggStr.append(tmp);\n- } else {\n- aggStr.append(term);\n }\n- } else {\n- // no tokens, just use what we have now\n- aggStr.append(tmp);\n }\n- source.close();\n } catch (IOException e) {\n aggStr.append(tmp);\n }", "filename": "core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java", "status": "modified" }, { "diff": "@@ -959,11 +959,9 @@ final Automaton toLookupAutomaton(final CharSequence key) throws IOException {\n // TODO: is there a Reader from a CharSequence?\n // Turn tokenstream into automaton:\n Automaton automaton = null;\n- TokenStream ts = queryAnalyzer.tokenStream(\"\", key.toString());\n- try {\n+ \n+ try (TokenStream ts = queryAnalyzer.tokenStream(\"\", key.toString())) {\n automaton = getTokenStreamToAutomaton().toAutomaton(ts);\n- } finally {\n- IOUtils.closeWhileHandlingException(ts);\n }\n \n automaton = replaceSep(automaton);", "filename": "core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java", "status": "modified" }, { "diff": "@@ -217,12 +217,10 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, ShardId shardId\n }\n \n List<AnalyzeResponse.AnalyzeToken> tokens = new ArrayList<>();\n- TokenStream stream = null;\n int lastPosition = -1;\n int lastOffset = 0;\n for (String text : request.text()) {\n- try {\n- stream = analyzer.tokenStream(field, text);\n+ try (TokenStream stream = analyzer.tokenStream(field, text)) {\n stream.reset();\n CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);\n PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);\n@@ -243,11 +241,8 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, ShardId shardId\n \n lastPosition += analyzer.getPositionIncrementGap(field);\n lastOffset += analyzer.getOffsetGap(field);\n-\n } catch (IOException e) {\n throw new ElasticsearchException(\"failed to analyze\", e);\n- } finally {\n- IOUtils.closeWhileHandlingException(stream);\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java", "status": "modified" 
}, { "diff": "@@ -314,7 +314,9 @@ public static boolean isCharacterTokenStream(TokenStream tokenStream) {\n * @see #isCharacterTokenStream(TokenStream)\n */\n public static boolean generatesCharacterTokenStream(Analyzer analyzer, String fieldName) throws IOException {\n- return isCharacterTokenStream(analyzer.tokenStream(fieldName, \"\"));\n+ try (TokenStream ts = analyzer.tokenStream(fieldName, \"\")) {\n+ return isCharacterTokenStream(ts);\n+ }\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/index/analysis/Analysis.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.mapper.core;\n \n+import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.TokenStream;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n import org.apache.lucene.document.Field;\n@@ -145,7 +146,7 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n if (valueAndBoost.value() == null) {\n count = fieldType().nullValue();\n } else {\n- count = countPositions(analyzer.analyzer().tokenStream(simpleName(), valueAndBoost.value()));\n+ count = countPositions(analyzer, simpleName(), valueAndBoost.value());\n }\n addIntegerFields(context, fields, count, valueAndBoost.boost());\n }\n@@ -156,12 +157,14 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n \n /**\n * Count position increments in a token stream. Package private for testing.\n- * @param tokenStream token stream to count\n+ * @param analyzer analyzer to create token stream\n+ * @param fieldName field name to pass to analyzer\n+ * @param fieldValue field value to pass to analyzer\n * @return number of position increments in a token stream\n * @throws IOException if tokenStream throws it\n */\n- static int countPositions(TokenStream tokenStream) throws IOException {\n- try {\n+ static int countPositions(Analyzer analyzer, String fieldName, String fieldValue) throws IOException {\n+ try (TokenStream tokenStream = analyzer.tokenStream(fieldName, fieldValue)) {\n int count = 0;\n PositionIncrementAttribute position = tokenStream.addAttribute(PositionIncrementAttribute.class);\n tokenStream.reset();\n@@ -171,8 +174,6 @@ static int countPositions(TokenStream tokenStream) throws IOException {\n tokenStream.end();\n count += position.getPositionIncrement();\n return count;\n- } finally {\n- tokenStream.close();\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java", "status": "modified" }, { "diff": "@@ -88,10 +88,11 @@ MemoryIndex indexDoc(ParseContext.Document d, Analyzer analyzer, MemoryIndex mem\n try {\n // TODO: instead of passing null here, we can have a CTL<Map<String,TokenStream>> and pass previous,\n // like the indexer does\n- TokenStream tokenStream = field.tokenStream(analyzer, null);\n- if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n- }\n+ try (TokenStream tokenStream = field.tokenStream(analyzer, null)) {\n+ if (tokenStream != null) {\n+ memoryIndex.addField(field.name(), tokenStream, field.boost());\n+ }\n+ }\n } catch (IOException e) {\n throw new ElasticsearchException(\"Failed to create token stream\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/percolator/MultiDocumentPercolatorIndex.java", "status": "modified" }, { "diff": "@@ -56,10 +56,11 @@ public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n Analyzer analyzer = 
context.mapperService().documentMapper(parsedDocument.type()).mappers().indexAnalyzer();\n // TODO: instead of passing null here, we can have a CTL<Map<String,TokenStream>> and pass previous,\n // like the indexer does\n- TokenStream tokenStream = field.tokenStream(analyzer, null);\n- if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n- }\n+ try (TokenStream tokenStream = field.tokenStream(analyzer, null)) {\n+ if (tokenStream != null) {\n+ memoryIndex.addField(field.name(), tokenStream, field.boost());\n+ }\n+ }\n } catch (Exception e) {\n throw new ElasticsearchException(\"Failed to create token stream for [\" + field.name() + \"]\", e);\n }", "filename": "core/src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.apache.lucene.search.highlight.TextFragment;\n import org.apache.lucene.util.BytesRefHash;\n import org.apache.lucene.util.CollectionUtil;\n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.text.Text;\n@@ -109,15 +110,16 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n for (Object textToHighlight : textsToHighlight) {\n String text = textToHighlight.toString();\n \n- TokenStream tokenStream = analyzer.tokenStream(mapper.fieldType().names().indexName(), text);\n- if (!tokenStream.hasAttribute(CharTermAttribute.class) || !tokenStream.hasAttribute(OffsetAttribute.class)) {\n- // can't perform highlighting if the stream has no terms (binary token stream) or no offsets\n- continue;\n- }\n- TextFragment[] bestTextFragments = entry.getBestTextFragments(tokenStream, text, false, numberOfFragments);\n- for (TextFragment bestTextFragment : bestTextFragments) {\n- if (bestTextFragment != null && bestTextFragment.getScore() > 0) {\n- fragsList.add(bestTextFragment);\n+ try (TokenStream tokenStream = analyzer.tokenStream(mapper.fieldType().names().indexName(), text)) {\n+ if (!tokenStream.hasAttribute(CharTermAttribute.class) || !tokenStream.hasAttribute(OffsetAttribute.class)) {\n+ // can't perform highlighting if the stream has no terms (binary token stream) or no offsets\n+ continue;\n+ }\n+ TextFragment[] bestTextFragments = entry.getBestTextFragments(tokenStream, text, false, numberOfFragments);\n+ for (TextFragment bestTextFragment : bestTextFragments) {\n+ if (bestTextFragment != null && bestTextFragment.getScore() > 0) {\n+ fragsList.add(bestTextFragment);\n+ }\n }\n }\n }\n@@ -165,7 +167,7 @@ public int compare(TextFragment o1, TextFragment o2) {\n String fieldContents = textsToHighlight.get(0).toString();\n int end;\n try {\n- end = findGoodEndForNoHighlightExcerpt(noMatchSize, analyzer.tokenStream(mapper.fieldType().names().indexName(), fieldContents));\n+ end = findGoodEndForNoHighlightExcerpt(noMatchSize, analyzer, mapper.fieldType().names().indexName(), fieldContents);\n } catch (Exception e) {\n throw new FetchPhaseExecutionException(context, \"Failed to highlight field [\" + highlighterContext.fieldName + \"]\", e);\n }\n@@ -181,8 +183,8 @@ public boolean canHighlight(FieldMapper fieldMapper) {\n return true;\n }\n \n- private static int findGoodEndForNoHighlightExcerpt(int noMatchSize, TokenStream tokenStream) throws IOException {\n- try {\n+ private static int findGoodEndForNoHighlightExcerpt(int noMatchSize, Analyzer analyzer, String fieldName, String contents) throws IOException {\n+ try 
(TokenStream tokenStream = analyzer.tokenStream(fieldName, contents)) {\n if (!tokenStream.hasAttribute(OffsetAttribute.class)) {\n // Can't split on term boundaries without offsets\n return -1;\n@@ -200,11 +202,9 @@ private static int findGoodEndForNoHighlightExcerpt(int noMatchSize, TokenStream\n }\n end = attr.endOffset();\n }\n+ tokenStream.end();\n // We've exhausted the token stream so we should just highlight everything.\n return end;\n- } finally {\n- tokenStream.end();\n- tokenStream.close();\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/highlight/PlainHighlighter.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.apache.lucene.util.BytesRefBuilder;\n import org.apache.lucene.util.CharsRef;\n import org.apache.lucene.util.CharsRefBuilder;\n+import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.automaton.LevenshteinAutomata;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n@@ -116,22 +117,34 @@ public static int analyze(Analyzer analyzer, BytesRef toAnalyze, String field, T\n }\n \n public static int analyze(Analyzer analyzer, CharsRef toAnalyze, String field, TokenConsumer consumer) throws IOException {\n- TokenStream ts = analyzer.tokenStream(\n- field, new FastCharArrayReader(toAnalyze.chars, toAnalyze.offset, toAnalyze.length)\n- );\n- return analyze(ts, consumer);\n+ try (TokenStream ts = analyzer.tokenStream(\n+ field, new FastCharArrayReader(toAnalyze.chars, toAnalyze.offset, toAnalyze.length))) {\n+ return analyze(ts, consumer);\n+ }\n }\n \n+ /** NOTE: this method closes the TokenStream, even on exception, which is awkward\n+ * because really the caller who called {@link Analyzer#tokenStream} should close it,\n+ * but when trying that there are recursion issues when we try to use the same\n+ * TokenStrem twice in the same recursion... */\n public static int analyze(TokenStream stream, TokenConsumer consumer) throws IOException {\n- stream.reset();\n- consumer.reset(stream);\n int numTokens = 0;\n- while (stream.incrementToken()) {\n- consumer.nextToken();\n- numTokens++;\n+ boolean success = false;\n+ try {\n+ stream.reset();\n+ consumer.reset(stream);\n+ while (stream.incrementToken()) {\n+ consumer.nextToken();\n+ numTokens++;\n+ }\n+ consumer.end();\n+ } finally {\n+ if (success) {\n+ stream.close();\n+ } else {\n+ IOUtils.closeWhileHandlingException(stream);\n+ }\n }\n- consumer.end();\n- stream.close();\n return numTokens;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/suggest/SuggestUtils.java", "status": "modified" }, { "diff": "@@ -100,9 +100,7 @@ public void end() throws IOException {\n \n @Override\n public void close() throws IOException {\n- if (posInc == -1) {\n- input.close();\n- }\n+ input.close();\n }\n \n public static interface ToFiniteStrings {", "filename": "core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionTokenStream.java", "status": "modified" }, { "diff": "@@ -92,12 +92,13 @@ public Suggestion<? extends Entry<? 
extends Option>> innerExecute(String name, P\n if (gens.size() > 0 && suggestTerms != null) {\n final NoisyChannelSpellChecker checker = new NoisyChannelSpellChecker(realWordErrorLikelihood, suggestion.getRequireUnigram(), suggestion.getTokenLimit());\n final BytesRef separator = suggestion.separator();\n- TokenStream stream = checker.tokenStream(suggestion.getAnalyzer(), suggestion.getText(), spare, suggestion.getField());\n- \n WordScorer wordScorer = suggestion.model().newScorer(indexReader, suggestTerms, suggestField, realWordErrorLikelihood, separator);\n- Result checkerResult = checker.getCorrections(stream, new MultiCandidateGeneratorWrapper(suggestion.getShardSize(),\n- gens.toArray(new CandidateGenerator[gens.size()])), suggestion.maxErrors(),\n- suggestion.getShardSize(), wordScorer, suggestion.confidence(), suggestion.gramSize());\n+ Result checkerResult;\n+ try (TokenStream stream = checker.tokenStream(suggestion.getAnalyzer(), suggestion.getText(), spare, suggestion.getField())) {\n+ checkerResult = checker.getCorrections(stream, new MultiCandidateGeneratorWrapper(suggestion.getShardSize(),\n+ gens.toArray(new CandidateGenerator[gens.size()])), suggestion.maxErrors(),\n+ suggestion.getShardSize(), wordScorer, suggestion.confidence(), suggestion.gramSize());\n+ }\n \n PhraseSuggestion.Entry resultEntry = buildResultEntry(suggestion, spare, checkerResult.cutoffScore);\n response.addTerm(resultEntry);", "filename": "core/src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggester.java", "status": "modified" }, { "diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.index.mapper.core;\n \n+import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.CannedTokenStream;\n+import org.apache.lucene.analysis.MockTokenizer;\n import org.apache.lucene.analysis.Token;\n import org.apache.lucene.analysis.TokenStream;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -87,7 +89,14 @@ public void testCountPositions() throws IOException {\n int finalTokenIncrement = 4; // Count the final token increment on the rare token streams that have them\n Token[] tokens = new Token[] {t1, t2, t3};\n Collections.shuffle(Arrays.asList(tokens), getRandom());\n- TokenStream tokenStream = new CannedTokenStream(finalTokenIncrement, 0, tokens);\n- assertThat(TokenCountFieldMapper.countPositions(tokenStream), equalTo(7));\n+ final TokenStream tokenStream = new CannedTokenStream(finalTokenIncrement, 0, tokens);\n+ // TODO: we have no CannedAnalyzer?\n+ Analyzer analyzer = new Analyzer() {\n+ @Override\n+ public TokenStreamComponents createComponents(String fieldName) {\n+ return new TokenStreamComponents(new MockTokenizer(), tokenStream);\n+ }\n+ };\n+ assertThat(TokenCountFieldMapper.countPositions(analyzer, \"\", \"\"), equalTo(7));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapperTests.java", "status": "modified" } ] }
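The recurring change across the diffs above is that every `Analyzer#tokenStream(...)` call is now wrapped in try-with-resources so the stream is closed on every code path, including exceptions. A minimal, self-contained sketch of that pattern, assuming a plain Lucene `StandardAnalyzer` and a made-up `countTokens` helper (neither is part of the patch):

```java
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class TokenStreamCloseSketch {

    /** Counts tokens; the stream is closed on every path, including exceptions. */
    static int countTokens(Analyzer analyzer, String field, String text) throws IOException {
        try (TokenStream ts = analyzer.tokenStream(field, text)) {
            int count = 0;
            ts.reset();                  // required before the first incrementToken()
            while (ts.incrementToken()) {
                count++;
            }
            ts.end();                    // records end-of-stream offsets/position increments
            return count;
        }                                // ts.close() runs here whether or not an exception was thrown
    }

    public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer()) {
            System.out.println(countTokens(analyzer, "body", "closing token streams with try-with-resources"));
        }
    }
}
```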
{ "body": "I have accidently created an index with name \".\" (dot). It was created without any problems. Tried to delete it and elastic did deleted whole my data on whole cluster.\n", "comments": [], "number": 13858, "title": "Unsafe index names" }
{ "body": "Fixes #13858\n", "number": 13862, "review_comments": [], "title": "Forbid index name `.` and `..`" }
{ "commits": [ { "message": "Forbid index name with '.' and '..'.\n\nFixes #13858" } ], "files": [ { "diff": "@@ -203,6 +203,9 @@ public void validateIndexName(String index, ClusterState state) {\n if (state.metaData().hasAlias(index)) {\n throw new InvalidIndexNameException(new Index(index), index, \"already exists as alias\");\n }\n+ if (index.equals(\".\") || index.equals(\"..\")) {\n+ throw new InvalidIndexNameException(new Index(index), index, \"must not be '.' or '..'\");\n+ }\n }\n \n private void createIndex(final CreateIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener, final Semaphore mdLock) {", "filename": "core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -203,7 +203,7 @@ public void testCreateIndexWithLongName() {\n \n try {\n // Catch chars that are more than a single byte\n- client().prepareIndex(randomAsciiOfLength(MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES -1).toLowerCase(Locale.ROOT) +\n+ client().prepareIndex(randomAsciiOfLength(MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES - 1).toLowerCase(Locale.ROOT) +\n \"Ϟ\".toLowerCase(Locale.ROOT),\n \"mytype\").setSource(\"foo\", \"bar\").get();\n fail(\"exception should have been thrown on too-long index name\");\n@@ -215,4 +215,22 @@ public void testCreateIndexWithLongName() {\n // we can create an index of max length\n createIndex(randomAsciiOfLength(MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES).toLowerCase(Locale.ROOT));\n }\n+\n+ public void testInvalidIndexName() {\n+ try {\n+ createIndex(\".\");\n+ fail(\"exception should have been thrown on dot index name\");\n+ } catch (InvalidIndexNameException e) {\n+ assertThat(\"exception contains message about index name is dot \" + e.getMessage(),\n+ e.getMessage().contains(\"Invalid index name [.], must not be \\'.\\' or '..'\"), equalTo(true));\n+ }\n+\n+ try {\n+ createIndex(\"..\");\n+ fail(\"exception should have been thrown on dot index name\");\n+ } catch (InvalidIndexNameException e) {\n+ assertThat(\"exception contains message about index name is dot \" + e.getMessage(),\n+ e.getMessage().contains(\"Invalid index name [..], must not be \\'.\\' or '..'\"), equalTo(true));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/indexing/IndexActionIT.java", "status": "modified" } ] }
{ "body": "When a shard becomes inactive we trigger a sync flush in order to speed up future recoveries. The sync flush causes a new translog generation to be made, which in turn confuses the IndexingMemoryController making it think that the shard is active. If no documents come along in the next 5m, the shard is made inactive again , triggering a sync flush and so forth.\n\nTo avoid this, the IndexingMemoryController is changed to ignore empty translogs when checking if a shard became active. This comes with the price of potentially missing indexing operations which are followed by a flush. This is acceptable as if no more index operation come in, it's OK to leave the shard inactive.\n\nA new unit test is introduced and comparable integration tests are removed.\n\nI think we need to push this all the way to 1.7.3 ... \n", "comments": [ { "body": "This was a great catch @bleskes, and the cutover from IT to unit test is wonderful.\n\nI left a bunch of small comments, but otherwise LGTM.\n\nI still find that massive if statement in `updateShardStatuses` confusing but I can't quickly see an easy way to improve it...\n", "created_at": "2015-09-26T09:35:35Z" }, { "body": "@mikemccand thx for the review. I pushed another round. Tried to simplify the 'big if'\n", "created_at": "2015-09-26T18:58:10Z" }, { "body": "LGTM, thanks @bleskes!\n", "created_at": "2015-09-26T19:21:12Z" }, { "body": "push this to master. will give CI some time before pushing it all the way back to 1.7...\n", "created_at": "2015-09-27T05:35:32Z" }, { "body": "> Pre-existing issue: I think 30s default is too large, because an index that was inactive and suddenly becomes active with a 512 KB indexing\n\nI was thinking about this some more (morning run, he he) - I think we should fold all the decisions and state about being active or not into the indexshard. Then it will be easy to reset on every index operation (immediately). We also already have the notion of an inactivity listener - we can extend it to have an active event as well. We can also add an checkIfInactive method on IndexShard which check if there was any indexing ops (we already have stats) since the last time it was checked. To avoid making IndexShard more complex, we can push this functionality ShardIndexingService - which is already in charge of stats.\n", "created_at": "2015-09-27T09:19:09Z" }, { "body": "> I think we should fold all the decisions and state about being active or not into the indexshard.\n\n+1, this would be cleaner. E.g. I don't like how `IndexShard.updateBufferSize` \"infers\" that it's going inactive by looking if the new size is 512 KB ... would be better it was more directly aware it's going inactive.\n\nI can try to tackle this.\n\n> To avoid making IndexShard more complex, we can push this functionality ShardIndexingService - which is already in charge of stats.\n\nOr, why not absorb `ShardIndexingService` into `IndexShard`? All it does is stats gathering and calling listeners (which I think is only used by percolator?). It's also confusing how it has `postCreate` and `postCreateUnderLock`. Seems like maybe an excessive abstraction at this point...\n", "created_at": "2015-09-27T14:39:31Z" }, { "body": "I personally don't mind folding it into indexshard though having a dedicate stats/active component has benefit (clearer testing suite) . I know Simon has an opinion on this.\n\nOn 27 sep. 
2015 4:39 PM +0200, Michael McCandlessnotifications@github.com, wrote:\n\n> > I think we should fold all the decisions and state about being active or not into the indexshard.\n> \n> +1, this would be cleaner. E.g. I don't like howIndexShard.updateBufferSize\"infers\" that it's going inactive by looking if the new size is 512 KB ... would be better it was more directly aware it's going inactive.\n> \n> I can try to tackle this.\n> \n> > To avoid making IndexShard more complex, we can push this functionality ShardIndexingService - which is already in charge of stats.\n> \n> Or, why not absorbShardIndexingServiceintoIndexShard? All it does is stats gathering and calling listeners (which I think is only used by percolator?). It's also confusing how it haspostCreateandpostCreateUnderLock. Seems like maybe an excessive abstraction at this point...\n> \n> —\n> Reply to this email directly orview it on GitHub(https://github.com/elastic/elasticsearch/pull/13802#issuecomment-143563355).\n", "created_at": "2015-09-27T15:03:29Z" } ], "number": 13802, "title": "An inactive shard is activated by triggered synced flush" }
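The fix discussed above boils down to one extra condition in the activity check: a translog-generation change only counts as new indexing if the new generation actually contains operations, so the empty translog produced by the inactivity-triggered sync flush no longer re-activates the shard. A condensed, stand-alone restatement of that rule (field and method names are simplified and do not match the production class):

```java
class ShardActivitySketch {
    long lastGeneration = -1;
    int lastOps = -1;
    boolean active = true;

    /** Called periodically with the shard's current translog generation and op count. */
    void sample(long generation, int ops) {
        boolean translogChanged = generation != lastGeneration || ops != lastOps;
        if (!active && translogChanged) {
            // A sync flush on an idle shard rolls the translog generation but writes no
            // operations; only a non-empty translog is treated as real indexing activity.
            // Before the fix, the generation bump alone flipped the shard back to active,
            // which scheduled another inactivity flush later, and so on in a loop.
            active = ops > 0;
        }
        lastGeneration = generation;
        lastOps = ops;
    }

    public static void main(String[] args) {
        ShardActivitySketch s = new ShardActivitySketch();
        s.active = false;               // shard was marked inactive and sync-flushed
        s.sample(1, 0);                 // flush rolled the generation, translog is empty
        System.out.println(s.active);   // false: not re-activated by the flush alone
        s.sample(1, 3);                 // real documents were indexed
        System.out.println(s.active);   // true
    }
}
```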
{ "body": "When a shard becomes in active we trigger a sync flush in order to speed up future recoveries. The sync flush causes a new translog generation to be made, which in turn confuses the IndexingMemoryController making it think that the shard is active. If no documents comes along in the next 5m, the shard is made inactive again , triggering a sync flush and so forth.\n\nTo avoid this, the IndexingMemoryController is changed to ignore empty translogs when checking if a shard became active. This comes with the price of potentially missing indexing operations which are followed by a flush. This is acceptable as if no more index operation come in, it's OK to leave the shard in active.\n\nA new unit test is introduced and comparable integration tests are removed.\n\nRelates #13802\nIncludes a backport of #13784\n", "number": 13824, "review_comments": [ { "body": "Could you replace null above with `TRANSLOG_BUFFER_SIZE_SETTING`? (This is a separate issue, but I never backported to 1.7.x...)\n", "created_at": "2015-09-28T10:27:24Z" }, { "body": "Use the new string constants here too (like in 2.x/master)?\n", "created_at": "2015-09-28T10:30:02Z" }, { "body": "not sure I follow - can you unpack? the null here is a default value... not settings name.\n", "created_at": "2015-09-28T10:35:54Z" }, { "body": "Duh sorry you're right ... don't change it!\n", "created_at": "2015-09-28T10:59:41Z" } ], "title": "An inactive shard is activated by triggered synced flush (backport of #13802)" }
{ "commits": [ { "message": "Internal: an inactive shard is activated by triggered synced flush\n\nWhen a shard becomes in active we trigger a sync flush in order to speed up future recoveries. The sync flush causes a new translog generation to be made, which in turn confuses the IndexingMemoryController making it think that the shard is active. If no documents comes along in the next 5m, the shard is made inactive again , triggering a sync flush and so forth.\n\nTo avoid this, the IndexingMemoryController is changed to ignore empty translogs when checking if a shard became active. This comes with the price of potentially missing indexing operations which are followed by a flush. This is acceptable as if no more index operation come in, it's OK to leave the shard in active.\n\nA new unit test is introduced and comparable integration tests are removed.\n\nRelates #13802\nIncludes a backport of #13784" }, { "message": "feedback" } ], "files": [ { "diff": "@@ -238,10 +238,7 @@ public boolean equals(Object o) {\n \n ByteSizeValue sizeValue = (ByteSizeValue) o;\n \n- if (size != sizeValue.size) return false;\n- if (sizeUnit != sizeValue.sizeUnit) return false;\n-\n- return true;\n+ return bytes() == sizeValue.bytes();\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java", "status": "modified" }, { "diff": "@@ -19,8 +19,8 @@\n \n package org.elasticsearch.indices.memory;\n \n-import com.google.common.collect.Lists;\n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -48,6 +48,44 @@\n */\n public class IndexingMemoryController extends AbstractLifecycleComponent<IndexingMemoryController> {\n \n+\n+ /** How much heap (% or bytes) we will share across all actively indexing shards on this node (default: 10%). */\n+ public static final String INDEX_BUFFER_SIZE_SETTING = \"indices.memory.index_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 48 MB). */\n+ public static final String MIN_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.min_index_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a ceiling on the actual size in bytes (default: not set). */\n+ public static final String MAX_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.max_index_buffer_size\";\n+\n+ /** Sets a floor on the per-shard index buffer size (default: 4 MB). */\n+ public static final String MIN_SHARD_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.min_shard_index_buffer_size\";\n+\n+ /** Sets a ceiling on the per-shard index buffer size (default: 512 MB). */\n+ public static final String MAX_SHARD_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.max_shard_index_buffer_size\";\n+\n+ /** How much heap (% or bytes) we will share across all actively indexing shards for the translog buffer (default: 1%). */\n+ public static final String TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.translog_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.translog_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 256 KB). 
*/\n+ public static final String MIN_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.min_translog_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.translog_buffer_size</code> is a %, to set a ceiling on the actual size in bytes (default: not set). */\n+ public static final String MAX_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.max_translog_buffer_size\";\n+\n+ /** Sets a floor on the per-shard translog buffer size (default: 2 KB). */\n+ public static final String MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.min_shard_translog_buffer_size\";\n+\n+ /** Sets a ceiling on the per-shard translog buffer size (default: 64 KB). */\n+ public static final String MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.max_shard_translog_buffer_size\";\n+\n+ /** If we see no indexing operations after this much time for a given shard, we consider that shard inactive (default: 5 minutes). */\n+ public static final String SHARD_INACTIVE_TIME_SETTING = \"indices.memory.shard_inactive_time\";\n+\n+ /** How frequently we check shards to find inactive ones (default: 30 seconds). */\n+ public static final String SHARD_INACTIVE_INTERVAL_TIME_SETTING = \"indices.memory.interval\";\n+\n+\n private final ThreadPool threadPool;\n private final IndicesService indicesService;\n \n@@ -67,19 +105,26 @@ public class IndexingMemoryController extends AbstractLifecycleComponent<Indexin\n private static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(\n IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n \n+ private final ShardsIndicesStatusChecker statusChecker;\n+\n @Inject\n public IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService) {\n+ this(settings, threadPool, indicesService, JvmInfo.jvmInfo().getMem().getHeapMax().bytes());\n+ }\n+\n+ // for testing\n+ protected IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService, long jvmMemoryInBytes) {\n super(settings);\n this.threadPool = threadPool;\n this.indicesService = indicesService;\n \n ByteSizeValue indexingBuffer;\n- String indexingBufferSetting = componentSettings.get(\"index_buffer_size\", \"10%\");\n+ String indexingBufferSetting = settings.get(INDEX_BUFFER_SIZE_SETTING, \"10%\");\n if (indexingBufferSetting.endsWith(\"%\")) {\n double percent = Double.parseDouble(indexingBufferSetting.substring(0, indexingBufferSetting.length() - 1));\n- indexingBuffer = new ByteSizeValue((long) (((double) JvmInfo.jvmInfo().mem().heapMax().bytes()) * (percent / 100)));\n- ByteSizeValue minIndexingBuffer = componentSettings.getAsBytesSize(\"min_index_buffer_size\", new ByteSizeValue(48, ByteSizeUnit.MB));\n- ByteSizeValue maxIndexingBuffer = componentSettings.getAsBytesSize(\"max_index_buffer_size\", null);\n+ indexingBuffer = new ByteSizeValue((long) (((double) jvmMemoryInBytes) * (percent / 100)));\n+ ByteSizeValue minIndexingBuffer = settings.getAsBytesSize(MIN_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(48, ByteSizeUnit.MB));\n+ ByteSizeValue maxIndexingBuffer = settings.getAsBytesSize(MAX_INDEX_BUFFER_SIZE_SETTING, null);\n \n if (indexingBuffer.bytes() < minIndexingBuffer.bytes()) {\n indexingBuffer = minIndexingBuffer;\n@@ -91,18 +136,17 @@ public IndexingMemoryController(Settings settings, ThreadPool threadPool, Indice\n indexingBuffer = ByteSizeValue.parseBytesSizeValue(indexingBufferSetting, null);\n }\n this.indexingBuffer = indexingBuffer;\n- 
this.minShardIndexBufferSize = componentSettings.getAsBytesSize(\"min_shard_index_buffer_size\", new ByteSizeValue(4, ByteSizeUnit.MB));\n+ this.minShardIndexBufferSize = settings.getAsBytesSize(MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(4, ByteSizeUnit.MB));\n // LUCENE MONITOR: Based on this thread, currently (based on Mike), having a large buffer does not make a lot of sense: https://issues.apache.org/jira/browse/LUCENE-2324?focusedCommentId=13005155&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13005155\n- this.maxShardIndexBufferSize = componentSettings.getAsBytesSize(\"max_shard_index_buffer_size\", new ByteSizeValue(512, ByteSizeUnit.MB));\n+ this.maxShardIndexBufferSize = settings.getAsBytesSize(MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(512, ByteSizeUnit.MB));\n \n ByteSizeValue translogBuffer;\n- String translogBufferSetting = componentSettings.get(\"translog_buffer_size\", \"1%\");\n+ String translogBufferSetting = settings.get(TRANSLOG_BUFFER_SIZE_SETTING, \"1%\");\n if (translogBufferSetting.endsWith(\"%\")) {\n double percent = Double.parseDouble(translogBufferSetting.substring(0, translogBufferSetting.length() - 1));\n- translogBuffer = new ByteSizeValue((long) (((double) JvmInfo.jvmInfo().mem().heapMax().bytes()) * (percent / 100)));\n- ByteSizeValue minTranslogBuffer = componentSettings.getAsBytesSize(\"min_translog_buffer_size\", new ByteSizeValue(256, ByteSizeUnit.KB));\n- ByteSizeValue maxTranslogBuffer = componentSettings.getAsBytesSize(\"max_translog_buffer_size\", null);\n-\n+ translogBuffer = new ByteSizeValue((long) (((double) jvmMemoryInBytes) * (percent / 100)));\n+ ByteSizeValue minTranslogBuffer = settings.getAsBytesSize(MIN_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(256, ByteSizeUnit.KB));\n+ ByteSizeValue maxTranslogBuffer = settings.getAsBytesSize(MAX_TRANSLOG_BUFFER_SIZE_SETTING, null);\n if (translogBuffer.bytes() < minTranslogBuffer.bytes()) {\n translogBuffer = minTranslogBuffer;\n }\n@@ -113,21 +157,26 @@ public IndexingMemoryController(Settings settings, ThreadPool threadPool, Indice\n translogBuffer = ByteSizeValue.parseBytesSizeValue(translogBufferSetting, null);\n }\n this.translogBuffer = translogBuffer;\n- this.minShardTranslogBufferSize = componentSettings.getAsBytesSize(\"min_shard_translog_buffer_size\", new ByteSizeValue(2, ByteSizeUnit.KB));\n- this.maxShardTranslogBufferSize = componentSettings.getAsBytesSize(\"max_shard_translog_buffer_size\", new ByteSizeValue(64, ByteSizeUnit.KB));\n+ this.minShardTranslogBufferSize = settings.getAsBytesSize(MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(2, ByteSizeUnit.KB));\n+ this.maxShardTranslogBufferSize = settings.getAsBytesSize(MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(64, ByteSizeUnit.KB));\n \n- this.inactiveTime = componentSettings.getAsTime(\"shard_inactive_time\", TimeValue.timeValueMinutes(5));\n+ this.inactiveTime = settings.getAsTime(SHARD_INACTIVE_TIME_SETTING, TimeValue.timeValueMinutes(5));\n // we need to have this relatively small to move a shard from inactive to active fast (enough)\n- this.interval = componentSettings.getAsTime(\"interval\", TimeValue.timeValueSeconds(30));\n-\n- logger.debug(\"using index_buffer_size [{}], with min_shard_index_buffer_size [{}], max_shard_index_buffer_size [{}], shard_inactive_time [{}]\", this.indexingBuffer, this.minShardIndexBufferSize, this.maxShardIndexBufferSize, this.inactiveTime);\n-\n+ this.interval = 
settings.getAsTime(SHARD_INACTIVE_INTERVAL_TIME_SETTING, TimeValue.timeValueSeconds(30));\n+\n+ this.statusChecker = new ShardsIndicesStatusChecker();\n+ logger.debug(\"using indexing buffer size [{}], with {} [{}], {} [{}], {} [{}], {} [{}]\",\n+ this.indexingBuffer,\n+ MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, this.minShardIndexBufferSize,\n+ MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, this.maxShardIndexBufferSize,\n+ SHARD_INACTIVE_TIME_SETTING, this.inactiveTime,\n+ SHARD_INACTIVE_INTERVAL_TIME_SETTING, this.interval);\n }\n \n @Override\n protected void doStart() throws ElasticsearchException {\n // its fine to run it on the scheduler thread, no busy work\n- this.scheduler = threadPool.scheduleWithFixedDelay(new ShardsIndicesStatusChecker(), interval);\n+ this.scheduler = threadPool.scheduleWithFixedDelay(statusChecker, interval);\n }\n \n @Override\n@@ -148,6 +197,89 @@ public ByteSizeValue indexingBufferSize() {\n return indexingBuffer;\n }\n \n+ /**\n+ * returns the current budget for the total amount of translog buffers of\n+ * active shards on this node\n+ */\n+ public ByteSizeValue translogBufferSize() {\n+ return translogBuffer;\n+ }\n+\n+\n+ protected List<ShardId> availableShards() {\n+ ArrayList<ShardId> list = new ArrayList<>();\n+\n+ for (IndexService indexService : indicesService) {\n+ for (IndexShard indexShard : indexService) {\n+ if (shardAvailable(indexShard)) {\n+ list.add(indexShard.shardId());\n+ }\n+ }\n+ }\n+ return list;\n+ }\n+\n+ /** returns true if shard exists and is availabe for updates */\n+ protected boolean shardAvailable(ShardId shardId) {\n+ return shardAvailable(getShard(shardId));\n+ }\n+\n+ /** returns true if shard exists and is availabe for updates */\n+ protected boolean shardAvailable(@Nullable IndexShard shard) {\n+ return shard != null && CAN_UPDATE_INDEX_BUFFER_STATES.contains(shard.state());\n+ }\n+\n+ /** gets an {@link IndexShard} instance for the given shard. returns null if the shard doesn't exist */\n+ protected IndexShard getShard(ShardId shardId) {\n+ IndexService indexService = indicesService.indexService(shardId.index().name());\n+ if (indexService != null) {\n+ IndexShard indexShard = indexService.shard(shardId.id());\n+ return indexShard;\n+ }\n+ return null;\n+ }\n+\n+ protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n+ final IndexShard shard = getShard(shardId);\n+ if (shard != null) {\n+ try {\n+ shard.updateBufferSize(shardIndexingBufferSize, shardTranslogBufferSize);\n+ } catch (EngineClosedException e) {\n+ // ignore\n+ } catch (FlushNotAllowedEngineException e) {\n+ // ignore\n+ } catch (Exception e) {\n+ logger.warn(\"failed to set shard {} index buffer to [{}]\", shardId, shardIndexingBufferSize);\n+ }\n+ }\n+ }\n+\n+\n+ /** returns the current translog status (generation id + ops) for the given shard id. Returns null if unavailable. 
*/\n+ protected ShardIndexingStatus getTranslogStatus(ShardId shardId) {\n+ final IndexShard indexShard = getShard(shardId);\n+ if (indexShard == null) {\n+ return null;\n+ }\n+ final Translog translog;\n+ try {\n+ translog = indexShard.translog();\n+ } catch (EngineClosedException e) {\n+ // not ready yet to be checked for activity\n+ return null;\n+ }\n+\n+ ShardIndexingStatus status = new ShardIndexingStatus();\n+ status.translogId = translog.currentId();\n+ status.translogNumberOfOperations = translog.estimatedNumberOfOperations();\n+ return status;\n+ }\n+\n+ // used for tests\n+ void forceCheck() {\n+ statusChecker.run();\n+ }\n+\n class ShardsIndicesStatusChecker implements Runnable {\n \n private final Map<ShardId, ShardIndexingStatus> shardsIndicesStatus = new HashMap<>();\n@@ -159,17 +291,10 @@ public void run() {\n \n changes.addAll(purgeDeletedAndClosedShards());\n \n- final List<IndexShard> activeToInactiveIndexingShards = Lists.newArrayList();\n+ final List<ShardId> activeToInactiveIndexingShards = new ArrayList<>();\n final int activeShards = updateShardStatuses(changes, activeToInactiveIndexingShards);\n- for (IndexShard indexShard : activeToInactiveIndexingShards) {\n- // update inactive indexing buffer size\n- try {\n- indexShard.markAsInactive();\n- } catch (EngineClosedException e) {\n- // ignore\n- } catch (FlushNotAllowedEngineException e) {\n- // ignore\n- }\n+ for (ShardId indexShard : activeToInactiveIndexingShards) {\n+ markShardAsInactive(indexShard);\n }\n if (!changes.isEmpty()) {\n calcAndSetShardBuffers(activeShards, \"[\" + changes + \"]\");\n@@ -181,57 +306,40 @@ public void run() {\n *\n * @return the current count of active shards\n */\n- private int updateShardStatuses(EnumSet<ShardStatusChangeType> changes, List<IndexShard> activeToInactiveIndexingShards) {\n+ private int updateShardStatuses(EnumSet<ShardStatusChangeType> changes, List<ShardId> activeToInactiveIndexingShards) {\n int activeShards = 0;\n- for (IndexService indexService : indicesService) {\n- for (IndexShard indexShard : indexService) {\n-\n- if (!CAN_UPDATE_INDEX_BUFFER_STATES.contains(indexShard.state())) {\n- // not ready to be updated yet.\n- continue;\n- }\n-\n- final long time = threadPool.estimatedTimeInMillis();\n+ for (ShardId shardId : availableShards()) {\n+ final ShardIndexingStatus currentStatus = getTranslogStatus(shardId);\n \n- Translog translog = indexShard.translog();\n- ShardIndexingStatus status = shardsIndicesStatus.get(indexShard.shardId());\n- if (status == null) {\n- status = new ShardIndexingStatus();\n- shardsIndicesStatus.put(indexShard.shardId(), status);\n- changes.add(ShardStatusChangeType.ADDED);\n- }\n- // check if it is deemed to be inactive (sam translogId and numberOfOperations over a long period of time)\n- if (status.translogId == translog.currentId() && translog.estimatedNumberOfOperations() == status.translogNumberOfOperations) {\n- if (status.time == -1) { // first time\n- status.time = time;\n- }\n- // inactive?\n- if (status.activeIndexing) {\n- // mark it as inactive only if enough time has passed and there are no ongoing merges going on...\n- if ((time - status.time) > inactiveTime.millis() && indexShard.mergeStats().getCurrent() == 0) {\n- // inactive for this amount of time, mark it\n- activeToInactiveIndexingShards.add(indexShard);\n- status.activeIndexing = false;\n- changes.add(ShardStatusChangeType.BECAME_INACTIVE);\n- logger.debug(\"marking shard [{}][{}] as inactive (inactive_time[{}]) indexing wise, setting size to [{}]\", 
indexShard.shardId().index().name(), indexShard.shardId().id(), inactiveTime, EngineConfig.INACTIVE_SHARD_INDEXING_BUFFER);\n- }\n- }\n- } else {\n- if (!status.activeIndexing) {\n- status.activeIndexing = true;\n- changes.add(ShardStatusChangeType.BECAME_ACTIVE);\n- logger.debug(\"marking shard [{}][{}] as active indexing wise\", indexShard.shardId().index().name(), indexShard.shardId().id());\n- }\n- status.time = -1;\n- }\n- status.translogId = translog.currentId();\n- status.translogNumberOfOperations = translog.estimatedNumberOfOperations();\n+ if (currentStatus == null) {\n+ // shard was closed..\n+ continue;\n+ }\n \n- if (status.activeIndexing) {\n- activeShards++;\n+ ShardIndexingStatus status = shardsIndicesStatus.get(shardId);\n+ if (status == null) {\n+ status = currentStatus;\n+ shardsIndicesStatus.put(shardId, status);\n+ changes.add(ShardStatusChangeType.ADDED);\n+ } else {\n+ final boolean lastActiveIndexing = status.activeIndexing;\n+ status.updateWith(currentTimeInNanos(), currentStatus, inactiveTime.nanos());\n+ if (lastActiveIndexing && (status.activeIndexing == false)) {\n+ activeToInactiveIndexingShards.add(shardId);\n+ changes.add(ShardStatusChangeType.BECAME_INACTIVE);\n+ logger.debug(\"marking shard {} as inactive (inactive_time[{}]) indexing wise, setting size to [{}]\",\n+ shardId,\n+ inactiveTime, EngineConfig.INACTIVE_SHARD_INDEXING_BUFFER);\n+ } else if ((lastActiveIndexing == false) && status.activeIndexing) {\n+ changes.add(ShardStatusChangeType.BECAME_ACTIVE);\n+ logger.debug(\"marking shard {} as active indexing wise\", shardId);\n }\n }\n+ if (status.activeIndexing) {\n+ activeShards++;\n+ }\n }\n+\n return activeShards;\n }\n \n@@ -245,26 +353,10 @@ private EnumSet<ShardStatusChangeType> purgeDeletedAndClosedShards() {\n \n Iterator<ShardId> statusShardIdIterator = shardsIndicesStatus.keySet().iterator();\n while (statusShardIdIterator.hasNext()) {\n- ShardId statusShardId = statusShardIdIterator.next();\n- IndexService indexService = indicesService.indexService(statusShardId.getIndex());\n- boolean remove = false;\n- try {\n- if (indexService == null) {\n- remove = true;\n- continue;\n- }\n- IndexShard indexShard = indexService.shard(statusShardId.id());\n- if (indexShard == null) {\n- remove = true;\n- continue;\n- }\n- remove = !CAN_UPDATE_INDEX_BUFFER_STATES.contains(indexShard.state());\n-\n- } finally {\n- if (remove) {\n- changes.add(ShardStatusChangeType.DELETED);\n- statusShardIdIterator.remove();\n- }\n+ ShardId shardId = statusShardIdIterator.next();\n+ if (shardAvailable(shardId) == false) {\n+ changes.add(ShardStatusChangeType.DELETED);\n+ statusShardIdIterator.remove();\n }\n }\n return changes;\n@@ -291,41 +383,80 @@ private void calcAndSetShardBuffers(int activeShards, String reason) {\n }\n \n logger.debug(\"recalculating shard indexing buffer (reason={}), total is [{}] with [{}] active shards, each shard set to indexing=[{}], translog=[{}]\", reason, indexingBuffer, activeShards, shardIndexingBufferSize, shardTranslogBufferSize);\n- for (IndexService indexService : indicesService) {\n- for (IndexShard indexShard : indexService) {\n- IndexShardState state = indexShard.state();\n- if (!CAN_UPDATE_INDEX_BUFFER_STATES.contains(state)) {\n- logger.trace(\"shard [{}] is not yet ready for index buffer update. 
index shard state: [{}]\", indexShard.shardId(), state);\n- continue;\n- }\n- ShardIndexingStatus status = shardsIndicesStatus.get(indexShard.shardId());\n- if (status == null || status.activeIndexing) {\n- try {\n- indexShard.updateBufferSize(shardIndexingBufferSize, shardTranslogBufferSize);\n- } catch (EngineClosedException e) {\n- // ignore\n- continue;\n- } catch (FlushNotAllowedEngineException e) {\n- // ignore\n- continue;\n- } catch (Exception e) {\n- logger.warn(\"failed to set shard {} index buffer to [{}]\", indexShard.shardId(), shardIndexingBufferSize);\n- }\n- }\n+ for (ShardId shardId : availableShards()) {\n+ ShardIndexingStatus status = shardsIndicesStatus.get(shardId);\n+ if (status == null || status.activeIndexing) {\n+ updateShardBuffers(shardId, shardIndexingBufferSize, shardTranslogBufferSize);\n }\n }\n }\n }\n \n+ protected long currentTimeInNanos() {\n+ return System.nanoTime();\n+ }\n+\n+ // update inactive indexing buffer size\n+ protected void markShardAsInactive(ShardId shardId) {\n+ String ignoreReason = null;\n+ final IndexShard shard = getShard(shardId);\n+ if (shard != null) {\n+ try {\n+ shard.markAsInactive();\n+ } catch (EngineClosedException e) {\n+ // ignore\n+ ignoreReason = \"EngineClosedException\";\n+ } catch (FlushNotAllowedEngineException e) {\n+ // ignore\n+ ignoreReason = \"FlushNotAllowedEngineException\";\n+ }\n+ } else {\n+ ignoreReason = \"shard not found\";\n+ }\n+ if (ignoreReason != null) {\n+ logger.trace(\"ignore [{}] while marking shard {} as inactive\", ignoreReason, shardId);\n+ }\n+ }\n+\n private static enum ShardStatusChangeType {\n ADDED, DELETED, BECAME_ACTIVE, BECAME_INACTIVE\n }\n \n-\n static class ShardIndexingStatus {\n long translogId = -1;\n int translogNumberOfOperations = -1;\n boolean activeIndexing = true;\n- long time = -1; // contains the first time we saw this shard with no operations done on it\n+ long idleSinceNanoTime = -1; // contains the first time we saw this shard with no operations done on it\n+\n+\n+ /** update status based on a new sample. updates all internal variables */\n+ public void updateWith(long currentNanoTime, ShardIndexingStatus current, long inactiveNanoInterval) {\n+ final boolean idle = (translogId == current.translogId && translogNumberOfOperations == current.translogNumberOfOperations);\n+ if (activeIndexing && idle) {\n+ // no indexing activity detected.\n+ if (idleSinceNanoTime < 0) {\n+ // first time we see this, start the clock.\n+ idleSinceNanoTime = currentNanoTime;\n+ } else if ((currentNanoTime - idleSinceNanoTime) > inactiveNanoInterval) {\n+ // shard is inactive. mark it as such.\n+ activeIndexing = false;\n+ }\n+ } else if (activeIndexing == false // we weren't indexing before\n+ && idle == false // but we do now\n+ && current.translogNumberOfOperations > 0 // but only if we're really sure - see note bellow\n+ ) {\n+ // since we sync flush once a shard becomes inactive, the translog id can change, however that\n+ // doesn't mean the an indexing operation has happened. Note that if we're really unlucky and a flush happens\n+ // immediately after an indexing operation we may not become active immediately. The following\n+ // indexing operation will mark the shard as active, so it's OK. 
If that one doesn't come, we might as well stay\n+ // inactive\n+\n+ activeIndexing = true;\n+ idleSinceNanoTime = -1;\n+ }\n+\n+ translogId = current.translogId;\n+ translogNumberOfOperations = current.translogNumberOfOperations;\n+ }\n }\n }", "filename": "src/main/java/org/elasticsearch/indices/memory/IndexingMemoryController.java", "status": "modified" }, { "diff": "@@ -0,0 +1,286 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.indices.memory;\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.not;\n+\n+public class IndexingMemoryControllerUnitTests extends ElasticsearchTestCase {\n+\n+ static class MockController extends IndexingMemoryController {\n+\n+ final static ByteSizeValue INACTIVE = new ByteSizeValue(-1);\n+\n+ final Map<ShardId, Long> translogIds = new HashMap<>();\n+ final Map<ShardId, Integer> translogOps = new HashMap<>();\n+\n+ final Map<ShardId, ByteSizeValue> indexingBuffers = new HashMap<>();\n+ final Map<ShardId, ByteSizeValue> translogBuffers = new HashMap<>();\n+\n+ long currentTimeSec = TimeValue.timeValueNanos(System.nanoTime()).seconds();\n+\n+ public MockController(Settings settings) {\n+ super(ImmutableSettings.builder()\n+ .put(SHARD_INACTIVE_INTERVAL_TIME_SETTING, \"200h\") // disable it\n+ .put(SHARD_INACTIVE_TIME_SETTING, \"0s\") // immediate\n+ .put(settings)\n+ .build(),\n+ null, null, 100 * 1024 * 1024); // fix jvm mem size to 100mb\n+ }\n+\n+ public void incTranslog(ShardId shard1, int id, int ops) {\n+ setTranslog(shard1, translogIds.get(shard1) + id, translogOps.get(shard1) + ops);\n+ }\n+\n+ public void setTranslog(ShardId id, long translogId, int ops) {\n+ translogIds.put(id, translogId);\n+ translogOps.put(id, ops);\n+ }\n+\n+ public void deleteShard(ShardId id) {\n+ translogIds.remove(id);\n+ translogOps.remove(id);\n+ indexingBuffers.remove(id);\n+ translogBuffers.remove(id);\n+ }\n+\n+ public void assertActive(ShardId id) {\n+ assertThat(indexingBuffers.get(id), not(equalTo(INACTIVE)));\n+ assertThat(translogBuffers.get(id), not(equalTo(INACTIVE)));\n+ }\n+\n+ public void assertBuffers(ShardId id, ByteSizeValue indexing, ByteSizeValue translog) {\n+ assertThat(indexingBuffers.get(id), equalTo(indexing));\n+ 
assertThat(translogBuffers.get(id), equalTo(translog));\n+ }\n+\n+ public void assertInActive(ShardId id) {\n+ assertThat(indexingBuffers.get(id), equalTo(INACTIVE));\n+ assertThat(translogBuffers.get(id), equalTo(INACTIVE));\n+ }\n+\n+ @Override\n+ protected long currentTimeInNanos() {\n+ return TimeValue.timeValueSeconds(currentTimeSec).nanos();\n+ }\n+\n+ @Override\n+ protected List<ShardId> availableShards() {\n+ return new ArrayList<>(translogIds.keySet());\n+ }\n+\n+ @Override\n+ protected boolean shardAvailable(ShardId shardId) {\n+ return translogIds.containsKey(shardId);\n+ }\n+\n+ @Override\n+ protected void markShardAsInactive(ShardId shardId) {\n+ indexingBuffers.put(shardId, INACTIVE);\n+ translogBuffers.put(shardId, INACTIVE);\n+ }\n+\n+ @Override\n+ protected ShardIndexingStatus getTranslogStatus(ShardId shardId) {\n+ if (!shardAvailable(shardId)) {\n+ return null;\n+ }\n+ ShardIndexingStatus status = new ShardIndexingStatus();\n+ status.translogId = translogIds.get(shardId);\n+ status.translogNumberOfOperations = translogOps.get(shardId);\n+ return status;\n+ }\n+\n+ @Override\n+ protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n+ indexingBuffers.put(shardId, shardIndexingBufferSize);\n+ translogBuffers.put(shardId, shardTranslogBufferSize);\n+ }\n+\n+ public void incrementTimeSec(int sec) {\n+ currentTimeSec += sec;\n+ }\n+\n+ public void simulateFlush(ShardId shard) {\n+ setTranslog(shard, translogIds.get(shard) + 1, 0);\n+ }\n+ }\n+\n+ public void testShardAdditionAndRemoval() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\").build());\n+ final ShardId shard1 = new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+\n+ // add another shard\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // remove first shard\n+ controller.deleteShard(shard1);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard2, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+\n+ // remove second shard\n+ controller.deleteShard(shard2);\n+ controller.forceCheck();\n+\n+ // add a new one\n+ final ShardId shard3 = new ShardId(\"test\", 3);\n+ controller.setTranslog(shard3, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard3, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+ }\n+\n+ public void testActiveInactive() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\")\n+ .put(IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING, \"5s\")\n+ .build());\n+\n+ final ShardId shard1 = 
new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, 0, 0);\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, 0, 0);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // index into both shards, move the clock and see that they are still active\n+ controller.setTranslog(shard1, randomInt(2), randomInt(2) + 1);\n+ controller.setTranslog(shard2, randomInt(2) + 1, randomInt(2));\n+ // the controller doesn't know when the ops happened, so even if this is more\n+ // than the inactive time the shard is still marked as active\n+ controller.incrementTimeSec(10);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // index into one shard only, see other shard is made inactive correctly\n+ controller.incTranslog(shard1, randomInt(2), randomInt(2) + 1);\n+ controller.forceCheck(); // register what happened with the controller (shard is still active)\n+ controller.incrementTimeSec(3); // increment but not enough\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ controller.incrementTimeSec(3); // increment some more\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n+ controller.assertInActive(shard2);\n+\n+ if (randomBoolean()) {\n+ // once a shard gets inactive it will be synced flushed and a new translog generation will be made\n+ controller.simulateFlush(shard2);\n+ controller.forceCheck();\n+ controller.assertInActive(shard2);\n+ }\n+\n+ // index some and shard becomes immediately active\n+ controller.incTranslog(shard2, randomInt(2), 1 + randomInt(2)); // we must make sure translog ops is never 0\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ }\n+\n+ public void testMinShardBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"40kb\").build());\n+\n+ assertTwoActiveShards(controller, new ByteSizeValue(6, ByteSizeUnit.MB), new ByteSizeValue(40, ByteSizeUnit.KB));\n+ }\n+\n+ public void testMaxShardBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, \"3mb\")\n+ .put(IndexingMemoryController.MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, 
\"10kb\").build());\n+\n+ assertTwoActiveShards(controller, new ByteSizeValue(3, ByteSizeUnit.MB), new ByteSizeValue(10, ByteSizeUnit.KB));\n+ }\n+\n+ public void testRelativeBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"50%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.5%\")\n+ .build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(50, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+\n+ public void testMinBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+ public void testMaxBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.MAX_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MAX_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+ protected void assertTwoActiveShards(MockController controller, ByteSizeValue indexBufferSize, ByteSizeValue translogBufferSize) {\n+ final ShardId shard1 = new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, 0, 0);\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, 0, 0);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, indexBufferSize, translogBufferSize);\n+ controller.assertBuffers(shard2, indexBufferSize, translogBufferSize);\n+\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/indices/memory/IndexingMemoryControllerUnitTests.java", "status": "added" } ] }
{ "body": "currently ByteSizeValue.parse(\"1GB\") is not equal to ByteSizeValue.parse(\"1024MB\")\n", "comments": [ { "body": "@javanna can you take a look please?\n", "created_at": "2015-09-25T06:55:20Z" }, { "body": "@jpountz pushed another commit.\n", "created_at": "2015-09-25T10:59:48Z" }, { "body": "LGTM\n", "created_at": "2015-09-25T11:10:41Z" }, { "body": "thx @jpountz are you +1 on pushing this to 2.0 as well?\n", "created_at": "2015-09-25T11:12:04Z" }, { "body": "It looks like a low-risk fix, so 2.0 works for me.\n", "created_at": "2015-09-25T11:17:36Z" } ], "number": 13784, "title": "ByteSizeValue.equals should normalize units" }
{ "body": "When a shard becomes in active we trigger a sync flush in order to speed up future recoveries. The sync flush causes a new translog generation to be made, which in turn confuses the IndexingMemoryController making it think that the shard is active. If no documents comes along in the next 5m, the shard is made inactive again , triggering a sync flush and so forth.\n\nTo avoid this, the IndexingMemoryController is changed to ignore empty translogs when checking if a shard became active. This comes with the price of potentially missing indexing operations which are followed by a flush. This is acceptable as if no more index operation come in, it's OK to leave the shard in active.\n\nA new unit test is introduced and comparable integration tests are removed.\n\nRelates #13802\nIncludes a backport of #13784\n", "number": 13824, "review_comments": [ { "body": "Could you replace null above with `TRANSLOG_BUFFER_SIZE_SETTING`? (This is a separate issue, but I never backported to 1.7.x...)\n", "created_at": "2015-09-28T10:27:24Z" }, { "body": "Use the new string constants here too (like in 2.x/master)?\n", "created_at": "2015-09-28T10:30:02Z" }, { "body": "not sure I follow - can you unpack? the null here is a default value... not settings name.\n", "created_at": "2015-09-28T10:35:54Z" }, { "body": "Duh sorry you're right ... don't change it!\n", "created_at": "2015-09-28T10:59:41Z" } ], "title": "An inactive shard is activated by triggered synced flush (backport of #13802)" }
{ "commits": [ { "message": "Internal: an inactive shard is activated by triggered synced flush\n\nWhen a shard becomes in active we trigger a sync flush in order to speed up future recoveries. The sync flush causes a new translog generation to be made, which in turn confuses the IndexingMemoryController making it think that the shard is active. If no documents comes along in the next 5m, the shard is made inactive again , triggering a sync flush and so forth.\n\nTo avoid this, the IndexingMemoryController is changed to ignore empty translogs when checking if a shard became active. This comes with the price of potentially missing indexing operations which are followed by a flush. This is acceptable as if no more index operation come in, it's OK to leave the shard in active.\n\nA new unit test is introduced and comparable integration tests are removed.\n\nRelates #13802\nIncludes a backport of #13784" }, { "message": "feedback" } ], "files": [ { "diff": "@@ -238,10 +238,7 @@ public boolean equals(Object o) {\n \n ByteSizeValue sizeValue = (ByteSizeValue) o;\n \n- if (size != sizeValue.size) return false;\n- if (sizeUnit != sizeValue.sizeUnit) return false;\n-\n- return true;\n+ return bytes() == sizeValue.bytes();\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java", "status": "modified" }, { "diff": "@@ -19,8 +19,8 @@\n \n package org.elasticsearch.indices.memory;\n \n-import com.google.common.collect.Lists;\n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -48,6 +48,44 @@\n */\n public class IndexingMemoryController extends AbstractLifecycleComponent<IndexingMemoryController> {\n \n+\n+ /** How much heap (% or bytes) we will share across all actively indexing shards on this node (default: 10%). */\n+ public static final String INDEX_BUFFER_SIZE_SETTING = \"indices.memory.index_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 48 MB). */\n+ public static final String MIN_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.min_index_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.index_buffer_size</code> is a %, to set a ceiling on the actual size in bytes (default: not set). */\n+ public static final String MAX_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.max_index_buffer_size\";\n+\n+ /** Sets a floor on the per-shard index buffer size (default: 4 MB). */\n+ public static final String MIN_SHARD_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.min_shard_index_buffer_size\";\n+\n+ /** Sets a ceiling on the per-shard index buffer size (default: 512 MB). */\n+ public static final String MAX_SHARD_INDEX_BUFFER_SIZE_SETTING = \"indices.memory.max_shard_index_buffer_size\";\n+\n+ /** How much heap (% or bytes) we will share across all actively indexing shards for the translog buffer (default: 1%). */\n+ public static final String TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.translog_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.translog_buffer_size</code> is a %, to set a floor on the actual size in bytes (default: 256 KB). 
*/\n+ public static final String MIN_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.min_translog_buffer_size\";\n+\n+ /** Only applies when <code>indices.memory.translog_buffer_size</code> is a %, to set a ceiling on the actual size in bytes (default: not set). */\n+ public static final String MAX_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.max_translog_buffer_size\";\n+\n+ /** Sets a floor on the per-shard translog buffer size (default: 2 KB). */\n+ public static final String MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.min_shard_translog_buffer_size\";\n+\n+ /** Sets a ceiling on the per-shard translog buffer size (default: 64 KB). */\n+ public static final String MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING = \"indices.memory.max_shard_translog_buffer_size\";\n+\n+ /** If we see no indexing operations after this much time for a given shard, we consider that shard inactive (default: 5 minutes). */\n+ public static final String SHARD_INACTIVE_TIME_SETTING = \"indices.memory.shard_inactive_time\";\n+\n+ /** How frequently we check shards to find inactive ones (default: 30 seconds). */\n+ public static final String SHARD_INACTIVE_INTERVAL_TIME_SETTING = \"indices.memory.interval\";\n+\n+\n private final ThreadPool threadPool;\n private final IndicesService indicesService;\n \n@@ -67,19 +105,26 @@ public class IndexingMemoryController extends AbstractLifecycleComponent<Indexin\n private static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(\n IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n \n+ private final ShardsIndicesStatusChecker statusChecker;\n+\n @Inject\n public IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService) {\n+ this(settings, threadPool, indicesService, JvmInfo.jvmInfo().getMem().getHeapMax().bytes());\n+ }\n+\n+ // for testing\n+ protected IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService, long jvmMemoryInBytes) {\n super(settings);\n this.threadPool = threadPool;\n this.indicesService = indicesService;\n \n ByteSizeValue indexingBuffer;\n- String indexingBufferSetting = componentSettings.get(\"index_buffer_size\", \"10%\");\n+ String indexingBufferSetting = settings.get(INDEX_BUFFER_SIZE_SETTING, \"10%\");\n if (indexingBufferSetting.endsWith(\"%\")) {\n double percent = Double.parseDouble(indexingBufferSetting.substring(0, indexingBufferSetting.length() - 1));\n- indexingBuffer = new ByteSizeValue((long) (((double) JvmInfo.jvmInfo().mem().heapMax().bytes()) * (percent / 100)));\n- ByteSizeValue minIndexingBuffer = componentSettings.getAsBytesSize(\"min_index_buffer_size\", new ByteSizeValue(48, ByteSizeUnit.MB));\n- ByteSizeValue maxIndexingBuffer = componentSettings.getAsBytesSize(\"max_index_buffer_size\", null);\n+ indexingBuffer = new ByteSizeValue((long) (((double) jvmMemoryInBytes) * (percent / 100)));\n+ ByteSizeValue minIndexingBuffer = settings.getAsBytesSize(MIN_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(48, ByteSizeUnit.MB));\n+ ByteSizeValue maxIndexingBuffer = settings.getAsBytesSize(MAX_INDEX_BUFFER_SIZE_SETTING, null);\n \n if (indexingBuffer.bytes() < minIndexingBuffer.bytes()) {\n indexingBuffer = minIndexingBuffer;\n@@ -91,18 +136,17 @@ public IndexingMemoryController(Settings settings, ThreadPool threadPool, Indice\n indexingBuffer = ByteSizeValue.parseBytesSizeValue(indexingBufferSetting, null);\n }\n this.indexingBuffer = indexingBuffer;\n- 
this.minShardIndexBufferSize = componentSettings.getAsBytesSize(\"min_shard_index_buffer_size\", new ByteSizeValue(4, ByteSizeUnit.MB));\n+ this.minShardIndexBufferSize = settings.getAsBytesSize(MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(4, ByteSizeUnit.MB));\n // LUCENE MONITOR: Based on this thread, currently (based on Mike), having a large buffer does not make a lot of sense: https://issues.apache.org/jira/browse/LUCENE-2324?focusedCommentId=13005155&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13005155\n- this.maxShardIndexBufferSize = componentSettings.getAsBytesSize(\"max_shard_index_buffer_size\", new ByteSizeValue(512, ByteSizeUnit.MB));\n+ this.maxShardIndexBufferSize = settings.getAsBytesSize(MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, new ByteSizeValue(512, ByteSizeUnit.MB));\n \n ByteSizeValue translogBuffer;\n- String translogBufferSetting = componentSettings.get(\"translog_buffer_size\", \"1%\");\n+ String translogBufferSetting = settings.get(TRANSLOG_BUFFER_SIZE_SETTING, \"1%\");\n if (translogBufferSetting.endsWith(\"%\")) {\n double percent = Double.parseDouble(translogBufferSetting.substring(0, translogBufferSetting.length() - 1));\n- translogBuffer = new ByteSizeValue((long) (((double) JvmInfo.jvmInfo().mem().heapMax().bytes()) * (percent / 100)));\n- ByteSizeValue minTranslogBuffer = componentSettings.getAsBytesSize(\"min_translog_buffer_size\", new ByteSizeValue(256, ByteSizeUnit.KB));\n- ByteSizeValue maxTranslogBuffer = componentSettings.getAsBytesSize(\"max_translog_buffer_size\", null);\n-\n+ translogBuffer = new ByteSizeValue((long) (((double) jvmMemoryInBytes) * (percent / 100)));\n+ ByteSizeValue minTranslogBuffer = settings.getAsBytesSize(MIN_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(256, ByteSizeUnit.KB));\n+ ByteSizeValue maxTranslogBuffer = settings.getAsBytesSize(MAX_TRANSLOG_BUFFER_SIZE_SETTING, null);\n if (translogBuffer.bytes() < minTranslogBuffer.bytes()) {\n translogBuffer = minTranslogBuffer;\n }\n@@ -113,21 +157,26 @@ public IndexingMemoryController(Settings settings, ThreadPool threadPool, Indice\n translogBuffer = ByteSizeValue.parseBytesSizeValue(translogBufferSetting, null);\n }\n this.translogBuffer = translogBuffer;\n- this.minShardTranslogBufferSize = componentSettings.getAsBytesSize(\"min_shard_translog_buffer_size\", new ByteSizeValue(2, ByteSizeUnit.KB));\n- this.maxShardTranslogBufferSize = componentSettings.getAsBytesSize(\"max_shard_translog_buffer_size\", new ByteSizeValue(64, ByteSizeUnit.KB));\n+ this.minShardTranslogBufferSize = settings.getAsBytesSize(MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(2, ByteSizeUnit.KB));\n+ this.maxShardTranslogBufferSize = settings.getAsBytesSize(MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, new ByteSizeValue(64, ByteSizeUnit.KB));\n \n- this.inactiveTime = componentSettings.getAsTime(\"shard_inactive_time\", TimeValue.timeValueMinutes(5));\n+ this.inactiveTime = settings.getAsTime(SHARD_INACTIVE_TIME_SETTING, TimeValue.timeValueMinutes(5));\n // we need to have this relatively small to move a shard from inactive to active fast (enough)\n- this.interval = componentSettings.getAsTime(\"interval\", TimeValue.timeValueSeconds(30));\n-\n- logger.debug(\"using index_buffer_size [{}], with min_shard_index_buffer_size [{}], max_shard_index_buffer_size [{}], shard_inactive_time [{}]\", this.indexingBuffer, this.minShardIndexBufferSize, this.maxShardIndexBufferSize, this.inactiveTime);\n-\n+ this.interval = 
settings.getAsTime(SHARD_INACTIVE_INTERVAL_TIME_SETTING, TimeValue.timeValueSeconds(30));\n+\n+ this.statusChecker = new ShardsIndicesStatusChecker();\n+ logger.debug(\"using indexing buffer size [{}], with {} [{}], {} [{}], {} [{}], {} [{}]\",\n+ this.indexingBuffer,\n+ MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, this.minShardIndexBufferSize,\n+ MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, this.maxShardIndexBufferSize,\n+ SHARD_INACTIVE_TIME_SETTING, this.inactiveTime,\n+ SHARD_INACTIVE_INTERVAL_TIME_SETTING, this.interval);\n }\n \n @Override\n protected void doStart() throws ElasticsearchException {\n // its fine to run it on the scheduler thread, no busy work\n- this.scheduler = threadPool.scheduleWithFixedDelay(new ShardsIndicesStatusChecker(), interval);\n+ this.scheduler = threadPool.scheduleWithFixedDelay(statusChecker, interval);\n }\n \n @Override\n@@ -148,6 +197,89 @@ public ByteSizeValue indexingBufferSize() {\n return indexingBuffer;\n }\n \n+ /**\n+ * returns the current budget for the total amount of translog buffers of\n+ * active shards on this node\n+ */\n+ public ByteSizeValue translogBufferSize() {\n+ return translogBuffer;\n+ }\n+\n+\n+ protected List<ShardId> availableShards() {\n+ ArrayList<ShardId> list = new ArrayList<>();\n+\n+ for (IndexService indexService : indicesService) {\n+ for (IndexShard indexShard : indexService) {\n+ if (shardAvailable(indexShard)) {\n+ list.add(indexShard.shardId());\n+ }\n+ }\n+ }\n+ return list;\n+ }\n+\n+ /** returns true if shard exists and is availabe for updates */\n+ protected boolean shardAvailable(ShardId shardId) {\n+ return shardAvailable(getShard(shardId));\n+ }\n+\n+ /** returns true if shard exists and is availabe for updates */\n+ protected boolean shardAvailable(@Nullable IndexShard shard) {\n+ return shard != null && CAN_UPDATE_INDEX_BUFFER_STATES.contains(shard.state());\n+ }\n+\n+ /** gets an {@link IndexShard} instance for the given shard. returns null if the shard doesn't exist */\n+ protected IndexShard getShard(ShardId shardId) {\n+ IndexService indexService = indicesService.indexService(shardId.index().name());\n+ if (indexService != null) {\n+ IndexShard indexShard = indexService.shard(shardId.id());\n+ return indexShard;\n+ }\n+ return null;\n+ }\n+\n+ protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n+ final IndexShard shard = getShard(shardId);\n+ if (shard != null) {\n+ try {\n+ shard.updateBufferSize(shardIndexingBufferSize, shardTranslogBufferSize);\n+ } catch (EngineClosedException e) {\n+ // ignore\n+ } catch (FlushNotAllowedEngineException e) {\n+ // ignore\n+ } catch (Exception e) {\n+ logger.warn(\"failed to set shard {} index buffer to [{}]\", shardId, shardIndexingBufferSize);\n+ }\n+ }\n+ }\n+\n+\n+ /** returns the current translog status (generation id + ops) for the given shard id. Returns null if unavailable. 
*/\n+ protected ShardIndexingStatus getTranslogStatus(ShardId shardId) {\n+ final IndexShard indexShard = getShard(shardId);\n+ if (indexShard == null) {\n+ return null;\n+ }\n+ final Translog translog;\n+ try {\n+ translog = indexShard.translog();\n+ } catch (EngineClosedException e) {\n+ // not ready yet to be checked for activity\n+ return null;\n+ }\n+\n+ ShardIndexingStatus status = new ShardIndexingStatus();\n+ status.translogId = translog.currentId();\n+ status.translogNumberOfOperations = translog.estimatedNumberOfOperations();\n+ return status;\n+ }\n+\n+ // used for tests\n+ void forceCheck() {\n+ statusChecker.run();\n+ }\n+\n class ShardsIndicesStatusChecker implements Runnable {\n \n private final Map<ShardId, ShardIndexingStatus> shardsIndicesStatus = new HashMap<>();\n@@ -159,17 +291,10 @@ public void run() {\n \n changes.addAll(purgeDeletedAndClosedShards());\n \n- final List<IndexShard> activeToInactiveIndexingShards = Lists.newArrayList();\n+ final List<ShardId> activeToInactiveIndexingShards = new ArrayList<>();\n final int activeShards = updateShardStatuses(changes, activeToInactiveIndexingShards);\n- for (IndexShard indexShard : activeToInactiveIndexingShards) {\n- // update inactive indexing buffer size\n- try {\n- indexShard.markAsInactive();\n- } catch (EngineClosedException e) {\n- // ignore\n- } catch (FlushNotAllowedEngineException e) {\n- // ignore\n- }\n+ for (ShardId indexShard : activeToInactiveIndexingShards) {\n+ markShardAsInactive(indexShard);\n }\n if (!changes.isEmpty()) {\n calcAndSetShardBuffers(activeShards, \"[\" + changes + \"]\");\n@@ -181,57 +306,40 @@ public void run() {\n *\n * @return the current count of active shards\n */\n- private int updateShardStatuses(EnumSet<ShardStatusChangeType> changes, List<IndexShard> activeToInactiveIndexingShards) {\n+ private int updateShardStatuses(EnumSet<ShardStatusChangeType> changes, List<ShardId> activeToInactiveIndexingShards) {\n int activeShards = 0;\n- for (IndexService indexService : indicesService) {\n- for (IndexShard indexShard : indexService) {\n-\n- if (!CAN_UPDATE_INDEX_BUFFER_STATES.contains(indexShard.state())) {\n- // not ready to be updated yet.\n- continue;\n- }\n-\n- final long time = threadPool.estimatedTimeInMillis();\n+ for (ShardId shardId : availableShards()) {\n+ final ShardIndexingStatus currentStatus = getTranslogStatus(shardId);\n \n- Translog translog = indexShard.translog();\n- ShardIndexingStatus status = shardsIndicesStatus.get(indexShard.shardId());\n- if (status == null) {\n- status = new ShardIndexingStatus();\n- shardsIndicesStatus.put(indexShard.shardId(), status);\n- changes.add(ShardStatusChangeType.ADDED);\n- }\n- // check if it is deemed to be inactive (sam translogId and numberOfOperations over a long period of time)\n- if (status.translogId == translog.currentId() && translog.estimatedNumberOfOperations() == status.translogNumberOfOperations) {\n- if (status.time == -1) { // first time\n- status.time = time;\n- }\n- // inactive?\n- if (status.activeIndexing) {\n- // mark it as inactive only if enough time has passed and there are no ongoing merges going on...\n- if ((time - status.time) > inactiveTime.millis() && indexShard.mergeStats().getCurrent() == 0) {\n- // inactive for this amount of time, mark it\n- activeToInactiveIndexingShards.add(indexShard);\n- status.activeIndexing = false;\n- changes.add(ShardStatusChangeType.BECAME_INACTIVE);\n- logger.debug(\"marking shard [{}][{}] as inactive (inactive_time[{}]) indexing wise, setting size to [{}]\", 
indexShard.shardId().index().name(), indexShard.shardId().id(), inactiveTime, EngineConfig.INACTIVE_SHARD_INDEXING_BUFFER);\n- }\n- }\n- } else {\n- if (!status.activeIndexing) {\n- status.activeIndexing = true;\n- changes.add(ShardStatusChangeType.BECAME_ACTIVE);\n- logger.debug(\"marking shard [{}][{}] as active indexing wise\", indexShard.shardId().index().name(), indexShard.shardId().id());\n- }\n- status.time = -1;\n- }\n- status.translogId = translog.currentId();\n- status.translogNumberOfOperations = translog.estimatedNumberOfOperations();\n+ if (currentStatus == null) {\n+ // shard was closed..\n+ continue;\n+ }\n \n- if (status.activeIndexing) {\n- activeShards++;\n+ ShardIndexingStatus status = shardsIndicesStatus.get(shardId);\n+ if (status == null) {\n+ status = currentStatus;\n+ shardsIndicesStatus.put(shardId, status);\n+ changes.add(ShardStatusChangeType.ADDED);\n+ } else {\n+ final boolean lastActiveIndexing = status.activeIndexing;\n+ status.updateWith(currentTimeInNanos(), currentStatus, inactiveTime.nanos());\n+ if (lastActiveIndexing && (status.activeIndexing == false)) {\n+ activeToInactiveIndexingShards.add(shardId);\n+ changes.add(ShardStatusChangeType.BECAME_INACTIVE);\n+ logger.debug(\"marking shard {} as inactive (inactive_time[{}]) indexing wise, setting size to [{}]\",\n+ shardId,\n+ inactiveTime, EngineConfig.INACTIVE_SHARD_INDEXING_BUFFER);\n+ } else if ((lastActiveIndexing == false) && status.activeIndexing) {\n+ changes.add(ShardStatusChangeType.BECAME_ACTIVE);\n+ logger.debug(\"marking shard {} as active indexing wise\", shardId);\n }\n }\n+ if (status.activeIndexing) {\n+ activeShards++;\n+ }\n }\n+\n return activeShards;\n }\n \n@@ -245,26 +353,10 @@ private EnumSet<ShardStatusChangeType> purgeDeletedAndClosedShards() {\n \n Iterator<ShardId> statusShardIdIterator = shardsIndicesStatus.keySet().iterator();\n while (statusShardIdIterator.hasNext()) {\n- ShardId statusShardId = statusShardIdIterator.next();\n- IndexService indexService = indicesService.indexService(statusShardId.getIndex());\n- boolean remove = false;\n- try {\n- if (indexService == null) {\n- remove = true;\n- continue;\n- }\n- IndexShard indexShard = indexService.shard(statusShardId.id());\n- if (indexShard == null) {\n- remove = true;\n- continue;\n- }\n- remove = !CAN_UPDATE_INDEX_BUFFER_STATES.contains(indexShard.state());\n-\n- } finally {\n- if (remove) {\n- changes.add(ShardStatusChangeType.DELETED);\n- statusShardIdIterator.remove();\n- }\n+ ShardId shardId = statusShardIdIterator.next();\n+ if (shardAvailable(shardId) == false) {\n+ changes.add(ShardStatusChangeType.DELETED);\n+ statusShardIdIterator.remove();\n }\n }\n return changes;\n@@ -291,41 +383,80 @@ private void calcAndSetShardBuffers(int activeShards, String reason) {\n }\n \n logger.debug(\"recalculating shard indexing buffer (reason={}), total is [{}] with [{}] active shards, each shard set to indexing=[{}], translog=[{}]\", reason, indexingBuffer, activeShards, shardIndexingBufferSize, shardTranslogBufferSize);\n- for (IndexService indexService : indicesService) {\n- for (IndexShard indexShard : indexService) {\n- IndexShardState state = indexShard.state();\n- if (!CAN_UPDATE_INDEX_BUFFER_STATES.contains(state)) {\n- logger.trace(\"shard [{}] is not yet ready for index buffer update. 
index shard state: [{}]\", indexShard.shardId(), state);\n- continue;\n- }\n- ShardIndexingStatus status = shardsIndicesStatus.get(indexShard.shardId());\n- if (status == null || status.activeIndexing) {\n- try {\n- indexShard.updateBufferSize(shardIndexingBufferSize, shardTranslogBufferSize);\n- } catch (EngineClosedException e) {\n- // ignore\n- continue;\n- } catch (FlushNotAllowedEngineException e) {\n- // ignore\n- continue;\n- } catch (Exception e) {\n- logger.warn(\"failed to set shard {} index buffer to [{}]\", indexShard.shardId(), shardIndexingBufferSize);\n- }\n- }\n+ for (ShardId shardId : availableShards()) {\n+ ShardIndexingStatus status = shardsIndicesStatus.get(shardId);\n+ if (status == null || status.activeIndexing) {\n+ updateShardBuffers(shardId, shardIndexingBufferSize, shardTranslogBufferSize);\n }\n }\n }\n }\n \n+ protected long currentTimeInNanos() {\n+ return System.nanoTime();\n+ }\n+\n+ // update inactive indexing buffer size\n+ protected void markShardAsInactive(ShardId shardId) {\n+ String ignoreReason = null;\n+ final IndexShard shard = getShard(shardId);\n+ if (shard != null) {\n+ try {\n+ shard.markAsInactive();\n+ } catch (EngineClosedException e) {\n+ // ignore\n+ ignoreReason = \"EngineClosedException\";\n+ } catch (FlushNotAllowedEngineException e) {\n+ // ignore\n+ ignoreReason = \"FlushNotAllowedEngineException\";\n+ }\n+ } else {\n+ ignoreReason = \"shard not found\";\n+ }\n+ if (ignoreReason != null) {\n+ logger.trace(\"ignore [{}] while marking shard {} as inactive\", ignoreReason, shardId);\n+ }\n+ }\n+\n private static enum ShardStatusChangeType {\n ADDED, DELETED, BECAME_ACTIVE, BECAME_INACTIVE\n }\n \n-\n static class ShardIndexingStatus {\n long translogId = -1;\n int translogNumberOfOperations = -1;\n boolean activeIndexing = true;\n- long time = -1; // contains the first time we saw this shard with no operations done on it\n+ long idleSinceNanoTime = -1; // contains the first time we saw this shard with no operations done on it\n+\n+\n+ /** update status based on a new sample. updates all internal variables */\n+ public void updateWith(long currentNanoTime, ShardIndexingStatus current, long inactiveNanoInterval) {\n+ final boolean idle = (translogId == current.translogId && translogNumberOfOperations == current.translogNumberOfOperations);\n+ if (activeIndexing && idle) {\n+ // no indexing activity detected.\n+ if (idleSinceNanoTime < 0) {\n+ // first time we see this, start the clock.\n+ idleSinceNanoTime = currentNanoTime;\n+ } else if ((currentNanoTime - idleSinceNanoTime) > inactiveNanoInterval) {\n+ // shard is inactive. mark it as such.\n+ activeIndexing = false;\n+ }\n+ } else if (activeIndexing == false // we weren't indexing before\n+ && idle == false // but we do now\n+ && current.translogNumberOfOperations > 0 // but only if we're really sure - see note bellow\n+ ) {\n+ // since we sync flush once a shard becomes inactive, the translog id can change, however that\n+ // doesn't mean the an indexing operation has happened. Note that if we're really unlucky and a flush happens\n+ // immediately after an indexing operation we may not become active immediately. The following\n+ // indexing operation will mark the shard as active, so it's OK. 
If that one doesn't come, we might as well stay\n+ // inactive\n+\n+ activeIndexing = true;\n+ idleSinceNanoTime = -1;\n+ }\n+\n+ translogId = current.translogId;\n+ translogNumberOfOperations = current.translogNumberOfOperations;\n+ }\n }\n }", "filename": "src/main/java/org/elasticsearch/indices/memory/IndexingMemoryController.java", "status": "modified" }, { "diff": "@@ -0,0 +1,286 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.indices.memory;\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.not;\n+\n+public class IndexingMemoryControllerUnitTests extends ElasticsearchTestCase {\n+\n+ static class MockController extends IndexingMemoryController {\n+\n+ final static ByteSizeValue INACTIVE = new ByteSizeValue(-1);\n+\n+ final Map<ShardId, Long> translogIds = new HashMap<>();\n+ final Map<ShardId, Integer> translogOps = new HashMap<>();\n+\n+ final Map<ShardId, ByteSizeValue> indexingBuffers = new HashMap<>();\n+ final Map<ShardId, ByteSizeValue> translogBuffers = new HashMap<>();\n+\n+ long currentTimeSec = TimeValue.timeValueNanos(System.nanoTime()).seconds();\n+\n+ public MockController(Settings settings) {\n+ super(ImmutableSettings.builder()\n+ .put(SHARD_INACTIVE_INTERVAL_TIME_SETTING, \"200h\") // disable it\n+ .put(SHARD_INACTIVE_TIME_SETTING, \"0s\") // immediate\n+ .put(settings)\n+ .build(),\n+ null, null, 100 * 1024 * 1024); // fix jvm mem size to 100mb\n+ }\n+\n+ public void incTranslog(ShardId shard1, int id, int ops) {\n+ setTranslog(shard1, translogIds.get(shard1) + id, translogOps.get(shard1) + ops);\n+ }\n+\n+ public void setTranslog(ShardId id, long translogId, int ops) {\n+ translogIds.put(id, translogId);\n+ translogOps.put(id, ops);\n+ }\n+\n+ public void deleteShard(ShardId id) {\n+ translogIds.remove(id);\n+ translogOps.remove(id);\n+ indexingBuffers.remove(id);\n+ translogBuffers.remove(id);\n+ }\n+\n+ public void assertActive(ShardId id) {\n+ assertThat(indexingBuffers.get(id), not(equalTo(INACTIVE)));\n+ assertThat(translogBuffers.get(id), not(equalTo(INACTIVE)));\n+ }\n+\n+ public void assertBuffers(ShardId id, ByteSizeValue indexing, ByteSizeValue translog) {\n+ assertThat(indexingBuffers.get(id), equalTo(indexing));\n+ 
assertThat(translogBuffers.get(id), equalTo(translog));\n+ }\n+\n+ public void assertInActive(ShardId id) {\n+ assertThat(indexingBuffers.get(id), equalTo(INACTIVE));\n+ assertThat(translogBuffers.get(id), equalTo(INACTIVE));\n+ }\n+\n+ @Override\n+ protected long currentTimeInNanos() {\n+ return TimeValue.timeValueSeconds(currentTimeSec).nanos();\n+ }\n+\n+ @Override\n+ protected List<ShardId> availableShards() {\n+ return new ArrayList<>(translogIds.keySet());\n+ }\n+\n+ @Override\n+ protected boolean shardAvailable(ShardId shardId) {\n+ return translogIds.containsKey(shardId);\n+ }\n+\n+ @Override\n+ protected void markShardAsInactive(ShardId shardId) {\n+ indexingBuffers.put(shardId, INACTIVE);\n+ translogBuffers.put(shardId, INACTIVE);\n+ }\n+\n+ @Override\n+ protected ShardIndexingStatus getTranslogStatus(ShardId shardId) {\n+ if (!shardAvailable(shardId)) {\n+ return null;\n+ }\n+ ShardIndexingStatus status = new ShardIndexingStatus();\n+ status.translogId = translogIds.get(shardId);\n+ status.translogNumberOfOperations = translogOps.get(shardId);\n+ return status;\n+ }\n+\n+ @Override\n+ protected void updateShardBuffers(ShardId shardId, ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {\n+ indexingBuffers.put(shardId, shardIndexingBufferSize);\n+ translogBuffers.put(shardId, shardTranslogBufferSize);\n+ }\n+\n+ public void incrementTimeSec(int sec) {\n+ currentTimeSec += sec;\n+ }\n+\n+ public void simulateFlush(ShardId shard) {\n+ setTranslog(shard, translogIds.get(shard) + 1, 0);\n+ }\n+ }\n+\n+ public void testShardAdditionAndRemoval() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\").build());\n+ final ShardId shard1 = new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+\n+ // add another shard\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // remove first shard\n+ controller.deleteShard(shard1);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard2, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+\n+ // remove second shard\n+ controller.deleteShard(shard2);\n+ controller.forceCheck();\n+\n+ // add a new one\n+ final ShardId shard3 = new ShardId(\"test\", 3);\n+ controller.setTranslog(shard3, randomInt(10), randomInt(10));\n+ controller.forceCheck();\n+ controller.assertBuffers(shard3, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB)); // translog is maxed at 64K\n+ }\n+\n+ public void testActiveInactive() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"100kb\")\n+ .put(IndexingMemoryController.SHARD_INACTIVE_TIME_SETTING, \"5s\")\n+ .build());\n+\n+ final ShardId shard1 = 
new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, 0, 0);\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, 0, 0);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // index into both shards, move the clock and see that they are still active\n+ controller.setTranslog(shard1, randomInt(2), randomInt(2) + 1);\n+ controller.setTranslog(shard2, randomInt(2) + 1, randomInt(2));\n+ // the controller doesn't know when the ops happened, so even if this is more\n+ // than the inactive time the shard is still marked as active\n+ controller.incrementTimeSec(10);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ // index into one shard only, see other shard is made inactive correctly\n+ controller.incTranslog(shard1, randomInt(2), randomInt(2) + 1);\n+ controller.forceCheck(); // register what happened with the controller (shard is still active)\n+ controller.incrementTimeSec(3); // increment but not enough\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+\n+ controller.incrementTimeSec(3); // increment some more\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(10, ByteSizeUnit.MB), new ByteSizeValue(64, ByteSizeUnit.KB));\n+ controller.assertInActive(shard2);\n+\n+ if (randomBoolean()) {\n+ // once a shard gets inactive it will be synced flushed and a new translog generation will be made\n+ controller.simulateFlush(shard2);\n+ controller.forceCheck();\n+ controller.assertInActive(shard2);\n+ }\n+\n+ // index some and shard becomes immediately active\n+ controller.incTranslog(shard2, randomInt(2), 1 + randomInt(2)); // we must make sure translog ops is never 0\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ controller.assertBuffers(shard2, new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(50, ByteSizeUnit.KB));\n+ }\n+\n+ public void testMinShardBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MIN_SHARD_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, \"40kb\").build());\n+\n+ assertTwoActiveShards(controller, new ByteSizeValue(6, ByteSizeUnit.MB), new ByteSizeValue(40, ByteSizeUnit.KB));\n+ }\n+\n+ public void testMaxShardBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"10mb\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"50kb\")\n+ .put(IndexingMemoryController.MAX_SHARD_INDEX_BUFFER_SIZE_SETTING, \"3mb\")\n+ .put(IndexingMemoryController.MAX_SHARD_TRANSLOG_BUFFER_SIZE_SETTING, 
\"10kb\").build());\n+\n+ assertTwoActiveShards(controller, new ByteSizeValue(3, ByteSizeUnit.MB), new ByteSizeValue(10, ByteSizeUnit.KB));\n+ }\n+\n+ public void testRelativeBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"50%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.5%\")\n+ .build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(50, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+\n+ public void testMinBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"0.001%\")\n+ .put(IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MIN_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+ public void testMaxBufferSizes() {\n+ MockController controller = new MockController(ImmutableSettings.builder()\n+ .put(IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.TRANSLOG_BUFFER_SIZE_SETTING, \"90%\")\n+ .put(IndexingMemoryController.MAX_INDEX_BUFFER_SIZE_SETTING, \"6mb\")\n+ .put(IndexingMemoryController.MAX_TRANSLOG_BUFFER_SIZE_SETTING, \"512kb\").build());\n+\n+ assertThat(controller.indexingBufferSize(), equalTo(new ByteSizeValue(6, ByteSizeUnit.MB)));\n+ assertThat(controller.translogBufferSize(), equalTo(new ByteSizeValue(512, ByteSizeUnit.KB)));\n+ }\n+\n+ protected void assertTwoActiveShards(MockController controller, ByteSizeValue indexBufferSize, ByteSizeValue translogBufferSize) {\n+ final ShardId shard1 = new ShardId(\"test\", 1);\n+ controller.setTranslog(shard1, 0, 0);\n+ final ShardId shard2 = new ShardId(\"test\", 2);\n+ controller.setTranslog(shard2, 0, 0);\n+ controller.forceCheck();\n+ controller.assertBuffers(shard1, indexBufferSize, translogBufferSize);\n+ controller.assertBuffers(shard2, indexBufferSize, translogBufferSize);\n+\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/indices/memory/IndexingMemoryControllerUnitTests.java", "status": "added" } ] }
{ "body": "By default, our security stuff will reject this (as do other apps).\nSee https://bugs.launchpad.net/ubuntu/+source/jayatana/+bug/1441487\n\nHowever its not really the user's fault, ubuntu screws up here by\ninstalling this agent by default. We don't want any agents.\n\nSo instead, we drop it like this:\n\n```\n$ bin/elasticsearch\nWarning: Ignoring JAVA_TOOL_OPTIONS=-Bogus1 -Bogus2\nPlease pass JVM parameters via JAVA_OPTS instead\n[2015-09-25 23:34:39,777][INFO ][node ] [Doctor Bong] version[3.0.0-SNAPSHOT], pid[19044], build[2f5b6ea/2015-09-26T03:18:16Z]\n...\n```\n\nCloses #13785\n", "comments": [ { "body": "LGTM\n", "created_at": "2015-09-26T03:39:04Z" }, { "body": "It looks good to me\n", "created_at": "2015-09-26T03:39:09Z" }, { "body": "This should be fixed in Ubuntu (by getting rid of this hack), and not require workaround in Elasticsearch etc.\n", "created_at": "2015-09-29T08:28:17Z" }, { "body": "Don't complain to me, I don't work on ubuntu.\n", "created_at": "2015-09-29T09:52:29Z" }, { "body": "I don't have a link but when I looked around it looked like ubuntu is\nmoving away from it. But they'll be slow and we got this into the first\nversion that suffers from it.\nOn Sep 29, 2015 11:52 AM, \"Robert Muir\" notifications@github.com wrote:\n\n> Don't complain to me, I don't work on ubuntu.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/pull/13813#issuecomment-144009788\n> .\n", "created_at": "2015-09-29T10:22:26Z" }, { "body": "I linked to the bug on this issue.\n", "created_at": "2015-09-29T10:23:18Z" } ], "number": 13813, "title": "Don't let ubuntu try to install its crazy jayatana agent." }
{ "body": "Installs javatana in vivid, emulates its on-login actions when starting\nelasticsearch and verifies that elasticsearch turns off javatana.\n\nRelates to #13813\n", "number": 13821, "review_comments": [], "title": "Test that Jayatana is ignored" }
{ "commits": [ { "message": "[test] Jayatana is ignored\n\nInstalls javatana in vivid, emulates its on-login actions when starting\nelasticsearch and verifies that elasticsearch turns off javatana.\n\nRelates to #13813" } ], "files": [ { "diff": "@@ -32,7 +32,10 @@ Vagrant.configure(2) do |config|\n end\n config.vm.define \"vivid\" do |config|\n config.vm.box = \"ubuntu/vivid64\"\n- ubuntu_common config\n+ ubuntu_common config, extra: <<-SHELL\n+ # Install Jayatana so we can work around it being present.\n+ [ -f /usr/share/java/jayatanaag.jar ] || install jayatana\n+ SHELL\n end\n # Wheezy's backports don't contain Openjdk 8 and the backflips required to\n # get the sun jdk on there just aren't worth it. We have jessie for testing\n@@ -116,11 +119,11 @@ SOURCE_PROMPT\n end\n end\n \n-def ubuntu_common(config)\n- deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*'\n+def ubuntu_common(config, extra: '')\n+ deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*', extra: extra\n end\n \n-def deb_common(config, add_openjdk_repository_command, openjdk_list)\n+def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')\n # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html\n config.vm.provision \"fix-no-tty\", type: \"shell\" do |s|\n s.privileged = false\n@@ -137,6 +140,7 @@ def deb_common(config, add_openjdk_repository_command, openjdk_list)\n (echo \"Importing java-8 ppa\" &&\n #{add_openjdk_repository_command} &&\n apt-get update)\n+ #{extra}\n SHELL\n )\n end", "filename": "Vagrantfile", "status": "modified" }, { "diff": "@@ -243,8 +243,14 @@ start_elasticsearch_service() {\n # su and the Elasticsearch init script work together to break bats.\n # sudo isolates bats enough from the init script so everything continues\n # to tick along\n- sudo -u elasticsearch /tmp/elasticsearch/bin/elasticsearch -d \\\n- -p /tmp/elasticsearch/elasticsearch.pid\n+ sudo -u elasticsearch bash <<BASH\n+# If jayatana is installed then we try to use it. Elasticsearch should ignore it even when we try.\n+# If it doesn't ignore it then Elasticsearch will fail to start because of security errors.\n+# This line is attempting to emulate the on login behavior of /usr/share/upstart/sessions/jayatana.conf\n+[ -f /usr/share/java/jayatanaag.jar ] && export JAVA_TOOL_OPTIONS=\"-javaagent:/usr/share/java/jayatanaag.jar\"\n+# And now we can start Elasticsearch normally, in the background (-d) and with a pidfile (-p).\n+/tmp/elasticsearch/bin/elasticsearch -d -p /tmp/elasticsearch/elasticsearch.pid\n+BASH\n elif is_systemd; then\n run systemctl daemon-reload\n [ \"$status\" -eq 0 ]", "filename": "qa/vagrant/src/test/resources/packaging/scripts/packaging_test_utils.bash", "status": "modified" } ] }
{ "body": "For example, `open_contexts`. When there are open search contexts when an index is being deleted, the count is accumulated in `OldShardsStats` and it keeps growing.\n(It looks it's because [freeing context](https://github.com/elastic/elasticsearch/blob/4173b00241cf9522427eeb81e9451619fe334c76/core/src/main/java/org/elasticsearch/search/SearchService.java#L175-L189) is called after [it collects old shard stats](https://github.com/elastic/elasticsearch/blob/8a06fef019a608153cc22b3ba6d29b5943348ead/core/src/main/java/org/elasticsearch/indices/IndicesService.java#L458-L469))\n\nAnother example is `current*` in merge stats. If an index is deleted while it's merging segment(s), those stats are accumulated.\n\nI think those point-in-time stats shouldn't be accumulated.\n", "comments": [ { "body": "Also, applies to closing indices. eg. start with a node restarted with no contexts outstanding, create a scroll, close index, reopen index, _stats api will show that contexts are cleared, but node stats api will show that there are still open contexts.\n", "created_at": "2015-09-28T10:49:40Z" } ], "number": 13386, "title": "Some stats for deleted indices shouldn't be accumulated" }
{ "body": "Closes #13386\n", "number": 13801, "review_comments": [], "title": "Omit current* stats for OldShardStats" }
{ "commits": [ { "message": "Omit current* stats for OldShardStats (closes #13386)" } ], "files": [ { "diff": "@@ -50,6 +50,10 @@ public void add(long total, long totalTimeInMillis) {\n }\n \n public void add(FlushStats flushStats) {\n+ addTotals(flushStats);\n+ }\n+\n+ public void addTotals(FlushStats flushStats) {\n if (flushStats == null) {\n return;\n }", "filename": "core/src/main/java/org/elasticsearch/index/flush/FlushStats.java", "status": "modified" }, { "diff": "@@ -51,6 +51,14 @@ public GetStats(long existsCount, long existsTimeInMillis, long missingCount, lo\n }\n \n public void add(GetStats stats) {\n+ if (stats == null) {\n+ return;\n+ }\n+ current += stats.current;\n+ addTotals(stats);\n+ }\n+\n+ public void addTotals(GetStats stats) {\n if (stats == null) {\n return;\n }", "filename": "core/src/main/java/org/elasticsearch/index/get/GetStats.java", "status": "modified" }, { "diff": "@@ -232,7 +232,7 @@ public void add(IndexingStats indexingStats, boolean includeTypes) {\n if (indexingStats == null) {\n return;\n }\n- totalStats.add(indexingStats.totalStats);\n+ addTotals(indexingStats);\n if (includeTypes && indexingStats.typeStats != null && !indexingStats.typeStats.isEmpty()) {\n if (typeStats == null) {\n typeStats = new HashMap<>(indexingStats.typeStats.size());\n@@ -248,6 +248,13 @@ public void add(IndexingStats indexingStats, boolean includeTypes) {\n }\n }\n \n+ public void addTotals(IndexingStats indexingStats) {\n+ if (indexingStats == null) {\n+ return;\n+ }\n+ totalStats.add(indexingStats.totalStats);\n+ }\n+\n public Stats getTotal() {\n return this.totalStats;\n }", "filename": "core/src/main/java/org/elasticsearch/index/indexing/IndexingStats.java", "status": "modified" }, { "diff": "@@ -76,16 +76,24 @@ public void add(long totalMerges, long totalMergeTime, long totalNumDocs, long t\n }\n \n public void add(MergeStats mergeStats) {\n+ if (mergeStats == null) {\n+ return;\n+ }\n+ this.current += mergeStats.current;\n+ this.currentNumDocs += mergeStats.currentNumDocs;\n+ this.currentSizeInBytes += mergeStats.currentSizeInBytes;\n+\n+ addTotals(mergeStats);\n+ }\n+\n+ public void addTotals(MergeStats mergeStats) {\n if (mergeStats == null) {\n return;\n }\n this.total += mergeStats.total;\n this.totalTimeInMillis += mergeStats.totalTimeInMillis;\n this.totalNumDocs += mergeStats.totalNumDocs;\n this.totalSizeInBytes += mergeStats.totalSizeInBytes;\n- this.current += mergeStats.current;\n- this.currentNumDocs += mergeStats.currentNumDocs;\n- this.currentSizeInBytes += mergeStats.currentSizeInBytes;\n this.totalStoppedTimeInMillis += mergeStats.totalStoppedTimeInMillis;\n this.totalThrottledTimeInMillis += mergeStats.totalThrottledTimeInMillis;\n if (this.totalBytesPerSecAutoThrottle == Long.MAX_VALUE || mergeStats.totalBytesPerSecAutoThrottle == Long.MAX_VALUE) {", "filename": "core/src/main/java/org/elasticsearch/index/merge/MergeStats.java", "status": "modified" }, { "diff": "@@ -47,15 +47,11 @@ public void add(RecoveryStats recoveryStats) {\n if (recoveryStats != null) {\n this.currentAsSource.addAndGet(recoveryStats.currentAsSource());\n this.currentAsTarget.addAndGet(recoveryStats.currentAsTarget());\n- this.throttleTimeInNanos.addAndGet(recoveryStats.throttleTime().nanos());\n }\n+ addTotals(recoveryStats);\n }\n \n- /**\n- * add statistics that should be accumulated about old shards after they have been\n- * deleted or relocated\n- */\n- public void addAsOld(RecoveryStats recoveryStats) {\n+ public void addTotals(RecoveryStats recoveryStats) {\n if 
(recoveryStats != null) {\n this.throttleTimeInNanos.addAndGet(recoveryStats.throttleTime().nanos());\n }", "filename": "core/src/main/java/org/elasticsearch/index/recovery/RecoveryStats.java", "status": "modified" }, { "diff": "@@ -50,6 +50,10 @@ public void add(long total, long totalTimeInMillis) {\n }\n \n public void add(RefreshStats refreshStats) {\n+ addTotals(refreshStats);\n+ }\n+\n+ public void addTotals(RefreshStats refreshStats) {\n if (refreshStats == null) {\n return;\n }", "filename": "core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java", "status": "modified" }, { "diff": "@@ -221,7 +221,7 @@ public void add(SearchStats searchStats, boolean includeTypes) {\n if (searchStats == null) {\n return;\n }\n- totalStats.add(searchStats.totalStats);\n+ addTotals(searchStats);\n openContexts += searchStats.openContexts;\n if (includeTypes && searchStats.groupStats != null && !searchStats.groupStats.isEmpty()) {\n if (groupStats == null) {\n@@ -238,6 +238,13 @@ public void add(SearchStats searchStats, boolean includeTypes) {\n }\n }\n \n+ public void addTotals(SearchStats searchStats) {\n+ if (searchStats == null) {\n+ return;\n+ }\n+ totalStats.add(searchStats.totalStats);\n+ }\n+\n public Stats getTotal() {\n return this.totalStats;\n }", "filename": "core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java", "status": "modified" }, { "diff": "@@ -446,13 +446,13 @@ static class OldShardsStats extends IndicesLifecycle.Listener {\n public synchronized void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard,\n @IndexSettings Settings indexSettings) {\n if (indexShard != null) {\n- getStats.add(indexShard.getStats());\n- indexingStats.add(indexShard.indexingStats(), false);\n- searchStats.add(indexShard.searchStats(), false);\n- mergeStats.add(indexShard.mergeStats());\n- refreshStats.add(indexShard.refreshStats());\n- flushStats.add(indexShard.flushStats());\n- recoveryStats.addAsOld(indexShard.recoveryStats());\n+ getStats.addTotals(indexShard.getStats());\n+ indexingStats.addTotals(indexShard.indexingStats());\n+ searchStats.addTotals(indexShard.searchStats());\n+ mergeStats.addTotals(indexShard.mergeStats());\n+ refreshStats.addTotals(indexShard.refreshStats());\n+ flushStats.addTotals(indexShard.flushStats());\n+ recoveryStats.addTotals(indexShard.recoveryStats());\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" } ] }
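To make the `add()`/`addTotals()` split above concrete, here is a tiny hypothetical stats class (not the real `SearchStats` or `MergeStats`): cumulative totals from a shard that is being closed or deleted are still folded into the node-level stats, while point-in-time gauges such as currently open contexts are not, so they no longer accumulate forever.

```java
public class ShardStatsExample {
    long queryTotal;   // cumulative counter: still meaningful after the shard goes away
    long openContexts; // point-in-time gauge: only meaningful while the shard exists

    /** Merge stats from a live shard: counters and gauges are both added. */
    void add(ShardStatsExample other) {
        addTotals(other);
        openContexts += other.openContexts;
    }

    /** Merge stats from a shard being closed or deleted: keep only the cumulative totals. */
    void addTotals(ShardStatsExample other) {
        queryTotal += other.queryTotal;
    }

    public static void main(String[] args) {
        ShardStatsExample nodeLevel = new ShardStatsExample();
        ShardStatsExample closingShard = new ShardStatsExample();
        closingShard.queryTotal = 42;
        closingShard.openContexts = 3;      // these contexts are freed as part of closing the shard
        nodeLevel.addTotals(closingShard);  // what the "old shards" accounting should do
        System.out.println(nodeLevel.queryTotal + " " + nodeLevel.openContexts); // prints: 42 0
    }
}
```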
{ "body": "here is how to reproduce (i know, kinda crazy):\n1. force GroovyScriptEngineService to make NPE on close:\n\n```\n+++ b/core/src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java\n@@ -99,17 +99,19 @@ public class GroovyScriptEngineService extends AbstractComponent implements Scri\n }\n\n // Groovy class loader to isolate Groovy-land code\n- this.loader = new GroovyClassLoader(getClass().getClassLoader(), config);\n+ this.loader = null; //new GroovyClassLoader(getClass().getClassLoader(), config);\n```\n1. mvn install -DskipTests from core/\n2. mvn test from plugins/cloud-azure. \n\nLeakFS detects a file leak:\n\n```\nThrowable #1: java.lang.RuntimeException: file handle leaks: [FileChannel(/home/rmuir/workspace/elasticsearch/plugins/cloud-azure/target/J0/temp/org.elasticsearch.index.store.SmbMMapFsTests_2C60FEE3D73A0680-001/tempDir-001/d0/SUITE-CHILD_VM=[0]-CLUSTER_SEED=[-1301520842927563395]-HASH=[11EC1C68B1C5F]-cluster/nodes/2/node.lock), \n```\n\nThe issue might just be our test harness stuff, but my concern is it could happen \"for real\" too. In the case of SimpleFSLock it could be annoying (user has to then remove lock file).\n", "comments": [], "number": 13685, "title": "Possibly leak of lock, if plugin hits exception on close()" }
{ "body": "We are leaking all kinds of resources if something during Node#close() barfs.\nThis commit cuts over to a list of closeables to release resources that\nalso closed remaining services if one or more services fail to close.\n\nCloses #13685\n", "number": 13693, "review_comments": [], "title": "Ensure all resoruces are closed on Node#close()" }
{ "commits": [ { "message": "Ensure all resoruces are closed on Node#close()\n\nWe are leaking all kinds of resources if something during Node#close() barfs.\nThis commit cuts over to a list of closeables to release resources that\nalso closed remaining services if one or more services fail to close.\n\nCloses #13685" } ], "files": [ { "diff": "@@ -29,13 +29,14 @@\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.io.Closeable;\n import java.util.Arrays;\n import java.util.Locale;\n \n import static org.elasticsearch.common.recycler.Recyclers.*;\n \n /** A recycler of fixed-size pages. */\n-public class PageCacheRecycler extends AbstractComponent {\n+public class PageCacheRecycler extends AbstractComponent implements Closeable {\n \n public static final String TYPE = \"recycler.page.type\";\n public static final String LIMIT_HEAP = \"recycler.page.limit.heap\";", "filename": "core/src/main/java/org/elasticsearch/cache/recycler/PageCacheRecycler.java", "status": "modified" }, { "diff": "@@ -69,6 +69,8 @@\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.settings.Settings;\n \n+import java.io.Closeable;\n+\n /**\n * A client provides a one stop interface for performing actions/operations against the cluster.\n * <p/>\n@@ -82,7 +84,7 @@\n * @see org.elasticsearch.node.Node#client()\n * @see org.elasticsearch.client.transport.TransportClient\n */\n-public interface Client extends ElasticsearchClient, Releasable {\n+public interface Client extends ElasticsearchClient, Releasable, Closeable {\n \n String CLIENT_TYPE_SETTING = \"client.type\";\n ", "filename": "core/src/main/java/org/elasticsearch/client/Client.java", "status": "modified" }, { "diff": "@@ -21,13 +21,14 @@\n \n import org.elasticsearch.common.settings.Settings;\n \n+import java.io.Closeable;\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n /**\n *\n */\n-public abstract class AbstractLifecycleComponent<T> extends AbstractComponent implements LifecycleComponent<T> {\n+public abstract class AbstractLifecycleComponent<T> extends AbstractComponent implements LifecycleComponent<T>, Closeable {\n \n protected final Lifecycle lifecycle = new Lifecycle();\n ", "filename": "core/src/main/java/org/elasticsearch/common/component/AbstractLifecycleComponent.java", "status": "modified" }, { "diff": "@@ -27,7 +27,7 @@\n /**\n *\n */\n-public interface LifecycleComponent<T> extends Releasable {\n+public interface LifecycleComponent<T> extends Releasable, Closeable {\n \n Lifecycle.State lifecycleState();\n ", "filename": "core/src/main/java/org/elasticsearch/common/component/LifecycleComponent.java", "status": "modified" }, { "diff": "@@ -41,12 +41,13 @@\n import org.elasticsearch.index.shard.ShardUtils;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.io.Closeable;\n import java.util.ArrayList;\n import java.util.List;\n \n /**\n */\n-public class IndicesFieldDataCache extends AbstractComponent implements RemovalListener<IndicesFieldDataCache.Key, Accountable> {\n+public class IndicesFieldDataCache extends AbstractComponent implements RemovalListener<IndicesFieldDataCache.Key, Accountable>, Closeable {\n \n public static final String FIELDDATA_CLEAN_INTERVAL_SETTING = \"indices.fielddata.cache.cleanup_interval\";\n public static final String FIELDDATA_CACHE_CONCURRENCY_LEVEL = \"indices.fielddata.cache.concurrency_level\";", "filename": 
"core/src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java", "status": "modified" }, { "diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.node;\n \n+import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.Build;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionModule;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n@@ -94,10 +96,9 @@\n import org.elasticsearch.watcher.ResourceWatcherModule;\n import org.elasticsearch.watcher.ResourceWatcherService;\n \n+import java.io.Closeable;\n import java.io.IOException;\n-import java.util.Arrays;\n-import java.util.Collection;\n-import java.util.Collections;\n+import java.util.*;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n@@ -329,87 +330,93 @@ public synchronized void close() {\n if (!lifecycle.moveToClosed()) {\n return;\n }\n+ List<Closeable> toClose = new ArrayList<>();\n \n ESLogger logger = Loggers.getLogger(Node.class, settings.get(\"name\"));\n logger.info(\"closing ...\");\n \n StopWatch stopWatch = new StopWatch(\"node_close\");\n- stopWatch.start(\"tribe\");\n- injector.getInstance(TribeService.class).close();\n- stopWatch.stop().start(\"http\");\n+ toClose.add(() -> stopWatch.start(\"tribe\"));\n+ toClose.add(injector.getInstance(TribeService.class));\n if (settings.getAsBoolean(\"http.enabled\", true)) {\n- injector.getInstance(HttpServer.class).close();\n+ toClose.add(() -> stopWatch.stop().start(\"http\"));\n+ toClose.add(injector.getInstance(HttpServer.class));\n }\n- stopWatch.stop().start(\"snapshot_service\");\n- injector.getInstance(SnapshotsService.class).close();\n- injector.getInstance(SnapshotShardsService.class).close();\n- stopWatch.stop().start(\"client\");\n- Releasables.close(injector.getInstance(Client.class));\n- stopWatch.stop().start(\"indices_cluster\");\n- injector.getInstance(IndicesClusterStateService.class).close();\n- stopWatch.stop().start(\"indices\");\n- injector.getInstance(IndexingMemoryController.class).close();\n- injector.getInstance(IndicesTTLService.class).close();\n- injector.getInstance(IndicesService.class).close();\n+ toClose.add(() -> stopWatch.stop().start(\"snapshot_service\"));\n+\n+ toClose.add(injector.getInstance(SnapshotsService.class));\n+ toClose.add(injector.getInstance(SnapshotShardsService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"client\"));\n+ toClose.add(injector.getInstance(Client.class));\n+ toClose.add(() -> stopWatch.stop().start(\"indices_cluster\"));\n+ toClose.add(injector.getInstance(IndicesClusterStateService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"indices\"));\n+ toClose.add(injector.getInstance(IndexingMemoryController.class));\n+ toClose.add(injector.getInstance(IndicesTTLService.class));\n+ toClose.add(injector.getInstance(IndicesService.class));\n // close filter/fielddata caches after indices\n- injector.getInstance(IndicesQueryCache.class).close();\n- injector.getInstance(IndicesFieldDataCache.class).close();\n- injector.getInstance(IndicesStore.class).close();\n- stopWatch.stop().start(\"routing\");\n- injector.getInstance(RoutingService.class).close();\n- stopWatch.stop().start(\"cluster\");\n- injector.getInstance(ClusterService.class).close();\n- stopWatch.stop().start(\"discovery\");\n- injector.getInstance(DiscoveryService.class).close();\n- stopWatch.stop().start(\"monitor\");\n- 
injector.getInstance(MonitorService.class).close();\n- stopWatch.stop().start(\"gateway\");\n- injector.getInstance(GatewayService.class).close();\n- stopWatch.stop().start(\"search\");\n- injector.getInstance(SearchService.class).close();\n- stopWatch.stop().start(\"rest\");\n- injector.getInstance(RestController.class).close();\n- stopWatch.stop().start(\"transport\");\n- injector.getInstance(TransportService.class).close();\n- stopWatch.stop().start(\"percolator_service\");\n- injector.getInstance(PercolatorService.class).close();\n+ toClose.add(injector.getInstance(IndicesQueryCache.class));\n+ toClose.add(injector.getInstance(IndicesFieldDataCache.class));\n+ toClose.add(injector.getInstance(IndicesStore.class));\n+ toClose.add(() -> stopWatch.stop().start(\"routing\"));\n+ toClose.add(injector.getInstance(RoutingService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"cluster\"));\n+ toClose.add(injector.getInstance(ClusterService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"discovery\"));\n+ toClose.add(injector.getInstance(DiscoveryService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"monitor\"));\n+ toClose.add(injector.getInstance(MonitorService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"gateway\"));\n+ toClose.add(injector.getInstance(GatewayService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"search\"));\n+ toClose.add(injector.getInstance(SearchService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"rest\"));\n+ toClose.add(injector.getInstance(RestController.class));\n+ toClose.add(() -> stopWatch.stop().start(\"transport\"));\n+ toClose.add(injector.getInstance(TransportService.class));\n+ toClose.add(() -> stopWatch.stop().start(\"percolator_service\"));\n+ toClose.add(injector.getInstance(PercolatorService.class));\n \n for (Class<? 
extends LifecycleComponent> plugin : pluginsService.nodeServices()) {\n- stopWatch.stop().start(\"plugin(\" + plugin.getName() + \")\");\n- injector.getInstance(plugin).close();\n+ toClose.add(() -> stopWatch.stop().start(\"plugin(\" + plugin.getName() + \")\"));\n+ toClose.add(injector.getInstance(plugin));\n }\n \n- stopWatch.stop().start(\"script\");\n- try {\n- injector.getInstance(ScriptService.class).close();\n- } catch(IOException e) {\n- logger.warn(\"ScriptService close failed\", e);\n- }\n-\n- stopWatch.stop().start(\"thread_pool\");\n+ toClose.add(() -> stopWatch.stop().start(\"script\"));\n+ toClose.add(injector.getInstance(ScriptService.class));\n // TODO this should really use ThreadPool.terminate()\n- injector.getInstance(ThreadPool.class).shutdown();\n- try {\n- injector.getInstance(ThreadPool.class).awaitTermination(10, TimeUnit.SECONDS);\n- } catch (InterruptedException e) {\n- // ignore\n- }\n- stopWatch.stop().start(\"thread_pool_force_shutdown\");\n- try {\n- injector.getInstance(ThreadPool.class).shutdownNow();\n- } catch (Exception e) {\n- // ignore\n- }\n- stopWatch.stop();\n+ toClose.add(() -> {\n+ stopWatch.stop().start(\"thread_pool\");\n+ injector.getInstance(ThreadPool.class).shutdown();\n+ try {\n+ injector.getInstance(ThreadPool.class).awaitTermination(10, TimeUnit.SECONDS);\n+ } catch (InterruptedException e) {\n+ // ignore\n+ } finally {\n+ stopWatch.stop().start(\"thread_pool_force_shutdown\");\n+ try {\n+ injector.getInstance(ThreadPool.class).shutdownNow();\n+ } catch (Exception e) {\n+ // ignore\n+ }\n+ }\n+ });\n \n+ toClose.add(() -> stopWatch.stop());\n if (logger.isTraceEnabled()) {\n logger.trace(\"Close times for each service:\\n{}\", stopWatch.prettyPrint());\n }\n \n- injector.getInstance(NodeEnvironment.class).close();\n- injector.getInstance(PageCacheRecycler.class).close();\n+ toClose.add(injector.getInstance(NodeEnvironment.class));\n+ toClose.add(injector.getInstance(PageCacheRecycler.class));\n+ try {\n+ IOUtils.close(toClose);\n+ } catch (IOException ex) {\n+ throw new ElasticsearchException(\"close failed \", ex);\n+ } finally {\n+ logger.info(\"closed\");\n+ }\n \n- logger.info(\"closed\");\n }\n \n ", "filename": "core/src/main/java/org/elasticsearch/node/Node.java", "status": "modified" }, { "diff": "@@ -88,13 +88,13 @@\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n-import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator;\n import org.elasticsearch.search.aggregations.pipeline.SiblingPipelineAggregator;\n import org.elasticsearch.search.highlight.HighlightField;\n import org.elasticsearch.search.highlight.HighlightPhase;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.sort.SortParseElement;\n \n+import java.io.Closeable;\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n@@ -107,7 +107,7 @@\n import static org.elasticsearch.percolator.QueryCollector.match;\n import static org.elasticsearch.percolator.QueryCollector.matchAndScore;\n \n-public class PercolatorService extends AbstractComponent {\n+public class PercolatorService extends AbstractComponent implements Closeable {\n \n public final static float NO_SCORE = Float.NEGATIVE_INFINITY;\n public final static String TYPE_NAME = \".percolator\";", "filename": 
"core/src/main/java/org/elasticsearch/percolator/PercolatorService.java", "status": "modified" } ] }
{ "body": "I try to get a document using date math, but Elasticsearch returns a error.\n\nVersion: Elasticsearch 2.0.0-beta2\n\nPut Document\n\n$ curl -XPUT 'localhost:9200/logstash-2015.09.19/animals/1' -d '{\"name\":\"puppy\"}'\n{\"_index\":\"logstash-2015.09.19\",\"_type\":\"animals\",\"_id\":\"1\",\"_version\":1,\"_shards\":{\"total\":2,\"successful\":1,\"failed\":0},\"created\":true}\n\nGet Document\n\n$ curl -XGET 'localhost:9200/logstash-2015.09.19/animals/1'\n{\"_index\":\"logstash-2015.09.19\",\"_type\":\"animals\",\"_id\":\"1\",\"_version\":1,\"found\":true,\"_source\":{\"name\":\"puppy\"}}\n\nGet Document with Date Math\n\n$ curl -XGET 'localhost:9200/`<logstash-{now/d}`>/animals/1'\nNo handler found for uri `[/<logstash-now/d>/animals/1]` and method [GET]\n", "comments": [ { "body": "It's not supposed to work AFAIK. Did you see that somewhere in docs?\nYou should do that on the client side.\n\nMay be using an alias would help you?\n", "created_at": "2015-09-20T05:56:04Z" }, { "body": "This page includes the date math information.\nhttps://www.elastic.co/guide/en/elasticsearch/reference/master/date-math-index-names.html\n\nExample from page.\ncurl -XGET 'localhost:9200/<logstash-{now/d-2d}>/_search' {\n \"query\" : {\n ...\n }\n}\n\nI try this curl command and get a error.\ncurl -XGET 'localhost:9200/<logstash-{now/d}>/_search' -d '{\n \"query\" : {\n \"match\" : {}\n }\n}'\n\n{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"<logstash-now\",\"index\":\"<logstash-now\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"<logstash-now\",\"index\":\"<logstash-now\"},\"status\":404}\n\nThere is also this pull request.\n\nAdd date math support in index names #12209\nhttps://github.com/elastic/elasticsearch/pull/12209\n", "created_at": "2015-09-20T06:27:15Z" }, { "body": "Great. I missed that !\n", "created_at": "2015-09-20T08:46:40Z" }, { "body": "Thanks for reporting @sirdavidhuang, this is indeed a bug. The issue here is that the index name is cut of at the date rounding (`/)` and only `<logstash-{now` ends up being received as index name.\n", "created_at": "2015-09-21T07:08:59Z" }, { "body": "Hi @sirdavidhuang we had initially treated this as a bug and fixed it, but afterwards we found out that the fix introduced a regression (#14177), which made us take a step back. Any slash that should not be used as a path separator in a uri should be properly escaped, and it is wrong to try and make distinctions between the different slashes on the server side depending on what surrounds them, that heuristic is not going to fly (in fact it didn't :) )\n\nWe are going to revert the initial fix then, the solution to this problem is to escape the '/' in the url like this: `curl -XGET 'localhost:9200/<logstash-{now%2Fd}>/animals/1'`. That is going to work and didn't require any fix in the first place.\n", "created_at": "2015-10-20T20:02:13Z" } ], "number": 13665, "title": "Date Math Not Working for GET" }
{ "body": "Index name expressions like: `<logstash-{now/D}>` are broken up into: `<logstash-{now` and `D}>`. This shouldn't happen. This PR fixes this by preventing the `PathTrie` to split based on `/` if it is currently in between between `<` and `>` characters.\n\nPR for #13665\n", "number": 13691, "review_comments": [ { "body": "more of a question than a comment: seems to me that using an array here complicates the code a bit. I guess that is for performance reasons? Otherwise we could use a list and iterate only once, and also save us the arraycopy at the end?\n", "created_at": "2015-09-23T07:37:20Z" }, { "body": "+1 This simplifies the logic.\n", "created_at": "2015-09-25T08:53:47Z" } ], "title": "Index name expressions should not be broken up" }
{ "commits": [], "files": [] }
{ "body": "Based on @jprante comment here https://github.com/elastic/elasticsearch/pull/13314#issuecomment-137662468\n\n> Just a note: `mvn javadoc:javadoc` fails, because Java 8 javadoc performs a strict doclint check. See also http://openjdk.java.net/jeps/172\n", "comments": [], "number": 13336, "title": "[build] mvn javadoc:javadoc fails with Java 8" }
{ "body": "Java 8's javadoc defaults to very strict linting. It is very `-Wall -Werr`\nstyle. And Elasticsearch's Javadocs do not pass and it'd be a huge and not\nsuper useful effort to get them to pass the linting. So this disables it.\n\nCloses #13336\n", "number": 13689, "review_comments": [], "title": "Disable doclint" }
{ "commits": [ { "message": "[build] disable doclint\n\nJava 8's javadoc defaults to very strict linting. It is very `-Wall -Werr`\nstyle. And Elasticsearch's Javadocs do not pass and it'd be a huge and not\nsuper useful effort to get them to pass the linting. So this disables it.\n\nCloses #13336" } ], "files": [ { "diff": "@@ -1261,10 +1261,14 @@ org.eclipse.jdt.ui.text.custom_code_templates=<?xml version\\=\"1.0\" encoding\\=\"UT\n <version>2.0.0</version>\n </plugin>\n <plugin>\n- <!-- We just declare which plugin version to use. Each project can have then its own settings -->\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-javadoc-plugin</artifactId>\n <version>2.10.3</version>\n+ <configuration>\n+ <!-- Doclint defaults to very strict. While it'd be great to have wonderful Javadocs we\n+ prefer working Javadocs over total failure. -->\n+ <additionalparam>-Xdoclint:none</additionalparam>\n+ </configuration>\n </plugin>\n <plugin>\n <!-- We just declare which plugin version to use. Each project can have then its own settings -->", "filename": "pom.xml", "status": "modified" } ] }
{ "body": "Log messages used to remove the `org.elasticsearch.` prefix from the logger name, eg:\n\n```\n [2015-09-18 18:15:00,996][TRACE][discovery.zen.ping.unicast] [U-Man] [1] disconnecting from...\n```\n\nIn 2.0.0-beta2 it logs this line as:\n\n```\n[2015-09-18 18:15:00,996][TRACE][org.elasticsearch.discovery.zen.ping.unicast] [U-Man] [1] disconnecting from...\n```\n\nThe `config/logging.yml` still refers to the logger names without the prefix, eg:\n\n```\n # discovery\n discovery: TRACE\n```\n\nThis needs to be changed to the following to work:\n\n```\n # discovery\n org.elasticsearch.discovery: TRACE\n```\n\nPersonally I'd prefer to go back to removing the prefix, but not sure if there was a reason for adding the prefix in? Either way, the config file should be consistent with what we do in practice.\n", "comments": [ { "body": "I ran out of time to chase down the appropriate fix, but this is the first _static_ use of the `Loggers` class, which is triggering the system property to be read _before_ it's set in `org.elasticsearch.bootstrap.Bootstrap`.\n\n```\n at org.elasticsearch.common.logging.Loggers.<clinit>(Loggers.java:44) // I added a static block to catch it\n at org.elasticsearch.common.MacAddressProvider.<clinit>(MacAddressProvider.java:32)\n at org.elasticsearch.common.TimeBasedUUIDGenerator.<clinit>(TimeBasedUUIDGenerator.java:38)\n at org.elasticsearch.common.Strings.<clinit>(Strings.java:64)\n at org.elasticsearch.common.settings.Settings$Builder.replacePropertyPlaceholders(Settings.java:1178)\n at org.elasticsearch.node.internal.InternalSettingsPreparer.initializeSettings(InternalSettingsPreparer.java:158)\n at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:84)\n at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:107)\n at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:100)\n at org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)\n at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:222)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n\nSimplest fix is to just put the system property at the top of the `org.elasticsearch.bootstrap.Bootstrap#init` method.\n", "created_at": "2015-09-18T17:14:00Z" } ], "number": 13658, "title": "Logger name doesn't correspond to logging config" }
{ "body": "Closes #13658\n", "number": 13660, "review_comments": [], "title": "Moving system property setting to before it can be used" }
{ "commits": [ { "message": "Moving system property setting to before it can be used" }, { "message": "Remove unnecessary suppression" } ], "files": [ { "diff": "@@ -218,14 +218,16 @@ static void stop() {\n * to startup elasticsearch.\n */\n static void init(String[] args) throws Throwable {\n+ // Set the system property before anything has a chance to trigger its use\n+ System.setProperty(\"es.logger.prefix\", \"\");\n+\n BootstrapCLIParser bootstrapCLIParser = new BootstrapCLIParser();\n CliTool.ExitStatus status = bootstrapCLIParser.execute(args);\n \n if (CliTool.ExitStatus.OK != status) {\n System.exit(status.status());\n }\n \n- System.setProperty(\"es.logger.prefix\", \"\");\n INSTANCE = new Bootstrap();\n \n boolean foreground = !\"false\".equals(System.getProperty(\"es.foreground\", System.getProperty(\"es-foreground\")));", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" } ] }
{ "body": "If the `geohash_precision` is set to anything except the default (12), and `geohash_prefix` is not true, the geohash is not indexed:\n\n```\nPUT my_index\n{\n \"mappings\": {\n \"my_type\": {\n \"properties\": {\n \"location\": {\n \"type\": \"geo_point\",\n \"geohash\": true,\n \"geohash_precision\": 11\n }\n }\n }\n }\n}\n\n\nPUT my_index/my_type/1\n{\n \"location\": {\n \"lat\": 41.12,\n \"lon\": -71.34\n }\n}\n\nGET my_index/_search?fielddata_fields=location.geohash\n```\n", "comments": [ { "body": "This is a silly mappings refactor mistake... the bigger issue is that we didn't have a test for catching this. I'll backport and make sure there's sufficient test coverage.\n", "created_at": "2015-09-17T22:12:53Z" }, { "body": "Merged #13649 \n", "created_at": "2015-09-23T14:39:28Z" } ], "number": 12467, "title": "Geohash is not indexed unless the precision is 12" }
{ "body": "Fixes a bug with GeoPointFieldMapper where geohash was only indexed if `geohash_precision` was set to 12. This also adds test coverage for varying geohash precision and `geohash_prefix` indexing.\n\ncloses #12467 \n", "number": 13649, "review_comments": [ { "body": "Do we really need a new file? We have thousands of files...it would be nice if these tests could go in GeoPointFieldMapperTests...\n", "created_at": "2015-09-18T17:44:27Z" } ], "title": "Fix GeoPointFieldMapper to index geohash at correct precision." }
{ "commits": [], "files": [] }
{ "body": "OS info\n\n```\nelk@master-clone:/opt$ uname -a\nLinux master-clone 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u3 x86_64 GNU/Linux\n```\n\nJava info\n\n```\nelk@master-clone:/opt$ java -version\njava version \"1.6.0_36\"\nOpenJDK Runtime Environment (IcedTea6 1.13.8) (6b36-1.13.8-1~deb7u1)\nOpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)\n```\n\nDownload and Install deb (probably affects other packages)\n\n```\nelk@master-clone:/opt$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb\n--2015-09-08 13:31:03-- https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb\nResolving download.elastic.co (download.elastic.co)... 107.22.213.126, 107.22.237.122, 107.20.222.112, ...\nConnecting to download.elastic.co (download.elastic.co)|107.22.213.126|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 27321880 (26M) [application/x-debian-package]\nSaving to: `elasticsearch-1.7.1.deb'\n\n100%[=================================================================================================================================================================================================================================================================================>] 27,321,880 4.43M/s in 6.6s \n\n2015-09-08 13:31:11 (3.94 MB/s) - `elasticsearch-1.7.1.deb' saved [27321880/27321880]\n\nelk@master-clone:/opt$ sudo dpkg -i elasticsearch-1.7.1.deb \nSelecting previously unselected package elasticsearch.\n(Reading database ... 143837 files and directories currently installed.)\nUnpacking elasticsearch (from elasticsearch-1.7.1.deb) ...\nCreating elasticsearch group... OK\nCreating elasticsearch user... OK\nSetting up elasticsearch (1.7.1) ...\n```\n\nLaunch service\n\n```\nelk@master-clone:/opt$ sudo /etc/init.d/elasticsearch start\n[FAIL] Starting Elasticsearch Server: failed!\n```\n\nremoving -b flag , from /etc/init.d/elasticsearch\n\n```\nstart-stop-daemon --start -b --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n```\n\nto\n\n```\nstart-stop-daemon --start --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n start-stop-daemon --stop --pidfile \"$PID_FILE\" \\\n```\n\nto make this surface\n\n```\nelk@master-clone:/opt$ sudo /etc/init.d/elasticsearch start\n[....] Starting Elasticsearch Server:Exception in thread \"main\" java.lang.UnsupportedClassVersionError: org/elasticsearch/bootstrap/Elasticsearch : Unsupported major.minor version 51.0\n at java.lang.ClassLoader.defineClass1(Native Method)\n at java.lang.ClassLoader.defineClass(ClassLoader.java:643)\n at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)\n at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)\n at java.net.URLClassLoader.access$000(URLClassLoader.java:73)\n at java.net.URLClassLoader$1.run(URLClassLoader.java:212)\n at java.security.AccessController.doPrivileged(Native Method)\n at java.net.URLClassLoader.findClass(URLClassLoader.java:205)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:323)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:268)\nCould not find the main class: org.elasticsearch.bootstrap.Elasticsearch. 
Program will exit.\n failed!\n```\n\nThis is really bad user experience and it would be very desirable at least to catch this output and present it to the user.\n", "comments": [ { "body": "will try this on 2.x \n", "created_at": "2015-09-08T13:40:09Z" }, { "body": "still reproducible on 2.x I'm afraid\n\n```\nLast login: Tue Sep 8 11:12:34 2015 from antonios-macbook-air.local\nelk@master-clone:~$ wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.0.0-beta1/elasticsearch-2.0.0-beta1.deb\n--2015-09-09 17:31:08-- https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.0.0-beta1/elasticsearch-2.0.0-beta1.deb\nResolving download.elasticsearch.org (download.elasticsearch.org)... 107.21.118.106, 107.20.198.195, 184.72.232.248, ...\nConnecting to download.elasticsearch.org (download.elasticsearch.org)|107.21.118.106|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 28431018 (27M) [application/x-debian-package]\nSaving to: `elasticsearch-2.0.0-beta1.deb'\n\n100%[=================================================================================================================================================================================================================================================================================>] 28,431,018 4.48M/s in 9.1s \n\n2015-09-09 17:31:19 (2.99 MB/s) - `elasticsearch-2.0.0-beta1.deb' saved [28431018/28431018]\n\nelk@master-clone:~$ sudo dpkg -i elasticsearch-2.0.0-beta1.deb \n[sudo] password for elk: \n\nSelecting previously unselected package elasticsearch.\n(Reading database ... 143837 files and directories currently installed.)\nUnpacking elasticsearch (from elasticsearch-2.0.0-beta1.deb) ...\nCreating elasticsearch group... OK\nCreating elasticsearch user... OK\nSetting up elasticsearch (2.0.0~beta1) ...\n\nelk@master-clone:~$ uname -a\nLinux master-clone 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u3 x86_64 GNU/Linux\nelk@master-clone:~$ cat /etc/debian_version \n7.9\nelk@master-clone:~$ java -version\njava version \"1.6.0_36\"\nOpenJDK Runtime Environment (IcedTea6 1.13.8) (6b36-1.13.8-1~deb7u1)\nOpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)\nelk@master-clone:~$ time sudo /etc/init.d/elasticsearch start\n[FAIL] Starting Elasticsearch Server: failed!\n\nreal 0m11.111s\nuser 0m0.004s\nsys 0m0.012s\nelk@master-clone:~$ sudo cp /etc/init.d/elasticsearch ~/\nelk@master-clone:~$ egrep start-stop-daemon /etc/init.d/elasticsearch \n# Description: Starts elasticsearch using start-stop-daemon\n start-stop-daemon -d $ES_HOME --start -b --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n start-stop-daemon --stop --pidfile \"$PID_FILE\" \\\n```\n\nremove background flag\n\n```\nelk@master-clone:~$ sudo sed -i 's/\\-b//g' /etc/init.d/elasticsearch \n```\n\n```\nelk@master-clone:~$ egrep start-stop-daemon /etc/init.d/elasticsearch \n# Description: Starts elasticsearch using start-stop-daemon\n start-stop-daemon -d $ES_HOME --start --user \"$ES_USER\" -c \"$ES_USER\" --pidfile \"$PID_FILE\" --exec $DAEMON -- $DAEMON_OPTS\n start-stop-daemon --stop --pidfile \"$PID_FILE\" \\\nelk@master-clone:~$ time sudo /etc/init.d/elasticsearch start\n[....] 
Starting Elasticsearch Server:Exception in thread \"main\" java.lang.UnsupportedClassVersionError: org/elasticsearch/bootstrap/Elasticsearch : Unsupported major.minor version 51.0\n at java.lang.ClassLoader.defineClass1(Native Method)\n at java.lang.ClassLoader.defineClass(ClassLoader.java:643)\n at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)\n at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)\n at java.net.URLClassLoader.access$000(URLClassLoader.java:73)\n at java.net.URLClassLoader$1.run(URLClassLoader.java:212)\n at java.security.AccessController.doPrivileged(Native Method)\n at java.net.URLClassLoader.findClass(URLClassLoader.java:205)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:323)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:268)\nCould not find the main class: org.elasticsearch.bootstrap.Elasticsearch. Program will exit.\n failed!\n\nreal 0m11.101s\nuser 0m0.004s\nsys 0m0.008s\n```\n", "created_at": "2015-09-09T15:44:50Z" }, { "body": "I've dropped the discuss label because I'm pretty sure this is worth fixing. I'll have a go at it sometime in the next few days if no one else wants it. I'm claiming it now so I don't forget about it but if anyone wants it chime in. I don't enjoy working on packaging issues but I enjoy having them solved....\n", "created_at": "2015-09-09T15:50:27Z" }, { "body": "Its too bad `java` launcher is so broken, since it has a `-version:xxx` with fancy support for lists, wildcards, ranges, etc, to specify what is needed.\n\nBut it does not work at all.\n", "created_at": "2015-09-09T17:43:18Z" }, { "body": "OK! I'm going to have a look at this.\n\nI noticed that in systemd we've got elasticsearch setup as a `simple` service (the default) but if we switched to a `notify` service then we'd be able to send notifications back to systemd over a socket that tells it how Elasticsearch is doing. Neat stuff. Minimally I'd hope that we could send it a notification once the node is started. We could even send it information about where we are in the startup process.... Neat! Anyway, [man sd_notify](http://www.dsm.fordham.edu/cgi-bin/man-cgi.pl?topic=sd_notify&ampsect=3) has good docs for this. sd_notify doesn't have a Java library so that'd be trouble there.\n\nNow I'll investigate the init script side....\n", "created_at": "2015-09-17T14:41:46Z" }, { "body": "I think this can be fixed simply by not double-daemonizing as we do today. Closing in favour of https://github.com/elastic/elasticsearch/issues/8796\n\n(If I've got this wrong, please reopen)\n", "created_at": "2016-01-28T15:51:49Z" } ], "number": 13392, "title": "2.0.0-beta1 Init script startup fails silently when hitting java.lang.UnsupportedClassVersionError (unsupported JVM)" }
{ "body": "If bin/elasticsearch is run with the option to daemonize and the option to\nwrite a pidfile then it will wait for 30 seconds for Elasticsearch to write\nthe pidfile. If it fails to write the pidfile before the timeout then it\nwill warn the user to check the logs and further warn them that if nothing\nshows up in the logs that they should attempt to run Elasticsearch in the\nforeground.\n\nCloses #13392\n", "number": 13643, "review_comments": [ { "body": "caused -> caused by\n", "created_at": "2015-09-17T21:14:44Z" }, { "body": "broken logging config would cause any failures to be written to stdout/stderr right? And maybe we could do a simple check on java version from within here, just to check the major version?\n", "created_at": "2015-09-17T21:16:00Z" }, { "body": "I thought about running a small java program with minimal arguments but figured that'd end up more complex than something like this. It might not be. It'd certainly be more clear on catching that case.\n\nI think stdout is trouble here because of `<&- &` but it'd be worth checking.....\n", "created_at": "2015-09-17T21:21:44Z" }, { "body": "I think we should check if the file exists, read its content and check if the process id is still running before deleting the file\n\nedit: the process starts\n", "created_at": "2015-09-22T13:20:47Z" }, { "body": "Executing `bin/elasticsearch -d -p pid` produces the error `awk: line 1: syntax error at or near ,`\n", "created_at": "2015-09-22T13:22:36Z" }, { "body": "Awe! OK. I hate that bash's `getops` doesn't support long arguments....\n", "created_at": "2015-09-22T13:24:21Z" }, { "body": "Probably. Is it an error to restart if elasticsearch is still running with that pidfile?\n\nThis is getting a bit complex.\n", "created_at": "2015-09-22T13:25:00Z" }, { "body": "Can we just print the pidfile value in the log message so that the user knows which pidfile should have been written?\n", "created_at": "2015-09-22T13:25:02Z" }, { "body": "> This is getting a bit complex.\n\nI agree, but with this PR a new ES instance is started and the pid file is overridden... If the pid file exists and refers to a running process I think printing a message and exiting (not 0) is enough.\n", "created_at": "2015-09-22T13:53:24Z" } ], "title": "Make bin/elasticsearch wait for pidfile" }
{ "commits": [ { "message": "Make bin/elasticsearch wait for pidfile\n\nIf bin/elasticsearch is run with the option to daemonize and the option to\nwrite a pidfile then it will wait for 30 seconds for Elasticsearch to write\nthe pidfile. If it fails to write the pidfile before the timeout then it\nwill warn the user to check the logs and further warn them that if nothing\nshows up in the logs that they should attempt to run Elasticsearch in the\nforeground.\n\nCloses #13392" } ], "files": [ { "diff": "@@ -130,10 +130,30 @@ export HOSTNAME\n daemonized=`echo $* | grep -E -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`\n if [ -z \"$daemonized\" ] ; then\n exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start \"$@\"\n-else\n- exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n- org.elasticsearch.bootstrap.Elasticsearch start \"$@\" <&- &\n+ org.elasticsearch.bootstrap.Elasticsearch start \"$@\"\n+ exit $?\n fi\n \n+pidfile=$(echo $* | awk '{match($0, /-p(idfile)? +([^ ]+)/,a);print a[2]}')\n+if [ ! -z \"$pidfile\" ]; then\n+ rm -f \"$pidfile\"\n+fi\n+exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" \\\n+ org.elasticsearch.bootstrap.Elasticsearch start \"$@\" <&- &\n+if [ ! -z \"$pidfile\" ]; then\n+ end=$(($(date +%s) + 30))\n+ while [ ! -f \"$pidfile\" ] && [ $(date +%s) -le $end ]; do\n+ sleep 1\n+ done\n+\n+ if [ ! -f \"$pidfile\" ]; then\n+ cat >&2 << EOF\n+Elasticsearch never wrote its pidfile. It probably was not able to start properly.\n+Check its log file for more information. If it did not log anything then try to\n+run it in the foreground. Errors without any logs are likely caused unsupported\n+versions of Java or broken logging configuration.\n+EOF\n+ exit 1\n+ fi\n+fi\n exit $?", "filename": "distribution/src/main/resources/bin/elasticsearch", "status": "modified" } ] }
{ "body": "Hi,\n\nUsing fielddata_fields with geo_point raises an exception when running at least two instances of Elasticsearch.\n\nTo reproduce it : \n- run two instances of elasticsearch (on the same machine)\n- run the gist : https://gist.github.com/clement-tourriere/8d06a0985f859666ae532\n- You maybe need to run the search request twice to see the error\n\nA patch could be : https://gist.github.com/clement-tourriere/2aa7219bd1da96393cbd\n\nFor more information, I have create a thread here (with another geo_point problem) : \nhttps://discuss.elastic.co/t/problems-with-geo-point-type-and-doc-values/28256.\n", "comments": [ { "body": "Hi @clement-tourriere the gist you provided in here provides a 404 (looks like you added an extra \"2\" at the end of the URL). https://gist.github.com/clement-tourriere/8d06a0985f859666ae53 should be the correct URL. Correct me if I'm wrong here please :)\n", "created_at": "2015-09-04T19:12:42Z" }, { "body": "You're right, the correct link is the one without the '2' at the end. Sorry about that, I thank you for pointing this out \n", "created_at": "2015-09-05T07:37:17Z" }, { "body": "Hi @clement-tourriere \n\nThanks for the good recreation. I can confirm that this is still broken in master. Would you like to send a PR?\n", "created_at": "2015-09-06T13:44:43Z" }, { "body": "@clintongormley Would it be possible seeing this patch to land in 1.x ?\n\nSome features just are not there in 2.x and some people, like me, need to keep 1.x while being affected by this bug (in fact it's not exactly the same one, it affect `scripted_fields`, but I do believe this patch fixes it too, for the moment my workaround is far from nice...)\n", "created_at": "2016-02-11T11:13:13Z" }, { "body": "@clintongormley @temsa i'm also having this problem in ES 1.5.2\n", "created_at": "2016-03-28T10:50:36Z" } ], "number": 13340, "title": "Problem with geo_point and script_fields/fielddata_fields" }
{ "body": "Fixes transport serialization for geo_point type in multi node environment (possibility to use geo_point in script and fielddata_fields).\n\nCloses #13340\n", "number": 13632, "review_comments": [], "title": "Add GeoPoint in StreamInput/StreamOutput" }
{ "commits": [ { "message": "Add GeoPoint in StreamInput/StreamOutput\nCloses #13340" } ], "files": [ { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.text.StringAndBytesText;\n import org.elasticsearch.common.text.Text;\n import org.joda.time.DateTime;\n@@ -420,11 +421,17 @@ public Object readGenericValue() throws IOException {\n return readDoubleArray();\n case 21:\n return readBytesRef();\n+ case 22:\n+ return readGeoPoint();\n default:\n throw new IOException(\"Can't read unknown type [\" + type + \"]\");\n }\n }\n \n+ public GeoPoint readGeoPoint() throws IOException {\n+ return new GeoPoint(readDouble(), readDouble());\n+ }\n+\n public int[] readIntArray() throws IOException {\n int length = readVInt();\n int[] values = new int[length];", "filename": "core/src/main/java/org/elasticsearch/common/io/stream/StreamInput.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.text.Text;\n import org.joda.time.ReadableInstant;\n \n@@ -408,6 +409,10 @@ public void writeGenericValue(@Nullable Object value) throws IOException {\n } else if (value instanceof BytesRef) {\n writeByte((byte) 21);\n writeBytesRef((BytesRef) value);\n+ } else if (type == GeoPoint.class) {\n+ writeByte((byte) 22);\n+ writeDouble(((GeoPoint) value).lat());\n+ writeDouble(((GeoPoint) value).lon());\n } else {\n throw new IOException(\"Can't write type [\" + type + \"]\");\n }", "filename": "core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java", "status": "modified" } ] }
{ "body": "I don't know why they do it this way, but it causes grief.\n\nCurrent code does this which will fail on java 9:\n\n```\nOperatingSystemMXBean osMxBean = ManagementFactory.getOperatingSystemMXBean();\nMethod getTotalPhysicalMemorySize = osMxBean.getClass().getMethod(\"getTotalPhysicalMemorySize\");\ngetTotalPhysicalMemorySize.setAccessible(true);\nSystem.out.println(getTotalPhysicalMemorySize.invoke(osMxBean));\n```\n\nInstead, do it like this:\n\n```\nOperatingSystemMXBean osMxBean = ManagementFactory.getOperatingSystemMXBean();\nClass c = Class.forName(\"com.sun.management.OperatingSystemMXBean\");\nMethod getTotalPhysicalMemorySize = c.getMethod(\"getTotalPhysicalMemorySize\");\nSystem.out.println(getTotalPhysicalMemorySize.invoke(osMxBean));\n```\n\nI would maybe also add a class.cast for simplicity and better errors/debugging too.\n\nSeparately, we need to ban setAccessible completely.\n", "comments": [ { "body": "And we also just revert af2df9aef67462adba9146a28e464043ee7ae122 as part of this too. I will take care of this one.\n", "created_at": "2015-09-11T21:28:37Z" }, { "body": "Copypasted from hangouts:\n\nyou dont need al that:\nhttps://docs.oracle.com/javase/8/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html\nthis one is officially documented in the JDK javadocs\nthe problem may only be suppressforbidden, because it detects the class name as \"sun.\"\nits part of compact3\nas you see it also has annotation @Exported\nyou can get the bean by passing the interface name to the managmentfactory\n public static <T extends PlatformManagedObject> T getPlatformMXBean(Class<T> mxbeanInterface)\nyou may add @SuppressForbidden\ni know this is a bit controversal... ☺\nIBM J9 may not implement this, because its not required to have the bean\nbut the interface should be there if they are compatible\nyou also have that one officially documented: https://docs.oracle.com/javase/8/docs/jre/api/management/extension/com/sun/management/UnixOperatingSystemMXBean.html\n", "created_at": "2015-09-11T21:41:52Z" }, { "body": "In any case, why do you need the Class#forName() in your code. Removing the setAccessible should be fine, or what happens?\n", "created_at": "2015-09-11T21:42:52Z" }, { "body": "> In any case, why do you need the Class#forName() in your code. Removing the setAccessible should be fine, or what happens?\n\nit fails, even with java 8. you have to get the correct method. Sorry Uwe, it may be officially doc'ed as an extension but we should not rely on that, for jvm support everywhere. There is a simple way to do it, and its what i listed above. I will fix the code.\n", "created_at": "2015-09-12T02:06:09Z" }, { "body": "And yes you see it here (https://docs.oracle.com/javase/7/docs/jre/api/management/extension/com/sun/management/package-summary.html) but if you try IBM:\n\n```\nException in thread \"main\" java.lang.ClassNotFoundException: com.sun.management.OperatingSystemMXBean\n at java.lang.Class.forName(Class.java:139)\n at test2.main(test2.java:8)\n```\n\nThis is why we will do it the correct way!\n", "created_at": "2015-09-12T02:18:38Z" }, { "body": "> it fails, even with java 8. you have to get the correct method\n\nit fails also on Java 8, because it is an interface... So you have to reflect the interface and then call the method on the interface.\n", "created_at": "2015-09-12T09:09:00Z" } ], "number": 13527, "title": "OS/ProcessProbe need to not use setAccessible" }
{ "body": "In addition to being a big security problem, setAccessible is a risk for java 9 migration. We need to clean up our code so we can ban it and eventually enforce this with security manager for third-party code, too, or we may have problems.\n\nInstead of using setAccessible for monitoring stats, we just use the correct interface that is public (but may not be there under some IBM vms). This restores them for java 9, which does not allow what we do today.\n\nInstead of using setAccessible for injection code, use the correct modifier (e.g. public). Injected constructors need to be public. I scanned for problems with `grep` and eclipse call hierarchy and don't think any are missing (I did not trust our tests really would find everything). \n\nTODO: ban in tests\nTODO: ban in security manager at runtime\nTODO: make a nice asm-based scan to enforce proper modifiers and similar stuff like this.\n\nCloses #13527\n", "number": 13531, "review_comments": [], "title": "Ban setAccessible from core code, restore monitoring stats under java 9" }
{ "commits": [ { "message": "Revert \"Update monitor probe tests for java 9: this stuff is no longer accessible\"\n\nThis reverts commit af2df9aef67462adba9146a28e464043ee7ae122." }, { "message": "get our stats back by reflecting mxbeans correctly" }, { "message": "ban setAccessible from core code.\n\nIn addition to being a big security problem, setAccessible is a risk\nfor java 9 migration. We need to clean up our code so we can ban it\nand eventually enforce this with security manager for third-party code, too,\nor we may have problems.\n\nInstead of using setAccessible, use the correct modifier (e.g. public).\n\nTODO: ban in tests\nTODO: ban in security manager at runtime" } ], "files": [ { "diff": "@@ -30,7 +30,7 @@\n */\n public abstract class ActionRequest<T extends ActionRequest> extends TransportRequest {\n \n- protected ActionRequest() {\n+ public ActionRequest() {\n super();\n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/ActionRequest.java", "status": "modified" }, { "diff": "@@ -47,7 +47,7 @@ public class ClusterHealthRequest extends MasterNodeReadRequest<ClusterHealthReq\n private String waitForNodes = \"\";\n private Priority waitForEvents = null;\n \n- ClusterHealthRequest() {\n+ public ClusterHealthRequest() {\n }\n \n public ClusterHealthRequest(String... indices) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthRequest.java", "status": "modified" }, { "diff": "@@ -124,7 +124,7 @@ private void executeHealth(final ClusterHealthRequest request, final ActionListe\n if (request.waitForNodes().isEmpty()) {\n waitFor--;\n }\n- if (request.indices().length == 0) { // check that they actually exists in the meta data\n+ if (request.indices() == null || request.indices().length == 0) { // check that they actually exists in the meta data\n waitFor--;\n }\n \n@@ -199,7 +199,7 @@ private boolean prepareResponse(final ClusterHealthRequest request, final Cluste\n if (request.waitForActiveShards() != -1 && response.getActiveShards() >= request.waitForActiveShards()) {\n waitForCounter++;\n }\n- if (request.indices().length > 0) {\n+ if (request.indices() != null && request.indices().length > 0) {\n try {\n indexNameExpressionResolver.concreteIndices(clusterState, IndicesOptions.strictExpand(), request.indices());\n waitForCounter++;", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java", "status": "modified" }, { "diff": "@@ -38,7 +38,7 @@ public class NodesHotThreadsRequest extends BaseNodesRequest<NodesHotThreadsRequ\n boolean ignoreIdleThreads = true;\n \n // for serialization\n- NodesHotThreadsRequest() {\n+ public NodesHotThreadsRequest() {\n \n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/NodesHotThreadsRequest.java", "status": "modified" }, { "diff": "@@ -94,11 +94,11 @@ protected boolean accumulateExceptions() {\n return false;\n }\n \n- static class NodeRequest extends BaseNodeRequest {\n+ public static class NodeRequest extends BaseNodeRequest {\n \n NodesHotThreadsRequest request;\n \n- NodeRequest() {\n+ public NodeRequest() {\n }\n \n NodeRequest(String nodeId, NodesHotThreadsRequest request) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java", "status": "modified" }, { "diff": "@@ -88,11 +88,11 @@ protected boolean accumulateExceptions() {\n return false;\n }\n \n- static class NodeInfoRequest extends BaseNodeRequest {\n+ public 
static class NodeInfoRequest extends BaseNodeRequest {\n \n NodesInfoRequest request;\n \n- NodeInfoRequest() {\n+ public NodeInfoRequest() {\n }\n \n NodeInfoRequest(String nodeId, NodesInfoRequest request) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java", "status": "modified" }, { "diff": "@@ -42,7 +42,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {\n private boolean breaker;\n private boolean script;\n \n- protected NodesStatsRequest() {\n+ public NodesStatsRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java", "status": "modified" }, { "diff": "@@ -88,11 +88,11 @@ protected boolean accumulateExceptions() {\n return false;\n }\n \n- static class NodeStatsRequest extends BaseNodeRequest {\n+ public static class NodeStatsRequest extends BaseNodeRequest {\n \n NodesStatsRequest request;\n \n- NodeStatsRequest() {\n+ public NodeStatsRequest() {\n }\n \n NodeStatsRequest(String nodeId, NodesStatsRequest request) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java", "status": "modified" }, { "diff": "@@ -37,7 +37,7 @@ public class DeleteRepositoryRequest extends AcknowledgedRequest<DeleteRepositor\n \n private String name;\n \n- DeleteRepositoryRequest() {\n+ public DeleteRepositoryRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/DeleteRepositoryRequest.java", "status": "modified" }, { "diff": "@@ -36,7 +36,7 @@ public class GetRepositoriesRequest extends MasterNodeReadRequest<GetRepositorie\n \n private String[] repositories = Strings.EMPTY_ARRAY;\n \n- GetRepositoriesRequest() {\n+ public GetRepositoriesRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesRequest.java", "status": "modified" }, { "diff": "@@ -55,7 +55,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque\n \n private Settings settings = EMPTY_SETTINGS;\n \n- PutRepositoryRequest() {\n+ public PutRepositoryRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java", "status": "modified" }, { "diff": "@@ -37,7 +37,7 @@ public class VerifyRepositoryRequest extends AcknowledgedRequest<VerifyRepositor\n \n private String name;\n \n- VerifyRepositoryRequest() {\n+ public VerifyRepositoryRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryRequest.java", "status": "modified" }, { "diff": "@@ -79,7 +79,7 @@ public class CreateSnapshotRequest extends MasterNodeRequest<CreateSnapshotReque\n \n private boolean waitForCompletion;\n \n- CreateSnapshotRequest() {\n+ public CreateSnapshotRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java", "status": "modified" }, { "diff": "@@ -41,7 +41,7 @@ public class GetSnapshotsRequest extends MasterNodeRequest<GetSnapshotsRequest>\n \n private String[] snapshots = Strings.EMPTY_ARRAY;\n \n- GetSnapshotsRequest() {\n+ public GetSnapshotsRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ public class 
RestoreSnapshotRequest extends MasterNodeRequest<RestoreSnapshotReq\n private Settings indexSettings = EMPTY_SETTINGS;\n private String[] ignoreIndexSettings = Strings.EMPTY_ARRAY;\n \n- RestoreSnapshotRequest() {\n+ public RestoreSnapshotRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java", "status": "modified" }, { "diff": "@@ -137,7 +137,7 @@ protected boolean accumulateExceptions() {\n return true;\n }\n \n- static class Request extends BaseNodesRequest<Request> {\n+ public static class Request extends BaseNodesRequest<Request> {\n \n private SnapshotId[] snapshotIds;\n \n@@ -203,11 +203,11 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n \n- static class NodeRequest extends BaseNodeRequest {\n+ public static class NodeRequest extends BaseNodeRequest {\n \n private SnapshotId[] snapshotIds;\n \n- NodeRequest() {\n+ public NodeRequest() {\n }\n \n NodeRequest(String nodeId, TransportNodesSnapshotsStatus.Request request) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java", "status": "modified" }, { "diff": "@@ -30,7 +30,7 @@\n */\n public class ClusterStatsRequest extends BaseNodesRequest<ClusterStatsRequest> {\n \n- ClusterStatsRequest() {\n+ public ClusterStatsRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsRequest.java", "status": "modified" }, { "diff": "@@ -145,11 +145,11 @@ protected boolean accumulateExceptions() {\n return false;\n }\n \n- static class ClusterStatsNodeRequest extends BaseNodeRequest {\n+ public static class ClusterStatsNodeRequest extends BaseNodeRequest {\n \n ClusterStatsRequest request;\n \n- ClusterStatsNodeRequest() {\n+ public ClusterStatsNodeRequest() {\n }\n \n ClusterStatsNodeRequest(String nodeId, ClusterStatsRequest request) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java", "status": "modified" }, { "diff": "@@ -37,7 +37,7 @@ public class ClearIndicesCacheRequest extends BroadcastRequest<ClearIndicesCache\n private String[] fields = null;\n \n \n- ClearIndicesCacheRequest() {\n+ public ClearIndicesCacheRequest() {\n }\n \n public ClearIndicesCacheRequest(String... 
indices) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/cache/clear/ClearIndicesCacheRequest.java", "status": "modified" }, { "diff": "@@ -39,7 +39,7 @@ public class CloseIndexRequest extends AcknowledgedRequest<CloseIndexRequest> im\n private String[] indices;\n private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, true, false);\n \n- CloseIndexRequest() {\n+ public CloseIndexRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>\n \n private boolean updateAllTypes = false;\n \n- CreateIndexRequest() {\n+ public CreateIndexRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java", "status": "modified" }, { "diff": "@@ -44,7 +44,7 @@ public class DeleteIndexRequest extends MasterNodeRequest<DeleteIndexRequest> im\n private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, true, true, true);\n private TimeValue timeout = AcknowledgedRequest.DEFAULT_ACK_TIMEOUT;\n \n- DeleteIndexRequest() {\n+ public DeleteIndexRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/delete/DeleteIndexRequest.java", "status": "modified" }, { "diff": "@@ -37,7 +37,7 @@ public class IndicesExistsRequest extends MasterNodeReadRequest<IndicesExistsReq\n private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, true, true);\n \n // for serialization\n- IndicesExistsRequest() {\n+ public IndicesExistsRequest() {\n \n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/exists/indices/IndicesExistsRequest.java", "status": "modified" }, { "diff": "@@ -38,7 +38,7 @@ public class TypesExistsRequest extends MasterNodeReadRequest<TypesExistsRequest\n \n private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpen();\n \n- TypesExistsRequest() {\n+ public TypesExistsRequest() {\n }\n \n public TypesExistsRequest(String[] indices, String... 
types) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/exists/types/TypesExistsRequest.java", "status": "modified" }, { "diff": "@@ -42,7 +42,7 @@ public class FlushRequest extends BroadcastRequest<FlushRequest> {\n private boolean force = false;\n private boolean waitIfOngoing = false;\n \n- FlushRequest() {\n+ public FlushRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/FlushRequest.java", "status": "modified" }, { "diff": "@@ -29,7 +29,7 @@\n \n import java.io.IOException;\n \n-class GetFieldMappingsIndexRequest extends SingleShardRequest<GetFieldMappingsIndexRequest> {\n+public class GetFieldMappingsIndexRequest extends SingleShardRequest<GetFieldMappingsIndexRequest> {\n \n private boolean probablySingleFieldRequest;\n private boolean includeDefaults;\n@@ -38,7 +38,7 @@ class GetFieldMappingsIndexRequest extends SingleShardRequest<GetFieldMappingsIn\n \n private OriginalIndices originalIndices;\n \n- GetFieldMappingsIndexRequest() {\n+ public GetFieldMappingsIndexRequest() {\n }\n \n GetFieldMappingsIndexRequest(GetFieldMappingsRequest other, String index, boolean probablySingleFieldRequest) {", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsIndexRequest.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ public class PutMappingRequest extends AcknowledgedRequest<PutMappingRequest> im\n \n private boolean updateAllTypes = false;\n \n- PutMappingRequest() {\n+ public PutMappingRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequest.java", "status": "modified" }, { "diff": "@@ -39,7 +39,7 @@ public class OpenIndexRequest extends AcknowledgedRequest<OpenIndexRequest> impl\n private String[] indices;\n private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, false, true);\n \n- OpenIndexRequest() {\n+ public OpenIndexRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java", "status": "modified" }, { "diff": "@@ -33,7 +33,7 @@\n */\n public class RefreshRequest extends BroadcastRequest<RefreshRequest> {\n \n- RefreshRequest() {\n+ public RefreshRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshRequest.java", "status": "modified" }, { "diff": "@@ -48,7 +48,7 @@ public class UpdateSettingsRequest extends AcknowledgedRequest<UpdateSettingsReq\n private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, true, true);\n private Settings settings = EMPTY_SETTINGS;\n \n- UpdateSettingsRequest() {\n+ public UpdateSettingsRequest() {\n }\n \n /**", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java", "status": "modified" } ] }
{ "body": "It makes decision consistent.\nFixes #12522\n", "comments": [ { "body": "This looks good to me, but I think @s1monw should take a look also just to make sure.\n\nAlso, not sure about where this should go, @clintongormley what do you think?\n", "created_at": "2015-08-13T22:23:11Z" }, { "body": "left trivial comments - LGTM though\n", "created_at": "2015-09-10T21:35:04Z" }, { "body": "I think we should let it bake on master and 2.x and if it's stable port to 2.0?\n", "created_at": "2015-09-10T21:35:29Z" }, { "body": "Thanks. Updated test names.\nPushed it to master, 2.x and 1.7 for now.\n", "created_at": "2015-09-11T05:54:11Z" }, { "body": "if you push to 1.7 you also need to push to 2.0 otherwise this makes no sense\n", "created_at": "2015-09-11T07:20:44Z" }, { "body": "oh... pushed to 2.0 as well.\n", "created_at": "2015-09-11T08:00:01Z" }, { "body": "Late to the party, but I think there is a problem with this fix. Left a comment on it. The fact we didn't catch it also means our tests are not strong enough. @masaruh let me know if you want to continue here, o.w. I'll pick it up.\n", "created_at": "2015-09-11T09:33:38Z" }, { "body": "@bleskes you are right... Created #13512 (the test fails without the fix).\n", "created_at": "2015-09-11T11:47:24Z" } ], "number": 12551, "title": "Take initializing shards into consideration during awareness allocation" }
{ "body": "Previous fix #12551 counted twice for relocating shard (source and target).\nFix it to consider only target node.\n", "number": 13512, "review_comments": [ { "body": "Maybe \"Note: this also counts relocation targets as that will be the new location of the shard. Relocation sources should not be counted as the shard is moving away\"\n", "created_at": "2015-09-11T12:01:07Z" }, { "body": "That's better.\n", "created_at": "2015-09-11T12:10:03Z" } ], "title": "Take relocating shard into consideration during awareness allocation" }
{ "commits": [ { "message": "Take relocating shard into consideration during awareness allocation\n\nPrevious fix #12551 counted twice for relocating shard (source and target).\nFix it to consider only target node." } ], "files": [ { "diff": "@@ -184,11 +184,9 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout\n // build the count of shards per attribute value\n ObjectIntHashMap<String> shardPerAttribute = new ObjectIntHashMap<>();\n for (ShardRouting assignedShard : allocation.routingNodes().assignedShards(shardRouting)) {\n- // if the shard is relocating, then make sure we count it as part of the node it is relocating to\n- if (assignedShard.relocating()) {\n- RoutingNode relocationNode = allocation.routingNodes().node(assignedShard.relocatingNodeId());\n- shardPerAttribute.addTo(relocationNode.node().attributes().get(awarenessAttribute), 1);\n- } else if (assignedShard.started() || assignedShard.initializing()) {\n+ if (assignedShard.started() || assignedShard.initializing()) {\n+ // Note: this also counts relocation targets as that will be the new location of the shard.\n+ // Relocation sources should not be counted as the shard is moving away\n RoutingNode routingNode = allocation.routingNodes().node(assignedShard.currentNodeId());\n shardPerAttribute.addTo(routingNode.node().attributes().get(awarenessAttribute), 1);\n }", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java", "status": "modified" }, { "diff": "@@ -28,9 +28,13 @@\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n+import org.elasticsearch.cluster.routing.allocation.command.CancelAllocationCommand;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.decider.ClusterRebalanceAllocationDecider;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ESAllocationTestCase;\n import org.junit.Test;\n \n@@ -853,6 +857,7 @@ public void testUnassignedShardsWithUnbalancedZones() {\n .put(newNode(\"A-1\", ImmutableMap.of(\"zone\", \"a\")))\n .put(newNode(\"A-2\", ImmutableMap.of(\"zone\", \"a\")))\n .put(newNode(\"A-3\", ImmutableMap.of(\"zone\", \"a\")))\n+ .put(newNode(\"A-4\", ImmutableMap.of(\"zone\", \"a\")))\n .put(newNode(\"B-0\", ImmutableMap.of(\"zone\", \"b\")))\n ).build();\n routingTable = strategy.reroute(clusterState).routingTable();\n@@ -866,5 +871,25 @@ public void testUnassignedShardsWithUnbalancedZones() {\n assertThat(clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(1));\n assertThat(clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size(), equalTo(3));\n assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1)); // Unassigned shard is expected.\n+\n+ // Cancel all initializing shards and move started primary to another node.\n+ AllocationCommands commands = new AllocationCommands();\n+ String primaryNode = null;\n+ for (ShardRouting routing : routingTable.allShards()) {\n+ if (routing.primary()) {\n+ primaryNode = routing.currentNodeId();\n+ } else if (routing.initializing()) {\n+ commands.add(new 
CancelAllocationCommand(routing.shardId(), routing.currentNodeId(), false));\n+ }\n+ }\n+ commands.add(new MoveAllocationCommand(new ShardId(\"test\", 0), primaryNode, \"A-4\"));\n+\n+ routingTable = strategy.reroute(clusterState, commands).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+\n+ assertThat(clusterState.getRoutingNodes().shardsWithState(STARTED).size(), equalTo(0));\n+ assertThat(clusterState.getRoutingNodes().shardsWithState(RELOCATING).size(), equalTo(1));\n+ assertThat(clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size(), equalTo(4)); // +1 for relocating shard.\n+ assertThat(clusterState.getRoutingNodes().shardsWithState(UNASSIGNED).size(), equalTo(1)); // Still 1 unassigned.\n }\n }", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/AwarenessAllocationTests.java", "status": "modified" } ] }
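Stripped of routing-table machinery, the fixed counting rule is: a copy that is started or initializing counts at the node where it currently sits (so a relocation target counts at its destination), and a relocating source is skipped because it is moving away. A simplified sketch with made-up placeholder types:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AwarenessCount {

    // Simplified stand-in for a shard copy: where it is and what state it is in.
    static class Shard {
        final String nodeZone; // awareness attribute value of the node holding this copy
        final String state;    // "started", "initializing", or "relocating"

        Shard(String nodeZone, String state) {
            this.nodeZone = nodeZone;
            this.state = state;
        }
    }

    // Count copies per zone the way the fixed decider does: started and initializing
    // copies count where they currently are (an initializing relocation target counts
    // at its new zone), while a relocating source contributes nothing.
    static Map<String, Integer> shardsPerZone(List<Shard> shards) {
        Map<String, Integer> counts = new HashMap<>();
        for (Shard shard : shards) {
            if (shard.state.equals("started") || shard.state.equals("initializing")) {
                counts.merge(shard.nodeZone, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Shard> shards = Arrays.asList(
                new Shard("a", "relocating"),   // source copy: ignored
                new Shard("b", "initializing"), // relocation target: counted in zone b
                new Shard("a", "started"));
        System.out.println(shardsPerZone(shards)); // {a=1, b=1}
    }
}
```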
{ "body": "The _cat/health endpoint [is documented](http://www.elastic.co/guide/en/elasticsearch/reference/current/cat-health.html) that the timestamp can be disabled in the output. This is not working for me in 1.4.4:\n\n```\n$ curl localhost:9200\n{\n \"status\" : 200,\n \"name\" : \"lb-1\",\n \"cluster_name\" : \"es-example-service\",\n \"version\" : {\n \"number\" : \"1.4.4\",\n \"build_hash\" : \"c88f77ffc81301dfa9dfd81ca2232f09588bd512\",\n \"build_timestamp\" : \"2015-02-19T13:05:36Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.10.3\"\n },\n \"tagline\" : \"You Know, for Search\"\n} \n\n$ curl 'localhost:9200/_cat/health?v'\nepoch timestamp cluster status node.total node.data shards pri relo init unassign \n1426529705 18:15:05 es-example-service green 7 3 0 0 0 0 0 \n\n$ curl 'localhost:9200/_cat/health?v&ts=0'\nepoch timestamp cluster status node.total node.data shards pri relo init unassign \n1426529713 18:15:13 es-example-service green 7 3 0 0 0 0 0\n```\n", "comments": [ { "body": "I agree that the documentation is incorrect here as we don't support this `ts=0` option.\n\nFor now, you can run:\n\n``` sh\ncurl -XGET \"http://localhost:9200/_cat/health?h=epoch,cluster,status,node.total,node.data,shards,pri,relo,init,unassign,pending_tasks\"\n```\n\n@drewr Are we supposed to support this `ts=0` option?\n", "created_at": "2015-03-16T19:06:24Z" }, { "body": "This was originally supported, but got lost somewhere.\n\nThe feature really needs to be a `RestTable` enhancement that can add timestamp to any API (and possibly leave it off by default).\n", "created_at": "2015-03-18T20:25:35Z" } ], "number": 10109, "title": "cat health not respecting 'ts' parameter" }
{ "body": "If ts=0, cat health disable epoch and timestamp\n Add rest-api test and test\n\nCloses #10109\n", "number": 13508, "review_comments": [], "title": "Cat health supports ts=0 option" }
{ "commits": [ { "message": "Cat: cat health supports ts=0 option\n\nIf ts=0, cat health disable epoch and timestamp\nBe Constant String timestamp and epoch\nMove timestamp and epoch to Table\nAdd rest-api test and test\n\nCloses #10109" } ], "files": [ { "diff": "@@ -19,10 +19,14 @@\n \n package org.elasticsearch.common;\n \n+import org.joda.time.format.DateTimeFormat;\n+import org.joda.time.format.DateTimeFormatter;\n+\n import java.util.ArrayList;\n import java.util.HashMap;\n import java.util.List;\n import java.util.Map;\n+import java.util.concurrent.TimeUnit;\n \n import static java.util.Collections.emptyMap;\n \n@@ -36,13 +40,25 @@ public class Table {\n private Map<String, Cell> headerMap = new HashMap<>();\n private List<Cell> currentCells;\n private boolean inHeaders = false;\n+ private boolean withTime = false;\n+ public static final String EPOCH = \"epoch\";\n+ public static final String TIMESTAMP = \"timestamp\";\n \n public Table startHeaders() {\n inHeaders = true;\n currentCells = new ArrayList<>();\n return this;\n }\n \n+ public Table startHeadersWithTimestamp() {\n+ startHeaders();\n+ this.withTime = true;\n+ addCell(\"epoch\", \"alias:t,time;desc:seconds since 1970-01-01 00:00:00\");\n+ addCell(\"timestamp\", \"alias:ts,hms,hhmmss;desc:time in HH:MM:SS\");\n+ return this;\n+ }\n+\n+\n public Table endHeaders() {\n if (currentCells == null || currentCells.isEmpty()) {\n throw new IllegalStateException(\"no headers added...\");\n@@ -69,11 +85,18 @@ public Table endHeaders() {\n return this;\n }\n \n+ private DateTimeFormatter dateFormat = DateTimeFormat.forPattern(\"HH:mm:ss\");\n+\n public Table startRow() {\n if (headers.isEmpty()) {\n throw new IllegalStateException(\"no headers added...\");\n }\n currentCells = new ArrayList<>(headers.size());\n+ if (withTime) {\n+ long time = System.currentTimeMillis();\n+ addCell(TimeUnit.SECONDS.convert(time, TimeUnit.MILLISECONDS));\n+ addCell(dateFormat.print(time));\n+ }\n return this;\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/Table.java", "status": "modified" }, { "diff": "@@ -37,10 +37,6 @@\n import org.elasticsearch.rest.action.support.RestResponseListener;\n import org.elasticsearch.rest.action.support.RestTable;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n-import org.joda.time.format.DateTimeFormat;\n-import org.joda.time.format.DateTimeFormatter;\n-\n-import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.rest.RestRequest.Method.GET;\n \n@@ -88,22 +84,15 @@ public RestResponse buildResponse(SearchResponse countResponse) throws Exception\n @Override\n protected Table getTableWithHeader(final RestRequest request) {\n Table table = new Table();\n- table.startHeaders();\n- table.addCell(\"epoch\", \"alias:t,time;desc:seconds since 1970-01-01 00:00:00, that the count was executed\");\n- table.addCell(\"timestamp\", \"alias:ts,hms;desc:time that the count was executed\");\n+ table.startHeadersWithTimestamp();\n table.addCell(\"count\", \"alias:dc,docs.count,docsCount;desc:the document count\");\n table.endHeaders();\n return table;\n }\n \n- private DateTimeFormatter dateFormat = DateTimeFormat.forPattern(\"HH:mm:ss\");\n-\n private Table buildTable(RestRequest request, SearchResponse response) {\n Table table = getTableWithHeader(request);\n- long time = System.currentTimeMillis();\n table.startRow();\n- table.addCell(TimeUnit.SECONDS.convert(time, TimeUnit.MILLISECONDS));\n- table.addCell(dateFormat.print(time));\n table.addCell(response.getHits().totalHits());\n 
table.endRow();\n ", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestCountAction.java", "status": "modified" }, { "diff": "@@ -31,11 +31,8 @@\n import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.action.support.RestResponseListener;\n import org.elasticsearch.rest.action.support.RestTable;\n-import org.joda.time.format.DateTimeFormat;\n-import org.joda.time.format.DateTimeFormatter;\n \n import java.util.Locale;\n-import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.rest.RestRequest.Method.GET;\n \n@@ -67,9 +64,7 @@ public RestResponse buildResponse(final ClusterHealthResponse health) throws Exc\n @Override\n protected Table getTableWithHeader(final RestRequest request) {\n Table t = new Table();\n- t.startHeaders();\n- t.addCell(\"epoch\", \"alias:t,time;desc:seconds since 1970-01-01 00:00:00\");\n- t.addCell(\"timestamp\", \"alias:ts,hms,hhmmss;desc:time in HH:MM:SS\");\n+ t.startHeadersWithTimestamp();\n t.addCell(\"cluster\", \"alias:cl;desc:cluster name\");\n t.addCell(\"status\", \"alias:st;desc:health status\");\n t.addCell(\"node.total\", \"alias:nt,nodeTotal;text-align:right;desc:total number of nodes\");\n@@ -87,14 +82,9 @@ protected Table getTableWithHeader(final RestRequest request) {\n return t;\n }\n \n- private DateTimeFormatter dateFormat = DateTimeFormat.forPattern(\"HH:mm:ss\");\n-\n private Table buildTable(final ClusterHealthResponse health, final RestRequest request) {\n- long time = System.currentTimeMillis();\n Table t = getTableWithHeader(request);\n t.startRow();\n- t.addCell(TimeUnit.SECONDS.convert(time, TimeUnit.MILLISECONDS));\n- t.addCell(dateFormat.print(time));\n t.addCell(health.getClusterName());\n t.addCell(health.getStatus().name().toLowerCase(Locale.ROOT));\n t.addCell(health.getNumberOfNodes());", "filename": "core/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java", "status": "modified" }, { "diff": "@@ -133,7 +133,7 @@ static List<DisplayHeader> buildDisplayHeaders(Table table, RestRequest request)\n }\n }\n \n- if (dispHeader != null) {\n+ if (dispHeader != null && checkOutputTimestamp(dispHeader, request)) {\n // We know we need the header asked for:\n display.add(dispHeader);\n \n@@ -153,14 +153,28 @@ static List<DisplayHeader> buildDisplayHeaders(Table table, RestRequest request)\n } else {\n for (Table.Cell cell : table.getHeaders()) {\n String d = cell.attr.get(\"default\");\n- if (Booleans.parseBoolean(d, true)) {\n+ if (Booleans.parseBoolean(d, true) && checkOutputTimestamp(cell.value.toString(), request)) {\n display.add(new DisplayHeader(cell.value.toString(), cell.value.toString()));\n }\n }\n }\n return display;\n }\n \n+\n+ static boolean checkOutputTimestamp(DisplayHeader dispHeader, RestRequest request) {\n+ return checkOutputTimestamp(dispHeader.name, request);\n+ }\n+\n+ static boolean checkOutputTimestamp(String disp, RestRequest request) {\n+ if (Table.TIMESTAMP.equals(disp) || Table.EPOCH.equals(disp)) {\n+ return request.paramAsBoolean(\"ts\", true);\n+ } else {\n+ return true;\n+ }\n+ }\n+\n+\n /**\n * Extracts all the required fields from the RestRequest 'h' parameter. 
In order to support wildcards like\n * 'bulk.*' this needs potentially parse all the configured headers and its aliases and needs to ensure", "filename": "core/src/main/java/org/elasticsearch/rest/action/support/RestTable.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import java.util.List;\n import java.util.Map;\n \n+import static org.hamcrest.Matchers.instanceOf;\n import static org.hamcrest.Matchers.is;\n \n public class TableTests extends ESTestCase {\n@@ -173,6 +174,32 @@ public void testSimple() {\n assertNull(cell);\n }\n \n+ public void testWithTimestamp() {\n+ Table table = new Table();\n+ table.startHeadersWithTimestamp();\n+ table.endHeaders();\n+\n+ List<Table.Cell> headers = table.getHeaders();\n+ assertEquals(2, headers.size());\n+ assertEquals(Table.EPOCH, headers.get(0).value.toString());\n+ assertEquals(Table.TIMESTAMP, headers.get(1).value.toString());\n+ assertEquals(2, headers.get(0).attr.size());\n+ assertEquals(\"t,time\", headers.get(0).attr.get(\"alias\"));\n+ assertEquals(\"seconds since 1970-01-01 00:00:00\", headers.get(0).attr.get(\"desc\"));\n+ assertEquals(2, headers.get(1).attr.size());\n+ assertEquals(\"ts,hms,hhmmss\", headers.get(1).attr.get(\"alias\"));\n+ assertEquals(\"time in HH:MM:SS\", headers.get(1).attr.get(\"desc\"));\n+\n+ // check row's timestamp\n+ table.startRow();\n+ table.endRow();\n+ List<List<Table.Cell>> rows = table.getRows();\n+ assertEquals(1, rows.size());\n+ assertEquals(2, rows.get(0).size());\n+ assertThat(rows.get(0).get(0).value, instanceOf(Long.class));\n+\n+ }\n+\n private Table getTableWithHeaders() {\n Table table = new Table();\n table.startHeaders();", "filename": "core/src/test/java/org/elasticsearch/common/TableTests.java", "status": "modified" }, { "diff": "@@ -48,17 +48,19 @@ public class RestTableTests extends ESTestCase {\n private static final String CONTENT_TYPE = \"Content-Type\";\n private static final String ACCEPT = \"Accept\";\n private static final String TEXT_PLAIN = \"text/plain; charset=UTF-8\";\n- private static final String TEXT_TABLE_BODY = \"foo foo foo foo foo foo\\n\";\n+ private static final String TEXT_TABLE_BODY = \"foo foo foo foo foo foo foo foo\\n\";\n private static final String JSON_TABLE_BODY = \"[{\\\"bulk.foo\\\":\\\"foo\\\",\\\"bulk.bar\\\":\\\"foo\\\",\\\"aliasedBulk\\\":\\\"foo\\\",\" +\n \"\\\"aliasedSecondBulk\\\":\\\"foo\\\",\\\"unmatched\\\":\\\"foo\\\",\" +\n- \"\\\"invalidAliasesBulk\\\":\\\"foo\\\"}]\";\n+ \"\\\"invalidAliasesBulk\\\":\\\"foo\\\",\\\"timestamp\\\":\\\"foo\\\",\\\"epoch\\\":\\\"foo\\\"}]\";\n private static final String YAML_TABLE_BODY = \"---\\n\" +\n \"- bulk.foo: \\\"foo\\\"\\n\" +\n \" bulk.bar: \\\"foo\\\"\\n\" +\n \" aliasedBulk: \\\"foo\\\"\\n\" +\n \" aliasedSecondBulk: \\\"foo\\\"\\n\" +\n \" unmatched: \\\"foo\\\"\\n\" +\n- \" invalidAliasesBulk: \\\"foo\\\"\\n\";\n+ \" invalidAliasesBulk: \\\"foo\\\"\\n\" +\n+ \" timestamp: \\\"foo\\\"\\n\" +\n+ \" epoch: \\\"foo\\\"\\n\";\n private Table table = new Table();\n private FakeRestRequest restRequest = new FakeRestRequest();\n \n@@ -74,6 +76,9 @@ public void setup() {\n table.addCell(\"unmatched\", \"alias:un.matched;desc:bar\");\n // invalid alias\n table.addCell(\"invalidAliasesBulk\", \"alias:,,,;desc:bar\");\n+ // timestamp\n+ table.addCell(\"timestamp\", \"alias:ts\");\n+ table.addCell(\"epoch\", \"alias:t\");\n table.endHeaders();\n }\n \n@@ -129,6 +134,17 @@ public void testIgnoreContentType() throws Exception {\n TEXT_TABLE_BODY);\n }\n \n+ public void 
testThatDisplayHeadersWithoutTimestamp() throws Exception {\n+ restRequest.params().put(\"h\", \"timestamp,epoch,bulk*\");\n+ restRequest.params().put(\"ts\", \"0\");\n+ List<RestTable.DisplayHeader> headers = buildDisplayHeaders(table, restRequest);\n+\n+ List<String> headerNames = getHeaderNames(headers);\n+ assertThat(headerNames, contains(\"bulk.foo\", \"bulk.bar\", \"aliasedBulk\", \"aliasedSecondBulk\"));\n+ assertThat(headerNames, not(hasItem(\"timestamp\")));\n+ assertThat(headerNames, not(hasItem(\"epoch\")));\n+ }\n+\n private RestResponse assertResponseContentType(Map<String, String> headers, String mediaType) throws Exception {\n FakeRestRequest requestWithAcceptHeader = new FakeRestRequest(headers);\n table.startRow();\n@@ -138,6 +154,8 @@ private RestResponse assertResponseContentType(Map<String, String> headers, Stri\n table.addCell(\"foo\");\n table.addCell(\"foo\");\n table.addCell(\"foo\");\n+ table.addCell(\"foo\");\n+ table.addCell(\"foo\");\n table.endRow();\n RestResponse response = buildResponse(table, new AbstractRestChannel(requestWithAcceptHeader, true) {\n @Override", "filename": "core/src/test/java/org/elasticsearch/rest/action/support/RestTableTests.java", "status": "modified" }, { "diff": "@@ -50,3 +50,30 @@\n \\n\n )+\n $/\n+\n+\n+---\n+\"With ts parameter\":\n+\n+ - do:\n+ cat.health:\n+ ts: 0\n+\n+ - match:\n+ $body: |\n+ /^\n+ ( \\S+ \\s+ # cluster\n+ \\w+ \\s+ # status\n+ \\d+ \\s+ # node.total\n+ \\d+ \\s+ # node.data\n+ \\d+ \\s+ # shards\n+ \\d+ \\s+ # pri\n+ \\d+ \\s+ # relo\n+ \\d+ \\s+ # init\n+ \\d+ \\s+ # unassign\n+ \\d+ \\s+ # pending_tasks\n+ (-|\\d+[.]\\d+ms|s) \\s+ # max task waiting time\n+ \\d+\\.\\d+% # active shards percent\n+ \\n\n+ )+\n+ $/", "filename": "rest-api-spec/src/main/resources/rest-api-spec/test/cat.health/10_basic.yaml", "status": "modified" } ] }
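The PR above concentrates the epoch/timestamp columns in `Table.startHeadersWithTimestamp()` and teaches `RestTable.buildDisplayHeaders` to drop them when the request carries `ts=0` (exercised by the `GET _cat/health?ts=0` REST test). The following is a minimal, self-contained sketch of that filtering idea in plain Java: `headersWithTimestamp`, `keepHeader`, and the `params` map are hypothetical stand-ins rather than the actual Elasticsearch classes, and the boolean parsing is simplified compared to `request.paramAsBoolean("ts", true)`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Minimal sketch of the ts=0 behaviour: timestamp columns are added first,
// then filtered out again when the request disables them.
public class CatTimestampSketch {

    // Stand-in for Table.startHeadersWithTimestamp(): "epoch" and "timestamp" come first.
    static List<String> headersWithTimestamp(String... otherHeaders) {
        List<String> headers = new ArrayList<>(Arrays.asList("epoch", "timestamp"));
        headers.addAll(Arrays.asList(otherHeaders));
        return headers;
    }

    // Stand-in for RestTable#checkOutputTimestamp: keep every header unless it is a
    // timestamp column and the request asked for ts=0 (the real code uses paramAsBoolean).
    static boolean keepHeader(String header, Map<String, String> params) {
        boolean showTimestamp = !"0".equals(params.getOrDefault("ts", "1"));
        return showTimestamp || (!"epoch".equals(header) && !"timestamp".equals(header));
    }

    public static void main(String[] args) {
        List<String> headers = headersWithTimestamp("cluster", "status", "node.total");
        Map<String, String> request = Map.of("ts", "0"); // e.g. GET /_cat/health?ts=0
        headers.stream().filter(h -> keepHeader(h, request)).forEach(System.out::println);
        // prints only: cluster, status, node.total
    }
}
```

With `ts` left at its default, the same sketch keeps the epoch and timestamp columns, matching the unchanged default output of the cat APIs.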
{ "body": "Discovery does not work anymore in azure and ec2.\n\nIt's caused by the following commit which does not call anymore `addUnicastHostProvider`.\n- AWS: https://github.com/elastic/elasticsearch/commit/40f119d85a4eaa39d0a6e594e46f70de725557f0#diff-767ee396aa86de938ebb72b4c2f359a0L35\n- Azure: https://github.com/elastic/elasticsearch/commit/40f119d85a4eaa39d0a6e594e46f70de725557f0#diff-56a7239bf69dfcd365858d333edca510L43\n\n@rjernst Could you have a look at it please?\n\ncc @drewr \n", "comments": [ { "body": "This should be simple to fix. The `onModule(DiscoveryModule)` methods in the aws and azure plugins just need to add the provider there (the discovery module has this method now). This is what I intended, but missed it since all tests pass without this! We really need some tests here...\n", "created_at": "2015-09-10T21:18:31Z" }, { "body": "I still think some mocks for integs that listen on a socket is the way to go. means something on a socket in pre-integration-test and the code really treats it like AWS X service or azure Y service. \n", "created_at": "2015-09-10T21:22:10Z" }, { "body": "Definitely. I started playing with [this lib](https://github.com/treelogic-swe/aws-mock) (running that in a Jetty container to simulate AWS calls). Was working fine at the beginning but at the end I was missing some important methods.\nI started to contribute a bit but then started something else... \n", "created_at": "2015-09-10T21:27:23Z" }, { "body": "Reopening as the fix for Azure was wrong. \nReverted with https://github.com/elastic/elasticsearch/commit/163c34127f877c70a0cb5bf95c64c6efada484eb\nTests are now failing.\n", "created_at": "2015-09-11T08:27:46Z" } ], "number": 13492, "title": "[ec2/azure] discovery does not work anymore from 2.0.0-beta1" }
{ "body": "Closes #13492\n", "number": 13501, "review_comments": [], "title": "EC2/Azure discovery plugins must declare their UnicastHostsProvider" }
{ "commits": [ { "message": "[ec2/azure] discovery plugins must declare their UnicastHostsProvider\n\nCloses #13492" } ], "files": [ { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.inject.Module;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.DiscoveryModule;\n+import org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider;\n import org.elasticsearch.discovery.ec2.Ec2Discovery;\n import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;\n import org.elasticsearch.plugins.Plugin;\n@@ -82,5 +83,6 @@ public void onModule(RepositoriesModule repositoriesModule) {\n \n public void onModule(DiscoveryModule discoveryModule) {\n discoveryModule.addDiscoveryType(\"ec2\", Ec2Discovery.class);\n+ discoveryModule.addUnicastHostProvider(AwsEc2UnicastHostsProvider.class);\n }\n }", "filename": "plugins/cloud-aws/src/main/java/org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.DiscoveryModule;\n import org.elasticsearch.discovery.azure.AzureDiscovery;\n+import org.elasticsearch.discovery.azure.AzureUnicastHostsProvider;\n import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;\n import org.elasticsearch.index.store.IndexStoreModule;\n import org.elasticsearch.index.store.smbmmapfs.SmbMmapFsIndexStore;\n@@ -80,6 +81,7 @@ public void onModule(RepositoriesModule module) {\n \n public void onModule(DiscoveryModule discoveryModule) {\n discoveryModule.addDiscoveryType(\"azure\", AzureDiscovery.class);\n+ discoveryModule.addUnicastHostProvider(AzureUnicastHostsProvider.class);\n }\n \n public void onModule(IndexStoreModule storeModule) {", "filename": "plugins/cloud-azure/src/main/java/org/elasticsearch/plugin/cloud/azure/CloudAzurePlugin.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.discovery.DiscoveryModule;\n import org.elasticsearch.discovery.gce.GceDiscovery;\n+import org.elasticsearch.discovery.gce.GceUnicastHostsProvider;\n import org.elasticsearch.plugins.Plugin;\n \n import java.util.ArrayList;\n@@ -72,6 +73,7 @@ public Collection<Class<? extends LifecycleComponent>> nodeServices() {\n \n public void onModule(DiscoveryModule discoveryModule) {\n discoveryModule.addDiscoveryType(\"gce\", GceDiscovery.class);\n+ discoveryModule.addUnicastHostProvider(GceUnicastHostsProvider.class);\n }\n \n }", "filename": "plugins/cloud-gce/src/main/java/org/elasticsearch/plugin/cloud/gce/CloudGcePlugin.java", "status": "modified" } ] }
{ "body": "If a pipeline is defined as a top-level agg (e.g a `sum_bucket` at the very root), it won't have any parent aggregations and thus sidesteps the current parser validation in AggregatorsParser. The final clause which deals with top-level pipelines needs extra logic to validate pipelines ... but it will likely require some changes since the current pipeline validation methods require an agg parent.\n\n/cc @colings86 \n", "comments": [], "number": 13179, "title": "Top-level pipeline aggs are not validated" }
{ "body": "Previously PipelineAggregatorFactory's at the root to the agg tree (top-level aggs) were not validated. This commit adds a call to PipelineAggregatoFactory.validate() for that case.\n\nCloses #13179\n", "number": 13475, "review_comments": [ { "body": "Feels a bit weird that one method is returning a list and the other one an array\n", "created_at": "2015-09-10T14:48:11Z" }, { "body": "agreed, should I return a list from both and do the conversion to an array in the AggParsers class?\n", "created_at": "2015-09-10T14:52:13Z" }, { "body": "Why do we need to convert to an array? In general I think we should try to use either lists or arrays everywhere instead of doing back-and-forth conversions? (not for the cost of the conversions, more for consistency) Maybe this should be a different PR though, it looks like an existing issue to me. It was just weird when looking at the diff\n", "created_at": "2015-09-10T14:54:56Z" }, { "body": "I agree and would prefer to use a list everywhere for aggs but I would rather not tackle that in this PR. I think the idea is that its a list here in the Builder as it needs to be dynamic so you can call addAggregator() but in the build() method it becomes an array because it doesn't change after that.\n", "created_at": "2015-09-10T14:58:07Z" } ], "title": "Pipeline Aggregations at the root of the agg tree are now validated" }
{ "commits": [ { "message": "Aggregations: Pipeline Aggregations at the root of the agg tree are now validated\n\nPreviously PipelineAggregatorFactory's at the root to the agg tree (top-level aggs) were not validated. This commit adds a call to PipelineAggregatoFactory.validate() for that case.\n\nCloses #13179" } ], "files": [ { "diff": "@@ -244,5 +244,13 @@ private void resolvePipelineAggregatorOrder(Map<String, AggregatorFactory> aggFa\n orderedPipelineAggregators.add(factory);\n }\n }\n+\n+ AggregatorFactory[] getAggregatorFactories() {\n+ return this.factories.toArray(new AggregatorFactory[this.factories.size()]);\n+ }\n+\n+ List<PipelineAggregatorFactory> getPipelineAggregatorFactories() {\n+ return this.pipelineAggregatorFactories;\n+ }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java", "status": "modified" }, { "diff": "@@ -80,7 +80,7 @@ public Aggregator.Parser parser(String type) {\n /**\n * Returns the parser that is registered under the given pipeline aggregator\n * type.\n- * \n+ *\n * @param type\n * The pipeline aggregator type\n * @return The parser associated with the given pipeline aggregator type.\n@@ -228,6 +228,10 @@ private AggregatorFactories parseAggregators(XContentParser parser, SearchContex\n throw new SearchParseException(context, \"Aggregation [\" + aggregationName + \"] cannot define sub-aggregations\",\n parser.getTokenLocation());\n }\n+ if (level == 0) {\n+ pipelineAggregatorFactory\n+ .validate(null, factories.getAggregatorFactories(), factories.getPipelineAggregatorFactories());\n+ }\n factories.addPipelineAggregator(pipelineAggregatorFactory);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/AggregatorParsers.java", "status": "modified" }, { "diff": "@@ -19,10 +19,10 @@\n * under the License.\n */\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;\n@@ -41,9 +41,9 @@\n import static org.elasticsearch.search.aggregations.AggregationBuilders.sum;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.percentilesBucket;\n-import static org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders.sumBucket;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.Matchers.containsString;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.core.IsNull.notNullValue;\n@@ -433,30 +433,22 @@ public void testWrongPercents() throws Exception {\n }\n \n @Test\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/13179\")\n public void testBadPercents() throws Exception {\n Double[] badPercents = {-1.0, 110.0};\n \n try {\n- SearchResponse response = client().prepareSearch(\"idx\")\n+ client().prepareSearch(\"idx\")\n 
.addAggregation(terms(\"terms\").field(\"tag\").subAggregation(sum(\"sum\").field(SINGLE_VALUED_FIELD_NAME)))\n .addAggregation(percentilesBucket(\"percentiles_bucket\")\n .setBucketsPaths(\"terms>sum\")\n .percents(badPercents)).execute().actionGet();\n \n- assertSearchResponse(response);\n-\n- Terms terms = response.getAggregations().get(\"terms\");\n- assertThat(terms, notNullValue());\n- assertThat(terms.getName(), equalTo(\"terms\"));\n- List<Terms.Bucket> buckets = terms.getBuckets();\n- assertThat(buckets.size(), equalTo(0));\n-\n- PercentilesBucket percentilesBucketValue = response.getAggregations().get(\"percentiles_bucket\");\n-\n fail(\"Illegal percent's were provided but no exception was thrown.\");\n } catch (SearchPhaseExecutionException exception) {\n- // All good\n+ ElasticsearchException[] rootCauses = exception.guessRootCauses();\n+ assertThat(rootCauses.length, equalTo(1));\n+ ElasticsearchException rootCause = rootCauses[0];\n+ assertThat(rootCause.getMessage(), containsString(\"must only contain non-null doubles from 0.0-100.0 inclusive\"));\n }\n \n }\n@@ -466,7 +458,7 @@ public void testBadPercents_asSubAgg() throws Exception {\n Double[] badPercents = {-1.0, 110.0};\n \n try {\n- SearchResponse response = client()\n+ client()\n .prepareSearch(\"idx\")\n .addAggregation(\n terms(\"terms\")\n@@ -479,11 +471,12 @@ public void testBadPercents_asSubAgg() throws Exception {\n .setBucketsPaths(\"histo>_count\")\n .percents(badPercents))).execute().actionGet();\n \n- PercentilesBucket percentilesBucketValue = response.getAggregations().get(\"percentiles_bucket\");\n-\n fail(\"Illegal percent's were provided but no exception was thrown.\");\n } catch (SearchPhaseExecutionException exception) {\n- // All good\n+ ElasticsearchException[] rootCauses = exception.guessRootCauses();\n+ assertThat(rootCauses.length, equalTo(1));\n+ ElasticsearchException rootCause = rootCauses[0];\n+ assertThat(rootCause.getMessage(), containsString(\"must only contain non-null doubles from 0.0-100.0 inclusive\"));\n }\n \n }", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/pipeline/PercentilesBucketIT.java", "status": "modified" } ] }
{ "body": "The delete-by-query plugin wraps all queries in a `query` element, so the parsed queries are wrapperd in a QueryWrapperFilter in the end. This does not change the matching documents but prevents me from removing the `query` query which is deprecated (which takes a query and turns it into a filter).\n", "comments": [], "number": 13326, "title": "The delete-by-query plugin wraps all queries in a `query` element" }
{ "body": "This removes support for the `and`, `or`, `fquery`, `limit` and `filtered`\nqueries. `query` is still supported until #13326 is fixed.\n", "number": 13418, "review_comments": [ { "body": "can you leave this class for now, add the same TODO as in the parser? We can't have a parser without a builder in the query-refactoring branch.\n", "created_at": "2015-09-09T12:11:21Z" }, { "body": "ok\n", "created_at": "2015-09-09T12:37:44Z" } ], "title": "Remove support for deprecated queries." }
{ "commits": [ { "message": "Remove support for deprecated queries.\n\nThis removes support for the `and`, `or`, `fquery`, `limit` and `filtered`\nqueries. `query` is still supported until #13326 is fixed." } ], "files": [ { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.index.query;\n \n-import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.geo.GeoPoint;\n@@ -352,19 +351,6 @@ public static FieldMaskingSpanQueryBuilder fieldMaskingSpanQuery(SpanQueryBuilde\n return new FieldMaskingSpanQueryBuilder(query, field);\n }\n \n- /**\n- * A query that applies a filter to the results of another query.\n- *\n- * @param queryBuilder The query to apply the filter to\n- * @param filterBuilder The filter to apply on the query\n- * @deprecated Use {@link #boolQuery()} instead with a {@code must} clause\n- * for the query and a {@code filter} clause for the filter.\n- */\n- @Deprecated\n- public static FilteredQueryBuilder filteredQuery(@Nullable QueryBuilder queryBuilder, @Nullable QueryBuilder filterBuilder) {\n- return new FilteredQueryBuilder(queryBuilder, filterBuilder);\n- }\n-\n /**\n * A query that wraps another query and simply returns a constant score equal to the\n * query boost for every document in the query.\n@@ -777,41 +763,6 @@ public static NotQueryBuilder notQuery(QueryBuilder filter) {\n return new NotQueryBuilder(filter);\n }\n \n- /**\n- * Create a new {@link OrQueryBuilder} composed of the given filters.\n- * @deprecated Use {@link #boolQuery()} instead\n- */\n- @Deprecated\n- public static OrQueryBuilder orQuery(QueryBuilder... filters) {\n- return new OrQueryBuilder(filters);\n- }\n-\n- /**\n- * Create a new {@link AndQueryBuilder} composed of the given filters.\n- * @deprecated Use {@link #boolQuery()} instead\n- */\n- @Deprecated\n- public static AndQueryBuilder andQuery(QueryBuilder... 
filters) {\n- return new AndQueryBuilder(filters);\n- }\n-\n- /**\n- * @deprecated Use {@link SearchRequestBuilder#setTerminateAfter(int)} instead\n- */\n- @Deprecated\n- public static LimitQueryBuilder limitQuery(int limit) {\n- return new LimitQueryBuilder(limit);\n- }\n-\n- /**\n- * @deprecated Useless now that queries and filters are merged: pass the\n- * query as a filter directly.\n- */\n- @Deprecated\n- public static QueryFilterBuilder queryFilter(QueryBuilder query) {\n- return new QueryFilterBuilder(query);\n- }\n-\n private QueryBuilders() {\n \n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryBuilders.java", "status": "modified" }, { "diff": "@@ -28,13 +28,12 @@\n * @deprecated Useless now that queries and filters are merged: pass the\n * query as a filter directly.\n */\n+//TODO: remove when https://github.com/elastic/elasticsearch/issues/13326 is fixed\n @Deprecated\n public class QueryFilterBuilder extends QueryBuilder {\n \n private final QueryBuilder queryBuilder;\n \n- private String queryName;\n-\n /**\n * A filter that simply wraps a query.\n *\n@@ -44,27 +43,9 @@ public QueryFilterBuilder(QueryBuilder queryBuilder) {\n this.queryBuilder = queryBuilder;\n }\n \n- /**\n- * Sets the query name for the filter that can be used when searching for matched_filters per hit.\n- */\n- public QueryFilterBuilder queryName(String queryName) {\n- this.queryName = queryName;\n- return this;\n- }\n-\n @Override\n protected void doXContent(XContentBuilder builder, Params params) throws IOException {\n- if (queryName == null) {\n- builder.field(QueryFilterParser.NAME);\n- queryBuilder.toXContent(builder, params);\n- } else {\n- builder.startObject(FQueryFilterParser.NAME);\n- builder.field(\"query\");\n- queryBuilder.toXContent(builder, params);\n- if (queryName != null) {\n- builder.field(\"_name\", queryName);\n- }\n- builder.endObject();\n- }\n+ builder.field(QueryFilterParser.NAME);\n+ queryBuilder.toXContent(builder, params);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryFilterBuilder.java", "status": "modified" }, { "diff": "@@ -19,12 +19,12 @@\n \n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.inject.Inject;\n \n import java.io.IOException;\n \n+// TODO: remove when https://github.com/elastic/elasticsearch/issues/13326 is fixed\n @Deprecated\n public class QueryFilterParser implements QueryParser {\n \n@@ -41,6 +41,6 @@ public String[] names() {\n \n @Override\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n- return new ConstantScoreQuery(parseContext.parseInnerQuery());\n+ return parseContext.parseInnerQuery();\n }\n }\n\\ No newline at end of file", "filename": "core/src/main/java/org/elasticsearch/index/query/QueryFilterParser.java", "status": "modified" }, { "diff": "@@ -82,7 +82,6 @@ private void registerBuiltinQueryParsers() {\n registerQueryParser(RangeQueryParser.class);\n registerQueryParser(PrefixQueryParser.class);\n registerQueryParser(WildcardQueryParser.class);\n- registerQueryParser(FilteredQueryParser.class);\n registerQueryParser(ConstantScoreQueryParser.class);\n registerQueryParser(SpanTermQueryParser.class);\n registerQueryParser(SpanNotQueryParser.class);\n@@ -101,17 +100,13 @@ private void registerBuiltinQueryParsers() {\n registerQueryParser(SimpleQueryStringParser.class);\n registerQueryParser(TemplateQueryParser.class);\n 
registerQueryParser(TypeQueryParser.class);\n- registerQueryParser(LimitQueryParser.class);\n registerQueryParser(ScriptQueryParser.class);\n registerQueryParser(GeoDistanceQueryParser.class);\n registerQueryParser(GeoDistanceRangeQueryParser.class);\n registerQueryParser(GeoBoundingBoxQueryParser.class);\n registerQueryParser(GeohashCellQuery.Parser.class);\n registerQueryParser(GeoPolygonQueryParser.class);\n registerQueryParser(QueryFilterParser.class);\n- registerQueryParser(FQueryFilterParser.class);\n- registerQueryParser(AndQueryParser.class);\n- registerQueryParser(OrQueryParser.class);\n registerQueryParser(NotQueryParser.class);\n registerQueryParser(ExistsQueryParser.class);\n registerQueryParser(MissingQueryParser.class);", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesModule.java", "status": "modified" }, { "diff": "@@ -31,9 +31,8 @@\n import java.util.concurrent.ExecutionException;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n-import static org.elasticsearch.index.query.QueryBuilders.andQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.notQuery;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.CoreMatchers.equalTo;\n@@ -119,11 +118,8 @@ public void onFailure(Throwable e) {\n assertHitCount(\n client().prepareSearch()\n .setQuery(matchAllQuery())\n- .setPostFilter(\n- andQuery(\n- matchAllQuery(),\n- notQuery(andQuery(termQuery(\"field1\", \"value1\"),\n- termQuery(\"field1\", \"value2\"))))).get(),\n+ .setPostFilter(boolQuery().must(matchAllQuery()).mustNot(boolQuery().must(termQuery(\"field1\", \"value1\")).must(termQuery(\"field1\", \"value2\"))))\n+ .get(),\n 3l);\n }\n latch.await();", "filename": "core/src/test/java/org/elasticsearch/action/admin/HotThreadsIT.java", "status": "modified" }, { "diff": "@@ -36,7 +36,7 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n@@ -167,9 +167,9 @@ public void run() {\n for (int j = 0; j < QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\"child\", termQuery(\"field2\", \"value\" + random.nextInt(numValues)))\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\"child\", termQuery(\"field2\", \"value\" + random.nextInt(numValues)))\n )\n )\n .execute().actionGet();\n@@ -184,10 +184,9 @@ public void run() {\n for (int j = 1; j <= QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\"child\", matchAllQuery())\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\"child\", matchAllQuery()))\n )\n 
.execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {", "filename": "core/src/test/java/org/elasticsearch/benchmark/search/child/ChildSearchAndIndexingBenchmark.java", "status": "modified" }, { "diff": "@@ -130,10 +130,9 @@ public static void main(String[] args) throws Exception {\n for (int j = 0; j < QUERY_WARMUP; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\"child\", termQuery(\"field2\", parentChildIndexGenerator.getQueryValue()))\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\"child\", termQuery(\"field2\", parentChildIndexGenerator.getQueryValue())))\n )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n@@ -145,10 +144,9 @@ public static void main(String[] args) throws Exception {\n for (int j = 0; j < QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\"child\", termQuery(\"field2\", parentChildIndexGenerator.getQueryValue()))\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\"child\", termQuery(\"field2\", parentChildIndexGenerator.getQueryValue())))\n )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n@@ -166,10 +164,9 @@ public static void main(String[] args) throws Exception {\n for (int j = 1; j <= QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\"child\", matchAllQuery())\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\"child\", matchAllQuery()))\n )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n@@ -226,10 +223,9 @@ public static void main(String[] args) throws Exception {\n for (int j = 0; j < QUERY_WARMUP; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasParentQuery(\"parent\", termQuery(\"field1\", parentChildIndexGenerator.getQueryValue()))\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasParentQuery(\"parent\", termQuery(\"field1\", parentChildIndexGenerator.getQueryValue())))\n )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n@@ -241,10 +237,9 @@ public static void main(String[] args) throws Exception {\n for (int j = 1; j <= QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasParentQuery(\"parent\", termQuery(\"field1\", parentChildIndexGenerator.getQueryValue()))\n- )\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasParentQuery(\"parent\", termQuery(\"field1\", parentChildIndexGenerator.getQueryValue())))\n )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n@@ -261,10 +256,11 @@ public static void main(String[] args) throws Exception {\n totalQueryTime = 0;\n for (int j = 1; j <= QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n- .setQuery(filteredQuery(\n- matchAllQuery(),\n- hasParentQuery(\"parent\", matchAllQuery())\n- ))\n+ .setQuery(\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasParentQuery(\"parent\", matchAllQuery()))\n+ )\n .execute().actionGet();\n if (searchResponse.getFailedShards() > 0) {\n System.err.println(\"Search Failures \" + Arrays.toString(searchResponse.getShardFailures()));", "filename": 
"core/src/test/java/org/elasticsearch/benchmark/search/child/ChildSearchBenchmark.java", "status": "modified" }, { "diff": "@@ -40,7 +40,7 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n@@ -158,7 +158,7 @@ public static void main(String[] args) throws Exception {\n for (int i = 1; i < PARENT_COUNT; i *= 2) {\n for (int j = 0; j < QUERY_COUNT; j++) {\n SearchResponse searchResponse = client.prepareSearch(indexName)\n- .setQuery(filteredQuery(matchAllQuery(), hasChildQuery(\"child\", matchQuery(\"field2\", i))))\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchQuery(\"field2\", i))))\n .execute().actionGet();\n if (searchResponse.getHits().totalHits() != i) {\n System.err.println(\"--> mismatch on hits\");", "filename": "core/src/test/java/org/elasticsearch/benchmark/search/child/ChildSearchShortCircuitBenchmark.java", "status": "modified" }, { "diff": "@@ -31,7 +31,7 @@\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n \n /**\n@@ -191,7 +191,7 @@ public static void main(String[] args) throws Exception {\n public static void run(Client client, GeoDistance geoDistance, String optimizeBbox) {\n client.prepareSearch() // from NY\n .setSize(0)\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\")\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(geoDistanceQuery(\"location\")\n .distance(\"2km\")\n .optimizeBbox(optimizeBbox)\n .geoDistance(geoDistance)", "filename": "core/src/test/java/org/elasticsearch/benchmark/search/geo/GeoDistanceSearchBenchmark.java", "status": "modified" }, { "diff": "@@ -414,7 +414,7 @@ public void testExistsFilter() throws IOException, ExecutionException, Interrupt\n client().prepareIndex(indexName, \"type1\", \"3\").setSource(jsonBuilder().startObject().startObject(\"obj2\").field(\"obj2_val\", \"1\").endObject().field(\"y1\", \"y_1\").field(\"field2\", \"value2_3\").endObject()),\n client().prepareIndex(indexName, \"type1\", \"4\").setSource(jsonBuilder().startObject().startObject(\"obj2\").field(\"obj2_val\", \"1\").endObject().field(\"y2\", \"y_2\").field(\"field3\", \"value3_4\").endObject()));\n \n- CountResponse countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), existsQuery(\"field1\"))).get();\n+ CountResponse countResponse = client().prepareCount().setQuery(existsQuery(\"field1\")).get();\n assertHitCount(countResponse, 2l);\n \n countResponse = client().prepareCount().setQuery(constantScoreQuery(existsQuery(\"field1\"))).get();\n@@ -423,24 +423,24 @@ public void testExistsFilter() throws IOException, ExecutionException, Interrupt\n countResponse = 
client().prepareCount().setQuery(queryStringQuery(\"_exists_:field1\")).get();\n assertHitCount(countResponse, 2l);\n \n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), existsQuery(\"field2\"))).get();\n+ countResponse = client().prepareCount().setQuery(existsQuery(\"field2\")).get();\n assertHitCount(countResponse, 2l);\n \n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), existsQuery(\"field3\"))).get();\n+ countResponse = client().prepareCount().setQuery(existsQuery(\"field3\")).get();\n assertHitCount(countResponse, 1l);\n \n // wildcard check\n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), existsQuery(\"x*\"))).get();\n+ countResponse = client().prepareCount().setQuery(existsQuery(\"x*\")).get();\n assertHitCount(countResponse, 2l);\n \n // object check\n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), existsQuery(\"obj1\"))).get();\n+ countResponse = client().prepareCount().setQuery(existsQuery(\"obj1\")).get();\n assertHitCount(countResponse, 2l);\n \n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), missingQuery(\"field1\"))).get();\n+ countResponse = client().prepareCount().setQuery(missingQuery(\"field1\")).get();\n assertHitCount(countResponse, 2l);\n \n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), missingQuery(\"field1\"))).get();\n+ countResponse = client().prepareCount().setQuery(missingQuery(\"field1\")).get();\n assertHitCount(countResponse, 2l);\n \n countResponse = client().prepareCount().setQuery(constantScoreQuery(missingQuery(\"field1\"))).get();\n@@ -450,11 +450,11 @@ public void testExistsFilter() throws IOException, ExecutionException, Interrupt\n assertHitCount(countResponse, 2l);\n \n // wildcard check\n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), missingQuery(\"x*\"))).get();\n+ countResponse = client().prepareCount().setQuery(missingQuery(\"x*\")).get();\n assertHitCount(countResponse, 2l);\n \n // object check\n- countResponse = client().prepareCount().setQuery(filteredQuery(matchAllQuery(), missingQuery(\"obj1\"))).get();\n+ countResponse = client().prepareCount().setQuery(missingQuery(\"obj1\")).get();\n assertHitCount(countResponse, 2l);\n if (!backwardsCluster().upgradeOneNode()) {\n break;", "filename": "core/src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityIT.java", "status": "modified" }, { "diff": "@@ -27,9 +27,8 @@\n import org.apache.lucene.queries.ExtendedCommonTermsQuery;\n import org.apache.lucene.queries.TermsQuery;\n import org.apache.lucene.search.*;\n-import org.apache.lucene.search.BooleanClause.Occur;\n-import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.apache.lucene.search.spans.*;\n+import org.apache.lucene.search.BooleanClause.Occur;\n import org.apache.lucene.spatial.prefix.IntersectsPrefixTreeFilter;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n@@ -52,8 +51,6 @@\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentHelper;\n-import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -533,39 +530,6 @@ public 
void testPrefixBoostQuery() throws IOException {\n assertThat((double) prefixQuery.getBoost(), closeTo(1.2, 0.00001));\n }\n \n- @Test\n- public void testPrefiFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), prefixQuery(\"name.first\", \"sh\"))).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new PrefixQuery(new Term(\"name.first\", \"sh\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testPrefiFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/prefix-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new PrefixQuery(new Term(\"name.first\", \"sh\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testPrefixNamedFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/prefix-filter-named.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new PrefixQuery(new Term(\"name.first\", \"sh\")));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n @Test\n public void testPrefixQueryBoostQueryBuilder() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n@@ -626,62 +590,6 @@ public void testRegexpQueryWithMaxDeterminizedStates() throws IOException {\n assertThat(regexpQuery.getField(), equalTo(\"name.first\"));\n }\n \n- @Test\n- public void testRegexpFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/regexp-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new RegexpQuery(new Term(\"name.first\", \"s.*y\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testRegexpFilteredQueryWithMaxDeterminizedStates() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/regexp-filter-max-determinized-states.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new RegexpQuery(new Term(\"name.first\", \"s.*y\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testNamedRegexpFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/regexp-filter-named.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new RegexpQuery(new Term(\"name.first\", \"s.*y\")));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n- @Test\n- 
public void testRegexpWithFlagsFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/regexp-filter-flags.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new RegexpQuery(new Term(\"name.first\", \"s.*y\")));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n- @Test\n- public void testNamedAndCachedRegexpWithFlagsFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/regexp-filter-flags-named-cached.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new RegexpQuery(new Term(\"name.first\", \"s.*y\")));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n @Test\n public void testRegexpBoostQuery() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n@@ -767,186 +675,20 @@ public void testRange2Query() throws IOException {\n assertThat(rangeQuery.includesMax(), equalTo(false));\n }\n \n- @Test\n- public void testRangeFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), rangeQuery(\"age\").from(23).to(54).includeLower(true).includeUpper(false))).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- NumericRangeQuery.newLongRange(\"age\", 23L, 54L, true, false));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testRangeFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/range-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- NumericRangeQuery.newLongRange(\"age\", 23L, 54L, true, false));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testRangeNamedFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/range-filter-named.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- NumericRangeQuery.newLongRange(\"age\", 23L, 54L, true, false));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testBoolFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), boolQuery().must(termQuery(\"name.first\", \"shay1\")).must(termQuery(\"name.first\", \"shay4\")).mustNot(termQuery(\"name.first\", \"shay2\")).should(termQuery(\"name.first\", \"shay3\")))).query();\n-\n- BooleanQuery.Builder filter = new BooleanQuery.Builder();\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.MUST);\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay2\")), Occur.MUST_NOT);\n- filter.add(new TermQuery(new 
Term(\"name.first\", \"shay3\")), Occur.SHOULD);\n- filter.setMinimumNumberShouldMatch(1);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- filter.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testBoolFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/bool-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder filter = new BooleanQuery.Builder();\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.MUST);\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay2\")), Occur.MUST_NOT);\n- filter.add(new TermQuery(new Term(\"name.first\", \"shay3\")), Occur.SHOULD);\n- filter.setMinimumNumberShouldMatch(1);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- filter.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testAndFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(matchAllQuery(), andQuery(termQuery(\"name.first\", \"shay1\"), termQuery(\"name.first\", \"shay4\")))).query();\n- BooleanQuery.Builder and = new BooleanQuery.Builder();\n- and.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.MUST);\n- and.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- BooleanQuery.Builder builder = new BooleanQuery.Builder();\n- builder.add(new MatchAllDocsQuery(), Occur.MUST);\n- builder.add(and.build(), Occur.FILTER);\n- assertEquals(builder.build(), parsedQuery);\n- }\n-\n- @Test\n- public void testAndFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/and-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder and = new BooleanQuery.Builder();\n- and.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.MUST);\n- and.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- and.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testAndNamedFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/and-filter-named.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder and = new BooleanQuery.Builder();\n- and.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.MUST);\n- and.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- and.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testAndFilteredQuery2() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/and-filter2.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder and = new BooleanQuery.Builder();\n- and.add(new TermQuery(new Term(\"name.first\", \"shay1\")), 
Occur.MUST);\n- and.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.MUST);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- and.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testOrFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(matchAllQuery(), orQuery(termQuery(\"name.first\", \"shay1\"), termQuery(\"name.first\", \"shay4\")))).query();\n- BooleanQuery.Builder or = new BooleanQuery.Builder();\n- or.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.SHOULD);\n- or.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.SHOULD);\n- BooleanQuery.Builder builder = new BooleanQuery.Builder();\n- builder.add(new MatchAllDocsQuery(), Occur.MUST);\n- builder.add(or.build(), Occur.FILTER);\n- assertEquals(builder.build(), parsedQuery);\n- }\n-\n- @Test\n- public void testOrFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/or-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder or = new BooleanQuery.Builder();\n- or.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.SHOULD);\n- or.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.SHOULD);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- or.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testOrFilteredQuery2() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/or-filter2.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery.Builder or = new BooleanQuery.Builder();\n- or.add(new TermQuery(new Term(\"name.first\", \"shay1\")), Occur.SHOULD);\n- or.add(new TermQuery(new Term(\"name.first\", \"shay4\")), Occur.SHOULD);\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- or.build());\n- assertEquals(expected, parsedQuery);\n- }\n-\n @Test\n public void testNotFilteredQueryBuilder() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(matchAllQuery(), notQuery(termQuery(\"name.first\", \"shay1\")))).query();\n- BooleanQuery.Builder builder = new BooleanQuery.Builder();\n- builder.add(new MatchAllDocsQuery(), Occur.MUST);\n- builder.add(Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\"))), Occur.FILTER);\n- assertEquals(builder.build(), parsedQuery);\n+ Query parsedQuery = queryParser.parse(notQuery(termQuery(\"name.first\", \"shay1\"))).query();\n+ assertEquals(Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\"))), parsedQuery);\n }\n \n @Test\n public void testNotFilteredQuery() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/not-filter.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\"))));\n+ Query expected = \n+ Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\")));\n assertEquals(expected, parsedQuery);\n }\n \n@@ -955,9 
+697,7 @@ public void testNotFilteredQuery2() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/not-filter2.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\"))));\n+ Query expected = Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\")));\n assertEquals(expected, parsedQuery);\n }\n \n@@ -966,9 +706,7 @@ public void testNotFilteredQuery3() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/not-filter3.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\"))));\n+ Query expected = Queries.not(new TermQuery(new Term(\"name.first\", \"shay1\")));\n assertEquals(expected, parsedQuery);\n }\n \n@@ -1125,9 +863,7 @@ public void testTermsQueryWithMultipleFields() throws IOException {\n public void testTermsFilterWithMultipleFields() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = XContentFactory.jsonBuilder().startObject()\n- .startObject(\"filtered\")\n- .startObject(\"query\").startObject(\"match_all\").endObject().endObject()\n- .startObject(\"filter\").startObject(\"terms\").array(\"foo\", 123).array(\"bar\", 456).endObject().endObject()\n+ .startObject(\"terms\").array(\"foo\", 123).array(\"bar\", 456)\n .endObject().string();\n try {\n queryParser.parse(query).query();\n@@ -1159,97 +895,6 @@ public void testInQuery() throws IOException {\n assertThat(clauses[2].getOccur(), equalTo(BooleanClause.Occur.SHOULD));\n }\n \n- @Test\n- public void testFilteredQueryBuilder() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), termQuery(\"name.last\", \"banon\"))).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testFilteredQuery() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/filtered-query.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testFilteredQuery2() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/filtered-query2.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testFilteredQuery3() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = 
copyToStringFromClasspath(\"/org/elasticsearch/index/query/filtered-query3.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- NumericRangeQuery.newLongRange(\"age\", 23L, 54L, true, false));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testFilteredQuery4() throws IOException {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/filtered-query4.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expectedQuery = new WildcardQuery(new Term(\"name.first\", \"sh*\"));\n- expectedQuery.setBoost(1.1f);\n- Query expected = Queries.filtered(\n- expectedQuery,\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testTermFilterQuery() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/term-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testTermNamedFilterQuery() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/term-filter-named.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n- @Test\n- public void testTermQueryParserShouldOnlyAllowSingleTerm() throws Exception {\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/term-filter-broken-multi-terms.json\");\n- assertQueryParsingFailureDueToMultipleTermsInTermFilter(query);\n- }\n-\n- @Test\n- public void testTermQueryParserShouldOnlyAllowSingleTermInAlternateFormat() throws Exception {\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/term-filter-broken-multi-terms-2.json\");\n- assertQueryParsingFailureDueToMultipleTermsInTermFilter(query);\n- }\n-\n private void assertQueryParsingFailureDueToMultipleTermsInTermFilter(String query) throws IOException {\n IndexQueryParserService queryParser = queryParser();\n try {\n@@ -1263,10 +908,8 @@ private void assertQueryParsingFailureDueToMultipleTermsInTermFilter(String quer\n @Test\n public void testTermsFilterQueryBuilder() throws Exception {\n IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), termsQuery(\"name.last\", \"banon\", \"kimchy\"))).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n+ Query parsedQuery = queryParser.parse(constantScoreQuery(termsQuery(\"name.last\", \"banon\", \"kimchy\"))).query();\n+ Query expected = new ConstantScoreQuery(new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n assertEquals(expected, parsedQuery);\n }\n \n@@ 
-1276,9 +919,7 @@ public void testTermsFilterQuery() throws Exception {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/terms-filter.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n+ Query expected = new ConstantScoreQuery(new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n assertEquals(expected, parsedQuery);\n }\n \n@@ -1288,9 +929,7 @@ public void testTermsWithNameFilterQuery() throws Exception {\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/terms-filter-named.json\");\n ParsedQuery parsedQuery = queryParser.parse(query);\n assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n+ Query expected = new ConstantScoreQuery(new TermsQuery(\"name.last\", new BytesRef(\"banon\"), new BytesRef(\"kimchy\")));\n assertEquals(expected, parsedQuery.query());\n }\n \n@@ -1614,39 +1253,6 @@ public void testSpanMultiTermTermRangeQuery() throws IOException {\n assertThat(wrapper, equalTo(new SpanMultiTermQueryWrapper<MultiTermQuery>(expectedWrapped)));\n }\n \n- @Test\n- public void testQueryQueryBuilder() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), termQuery(\"name.last\", \"banon\"))).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new TermQuery(new Term(\"name.last\", \"banon\")));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testQueryFilter() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/query-filter.json\");\n- Query parsedQuery = queryParser.parse(query).query();\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new ConstantScoreQuery(new TermQuery(new Term(\"name.last\", \"banon\"))));\n- assertEquals(expected, parsedQuery);\n- }\n-\n- @Test\n- public void testFQueryFilter() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/fquery-filter.json\");\n- ParsedQuery parsedQuery = queryParser.parse(query);\n- assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- Query expected = Queries.filtered(\n- new TermQuery(new Term(\"name.first\", \"shay\")),\n- new ConstantScoreQuery(new TermQuery(new Term(\"name.last\", \"banon\"))));\n- assertEquals(expected, parsedQuery.query());\n- }\n-\n @Test\n public void testMoreLikeThisBuilder() throws Exception {\n IndexQueryParserService queryParser = queryParser();\n@@ -1784,16 +1390,7 @@ public void testGeoDistanceRangeQueryNamed() throws IOException {\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance-named.json\");\n ParsedQuery parsedQuery = queryParser.parse(query);\n assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- assertThat(parsedQuery.query(), instanceOf(BooleanQuery.class));\n- 
BooleanQuery booleanQuery = (BooleanQuery) parsedQuery.query();\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery.query();\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1806,16 +1403,7 @@ public void testGeoDistanceRangeQuery1() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance1.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1828,16 +1416,7 @@ public void testGeoDistanceRangeQuery2() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance2.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1850,16 +1429,7 @@ public void testGeoDistanceRangeQuery3() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance3.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, 
instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1872,16 +1442,7 @@ public void testGeoDistanceRangeQuery4() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance4.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1894,16 +1455,7 @@ public void testGeoDistanceRangeQuery5() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance5.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1916,16 +1468,7 @@ public void testGeoDistanceRangeQuery6() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance6.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- 
assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1938,16 +1481,7 @@ public void testGeoDistanceRangeQuery7() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance7.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1960,16 +1494,7 @@ public void testGeoDistanceRangeQuery8() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance8.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -1982,16 +1507,7 @@ public void testGeoDistanceRangeQuery9() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance9.json\");\n Query parsedQuery = 
queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -2004,16 +1520,7 @@ public void testGeoDistanceRangeQuery10() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance10.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -2026,16 +1533,7 @@ public void testGeoDistanceRangeQuery11() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance11.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -2048,16 +1546,7 @@ public void testGeoDistanceRangeQuery12() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_distance12.json\");\n Query 
parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoDistanceRangeQuery.class));\n- GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) booleanClause.getQuery();\n+ GeoDistanceRangeQuery filter = (GeoDistanceRangeQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.lat(), closeTo(40, 0.00001));\n assertThat(filter.lon(), closeTo(-70, 0.00001));\n@@ -2071,16 +1560,7 @@ public void testGeoBoundingBoxFilterNamed() throws IOException {\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox-named.json\");\n ParsedQuery parsedQuery = queryParser.parse(query);\n assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- assertThat(parsedQuery.query(), instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery.query();\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery.query();\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2093,16 +1573,7 @@ public void testGeoBoundingBoxFilter1() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox1.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2115,16 +1586,7 @@ public void testGeoBoundingBoxFilter2() throws IOException 
{\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox2.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2137,16 +1599,7 @@ public void testGeoBoundingBoxFilter3() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox3.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2159,16 +1612,7 @@ public void testGeoBoundingBoxFilter4() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox4.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), 
closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2181,16 +1625,7 @@ public void testGeoBoundingBoxFilter5() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox5.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2203,16 +1638,7 @@ public void testGeoBoundingBoxFilter6() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_boundingbox6.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(InMemoryGeoBoundingBoxQuery.class));\n- InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) booleanClause.getQuery();\n+ InMemoryGeoBoundingBoxQuery filter = (InMemoryGeoBoundingBoxQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.topLeft().lat(), closeTo(40, 0.00001));\n assertThat(filter.topLeft().lon(), closeTo(-70, 0.00001));\n@@ -2227,16 +1653,7 @@ public void testGeoPolygonNamedFilter() throws IOException {\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_polygon-named.json\");\n ParsedQuery parsedQuery = queryParser.parse(query);\n assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n- assertThat(parsedQuery.query(), instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery.query();\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoPolygonQuery.class));\n- GeoPolygonQuery filter = (GeoPolygonQuery) booleanClause.getQuery();\n+ 
GeoPolygonQuery filter = (GeoPolygonQuery) parsedQuery.query();\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.points().length, equalTo(4));\n assertThat(filter.points()[0].lat(), closeTo(40, 0.00001));\n@@ -2275,16 +1692,7 @@ public void testGeoPolygonFilter1() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_polygon1.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoPolygonQuery.class));\n- GeoPolygonQuery filter = (GeoPolygonQuery) booleanClause.getQuery();\n+ GeoPolygonQuery filter = (GeoPolygonQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.points().length, equalTo(4));\n assertThat(filter.points()[0].lat(), closeTo(40, 0.00001));\n@@ -2300,16 +1708,7 @@ public void testGeoPolygonFilter2() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_polygon2.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoPolygonQuery.class));\n- GeoPolygonQuery filter = (GeoPolygonQuery) booleanClause.getQuery();\n+ GeoPolygonQuery filter = (GeoPolygonQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.points().length, equalTo(4));\n assertThat(filter.points()[0].lat(), closeTo(40, 0.00001));\n@@ -2325,16 +1724,7 @@ public void testGeoPolygonFilter3() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_polygon3.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoPolygonQuery.class));\n- GeoPolygonQuery filter = (GeoPolygonQuery) booleanClause.getQuery();\n+ GeoPolygonQuery filter = (GeoPolygonQuery) parsedQuery;\n 
assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.points().length, equalTo(4));\n assertThat(filter.points()[0].lat(), closeTo(40, 0.00001));\n@@ -2350,16 +1740,7 @@ public void testGeoPolygonFilter4() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geo_polygon4.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(GeoPolygonQuery.class));\n- GeoPolygonQuery filter = (GeoPolygonQuery) booleanClause.getQuery();\n+ GeoPolygonQuery filter = (GeoPolygonQuery) parsedQuery;\n assertThat(filter.fieldName(), equalTo(\"location\"));\n assertThat(filter.points().length, equalTo(4));\n assertThat(filter.points()[0].lat(), closeTo(40, 0.00001));\n@@ -2375,15 +1756,7 @@ public void testGeoShapeFilter() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/geoShape-filter.json\");\n Query parsedQuery = queryParser.parse(query).query();\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(ConstantScoreQuery.class));\n- ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) booleanClause.getQuery();\n+ ConstantScoreQuery constantScoreQuery = (ConstantScoreQuery) parsedQuery;\n assertThat(constantScoreQuery.getQuery(), instanceOf(IntersectsPrefixTreeFilter.class));\n }\n \n@@ -2573,16 +1946,6 @@ public void testEmptyBooleanQuery() throws Exception {\n assertThat(parsedQuery, instanceOf(MatchAllDocsQuery.class));\n }\n \n- // https://github.com/elasticsearch/elasticsearch/issues/7240\n- @Test\n- public void testEmptyBooleanQueryInsideFQuery() throws Exception {\n- IndexQueryParserService queryParser = queryParser();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/fquery-with-empty-bool-query.json\");\n- XContentParser parser = XContentHelper.createParser(new BytesArray(query));\n- ParsedQuery parsedQuery = queryParser.parseInnerFilter(parser);\n- assertEquals(new ConstantScoreQuery(Queries.filtered(new TermQuery(new Term(\"text\", \"apache\")), new TermQuery(new Term(\"text\", \"apache\")))), parsedQuery.query());\n- }\n-\n @Test\n public void testProperErrorMessageWhenTwoFunctionsDefinedInQueryBody() throws IOException {\n IndexQueryParserService queryParser = queryParser();\n@@ -2662,28 +2025,6 @@ public void testProperErrorMessagesForMisplacedWeightsAndFunctions() throws IOEx\n assertThat(e.getDetailedMessage(), containsString(\"you can either define [functions] array or a 
single function, not both. already found [weight], now encountering [functions].\"));\n }\n }\n-\n- // https://github.com/elasticsearch/elasticsearch/issues/6722\n- public void testEmptyBoolSubClausesIsMatchAll() throws IOException {\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/bool-query-with-empty-clauses-for-parsing.json\");\n- IndexService indexService = createIndex(\"testidx\", client().admin().indices().prepareCreate(\"testidx\")\n- .addMapping(\"foo\", \"nested\", \"type=nested\"));\n- SearchContext.setCurrent(createSearchContext(indexService));\n- IndexQueryParserService queryParser = indexService.queryParserService();\n- Query parsedQuery = queryParser.parse(query).query();\n- assertThat(parsedQuery, instanceOf(BooleanQuery.class));\n- BooleanQuery booleanQuery = (BooleanQuery) parsedQuery;\n- assertThat(booleanQuery.clauses().size(), equalTo(2));\n- BooleanClause booleanClause = booleanQuery.clauses().get(0);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.MUST));\n- assertThat(booleanClause.getQuery(), instanceOf(MatchAllDocsQuery.class));\n- booleanClause = booleanQuery.clauses().get(1);\n- assertThat(booleanClause.getOccur(), equalTo(Occur.FILTER));\n- assertThat(booleanClause.getQuery(), instanceOf(ToParentBlockJoinQuery.class));\n- ToParentBlockJoinQuery toParentBlockJoinQuery = (ToParentBlockJoinQuery) booleanClause.getQuery();\n- assertThat(toParentBlockJoinQuery.toString(), equalTo(\"ToParentBlockJoinQuery (+*:* #QueryWrapperFilter(_type:__nested))\"));\n- SearchContext.removeCurrent();\n- }\n \n /** \n * helper to extract term from TermQuery. */", "filename": "core/src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java", "status": "modified" }, { "diff": "@@ -130,7 +130,7 @@ public void simpleNested() throws Exception {\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n \n // filter\n- searchResponse = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), nestedQuery(\"nested1\",\n+ searchResponse = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).mustNot(nestedQuery(\"nested1\",\n boolQuery().must(termQuery(\"nested1.n_field1\", \"n_value1_1\")).must(termQuery(\"nested1.n_field2\", \"n_value2_1\"))))).execute().actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));", "filename": "core/src/test/java/org/elasticsearch/nested/SimpleNestedIT.java", "status": "modified" }, { "diff": "@@ -1594,9 +1594,9 @@ public void percolateNonMatchingConstantScoreQuery() throws Exception {\n logger.info(\"--> register a query\");\n client().prepareIndex(\"test\", PercolatorService.TYPE_NAME, \"1\")\n .setSource(jsonBuilder().startObject()\n- .field(\"query\", QueryBuilders.constantScoreQuery(QueryBuilders.andQuery(\n- QueryBuilders.queryStringQuery(\"root\"),\n- QueryBuilders.termQuery(\"message\", \"tree\"))))\n+ .field(\"query\", QueryBuilders.constantScoreQuery(QueryBuilders.boolQuery()\n+ .must(QueryBuilders.queryStringQuery(\"root\"))\n+ .must(QueryBuilders.termQuery(\"message\", \"tree\"))))\n .endObject())\n .setRefresh(true)\n .execute().actionGet();", "filename": "core/src/test/java/org/elasticsearch/percolator/PercolatorIT.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import 
org.elasticsearch.index.query.AndQueryBuilder;\n+import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n@@ -110,7 +110,7 @@ public void simple() throws Exception {\n // https://github.com/elasticsearch/elasticsearch/issues/8438\n @Test\n public void emptyFilterDeclarations() throws Exception {\n- QueryBuilder emptyFilter = new AndQueryBuilder();\n+ QueryBuilder emptyFilter = new BoolQueryBuilder();\n SearchResponse response = client().prepareSearch(\"idx\").addAggregation(filter(\"tag1\").filter(emptyFilter)).execute().actionGet();\n \n assertSearchResponse(response);", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FilterIT.java", "status": "modified" }, { "diff": "@@ -23,7 +23,7 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.query.AndQueryBuilder;\n+import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.aggregations.bucket.filters.Filters;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n@@ -138,7 +138,7 @@ public void simple() throws Exception {\n // https://github.com/elasticsearch/elasticsearch/issues/8438\n @Test\n public void emptyFilterDeclarations() throws Exception {\n- QueryBuilder emptyFilter = new AndQueryBuilder();\n+ QueryBuilder emptyFilter = new BoolQueryBuilder();\n SearchResponse response = client().prepareSearch(\"idx\")\n .addAggregation(filters(\"tags\").filter(\"all\", emptyFilter).filter(\"tag1\", termQuery(\"tag\", \"tag1\"))).execute()\n .actionGet();", "filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersIT.java", "status": "modified" }, { "diff": "@@ -62,7 +62,6 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;\n import static org.elasticsearch.index.query.QueryBuilders.idsQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n@@ -133,25 +132,25 @@ public void multiLevelChild() throws Exception {\n SearchResponse searchResponse = client()\n .prepareSearch(\"test\")\n .setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- hasChildQuery(\n+ boolQuery()\n+ .must(matchAllQuery())\n+ .filter(hasChildQuery(\n \"child\",\n- filteredQuery(termQuery(\"c_field\", \"c_value1\"),\n- hasChildQuery(\"grandchild\", termQuery(\"gc_field\", \"gc_value1\")))))).get();\n+ boolQuery().must(termQuery(\"c_field\", \"c_value1\"))\n+ .filter(hasChildQuery(\"grandchild\", termQuery(\"gc_field\", \"gc_value1\")))))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"p1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\")))).execute()\n+ 
.setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", termQuery(\"p_field\", \"p_value1\")))).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"c1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\")))).execute()\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"child\", termQuery(\"c_field\", \"c_value1\")))).execute()\n .actionGet();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n@@ -170,25 +169,6 @@ public void multiLevelChild() throws Exception {\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"gc1\"));\n }\n \n- @Test\n- // see #6722\n- public void test6722() throws IOException {\n- assertAcked(prepareCreate(\"test\")\n- .addMapping(\"foo\")\n- .addMapping(\"test\", \"_parent\", \"type=foo\"));\n- ensureGreen();\n-\n- // index simple data\n- client().prepareIndex(\"test\", \"foo\", \"1\").setSource(\"foo\", 1).get();\n- client().prepareIndex(\"test\", \"test\", \"2\").setSource(\"foo\", 1).setParent(\"1\").get();\n- refresh();\n- String query = copyToStringFromClasspath(\"/org/elasticsearch/search/child/bool-query-with-empty-clauses.json\");\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setSource(query).get();\n- assertNoFailures(searchResponse);\n- assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n- assertThat(searchResponse.getHits().getAt(0).getId(), equalTo(\"2\"));\n- }\n-\n @Test\n // see #2744\n public void test2744() throws IOException {\n@@ -553,12 +533,12 @@ public void testHasChildAndHasParentFailWhenSomeSegmentsDontContainAnyParentOrCh\n client().admin().indices().prepareFlush(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasChildQuery(\"child\", matchAllQuery()))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery()))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasParentQuery(\"parent\", matchAllQuery()))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery()))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n }\n@@ -818,13 +798,13 @@ public void testHasChildAndHasParentFilter_withFilter() throws Exception {\n client().admin().indices().prepareFlush(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasChildQuery(\"child\", termQuery(\"c_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", termQuery(\"c_field\", 1)))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasParentQuery(\"parent\", termQuery(\"p_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", 
termQuery(\"p_field\", 1)))).get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits()[0].id(), equalTo(\"2\"));\n@@ -844,19 +824,19 @@ public void testHasChildAndHasParentWrappedInAQueryFilter() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasChildQuery(\"child\", matchQuery(\"c_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchQuery(\"c_field\", 1)))).get();\n assertSearchHit(searchResponse, 1, hasId(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), hasParentQuery(\"parent\", matchQuery(\"p_field\", 1)))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1)))).get();\n assertSearchHit(searchResponse, 1, hasId(\"2\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), boolQuery().must(hasChildQuery(\"child\", matchQuery(\"c_field\", 1))))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasChildQuery(\"child\", matchQuery(\"c_field\", 1))))).get();\n assertSearchHit(searchResponse, 1, hasId(\"1\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), boolQuery().must(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1))))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery().must(hasParentQuery(\"parent\", matchQuery(\"p_field\", 1))))).get();\n assertSearchHit(searchResponse, 1, hasId(\"2\"));\n }\n \n@@ -1008,59 +988,59 @@ public void testParentFieldFilter() throws Exception {\n ensureGreen();\n \n // test term filter\n- SearchResponse response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"p1\")))\n+ SearchResponse response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"p1\")))\n .get();\n assertHitCount(response, 0l);\n \n client().prepareIndex(\"test\", \"some_type\", \"1\").setSource(\"field\", \"value\").get();\n client().prepareIndex(\"test\", \"parent\", \"p1\").setSource(\"p_field\", \"value\").get();\n client().prepareIndex(\"test\", \"child\", \"c1\").setSource(\"c_field\", \"value\").setParent(\"p1\").get();\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 0l);\n refresh();\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"parent#p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"parent#p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n client().prepareIndex(\"test\", \"parent2\", 
\"p1\").setSource(\"p_field\", \"value\").setRefresh(true).get();\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termQuery(\"_parent\", \"parent#p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termQuery(\"_parent\", \"parent#p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n // test terms filter\n client().prepareIndex(\"test\", \"child2\", \"c1\").setSource(\"c_field\", \"value\").setParent(\"p1\").get();\n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termsQuery(\"_parent\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termsQuery(\"_parent\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termsQuery(\"_parent\", \"parent#p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termsQuery(\"_parent\", \"parent#p1\"))).execute()\n .actionGet();\n assertHitCount(response, 1l);\n \n refresh();\n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termsQuery(\"_parent\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termsQuery(\"_parent\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 2l);\n \n refresh();\n- response = client().prepareSearch(\"test\").setQuery(filteredQuery(matchAllQuery(), termsQuery(\"_parent\", \"p1\", \"p1\"))).execute()\n+ response = client().prepareSearch(\"test\").setQuery(boolQuery().must(matchAllQuery()).filter(termsQuery(\"_parent\", \"p1\", \"p1\"))).execute()\n .actionGet();\n assertHitCount(response, 2l);\n \n response = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(), termsQuery(\"_parent\", \"parent#p1\", \"parent2#p1\"))).get();\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(termsQuery(\"_parent\", \"parent#p1\", \"parent2#p1\"))).get();\n assertHitCount(response, 2l);\n }\n \n@@ -1107,7 +1087,7 @@ private QueryBuilder randomHasChild(String type, String field, String value) {\n if (randomBoolean()) {\n return constantScoreQuery(hasChildQuery(type, termQuery(field, value)));\n } else {\n- return filteredQuery(matchAllQuery(), hasChildQuery(type, termQuery(field, value)));\n+ return boolQuery().must(matchAllQuery()).filter(hasChildQuery(type, termQuery(field, value)));\n }\n } else {\n return hasChildQuery(type, termQuery(field, value));\n@@ -1119,7 +1099,7 @@ private QueryBuilder randomHasParent(String type, String field, String value) {\n if (randomBoolean()) {\n return constantScoreQuery(hasParentQuery(type, termQuery(field, value)));\n } else {\n- return filteredQuery(matchAllQuery(), hasParentQuery(type, termQuery(field, value)));\n+ return boolQuery().must(matchAllQuery()).filter(hasParentQuery(type, termQuery(field, value)));\n }\n } else {\n return hasParentQuery(type, termQuery(field, value));\n@@ -1259,13 +1239,13 @@ public void testHasChildQueryWithNestedInnerObjects() throws 
Exception {\n \n String scoreMode = ScoreType.values()[getRandom().nextInt(ScoreType.values().length)].name().toLowerCase(Locale.ROOT);\n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\")).scoreType(scoreMode), notQuery(termQuery(\"p_field\", \"3\"))))\n+ .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"blue\")).scoreType(scoreMode)).filter(notQuery(termQuery(\"p_field\", \"3\"))))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"red\")).scoreType(scoreMode), notQuery(termQuery(\"p_field\", \"3\"))))\n+ .setQuery(boolQuery().must(QueryBuilders.hasChildQuery(\"child\", termQuery(\"c_field\", \"red\")).scoreType(scoreMode)).filter(notQuery(termQuery(\"p_field\", \"3\"))))\n .get();\n assertNoFailures(searchResponse);\n assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n@@ -1419,7 +1399,7 @@ public void testParentChildCaching() throws Exception {\n \n for (int i = 0; i < 2; i++) {\n SearchResponse searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), boolQuery()\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery()\n .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\")))\n .must(matchAllQuery())))\n .get();\n@@ -1431,7 +1411,7 @@ public void testParentChildCaching() throws Exception {\n client().admin().indices().prepareRefresh(\"test\").get();\n \n SearchResponse searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), boolQuery()\n+ .setQuery(boolQuery().must(matchAllQuery()).filter(boolQuery()\n .must(QueryBuilders.hasChildQuery(\"child\", matchQuery(\"c_field\", \"red\")))\n .must(matchAllQuery())))\n .get();\n@@ -1454,9 +1434,9 @@ public void testParentChildQueriesViaScrollApi() throws Exception {\n \n QueryBuilder[] queries = new QueryBuilder[]{\n hasChildQuery(\"child\", matchAllQuery()),\n- filteredQuery(matchAllQuery(), hasChildQuery(\"child\", matchAllQuery())),\n+ boolQuery().must(matchAllQuery()).filter(hasChildQuery(\"child\", matchAllQuery())),\n hasParentQuery(\"parent\", matchAllQuery()),\n- filteredQuery(matchAllQuery(), hasParentQuery(\"parent\", matchAllQuery()))\n+ boolQuery().must(matchAllQuery()).filter(hasParentQuery(\"parent\", matchAllQuery()))\n };\n \n for (QueryBuilder query : queries) {", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" }, { "diff": "@@ -26,8 +26,9 @@\n import org.elasticsearch.test.ESIntegTestCase;\n import org.junit.Test;\n \n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.geoBoundingBoxQuery;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.anyOf;\n@@ -90,7 +91,7 @@ public void simpleBoundingBoxTest() throws Exception {\n client().admin().indices().prepareRefresh().execute().actionGet();\n \n SearchResponse searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), 
geoBoundingBoxQuery(\"location\").topLeft(40.73, -74.1).bottomRight(40.717, -73.99)))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(40.73, -74.1).bottomRight(40.717, -73.99))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(2l));\n assertThat(searchResponse.getHits().hits().length, equalTo(2));\n@@ -99,7 +100,7 @@ public void simpleBoundingBoxTest() throws Exception {\n }\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(40.73, -74.1).bottomRight(40.717, -73.99).type(\"indexed\")))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(40.73, -74.1).bottomRight(40.717, -73.99).type(\"indexed\"))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(2l));\n assertThat(searchResponse.getHits().hits().length, equalTo(2));\n@@ -159,52 +160,52 @@ public void limitsBoundingBoxTest() throws Exception {\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(41, -11).bottomRight(40, 9)))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(41, -11).bottomRight(40, 9))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"2\"));\n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(41, -11).bottomRight(40, 9).type(\"indexed\")))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(41, -11).bottomRight(40, 9).type(\"indexed\"))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"2\"));\n \n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(41, -9).bottomRight(40, 11)))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(41, -9).bottomRight(40, 11))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"3\"));\n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(41, -9).bottomRight(40, 11).type(\"indexed\")))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(41, -9).bottomRight(40, 11).type(\"indexed\"))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"3\"));\n \n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(11, 171).bottomRight(1, -169)))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(11, 171).bottomRight(1, -169))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"5\"));\n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), 
geoBoundingBoxQuery(\"location\").topLeft(11, 171).bottomRight(1, -169).type(\"indexed\")))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(11, 171).bottomRight(1, -169).type(\"indexed\"))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"5\"));\n \n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(9, 169).bottomRight(-1, -171)))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(9, 169).bottomRight(-1, -171))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"9\"));\n searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(), geoBoundingBoxQuery(\"location\").topLeft(9, 169).bottomRight(-1, -171).type(\"indexed\")))\n+ .setQuery(geoBoundingBoxQuery(\"location\").topLeft(9, 169).bottomRight(-1, -171).type(\"indexed\"))\n .execute().actionGet();\n assertThat(searchResponse.getHits().getTotalHits(), equalTo(1l));\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n@@ -237,26 +238,26 @@ public void limit2BoundingBoxTest() throws Exception {\n \n SearchResponse searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(termQuery(\"userid\", 880),\n+ boolQuery().must(termQuery(\"userid\", 880)).filter(\n geoBoundingBoxQuery(\"location\").topLeft(74.579421999999994, 143.5).bottomRight(-66.668903999999998, 113.96875))\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(termQuery(\"userid\", 880),\n+ boolQuery().must(termQuery(\"userid\", 880)).filter(\n geoBoundingBoxQuery(\"location\").topLeft(74.579421999999994, 143.5).bottomRight(-66.668903999999998, 113.96875).type(\"indexed\"))\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n \n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(termQuery(\"userid\", 534),\n+ boolQuery().must(termQuery(\"userid\", 534)).filter(\n geoBoundingBoxQuery(\"location\").topLeft(74.579421999999994, 143.5).bottomRight(-66.668903999999998, 113.96875))\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(termQuery(\"userid\", 534),\n+ boolQuery().must(termQuery(\"userid\", 534)).filter(\n geoBoundingBoxQuery(\"location\").topLeft(74.579421999999994, 143.5).bottomRight(-66.668903999999998, 113.96875).type(\"indexed\"))\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n@@ -288,51 +289,43 @@ public void completeLonRangeTest() throws Exception {\n \n SearchResponse searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, -180).bottomRight(-50, 180))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, -180).bottomRight(-50, 180)\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- 
geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, -180).bottomRight(-50, 180).type(\"indexed\"))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, -180).bottomRight(-50, 180).type(\"indexed\")\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, -180).bottomRight(-90, 180))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, -180).bottomRight(-90, 180)\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, -180).bottomRight(-90, 180).type(\"indexed\"))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, -180).bottomRight(-90, 180).type(\"indexed\")\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n \n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, 0).bottomRight(-50, 360))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, 0).bottomRight(-50, 360)\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, 0).bottomRight(-50, 360).type(\"indexed\"))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(50, 0).bottomRight(-50, 360).type(\"indexed\")\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, 0).bottomRight(-90, 360))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, 0).bottomRight(-90, 360)\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n searchResponse = client().prepareSearch()\n .setQuery(\n- filteredQuery(matchAllQuery(),\n- geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, 0).bottomRight(-90, 360).type(\"indexed\"))\n+ geoBoundingBoxQuery(\"location\").coerce(true).topLeft(90, 0).bottomRight(-90, 360).type(\"indexed\")\n ).execute().actionGet();\n assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n }", "filename": "core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java", "status": "modified" }, { "diff": "@@ -42,7 +42,6 @@\n import java.util.List;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n import static org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery;\n import static org.elasticsearch.index.query.QueryBuilders.geoDistanceRangeQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n@@ -109,15 +108,15 @@ public void simpleDistanceTests() throws Exception {\n .endObject()));\n \n SearchResponse searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"3km\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"3km\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n 
assertHitCount(searchResponse, 5);\n assertThat(searchResponse.getHits().hits().length, equalTo(5));\n for (SearchHit hit : searchResponse.getHits()) {\n assertThat(hit.id(), anyOf(equalTo(\"1\"), equalTo(\"3\"), equalTo(\"4\"), equalTo(\"5\"), equalTo(\"6\")));\n }\n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"3km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\")))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"3km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\"))\n .execute().actionGet();\n assertHitCount(searchResponse, 5);\n assertThat(searchResponse.getHits().hits().length, equalTo(5));\n@@ -127,7 +126,7 @@ public void simpleDistanceTests() throws Exception {\n \n // now with a PLANE type\n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"3km\").geoDistance(GeoDistance.PLANE).point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"3km\").geoDistance(GeoDistance.PLANE).point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 5);\n assertThat(searchResponse.getHits().hits().length, equalTo(5));\n@@ -138,15 +137,15 @@ public void simpleDistanceTests() throws Exception {\n // factor type is really too small for this resolution\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"2km\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"2km\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 4);\n assertThat(searchResponse.getHits().hits().length, equalTo(4));\n for (SearchHit hit : searchResponse.getHits()) {\n assertThat(hit.id(), anyOf(equalTo(\"1\"), equalTo(\"3\"), equalTo(\"4\"), equalTo(\"5\")));\n }\n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"2km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\")))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"2km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\"))\n .execute().actionGet();\n assertHitCount(searchResponse, 4);\n assertThat(searchResponse.getHits().hits().length, equalTo(4));\n@@ -155,15 +154,15 @@ public void simpleDistanceTests() throws Exception {\n }\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"1.242mi\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"1.242mi\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 4);\n assertThat(searchResponse.getHits().hits().length, equalTo(4));\n for (SearchHit hit : searchResponse.getHits()) {\n assertThat(hit.id(), anyOf(equalTo(\"1\"), equalTo(\"3\"), equalTo(\"4\"), equalTo(\"5\")));\n }\n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceQuery(\"location\").distance(\"1.242mi\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\")))\n+ .setQuery(geoDistanceQuery(\"location\").distance(\"1.242mi\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\"))\n .execute().actionGet();\n assertHitCount(searchResponse, 4);\n assertThat(searchResponse.getHits().hits().length, equalTo(4));\n@@ 
-172,15 +171,15 @@ public void simpleDistanceTests() throws Exception {\n }\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceRangeQuery(\"location\").from(\"1.0km\").to(\"2.0km\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceRangeQuery(\"location\").from(\"1.0km\").to(\"2.0km\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 2);\n assertThat(searchResponse.getHits().hits().length, equalTo(2));\n for (SearchHit hit : searchResponse.getHits()) {\n assertThat(hit.id(), anyOf(equalTo(\"4\"), equalTo(\"5\")));\n }\n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceRangeQuery(\"location\").from(\"1.0km\").to(\"2.0km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\")))\n+ .setQuery(geoDistanceRangeQuery(\"location\").from(\"1.0km\").to(\"2.0km\").point(40.7143528, -74.0059731).optimizeBbox(\"indexed\"))\n .execute().actionGet();\n assertHitCount(searchResponse, 2);\n assertThat(searchResponse.getHits().hits().length, equalTo(2));\n@@ -189,13 +188,13 @@ public void simpleDistanceTests() throws Exception {\n }\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceRangeQuery(\"location\").to(\"2.0km\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceRangeQuery(\"location\").to(\"2.0km\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 4);\n assertThat(searchResponse.getHits().hits().length, equalTo(4));\n \n searchResponse = client().prepareSearch() // from NY\n- .setQuery(filteredQuery(matchAllQuery(), geoDistanceRangeQuery(\"location\").from(\"2.0km\").point(40.7143528, -74.0059731)))\n+ .setQuery(geoDistanceRangeQuery(\"location\").from(\"2.0km\").point(40.7143528, -74.0059731))\n .execute().actionGet();\n assertHitCount(searchResponse, 3);\n assertThat(searchResponse.getHits().hits().length, equalTo(3));", "filename": "core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java", "status": "modified" }, { "diff": "@@ -427,20 +427,16 @@ public void bulktest() throws Exception {\n }\n \n SearchResponse world = client().prepareSearch().addField(\"pin\").setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- geoBoundingBoxQuery(\"pin\")\n- .topLeft(90, -179.99999)\n- .bottomRight(-90, 179.99999))\n+ geoBoundingBoxQuery(\"pin\")\n+ .topLeft(90, -179.99999)\n+ .bottomRight(-90, 179.99999)\n ).execute().actionGet();\n \n assertHitCount(world, 53);\n \n SearchResponse distance = client().prepareSearch().addField(\"pin\").setQuery(\n- filteredQuery(\n- matchAllQuery(),\n- geoDistanceQuery(\"pin\").distance(\"425km\").point(51.11, 9.851)\n- )).execute().actionGet();\n+ geoDistanceQuery(\"pin\").distance(\"425km\").point(51.11, 9.851)\n+ ).execute().actionGet();\n \n assertHitCount(distance, 5);\n GeoPoint point = new GeoPoint();", "filename": "core/src/test/java/org/elasticsearch/search/geo/GeoFilterIT.java", "status": "modified" }, { "diff": "@@ -43,10 +43,8 @@\n import java.util.Locale;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.filteredQuery;\n import static org.elasticsearch.index.query.QueryBuilders.geoIntersectionQuery;\n import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery;\n-import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static 
org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n@@ -101,8 +99,7 @@ public void testIndexPointsFilterRectangle() throws Exception {\n ShapeBuilder shape = ShapeBuilder.newEnvelope().topLeft(-45, 45).bottomRight(45, -45);\n \n SearchResponse searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(),\n- geoIntersectionQuery(\"location\", shape)))\n+ .setQuery(geoIntersectionQuery(\"location\", shape))\n .execute().actionGet();\n \n assertSearchResponse(searchResponse);\n@@ -151,8 +148,7 @@ public void testEdgeCases() throws Exception {\n // This search would fail if both geoshape indexing and geoshape filtering\n // used the bottom-level optimization in SpatialPrefixTree#recursiveGetNodes.\n SearchResponse searchResponse = client().prepareSearch()\n- .setQuery(filteredQuery(matchAllQuery(),\n- geoIntersectionQuery(\"location\", query)))\n+ .setQuery(geoIntersectionQuery(\"location\", query))\n .execute().actionGet();\n \n assertSearchResponse(searchResponse);\n@@ -187,8 +183,7 @@ public void testIndexedShapeReference() throws Exception {\n .endObject()));\n \n SearchResponse searchResponse = client().prepareSearch(\"test\")\n- .setQuery(filteredQuery(matchAllQuery(),\n- geoIntersectionQuery(\"location\", \"Big_Rectangle\", \"shape_type\")))\n+ .setQuery(geoIntersectionQuery(\"location\", \"Big_Rectangle\", \"shape_type\"))\n .execute().actionGet();\n \n assertSearchResponse(searchResponse);", "filename": "core/src/test/java/org/elasticsearch/search/geo/GeoShapeIntegrationIT.java", "status": "modified" } ] }
{ "body": "http://build-us-00.elastic.co/job/es_core_2x_strong/28/\n\nThe issue is reproducible most of the time running this seed (on osx), further parameters having no influence already removed:\n\n```\nmvn verify -Pdev -Dskip.unit.tests -pl org.elasticsearch:elasticsearch -Dtests.seed=872FF07FDDFBCB -Dtests.class=org.elasticsearch.document.BulkIT -Dtests.method=\"testBulkUpdate_largerVolume\" -Des.logger.level=DEBUG -Des.node.mode=local\n```\n\nWhen not running in `local` mode, this does not occur, also adding `waitForRelocations` makes it vanish, but I am not sure, if this is the underlying issue here.\n", "comments": [ { "body": "I think this is a bug in `InternalEngine.flush()`. When we flush we call `refresh()` and `translog.commit()` in the wrong order here: https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java#L759\nThis leaves a short gap in which we can't get a document from translog and index. \nThe test fails reliable for me but when I switch the order it passes.\nI will work on a unit test and make a pr.\n", "created_at": "2015-09-08T22:13:48Z" }, { "body": "> I think this is a bug in InternalEngine.flush(). When we flush we call refresh() and translog.commit() in the wrong order here:\n\nwhoa!!! good catch we need a get and commit stress test... that reproduces this problem in isolation\n", "created_at": "2015-09-09T07:34:13Z" } ], "number": 13379, "title": "CI: BulkIT.testBulkUpdate_largerVolume fails one of its GetRequests" }
{ "body": "When we commit the translog, documents that were in it before cannot be retrieved from\nit anymore via get and have to be retrieved from the index instead. But they will only\nbe visible if between index and get a refresh is called. Therfore we have to call\nfirst refresh and then translog.commit() because otherwise there is a small gap\nin which we cannot read from the translog anymore but also not from the index.\n\ncloses #13379\n", "number": 13414, "review_comments": [ { "body": "these must be final for backporting?\n", "created_at": "2015-09-09T08:33:00Z" }, { "body": "can the sleep go?\n", "created_at": "2015-09-09T08:34:40Z" }, { "body": "yes\n", "created_at": "2015-09-09T08:37:38Z" } ], "title": "Engine: refresh before translog commit" }
{ "commits": [ { "message": "Engine: refresh before translog commit\n\nWhen we commit the translog, documents that were in it before cannot be retrieved from\nit anymore via get and have to be retrieved from the index instead. But they will only\nbe visible if between index and get a refresh is called. Therfore we have to call\nfirst refresh and then translog.commit() because otherwise there is a small gap\nin which we cannot read from the translog anymore but also not from the index.\n\ncloses #13379" } ], "files": [ { "diff": "@@ -756,9 +756,10 @@ public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineExcepti\n logger.trace(\"starting commit for flush; commitTranslog=true\");\n commitIndexWriter(indexWriter, translog);\n logger.trace(\"finished commit for flush\");\n- translog.commit();\n // we need to refresh in order to clear older version values\n refresh(\"version_table_flush\");\n+ // after refresh documents can be retrieved from the index so we can now commit the translog\n+ translog.commit();\n } catch (Throwable e) {\n throw new FlushFailedEngineException(shardId, e);\n }", "filename": "core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java", "status": "modified" }, { "diff": "@@ -94,6 +94,7 @@\n import java.nio.file.Path;\n import java.util.*;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n import java.util.regex.Pattern;\n@@ -521,6 +522,36 @@ public IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) thr\n IOUtils.close(store, engine);\n }\n \n+ @Test\n+ /* */\n+ public void testConcurrentGetAndFlush() throws Exception {\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocumentWithTextField(), B_1, null);\n+ engine.create(new Engine.Create(newUid(\"1\"), doc));\n+ final AtomicReference<Engine.GetResult> latestGetResult = new AtomicReference<>();\n+ final AtomicBoolean flushFinished = new AtomicBoolean(false);\n+ Thread getThread = new Thread() {\n+ @Override\n+ public void run() {\n+ while (flushFinished.get() == false) {\n+ Engine.GetResult previousGetResult = latestGetResult.get();\n+ if (previousGetResult != null) {\n+ previousGetResult.release();\n+ }\n+ latestGetResult.set(engine.get(new Engine.Get(true, newUid(\"1\"))));\n+ if (latestGetResult.get().exists() == false) {\n+ break;\n+ }\n+ }\n+ }\n+ };\n+ getThread.start();\n+ engine.flush();\n+ flushFinished.set(true);\n+ getThread.join();\n+ assertTrue(latestGetResult.get().exists());\n+ latestGetResult.get().release();\n+ }\n+\n @Test\n public void testSimpleOperations() throws Exception {\n Engine.Searcher searchResult = engine.acquireSearcher(\"test\");", "filename": "core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java", "status": "modified" } ] }
{ "body": "```\n[2015-08-28 01:17:40,304][ERROR][org.elasticsearch.gateway] [bcapp.dev] failed to read local state, exiting...\njava.lang.IllegalStateException: unable to upgrade the mappings for the index [myindex], reason: [Mapper for [content] conflicts with existing mapping in other types:\n[mapper [content] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types.]]\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:337)\n at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:113)\n at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:226)\n...\n```\n\nThe field in question is:\n\n```\n\"content\" : {\n \"type\" : \"string\",\n \"analyzer\": \"html_strip\"\n}\n```\n\nThis is the relevant analyzer configuration:\n\n```\n{\n \"index\" : {\n \"analysis\" : {\n \"analyzer\" : {\n \"html_strip\" : {\n \"filter\" : [\n \"standard\",\n \"lowercase\",\n \"stop\",\n \"asciifolding\",\n \"minimal_stemmer\"\n ],\n \"char_filter\" : [\n \"html_strip\"\n ],\n \"tokenizer\" : \"standard\",\n \"type\": \"custom\"\n }\n },\n \"filter\" : {\n \"minimal_stemmer\" : {\n \"type\" : \"stemmer\",\n \"name\" : \"minimal_english\"\n }\n }\n }\n }\n}\n```\n\nThis field is defined on 2 out of 6 types that I have and is identical for those two types.\n", "comments": [ { "body": "Also getting a similar error on a `_parent` field which I have on only one type.\n\n```\n\"_parent\": {\n \"type\": \"product\",\n \"fielddata\": {\n \"loading\": \"eager_global_ordinals\"\n }\n}\n```\n\n```\njava.lang.IllegalStateException: unable to upgrade the mappings for the index [myindex], reason: [Mapper for [_parent] conflicts with existing mapping in other types:\n[mapper [_parent] is used by multiple types. Set update_all_types to true to update [fielddata] across all types.]]\n```\n\nAlso, the migration plugin only had informational messages (eg. `required` on `_routing`), nothing that would stop an upgrade.\n", "created_at": "2015-08-28T01:29:24Z" }, { "body": "Hi @rayward \n\nThanks for testing out the beta, and for reporting these bugs. I think the analyzer problem will be fixed by https://github.com/rjernst/elasticsearch/commit/2e4a053b4283f941999e5ab580f3883c36c952b7 but the _parent issue needs further investigation.\n\nTo reproduce, create this index on 1.x:\n\n```\nPUT /test\n{\n \"mappings\": {\n \"parent\": {},\n \"child\": {\n \"_parent\": {\n \"type\": \"parent\",\n \"fielddata\": {\n \"loading\": \"eager_global_ordinals\"\n }\n }\n }\n }\n}\n```\n", "created_at": "2015-08-28T12:37:31Z" }, { "body": "I agree the first issue described should be fixed by #13206.\n\nThe issue with `_parent` is a known issue, and unavoidable at this time (without further refactoring of how mappings work). @clintongormley There are two types in your example, `parent`, and `child`. Because parent is added first (types are added one at a time), it gets the default `_parent` setup with lazy loading. When the second `child` type is parsed, the `_parent` field is now different.\n\nThe problem is that all types have a `_parent` field. To fix this, we need to selectively add `_parent` (only when the user actually specifies it for the type), but that is much different than we do today (all meta fields are added to all types). 
I'm also not sure what effect not having `_parent` on all types would have an existing parent/child code.\n", "created_at": "2015-08-31T06:07:39Z" }, { "body": "@martijnvg could you comment on https://github.com/elastic/elasticsearch/issues/13169#issuecomment-136276860 please?\n", "created_at": "2015-08-31T11:54:16Z" }, { "body": "@rjernst @clintongormley I think it may work. I'm going to try out if making the `_parent` optional is working out.\n", "created_at": "2015-09-01T12:23:51Z" }, { "body": "Internally removing the _parent field from DocumentMapper works out for the old p/c implementation, but not for the new implementation (that uses doc values and Lucene's JoinUtil). The disabled `_parent` for parent types is used to write to a doc values join field and if the parent field is removed has_child/has_parent won't be able to work because parent docs don't write to the join field.\n\nJust thinking out loud, but what we can try to do is:\n1) Remove the notion of 'active' from the parent field\n2) add the notion that a document can be a child or a parent\n3) all type we always have a _parent field in the mapping\n4) but it only gets used by documents if it either is a parent or a child\n5) I think this way all _parent fields in all types can have the same settings. (besides the `type` there are pointing to, but that is ok)\n", "created_at": "2015-09-01T14:16:14Z" }, { "body": "Maybe the bug is that `loading` should be allowed to be different across types? The `_parent` field actually stores data in a field that depends on the type name (_parent#type) so having different settings per type does not introduce inconsistencies at the Lucene level?\n", "created_at": "2015-09-02T08:20:40Z" }, { "body": "> The _parent field actually stores data in a field that depends on the type name (_parent#type) so having different settings per type\n\nMaybe the problem is we aren't using this name in map of field types? If we use that as the key, instead of `_parent` we won't have any conflicts.\n", "created_at": "2015-09-02T16:06:32Z" }, { "body": "@rjernst you mean internally, rather than in the API? Using it in the API would make it difficult to explain\n", "created_at": "2015-09-02T18:18:51Z" }, { "body": "Yes, I mean internally.\n", "created_at": "2015-09-02T18:37:11Z" }, { "body": "@rjernst @jpountz I like the idea of internally mapping the _parent field under a name that takes the type into account. I quickly checked this and this seem to work out. Both the upgrade works without an error and has_child/has_parent queries work. \n\nHowever after running some more tests, I realized that there are other issues. One can include the `_parent` field in the search hits, query by `_parent` field or sort by it. In these case we lookup the field mapping and that doesn't map correctly. There is no `_parent` field, but fields with `_parent*` prefix. Not sure how to resolve this. \n", "created_at": "2015-09-03T13:46:53Z" }, { "body": "Hmm, so maybe we should be able to have 2 field types? One called _parent that would only be stored and another one called _parent#type that would only be doc-valued?\n", "created_at": "2015-09-03T14:01:30Z" }, { "body": "Right, that would solve it, just not sure how to would like. The ParentFieldMapper can only produce one `MappedFieldType`. We then need to add another field mapper?\n\nTaking a step back, maybe just allowing the the `loading` part of field data settings to be different is okay for 2.0? This is a much smaller change. We can then work on a better fix for 2.x and 3.0. 
The fact that `ParentFieldMapper` now embeds more fields than it used to should be addressed? Maybe we should break it out into different fields (multiple unique join fields that connect docs of a different type).\n", "created_at": "2015-09-03T14:12:31Z" }, { "body": "Yesterday we discussed that the _parent field should use a unique internal field type name per type. That way, when the mapping compatibility check is performed, no check would fail since each field would be unique and having different field data settings wouldn't matter.\n\nMaking that change has one important implication: the `_parent` field can't be used directly in many places across the APIs. The most notable place would be in the query dsl and we decided that we should have a `_parent_id` query to cover this. Also, looking up the `_parent` id for search hits in the _search api requires a special sub fetch phase. \n\nI went ahead and made the change: https://github.com/martijnvg/elasticsearch/commit/26e2e5d95d2b3a3f778a759ed7a7ea2de9b09842\n\nHowever, things got tricky and I had to make more changes than I would like:\n- Due to differences between the p/c query implementations, several checks had to be in place to make sure that the right fields were picked.\n- Sorting and aggregating by the _parent field wouldn't work either. Whether we should continue to support this is a good question (I lean towards not). \n- Maintaining support for specifying the parent id in the `_parent` field in a document required a very subtle change for 1.x indices. (This is no longer allowed in 2.x indices.)\n- In the field data cache we would end up with multiple _parent field entries per field type. (not sure if this is bad)\n- Clearing the cache by _parent field wouldn't work either.\n\nThe change grew to a point where I'm not comfortable porting it back to 2.0, and I think we should completely revise the _parent field. For example, with the new implementation we don't need to store the _parent field as an indexed and stored field. The join doc values fields are sufficient. The _parent Lucene fields just exist for the old parent child implementation. Also, if we're going to revise types this is going to have a big impact on the _parent field (if it still exists then).\n\nI think for 2.0 we need to make a compromise in order to keep the change small and low risk. Disabling the field data settings would work, but that wouldn't be nice for all the other fields. So I think disabling the field type compatibility check on the _parent field is maybe the right workaround for now:\nhttps://github.com/martijnvg/elasticsearch/commits/mapping/disable_compatibility_check_for_parent_field\n\nThe only field type related setting that can be set on the _parent field is field data loading. All the other settings aren't configurable. I think that makes this workaround better than disabling field data settings for all fields.\n", "created_at": "2015-09-04T16:47:58Z" }, { "body": "@martijnvg ++\n", "created_at": "2015-09-06T13:21:23Z" } ], "number": 13169, "title": "Mapping conflict error upgrading from 1.7 to 2.0" }
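A minimal sketch of the per-type join field idea discussed in this thread, assuming the `_parent#<type>` naming mentioned above; the `joinField` helper and the settings map below are written from scratch for illustration and are not the actual `ParentFieldMapper` code.

```java
import java.util.HashMap;
import java.util.Map;

// Keying fielddata settings by a per-type join field name instead of the shared
// "_parent" name, so types with different loading settings no longer clash.
class ParentJoinFieldSketch {
    static String joinField(String parentType) {
        return "_parent#" + parentType; // e.g. "_parent#product"
    }

    public static void main(String[] args) {
        Map<String, String> fieldDataLoading = new HashMap<>();
        fieldDataLoading.put(joinField("parent"), "lazy");
        fieldDataLoading.put(joinField("product"), "eager_global_ordinals");
        // distinct keys per parent type, so a compatibility check keyed by field name
        // has nothing to conflict on
        fieldDataLoading.forEach((field, loading) -> System.out.println(field + " -> " + loading));
    }
}
```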
{ "body": "Split the _parent field mapping's field type into two feld types:\n1) A shared immutable fieldtype for the _parent field (used for direct access to that field in the dsl). This field type is stored and indexed.\n2) A per type field type for the join field. The field type has doc values enabled and field data type is allowed to be changed.\n\nThis resolves the issue that a mapping is not compatible if parent and child types have different field data loading settings.\n\nPR for #13169\n", "number": 13399, "review_comments": [ { "body": "I don't think this should go here. It can be a special method on ParentFieldMapper? IIRC, the users already have a ParentFieldMapper? Or is this because the warming code is generic?\n", "created_at": "2015-09-08T15:04:31Z" }, { "body": "Why not keep this as a member of the builder, and have setting field data settings set the fielddatatype on it directly?\n", "created_at": "2015-09-08T15:06:11Z" }, { "body": "I don't think we need a ref here? This field type is always associated directly with this mapper.\n", "created_at": "2015-09-08T15:07:21Z" }, { "body": "Why do we need this? The public ctor is used for new types, when intializing all the metadata fields. But in that case, there is nothing shared so no need to initialize the join fieldtype (it should not be used unless/until _parent is set on the new type, in which case it will be parsed and the protected ctor will be used).\n", "created_at": "2015-09-08T15:10:02Z" }, { "body": "The more I think about this, the more I still think having something like `getJoinFieldType()` is the way to go. Then there is no need for `joinField` below, because the field name is included with the field type?\n", "created_at": "2015-09-08T15:11:14Z" }, { "body": "You can use joinFieldType.name() right? Instead of `joinField`\n", "created_at": "2015-09-08T15:12:07Z" }, { "body": "Or better, `createJoinField` as the other places do?\n", "created_at": "2015-09-08T15:13:20Z" }, { "body": "This should not be necessary. You can just clone the joinFieldType and set?\n", "created_at": "2015-09-08T15:14:19Z" }, { "body": "Won't the `indexName` be wrong below, since it is just `_parent`? Maybe we should just have a special case here for `_parent` for now? Something like:\n\n```\nString indexName = fieldMapper.fieldType.names().indexName();\nFieldDataType fieldDataType = fieldMapper.fieldType().fieldDataType();\nif (fieldMapper instanceOf ParentFieldMapper) {\n ParentFieldMapper parentMapper = (ParentFieldMapper)fieldMapper;\n indexName = parentMapper.getJoinFieldType().names().indexName();\n fieldDataType = parentMapper.getJoinFieldType().fieldDataType();\n}\n```\n", "created_at": "2015-09-08T15:20:26Z" }, { "body": "Yes, the warming code is generic. It doesn't know right now what the concrete field mapper impl is.\n", "created_at": "2015-09-08T15:30:27Z" }, { "body": "+1\n", "created_at": "2015-09-08T15:30:47Z" }, { "body": "If a document is both parent and child then it has two join fields. So maybe we should have two `MappedFieldType`?\n", "created_at": "2015-09-08T15:33:57Z" }, { "body": "I'm not a big fan of this exception logic. 
This works out now because `_parent` field name is mapped to a special field data implementation in the field data service.\n", "created_at": "2015-09-08T15:36:20Z" }, { "body": "ok, I'll change that\n", "created_at": "2015-09-08T15:38:25Z" }, { "body": "Can we remove that logic if we setup the correct field type here?\n", "created_at": "2015-09-08T15:40:35Z" }, { "body": "Sure, so parentJointFieldType and childJoinFieldType?\n", "created_at": "2015-09-08T15:46:23Z" }, { "body": "I think this would be correct then to essentially move the hack here, instead of basing on name? You could have this in a private helper method too?\n\n```\nif (fieldMapper instanceOf ParentFieldMapper) {\n handleParentField((ParentFieldMapper)fieldMapper, warmUp);\n continue;\n}\n```\n\nThen that method can handle adding the field types for both the parent and/or child join fields, whichever exist?\n", "created_at": "2015-09-08T15:50:00Z" }, { "body": "I don't think we can remove this. When adding a parent type no _parent field is defined and the default instance is used and that instance is used when indexing a parent document. (to store its id in the join field (this is the same join field that child type parent field writes to parent id into) )\n", "created_at": "2015-09-08T21:07:48Z" }, { "body": "We should look at requiring this to be explicit in the future. I think it would be much cleaner to have to say \"this is a parent\" and \"this is a child\", instead of just the latter. But ok for now.\n", "created_at": "2015-09-08T21:11:57Z" }, { "body": "Can we call this `getChildJoinFieldType()`?\n", "created_at": "2015-09-08T21:16:16Z" }, { "body": "We should freeze the field type here.\n", "created_at": "2015-09-08T21:16:38Z" }, { "body": "Freeze the parent and child field types.\n", "created_at": "2015-09-08T21:17:21Z" }, { "body": "I think you should call checkCompatibility with the strict option. It will give a more clear error for what has changed.\n", "created_at": "2015-09-08T21:20:05Z" }, { "body": "No need for this. Just do:\n\n```\nchildJoinFieldType = fieldMergeWith.childJoinFieldType.clone();\n```\n", "created_at": "2015-09-08T21:21:00Z" }, { "body": "Is the parentJoinFieldType never added for warming?\n", "created_at": "2015-09-08T21:23:41Z" }, { "body": "Why do we need this default as a member? Isn't it always Defaults.JOIN_FIELD_TYPE?\n", "created_at": "2015-09-08T21:24:18Z" }, { "body": "oops, I forgot. I'd expect test to fail if this was forgotten.\n", "created_at": "2015-09-08T21:24:38Z" }, { "body": "But we don't have anything testing these field types yet! Indeed tests will fail if it is the \"real\" field type for the mapper, but not this internal hidden field type. We need to test that directly.\n", "created_at": "2015-09-08T21:26:52Z" }, { "body": "no, it is added via the child type's childJoinFieldType. If there is an active child parent field it is added via that parent field instance (childType.childJoinFieldType == parentType. parentJoinFieldType)\n", "created_at": "2015-09-08T21:30:12Z" }, { "body": "true\n", "created_at": "2015-09-08T21:33:23Z" } ], "title": "Split the _parent field mapping's field type into two field types" }
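One review comment above points out that a document type can be both a parent and a child, in which case it carries two join fields. The snippet below only illustrates that naming consequence under the same assumed `_parent#<type>` convention; the type names are hypothetical and not taken from the PR.

```java
// A "middle" type in a grandparent -> parent -> child hierarchy writes two join fields:
// one pointing at the type it is a child of, and one for the type it parents.
class TwoJoinFieldsSketch {
    static String joinField(String type) { return "_parent#" + type; }

    public static void main(String[] args) {
        String middleType = "parent"; // child of "grandparent", parent of "child"
        String childJoinField = joinField("grandparent"); // holds the id of this doc's parent
        String parentJoinField = joinField(middleType);   // holds this doc's own id for its children
        System.out.println("childJoinFieldType name:  " + childJoinField);
        System.out.println("parentJoinFieldType name: " + parentJoinField);
    }
}
```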
{ "commits": [ { "message": "parent/child: Split the _parent field mapping's field type into three field types:\n1) A shared immutable fieldtype for the _parent field (used for direct access to that field in the dsl). This field type is stored and indexed.\n2) A per type field type for the child join field. The field type has doc values enabled if index is created on or post 2.0 and field data type is allowed to be changed.\n3) A per type field type for the parent join field. The field type has doc values enabled if index is created on or post 2.0.\n\nThis resolves the issue that a mapping is not compatible if parent and child types have different field data loading settings.\n\nCloses #13169" } ], "files": [ { "diff": "@@ -113,7 +113,7 @@ public Builder(Settings indexSettings, RootObjectMapper.Builder builder, MapperS\n this.rootMappers.put(TimestampFieldMapper.class, new TimestampFieldMapper(indexSettings, mapperService.fullName(TimestampFieldMapper.NAME)));\n this.rootMappers.put(TTLFieldMapper.class, new TTLFieldMapper(indexSettings));\n this.rootMappers.put(VersionFieldMapper.class, new VersionFieldMapper(indexSettings));\n- this.rootMappers.put(ParentFieldMapper.class, new ParentFieldMapper(indexSettings, mapperService.fullName(ParentFieldMapper.NAME)));\n+ this.rootMappers.put(ParentFieldMapper.class, new ParentFieldMapper(indexSettings, mapperService.fullName(ParentFieldMapper.NAME), /* parent type */builder.name()));\n // _field_names last so that it can see all other fields\n this.rootMappers.put(FieldNamesFieldMapper.class, new FieldNamesFieldMapper(indexSettings, mapperService.fullName(FieldNamesFieldMapper.NAME)));\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java", "status": "modified" }, { "diff": "@@ -147,8 +147,8 @@ public void putRootTypeParser(String type, Mapper.TypeParser typeParser) {\n }\n }\n \n- public Mapper.TypeParser.ParserContext parserContext() {\n- return new Mapper.TypeParser.ParserContext(analysisService, similarityLookupService, mapperService, typeParsers, indexVersionCreated, parseFieldMatcher);\n+ public Mapper.TypeParser.ParserContext parserContext(String type) {\n+ return new Mapper.TypeParser.ParserContext(type, analysisService, similarityLookupService, mapperService, typeParsers, indexVersionCreated, parseFieldMatcher);\n }\n \n public DocumentMapper parse(String source) throws MapperParsingException {\n@@ -206,7 +206,7 @@ private DocumentMapper parse(String type, Map<String, Object> mapping, String de\n }\n \n \n- Mapper.TypeParser.ParserContext parserContext = parserContext();\n+ Mapper.TypeParser.ParserContext parserContext = parserContext(type);\n // parse RootObjectMapper\n DocumentMapper.Builder docBuilder = doc(indexSettings, (RootObjectMapper.Builder) rootObjectTypeParser.parse(type, mapping, parserContext), mapperService);\n // Add default mapping for the plugged-in meta mappers", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java", "status": "modified" }, { "diff": "@@ -81,6 +81,8 @@ public interface TypeParser {\n \n class ParserContext {\n \n+ private final String type;\n+\n private final AnalysisService analysisService;\n \n private final SimilarityLookupService similarityLookupService;\n@@ -93,9 +95,10 @@ class ParserContext {\n \n private final ParseFieldMatcher parseFieldMatcher;\n \n- public ParserContext(AnalysisService analysisService, SimilarityLookupService similarityLookupService,\n+ public ParserContext(String type, AnalysisService analysisService, 
SimilarityLookupService similarityLookupService,\n MapperService mapperService, ImmutableMap<String, TypeParser> typeParsers,\n- Version indexVersionCreated, ParseFieldMatcher parseFieldMatcher) {\n+ Version indexVersionCreated, ParseFieldMatcher parseFieldMatcher) {\n+ this.type = type;\n this.analysisService = analysisService;\n this.similarityLookupService = similarityLookupService;\n this.mapperService = mapperService;\n@@ -104,6 +107,10 @@ public ParserContext(AnalysisService analysisService, SimilarityLookupService si\n this.parseFieldMatcher = parseFieldMatcher;\n }\n \n+ public String type() {\n+ return type;\n+ }\n+\n public AnalysisService analysisService() {\n return analysisService;\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/Mapper.java", "status": "modified" }, { "diff": "@@ -70,7 +70,6 @@\n import java.util.Map;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.locks.ReentrantReadWriteLock;\n-import java.util.function.Predicate;\n \n import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder;\n \n@@ -556,7 +555,7 @@ public MappedFieldType unmappedFieldType(String type) {\n final ImmutableMap<String, MappedFieldType> unmappedFieldMappers = this.unmappedFieldTypes;\n MappedFieldType fieldType = unmappedFieldMappers.get(type);\n if (fieldType == null) {\n- final Mapper.TypeParser.ParserContext parserContext = documentMapperParser().parserContext();\n+ final Mapper.TypeParser.ParserContext parserContext = documentMapperParser().parserContext(type);\n Mapper.TypeParser typeParser = parserContext.typeParser(type);\n if (typeParser == null) {\n throw new IllegalArgumentException(\"No mapper found for type [\" + type + \"]\");", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MapperService.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.SortedDocValuesField;\n+import org.apache.lucene.index.DocValuesType;\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.queries.TermsQuery;\n import org.apache.lucene.search.Query;\n@@ -33,15 +34,7 @@\n import org.elasticsearch.common.settings.loader.SettingsLoader;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.fielddata.FieldDataType;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.MappedFieldType;\n-import org.elasticsearch.index.mapper.Mapper;\n-import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.index.mapper.MergeMappingException;\n-import org.elasticsearch.index.mapper.MergeResult;\n-import org.elasticsearch.index.mapper.MetadataFieldMapper;\n-import org.elasticsearch.index.mapper.ParseContext;\n-import org.elasticsearch.index.mapper.Uid;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.query.QueryParseContext;\n \n import java.io.IOException;\n@@ -67,6 +60,7 @@ public static class Defaults {\n public static final String NAME = ParentFieldMapper.NAME;\n \n public static final MappedFieldType FIELD_TYPE = new ParentFieldType();\n+ public static final MappedFieldType JOIN_FIELD_TYPE = new ParentFieldType();\n \n static {\n FIELD_TYPE.setIndexOptions(IndexOptions.DOCS);\n@@ -77,41 +71,66 @@ public static class Defaults {\n FIELD_TYPE.setSearchAnalyzer(Lucene.KEYWORD_ANALYZER);\n FIELD_TYPE.setNames(new MappedFieldType.Names(NAME));\n FIELD_TYPE.freeze();\n+\n+ 
JOIN_FIELD_TYPE.setHasDocValues(true);\n+ JOIN_FIELD_TYPE.setDocValuesType(DocValuesType.SORTED);\n+ JOIN_FIELD_TYPE.freeze();\n }\n }\n \n public static class Builder extends MetadataFieldMapper.Builder<Builder, ParentFieldMapper> {\n \n+ private String parentType;\n+\n protected String indexName;\n \n- private String type;\n+ private final String documentType;\n+\n+ private final MappedFieldType parentJoinFieldType = Defaults.JOIN_FIELD_TYPE.clone();\n \n- public Builder() {\n+ private final MappedFieldType childJoinFieldType = Defaults.JOIN_FIELD_TYPE.clone();\n+\n+ public Builder(String documentType) {\n super(Defaults.NAME, Defaults.FIELD_TYPE);\n this.indexName = name;\n+ this.documentType = documentType;\n builder = this;\n }\n \n public Builder type(String type) {\n- this.type = type;\n+ this.parentType = type;\n return builder;\n }\n \n+ @Override\n+ public Builder fieldDataSettings(Settings fieldDataSettings) {\n+ Settings settings = Settings.builder().put(childJoinFieldType.fieldDataType().getSettings()).put(fieldDataSettings).build();\n+ childJoinFieldType.setFieldDataType(new FieldDataType(childJoinFieldType.fieldDataType().getType(), settings));\n+ return this;\n+ }\n+\n @Override\n public ParentFieldMapper build(BuilderContext context) {\n- if (type == null) {\n+ if (parentType == null) {\n throw new MapperParsingException(\"[_parent] field mapping must contain the [type] option\");\n }\n- setupFieldType(context);\n- fieldType.setHasDocValues(context.indexCreatedVersion().onOrAfter(Version.V_2_0_0_beta1));\n- return new ParentFieldMapper(fieldType, type, context.indexSettings());\n+ parentJoinFieldType.setNames(new MappedFieldType.Names(joinField(documentType)));\n+ parentJoinFieldType.setFieldDataType(null);\n+ childJoinFieldType.setNames(new MappedFieldType.Names(joinField(parentType)));\n+ if (context.indexCreatedVersion().before(Version.V_2_0_0_beta1)) {\n+ childJoinFieldType.setHasDocValues(false);\n+ childJoinFieldType.setDocValuesType(DocValuesType.NONE);\n+ parentJoinFieldType.setHasDocValues(false);\n+ parentJoinFieldType.setDocValuesType(DocValuesType.NONE);\n+ }\n+ return new ParentFieldMapper(fieldType, parentJoinFieldType, childJoinFieldType, parentType, context.indexSettings());\n }\n }\n \n public static class TypeParser implements Mapper.TypeParser {\n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n- Builder builder = new Builder();\n+ Builder builder = new Builder(parserContext.type());\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n@@ -222,25 +241,50 @@ public Query termsQuery(List values, @Nullable QueryParseContext context) {\n }\n }\n \n- private final String type;\n+ private final String parentType;\n+ // determines the field data settings\n+ private MappedFieldType childJoinFieldType;\n+ // has no impact of field data settings, is just here for creating a join field, the parent field mapper in the child type pointing to this type determines the field data settings for this join field\n+ private final MappedFieldType parentJoinFieldType;\n+\n+ protected ParentFieldMapper(MappedFieldType fieldType, MappedFieldType parentJoinFieldType, MappedFieldType childJoinFieldType, String parentType, Settings indexSettings) {\n+ super(NAME, fieldType, Defaults.FIELD_TYPE, indexSettings);\n+ 
this.parentType = parentType;\n+ this.parentJoinFieldType = parentJoinFieldType;\n+ this.parentJoinFieldType.freeze();\n+ this.childJoinFieldType = childJoinFieldType;\n+ if (childJoinFieldType != null) {\n+ this.childJoinFieldType.freeze();\n+ }\n+ }\n+\n+ public ParentFieldMapper(Settings indexSettings, MappedFieldType existing, String parentType) {\n+ this(existing == null ? Defaults.FIELD_TYPE.clone() : existing.clone(), joinFieldTypeForParentType(parentType, indexSettings), null, null, indexSettings);\n+ }\n+\n+ private static MappedFieldType joinFieldTypeForParentType(String parentType, Settings indexSettings) {\n+ MappedFieldType parentJoinFieldType = Defaults.JOIN_FIELD_TYPE.clone();\n+ parentJoinFieldType.setNames(new MappedFieldType.Names(joinField(parentType)));\n \n- protected ParentFieldMapper(MappedFieldType fieldType, String type, Settings indexSettings) {\n- super(NAME, setupDocValues(indexSettings, fieldType), setupDocValues(indexSettings, Defaults.FIELD_TYPE), indexSettings);\n- this.type = type;\n+ Version indexCreated = Version.indexCreated(indexSettings);\n+ if (indexCreated.before(Version.V_2_0_0_beta1)) {\n+ parentJoinFieldType.setHasDocValues(false);\n+ parentJoinFieldType.setDocValuesType(DocValuesType.NONE);\n+ }\n+ parentJoinFieldType.freeze();\n+ return parentJoinFieldType;\n }\n \n- public ParentFieldMapper(Settings indexSettings, MappedFieldType existing) {\n- this(existing == null ? Defaults.FIELD_TYPE.clone() : existing.clone(), null, indexSettings);\n+ public MappedFieldType getParentJoinFieldType() {\n+ return parentJoinFieldType;\n }\n \n- static MappedFieldType setupDocValues(Settings indexSettings, MappedFieldType fieldType) {\n- fieldType = fieldType.clone();\n- fieldType.setHasDocValues(Version.indexCreated(indexSettings).onOrAfter(Version.V_2_0_0_beta1));\n- return fieldType;\n+ public MappedFieldType getChildJoinFieldType() {\n+ return childJoinFieldType;\n }\n \n public String type() {\n- return type;\n+ return parentType;\n }\n \n @Override\n@@ -257,8 +301,8 @@ public void postParse(ParseContext context) throws IOException {\n @Override\n protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n boolean parent = context.docMapper().isParent(context.type());\n- if (parent && fieldType().hasDocValues()) {\n- fields.add(createJoinField(context.type(), context.id()));\n+ if (parent) {\n+ addJoinFieldIfNeeded(fields, parentJoinFieldType, context.id());\n }\n \n if (!active()) {\n@@ -269,10 +313,8 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n // we are in the parsing of _parent phase\n String parentId = context.parser().text();\n context.sourceToParse().parent(parentId);\n- fields.add(new Field(fieldType().names().indexName(), Uid.createUid(context.stringBuilder(), type, parentId), fieldType()));\n- if (fieldType().hasDocValues()) {\n- fields.add(createJoinField(type, parentId));\n- }\n+ fields.add(new Field(fieldType().names().indexName(), Uid.createUid(context.stringBuilder(), parentType, parentId), fieldType()));\n+ addJoinFieldIfNeeded(fields, childJoinFieldType, parentId);\n } else {\n // otherwise, we are running it post processing of the xcontent\n String parsedParentId = context.doc().get(Defaults.NAME);\n@@ -283,21 +325,20 @@ protected void parseCreateField(ParseContext context, List<Field> fields) throws\n throw new MapperParsingException(\"No parent id provided, not within the document, and not externally\");\n }\n // we did not add it in the parsing phase, add it 
now\n- fields.add(new Field(fieldType().names().indexName(), Uid.createUid(context.stringBuilder(), type, parentId), fieldType()));\n- if (fieldType().hasDocValues()) {\n- fields.add(createJoinField(type, parentId));\n- }\n- } else if (parentId != null && !parsedParentId.equals(Uid.createUid(context.stringBuilder(), type, parentId))) {\n+ fields.add(new Field(fieldType().names().indexName(), Uid.createUid(context.stringBuilder(), parentType, parentId), fieldType()));\n+ addJoinFieldIfNeeded(fields, childJoinFieldType, parentId);\n+ } else if (parentId != null && !parsedParentId.equals(Uid.createUid(context.stringBuilder(), parentType, parentId))) {\n throw new MapperParsingException(\"Parent id mismatch, document value is [\" + Uid.createUid(parsedParentId).id() + \"], while external value is [\" + parentId + \"]\");\n }\n }\n }\n // we have parent mapping, yet no value was set, ignore it...\n }\n \n- private SortedDocValuesField createJoinField(String parentType, String id) {\n- String joinField = joinField(parentType);\n- return new SortedDocValuesField(joinField, new BytesRef(id));\n+ private void addJoinFieldIfNeeded(List<Field> fields, MappedFieldType fieldType, String id) {\n+ if (fieldType.hasDocValues()) {\n+ fields.add(new SortedDocValuesField(fieldType.names().indexName(), new BytesRef(id)));\n+ }\n }\n \n public static String joinField(String parentType) {\n@@ -309,6 +350,10 @@ protected String contentType() {\n return CONTENT_TYPE;\n }\n \n+ private boolean joinFieldHasCustomFieldDataSettings() {\n+ return childJoinFieldType != null && childJoinFieldType.fieldDataType() != null && childJoinFieldType.fieldDataType().equals(Defaults.JOIN_FIELD_TYPE.fieldDataType()) == false;\n+ }\n+\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n if (!active()) {\n@@ -317,9 +362,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n boolean includeDefaults = params.paramAsBoolean(\"include_defaults\", false);\n \n builder.startObject(CONTENT_TYPE);\n- builder.field(\"type\", type);\n- if (includeDefaults || hasCustomFieldDataSettings()) {\n- builder.field(\"fielddata\", (Map) fieldType().fieldDataType().getSettings().getAsMap());\n+ builder.field(\"type\", parentType);\n+ if (includeDefaults || joinFieldHasCustomFieldDataSettings()) {\n+ builder.field(\"fielddata\", (Map) childJoinFieldType.fieldDataType().getSettings().getAsMap());\n }\n builder.endObject();\n return builder;\n@@ -329,16 +374,31 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n public void merge(Mapper mergeWith, MergeResult mergeResult) throws MergeMappingException {\n super.merge(mergeWith, mergeResult);\n ParentFieldMapper fieldMergeWith = (ParentFieldMapper) mergeWith;\n- if (Objects.equals(type, fieldMergeWith.type) == false) {\n- mergeResult.addConflict(\"The _parent field's type option can't be changed: [\" + type + \"]->[\" + fieldMergeWith.type + \"]\");\n+ if (Objects.equals(parentType, fieldMergeWith.parentType) == false) {\n+ mergeResult.addConflict(\"The _parent field's type option can't be changed: [\" + parentType + \"]->[\" + fieldMergeWith.parentType + \"]\");\n+ }\n+\n+ List<String> conflicts = new ArrayList<>();\n+ fieldType().checkCompatibility(fieldMergeWith.fieldType(), conflicts, true); // always strict, this cannot change\n+ parentJoinFieldType.checkCompatibility(fieldMergeWith.parentJoinFieldType, conflicts, true); // same here\n+ if (childJoinFieldType != null) {\n+ // 
TODO: this can be set to false when the old parent/child impl is removed, we can do eager global ordinals loading per type.\n+ childJoinFieldType.checkCompatibility(fieldMergeWith.childJoinFieldType, conflicts, mergeResult.updateAllTypes() == false);\n+ }\n+ for (String conflict : conflicts) {\n+ mergeResult.addConflict(conflict);\n+ }\n+\n+ if (active() && mergeResult.simulate() == false && mergeResult.hasConflicts() == false) {\n+ childJoinFieldType = fieldMergeWith.childJoinFieldType.clone();\n }\n }\n \n /**\n * @return Whether the _parent field is actually configured.\n */\n public boolean active() {\n- return type != null;\n+ return parentType != null;\n }\n \n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java", "status": "modified" }, { "diff": "@@ -245,7 +245,7 @@ public Mapper.Builder findTemplateBuilder(ParseContext context, String name, Str\n if (dynamicTemplate == null) {\n return null;\n }\n- Mapper.TypeParser.ParserContext parserContext = context.docMapperParser().parserContext();\n+ Mapper.TypeParser.ParserContext parserContext = context.docMapperParser().parserContext(name);\n String mappingType = dynamicTemplate.mappingType(dynamicType);\n Mapper.TypeParser typeParser = parserContext.typeParser(mappingType);\n if (typeParser == null) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/object/RootObjectMapper.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MappedFieldType.Loading;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.query.TemplateQueryParser;\n import org.elasticsearch.index.search.stats.ShardSearchStats;\n import org.elasticsearch.index.search.stats.StatsGroupsParseElement;\n@@ -910,15 +911,29 @@ public TerminationHandle warmNewReaders(final IndexShard indexShard, IndexMetaDa\n final Map<String, MappedFieldType> warmUp = new HashMap<>();\n for (DocumentMapper docMapper : mapperService.docMappers(false)) {\n for (FieldMapper fieldMapper : docMapper.mappers()) {\n- final FieldDataType fieldDataType = fieldMapper.fieldType().fieldDataType();\n+ final FieldDataType fieldDataType;\n+ final String indexName;\n+ if (fieldMapper instanceof ParentFieldMapper) {\n+ MappedFieldType joinFieldType = ((ParentFieldMapper) fieldMapper).getChildJoinFieldType();\n+ if (joinFieldType == null) {\n+ continue;\n+ }\n+ fieldDataType = joinFieldType.fieldDataType();\n+ // TODO: this can be removed in 3.0 when the old parent/child impl is removed:\n+ // related to: https://github.com/elastic/elasticsearch/pull/12418\n+ indexName = fieldMapper.fieldType().names().indexName();\n+ } else {\n+ fieldDataType = fieldMapper.fieldType().fieldDataType();\n+ indexName = fieldMapper.fieldType().names().indexName();\n+ }\n+\n if (fieldDataType == null) {\n continue;\n }\n if (fieldDataType.getLoading() == Loading.LAZY) {\n continue;\n }\n \n- final String indexName = fieldMapper.fieldType().names().indexName();\n if (warmUp.containsKey(indexName)) {\n continue;\n }\n@@ -964,14 +979,27 @@ public TerminationHandle warmTopReader(final IndexShard indexShard, IndexMetaDat\n final Map<String, MappedFieldType> warmUpGlobalOrdinals = new HashMap<>();\n for (DocumentMapper docMapper : mapperService.docMappers(false)) {\n for (FieldMapper fieldMapper : docMapper.mappers()) {\n- final FieldDataType fieldDataType = 
fieldMapper.fieldType().fieldDataType();\n+ final FieldDataType fieldDataType;\n+ final String indexName;\n+ if (fieldMapper instanceof ParentFieldMapper) {\n+ MappedFieldType joinFieldType = ((ParentFieldMapper) fieldMapper).getChildJoinFieldType();\n+ if (joinFieldType == null) {\n+ continue;\n+ }\n+ fieldDataType = joinFieldType.fieldDataType();\n+ // TODO: this can be removed in 3.0 when the old parent/child impl is removed:\n+ // related to: https://github.com/elastic/elasticsearch/pull/12418\n+ indexName = fieldMapper.fieldType().names().indexName();\n+ } else {\n+ fieldDataType = fieldMapper.fieldType().fieldDataType();\n+ indexName = fieldMapper.fieldType().names().indexName();\n+ }\n if (fieldDataType == null) {\n continue;\n }\n if (fieldDataType.getLoading() != Loading.EAGER_GLOBAL_ORDINALS) {\n continue;\n }\n- final String indexName = fieldMapper.fieldType().names().indexName();\n if (warmUpGlobalOrdinals.containsKey(indexName)) {\n continue;\n }", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -88,7 +88,7 @@ public <IFD extends IndexFieldData<?>> IFD getForField(FieldDataType type, Strin\n } else if (type.getType().equals(\"geo_point\")) {\n fieldType = MapperBuilders.geoPointField(fieldName).docValues(docValues).fieldDataSettings(type.getSettings()).build(context).fieldType();\n } else if (type.getType().equals(\"_parent\")) {\n- fieldType = new ParentFieldMapper.Builder().type(fieldName).build(context).fieldType();\n+ fieldType = new ParentFieldMapper.Builder(\"_type\").type(fieldName).build(context).fieldType();\n } else if (type.getType().equals(\"binary\")) {\n fieldType = MapperBuilders.binaryField(fieldName).docValues(docValues).fieldDataSettings(type.getSettings()).build(context).fieldType();\n } else {", "filename": "core/src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataTestCase.java", "status": "modified" }, { "diff": "@@ -0,0 +1,158 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.mapper.internal;\n+\n+import org.apache.lucene.index.DocValuesType;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.fielddata.FieldDataType;\n+import org.elasticsearch.index.mapper.ContentPath;\n+import org.elasticsearch.index.mapper.MappedFieldType.Loading;\n+import org.elasticsearch.index.mapper.Mapper;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.nullValue;\n+\n+public class ParentFieldMapperTests extends ESTestCase {\n+\n+ public void testPost2Dot0LazyLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.LAZY));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new Mapper.BuilderContext(post2Dot0IndexSettings(), new ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.LAZY));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+ }\n+\n+ public void testPost2Dot0EagerLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.EAGER));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new Mapper.BuilderContext(post2Dot0IndexSettings(), new ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.EAGER));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+ }\n+\n+ public void testPost2Dot0EagerGlobalOrdinalsLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.EAGER_GLOBAL_ORDINALS));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new 
Mapper.BuilderContext(post2Dot0IndexSettings(), new ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.EAGER_GLOBAL_ORDINALS));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(true));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.SORTED));\n+ }\n+\n+ public void testPre2Dot0LazyLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.LAZY));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new Mapper.BuilderContext(pre2Dot0IndexSettings(), new ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.LAZY));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+ }\n+\n+ public void testPre2Dot0EagerLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.EAGER));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new Mapper.BuilderContext(pre2Dot0IndexSettings(), new ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.EAGER));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+ }\n+\n+ public void testPre2Dot0EagerGlobalOrdinalsLoading() {\n+ ParentFieldMapper.Builder builder = new ParentFieldMapper.Builder(\"child\");\n+ builder.type(\"parent\");\n+ builder.fieldDataSettings(createFDSettings(Loading.EAGER_GLOBAL_ORDINALS));\n+\n+ ParentFieldMapper parentFieldMapper = builder.build(new Mapper.BuilderContext(pre2Dot0IndexSettings(), new 
ContentPath(0)));\n+\n+ assertThat(parentFieldMapper.getParentJoinFieldType().names().indexName(), equalTo(\"_parent#child\"));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().fieldDataType(), nullValue());\n+ assertThat(parentFieldMapper.getParentJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getParentJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+\n+ assertThat(parentFieldMapper.getChildJoinFieldType().names().indexName(), equalTo(\"_parent#parent\"));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().fieldDataType().getLoading(), equalTo(Loading.EAGER_GLOBAL_ORDINALS));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().hasDocValues(), is(false));\n+ assertThat(parentFieldMapper.getChildJoinFieldType().docValuesType(), equalTo(DocValuesType.NONE));\n+ }\n+\n+ private static Settings pre2Dot0IndexSettings() {\n+ return Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_6_3).build();\n+ }\n+\n+ private static Settings post2Dot0IndexSettings() {\n+ return Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_2_1_0).build();\n+ }\n+\n+ private static Settings createFDSettings(Loading loading) {\n+ return new FieldDataType(\"child\", settingsBuilder().put(Loading.KEY, loading)).getSettings();\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/internal/ParentFieldMapperTests.java", "status": "added" }, { "diff": "@@ -1199,7 +1199,7 @@ public void testAddingParentToExistingMapping() throws IOException {\n .endObject().endObject()).get();\n fail();\n } catch (MergeMappingException e) {\n- assertThat(e.toString(), containsString(\"Merge failed with failures {[The _parent field's type option can't be changed: [null]->[parent]]}\"));\n+ assertThat(e.toString(), containsString(\"Merge failed with failures {[The _parent field's type option can't be changed: [null]->[parent]\"));\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java", "status": "modified" }, { "diff": "@@ -129,14 +129,14 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY)));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n client().prepareIndex(\"test\", \"child\", \"1\").setParent(\"1\").setSource(\"{}\").get();\n refresh();\n \n+ IndicesStatsResponse r = client().admin().indices().prepareStats(\"test\").setFieldData(true).setFieldDataFields(\"*\").get();\n ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get();\n assertThat(response.getIndicesStats().getFieldData().getMemorySizeInBytes(), equalTo(0l));\n \n@@ -145,8 +145,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", \"_parent\", \"type=parent\")\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", \"_parent\", \"type=parent\"));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -162,8 +161,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- 
.addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER)));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -178,8 +176,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS)));\n ensureGreen();\n \n // Need to do 2 separate refreshes, otherwise we have 1 segment and then we can't measure if global ordinals\n@@ -227,7 +224,7 @@ public void run() {\n MapperService mapperService = indexService.mapperService();\n DocumentMapper documentMapper = mapperService.documentMapper(\"child\");\n if (documentMapper != null) {\n- verified = documentMapper.parentFieldMapper().fieldType().fieldDataType().getLoading() == MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS;\n+ verified = documentMapper.parentFieldMapper().getChildJoinFieldType().fieldDataType().getLoading() == MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS;\n }\n }\n assertTrue(verified);", "filename": "core/src/test/java/org/elasticsearch/search/child/ParentFieldLoadingBwcIT.java", "status": "modified" }, { "diff": "@@ -57,8 +57,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.LAZY)));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -73,8 +72,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", \"_parent\", \"type=parent\")\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", \"_parent\", \"type=parent\"));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -89,8 +87,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER)));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n@@ -105,8 +102,7 @@ public void testEagerParentFieldLoading() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .setSettings(indexSettings)\n .addMapping(\"parent\")\n- .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS))\n- .setUpdateAllTypes(true));\n+ .addMapping(\"child\", childMapping(MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS)));\n ensureGreen();\n \n // Need to do 2 separate refreshes, otherwise we have 1 segment and then we can't measure if global ordinals\n@@ -153,7 +149,7 @@ public void run() {\n MapperService mapperService = indexService.mapperService();\n DocumentMapper documentMapper = mapperService.documentMapper(\"child\");\n if (documentMapper != null) {\n- verified = 
documentMapper.parentFieldMapper().fieldType().fieldDataType().getLoading() == MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS;\n+ verified = documentMapper.parentFieldMapper().getChildJoinFieldType().fieldDataType().getLoading() == MappedFieldType.Loading.EAGER_GLOBAL_ORDINALS;\n }\n }\n assertTrue(verified);", "filename": "core/src/test/java/org/elasticsearch/search/child/ParentFieldLoadingIT.java", "status": "modified" } ] }
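As a quick reference for the mapper change above: the behaviour that the new ParentFieldMapperTests pin down reduces to two small rules, namely that the join field for a type T is always named "_parent#T", and that the join fields only get doc values when the index was created on or after 2.0.0-beta1. The sketch below is a standalone illustration of just those two rules; the class and method names are invented for this example and it is not the real Elasticsearch mapper.

```java
/**
 * Standalone sketch of the two rules asserted by ParentFieldMapperTests above.
 * Illustrative only; this is not the real Elasticsearch ParentFieldMapper.
 */
public class ParentJoinFieldRules {

    /** The hidden join field for a type is always "_parent#" + type, e.g. "_parent#parent". */
    static String joinField(String type) {
        return "_parent#" + type;
    }

    /**
     * Indices created before 2.0.0-beta1 still rely on the old fielddata-based parent/child
     * implementation, so their join fields carry no doc values (simplified here to a major-version check).
     */
    static boolean joinFieldHasDocValues(int indexCreatedMajorVersion) {
        return indexCreatedMajorVersion >= 2;
    }

    public static void main(String[] args) {
        System.out.println(joinField("parent"));        // _parent#parent
        System.out.println(joinFieldHasDocValues(1));   // false, e.g. an index created on 1.6.3
        System.out.println(joinFieldHasDocValues(2));   // true,  e.g. an index created on 2.1.0
    }
}
```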
{ "body": "After addressing the issues in #13247 and getting the service to start. Stopping the service in 2.x/master no longer works either and fails with the following error:\n\n```\n2015-09-08 09:38:56 Commons Daemon procrun stderr initialized\njava.lang.NoSuchMethodError: close\n```\n\n[Apache Commons Daemon procrun](http://commons.apache.org/proper/commons-daemon/procrun.html) requires a stop method (`--StopMethod`), [which use to exist](https://github.com/elastic/elasticsearch/blob/1.7/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java#L27) but has been removed.\n\nCan this be added back?\n", "comments": [ { "body": "I think its fine, as long as we add comments about how its used? There were tons of methods like this in bootstrap: I probably removed a bunch of them. Technically, this one just needs to call Bootstrap.INSTANCE.stop() I think.\n", "created_at": "2015-09-08T15:51:33Z" }, { "body": "Thanks @rmuir, I've updated #13398 to include this as well.\n", "created_at": "2015-09-08T20:06:06Z" } ], "number": 13401, "title": "Windows service doesn't stop properly" }
{ "body": "This PR addresses a few issues with the service.bat script:\n\n1) Fixes a bug in the concatenation of java options in elasticsearch.in.bat which tripped up Apache Commons Daemon, and caused ES to startup without any params, eventually leading to the \"path.home is not configured\" exception.\n\n2) service.bat was not passing the `start` argument to ES\n\n3) The service could not be stopped gracefully via the `stop` command because there wasn't a method for procrun to call (`--StopMethod`): http://commons.apache.org/proper/commons-daemon/procrun.html\n\nCloses #13247\nCloses #13401\n", "number": 13398, "review_comments": [ { "body": "Must this really be public? According to the documentation it need only be package private. That would be greatly preferable, because it makes it easier to protect down the road.\n", "created_at": "2015-09-08T20:08:16Z" }, { "body": "Yea, good call, it doesn't need to be. I just added a new commit.\n", "created_at": "2015-09-08T20:43:59Z" } ], "title": "Fix service.bat start/stop issues" }
{ "commits": [ { "message": "Packaging: Fix Windows service start/stop issues\n\nThis commit addresses several bugs that prevented the Windows\nservice from being started or stopped:\n\n- Extra white space in the concatenation of java options in\n elasticsearch.in.bat which tripped up Apache Commons Daemon\n and caused ES to startup without any params, eventually leading\n to the \"path.home is not configured\" exception.\n\n- service.bat was not passing the start argument to ES\n\n- The service could not be stopped gracefully via the stop command\n because there wasn't a method for procrun to call.\n\nCloses #13247\nCloses #13401" } ], "files": [ { "diff": "@@ -107,7 +107,7 @@ public static void initializeNatives(boolean mlockAll, boolean ctrlHandler) {\n public boolean handle(int code) {\n if (CTRL_CLOSE_EVENT == code) {\n logger.info(\"running graceful exit on windows\");\n- Bootstrap.INSTANCE.stop();\n+ Bootstrap.stop();\n return true;\n }\n return false;\n@@ -205,11 +205,11 @@ private void start() {\n keepAliveThread.start();\n }\n \n- private void stop() {\n+ static void stop() {\n try {\n- Releasables.close(node);\n+ Releasables.close(INSTANCE.node);\n } finally {\n- keepAliveLatch.countDown();\n+ INSTANCE.keepAliveLatch.countDown();\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" }, { "diff": "@@ -39,4 +39,16 @@ public static void main(String[] args) throws StartupError {\n throw new StartupError(t);\n }\n }\n+\n+ /**\n+ * Required method that's called by Apache Commons procrun when\n+ * running as a service on Windows, when the service is stopped.\n+ *\n+ * http://commons.apache.org/proper/commons-daemon/procrun.html\n+ *\n+ * NOTE: If this method is renamed and/or moved, make sure to update service.bat!\n+ */\n+ static void close(String[] args) {\n+ Bootstrap.stop();\n+ }\n }\n\\ No newline at end of file", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java", "status": "modified" }, { "diff": "@@ -59,7 +59,7 @@ set ES_GC_OPTS=%ES_GC_OPTS% -XX:+UseCMSInitiatingOccupancyOnly\n REM When running under Java 7\n REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark\n )\n-set JAVA_OPTS=%JAVA_OPTS% %ES_GC_OPTS%\n+set JAVA_OPTS=%JAVA_OPTS%%ES_GC_OPTS%\n \n if \"%ES_GC_LOG_FILE%\" == \"\" goto nogclog\n ", "filename": "distribution/src/main/resources/bin/elasticsearch.in.bat", "status": "modified" }, { "diff": "@@ -159,7 +159,7 @@ if not \"%ES_JAVA_OPTS%\" == \"\" set JVM_OPTS=%JVM_OPTS%;%JVM_ES_JAVA_OPTS%\n if \"%ES_START_TYPE%\" == \"\" set ES_START_TYPE=manual\n if \"%ES_STOP_TIMEOUT%\" == \"\" set ES_STOP_TIMEOUT=0\n \n-\"%EXECUTABLE%\" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StopClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main --StopMethod close --Classpath \"%ES_CLASSPATH%\" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile \"%SERVICE_ID%.pid\" --DisplayName \"Elasticsearch %ES_VERSION% (%SERVICE_ID%)\" --Description \"Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org\" --Jvm \"%JVM_DLL%\" --StartMode jvm --StopMode jvm --StartPath \"%ES_HOME%\"\n+\"%EXECUTABLE%\" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StopClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main --StopMethod close --Classpath 
\"%ES_CLASSPATH%\" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile \"%SERVICE_ID%.pid\" --DisplayName \"Elasticsearch %ES_VERSION% (%SERVICE_ID%)\" --Description \"Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org\" --Jvm \"%JVM_DLL%\" --StartMode jvm --StopMode jvm --StartPath \"%ES_HOME%\" ++StartParams start\n \n \n if not errorlevel 1 goto installed", "filename": "distribution/src/main/resources/bin/service.bat", "status": "modified" } ] }
{ "body": "Trying to follow https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service-win.html doesn't work with master.\n\nI get this error \"The data area passed to a system call is too small\"\n", "comments": [ { "body": "I think this functionality is also untested.\n", "created_at": "2015-09-01T13:56:28Z" }, { "body": "@gmarz would you be able to take a look at this?\n", "created_at": "2015-09-01T15:51:37Z" }, { "body": "I get the same thing as well:\n\n```\nException in thread \"main\" areSettings(InternalSettingsPreparer.java:87)\n at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:108)\n at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:101)\n at org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)\n at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:226)\n at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)\nThe data area passed to a system call is too small.\n\nFailed to start service\n```\n\nI'll start digging...\n", "created_at": "2015-09-01T17:36:52Z" }, { "body": "> I think this functionality is also untested.\n\nVery much. I was looking at getting vagrant tests for this too but it'd be very different than the existing bats tests. And I'm not sure what the licenses are for running windows VMs for testing. If someone can point me to something that makes it clear that window's license lets people spin up VMs for testing then I'll dig more into it and try to use the same technique we use for linux. Without bash, I guess.\n", "created_at": "2015-09-01T18:10:29Z" }, { "body": "I found that changes in elasticsarch.in.bat related to ES_GC_OPTS environment variables caused JAVA_OPTS environment variable to contain \" \" two consecutive spaces. These are converted by service.bat into \";;\" two consecutive semicolons. This causes procrun to stop adding Java options to service and service start fails with:\n\n```\nJava HotSpot(TM) 64-Bit Server VM warning: Using the ParNew young collector with the Serial old \ncollector is deprecated and will likely be removed in a future release\nException in thread \"main\" areSettings(InternalSettingsPreparer.java:97)\nat org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:107)\nat org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:100)\nat org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)\nat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:221)\nat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)\n```\n\nHowever even after manipulating ES_GC_OPTS / JAVA_OPTS to not contain two consecutive spaces (and procrun taking all Java options), service start still fails with\n\n```\nERROR: command not specified\n```\n\nI found that this can be fixed by adding \"--StartParams start\" to service.bat line 162 (starts with \"%EXECUTABLE%\" //IS//%SERVICE_ID%)\n\nNow stopping fails with\n\n```\nMethod 'static void close(String[])' not found in Class org/elasticsearch/bootstrap/Elasticsearch\n```\n", "created_at": "2015-09-08T12:51:42Z" }, { "body": "@jochen-st yea, that's one of the issues with the script. 
The other issue, which you're running into now, has to do with the fact that the new elasticsearch CLI requires a [start](https://github.com/elastic/elasticsearch/blob/master/distribution/src/main/resources/bin/elasticsearch.bat#L46) argument, which the service doesn't pass.\n\nI'm working on a fix and will open a PR shortly.\n", "created_at": "2015-09-08T13:05:56Z" }, { "body": "@jochen-st sorry, just realized I misread your comment and that you already discovered that the script doesn't pass the `start` param. I just opened the above PR to address the startup issues.\n\nThe stopping issue is a bit more involved. Apache Commons Daemon requires a static stop method (--StopMethod) in order to shut down gracefully. This used to exist (`org.elasticsearch.bootstrap.Elasticsearch.close()`) but was removed in ES 2.x. The script was never updated, hence the error message.\n\nNot sure of the reasoning for removing it or how it should be added back. I'm going to open a new issue for this.\n", "created_at": "2015-09-08T15:01:15Z" } ], "number": 13247, "title": "windows as service does not work" }
{ "body": "This PR addresses a few issues with the service.bat script:\n\n1) Fixes a bug in the concatenation of java options in elasticsearch.in.bat which tripped up Apache Commons Daemon, and caused ES to startup without any params, eventually leading to the \"path.home is not configured\" exception.\n\n2) service.bat was not passing the `start` argument to ES\n\n3) The service could not be stopped gracefully via the `stop` command because there wasn't a method for procrun to call (`--StopMethod`): http://commons.apache.org/proper/commons-daemon/procrun.html\n\nCloses #13247\nCloses #13401\n", "number": 13398, "review_comments": [ { "body": "Must this really be public? According to the documentation it need only be package private. That would be greatly preferable, because it makes it easier to protect down the road.\n", "created_at": "2015-09-08T20:08:16Z" }, { "body": "Yea, good call, it doesn't need to be. I just added a new commit.\n", "created_at": "2015-09-08T20:43:59Z" } ], "title": "Fix service.bat start/stop issues" }
{ "commits": [ { "message": "Packaging: Fix Windows service start/stop issues\n\nThis commit addresses several bugs that prevented the Windows\nservice from being started or stopped:\n\n- Extra white space in the concatenation of java options in\n elasticsearch.in.bat which tripped up Apache Commons Daemon\n and caused ES to startup without any params, eventually leading\n to the \"path.home is not configured\" exception.\n\n- service.bat was not passing the start argument to ES\n\n- The service could not be stopped gracefully via the stop command\n because there wasn't a method for procrun to call.\n\nCloses #13247\nCloses #13401" } ], "files": [ { "diff": "@@ -107,7 +107,7 @@ public static void initializeNatives(boolean mlockAll, boolean ctrlHandler) {\n public boolean handle(int code) {\n if (CTRL_CLOSE_EVENT == code) {\n logger.info(\"running graceful exit on windows\");\n- Bootstrap.INSTANCE.stop();\n+ Bootstrap.stop();\n return true;\n }\n return false;\n@@ -205,11 +205,11 @@ private void start() {\n keepAliveThread.start();\n }\n \n- private void stop() {\n+ static void stop() {\n try {\n- Releasables.close(node);\n+ Releasables.close(INSTANCE.node);\n } finally {\n- keepAliveLatch.countDown();\n+ INSTANCE.keepAliveLatch.countDown();\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" }, { "diff": "@@ -39,4 +39,16 @@ public static void main(String[] args) throws StartupError {\n throw new StartupError(t);\n }\n }\n+\n+ /**\n+ * Required method that's called by Apache Commons procrun when\n+ * running as a service on Windows, when the service is stopped.\n+ *\n+ * http://commons.apache.org/proper/commons-daemon/procrun.html\n+ *\n+ * NOTE: If this method is renamed and/or moved, make sure to update service.bat!\n+ */\n+ static void close(String[] args) {\n+ Bootstrap.stop();\n+ }\n }\n\\ No newline at end of file", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Elasticsearch.java", "status": "modified" }, { "diff": "@@ -59,7 +59,7 @@ set ES_GC_OPTS=%ES_GC_OPTS% -XX:+UseCMSInitiatingOccupancyOnly\n REM When running under Java 7\n REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark\n )\n-set JAVA_OPTS=%JAVA_OPTS% %ES_GC_OPTS%\n+set JAVA_OPTS=%JAVA_OPTS%%ES_GC_OPTS%\n \n if \"%ES_GC_LOG_FILE%\" == \"\" goto nogclog\n ", "filename": "distribution/src/main/resources/bin/elasticsearch.in.bat", "status": "modified" }, { "diff": "@@ -159,7 +159,7 @@ if not \"%ES_JAVA_OPTS%\" == \"\" set JVM_OPTS=%JVM_OPTS%;%JVM_ES_JAVA_OPTS%\n if \"%ES_START_TYPE%\" == \"\" set ES_START_TYPE=manual\n if \"%ES_STOP_TIMEOUT%\" == \"\" set ES_STOP_TIMEOUT=0\n \n-\"%EXECUTABLE%\" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StopClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main --StopMethod close --Classpath \"%ES_CLASSPATH%\" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile \"%SERVICE_ID%.pid\" --DisplayName \"Elasticsearch %ES_VERSION% (%SERVICE_ID%)\" --Description \"Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org\" --Jvm \"%JVM_DLL%\" --StartMode jvm --StopMode jvm --StartPath \"%ES_HOME%\"\n+\"%EXECUTABLE%\" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StopClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main --StopMethod close --Classpath 
\"%ES_CLASSPATH%\" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile \"%SERVICE_ID%.pid\" --DisplayName \"Elasticsearch %ES_VERSION% (%SERVICE_ID%)\" --Description \"Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org\" --Jvm \"%JVM_DLL%\" --StartMode jvm --StopMode jvm --StartPath \"%ES_HOME%\" ++StartParams start\n \n \n if not errorlevel 1 goto installed", "filename": "distribution/src/main/resources/bin/service.bat", "status": "modified" } ] }
{ "body": "For example, this is a simple unit test that fails:\n\n``` java\npublic void testCrazyURL() {\n Map<String, String> params = newHashMap();\n\n // This is a valid URL\n String uri = \"example.com/:@-._~!$&'()*+,=;:@-._~!$&'()*+,=:@-._~!$&'()*+,==?/?:@-._~!$'()*+,;=/?:@-._~!$'()*+,;==#/?:@-._~!$&'()*+,;=\";\n RestUtils.decodeQueryString(uri, uri.indexOf('?') + 1, params);\n assertThat(params.get(\"/?:@-._~!$'()* ,;\"), equalTo(\"/?:@-._~!$'()* ,;==\"));\n assertThat(params.size(), equalTo(1));\n}\n```\n", "comments": [], "number": 13320, "title": "RestUtils.decodeQueryString does not correctly handle fragments" }
{ "body": "Fixes #13320 \n", "number": 13365, "review_comments": [], "title": "RestUtils.decodeQueryString ignores the URI fragment when parsing a query string" }
{ "commits": [ { "message": "Fixes #13320" } ], "files": [ { "diff": "@@ -59,12 +59,14 @@ public static void decodeQueryString(String s, int fromIndex, Map<String, String\n if (fromIndex >= s.length()) {\n return;\n }\n+ \n+ int queryStringLength = s.contains(\"#\") ? s.indexOf(\"#\") : s.length();\n \n String name = null;\n int pos = fromIndex; // Beginning of the unprocessed region\n int i; // End of the unprocessed region\n char c = 0; // Current character\n- for (i = fromIndex; i < s.length(); i++) {\n+ for (i = fromIndex; i < queryStringLength; i++) {\n c = s.charAt(i);\n if (c == '=' && name == null) {\n if (pos != i) {", "filename": "core/src/main/java/org/elasticsearch/rest/support/RestUtils.java", "status": "modified" }, { "diff": "@@ -139,6 +139,16 @@ public void testCorsSettingIsARegex() {\n assertCorsSettingRegexIsNull(\"\");\n assertThat(RestUtils.getCorsSettingRegex(Settings.EMPTY), is(nullValue()));\n }\n+ \n+ public void testCrazyURL() {\n+ Map<String, String> params = newHashMap();\n+\n+ // This is a valid URL\n+ String uri = \"example.com/:@-._~!$&'()*+,=;:@-._~!$&'()*+,=:@-._~!$&'()*+,==?/?:@-._~!$'()*+,;=/?:@-._~!$'()*+,;==#/?:@-._~!$&'()*+,;=\";\n+ RestUtils.decodeQueryString(uri, uri.indexOf('?') + 1, params);\n+ assertThat(params.get(\"/?:@-._~!$'()* ,;\"), equalTo(\"/?:@-._~!$'()* ,;==\"));\n+ assertThat(params.size(), equalTo(1));\n+ }\n \n private void assertCorsSettingRegexIsNull(String settingsValue) {\n assertThat(RestUtils.getCorsSettingRegex(settingsBuilder().put(\"http.cors.allow-origin\", settingsValue).build()), is(nullValue()));", "filename": "core/src/test/java/org/elasticsearch/rest/util/RestUtilsTests.java", "status": "modified" } ] }
{ "body": "The `ignore_unavailable` option for snapshot restore documented [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html#_restore) does not appear to be working:\n\n```\nPOST _snapshot/my_repo/curator-20150903160109/_restore\n{\n \"indices\": \"data-index-1\",\n \"ignore_unavailable\": true,\n \"include_global_state\": false\n}\n```\n\n```\n{\n \"error\": \"ElasticsearchIllegalArgumentException[failed to parse repository source [{\\n \\\"indices\\\": \\\"data-index-1\\\",\\n \\\"ignore_unavailable\\\": true,\\n \\\"include_global_state\\\": false\\n}\\n]]; nested: ElasticsearchIllegalArgumentException[Unknown parameter ignore_unavailable]; \",\n \"status\": 400\n}\n```\n\nGetting this error on 1.7.1, but it works on 1.4.5.\n", "comments": [], "number": 13335, "title": "snapshot restore ignore_unavailable not working" }
{ "body": "Fixes an issue introduced in #10744 as a result of which the restore request stopped accepting indices options such as ignore_unavailable.\n\nFixes #13335\n", "number": 13357, "review_comments": [], "title": "Snapshot restore request should accept indices options" }
{ "commits": [ { "message": "Snapshot restore request should accept indices options\n\nFixes an issue introduced in #10744 as a result of which the restore request stopped accepting indices options such as ignore_unavailable.\n\nFixes #13335" } ], "files": [ { "diff": "@@ -537,7 +537,9 @@ public RestoreSnapshotRequest source(Map source) {\n throw new IllegalArgumentException(\"malformed ignore_index_settings section, should be an array of strings\");\n }\n } else {\n- throw new IllegalArgumentException(\"Unknown parameter \" + name);\n+ if (IndicesOptions.isIndicesOptions(name) == false) {\n+ throw new IllegalArgumentException(\"Unknown parameter \" + name);\n+ }\n }\n }\n indicesOptions(IndicesOptions.fromMap((Map<String, Object>) source, IndicesOptions.lenientExpandOpen()));", "filename": "core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java", "status": "modified" }, { "diff": "@@ -154,6 +154,16 @@ public static IndicesOptions fromMap(Map<String, Object> map, IndicesOptions def\n defaultSettings);\n }\n \n+ /**\n+ * Returns true if the name represents a valid name for one of the indices option\n+ * false otherwise\n+ */\n+ public static boolean isIndicesOptions(String name) {\n+ return \"expand_wildcards\".equals(name) || \"expandWildcards\".equals(name) ||\n+ \"ignore_unavailable\".equals(name) || \"ignoreUnavailable\".equals(name) ||\n+ \"allow_no_indices\".equals(name) || \"allowNoIndices\".equals(name);\n+ }\n+\n public static IndicesOptions fromParameters(Object wildcardsString, Object ignoreUnavailableString, Object allowNoIndicesString, IndicesOptions defaultSettings) {\n if (wildcardsString == null && ignoreUnavailableString == null && allowNoIndicesString == null) {\n return defaultSettings;", "filename": "core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java", "status": "modified" }, { "diff": "@@ -0,0 +1,154 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.snapshots;\n+\n+import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;\n+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;\n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.test.ESTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+\n+public class SnapshotRequestsTests extends ESTestCase {\n+ @Test\n+ public void testRestoreSnapshotRequestParsing() throws IOException {\n+\n+ RestoreSnapshotRequest request = new RestoreSnapshotRequest(\"test-repo\", \"test-snap\");\n+\n+ XContentBuilder builder = jsonBuilder().startObject();\n+\n+ if(randomBoolean()) {\n+ builder.field(\"indices\", \"foo,bar,baz\");\n+ } else {\n+ builder.startArray(\"indices\");\n+ builder.value(\"foo\");\n+ builder.value(\"bar\");\n+ builder.value(\"baz\");\n+ builder.endArray();\n+ }\n+\n+ IndicesOptions indicesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean());\n+ if (indicesOptions.expandWildcardsClosed()) {\n+ if (indicesOptions.expandWildcardsOpen()) {\n+ builder.field(\"expand_wildcards\", \"all\");\n+ } else {\n+ builder.field(\"expand_wildcards\", \"closed\");\n+ }\n+ } else {\n+ if (indicesOptions.expandWildcardsOpen()) {\n+ builder.field(\"expand_wildcards\", \"open\");\n+ } else {\n+ builder.field(\"expand_wildcards\", \"none\");\n+ }\n+ }\n+ builder.field(\"allow_no_indices\", indicesOptions.allowNoIndices());\n+ builder.field(\"rename_pattern\", \"rename-from\");\n+ builder.field(\"rename_replacement\", \"rename-to\");\n+ boolean partial = randomBoolean();\n+ builder.field(\"partial\", partial);\n+ builder.startObject(\"settings\").field(\"set1\", \"val1\").endObject();\n+ builder.startObject(\"index_settings\").field(\"set1\", \"val2\").endObject();\n+ if (randomBoolean()) {\n+ builder.field(\"ignore_index_settings\", \"set2,set3\");\n+ } else {\n+ builder.startArray(\"ignore_index_settings\");\n+ builder.value(\"set2\");\n+ builder.value(\"set3\");\n+ builder.endArray();\n+ }\n+\n+ byte[] bytes = builder.endObject().bytes().toBytes();\n+\n+\n+ request.source(bytes);\n+\n+ assertEquals(\"test-repo\", request.repository());\n+ assertEquals(\"test-snap\", request.snapshot());\n+ assertArrayEquals(request.indices(), new String[]{\"foo\", \"bar\", \"baz\"});\n+ assertEquals(\"rename-from\", request.renamePattern());\n+ assertEquals(\"rename-to\", request.renameReplacement());\n+ assertEquals(partial, request.partial());\n+ assertEquals(\"val1\", request.settings().get(\"set1\"));\n+ assertArrayEquals(request.ignoreIndexSettings(), new String[]{\"set2\", \"set3\"});\n+\n+ }\n+\n+ @Test\n+ public void testCreateSnapshotRequestParsing() throws IOException {\n+\n+ CreateSnapshotRequest request = new CreateSnapshotRequest(\"test-repo\", \"test-snap\");\n+\n+ XContentBuilder builder = jsonBuilder().startObject();\n+\n+ if(randomBoolean()) {\n+ builder.field(\"indices\", \"foo,bar,baz\");\n+ } else {\n+ builder.startArray(\"indices\");\n+ builder.value(\"foo\");\n+ builder.value(\"bar\");\n+ builder.value(\"baz\");\n+ builder.endArray();\n+ }\n+\n+ IndicesOptions indicesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), 
randomBoolean());\n+ if (indicesOptions.expandWildcardsClosed()) {\n+ if (indicesOptions.expandWildcardsOpen()) {\n+ builder.field(\"expand_wildcards\", \"all\");\n+ } else {\n+ builder.field(\"expand_wildcards\", \"closed\");\n+ }\n+ } else {\n+ if (indicesOptions.expandWildcardsOpen()) {\n+ builder.field(\"expand_wildcards\", \"open\");\n+ } else {\n+ builder.field(\"expand_wildcards\", \"none\");\n+ }\n+ }\n+ builder.field(\"allow_no_indices\", indicesOptions.allowNoIndices());\n+ boolean partial = randomBoolean();\n+ builder.field(\"partial\", partial);\n+ builder.startObject(\"settings\").field(\"set1\", \"val1\").endObject();\n+ builder.startObject(\"index_settings\").field(\"set1\", \"val2\").endObject();\n+ if (randomBoolean()) {\n+ builder.field(\"ignore_index_settings\", \"set2,set3\");\n+ } else {\n+ builder.startArray(\"ignore_index_settings\");\n+ builder.value(\"set2\");\n+ builder.value(\"set3\");\n+ builder.endArray();\n+ }\n+\n+ byte[] bytes = builder.endObject().bytes().toBytes();\n+\n+\n+ request.source(bytes);\n+\n+ assertEquals(\"test-repo\", request.repository());\n+ assertEquals(\"test-snap\", request.snapshot());\n+ assertArrayEquals(request.indices(), new String[]{\"foo\", \"bar\", \"baz\"});\n+ assertEquals(partial, request.partial());\n+ assertEquals(\"val1\", request.settings().get(\"set1\"));\n+ }\n+\n+}", "filename": "core/src/test/java/org/elasticsearch/snapshots/SnapshotRequestsTests.java", "status": "added" } ] }
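The fix above boils down to a whitelist check while iterating over the restore request body: keys that name indices options, in either snake_case or camelCase form, are no longer rejected as unknown parameters but are left for IndicesOptions.fromMap(...) to consume afterwards. The sketch below shows that parsing pattern in isolation; the class name and the shortened field lists are invented for illustration and this is not the real request parser.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/** Toy illustration of the "known option vs. unknown parameter" check added to the restore request parser. */
public class RestoreSourceParserDemo {

    // The same names (snake_case and camelCase) that IndicesOptions.isIndicesOptions() accepts in the fix above.
    private static final Set<String> INDICES_OPTIONS = new HashSet<>(Arrays.asList(
            "expand_wildcards", "expandWildcards",
            "ignore_unavailable", "ignoreUnavailable",
            "allow_no_indices", "allowNoIndices"));

    // Restore-specific fields the parser handles itself (shortened list, for illustration only).
    private static final Set<String> RESTORE_FIELDS = new HashSet<>(Arrays.asList(
            "indices", "partial", "settings", "include_global_state",
            "rename_pattern", "rename_replacement", "index_settings", "ignore_index_settings"));

    static void parse(Map<String, Object> source) {
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            String name = entry.getKey();
            if (RESTORE_FIELDS.contains(name)) {
                System.out.println("handled restore field: " + name);
            } else if (INDICES_OPTIONS.contains(name)) {
                // Before the fix, this case fell through to the "Unknown parameter" error below.
                System.out.println("deferred to IndicesOptions.fromMap(): " + name);
            } else {
                throw new IllegalArgumentException("Unknown parameter " + name);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> body = new LinkedHashMap<>();
        body.put("indices", "data-index-1");
        body.put("ignore_unavailable", true);
        body.put("include_global_state", false);
        parse(body); // no longer rejects ignore_unavailable
    }
}
```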
{ "body": "Looks like this:\n\nHEARTBEAT J3 PID(22805@beast): 2015-09-03T12:01:58, stalled for 891s at: DiscoveryWithServiceDisruptionsIT.testReadOnPostRecoveryShards\n\nIts a big time-waster when trying to get changes in, because it forces me to totally restart mvn verify.\n", "comments": [ { "body": "I muted the test for now. sorry, I did not get around to it today.\n", "created_at": "2015-09-03T16:30:47Z" } ], "number": 13316, "title": "DiscoveryWithServiceDisruptionsIT frequently hangs completely" }
{ "body": "We waited for relocation to start on node_2 but never made sure node_1 knows about\nthis too. If node_1 does not yet have the cluster state that should trigger relocation\nthen relocation will never start because we block cluster state processing on node_1.\nThe log then fills with these messages:\n\n1> [2015-09-02 20:34:59,153][INFO ][discovery ] sent: internal:index/shard/recovery/start_recovery, relocation starts\n1> [2015-09-02 20:34:59,153][DEBUG][indices.recovery ] [node_t1] delaying recovery of [test][0] as it is not listed as assigned to target node\n\nWe have to wait until node_1 sends an acknowledgement that relocation starts\n(response to internal:index/shard/recovery/start_recovery).\n\ncloses #13316\n", "number": 13343, "review_comments": [ { "body": "can we order the clients? :) I'm OCDing ..\n", "created_at": "2015-09-08T14:44:26Z" }, { "body": "this actually waits until the recovery is completed and the shard start message was already sent to the master. Not sure if this is intended. \n", "created_at": "2015-09-08T14:49:22Z" }, { "body": "I don't understand. endRelocationLatchNode2 signals the shard started. this one just signals that recovery starts?\n", "created_at": "2015-09-14T15:41:54Z" }, { "body": "I'm sorry, I left the comment on the wrong line. I meant line 1067 and beginRelocationLatchNode1 - it waits on on a response to the shard_start action, which is only done at the end of the recovery , after the target node has moved the shard to POST_RECOVERY and sent shard started message to the master.\n", "created_at": "2015-09-15T09:21:10Z" }, { "body": "That was indeed unintentional. I changed now to block once the files info request is received.\n", "created_at": "2015-09-15T11:47:39Z" } ], "title": "[test] fix hanging testRefreshDoesNotMissShards" }
{ "commits": [ { "message": "[test] fix hanging testRefreshDoesNotMissShards\n\nWe waited for relocation to start on node_2 but never made sure node_1 knows about\nthis too. If node_1 does not yet have the cluster state that should trigger relocation\nthen relocation will never start because we block cluster state processing on node_1.\nThe log then fills with these messages:\n\n1> [2015-09-02 20:34:59,153][INFO ][discovery ] sent: internal:index/shard/recovery/start_recovery, relocation starts\n1> [2015-09-02 20:34:59,153][DEBUG][indices.recovery ] [node_t1] delaying recovery of [test][0] as it is not listed as assigned to target node\n\nWe have to wait until node_1 sends an acknowledgement that relocation starts\n(response to internal:index/shard/recovery/start_recovery).\n\ncloses #13316" }, { "message": "reorder to not trigger OCD in reviewer" }, { "message": "ise files info request to signal start of recovery on target" } ], "files": [ { "diff": "@@ -58,6 +58,7 @@\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.recovery.RecoverySource;\n+import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.indices.store.IndicesStoreIntegrationIT;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESIntegTestCase;\n@@ -944,18 +945,21 @@ public void testNodeNotReachableFromMaster() throws Exception {\n }\n \n /*\n- * Tests a visibility issue if a shard is in POST_RECOVERY\n+ * Tests that shards do not miss refresh requests when relocating\n *\n- * When a user indexes a document, then refreshes and then a executes a search and all are successful and no timeouts etc then\n- * the document must be visible for the search.\n+ * After a shard relocated there is no guarantee that a node that sends a refresh actually knows that the shard has\n+ * relocated. This test tests that shards don't miss refreshes in case the coordinating node for the refresh lags\n+ * behind with cluster state processing.\n+ *\n+ * Detailed description:\n *\n * When a primary is relocating from node_1 to node_2, there can be a short time where both old and new primary\n- * are started and accept indexing and read requests. However, the new primary might not be visible to nodes\n- * that lag behind one cluster state. If such a node then sends a refresh to the index, this refresh request\n- * must reach the new primary on node_2 too. Otherwise a different node that searches on the new primary might not\n- * find the indexed document although a refresh was executed before.\n+ * are started and accept indexing and read requests. While the shard is in POST_RECOVERY on node_2 it must not\n+ * miss any refresh, otherwise documents that were indexed in this time will not be visible if a search is executed\n+ * on this shard.\n+ *\n+ * The test replays this course of events:\n *\n- * In detail:\n * Cluster state 0:\n * node_1: [index][0] STARTED (ShardRoutingState)\n * node_2: no shard\n@@ -973,7 +977,7 @@ public void testNodeNotReachableFromMaster() throws Exception {\n * 2. any node receives an index request which is then executed on node_1 and node_2\n *\n * 3. node_3 sends a refresh but it is a little behind with cluster state processing and still on cluster state 0.\n- * If refresh was a broadcast operation it send it to node_1 only because it does not know node_2 has a shard too\n+ * So it sends it to node_1 instead\n *\n * 4. 
node_3 catches up with the cluster state and acks it to master which now can process the shard started message\n * from node_2 before and updates cluster state to:\n@@ -990,8 +994,7 @@ public void testNodeNotReachableFromMaster() throws Exception {\n * successful.\n */\n @Test\n- @AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/13316\")\n- public void testReadOnPostRecoveryShards() throws Exception {\n+ public void testRefreshDoesNotMissShards() throws Exception {\n List<BlockClusterStateProcessing> clusterStateBlocks = new ArrayList<>();\n try {\n configureUnicastCluster(5, null, 1);\n@@ -1035,7 +1038,11 @@ public void testReadOnPostRecoveryShards() throws Exception {\n MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n CountDownLatch beginRelocationLatchNode2 = new CountDownLatch(1);\n CountDownLatch endRelocationLatchNode2 = new CountDownLatch(1);\n- transportServiceNode2.addTracer(new StartRecoveryToShardStaredTracer(logger, beginRelocationLatchNode2, endRelocationLatchNode2));\n+ transportServiceNode2.addTracer(new StartRecoveryOnTargetToShardStartedTracer(logger, beginRelocationLatchNode2, endRelocationLatchNode2));\n+\n+ MockTransportService transportServiceNode1 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_1);\n+ CountDownLatch beginRelocationLatchNode1 = new CountDownLatch(1);\n+ transportServiceNode1.addTracer(new StartRecoveryOnSourceTracer(logger, beginRelocationLatchNode1));\n \n // block cluster state updates on node_1 and node_2 so that we end up with two primaries\n BlockClusterStateProcessing disruptionNode2 = new BlockClusterStateProcessing(node_2, getRandom());\n@@ -1050,16 +1057,21 @@ public void testReadOnPostRecoveryShards() throws Exception {\n Future<ClusterRerouteResponse> rerouteFuture = internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).setTimeout(new TimeValue(1000, TimeUnit.MILLISECONDS)).execute();\n \n logger.info(\"--> wait for relocation to start\");\n- // wait for relocation to start\n- beginRelocationLatchNode2.await();\n+\n // start to block cluster state updates on node_1 and node_2 so that we end up with two primaries\n // one STARTED on node_1 and one in POST_RECOVERY on node_2\n- disruptionNode1.startDisrupting();\n+ // wait for relocation to start on node_2\n+ beginRelocationLatchNode2.await();\n disruptionNode2.startDisrupting();\n+\n+ // wait until node_1 actually acknowledges that relocation starts\n+ beginRelocationLatchNode1.await();\n+ disruptionNode1.startDisrupting();\n+\n endRelocationLatchNode2.await();\n- final Client node3Client = internalCluster().client(node_3);\n- final Client node2Client = internalCluster().client(node_2);\n final Client node1Client = internalCluster().client(node_1);\n+ final Client node2Client = internalCluster().client(node_2);\n+ final Client node3Client = internalCluster().client(node_3);\n final Client node4Client = internalCluster().client(node_4);\n logger.info(\"--> index doc\");\n logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n@@ -1138,12 +1150,12 @@ public void run() {\n /**\n * This Tracer can be used to signal start of a recovery and shard started event after translog was copied\n */\n- public static class StartRecoveryToShardStaredTracer extends MockTransportService.Tracer {\n+ public static class StartRecoveryOnTargetToShardStartedTracer extends 
MockTransportService.Tracer {\n private final ESLogger logger;\n private final CountDownLatch beginRelocationLatch;\n private final CountDownLatch sentShardStartedLatch;\n \n- public StartRecoveryToShardStaredTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch sentShardStartedLatch) {\n+ public StartRecoveryOnTargetToShardStartedTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch sentShardStartedLatch) {\n this.logger = logger;\n this.beginRelocationLatch = beginRelocationLatch;\n this.sentShardStartedLatch = sentShardStartedLatch;\n@@ -1152,16 +1164,37 @@ public StartRecoveryToShardStaredTracer(ESLogger logger, CountDownLatch beginRel\n @Override\n public void requestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) {\n if (action.equals(RecoverySource.Actions.START_RECOVERY)) {\n- logger.info(\"sent: {}, relocation starts\", action);\n+ logger.info(\"sent request: {}, relocation starts on target\", action);\n beginRelocationLatch.countDown();\n }\n if (action.equals(ShardStateAction.SHARD_STARTED_ACTION_NAME)) {\n- logger.info(\"sent: {}, shard started\", action);\n+ logger.info(\"sent request: {}, shard started on target\", action);\n sentShardStartedLatch.countDown();\n }\n }\n }\n \n+ /**\n+ * This Tracer can be used to signal that a recovery source has acknowledged the start of recovery\n+ */\n+ public static class StartRecoveryOnSourceTracer extends MockTransportService.Tracer {\n+ private final ESLogger logger;\n+ private final CountDownLatch beginRelocationLatch;\n+\n+ public StartRecoveryOnSourceTracer(ESLogger logger, CountDownLatch beginRelocationLatch) {\n+ this.logger = logger;\n+ this.beginRelocationLatch = beginRelocationLatch;\n+ }\n+\n+ public void requestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) {\n+ logger.info(\"sent response: {}\", action);\n+ if (action.equals(RecoveryTarget.Actions.FILES_INFO)) {\n+ beginRelocationLatch.countDown();\n+ logger.info(\"request sent: {}, relocation starts on source\", action);\n+ }\n+ }\n+ }\n+\n private void logLocalClusterStates(Client... clients) {\n int counter = 1;\n for (Client client : clients) {", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
{ "body": "prerequisite to #9421\nsee also #12600\n", "comments": [ { "body": "I love this change looks very good. I left some comments around unittesting \n", "created_at": "2015-08-25T09:19:32Z" }, { "body": "I agree with @s1monw that this looks great. I left minor comments here and there.\n", "created_at": "2015-08-28T14:40:36Z" }, { "body": "LGTM \n", "created_at": "2015-08-31T12:30:47Z" }, { "body": "@s1monw thanks for the review! \n\naddressed all comments. @bleskes want to have one more look?\n", "created_at": "2015-08-31T13:34:20Z" }, { "body": "Looks awesome. I only miss a test for the aggregation of results from multiple shard level responses.\n", "created_at": "2015-08-31T14:15:22Z" }, { "body": "@bleskes addressed all comments and added test here: https://github.com/elastic/elasticsearch/pull/13068/files#diff-6030559b5ed4d55d9a754523f5c6ce6dR137 \n", "created_at": "2015-08-31T15:37:06Z" }, { "body": "LGTM! (minor comments, no need for another review)\n", "created_at": "2015-08-31T15:58:39Z" } ], "number": 13068, "title": "Make refresh a replicated action" }
{ "body": "Before #13068 refresh and flush ignored all exceptions that matched\nTransportActions.isShardNotAvailableException(e) and this should not change.\nIn addition, refresh and flush which are based on broadcast replication\nmight now get UnavailableShardsException from TransportReplicationAction if a shard\nis unavailable and this is not caught by TransportActions.isShardNotAvailableException(e).\nThis must be ignored as well.\n", "number": 13341, "review_comments": [ { "body": "This feels weird to me - what would be the implication of adding UnavailableShardsException to the exceptions TransportActions.isShardNotAvailableException returns true on?\n", "created_at": "2015-09-04T08:49:06Z" }, { "body": "None that I can see and the tests pass if I add it too. I changed that now\n", "created_at": "2015-09-04T09:26:05Z" } ], "title": "Fix exception handling for unavailable shards in broadcast replication action" }
{ "commits": [ { "message": "fix exception handling for unavailable shards in broadcast replication action\n\nBefore #13068 refresh and flush ignored all exceptions that matched\nTransportActions.isShardNotAvailableException(e) and this should not change.\nIn addition, refresh and flush which are based on broadcast replication\nmight now get UnavailableShardsException from TransportReplicationAction if a shard\nis unavailable and this is not caught by TransportActions.isShardNotAvailableException(e).\nThis must be ignored as well." } ], "files": [ { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.NoShardAvailableActionException;\n+import org.elasticsearch.action.UnavailableShardsException;\n import org.elasticsearch.index.IndexNotFoundException;\n import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n import org.elasticsearch.index.shard.ShardNotFoundException;\n@@ -34,7 +35,8 @@ public static boolean isShardNotAvailableException(Throwable t) {\n if (actual instanceof ShardNotFoundException ||\n actual instanceof IndexNotFoundException ||\n actual instanceof IllegalIndexShardStateException ||\n- actual instanceof NoShardAvailableActionException) {\n+ actual instanceof NoShardAvailableActionException ||\n+ actual instanceof UnavailableShardsException) {\n return true;\n }\n return false;", "filename": "core/src/main/java/org/elasticsearch/action/support/TransportActions.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.action.support.TransportActions;\n import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n@@ -90,7 +91,7 @@ public void onFailure(Throwable e) {\n int totalNumCopies = clusterState.getMetaData().index(shardId.index().getName()).getNumberOfReplicas() + 1;\n ShardResponse shardResponse = newShardResponse();\n ActionWriteResponse.ShardInfo.Failure[] failures;\n- if (ExceptionsHelper.unwrap(e, UnavailableShardsException.class) != null) {\n+ if (TransportActions.isShardNotAvailableException(e)) {\n failures = new ActionWriteResponse.ShardInfo.Failure[0];\n } else {\n ActionWriteResponse.ShardInfo.Failure failure = new ActionWriteResponse.ShardInfo.Failure(shardId.index().name(), shardId.id(), null, e, ExceptionsHelper.status(e), true);", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.NoShardAvailableActionException;\n import org.elasticsearch.action.UnavailableShardsException;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n@@ -103,13 +104,16 @@ public static void afterClass() {\n @Test\n public void testNotStartedPrimary() throws InterruptedException, ExecutionException, IOException {\n final String index = \"test\";\n- final ShardId shardId = new ShardId(index, 0);\n clusterService.setState(state(index, 
randomBoolean(),\n randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.UNASSIGNED, ShardRoutingState.UNASSIGNED));\n logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n Future<BroadcastResponse> response = (broadcastReplicationAction.execute(new BroadcastRequest().indices(index)));\n for (Tuple<ShardId, ActionListener<ActionWriteResponse>> shardRequests : broadcastReplicationAction.capturedShardRequests) {\n- shardRequests.v2().onFailure(new UnavailableShardsException(shardId, \"test exception expected\"));\n+ if (randomBoolean()) {\n+ shardRequests.v2().onFailure(new NoShardAvailableActionException(shardRequests.v1()));\n+ } else {\n+ shardRequests.v2().onFailure(new UnavailableShardsException(shardRequests.v1(), \"test exception\"));\n+ }\n }\n response.get();\n logger.info(\"total shards: {}, \", response.get().getTotalShards());", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java", "status": "modified" } ] }
{ "body": "At least in one of our jenkins servers we have:\n\n```\ndocker0\n inet 172.17.42.1 netmask:255.255.0.0 broadcast:0.0.0.0 scope:site\n hardware 56:84:7A:FE:97:99\n MULTICAST mtu:1500 index:3\n```\n\nIs this normal? The fact it has a broadcast address like that (which looks totally bogus), means that if the user configures 0.0.0.0, we will fail. I can fix the logic to deal with it in several ways (the interface is not marked up, we can not do the check explicitly for a wildcard address) but if anyone understands this I'd love to know why this docker interface looks like that.\n", "comments": [ { "body": "Relevant jenkins fail: http://build-us-00.elastic.co/job/es_g1gc_master_metal/16773/\n", "created_at": "2015-09-03T19:45:51Z" }, { "body": "It's not only `docker0`, might be an issue with LXC in general. This is on Ubuntu 15.04 Vivid:\n\n```\nlxcbr0 Link encap:Ethernet HWaddr 9e:ee:28:2e:2e:85\n inet addr:10.0.3.1 Bcast:0.0.0.0 Mask:255.255.255.0\n inet6 addr: fe80::9cee:28ff:fe2e:2e85/64 Scope:Link\n UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1\n RX packets:0 errors:0 dropped:0 overruns:0 frame:0\n TX packets:371713 errors:0 dropped:0 overruns:0 carrier:0\n collisions:0 txqueuelen:0\n RX bytes:0 (0.0 B) TX bytes:29988240 (29.9 MB)\n```\n", "created_at": "2015-09-03T19:52:42Z" }, { "body": "ok, its just intended as a safety check to prevent the user from making a mistake. I'll remove the broadcast check completely since apparently these containers don't understand how networking works.\n", "created_at": "2015-09-03T19:54:29Z" } ], "number": 13327, "title": "can't bind to all interfaces with some docker configurations?" }
{ "body": "This was supposed to just help the user, in case they misconfigured something.\nBroadcast is an ipv4 only thing, the only way you can really detect its a broadcast\naddress, is to look and see if an interface has that address as its broadcast address.\n\nBut we cannot trust that container interfaces won't have a crazy setup...\n\nCloses #13327\n", "number": 13328, "review_comments": [], "title": "Remove broadcast address check." }
{ "commits": [ { "message": "Remove broadcast address check.\n\nThis was supposed to just help the user, in case they misconfigured something.\nBroadcast is an ipv4 only thing, the only way you can really detect its a broadcast\naddress, is to look and see if an interface has that address as its broadcast address.\n\nBut we cannot trust that container interfaces won't have a crazy setup...\n\nCloses #13327" } ], "files": [ { "diff": "@@ -27,8 +27,6 @@\n \n import java.io.IOException;\n import java.net.InetAddress;\n-import java.net.InterfaceAddress;\n-import java.net.NetworkInterface;\n import java.net.UnknownHostException;\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n@@ -120,14 +118,6 @@ public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException\n if (address.isMulticastAddress()) {\n throw new IllegalArgumentException(\"bind address: {\" + NetworkAddress.format(address) + \"} is invalid: multicast address\");\n }\n- // check if its broadcast: flat out mistake\n- for (NetworkInterface nic : NetworkUtils.getInterfaces()) {\n- for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n- if (address.equals(intf.getBroadcast())) {\n- throw new IllegalArgumentException(\"bind address: {\" + NetworkAddress.format(address) + \"} is invalid: broadcast address\");\n- }\n- }\n- }\n }\n }\n return addresses;\n@@ -161,14 +151,6 @@ public InetAddress resolvePublishHostAddress(String publishHost) throws IOExcept\n if (address.isMulticastAddress()) {\n throw new IllegalArgumentException(\"publish address: {\" + NetworkAddress.format(address) + \"} is invalid: multicast address\");\n }\n- // check if its broadcast: flat out mistake\n- for (NetworkInterface nic : NetworkUtils.getInterfaces()) {\n- for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n- if (address.equals(intf.getBroadcast())) {\n- throw new IllegalArgumentException(\"publish address: {\" + NetworkAddress.format(address) + \"} is invalid: broadcast address\");\n- }\n- }\n- }\n // wildcard address, probably set by network.host\n if (address.isAnyLocalAddress()) {\n InetAddress old = address;", "filename": "core/src/main/java/org/elasticsearch/common/network/NetworkService.java", "status": "modified" }, { "diff": "@@ -23,11 +23,6 @@\n import org.elasticsearch.test.ESTestCase;\n \n import java.net.InetAddress;\n-import java.net.InterfaceAddress;\n-import java.net.NetworkInterface;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.List;\n \n /**\n * Tests for network service... 
try to keep them safe depending upon configuration\n@@ -87,41 +82,6 @@ public void testPublishMulticastV6() throws Exception {\n }\n }\n \n- /** \n- * ensure exception if we bind/publish to broadcast address \n- */\n- public void testBindPublishBroadcast() throws Exception {\n- NetworkService service = new NetworkService(Settings.EMPTY);\n- // collect any broadcast addresses on the system\n- List<InetAddress> addresses = new ArrayList<>();\n- for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {\n- for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n- InetAddress address = intf.getBroadcast();\n- if (address != null) {\n- addresses.add(address);\n- }\n- }\n- }\n- // can easily happen (ipv6-only, localhost-only, ...)\n- assumeTrue(\"test requires broadcast addresses configured\", addresses.size() > 0);\n- // make sure we fail on each one\n- for (InetAddress address : addresses) {\n- try {\n- service.resolveBindHostAddress(NetworkAddress.formatAddress(address));\n- fail(\"should have hit exception for broadcast address: \" + address);\n- } catch (IllegalArgumentException e) {\n- assertTrue(e.getMessage().contains(\"invalid: broadcast\"));\n- }\n- \n- try {\n- service.resolvePublishHostAddress(NetworkAddress.formatAddress(address));\n- fail(\"should have hit exception for broadcast address: \" + address);\n- } catch (IllegalArgumentException e) {\n- assertTrue(e.getMessage().contains(\"invalid: broadcast\"));\n- }\n- }\n- }\n-\n /** \n * ensure specifying wildcard ipv4 address will bind to all interfaces \n */", "filename": "core/src/test/java/org/elasticsearch/common/network/NetworkServiceTests.java", "status": "modified" } ] }
{ "body": "When compiled with source level=8 this test fails.\n", "comments": [ { "body": "This is due to the improvements in [target-type inference](http://openjdk.java.net/jeps/101) in Java 8. I'll open a pull request that addresses.\n\nThere are two tests failing here for the same reason.\n\nPrior to Java 7, the line\n\n```\nString.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue())\n```\n\nwould be interpreted as invoking [`String.valueOf(String)`](http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#valueOf%28java.lang.Object%29). However, due to the improvements in target-type inference this line is now interpreted as invoking [`String.valueOf(char[])`](http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#valueOf%28char[]%29). This causes the test to now throw a `ClassCastException`. Simply casting the parameter to `Object` will force the previously invoked overload to be invoked.\n", "created_at": "2015-09-03T16:52:29Z" }, { "body": "great, thanks for looking into this!\n", "created_at": "2015-09-03T16:54:39Z" } ], "number": 13315, "title": "InnerHitsIT has java 8 compatibility problems" }
{ "body": "Target-type inference has been improved in Java 8. This leads to these\nlines now being interpreted as invoking `String#valueOf(char[])` whereas\nthey previously were interpreted as invoking `String#valueOf(Object)`.\nThis change leads to `ClassCastException`s during test execution. Simply\ncasting the parameter to `Object` restores the old invocation.\n\nCloses #13315\n", "number": 13318, "review_comments": [], "title": "Workaround pitfall in Java 8 target-type inference" }
{ "commits": [ { "message": "Workaround pitfall in Java 8 target-type inference\n\nTarget-type inference has been improved in Java 8. This leads to these\nlines now being interpreted as invoking String#valueOf(char[]) whereas\nthey previously were interpreted as invoking String#valueOf(Object).\nThis change leads to ClassCastExceptions during test execution. Simply\ncasting the parameter to Object restores the old invocation.\n\nCloses #13315" } ], "files": [ { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.innerhits;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n@@ -47,11 +46,8 @@\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.*;\n \n-import org.apache.lucene.util.LuceneTestCase.AwaitsFix;\n-\n /**\n */\n-@AwaitsFix(bugUrl = \"https://github.com/elastic/elasticsearch/issues/13315\")\n public class InnerHitsIT extends ESIntegTestCase {\n \n @Test\n@@ -737,7 +733,7 @@ public void testNestedInnerHitsWithStoredFieldsAndNoSourceBackcompat() throws Ex\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n- assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ assertThat(String.valueOf((Object)response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n }\n \n @Test\n@@ -814,7 +810,7 @@ public void testNestedInnerHitsWithExcludeSourceBackcompat() throws Exception {\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getField().string(), equalTo(\"comments\"));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getOffset(), equalTo(0));\n assertThat(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).getNestedIdentity().getChild(), nullValue());\n- assertThat(String.valueOf(response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n+ assertThat(String.valueOf((Object)response.getHits().getAt(0).getInnerHits().get(\"comments\").getAt(0).fields().get(\"comments.message\").getValue()), equalTo(\"fox eat quick\"));\n }\n \n @Test", "filename": "core/src/test/java/org/elasticsearch/search/innerhits/InnerHitsIT.java", "status": "modified" } ] }
{ "body": "Using 1.7.1 and shadow replicas. When I close an index that was created with these settings `{\"index.shadow_replicas\": \"true\",\"index.data_path\": \"/data/es\"}` the index and its data is not deleted.\nIf the index is open, the data is deleted off disk but the top most directory (eg: `/data/es/0/myindex-2015-09-01`) still exists.\nObviously 'normal' indices are completely deleted whether they've been closed or not.\nNot sure if this is by design - if so, I think it should be documented.\n\nFull steps taken to reproduce the issue are here: https://gist.github.com/natefox/306c3429825ce5eb182b\nI made a Discuss thread here as well: https://discuss.elastic.co/t/deleting-indices-with-shadow-replicas/28261\n", "comments": [ { "body": "@dakrone can you please take a look at this?\n", "created_at": "2015-09-03T07:44:45Z" }, { "body": "@natefox I'm definitely able to reproduce this now, thanks for opening this, I'll take a look!\n", "created_at": "2015-09-03T11:48:30Z" } ], "number": 13297, "title": "Deleting closed shadow replica index doesnt delete off disk" }
{ "body": "Previously we skip deleting the index store for indices on a shared\nfilesystem, because we don't want to delete the data when the shard is\nrelocating around the cluster. This adds a flag to the\n`deleteIndexStore` method signifying that the index is closed and that\nwe should allow deleting the contents even if it is on a shared\nfilesystem.\n\nIncludes a unit test for the IndicesService.canDeleteIndexContents and\nintegration tests ensure a closed shadow replica index deletes files\ncorrectly.\n\nResolves #13297\n", "number": 13309, "review_comments": [ { "body": "I actually think rather than passing in the flag for the closed, we could infer from the cluster state whether the index is closed, however, passing a flag in is less complexity since we have a separate `deleteClosedIndex` so we already know.\n\nAnyone have thoughts about this?\n", "created_at": "2015-09-03T12:30:11Z" }, { "body": "can't this just be `if (IndexMetaData.isOnSharedFilesystem(indexSettings) == false || closed)`\n", "created_at": "2015-09-04T20:46:36Z" }, { "body": "Yeah that's much simpler, changing...\n", "created_at": "2015-09-04T20:58:18Z" } ], "title": "Allow deleting closed indices with shadow replicas" }
{ "commits": [ { "message": "Allow deleting closed indices with shadow replicas\n\nPreviously we skip deleting the index store for indices on a shared\nfilesystem, because we don't want to delete the data when the shard is\nrelocating around the cluster. This adds a flag to the\n`deleteIndexStore` method signifying that the index is closed and that\nwe should allow deleting the contents even if it is on a shared\nfilesystem.\n\nIncludes a unit test for the IndicesService.canDeleteIndexContents and\nintegration tests ensure a closed shadow replica index deletes files\ncorrectly.\n\nResolves #13297" } ], "files": [ { "diff": "@@ -437,7 +437,7 @@ public Closeable apply(Class<? extends Closeable> input) {\n final Settings indexSettings = indexService.getIndexSettings();\n indicesLifecycle.afterIndexDeleted(indexService.index(), indexSettings);\n // now we are done - try to wipe data on disk if possible\n- deleteIndexStore(reason, indexService.index(), indexSettings);\n+ deleteIndexStore(reason, indexService.index(), indexSettings, false);\n }\n } catch (IOException ex) {\n throw new ElasticsearchException(\"failed to remove index \" + index, ex);\n@@ -490,7 +490,7 @@ public void deleteClosedIndex(String reason, IndexMetaData metaData, ClusterStat\n final IndexMetaData index = clusterState.metaData().index(indexName);\n throw new IllegalStateException(\"Can't delete closed index store for [\" + indexName + \"] - it's still part of the cluster state [\" + index.getIndexUUID() + \"] [\" + metaData.getIndexUUID() + \"]\");\n }\n- deleteIndexStore(reason, metaData, clusterState);\n+ deleteIndexStore(reason, metaData, clusterState, true);\n } catch (IOException e) {\n logger.warn(\"[{}] failed to delete closed index\", e, metaData.index());\n }\n@@ -501,7 +501,7 @@ public void deleteClosedIndex(String reason, IndexMetaData metaData, ClusterStat\n * Deletes the index store trying to acquire all shards locks for this index.\n * This method will delete the metadata for the index even if the actual shards can't be locked.\n */\n- public void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState clusterState) throws IOException {\n+ public void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState clusterState, boolean closed) throws IOException {\n if (nodeEnv.hasNodeFile()) {\n synchronized (this) {\n String indexName = metaData.index();\n@@ -518,18 +518,18 @@ public void deleteIndexStore(String reason, IndexMetaData metaData, ClusterState\n }\n Index index = new Index(metaData.index());\n final Settings indexSettings = buildIndexSettings(metaData);\n- deleteIndexStore(reason, index, indexSettings);\n+ deleteIndexStore(reason, index, indexSettings, closed);\n }\n }\n \n- private void deleteIndexStore(String reason, Index index, Settings indexSettings) throws IOException {\n+ private void deleteIndexStore(String reason, Index index, Settings indexSettings, boolean closed) throws IOException {\n boolean success = false;\n try {\n // we are trying to delete the index store here - not a big deal if the lock can't be obtained\n // the store metadata gets wiped anyway even without the lock this is just best effort since\n // every shards deletes its content under the shard lock it owns.\n logger.debug(\"{} deleting index store reason [{}]\", index, reason);\n- if (canDeleteIndexContents(index, indexSettings)) {\n+ if (canDeleteIndexContents(index, indexSettings, closed)) {\n nodeEnv.deleteIndexDirectorySafe(index, 0, indexSettings);\n }\n success = true;\n@@ -583,11 +583,11 @@ 
public void deleteShardStore(String reason, ShardId shardId, ClusterState cluste\n logger.debug(\"{} deleted shard reason [{}]\", shardId, reason);\n \n if (clusterState.nodes().localNode().isMasterNode() == false && // master nodes keep the index meta data, even if having no shards..\n- canDeleteIndexContents(shardId.index(), indexSettings)) {\n+ canDeleteIndexContents(shardId.index(), indexSettings, false)) {\n if (nodeEnv.findAllShardIds(shardId.index()).isEmpty()) {\n try {\n // note that deleteIndexStore have more safety checks and may throw an exception if index was concurrently created.\n- deleteIndexStore(\"no longer used\", metaData, clusterState);\n+ deleteIndexStore(\"no longer used\", metaData, clusterState, false);\n } catch (Exception e) {\n // wrap the exception to indicate we already deleted the shard\n throw new ElasticsearchException(\"failed to delete unused index after deleting its last shard (\" + shardId + \")\", e);\n@@ -606,9 +606,11 @@ public void deleteShardStore(String reason, ShardId shardId, ClusterState cluste\n * @param indexSettings {@code Settings} for the given index\n * @return true if the index can be deleted on this node\n */\n- public boolean canDeleteIndexContents(Index index, Settings indexSettings) {\n+ public boolean canDeleteIndexContents(Index index, Settings indexSettings, boolean closed) {\n final IndexServiceInjectorPair indexServiceInjectorPair = this.indices.get(index.name());\n- if (IndexMetaData.isOnSharedFilesystem(indexSettings) == false) {\n+ // Closed indices may be deleted, even if they are on a shared\n+ // filesystem. Since it is closed we aren't deleting it for relocation\n+ if (IndexMetaData.isOnSharedFilesystem(indexSettings) == false || closed) {\n if (indexServiceInjectorPair == null && nodeEnv.hasNodeFile()) {\n return true;\n }", "filename": "core/src/main/java/org/elasticsearch/indices/IndicesService.java", "status": "modified" }, { "diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.snapshots.SnapshotState;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.transport.TransportException;\n import org.elasticsearch.transport.TransportRequest;\n@@ -759,4 +760,52 @@ public void testIndexOnSharedFSRecoversToAnyNode() throws Exception {\n assertShardCountOn(newFooNode, 5);\n assertNoShardsOn(barNodes.get());\n }\n+\n+ public void testDeletingClosedIndexRemovesFiles() throws Exception {\n+ Path dataPath = createTempDir();\n+ Path dataPath2 = createTempDir();\n+ Settings nodeSettings = nodeSettings(dataPath.getParent());\n+\n+ internalCluster().startNodesAsync(2, nodeSettings).get();\n+ String IDX = \"test\";\n+ String IDX2 = \"test2\";\n+\n+ Settings idxSettings = Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n+ .build();\n+ Settings idx2Settings = Settings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 5)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(IndexMetaData.SETTING_DATA_PATH, dataPath2.toAbsolutePath().toString())\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_SHARED_FILESYSTEM, true)\n+ 
.build();\n+\n+ prepareCreate(IDX).setSettings(idxSettings).addMapping(\"doc\", \"foo\", \"type=string\").get();\n+ prepareCreate(IDX2).setSettings(idx2Settings).addMapping(\"doc\", \"foo\", \"type=string\").get();\n+ ensureGreen(IDX, IDX2);\n+\n+ int docCount = randomIntBetween(10, 100);\n+ List<IndexRequestBuilder> builders = new ArrayList<>();\n+ for (int i = 0; i < docCount; i++) {\n+ builders.add(client().prepareIndex(IDX, \"doc\", i + \"\").setSource(\"foo\", \"bar\"));\n+ builders.add(client().prepareIndex(IDX2, \"doc\", i + \"\").setSource(\"foo\", \"bar\"));\n+ }\n+ indexRandom(true, true, true, builders);\n+ flushAndRefresh(IDX, IDX2);\n+\n+ logger.info(\"--> closing index {}\", IDX);\n+ client().admin().indices().prepareClose(IDX).get();\n+\n+ logger.info(\"--> deleting non-closed index\");\n+ client().admin().indices().prepareDelete(IDX2).get();\n+ assertPathHasBeenCleared(dataPath2);\n+ logger.info(\"--> deleting closed index\");\n+ client().admin().indices().prepareDelete(IDX).get();\n+ assertPathHasBeenCleared(dataPath);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java", "status": "modified" }, { "diff": "@@ -23,9 +23,11 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.gateway.GatewayMetaState;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.ShardPath;\n@@ -51,6 +53,19 @@ protected boolean resetNodeAfterTest() {\n return true;\n }\n \n+ public void testCanDeleteIndexContent() {\n+ IndicesService indicesService = getIndicesService();\n+\n+ Settings idxSettings = settings(Version.CURRENT)\n+ .put(IndexMetaData.SETTING_SHADOW_REPLICAS, true)\n+ .put(IndexMetaData.SETTING_DATA_PATH, \"/foo/bar\")\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 4))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(0, 3))\n+ .build();\n+ assertFalse(\"shard on shared filesystem\", indicesService.canDeleteIndexContents(new Index(\"test\"), idxSettings, false));\n+ assertTrue(\"shard on shared filesystem and closed\", indicesService.canDeleteIndexContents(new Index(\"test\"), idxSettings, true));\n+ }\n+\n public void testCanDeleteShardContent() {\n IndicesService indicesService = getIndicesService();\n IndexMetaData meta = IndexMetaData.builder(\"test\").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(\n@@ -71,7 +86,7 @@ public void testDeleteIndexStore() throws Exception {\n assertTrue(test.hasShard(0));\n \n try {\n- indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state());\n+ indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state(), false);\n fail();\n } catch (IllegalStateException ex) {\n // all good\n@@ -98,7 +113,7 @@ public void testDeleteIndexStore() throws Exception {\n assertTrue(path.exists());\n \n try {\n- indicesService.deleteIndexStore(\"boom\", secondMetaData, clusterService.state());\n+ indicesService.deleteIndexStore(\"boom\", secondMetaData, clusterService.state(), false);\n fail();\n } catch (IllegalStateException ex) {\n // all good\n@@ -108,7 +123,7 @@ public void testDeleteIndexStore() throws Exception {\n 
\n // now delete the old one and make sure we resolve against the name\n try {\n- indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state());\n+ indicesService.deleteIndexStore(\"boom\", firstMetaData, clusterService.state(), false);\n fail();\n } catch (IllegalStateException ex) {\n // all good", "filename": "core/src/test/java/org/elasticsearch/indices/IndicesServiceTests.java", "status": "modified" } ] }
{ "body": "or probably broadcast/multicast/etc.\n\nIn the case of wildcard (e.g. 0.0.0.0), this is ok as a _bind_host_ for listening only. But for a publish_host these addresses will be problematic. we should just fail.\n\nseparately, If you specify `Des.network.bind_host=0.0.0.0` es will seemingly do the semi-right thing and pick a publish host like 127.0.0.1. But this should be special cased as well, because javadocs say (http://docs.oracle.com/javase/7/docs/api/java/net/NetworkInterface.html#getByInetAddress%28java.net.InetAddress%29);\n\n```\nIf the specified IP address is bound to multiple network interfaces it is not defined which network interface is returned.\n```\n", "comments": [ { "body": "I think we can do two cases. We can handle `0.0.0.0` just like we handle multiple addresses today, using the same logic we'd use if you picked a hostname that resolved to all the addresses on your machine. we pick by v4/v6 preference, then by reachability. \n\nFor multicast and broadcast we should fail, thats just a misconfiguration.\n", "created_at": "2015-09-02T16:48:05Z" }, { "body": "sounds good to me\n", "created_at": "2015-09-02T18:29:50Z" } ], "number": 13274, "title": "fail if publish host is set to wildcard address" }
{ "body": "Users might specify something like -Des.network.host=0.0.0.0, as that\nwas the old default with previous versions of elasticsearch. This means\nto bind to all interfaces, but it makes no sense as a publish address.\n\nPick a good one in this case, just like we do in other cases where\npublish isn't explicitly specified and we are bound to multiple (e.g.\nwhen configured by interface, or dns hostname with multiple addresses).\nHowever, in this case warn the user about it: since its arbitrarily\npicking the first non-loopback address like the old versions\ndid, thats a little too heuristical, but lets make the cutover easy.\n\nSeparately, fail hard if things like multicast or broadcast addresses are\nconfigured as bind or publish addresses, as that is simply invalid.\n\nCloses #13274\n", "number": 13299, "review_comments": [], "title": "Improve situation when network.host is set to wildcard (e.g. 0.0.0.0)" }
{ "commits": [ { "message": "Improve situation when network.host is set to wildcard (e.g. 0.0.0.0)\n\nUsers might specify something like -Des.network.host=0.0.0.0, as that\nwas the old default with previous versions of elasticsearch. This means\nto bind to all interfaces, but it makes no sense as a publish address.\n\nPick a good one in this case, just like we do in other cases where\npublish isn't explicitly specified and we are bound to multiple (e.g.\nwhen configured by interface, or dns hostname with multiple addresses).\nHowever, in this case warn the user about it: since its arbitrarily\npicking the first non-loopback address like the old versions\ndid, thats a little too heuristical, but lets make the cutover easy.\n\nSeparately, fail hard if things like multicast or broadcast addresses are\nconfigured as bind or publish addresses, as that is simply invalid.\n\nCloses #13274" } ], "files": [ { "diff": "@@ -22,12 +22,13 @@\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.transport.InetSocketTransportAddress;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n \n import java.io.IOException;\n import java.net.InetAddress;\n+import java.net.InterfaceAddress;\n+import java.net.NetworkInterface;\n import java.net.UnknownHostException;\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n@@ -110,7 +111,26 @@ public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException\n if (bindHost == null) {\n bindHost = DEFAULT_NETWORK_HOST;\n }\n- return resolveInetAddress(bindHost);\n+ InetAddress addresses[] = resolveInetAddress(bindHost);\n+\n+ // try to deal with some (mis)configuration\n+ if (addresses != null) {\n+ for (InetAddress address : addresses) {\n+ // check if its multicast: flat out mistake\n+ if (address.isMulticastAddress()) {\n+ throw new IllegalArgumentException(\"bind address: {\" + NetworkAddress.format(address) + \"} is invalid: multicast address\");\n+ }\n+ // check if its broadcast: flat out mistake\n+ for (NetworkInterface nic : NetworkUtils.getInterfaces()) {\n+ for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n+ if (address.equals(intf.getBroadcast())) {\n+ throw new IllegalArgumentException(\"bind address: {\" + NetworkAddress.format(address) + \"} is invalid: broadcast address\");\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return addresses;\n }\n \n // TODO: needs to be InetAddress[]\n@@ -133,7 +153,31 @@ public InetAddress resolvePublishHostAddress(String publishHost) throws IOExcept\n publishHost = DEFAULT_NETWORK_HOST;\n }\n // TODO: allow publishing multiple addresses\n- return resolveInetAddress(publishHost)[0];\n+ InetAddress address = resolveInetAddress(publishHost)[0];\n+\n+ // try to deal with some (mis)configuration\n+ if (address != null) {\n+ // check if its multicast: flat out mistake\n+ if (address.isMulticastAddress()) {\n+ throw new IllegalArgumentException(\"publish address: {\" + NetworkAddress.format(address) + \"} is invalid: multicast address\");\n+ }\n+ // check if its broadcast: flat out mistake\n+ for (NetworkInterface nic : NetworkUtils.getInterfaces()) {\n+ for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n+ if (address.equals(intf.getBroadcast())) {\n+ throw new IllegalArgumentException(\"publish address: {\" + NetworkAddress.format(address) + \"} is invalid: broadcast address\");\n+ }\n+ }\n+ 
}\n+ // wildcard address, probably set by network.host\n+ if (address.isAnyLocalAddress()) {\n+ InetAddress old = address;\n+ address = NetworkUtils.getFirstNonLoopbackAddresses()[0];\n+ logger.warn(\"publish address: {{}} is a wildcard address, falling back to first non-loopback: {{}}\", \n+ NetworkAddress.format(old), NetworkAddress.format(address));\n+ }\n+ }\n+ return address;\n }\n \n private InetAddress[] resolveInetAddress(String host) throws UnknownHostException, IOException {", "filename": "core/src/main/java/org/elasticsearch/common/network/NetworkService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,170 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.network;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.net.InetAddress;\n+import java.net.InterfaceAddress;\n+import java.net.NetworkInterface;\n+import java.util.ArrayList;\n+import java.util.Collections;\n+import java.util.List;\n+\n+/**\n+ * Tests for network service... 
try to keep them safe depending upon configuration\n+ * please don't actually bind to anything, just test the addresses.\n+ */\n+public class NetworkServiceTests extends ESTestCase {\n+\n+ /** \n+ * ensure exception if we bind to multicast ipv4 address \n+ */\n+ public void testBindMulticastV4() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ try {\n+ service.resolveBindHostAddress(\"239.1.1.1\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n+ }\n+ }\n+ \n+ /** \n+ * ensure exception if we bind to multicast ipv6 address \n+ */\n+ public void testBindMulticastV6() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ try {\n+ service.resolveBindHostAddress(\"FF08::108\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n+ }\n+ }\n+ \n+ /** \n+ * ensure exception if we publish to multicast ipv4 address \n+ */\n+ public void testPublishMulticastV4() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ try {\n+ service.resolvePublishHostAddress(\"239.1.1.1\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n+ }\n+ }\n+ \n+ /** \n+ * ensure exception if we publish to multicast ipv6 address \n+ */\n+ public void testPublishMulticastV6() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ try {\n+ service.resolvePublishHostAddress(\"FF08::108\");\n+ fail(\"should have hit exception\");\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: multicast\"));\n+ }\n+ }\n+\n+ /** \n+ * ensure exception if we bind/publish to broadcast address \n+ */\n+ public void testBindPublishBroadcast() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ // collect any broadcast addresses on the system\n+ List<InetAddress> addresses = new ArrayList<>();\n+ for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {\n+ for (InterfaceAddress intf : nic.getInterfaceAddresses()) {\n+ InetAddress address = intf.getBroadcast();\n+ if (address != null) {\n+ addresses.add(address);\n+ }\n+ }\n+ }\n+ // can easily happen (ipv6-only, localhost-only, ...)\n+ assumeTrue(\"test requires broadcast addresses configured\", addresses.size() > 0);\n+ // make sure we fail on each one\n+ for (InetAddress address : addresses) {\n+ try {\n+ service.resolveBindHostAddress(NetworkAddress.formatAddress(address));\n+ fail(\"should have hit exception for broadcast address: \" + address);\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: broadcast\"));\n+ }\n+ \n+ try {\n+ service.resolvePublishHostAddress(NetworkAddress.formatAddress(address));\n+ fail(\"should have hit exception for broadcast address: \" + address);\n+ } catch (IllegalArgumentException e) {\n+ assertTrue(e.getMessage().contains(\"invalid: broadcast\"));\n+ }\n+ }\n+ }\n+\n+ /** \n+ * ensure specifying wildcard ipv4 address will bind to all interfaces \n+ */\n+ public void testBindAnyLocalV4() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ assertEquals(InetAddress.getByName(\"0.0.0.0\"), service.resolveBindHostAddress(\"0.0.0.0\")[0]);\n+ }\n+ \n+ /** \n+ * ensure 
specifying wildcard ipv6 address will bind to all interfaces \n+ */\n+ public void testBindAnyLocalV6() throws Exception {\n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ assertEquals(InetAddress.getByName(\"::\"), service.resolveBindHostAddress(\"::\")[0]);\n+ }\n+\n+ /** \n+ * ensure specifying wildcard ipv4 address selects reasonable publish address \n+ */\n+ public void testPublishAnyLocalV4() throws Exception {\n+ InetAddress expected = null;\n+ try {\n+ expected = NetworkUtils.getFirstNonLoopbackAddresses()[0];\n+ } catch (Exception e) {\n+ assumeNoException(\"test requires up-and-running non-loopback address\", e);\n+ }\n+ \n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ assertEquals(expected, service.resolvePublishHostAddress(\"0.0.0.0\"));\n+ }\n+\n+ /** \n+ * ensure specifying wildcard ipv6 address selects reasonable publish address \n+ */\n+ public void testPublishAnyLocalV6() throws Exception {\n+ InetAddress expected = null;\n+ try {\n+ expected = NetworkUtils.getFirstNonLoopbackAddresses()[0];\n+ } catch (Exception e) {\n+ assumeNoException(\"test requires up-and-running non-loopback address\", e);\n+ }\n+ \n+ NetworkService service = new NetworkService(Settings.EMPTY);\n+ assertEquals(expected, service.resolvePublishHostAddress(\"::\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/common/network/NetworkServiceTests.java", "status": "added" } ] }
{ "body": "No logs generated for ES (in Debug mode) or system log. process does not start at all. Commenting the line , elasticsearch starts with no issues. Tried creating directory path and empty file with full permission on gc.log ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log, still unable to start with no error message any where.\n", "comments": [ { "body": "@ajaybhatnagar when I specify it on the command line, I get this error:\n\n```\n» ES_GC_LOG_FILE=/tmp/foo/gc.log bin/elasticsearch\nInvalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been \"/tmp/foo/gc.log\"\nNote %p or %t can only be used once\nError: Could not create the Java Virtual Machine.\nError: A fatal exception has occurred. Program will exit.\n```\n\nWhich file are you putting it in? It could be the same error but just not being printed to the right place.\n\nNote that even if I specify a super simple log filename (\"foo\"), I still get the same error, so I think this is a bug:\n\n```\n» ES_GC_LOG_FILE=foo bin/elasticsearch\nInvalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been \"foo\"\nNote %p or %t can only be used once\nError: Could not create the Java Virtual Machine.\nError: A fatal exception has occurred. Program will exit.\n```\n", "created_at": "2015-09-02T22:32:52Z" } ], "number": 13277, "title": "Elasticsearch 2.0.0beta1 does not start if ES_GC_LOG_FILE value is set in init file." }
{ "body": "The `-Xloggc:filename.log` parameter has very strict filename semantics:\n\n```\n[A-Z][a-z][0-9]-_.%[p|t]\n```\n\nOur script specifies \\\" and \\\" to surround it, which makes Java think we\nare sending: -Xloggc:\"foo.log\" and it fails with:\n\n```\nInvalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been \"foo.log\"\nNote %p or %t can only be used once\nError: Could not create the Java Virtual Machine.\nError: A fatal exception has occurred. Program will exit.\n```\n\nWe can't quote this, and we should not need to since the valid\ncharacters don't include a space character, so we don't need to worry\nabout quoting.\n\nResolves #13277\n", "number": 13296, "review_comments": [], "title": "Don't surround -Xloggc log filename with quotes" }
{ "commits": [ { "message": "Don't surround -Xloggc log filename with quotes\n\nThe `-Xloggc:filename.log` parameter has very strict filename semantics:\n\n```\n[A-Z][a-z][0-9]-_.%[p|t]\n```\n\nOur script specifies \\\" and \\\" to surround it, which makes Java think we\nare sending: -Xloggc:\"foo.log\" and it fails with:\n\n```\nInvalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been \"foo.log\"\nNote %p or %t can only be used once\nError: Could not create the Java Virtual Machine.\nError: A fatal exception has occurred. Program will exit.\n```\n\nWe can't quote this, and we should not need to since the valid\ncharacters don't include a space character, so we don't need to worry\nabout quoting.\n\nResolves #13277" } ], "files": [ { "diff": "@@ -62,7 +62,7 @@ if [ -n \"$ES_GC_LOG_FILE\" ]; then\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintClassHistogram\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintTenuringDistribution\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime\"\n- JAVA_OPTS=\"$JAVA_OPTS -Xloggc:\\\"$ES_GC_LOG_FILE\\\"\"\n+ JAVA_OPTS=\"$JAVA_OPTS -Xloggc:$ES_GC_LOG_FILE\"\n \n # Ensure that the directory for the log file exists: the JVM will not create it.\n mkdir -p \"`dirname \\\"$ES_GC_LOG_FILE\\\"`\"", "filename": "distribution/src/main/resources/bin/elasticsearch.in.sh", "status": "modified" } ] }
{ "body": "Prioritized allocation enables the recovery in the order of `index.priority` > `index.creation_date` > `index.name` (reversed). However, I've found that when allowing it to work based on `index.creation_date` (the default mechanism), it does it in the reverse of the expected order **relative to replicas**.\n\nIt's easy enough to reproduce with enough daily indices by manually deleting the replicas from one node, throttling the heck out of recovery, and speeding up monitoring:\n\n``` http\nPUT /_cluster/settings\n{\n \"transient\": {\n \"cluster.routing.allocation.node_concurrent_recoveries\" : 1,\n \"indices.recovery.concurrent_streams\" : 1,\n \"indices.recovery.concurrent_small_file_streams\" : 1,\n \"indices.recovery.max_bytes_per_sec\" : \"1mb\",\n \"marvel.agent.interval\" : \"500ms\"\n }\n}\n```\n\nAs I was watching it, I decided to take some screenshots:\n1. ![Reversed Priority 1 of 2](https://cloud.githubusercontent.com/assets/1501235/9609848/04e554c2-50a4-11e5-9984-0175a05bc146.png)\n2. ![Reversed Priority 2 of 2](https://cloud.githubusercontent.com/assets/1501235/9609849/04e56f20-50a4-11e5-861b-d7f108bca25b.png)\n\nThis also appears to not be honoring the `index.priority` either, as I tried to use it as a workaround and it did not impact the recovery order at all, which makes me assume that this is not even coming into play during replica recovery.\n", "comments": [ { "body": "@s1monw could you take a look at this please?\n", "created_at": "2015-09-01T17:07:45Z" }, { "body": "I don't understand what you are testing here. I can't see the priorities you are giving, I don't see if the replicas where allocated before and if not there will be no ordering as far as I can tell. I don't see if primaries got allocated first and I wonder what you expected to see sorry it's unclear.\n", "created_at": "2015-09-01T18:09:22Z" }, { "body": "> I don't understand what you are testing here. I can't see the priorities you are giving, I don't see if the replicas where allocated before and if not there will be no ordering as far as I can tell. I don't see if primaries got allocated first and I wonder what you expected to see sorry it's unclear.\n\nI'm not familiar with the screenshot source but it looks like the indexes are recovering oldest to newest rather than newest to oldest. But I'm likely reading that wrong.\n", "created_at": "2015-09-01T18:11:51Z" }, { "body": "@s1monw \n\n> I don't understand what you are testing here.\n\nReplica recovery order with 2 nodes.\n1. I throttled recovery as shown.\n2. I took the second node offline.\n3. I deleted all of its `.marvel-*` indices from the offline node.\n4. I restarted the offline node and watched recovery.\n\n> I can't see the priorities you are giving.\n\nI only set `index.priority` after seeing the images above. I picked arbitrary indices in the middle of the group and gave higher values for them individually (e.g., `.marvel-2015.08.22` I gave the priority of 200). All of the creation dates are going to be roughly around midnight of the date of the index (no weirdness or cheating on creation of the indices).\n\n> I don't see if primaries got allocated first\n\nThey did. Synced flushed replica shards (not shown) also got recovered before these replicas were recovered.\n\n> I wonder what you expected to see sorry it's unclear.\n\nI expected to see what @nik9000 suggested: the newest to oldest recovery of the **replicas**. 
Basically, `.marvel-2015.08.28`'s replica should be recovered before `.marvel-2015.08.27`'s replica, which should be recovered before `.marvel-2015.08.26`'s replica (and so on).\n\nIt seems like the replica's do not consider priority in their recovery order and the oldest indices are being recovered.\n", "created_at": "2015-09-01T18:18:21Z" }, { "body": "> I deleted all of its .marvel-\\* indices from the offline node.\n\nif you don't let the gateway allocator fetch any replicas to recover it won't respect priorities and will leave the rest to the shard balancer. The balancer will do it's own sorting at this point. This has never been implemented\n", "created_at": "2015-09-01T18:22:20Z" } ], "number": 13249, "title": "Prioritized Replica Recovery is reversed by date" }
{ "body": "Today we try to allocate primaries first and then replicas\nbut don't take the index creation date and priority into account\nas we do in the GatewayAlloactor.\n\nCloses #13249\n", "number": 13256, "review_comments": [ { "body": "This was a bit hard for me to read due to the order in which comparators are checked. Could it be rewritten in a more idiomatic way, ie.\n\n``` java\nif (o1.isPrimary() != o2.isPrimary()) {\n return o1.isPrimary() ? -1 : 1;\n}\nfinal int secondaryCmp = secondaryComparator.compare(o1, o2);\nif (secondaryCmp != 0) {\n return secondaryCmp;\n}\nfinal int indexCmp = o1.index().compareTo(o2.index()));\nif (indexCmp != 0) {\n return indexCmp;\n}\nfinal int idCmp = o1.getId() - o2.getId();\nif (idCmp != 0) {\n return idCmp;\n}\n```\n\nIt would be helpful at least for me to see more quickly in which order comparisons are performed.\n", "created_at": "2015-09-02T07:48:42Z" }, { "body": "I wanted to have the most expensive comparator last though... but I can add a comment?\n", "created_at": "2015-09-02T08:42:33Z" }, { "body": "oh I see, indeed a comment would help\n", "created_at": "2015-09-02T10:14:53Z" }, { "body": "hmm, is there a typo somewhere in the end (s/shared/shards maybe?), I don't understand it. But otherwise, I would have been happy with just \"Apply secondaryComparator last as it is more expensive than other comparators\"\n", "created_at": "2015-09-02T10:35:29Z" }, { "body": "it's `shards`\n", "created_at": "2015-09-02T10:45:03Z" } ], "title": "Also use PriorityComparator in shard balancer" }
{ "commits": [ { "message": "Also use PriorityComparator in shard balancer\n\nToday we try to allocate primaries first and then replicas\nbut don't take the index creation date and priority into account\nas we do in the GatewayAlloactor.\n\nCloses #13249" } ], "files": [ { "diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.util.ArrayUtil;\n import org.apache.lucene.util.IntroSorter;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.RoutingNode;\n@@ -36,6 +37,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.gateway.PriorityComparator;\n import org.elasticsearch.node.settings.NodeSettingsService;\n \n import java.util.*;\n@@ -559,6 +561,7 @@ private boolean allocateUnassigned(RoutingNodes.UnassignedShards unassigned) {\n * use the sorter to save some iterations. \n */\n final AllocationDeciders deciders = allocation.deciders();\n+ final PriorityComparator secondaryComparator = PriorityComparator.getAllocationComparator(allocation);\n final Comparator<ShardRouting> comparator = new Comparator<ShardRouting>() {\n @Override\n public int compare(ShardRouting o1,\n@@ -570,7 +573,12 @@ public int compare(ShardRouting o1,\n if ((indexCmp = o1.index().compareTo(o2.index())) == 0) {\n return o1.getId() - o2.getId();\n }\n- return indexCmp;\n+ // this comparator is more expensive than all the others up there\n+ // that's why it's added last even though it could be easier to read\n+ // if we'd apply it earlier. this comparator will only differentiate across\n+ // indices all shards of the same index is treated equally.\n+ final int secondary = secondaryComparator.compare(o1, o2);\n+ return secondary == 0 ? indexCmp : secondary;\n }\n };\n /*", "filename": "core/src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -120,14 +120,7 @@ public boolean allocateUnassigned(final RoutingAllocation allocation) {\n boolean changed = false;\n \n RoutingNodes.UnassignedShards unassigned = allocation.routingNodes().unassigned();\n- unassigned.sort(new PriorityComparator() {\n-\n- @Override\n- protected Settings getIndexSettings(String index) {\n- IndexMetaData indexMetaData = allocation.metaData().index(index);\n- return indexMetaData.getSettings();\n- }\n- }); // sort for priority ordering\n+ unassigned.sort(PriorityComparator.getAllocationComparator(allocation)); // sort for priority ordering\n \n changed |= primaryShardAllocator.allocateUnassigned(allocation);\n changed |= replicaShardAllocator.processExistingRecoveries(allocation);", "filename": "core/src/main/java/org/elasticsearch/gateway/GatewayAllocator.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.routing.ShardRouting;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.settings.Settings;\n \n import java.util.Comparator;\n@@ -33,7 +34,7 @@\n * here the newer indices matter more). If even that is the same, we compare the index name which is useful\n * if the date is baked into the index name. 
ie logstash-2015.05.03.\n */\n-abstract class PriorityComparator implements Comparator<ShardRouting> {\n+public abstract class PriorityComparator implements Comparator<ShardRouting> {\n \n @Override\n public final int compare(ShardRouting o1, ShardRouting o2) {\n@@ -63,4 +64,17 @@ private long timeCreated(Settings settings) {\n }\n \n protected abstract Settings getIndexSettings(String index);\n+\n+ /**\n+ * Returns a PriorityComparator that uses the RoutingAllocation index metadata to access the index setting per index.\n+ */\n+ public static PriorityComparator getAllocationComparator(final RoutingAllocation allocation) {\n+ return new PriorityComparator() {\n+ @Override\n+ protected Settings getIndexSettings(String index) {\n+ IndexMetaData indexMetaData = allocation.metaData().index(index);\n+ return indexMetaData.getSettings();\n+ }\n+ };\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/gateway/PriorityComparator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,98 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.routing.allocation;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider;\n+import org.elasticsearch.test.ESAllocationTestCase;\n+\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n+\n+public class AllocationPriorityTests extends ESAllocationTestCase {\n+\n+ /**\n+ * Tests that higher prioritized primaries and replicas are allocated first even on the balanced shard allocator\n+ * See https://github.com/elastic/elasticsearch/issues/13249 for details\n+ */\n+ public void testPrioritizedIndicesAllocatedFirst() {\n+ AllocationService allocation = createAllocationService(settingsBuilder().\n+ put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_CONCURRENT_RECOVERIES, 1)\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_INITIAL_PRIMARIES_RECOVERIES, 1)\n+ .put(ThrottlingAllocationDecider.CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES, 1).build());\n+ final String highPriorityName;\n+ final String lowPriorityName;\n+ final int priorityFirst;\n+ final int prioritySecond;\n+ if (randomBoolean()) {\n+ highPriorityName = \"first\";\n+ lowPriorityName = \"second\";\n+ prioritySecond = 1;\n+ priorityFirst = 100;\n+ } else {\n+ lowPriorityName = \"first\";\n+ highPriorityName = \"second\";\n+ 
prioritySecond = 100;\n+ priorityFirst = 1;\n+ }\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"first\").settings(settings(Version.CURRENT).put(IndexMetaData.SETTING_PRIORITY, priorityFirst)).numberOfShards(2).numberOfReplicas(1))\n+ .put(IndexMetaData.builder(\"second\").settings(settings(Version.CURRENT).put(IndexMetaData.SETTING_PRIORITY, prioritySecond)).numberOfShards(2).numberOfReplicas(1))\n+ .build();\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"first\"))\n+ .addAsNew(metaData.index(\"second\"))\n+ .build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n+\n+ clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n+ RoutingAllocation.Result rerouteResult = allocation.reroute(clusterState);\n+ clusterState = ClusterState.builder(clusterState).routingTable(rerouteResult.routingTable()).build();\n+\n+ routingTable = allocation.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertEquals(2, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size());\n+ assertEquals(highPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(0).index());\n+ assertEquals(highPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(1).index());\n+\n+ routingTable = allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertEquals(2, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size());\n+ assertEquals(lowPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(0).index());\n+ assertEquals(lowPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(1).index());\n+\n+ routingTable = allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertEquals(2, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size());\n+ assertEquals(highPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(0).index());\n+ assertEquals(highPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(1).index());\n+\n+ routingTable = allocation.applyStartedShards(clusterState, clusterState.getRoutingNodes().shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ assertEquals(2, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).size());\n+ assertEquals(lowPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(0).index());\n+ assertEquals(lowPriorityName, clusterState.getRoutingNodes().shardsWithState(INITIALIZING).get(1).index());\n+\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/cluster/routing/allocation/AllocationPriorityTests.java", "status": "added" } ] }
{ "body": "prerequisite to #9421\nsee also #12600\n", "comments": [ { "body": "I love this change looks very good. I left some comments around unittesting \n", "created_at": "2015-08-25T09:19:32Z" }, { "body": "I agree with @s1monw that this looks great. I left minor comments here and there.\n", "created_at": "2015-08-28T14:40:36Z" }, { "body": "LGTM \n", "created_at": "2015-08-31T12:30:47Z" }, { "body": "@s1monw thanks for the review! \n\naddressed all comments. @bleskes want to have one more look?\n", "created_at": "2015-08-31T13:34:20Z" }, { "body": "Looks awesome. I only miss a test for the aggregation of results from multiple shard level responses.\n", "created_at": "2015-08-31T14:15:22Z" }, { "body": "@bleskes addressed all comments and added test here: https://github.com/elastic/elasticsearch/pull/13068/files#diff-6030559b5ed4d55d9a754523f5c6ce6dR137 \n", "created_at": "2015-08-31T15:37:06Z" }, { "body": "LGTM! (minor comments, no need for another review)\n", "created_at": "2015-08-31T15:58:39Z" } ], "number": 13068, "title": "Make refresh a replicated action" }
{ "body": "Currently, we do not allow reads on shards which are in POST_RECOVERY which\nunfortunately can cause search failures on shards which just recovered if there no replicas (#9421).\nThe reason why we did not allow reads on shards that are in POST_RECOVERY is\nthat after relocating a shard might miss a refresh if the node that executed the\nrefresh is behind with cluster state processing. If that happens, a user might execute\nindex/refresh/search but still not find the document that was indexed.\n\nWe changed how refresh works now in #13068 to make sure that shards cannot miss a refresh this\nway by sending refresh requests the same way that we send write requests.\n\nThis commit changes IndexShard to allow reads on POST_RECOVERY now.\nIn addition it adds two test:\n- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)\n- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY\n\ncloses #9421\n", "number": 13246, "review_comments": [ { "body": "We should probably update the exception message to be `started/relocated/post-recovery` also?\n", "created_at": "2015-09-01T14:16:35Z" }, { "body": "can we make an enum set out of these?\n", "created_at": "2015-09-01T14:34:45Z" }, { "body": "great. Can we do the same to the write allowed states? writeAllowedOnPrimaryStates, writeAllowedOnReplicaStates?\n", "created_at": "2015-09-01T19:10:37Z" }, { "body": "yes! but can we do this in another pr?\n", "created_at": "2015-09-01T19:44:47Z" }, { "body": "Sure\n\nOn Tue, Sep 1, 2015 at 9:45 PM, Britta Weber notifications@github.com\nwrote:\n\n> > @@ -191,6 +192,8 @@\n> > \n> > ```\n> > private final IndexShardOperationCounter indexShardOperationCounter;\n> > ```\n> > - private EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n> > yes! but can we do this in another pr?\n> > ---\n> > Reply to this email directly or view it on GitHub:\n> > https://github.com/elastic/elasticsearch/pull/13246/files#r38462475\n", "created_at": "2015-09-01T19:57:15Z" } ], "title": "Allow reads on shards that are in POST_RECOVERY" }
{ "commits": [ { "message": "Allow reads on shards that are in POST_RECOVERY\n\nCurrently, we do not allow reads on shards which are in POST_RECOVERY which\nunfortunately can cause search failures on shards which just recovered if there no replicas (#9421).\nThe reason why we did not allow reads on shards that are in POST_RECOVERY is\nthat after relocating a shard might miss a refresh if the node that executed the\nrefresh is behind with cluster state processing. If that happens, a user might execute\nindex/refresh/search but still not find the document that was indexed.\n\nWe changed how refresh works now in #13068 to make sure that shards cannot miss a refresh this\nway by sending refresh requests the same way that we send write requests.\n\nThis commit changes IndexShard to allow reads on POST_RECOVERY now.\nIn addition it adds two test:\n\n- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)\n- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY\n\ncloses #9421" } ], "files": [ { "diff": "@@ -111,6 +111,7 @@\n import java.io.PrintStream;\n import java.nio.channels.ClosedByInterruptException;\n import java.util.Arrays;\n+import java.util.EnumSet;\n import java.util.Locale;\n import java.util.Map;\n import java.util.concurrent.CopyOnWriteArrayList;\n@@ -191,6 +192,8 @@ public class IndexShard extends AbstractIndexShardComponent {\n \n private final IndexShardOperationCounter indexShardOperationCounter;\n \n+ private EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n+\n @Inject\n public IndexShard(ShardId shardId, IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, StoreRecoveryService storeRecoveryService,\n ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService,\n@@ -953,8 +956,8 @@ public boolean ignoreRecoveryAttempt() {\n \n public void readAllowed() throws IllegalIndexShardStateException {\n IndexShardState state = this.state; // one time volatile read\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n- throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when started/relocated\");\n+ if (readAllowedStates.contains(state) == false) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when shard state is one of \" + readAllowedStates.toString());\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -23,21 +23,29 @@\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n-import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n+import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;\n+import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.client.Client;\n import 
org.elasticsearch.cluster.*;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.DjbHashFunction;\n+import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n@@ -48,14 +56,21 @@\n import org.elasticsearch.discovery.zen.ping.ZenPingService;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;\n import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.recovery.RecoverySource;\n+import org.elasticsearch.indices.store.IndicesStoreIntegrationIT;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.discovery.ClusterDiscoveryConfiguration;\n import org.elasticsearch.test.disruption.*;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.transport.MockTransportService;\n-import org.elasticsearch.transport.*;\n+import org.elasticsearch.transport.TransportException;\n+import org.elasticsearch.transport.TransportRequest;\n+import org.elasticsearch.transport.TransportRequestOptions;\n+import org.elasticsearch.transport.TransportService;\n import org.junit.Before;\n import org.junit.Test;\n \n@@ -812,7 +827,9 @@ public void isolatedUnicastNodes() throws Exception {\n }\n \n \n- /** Test cluster join with issues in cluster state publishing * */\n+ /**\n+ * Test cluster join with issues in cluster state publishing *\n+ */\n @Test\n public void testClusterJoinDespiteOfPublishingIssues() throws Exception {\n List<String> nodes = startCluster(2, 1);\n@@ -919,6 +936,277 @@ public void testNodeNotReachableFromMaster() throws Exception {\n ensureStableCluster(3);\n }\n \n+ /*\n+ * Tests a visibility issue if a shard is in POST_RECOVERY\n+ *\n+ * When a user indexes a document, then refreshes and then a executes a search and all are successful and no timeouts etc then\n+ * the document must be visible for the search.\n+ *\n+ * When a primary is relocating from node_1 to node_2, there can be a short time where both old and new primary\n+ * are started and accept indexing and read requests. However, the new primary might not be visible to nodes\n+ * that lag behind one cluster state. If such a node then sends a refresh to the index, this refresh request\n+ * must reach the new primary on node_2 too. Otherwise a different node that searches on the new primary might not\n+ * find the indexed document although a refresh was executed before.\n+ *\n+ * In detail:\n+ * Cluster state 0:\n+ * node_1: [index][0] STARTED (ShardRoutingState)\n+ * node_2: no shard\n+ *\n+ * 0. 
primary ([index][0]) relocates from node_1 to node_2\n+ * Cluster state 1:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is RECOVERING)\n+ *\n+ * 1. node_2 is done recovering, moves its shard to IndexShardState.POST_RECOVERY and sends a message to master that the shard is ShardRoutingState.STARTED\n+ * Cluster state is still the same but the IndexShardState on node_2 has changed and it now accepts writes and reads:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is POST_RECOVERY)\n+ *\n+ * 2. any node receives an index request which is then executed on node_1 and node_2\n+ *\n+ * 3. node_3 sends a refresh but it is a little behind with cluster state processing and still on cluster state 0.\n+ * If refresh was a broadcast operation it send it to node_1 only because it does not know node_2 has a shard too\n+ *\n+ * 4. node_3 catches up with the cluster state and acks it to master which now can process the shard started message\n+ * from node_2 before and updates cluster state to:\n+ * Cluster state 2:\n+ * node_1: [index][0] no shard\n+ * node_2: [index][0] STARTED (ShardRoutingState), (IndexShardState on node_2 is still POST_RECOVERY)\n+ *\n+ * master sends this to all nodes.\n+ *\n+ * 5. node_4 and node_3 process cluster state 2, but node_1 and node_2 have not yet\n+ *\n+ * If now node_4 searches for document that was indexed before, it will search at node_2 because it is on\n+ * cluster state 2. It should be able to retrieve it with a search because the refresh from before was\n+ * successful.\n+ */\n+ @Test\n+ public void testReadOnPostRecoveryShards() throws Exception {\n+ List<BlockClusterStateProcessing> clusterStateBlocks = new ArrayList<>();\n+ try {\n+ configureUnicastCluster(5, null, 1);\n+ // we could probably write a test without a dedicated master node but it is easier if we use one\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ // node_1 will have the shard in the beginning\n+ Future<String> node1Future = internalCluster().startDataOnlyNodeAsync();\n+ final String masterNode = masterNodeFuture.get();\n+ final String node_1 = node1Future.get();\n+ logger.info(\"--> creating index [test] with one shard and zero replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(IndexShard.INDEX_REFRESH_INTERVAL, -1))\n+ .addMapping(\"doc\", jsonBuilder().startObject().startObject(\"doc\")\n+ .startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject()\n+ .endObject().endObject())\n+ );\n+ ensureGreen(\"test\");\n+ logger.info(\"--> starting three more data nodes\");\n+ List<String> nodeNamesFuture = internalCluster().startDataOnlyNodesAsync(3).get();\n+ final String node_2 = nodeNamesFuture.get(0);\n+ final String node_3 = nodeNamesFuture.get(1);\n+ final String node_4 = nodeNamesFuture.get(2);\n+ logger.info(\"--> running cluster_health\");\n+ ClusterHealthResponse clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForNodes(\"5\")\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+\n+ 
logger.info(\"--> move shard from node_1 to node_2, and wait for relocation to finish\");\n+\n+ // block cluster state updates on node_3 so that it only sees the shard on node_1\n+ BlockClusterStateProcessing disruptionNode3 = new BlockClusterStateProcessing(node_3, getRandom());\n+ clusterStateBlocks.add(disruptionNode3);\n+ internalCluster().setDisruptionScheme(disruptionNode3);\n+ disruptionNode3.startDisrupting();\n+ // register a Tracer that notifies begin and end of a relocation\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatchNode2 = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatchNode2 = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new StartRecoveryToShardStaredTracer(logger, beginRelocationLatchNode2, endRelocationLatchNode2));\n+\n+ // block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ BlockClusterStateProcessing disruptionNode2 = new BlockClusterStateProcessing(node_2, getRandom());\n+ clusterStateBlocks.add(disruptionNode2);\n+ disruptionNode2.applyToCluster(internalCluster());\n+ BlockClusterStateProcessing disruptionNode1 = new BlockClusterStateProcessing(node_1, getRandom());\n+ clusterStateBlocks.add(disruptionNode1);\n+ disruptionNode1.applyToCluster(internalCluster());\n+\n+ logger.info(\"--> move shard from node_1 to node_2\");\n+ // don't block on the relocation. cluster state updates are blocked on node_3 and the relocation would timeout\n+ Future<ClusterRerouteResponse> rerouteFuture = internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).setTimeout(new TimeValue(1000, TimeUnit.MILLISECONDS)).execute();\n+\n+ logger.info(\"--> wait for relocation to start\");\n+ // wait for relocation to start\n+ beginRelocationLatchNode2.await();\n+ // start to block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ // one STARTED on node_1 and one in POST_RECOVERY on node_2\n+ disruptionNode1.startDisrupting();\n+ disruptionNode2.startDisrupting();\n+ endRelocationLatchNode2.await();\n+ final Client node3Client = internalCluster().client(node_3);\n+ final Client node2Client = internalCluster().client(node_2);\n+ final Client node1Client = internalCluster().client(node_1);\n+ final Client node4Client = internalCluster().client(node_4);\n+ logger.info(\"--> index doc\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ assertTrue(node3Client.prepareIndex(\"test\", \"doc\").setSource(\"{\\\"text\\\":\\\"a\\\"}\").get().isCreated());\n+ //sometimes refresh and sometimes flush\n+ int refreshOrFlushType = randomIntBetween(1, 2);\n+ switch (refreshOrFlushType) {\n+ case 1: {\n+ logger.info(\"--> refresh from node_3\");\n+ RefreshResponse refreshResponse = node3Client.admin().indices().prepareRefresh().get();\n+ assertThat(refreshResponse.getFailedShards(), equalTo(0));\n+ // the total shards is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(refreshResponse.getTotalShards(), equalTo(2));\n+ assertThat(refreshResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ case 2: {\n+ logger.info(\"--> flush from node_3\");\n+ FlushResponse flushResponse = node3Client.admin().indices().prepareFlush().get();\n+ assertThat(flushResponse.getFailedShards(), equalTo(0));\n+ // the total shards 
is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(flushResponse.getTotalShards(), equalTo(2));\n+ assertThat(flushResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ default:\n+ fail(\"this is test bug, number should be between 1 and 2\");\n+ }\n+ // now stop disrupting so that node_3 can ack last cluster state to master and master can continue\n+ // to publish the next cluster state\n+ logger.info(\"--> stop disrupting node_3\");\n+ disruptionNode3.stopDisrupting();\n+ rerouteFuture.get();\n+ logger.info(\"--> wait for node_4 to get new cluster state\");\n+ // wait until node_4 actually has the new cluster state in which node_1 has no shard\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterState clusterState = node4Client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ // get the node id from the name. TODO: Is there a better way to do this?\n+ String nodeId = null;\n+ for (RoutingNode node : clusterState.getRoutingNodes()) {\n+ if (node.node().name().equals(node_1)) {\n+ nodeId = node.nodeId();\n+ }\n+ }\n+ assertNotNull(nodeId);\n+ // check that node_1 does not have the shard in local cluster state\n+ assertFalse(clusterState.getRoutingNodes().routingNodeIter(nodeId).hasNext());\n+ }\n+ });\n+\n+ logger.info(\"--> run count from node_4\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ CountResponse countResponse = node4Client.prepareCount(\"test\").setPreference(\"local\").get();\n+ assertThat(countResponse.getCount(), equalTo(1l));\n+ logger.info(\"--> stop disrupting node_1 and node_2\");\n+ disruptionNode2.stopDisrupting();\n+ disruptionNode1.stopDisrupting();\n+ // wait for relocation to finish\n+ logger.info(\"--> wait for relocation to finish\");\n+ clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+ } catch (AssertionError e) {\n+ for (BlockClusterStateProcessing blockClusterStateProcessing : clusterStateBlocks) {\n+ blockClusterStateProcessing.stopDisrupting();\n+ }\n+ throw e;\n+ }\n+ }\n+\n+ /**\n+ * This Tracer can be used to signal start of a recovery and shard started event after translog was copied\n+ */\n+ public static class StartRecoveryToShardStaredTracer extends MockTransportService.Tracer {\n+ private final ESLogger logger;\n+ private final CountDownLatch beginRelocationLatch;\n+ private final CountDownLatch sentShardStartedLatch;\n+\n+ public StartRecoveryToShardStaredTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch sentShardStartedLatch) {\n+ this.logger = logger;\n+ this.beginRelocationLatch = beginRelocationLatch;\n+ this.sentShardStartedLatch = sentShardStartedLatch;\n+ }\n+\n+ @Override\n+ public void requestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) {\n+ if (action.equals(RecoverySource.Actions.START_RECOVERY)) {\n+ logger.info(\"sent: {}, relocation starts\", action);\n+ beginRelocationLatch.countDown();\n+ }\n+ if (action.equals(ShardStateAction.SHARD_STARTED_ACTION_NAME)) {\n+ logger.info(\"sent: {}, shard started\", action);\n+ sentShardStartedLatch.countDown();\n+ }\n+ }\n+ }\n+\n+ private void logLocalClusterStates(Client... 
clients) {\n+ int counter = 1;\n+ for (Client client : clients) {\n+ ClusterState clusterState = client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ logger.info(\"--> cluster state on node_{} {}\", counter, clusterState.prettyPrint());\n+ counter++;\n+ }\n+ }\n+\n+ /**\n+ * This test creates a scenario where a primary shard (0 replicas) relocates and is in POST_RECOVERY on the target\n+ * node but already deleted on the source node. Search request should still work.\n+ */\n+ @Test\n+ public void searchWithRelocationAndSlowClusterStateProcessing() throws Exception {\n+ configureUnicastCluster(3, null, 1);\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ Future<String> node_1Future = internalCluster().startDataOnlyNodeAsync();\n+\n+ final String node_1 = node_1Future.get();\n+ final String masterNode = masterNodeFuture.get();\n+ logger.info(\"--> creating index [test] with one shard and on replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n+ );\n+ ensureGreen(\"test\");\n+\n+ Future<String> node_2Future = internalCluster().startDataOnlyNodeAsync();\n+ final String node_2 = node_2Future.get();\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ for (int i = 0; i < 100; i++) {\n+ indexRequestBuilderList.add(client().prepareIndex().setIndex(\"test\").setType(\"doc\").setSource(\"{\\\"int_field\\\":1}\"));\n+ }\n+ indexRandom(true, indexRequestBuilderList);\n+ SingleNodeDisruption disruption = new BlockClusterStateProcessing(node_2, getRandom());\n+\n+ internalCluster().setDisruptionScheme(disruption);\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatch = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatch = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new IndicesStoreIntegrationIT.ReclocationStartEndTracer(logger, beginRelocationLatch, endRelocationLatch));\n+ internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).get();\n+ // wait for relocation to start\n+ beginRelocationLatch.await();\n+ disruption.startDisrupting();\n+ // wait for relocation to finish\n+ endRelocationLatch.await();\n+ // now search for the documents and see if we get a reply\n+ assertThat(client().prepareCount().get().getCount(), equalTo(100l));\n+ }\n+\n @Test\n public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Exception {\n // test for https://github.com/elastic/elasticsearch/issues/8823\n@@ -932,6 +1220,7 @@ public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Ex\n ensureGreen();\n \n internalCluster().restartNode(masterNode, new InternalTestCluster.RestartCallback() {\n+ @Override\n public boolean clearData(String nodeName) {\n return true;\n }", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
{ "body": "SimpleSortTests.testIssue8226 for example fails about once a week. Example failure:\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_1x_metal/3129/\n\nI can reproduce it locally (although very rarely) with some additional logging (action.search.type: TRACE). \n\nHere is a brief analysis of what happened. Would be great if someone could take a look and let me know if this makes sense.\n\nFailure:\n\n```\n1> REPRODUCE WITH : mvn clean test -Dtests.seed=774A2866F1B6042D -Dtests.class=org.elasticsearch.search.sort.SimpleSortTests -Dtests.method=\"testIssue8226 {#76 seed=[774A2866F1B6042D:ACB4FF9F8C8CA341]}\" -Des.logger.level=DEBUG -Des.node.mode=network -Dtests.security.manager=true -Dtests.nightly=false -Dtests.client.ratio=0.0 -Dtests.heap.size=512m -Dtests.jvm.argline=\"-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true\" -Dtests.locale=fi_FI -Dtests.timezone=Etc/GMT+9 -Dtests.processors=4\n 1> Throwable:\n 1> java.lang.AssertionError: One or more shards were not successful but didn't trigger a failure\n 1> Expected: <47>\n 1> but: was <46>\n```\n\nHere is an example failure in detail, the relevant parts of the logs are below:\n## State\n\nnode_0 is master.\n[test_5][0] is relocating from node_1 to node_0.\nCluster state 3673 has the shard as relocating, in cluster state 3674 it is started.\nnode_0 is the coordinating node for the search request.\n\nIn brief, the request fails for shard [test_5][0] because node_0 operates on an older cluster state 3673 when processing the search request, while node_1 is already on 3674.\n## Course of events:\n1. node_0 sends shard started, but the shard is still in state POST_RECOVERY and will remain so until it receives the new cluster state and applies it locally\n2. node_0(master) receives the shard started request and publishes the new cluster state 3674 to node_0 and node_1\n3. node_1 receives the cluster state 3674 and applies it locally\n4. node_0 sends search request for [test_5][0] to node_1 because according to cluster state 3673 the shard is there and relocating\n -> request fails with IndexShardMissingException because node_1 already applied cluster state 3674 and deleted the shard.\n5. node_0 then sends request for [test_5][0] to node_0 because the shard is there as well (according to cluster state 3673 it is and initializing)\n -> request fails with IllegalIndexShardStateException because node_0 has not yet processed cluster state 3674 and therefore the shard is in POST_RECOVERY instead of STARTED\n No shard failure is logged because IndexShardMissingException and IllegalIndexShardStateException are explicitly excluded from shard failures.\n6. node_0 finally also gets to process the new cluster state and moves the shard [test_5][0] to STARTED but it is too late\n\nThis is a very rare condition and maybe too bad on client side because the information that one shard did not deliver results is there although it is not explicitly listed as shard failure. 
We can probably make the test pass easily be just waiting for relocations before executing the search request but that seems wrong because any search request can fail this way.\n## Sample log\n\n```\n[....]\n\n 1> [2015-01-26 09:27:14,435][DEBUG][indices.recovery ] [node_0] [test_5][0] recovery completed from [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}], took [84ms]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] sending shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] received shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: execute\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.action.shard ] [node_0] [test_5][0] will apply shard started [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n\n\n[....]\n\n\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] cluster state updated, version [3674], source [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] publishing cluster state version 3674\n 1> [2015-01-26 09:27:14,442][DEBUG][discovery.zen.publish ] [node_1] received cluster state version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] processing [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]: execute\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] cluster state updated, version [3674], source [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][indices.cluster ] [node_1] [test_5][0] removing shard (not allocated)\n 1> [2015-01-26 09:27:14,443][DEBUG][index ] [node_1] [test_5] [0] closing... 
(reason: [removing shard (not allocated)])\n 1> [2015-01-26 09:27:14,443][INFO ][test.store ] [node_1] [test_5][0] Shard state before potentially flushing is STARTED\n 1> [2015-01-26 09:27:14,453][DEBUG][search.sort ] cluster state:\n 1> version: 3673\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3006):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (1):\n 1> 13638/URGENT/shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], 
relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]/13ms\n 1>\n\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_3][3]\n 1> [2015-01-26 09:27:14,460][TRACE][action.search.type ] [node_0] [test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [false]\n 1> org.elasticsearch.transport.RemoteTransportException: [node_1][inet[/192.168.2.102:9401]][indices:data/read/search[phase/dfs]]\n 1> Caused by: org.elasticsearch.index.IndexShardMissingException: [test_5][0] missing\n 1> at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:203)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:539)\n 1> at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:757)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:748)\n 1> at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:275)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_6][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_5][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_7][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_3][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_8][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_1][0]\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,463][TRACE][action.search.type ] [node_0] [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [true]\n 1> org.elasticsearch.index.shard.IllegalIndexShardStateException: [test_5][0] CurrentState[POST_RECOVERY] operations only allowed when started/relocated\n 1> at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:839)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:651)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:647)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:543)\n 1> at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:197)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:194)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_2][4]\n\n\n\n[...]\n\n\n\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,493][DEBUG][index.shard ] [node_0] [test_5][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: execute\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: no change in cluster_state\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: done applying updated cluster_state (version: 3674)\n 1> [2015-01-26 09:27:14,456][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_2][3]\n\n\n[...]\n\n 1> [2015-01-26 09:27:14,527][DEBUG][search.sort ] cluster state:\n 1> version: 3674\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3007):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], 
node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], 
[R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (0):\n 1>\n\n[...]\n```\n", "comments": [ { "body": "A similar test failure:\n\n`org.elasticsearch.deleteByQuery.DeleteByQueryTests.testDeleteAllOneIndex`\n\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_master_metal/2579/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_centos/2640/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_regression/1263/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\n\nIt fails on the:\n\n``` java\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n```\n\nWhich I believe relates to the relocation issue Britta mentioned.\n", "created_at": "2015-01-27T00:21:32Z" }, { "body": "I think this is unrelated. I actually fixed the DeleteByQueryTests yesterday (c3f1982f21150336f87b7b4def74e019e8bdac18) and this commit does not seem to be in the build you linked to.\n\nA brief explanation: DeleteByQuery is a write operation. The shard header returned and checked in DeleteByQueryTests is different from the one return for search requests. The reason why DeleteByQuery failed is because I added the check \n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.totalNumShards));\n\nbefore which was wrong because there was no ensureGreen() so some of the replicas might not have ben initialized yet. I fixed this in c3f1982f2 by instead checking\n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n", "created_at": "2015-01-27T08:57:22Z" }, { "body": "I wonder if we should just allow reads in the POST_RECOVERY phase. At that point the shards is effectively ready to do everything it needs to do. 
@brwe this will solve the issue, right?\n", "created_at": "2015-01-27T10:36:17Z" }, { "body": "@brwe okay, does that mean I can unmute the `DeleteByQueryTests.testDeleteAllOneIndex`?\n", "created_at": "2015-01-27T16:42:41Z" }, { "body": "yes\n", "created_at": "2015-01-27T16:45:32Z" }, { "body": "Unmuted the `DeleteByQueryTests.testDeleteAllOneIndex` test\n", "created_at": "2015-01-27T17:42:04Z" }, { "body": "@bleskes I think that would fix it. However, before I push I want to try and write a test that reproduces reliably. Will not do before next week.\n", "created_at": "2015-01-28T15:25:51Z" }, { "body": "@brwe please ping before starting on this. I want to make sure that we capture the original issue which caused us to introduce POST_RECOVERY. I don't exactly recall what the problem was (it was refresh related) and I think it was solved by a more recent change to how refresh works (#6545) but it requires careful thought\n", "created_at": "2015-02-24T12:48:00Z" }, { "body": "@bleskes ping :)\nI finally came back to this and wrote a test that reproduces the failure reliably (#10194) but I did not quite get what you meant by \"capture the original issue\". Can you elaborate?\n", "created_at": "2015-03-20T21:11:20Z" }, { "body": "@kimchy do you recall why we can't read in that state?\n", "created_at": "2015-04-13T14:39:28Z" } ], "number": 9421, "title": "After relocation shards might temporarily not be searchable if still in POST_RECOVERY" }
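The failures in the trace above boil down to the read gate in IndexShard: reads were only allowed in the STARTED and RELOCATED states, so a shard that had finished recovery but was still in POST_RECOVERY rejected searches with the IllegalIndexShardStateException shown in the logs. Below is a minimal, self-contained sketch of that gate and of the relaxation discussed in the comments (allowing POST_RECOVERY as well); the enum and class names are illustrative stand-ins, not the actual Elasticsearch code.

```java
import java.util.EnumSet;

// Self-contained toy model of the shard read gate discussed in this issue.
// The enum and class names are illustrative stand-ins, not the Elasticsearch classes.
class ReadGateSketch {
    enum ShardState { RECOVERING, POST_RECOVERY, STARTED, RELOCATED }

    // Old behaviour: only STARTED and RELOCATED may serve reads, so a freshly recovered
    // primary that is still in POST_RECOVERY rejects the search, as seen in the trace above.
    static final EnumSet<ShardState> OLD_READ_ALLOWED =
            EnumSet.of(ShardState.STARTED, ShardState.RELOCATED);

    // Proposed behaviour: POST_RECOVERY is readable too, since the shard already has all
    // documents once recovery has finished.
    static final EnumSet<ShardState> NEW_READ_ALLOWED =
            EnumSet.of(ShardState.STARTED, ShardState.RELOCATED, ShardState.POST_RECOVERY);

    static void readAllowed(ShardState state, EnumSet<ShardState> allowed) {
        if (allowed.contains(state) == false) {
            throw new IllegalStateException("operations only allowed when shard state is one of " + allowed);
        }
    }

    public static void main(String[] args) {
        readAllowed(ShardState.POST_RECOVERY, NEW_READ_ALLOWED); // passes
        readAllowed(ShardState.POST_RECOVERY, OLD_READ_ALLOWED); // throws, like the failure in the logs
    }
}
```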
{ "body": "Currently, we do not allow reads on shards which are in POST_RECOVERY which\nunfortunately can cause search failures on shards which just recovered if there no replicas (#9421).\nThe reason why we did not allow reads on shards that are in POST_RECOVERY is\nthat after relocating a shard might miss a refresh if the node that executed the\nrefresh is behind with cluster state processing. If that happens, a user might execute\nindex/refresh/search but still not find the document that was indexed.\n\nWe changed how refresh works now in #13068 to make sure that shards cannot miss a refresh this\nway by sending refresh requests the same way that we send write requests.\n\nThis commit changes IndexShard to allow reads on POST_RECOVERY now.\nIn addition it adds two test:\n- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)\n- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY\n\ncloses #9421\n", "number": 13246, "review_comments": [ { "body": "We should probably update the exception message to be `started/relocated/post-recovery` also?\n", "created_at": "2015-09-01T14:16:35Z" }, { "body": "can we make an enum set out of these?\n", "created_at": "2015-09-01T14:34:45Z" }, { "body": "great. Can we do the same to the write allowed states? writeAllowedOnPrimaryStates, writeAllowedOnReplicaStates?\n", "created_at": "2015-09-01T19:10:37Z" }, { "body": "yes! but can we do this in another pr?\n", "created_at": "2015-09-01T19:44:47Z" }, { "body": "Sure\n\nOn Tue, Sep 1, 2015 at 9:45 PM, Britta Weber notifications@github.com\nwrote:\n\n> > @@ -191,6 +192,8 @@\n> > \n> > ```\n> > private final IndexShardOperationCounter indexShardOperationCounter;\n> > ```\n> > - private EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n> > yes! but can we do this in another pr?\n> > ---\n> > Reply to this email directly or view it on GitHub:\n> > https://github.com/elastic/elasticsearch/pull/13246/files#r38462475\n", "created_at": "2015-09-01T19:57:15Z" } ], "title": "Allow reads on shards that are in POST_RECOVERY" }
{ "commits": [ { "message": "Allow reads on shards that are in POST_RECOVERY\n\nCurrently, we do not allow reads on shards which are in POST_RECOVERY which\nunfortunately can cause search failures on shards which just recovered if there no replicas (#9421).\nThe reason why we did not allow reads on shards that are in POST_RECOVERY is\nthat after relocating a shard might miss a refresh if the node that executed the\nrefresh is behind with cluster state processing. If that happens, a user might execute\nindex/refresh/search but still not find the document that was indexed.\n\nWe changed how refresh works now in #13068 to make sure that shards cannot miss a refresh this\nway by sending refresh requests the same way that we send write requests.\n\nThis commit changes IndexShard to allow reads on POST_RECOVERY now.\nIn addition it adds two test:\n\n- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)\n- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY\n\ncloses #9421" } ], "files": [ { "diff": "@@ -111,6 +111,7 @@\n import java.io.PrintStream;\n import java.nio.channels.ClosedByInterruptException;\n import java.util.Arrays;\n+import java.util.EnumSet;\n import java.util.Locale;\n import java.util.Map;\n import java.util.concurrent.CopyOnWriteArrayList;\n@@ -191,6 +192,8 @@ public class IndexShard extends AbstractIndexShardComponent {\n \n private final IndexShardOperationCounter indexShardOperationCounter;\n \n+ private EnumSet<IndexShardState> readAllowedStates = EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);\n+\n @Inject\n public IndexShard(ShardId shardId, IndexSettingsService indexSettingsService, IndicesLifecycle indicesLifecycle, Store store, StoreRecoveryService storeRecoveryService,\n ThreadPool threadPool, MapperService mapperService, IndexQueryParserService queryParserService, IndexCache indexCache, IndexAliasesService indexAliasesService,\n@@ -953,8 +956,8 @@ public boolean ignoreRecoveryAttempt() {\n \n public void readAllowed() throws IllegalIndexShardStateException {\n IndexShardState state = this.state; // one time volatile read\n- if (state != IndexShardState.STARTED && state != IndexShardState.RELOCATED) {\n- throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when started/relocated\");\n+ if (readAllowedStates.contains(state) == false) {\n+ throw new IllegalIndexShardStateException(shardId, state, \"operations only allowed when shard state is one of \" + readAllowedStates.toString());\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -23,21 +23,29 @@\n import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n-import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteResponse;\n+import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;\n+import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.client.Client;\n import 
org.elasticsearch.cluster.*;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.DjbHashFunction;\n+import org.elasticsearch.cluster.routing.RoutingNode;\n+import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.zen.ZenDiscovery;\n@@ -48,14 +56,21 @@\n import org.elasticsearch.discovery.zen.ping.ZenPingService;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;\n import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.recovery.RecoverySource;\n+import org.elasticsearch.indices.store.IndicesStoreIntegrationIT;\n import org.elasticsearch.plugins.Plugin;\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.discovery.ClusterDiscoveryConfiguration;\n import org.elasticsearch.test.disruption.*;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.transport.MockTransportService;\n-import org.elasticsearch.transport.*;\n+import org.elasticsearch.transport.TransportException;\n+import org.elasticsearch.transport.TransportRequest;\n+import org.elasticsearch.transport.TransportRequestOptions;\n+import org.elasticsearch.transport.TransportService;\n import org.junit.Before;\n import org.junit.Test;\n \n@@ -812,7 +827,9 @@ public void isolatedUnicastNodes() throws Exception {\n }\n \n \n- /** Test cluster join with issues in cluster state publishing * */\n+ /**\n+ * Test cluster join with issues in cluster state publishing *\n+ */\n @Test\n public void testClusterJoinDespiteOfPublishingIssues() throws Exception {\n List<String> nodes = startCluster(2, 1);\n@@ -919,6 +936,277 @@ public void testNodeNotReachableFromMaster() throws Exception {\n ensureStableCluster(3);\n }\n \n+ /*\n+ * Tests a visibility issue if a shard is in POST_RECOVERY\n+ *\n+ * When a user indexes a document, then refreshes and then a executes a search and all are successful and no timeouts etc then\n+ * the document must be visible for the search.\n+ *\n+ * When a primary is relocating from node_1 to node_2, there can be a short time where both old and new primary\n+ * are started and accept indexing and read requests. However, the new primary might not be visible to nodes\n+ * that lag behind one cluster state. If such a node then sends a refresh to the index, this refresh request\n+ * must reach the new primary on node_2 too. Otherwise a different node that searches on the new primary might not\n+ * find the indexed document although a refresh was executed before.\n+ *\n+ * In detail:\n+ * Cluster state 0:\n+ * node_1: [index][0] STARTED (ShardRoutingState)\n+ * node_2: no shard\n+ *\n+ * 0. 
primary ([index][0]) relocates from node_1 to node_2\n+ * Cluster state 1:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is RECOVERING)\n+ *\n+ * 1. node_2 is done recovering, moves its shard to IndexShardState.POST_RECOVERY and sends a message to master that the shard is ShardRoutingState.STARTED\n+ * Cluster state is still the same but the IndexShardState on node_2 has changed and it now accepts writes and reads:\n+ * node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1)\n+ * node_2: [index][0] INITIALIZING (ShardRoutingState), (IndexShardState on node_2 is POST_RECOVERY)\n+ *\n+ * 2. any node receives an index request which is then executed on node_1 and node_2\n+ *\n+ * 3. node_3 sends a refresh but it is a little behind with cluster state processing and still on cluster state 0.\n+ * If refresh was a broadcast operation it send it to node_1 only because it does not know node_2 has a shard too\n+ *\n+ * 4. node_3 catches up with the cluster state and acks it to master which now can process the shard started message\n+ * from node_2 before and updates cluster state to:\n+ * Cluster state 2:\n+ * node_1: [index][0] no shard\n+ * node_2: [index][0] STARTED (ShardRoutingState), (IndexShardState on node_2 is still POST_RECOVERY)\n+ *\n+ * master sends this to all nodes.\n+ *\n+ * 5. node_4 and node_3 process cluster state 2, but node_1 and node_2 have not yet\n+ *\n+ * If now node_4 searches for document that was indexed before, it will search at node_2 because it is on\n+ * cluster state 2. It should be able to retrieve it with a search because the refresh from before was\n+ * successful.\n+ */\n+ @Test\n+ public void testReadOnPostRecoveryShards() throws Exception {\n+ List<BlockClusterStateProcessing> clusterStateBlocks = new ArrayList<>();\n+ try {\n+ configureUnicastCluster(5, null, 1);\n+ // we could probably write a test without a dedicated master node but it is easier if we use one\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ // node_1 will have the shard in the beginning\n+ Future<String> node1Future = internalCluster().startDataOnlyNodeAsync();\n+ final String masterNode = masterNodeFuture.get();\n+ final String node_1 = node1Future.get();\n+ logger.info(\"--> creating index [test] with one shard and zero replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ .put(IndexShard.INDEX_REFRESH_INTERVAL, -1))\n+ .addMapping(\"doc\", jsonBuilder().startObject().startObject(\"doc\")\n+ .startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject()\n+ .endObject().endObject())\n+ );\n+ ensureGreen(\"test\");\n+ logger.info(\"--> starting three more data nodes\");\n+ List<String> nodeNamesFuture = internalCluster().startDataOnlyNodesAsync(3).get();\n+ final String node_2 = nodeNamesFuture.get(0);\n+ final String node_3 = nodeNamesFuture.get(1);\n+ final String node_4 = nodeNamesFuture.get(2);\n+ logger.info(\"--> running cluster_health\");\n+ ClusterHealthResponse clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForNodes(\"5\")\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+\n+ 
logger.info(\"--> move shard from node_1 to node_2, and wait for relocation to finish\");\n+\n+ // block cluster state updates on node_3 so that it only sees the shard on node_1\n+ BlockClusterStateProcessing disruptionNode3 = new BlockClusterStateProcessing(node_3, getRandom());\n+ clusterStateBlocks.add(disruptionNode3);\n+ internalCluster().setDisruptionScheme(disruptionNode3);\n+ disruptionNode3.startDisrupting();\n+ // register a Tracer that notifies begin and end of a relocation\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatchNode2 = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatchNode2 = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new StartRecoveryToShardStaredTracer(logger, beginRelocationLatchNode2, endRelocationLatchNode2));\n+\n+ // block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ BlockClusterStateProcessing disruptionNode2 = new BlockClusterStateProcessing(node_2, getRandom());\n+ clusterStateBlocks.add(disruptionNode2);\n+ disruptionNode2.applyToCluster(internalCluster());\n+ BlockClusterStateProcessing disruptionNode1 = new BlockClusterStateProcessing(node_1, getRandom());\n+ clusterStateBlocks.add(disruptionNode1);\n+ disruptionNode1.applyToCluster(internalCluster());\n+\n+ logger.info(\"--> move shard from node_1 to node_2\");\n+ // don't block on the relocation. cluster state updates are blocked on node_3 and the relocation would timeout\n+ Future<ClusterRerouteResponse> rerouteFuture = internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).setTimeout(new TimeValue(1000, TimeUnit.MILLISECONDS)).execute();\n+\n+ logger.info(\"--> wait for relocation to start\");\n+ // wait for relocation to start\n+ beginRelocationLatchNode2.await();\n+ // start to block cluster state updates on node_1 and node_2 so that we end up with two primaries\n+ // one STARTED on node_1 and one in POST_RECOVERY on node_2\n+ disruptionNode1.startDisrupting();\n+ disruptionNode2.startDisrupting();\n+ endRelocationLatchNode2.await();\n+ final Client node3Client = internalCluster().client(node_3);\n+ final Client node2Client = internalCluster().client(node_2);\n+ final Client node1Client = internalCluster().client(node_1);\n+ final Client node4Client = internalCluster().client(node_4);\n+ logger.info(\"--> index doc\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ assertTrue(node3Client.prepareIndex(\"test\", \"doc\").setSource(\"{\\\"text\\\":\\\"a\\\"}\").get().isCreated());\n+ //sometimes refresh and sometimes flush\n+ int refreshOrFlushType = randomIntBetween(1, 2);\n+ switch (refreshOrFlushType) {\n+ case 1: {\n+ logger.info(\"--> refresh from node_3\");\n+ RefreshResponse refreshResponse = node3Client.admin().indices().prepareRefresh().get();\n+ assertThat(refreshResponse.getFailedShards(), equalTo(0));\n+ // the total shards is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(refreshResponse.getTotalShards(), equalTo(2));\n+ assertThat(refreshResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ case 2: {\n+ logger.info(\"--> flush from node_3\");\n+ FlushResponse flushResponse = node3Client.admin().indices().prepareFlush().get();\n+ assertThat(flushResponse.getFailedShards(), equalTo(0));\n+ // the total shards 
is num replicas + 1 so that can be lower here because one shard\n+ // is relocating and counts twice as successful\n+ assertThat(flushResponse.getTotalShards(), equalTo(2));\n+ assertThat(flushResponse.getSuccessfulShards(), equalTo(2));\n+ break;\n+ }\n+ default:\n+ fail(\"this is test bug, number should be between 1 and 2\");\n+ }\n+ // now stop disrupting so that node_3 can ack last cluster state to master and master can continue\n+ // to publish the next cluster state\n+ logger.info(\"--> stop disrupting node_3\");\n+ disruptionNode3.stopDisrupting();\n+ rerouteFuture.get();\n+ logger.info(\"--> wait for node_4 to get new cluster state\");\n+ // wait until node_4 actually has the new cluster state in which node_1 has no shard\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ ClusterState clusterState = node4Client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ // get the node id from the name. TODO: Is there a better way to do this?\n+ String nodeId = null;\n+ for (RoutingNode node : clusterState.getRoutingNodes()) {\n+ if (node.node().name().equals(node_1)) {\n+ nodeId = node.nodeId();\n+ }\n+ }\n+ assertNotNull(nodeId);\n+ // check that node_1 does not have the shard in local cluster state\n+ assertFalse(clusterState.getRoutingNodes().routingNodeIter(nodeId).hasNext());\n+ }\n+ });\n+\n+ logger.info(\"--> run count from node_4\");\n+ logLocalClusterStates(node1Client, node2Client, node3Client, node4Client);\n+ CountResponse countResponse = node4Client.prepareCount(\"test\").setPreference(\"local\").get();\n+ assertThat(countResponse.getCount(), equalTo(1l));\n+ logger.info(\"--> stop disrupting node_1 and node_2\");\n+ disruptionNode2.stopDisrupting();\n+ disruptionNode1.stopDisrupting();\n+ // wait for relocation to finish\n+ logger.info(\"--> wait for relocation to finish\");\n+ clusterHealth = client().admin().cluster().prepareHealth()\n+ .setWaitForRelocatingShards(0)\n+ .get();\n+ assertThat(clusterHealth.isTimedOut(), equalTo(false));\n+ } catch (AssertionError e) {\n+ for (BlockClusterStateProcessing blockClusterStateProcessing : clusterStateBlocks) {\n+ blockClusterStateProcessing.stopDisrupting();\n+ }\n+ throw e;\n+ }\n+ }\n+\n+ /**\n+ * This Tracer can be used to signal start of a recovery and shard started event after translog was copied\n+ */\n+ public static class StartRecoveryToShardStaredTracer extends MockTransportService.Tracer {\n+ private final ESLogger logger;\n+ private final CountDownLatch beginRelocationLatch;\n+ private final CountDownLatch sentShardStartedLatch;\n+\n+ public StartRecoveryToShardStaredTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch sentShardStartedLatch) {\n+ this.logger = logger;\n+ this.beginRelocationLatch = beginRelocationLatch;\n+ this.sentShardStartedLatch = sentShardStartedLatch;\n+ }\n+\n+ @Override\n+ public void requestSent(DiscoveryNode node, long requestId, String action, TransportRequestOptions options) {\n+ if (action.equals(RecoverySource.Actions.START_RECOVERY)) {\n+ logger.info(\"sent: {}, relocation starts\", action);\n+ beginRelocationLatch.countDown();\n+ }\n+ if (action.equals(ShardStateAction.SHARD_STARTED_ACTION_NAME)) {\n+ logger.info(\"sent: {}, shard started\", action);\n+ sentShardStartedLatch.countDown();\n+ }\n+ }\n+ }\n+\n+ private void logLocalClusterStates(Client... 
clients) {\n+ int counter = 1;\n+ for (Client client : clients) {\n+ ClusterState clusterState = client.admin().cluster().prepareState().setLocal(true).get().getState();\n+ logger.info(\"--> cluster state on node_{} {}\", counter, clusterState.prettyPrint());\n+ counter++;\n+ }\n+ }\n+\n+ /**\n+ * This test creates a scenario where a primary shard (0 replicas) relocates and is in POST_RECOVERY on the target\n+ * node but already deleted on the source node. Search request should still work.\n+ */\n+ @Test\n+ public void searchWithRelocationAndSlowClusterStateProcessing() throws Exception {\n+ configureUnicastCluster(3, null, 1);\n+ Future<String> masterNodeFuture = internalCluster().startMasterOnlyNodeAsync();\n+ Future<String> node_1Future = internalCluster().startDataOnlyNodeAsync();\n+\n+ final String node_1 = node_1Future.get();\n+ final String masterNode = masterNodeFuture.get();\n+ logger.info(\"--> creating index [test] with one shard and on replica\");\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ Settings.builder().put(indexSettings())\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n+ );\n+ ensureGreen(\"test\");\n+\n+ Future<String> node_2Future = internalCluster().startDataOnlyNodeAsync();\n+ final String node_2 = node_2Future.get();\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ for (int i = 0; i < 100; i++) {\n+ indexRequestBuilderList.add(client().prepareIndex().setIndex(\"test\").setType(\"doc\").setSource(\"{\\\"int_field\\\":1}\"));\n+ }\n+ indexRandom(true, indexRequestBuilderList);\n+ SingleNodeDisruption disruption = new BlockClusterStateProcessing(node_2, getRandom());\n+\n+ internalCluster().setDisruptionScheme(disruption);\n+ MockTransportService transportServiceNode2 = (MockTransportService) internalCluster().getInstance(TransportService.class, node_2);\n+ CountDownLatch beginRelocationLatch = new CountDownLatch(1);\n+ CountDownLatch endRelocationLatch = new CountDownLatch(1);\n+ transportServiceNode2.addTracer(new IndicesStoreIntegrationIT.ReclocationStartEndTracer(logger, beginRelocationLatch, endRelocationLatch));\n+ internalCluster().client().admin().cluster().prepareReroute().add(new MoveAllocationCommand(new ShardId(\"test\", 0), node_1, node_2)).get();\n+ // wait for relocation to start\n+ beginRelocationLatch.await();\n+ disruption.startDisrupting();\n+ // wait for relocation to finish\n+ endRelocationLatch.await();\n+ // now search for the documents and see if we get a reply\n+ assertThat(client().prepareCount().get().getCount(), equalTo(100l));\n+ }\n+\n @Test\n public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Exception {\n // test for https://github.com/elastic/elasticsearch/issues/8823\n@@ -932,6 +1220,7 @@ public void testIndexImportedFromDataOnlyNodesIfMasterLostDataFolder() throws Ex\n ensureGreen();\n \n internalCluster().restartNode(masterNode, new InternalTestCluster.RestartCallback() {\n+ @Override\n public boolean clearData(String nodeName) {\n return true;\n }", "filename": "core/src/test/java/org/elasticsearch/discovery/DiscoveryWithServiceDisruptionsIT.java", "status": "modified" } ] }
{ "body": "While using the 2.0 branch, @jbudz and I are noticing that `_default_` mappings always cause unresolvable type conflicts.\n\nThis is the mapping we are testing with:\n\n``` curl\ncurl -XPOST localhost:9200/logstash-2015.08.25 -d '{\n \"mappings\":{\n \"_default_\": {\n \"properties\":{\n \"ip\":{\n \"type\":\"ip\"\n }\n }\n }\n }\n}'\n```\n\nAnd the documents that we are sending:\n\n```\ncurl -XPOST localhost:9200/_bulk -d '{\"index\":{\"_index\":\"logstash-2015.08.25\",\"_type\":\"apache\"}}\n{\"index\":\"logstash-2015.08.25\",\"ip\":\"192.168.1.1\"}\n{\"index\":{\"_index\":\"logstash-2015.08.25\",\"_type\":\"nginx\"}}\n{\"index\":\"logstash-2015.08.25\",\"ip\":\"192.168.1.1\"}\n'\n```\n\nBut the bulk fails with `\"Mapper for [ip] conflicts with existing mapping in other types:\\n[mapper [ip] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types.]\"`. \n\nSince we are using `_default_` as the type, it seems like `update_all_types` should not be needed, since we are not actually creating the types until the documents are indexed.\n", "comments": [ { "body": "@rjernst could you take a look please\n", "created_at": "2015-08-26T09:31:14Z" }, { "body": "Actually, this isn't about default mappings, it is a problem with fields of type `ip`. This request will replicate the same problem:\n\n```\nPUT my_index\n{\n \"mappings\": {\n \"one\": {\n \"properties\": {\n \"field\": {\n \"type\": \"ip\"\n }\n }\n },\n \"two\": {\n \"properties\": {\n \"field\": {\n \"type\": \"ip\"\n }\n }\n }\n }\n}\n```\n", "created_at": "2015-08-26T13:50:33Z" }, { "body": "@clintongormley Yes, it affects ip field type. The problem is how the search analyzer for ip fields is setup. For other numeric fields, it sets up an index analyzer for the numeric precision, and a search analyzer for the \"max\" value, and generally has constants for each possible precision. However, ip fields create these (even the \"max\" on for search analyzer) on the fly every time. This exposed a bug in our comparison of search analyzer when checking compatibility: we are looking at reference equality, instead of the name of the analyzer.\n\nWhen I found the bug, I realized we had a hole in the unit tests (it was something I was relying on existing mappings tests for, but that was obviously insufficient!). I have a fix ready, and have been working on an improved base test which will exhaustively test compatibility checks for every property, for every field type.\n", "created_at": "2015-08-26T15:34:11Z" } ], "number": 13112, "title": "IP fields cause mapping conflicts" }
{ "body": "The field type tests for mappings had a huge hole: check compatibility\nwas not tested directly at all! I had meant for this to happen in a\nfollow up after #8871, and was relying on existing mapping tests.\nHowever, there were a number of issues.\n\nThis change reworks the fieldtype tests to be able to check all settable\nproperties on a field type work with checkCompatibility. It fixes a\nhandful of small bugs in various field types. In particular, analyzer\ncomparison was just wrong: it was comparing reference equality for\nsearch analyzer instead of the analyzer name. There was also no check\nfor search quote analyzer.\n\ncloses #13112\n", "number": 13206, "review_comments": [ { "body": "I don't have a better idea, but this makes me slightly nervous since nothing in the API prevents you from doing\n\n``` java\nAnalyzer a1 = new NamedAnalyzer(\"foo\", someAnalyzer);\nAnalyzer a2 = new NamedAnalyzer(\"foo\", otherAnalyzer);\n```\n", "created_at": "2015-09-01T07:13:52Z" }, { "body": "Since an analyzer is essentially code (made up of bundles of other code like tokenizers and token filters), I don't think there is really a way to make them comparable. So I'm not sure there is really anything we can do, the name is the only comparable identifier we have.\n", "created_at": "2015-09-01T18:44:00Z" } ], "title": "Fix numerous checks for equality and compatibility in mapper field types" }
{ "commits": [ { "message": "Mappings: Fix numerous checks for equality and compatibility\n\nThe field type tests for mappings had a huge hole: check compatibility\nwas not tested directly at all! I had meant for this to happen in a\nfollow up after #8871, and was relying on existing mapping tests.\nHowever, there were a number of issues.\n\nThis change reworks the fieldtype tests to be able to check all settable\nproperties on a field type work with checkCompatibility. It fixes a\nhandful of small bugs in various field types. In particular, analyzer\ncomparison was just wrong: it was comparing reference equality for\nsearch analyzer instead of the analyzer name. There was also no check\nfor search quote analyzer.\n\ncloses #13112" } ], "files": [ { "diff": "@@ -22,6 +22,8 @@\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.DelegatingAnalyzerWrapper;\n \n+import java.util.Objects;\n+\n /**\n * Named analyzer is an analyzer wrapper around an actual analyzer ({@link #analyzer} that is associated\n * with a name ({@link #name()}.\n@@ -104,4 +106,17 @@ public void setReusableComponents(Analyzer a, String f, TokenStreamComponents c)\n throw new IllegalStateException(\"NamedAnalyzer cannot be wrapped with a wrapper, only a delegator\");\n }\n };\n+\n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (!(o instanceof NamedAnalyzer)) return false;\n+ NamedAnalyzer that = (NamedAnalyzer) o;\n+ return Objects.equals(name, that.name);\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(name);\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/index/analysis/NamedAnalyzer.java", "status": "modified" }, { "diff": "@@ -192,13 +192,24 @@ public MappedFieldType() {\n public boolean equals(Object o) {\n if (!super.equals(o)) return false;\n MappedFieldType fieldType = (MappedFieldType) o;\n+ // check similarity first because we need to check the name, and it might be null\n+ // TODO: SimilarityProvider should have equals?\n+ if (similarity == null || fieldType.similarity == null) {\n+ if (similarity != fieldType.similarity) {\n+ return false;\n+ }\n+ } else {\n+ if (Objects.equals(similarity.name(), fieldType.similarity.name()) == false) {\n+ return false;\n+ }\n+ }\n+\n return boost == fieldType.boost &&\n docValues == fieldType.docValues &&\n Objects.equals(names, fieldType.names) &&\n Objects.equals(indexAnalyzer, fieldType.indexAnalyzer) &&\n Objects.equals(searchAnalyzer, fieldType.searchAnalyzer) &&\n Objects.equals(searchQuoteAnalyzer(), fieldType.searchQuoteAnalyzer()) &&\n- Objects.equals(similarity, fieldType.similarity) &&\n Objects.equals(normsLoading, fieldType.normsLoading) &&\n Objects.equals(fieldDataType, fieldType.fieldDataType) &&\n Objects.equals(nullValue, fieldType.nullValue) &&\n@@ -207,10 +218,11 @@ public boolean equals(Object o) {\n \n @Override\n public int hashCode() {\n- return Objects.hash(super.hashCode(), names, boost, docValues, indexAnalyzer, searchAnalyzer, searchQuoteAnalyzer, similarity, normsLoading, fieldDataType, nullValue, nullValueAsString);\n+ return Objects.hash(super.hashCode(), names, boost, docValues, indexAnalyzer, searchAnalyzer, searchQuoteAnalyzer,\n+ similarity == null ? 
null : similarity.name(), normsLoading, fieldDataType, nullValue, nullValueAsString);\n }\n \n-// norelease: we need to override freeze() and add safety checks that all settings are actually set\n+ // norelease: we need to override freeze() and add safety checks that all settings are actually set\n \n /** Returns the name of this type, as would be specified in mapping properties */\n public abstract String typeName();\n@@ -234,51 +246,48 @@ public void checkCompatibility(MappedFieldType other, List<String> conflicts, bo\n boolean mergeWithIndexed = other.indexOptions() != IndexOptions.NONE;\n // TODO: should be validating if index options go \"up\" (but \"down\" is ok)\n if (indexed != mergeWithIndexed || tokenized() != other.tokenized()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different index values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [index] values\");\n }\n if (stored() != other.stored()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different store values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [store] values\");\n }\n if (hasDocValues() == false && other.hasDocValues()) {\n // don't add conflict if this mapper has doc values while the mapper to merge doesn't since doc values are implicitly set\n // when the doc_values field data format is configured\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different doc_values values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [doc_values] values, cannot change from disabled to enabled\");\n }\n if (omitNorms() && !other.omitNorms()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] cannot enable norms (`norms.enabled`)\");\n- }\n- if (tokenized() != other.tokenized()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different tokenize values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [omit_norms] values, cannot change from disable to enabled\");\n }\n if (storeTermVectors() != other.storeTermVectors()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different store_term_vector values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [store_term_vector] values\");\n }\n if (storeTermVectorOffsets() != other.storeTermVectorOffsets()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different store_term_vector_offsets values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [store_term_vector_offsets] values\");\n }\n if (storeTermVectorPositions() != other.storeTermVectorPositions()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different store_term_vector_positions values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [store_term_vector_positions] values\");\n }\n if (storeTermVectorPayloads() != other.storeTermVectorPayloads()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different store_term_vector_payloads values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [store_term_vector_payloads] values\");\n }\n \n // null and \"default\"-named index analyzers both mean the default is used\n if (indexAnalyzer() == null || \"default\".equals(indexAnalyzer().name())) {\n if (other.indexAnalyzer() != null && \"default\".equals(other.indexAnalyzer().name()) == false) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different analyzer\");\n+ conflicts.add(\"mapper 
[\" + names().fullName() + \"] has different [analyzer]\");\n }\n } else if (other.indexAnalyzer() == null || \"default\".equals(other.indexAnalyzer().name())) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different analyzer\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [analyzer]\");\n } else if (indexAnalyzer().name().equals(other.indexAnalyzer().name()) == false) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different analyzer\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [analyzer]\");\n }\n \n if (!names().indexName().equals(other.names().indexName())) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different index_name\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [index_name]\");\n }\n if (Objects.equals(similarity(), other.similarity()) == false) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different similarity\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [similarity]\");\n }\n \n if (strict) {\n@@ -289,11 +298,14 @@ public void checkCompatibility(MappedFieldType other, List<String> conflicts, bo\n conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [boost] across all types.\");\n }\n if (normsLoading() != other.normsLoading()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [norms].loading across all types.\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [norms.loading] across all types.\");\n }\n if (Objects.equals(searchAnalyzer(), other.searchAnalyzer()) == false) {\n conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [search_analyzer] across all types.\");\n }\n+ if (Objects.equals(searchQuoteAnalyzer(), other.searchQuoteAnalyzer()) == false) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [search_quote_analyzer] across all types.\");\n+ }\n if (Objects.equals(fieldDataType(), other.fieldDataType()) == false) {\n conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. 
Set update_all_types to true to update [fielddata] across all types.\");\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/MappedFieldType.java", "status": "modified" }, { "diff": "@@ -134,6 +134,15 @@ public String typeName() {\n return CONTENT_TYPE;\n }\n \n+ @Override\n+ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts, boolean strict) {\n+ super.checkCompatibility(fieldType, conflicts, strict);\n+ BinaryFieldType other = (BinaryFieldType)fieldType;\n+ if (tryUncompressing() != other.tryUncompressing()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [try_uncompressing] (IMPOSSIBLE)\");\n+ }\n+ }\n+\n public boolean tryUncompressing() {\n return tryUncompressing;\n }", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java", "status": "modified" }, { "diff": "@@ -57,6 +57,7 @@\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.Objects;\n import java.util.Set;\n import java.util.SortedMap;\n \n@@ -237,6 +238,27 @@ protected CompletionFieldType(CompletionFieldType ref) {\n this.contextMapping = ref.contextMapping;\n }\n \n+ @Override\n+ public boolean equals(Object o) {\n+ if (this == o) return true;\n+ if (!(o instanceof CompletionFieldType)) return false;\n+ if (!super.equals(o)) return false;\n+ CompletionFieldType fieldType = (CompletionFieldType) o;\n+ return analyzingSuggestLookupProvider.getPreserveSep() == fieldType.analyzingSuggestLookupProvider.getPreserveSep() &&\n+ analyzingSuggestLookupProvider.getPreservePositionsIncrements() == fieldType.analyzingSuggestLookupProvider.getPreservePositionsIncrements() &&\n+ analyzingSuggestLookupProvider.hasPayloads() == fieldType.analyzingSuggestLookupProvider.hasPayloads() &&\n+ Objects.equals(getContextMapping(), fieldType.getContextMapping());\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return Objects.hash(super.hashCode(),\n+ analyzingSuggestLookupProvider.getPreserveSep(),\n+ analyzingSuggestLookupProvider.getPreservePositionsIncrements(),\n+ analyzingSuggestLookupProvider.hasPayloads(),\n+ getContextMapping());\n+ }\n+\n @Override\n public CompletionFieldType clone() {\n return new CompletionFieldType(this);\n@@ -252,16 +274,16 @@ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts\n super.checkCompatibility(fieldType, conflicts, strict);\n CompletionFieldType other = (CompletionFieldType)fieldType;\n if (analyzingSuggestLookupProvider.hasPayloads() != other.analyzingSuggestLookupProvider.hasPayloads()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different payload values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [payload] values\");\n }\n if (analyzingSuggestLookupProvider.getPreservePositionsIncrements() != other.analyzingSuggestLookupProvider.getPreservePositionsIncrements()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different 'preserve_position_increments' values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [preserve_position_increments] values\");\n }\n if (analyzingSuggestLookupProvider.getPreserveSep() != other.analyzingSuggestLookupProvider.getPreserveSep()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different 'preserve_separators' values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [preserve_separators] values\");\n }\n if(!ContextMapping.mappingsAreEqual(getContextMapping(), 
other.getContextMapping())) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different 'context_mapping' values\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [context_mapping] values\");\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/core/CompletionFieldMapper.java", "status": "modified" }, { "diff": "@@ -350,20 +350,26 @@ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts\n super.checkCompatibility(fieldType, conflicts, strict);\n GeoPointFieldType other = (GeoPointFieldType)fieldType;\n if (isLatLonEnabled() != other.isLatLonEnabled()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different lat_lon\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [lat_lon]\");\n }\n if (isGeohashEnabled() != other.isGeohashEnabled()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different geohash\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [geohash]\");\n }\n if (geohashPrecision() != other.geohashPrecision()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different geohash_precision\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [geohash_precision]\");\n }\n if (isGeohashPrefixEnabled() != other.isGeohashPrefixEnabled()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different geohash_prefix\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [geohash_prefix]\");\n }\n if (isLatLonEnabled() && other.isLatLonEnabled() &&\n latFieldType().numericPrecisionStep() != other.latFieldType().numericPrecisionStep()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different precision_step\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [precision_step]\");\n+ }\n+ if (ignoreMalformed() != other.ignoreMalformed()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [ignore_malformed]\");\n+ }\n+ if (coerce() != other.coerce()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [coerce]\");\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -280,21 +280,30 @@ public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts\n GeoShapeFieldType other = (GeoShapeFieldType)fieldType;\n // prevent user from changing strategies\n if (strategyName().equals(other.strategyName()) == false) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different strategy\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [strategy]\");\n }\n \n // prevent user from changing trees (changes encoding)\n if (tree().equals(other.tree()) == false) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different tree\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [tree]\");\n }\n \n // TODO we should allow this, but at the moment levels is used to build bookkeeping variables\n // in lucene's SpatialPrefixTree implementations, need a patch to correct that first\n if (treeLevels() != other.treeLevels()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different tree_levels\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [tree_levels]\");\n }\n if (precisionInMeters() != other.precisionInMeters()) {\n- conflicts.add(\"mapper [\" + names().fullName() + \"] has different 
precision\");\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] has different [precision]\");\n+ }\n+\n+ if (strict) {\n+ if (orientation() != other.orientation()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [orientation] across all types.\");\n+ }\n+ if (distanceErrorPct() != other.distanceErrorPct()) {\n+ conflicts.add(\"mapper [\" + names().fullName() + \"] is used by multiple types. Set update_all_types to true to update [distance_error_pct] across all types.\");\n+ }\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java", "status": "modified" }, { "diff": "@@ -167,6 +167,7 @@ public String typeName() {\n \n @Override\n public void checkCompatibility(MappedFieldType fieldType, List<String> conflicts, boolean strict) {\n+ super.checkCompatibility(fieldType, conflicts, strict);\n if (strict) {\n FieldNamesFieldType other = (FieldNamesFieldType)fieldType;\n if (isEnabled() != other.isEnabled()) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java", "status": "modified" }, { "diff": "@@ -182,14 +182,14 @@ public void testCheckCompatibilityConflict() {\n lookup.checkCompatibility(newList(f3), false);\n fail(\"expected conflict\");\n } catch (IllegalArgumentException e) {\n- assertTrue(e.getMessage().contains(\"has different store values\"));\n+ assertTrue(e.getMessage().contains(\"has different [store] values\"));\n }\n // even with updateAllTypes == true, incompatible\n try {\n lookup.checkCompatibility(newList(f3), true);\n fail(\"expected conflict\");\n } catch (IllegalArgumentException e) {\n- assertTrue(e.getMessage().contains(\"has different store values\"));\n+ assertTrue(e.getMessage().contains(\"has different [store] values\"));\n }\n }\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldTypeLookupTests.java", "status": "modified" }, { "diff": "@@ -18,57 +18,197 @@\n */\n package org.elasticsearch.index.mapper;\n \n-import org.elasticsearch.common.lucene.Lucene;\n+import org.apache.lucene.analysis.standard.StandardAnalyzer;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.similarity.BM25SimilarityProvider;\n import org.elasticsearch.test.ESTestCase;\n \n import java.util.ArrayList;\n+import java.util.Arrays;\n import java.util.List;\n \n /** Base test case for subclasses of MappedFieldType */\n public abstract class FieldTypeTestCase extends ESTestCase {\n \n+ /** Abstraction for mutating a property of a MappedFieldType */\n+ public static abstract class Modifier {\n+ /** The name of the property that is being modified. Used in test failure messages. 
*/\n+ public final String property;\n+ /** true if this modifier only makes types incompatible in strict mode, false otherwise */\n+ public final boolean strictOnly;\n+ /** true if reversing the order of checkCompatibility arguments should result in the same conflicts, false otherwise **/\n+ public final boolean symmetric;\n+\n+ public Modifier(String property, boolean strictOnly, boolean symmetric) {\n+ this.property = property;\n+ this.strictOnly = strictOnly;\n+ this.symmetric = symmetric;\n+ }\n+\n+ /** Modifies the property */\n+ public abstract void modify(MappedFieldType ft);\n+ /**\n+ * Optional method to implement that allows the field type that will be compared to be modified,\n+ * so that it does not have the default value for the property being modified.\n+ */\n+ public void normalizeOther(MappedFieldType other) {}\n+ }\n+\n+ private final List<Modifier> modifiers = new ArrayList<>(Arrays.asList(\n+ new Modifier(\"boost\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setBoost(1.1f);\n+ }\n+ },\n+ new Modifier(\"doc_values\", false, false) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setHasDocValues(ft.hasDocValues() == false);\n+ }\n+ },\n+ new Modifier(\"analyzer\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setIndexAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"analyzer\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setIndexAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ @Override\n+ public void normalizeOther(MappedFieldType other) {\n+ other.setIndexAnalyzer(new NamedAnalyzer(\"foo\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"search_analyzer\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSearchAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"search_analyzer\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSearchAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ @Override\n+ public void normalizeOther(MappedFieldType other) {\n+ other.setSearchAnalyzer(new NamedAnalyzer(\"foo\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"search_quote_analyzer\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSearchQuoteAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"search_quote_analyzer\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSearchQuoteAnalyzer(new NamedAnalyzer(\"bar\", new StandardAnalyzer()));\n+ }\n+ @Override\n+ public void normalizeOther(MappedFieldType other) {\n+ other.setSearchQuoteAnalyzer(new NamedAnalyzer(\"foo\", new StandardAnalyzer()));\n+ }\n+ },\n+ new Modifier(\"similarity\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSimilarity(new BM25SimilarityProvider(\"foo\", Settings.EMPTY));\n+ }\n+ },\n+ new Modifier(\"similarity\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setSimilarity(new BM25SimilarityProvider(\"foo\", Settings.EMPTY));\n+ }\n+ @Override\n+ public void normalizeOther(MappedFieldType other) {\n+ other.setSimilarity(new BM25SimilarityProvider(\"bar\", Settings.EMPTY));\n+ }\n+ },\n+ new Modifier(\"norms.loading\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ 
ft.setNormsLoading(MappedFieldType.Loading.LAZY);\n+ }\n+ },\n+ new Modifier(\"fielddata\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setFieldDataType(new FieldDataType(\"foo\", Settings.builder().put(\"loading\", \"eager\").build()));\n+ }\n+ },\n+ new Modifier(\"null_value\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ft.setNullValue(dummyNullValue);\n+ }\n+ }\n+ ));\n+\n+ /**\n+ * Add a mutation that will be tested for all expected semantics of equality and compatibility.\n+ * These should be added in an @Before method.\n+ */\n+ protected void addModifier(Modifier modifier) {\n+ modifiers.add(modifier);\n+ }\n+\n+ private Object dummyNullValue = \"dummyvalue\";\n+\n+ /** Sets the null value used by the modifier for null value testing. This should be set in an @Before method. */\n+ protected void setDummyNullValue(Object value) {\n+ dummyNullValue = value;\n+ }\n+\n /** Create a default constructed fieldtype */\n protected abstract MappedFieldType createDefaultFieldType();\n \n- MappedFieldType createNamedDefaultFieldType(String name) {\n+ MappedFieldType createNamedDefaultFieldType() {\n MappedFieldType fieldType = createDefaultFieldType();\n- fieldType.setNames(new MappedFieldType.Names(name));\n+ fieldType.setNames(new MappedFieldType.Names(\"foo\"));\n return fieldType;\n }\n \n- /** A dummy null value to use when modifying null value */\n- protected Object dummyNullValue() {\n- return \"dummyvalue\";\n+ // TODO: remove this once toString is no longer final on FieldType...\n+ protected void assertFieldTypeEquals(String property, MappedFieldType ft1, MappedFieldType ft2) {\n+ if (ft1.equals(ft2) == false) {\n+ fail(\"Expected equality, testing property \" + property + \"\\nexpected: \" + toString(ft1) + \"; \\nactual: \" + toString(ft2) + \"\\n\");\n+ }\n }\n \n- /** Returns the number of properties that can be modified for the fieldtype */\n- protected int numProperties() {\n- return 10;\n+ protected void assertFieldTypeNotEquals(String property, MappedFieldType ft1, MappedFieldType ft2) {\n+ if (ft1.equals(ft2)) {\n+ fail(\"Expected inequality, testing property \" + property + \"\\nfirst: \" + toString(ft1) + \"; \\nsecond: \" + toString(ft2) + \"\\n\");\n+ }\n }\n \n- /** Modifies a property, identified by propNum, on the given fieldtype */\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- switch (propNum) {\n- case 0: ft.setNames(new MappedFieldType.Names(\"dummy\")); break;\n- case 1: ft.setBoost(1.1f); break;\n- case 2: ft.setHasDocValues(!ft.hasDocValues()); break;\n- case 3: ft.setIndexAnalyzer(Lucene.STANDARD_ANALYZER); break;\n- case 4: ft.setSearchAnalyzer(Lucene.STANDARD_ANALYZER); break;\n- case 5: ft.setSearchQuoteAnalyzer(Lucene.STANDARD_ANALYZER); break;\n- case 6: ft.setSimilarity(new BM25SimilarityProvider(\"foo\", Settings.EMPTY)); break;\n- case 7: ft.setNormsLoading(MappedFieldType.Loading.LAZY); break;\n- case 8: ft.setFieldDataType(new FieldDataType(\"foo\", Settings.builder().put(\"loading\", \"eager\").build())); break;\n- case 9: ft.setNullValue(dummyNullValue()); break;\n- default: fail(\"unknown fieldtype property number \" + propNum);\n- }\n+ protected void assertCompatible(String msg, MappedFieldType ft1, MappedFieldType ft2, boolean strict) {\n+ List<String> conflicts = new ArrayList<>();\n+ ft1.checkCompatibility(ft2, conflicts, strict);\n+ assertTrue(\"Found conflicts for \" + msg + \": \" + conflicts, conflicts.isEmpty());\n }\n \n- // TODO: remove this once 
toString is no longer final on FieldType...\n- protected void assertEquals(int i, MappedFieldType ft1, MappedFieldType ft2) {\n- assertEquals(\"prop \" + i + \"\\nexpected: \" + toString(ft1) + \"; \\nactual: \" + toString(ft2), ft1, ft2);\n+ protected void assertNotCompatible(String msg, MappedFieldType ft1, MappedFieldType ft2, boolean strict, String... messages) {\n+ assert messages.length != 0;\n+ List<String> conflicts = new ArrayList<>();\n+ ft1.checkCompatibility(ft2, conflicts, strict);\n+ for (String message : messages) {\n+ boolean found = false;\n+ for (String conflict : conflicts) {\n+ if (conflict.contains(message)) {\n+ found = true;\n+ }\n+ }\n+ assertTrue(\"Missing conflict for \" + msg + \": [\" + message + \"] in conflicts \" + conflicts, found);\n+ }\n }\n \n protected String toString(MappedFieldType ft) {\n@@ -88,53 +228,58 @@ protected String toString(MappedFieldType ft) {\n }\n \n public void testClone() {\n- MappedFieldType fieldType = createNamedDefaultFieldType(\"foo\");\n+ MappedFieldType fieldType = createNamedDefaultFieldType();\n MappedFieldType clone = fieldType.clone();\n assertNotSame(clone, fieldType);\n assertEquals(clone.getClass(), fieldType.getClass());\n assertEquals(clone, fieldType);\n assertEquals(clone, clone.clone()); // transitivity\n \n- for (int i = 0; i < numProperties(); ++i) {\n- fieldType = createNamedDefaultFieldType(\"foo\");\n- modifyProperty(fieldType, i);\n+ for (Modifier modifier : modifiers) {\n+ fieldType = createNamedDefaultFieldType();\n+ modifier.modify(fieldType);\n clone = fieldType.clone();\n assertNotSame(clone, fieldType);\n- assertEquals(i, clone, fieldType);\n+ assertFieldTypeEquals(modifier.property, clone, fieldType);\n }\n }\n \n public void testEquals() {\n- MappedFieldType ft1 = createNamedDefaultFieldType(\"foo\");\n- MappedFieldType ft2 = createNamedDefaultFieldType(\"foo\");\n+ MappedFieldType ft1 = createNamedDefaultFieldType();\n+ MappedFieldType ft2 = createNamedDefaultFieldType();\n assertEquals(ft1, ft1); // reflexive\n assertEquals(ft1, ft2); // symmetric\n assertEquals(ft2, ft1);\n assertEquals(ft1.hashCode(), ft2.hashCode());\n \n- for (int i = 0; i < numProperties(); ++i) {\n- ft2 = createNamedDefaultFieldType(\"foo\");\n- modifyProperty(ft2, i);\n- assertNotEquals(ft1, ft2);\n- assertNotEquals(ft1.hashCode(), ft2.hashCode());\n+ for (Modifier modifier : modifiers) {\n+ ft1 = createNamedDefaultFieldType();\n+ ft2 = createNamedDefaultFieldType();\n+ modifier.modify(ft2);\n+ assertFieldTypeNotEquals(modifier.property, ft1, ft2);\n+ assertNotEquals(\"hash code for modified property \" + modifier.property, ft1.hashCode(), ft2.hashCode());\n+ // modify the same property and they are equal again\n+ modifier.modify(ft1);\n+ assertFieldTypeEquals(modifier.property, ft1, ft2);\n+ assertEquals(\"hash code for modified property \" + modifier.property, ft1.hashCode(), ft2.hashCode());\n }\n }\n \n public void testFreeze() {\n- for (int i = 0; i < numProperties(); ++i) {\n- MappedFieldType fieldType = createNamedDefaultFieldType(\"foo\");\n+ for (Modifier modifier : modifiers) {\n+ MappedFieldType fieldType = createNamedDefaultFieldType();\n fieldType.freeze();\n try {\n- modifyProperty(fieldType, i);\n- fail(\"expected already frozen exception for property \" + i);\n+ modifier.modify(fieldType);\n+ fail(\"expected already frozen exception for property \" + modifier.property);\n } catch (IllegalStateException e) {\n assertTrue(e.getMessage().contains(\"already frozen\"));\n }\n }\n }\n \n public void 
testCheckTypeName() {\n- final MappedFieldType fieldType = createNamedDefaultFieldType(\"foo\");\n+ final MappedFieldType fieldType = createNamedDefaultFieldType();\n List<String> conflicts = new ArrayList<>();\n fieldType.checkTypeName(fieldType, conflicts);\n assertTrue(conflicts.toString(), conflicts.isEmpty());\n@@ -164,4 +309,46 @@ public void testCheckTypeName() {\n assertTrue(conflicts.get(0).contains(\"cannot be changed from type\"));\n assertEquals(1, conflicts.size());\n }\n+\n+ public void testCheckCompatibility() {\n+ MappedFieldType ft1 = createNamedDefaultFieldType();\n+ MappedFieldType ft2 = createNamedDefaultFieldType();\n+ assertCompatible(\"default\", ft1, ft2, true);\n+ assertCompatible(\"default\", ft1, ft2, false);\n+ assertCompatible(\"default\", ft2, ft1, true);\n+ assertCompatible(\"default\", ft2, ft1, false);\n+\n+ for (Modifier modifier : modifiers) {\n+ ft1 = createNamedDefaultFieldType();\n+ ft2 = createNamedDefaultFieldType();\n+ modifier.normalizeOther(ft1);\n+ modifier.modify(ft2);\n+ if (modifier.strictOnly) {\n+ String[] conflicts = {\n+ \"mapper [foo] is used by multiple types\",\n+ \"update [\" + modifier.property + \"]\"\n+ };\n+ assertCompatible(modifier.property, ft1, ft2, false);\n+ assertNotCompatible(modifier.property, ft1, ft2, true, conflicts);\n+ assertCompatible(modifier.property, ft2, ft1, false); // always symmetric when not strict\n+ if (modifier.symmetric) {\n+ assertNotCompatible(modifier.property, ft2, ft1, true, conflicts);\n+ } else {\n+ assertCompatible(modifier.property, ft2, ft1, true);\n+ }\n+ } else {\n+ // not compatible whether strict or not\n+ String conflict = \"different [\" + modifier.property + \"]\";\n+ assertNotCompatible(modifier.property, ft1, ft2, true, conflict);\n+ assertNotCompatible(modifier.property, ft1, ft2, false, conflict);\n+ if (modifier.symmetric) {\n+ assertNotCompatible(modifier.property, ft2, ft1, true, conflict);\n+ assertNotCompatible(modifier.property, ft2, ft1, false, conflict);\n+ } else {\n+ assertCompatible(modifier.property, ft2, ft1, true);\n+ assertCompatible(modifier.property, ft2, ft1, false);\n+ }\n+ }\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/FieldTypeTestCase.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class BinaryFieldTypeTests extends FieldTypeTestCase {\n \n@@ -28,17 +29,14 @@ protected MappedFieldType createDefaultFieldType() {\n return new BinaryFieldMapper.BinaryFieldType();\n }\n \n- @Override\n- protected int numProperties() {\n- return 1 + super.numProperties();\n- }\n-\n- @Override\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- BinaryFieldMapper.BinaryFieldType bft = (BinaryFieldMapper.BinaryFieldType)ft;\n- switch (propNum) {\n- case 0: bft.setTryUncompressing(!bft.tryUncompressing()); break;\n- default: super.modifyProperty(ft, propNum - 1);\n- }\n+ @Before\n+ public void setupProperties() {\n+ addModifier(new Modifier(\"try_uncompressing\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ BinaryFieldMapper.BinaryFieldType bft = (BinaryFieldMapper.BinaryFieldType)ft;\n+ bft.setTryUncompressing(!bft.tryUncompressing());\n+ }\n+ });\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/BinaryFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import 
org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class BooleanFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new BooleanFieldMapper.BooleanFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return true;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue(true);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/BooleanFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class ByteFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new ByteFieldMapper.ByteFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return (byte)10;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue((byte)10);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/ByteFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,10 +20,53 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.elasticsearch.search.suggest.completion.AnalyzingCompletionLookupProvider;\n+import org.elasticsearch.search.suggest.context.ContextBuilder;\n+import org.elasticsearch.search.suggest.context.ContextMapping;\n+import org.junit.Before;\n+\n+import java.util.SortedMap;\n+import java.util.TreeMap;\n \n public class CompletionFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n- return new CompletionFieldMapper.CompletionFieldType();\n+ CompletionFieldMapper.CompletionFieldType ft = new CompletionFieldMapper.CompletionFieldType();\n+ ft.setProvider(new AnalyzingCompletionLookupProvider(true, false, true, false));\n+ return ft;\n+ }\n+\n+ @Before\n+ public void setupProperties() {\n+ addModifier(new Modifier(\"preserve_separators\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ CompletionFieldMapper.CompletionFieldType cft = (CompletionFieldMapper.CompletionFieldType)ft;\n+ cft.setProvider(new AnalyzingCompletionLookupProvider(false, false, true, false));\n+ }\n+ });\n+ addModifier(new Modifier(\"preserve_position_increments\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ CompletionFieldMapper.CompletionFieldType cft = (CompletionFieldMapper.CompletionFieldType)ft;\n+ cft.setProvider(new AnalyzingCompletionLookupProvider(true, false, false, false));\n+ }\n+ });\n+ addModifier(new Modifier(\"payload\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ CompletionFieldMapper.CompletionFieldType cft = (CompletionFieldMapper.CompletionFieldType)ft;\n+ cft.setProvider(new AnalyzingCompletionLookupProvider(true, false, true, true));\n+ }\n+ });\n+ addModifier(new Modifier(\"context_mapping\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ CompletionFieldMapper.CompletionFieldType cft = (CompletionFieldMapper.CompletionFieldType)ft;\n+ SortedMap<String, ContextMapping> contextMapping = new TreeMap<>();\n+ contextMapping.put(\"foo\", ContextBuilder.location(\"foo\").build());\n+ cft.setContextMapping(contextMapping);\n+ }\n+ });\n }\n }", "filename": 
"core/src/test/java/org/elasticsearch/index/mapper/core/CompletionFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n import java.util.Locale;\n import java.util.concurrent.TimeUnit;\n@@ -31,23 +32,26 @@ protected MappedFieldType createDefaultFieldType() {\n return new DateFieldMapper.DateFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return 10;\n- }\n-\n- @Override\n- protected int numProperties() {\n- return 2 + super.numProperties();\n- }\n-\n- @Override\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- DateFieldMapper.DateFieldType dft = (DateFieldMapper.DateFieldType)ft;\n- switch (propNum) {\n- case 0: dft.setDateTimeFormatter(Joda.forPattern(\"basic_week_date\", Locale.ROOT)); break;\n- case 1: dft.setTimeUnit(TimeUnit.HOURS); break;\n- default: super.modifyProperty(ft, propNum - 2);\n- }\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue(10);\n+ addModifier(new Modifier(\"format\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((DateFieldMapper.DateFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"basic_week_date\", Locale.ROOT));\n+ }\n+ });\n+ addModifier(new Modifier(\"locale\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((DateFieldMapper.DateFieldType) ft).setDateTimeFormatter(Joda.forPattern(\"date_optional_time\", Locale.CANADA));\n+ }\n+ });\n+ addModifier(new Modifier(\"numeric_resolution\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((DateFieldMapper.DateFieldType)ft).setTimeUnit(TimeUnit.HOURS);\n+ }\n+ });\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/DateFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class DoubleFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new DoubleFieldMapper.DoubleFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return 10.0D;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue(10.0D);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/DoubleFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class FloatFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new DoubleFieldMapper.DoubleFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return 10.0;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue(10.0);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/FloatFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class IntegerFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return 
new IntegerFieldMapper.IntegerFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return 10;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue(10);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/IntegerFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class LongFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new LongFieldMapper.LongFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return (long)10;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue((long)10);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/LongFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,15 +20,16 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class ShortFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new ShortFieldMapper.ShortFieldType();\n }\n \n- @Override\n- protected Object dummyNullValue() {\n- return (short)10;\n+ @Before\n+ public void setupProperties() {\n+ setDummyNullValue((short)10);\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/core/ShortFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -626,9 +626,10 @@ public void testGeoPointMapperMerge() throws Exception {\n \n MergeResult mergeResult = stage1.merge(stage2.mapping(), false, false);\n assertThat(mergeResult.hasConflicts(), equalTo(true));\n- assertThat(mergeResult.buildConflicts().length, equalTo(1));\n+ assertThat(mergeResult.buildConflicts().length, equalTo(2));\n // todo better way of checking conflict?\n- assertThat(\"mapper [point] has different lat_lon\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n+ assertThat(\"mapper [point] has different [lat_lon]\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n+ assertThat(\"mapper [point] has different [ignore_malformed]\", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));\n \n // correct mapping and ensure no failures\n stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")", "filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -22,27 +22,41 @@\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.core.DoubleFieldMapper;\n import org.elasticsearch.index.mapper.core.StringFieldMapper;\n+import org.junit.Before;\n \n public class GeoPointFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new GeoPointFieldMapper.GeoPointFieldType();\n }\n \n- @Override\n- protected int numProperties() {\n- return 4 + super.numProperties();\n- }\n-\n- @Override\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- GeoPointFieldMapper.GeoPointFieldType gft = (GeoPointFieldMapper.GeoPointFieldType)ft;\n- switch (propNum) {\n- case 0: gft.setGeohashEnabled(new StringFieldMapper.StringFieldType(), 1, true); break;\n- case 1: gft.setLatLonEnabled(new DoubleFieldMapper.DoubleFieldType(), new 
DoubleFieldMapper.DoubleFieldType()); break;\n- case 2: gft.setIgnoreMalformed(!gft.ignoreMalformed()); break;\n- case 3: gft.setCoerce(!gft.coerce()); break;\n- default: super.modifyProperty(ft, propNum - 4);\n- }\n+ @Before\n+ public void setupProperties() {\n+ addModifier(new Modifier(\"geohash\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoPointFieldMapper.GeoPointFieldType)ft).setGeohashEnabled(new StringFieldMapper.StringFieldType(), 1, true);\n+ }\n+ });\n+ addModifier(new Modifier(\"lat_lon\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoPointFieldMapper.GeoPointFieldType)ft).setLatLonEnabled(new DoubleFieldMapper.DoubleFieldType(), new DoubleFieldMapper.DoubleFieldType());\n+ }\n+ });\n+ addModifier(new Modifier(\"ignore_malformed\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ GeoPointFieldMapper.GeoPointFieldType gft = (GeoPointFieldMapper.GeoPointFieldType)ft;\n+ gft.setIgnoreMalformed(!gft.ignoreMalformed());\n+ }\n+ });\n+ addModifier(new Modifier(\"coerce\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ GeoPointFieldMapper.GeoPointFieldType gft = (GeoPointFieldMapper.GeoPointFieldType)ft;\n+ gft.setCoerce(!gft.coerce());\n+ }\n+ });\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -377,10 +377,10 @@ public void testGeoShapeMapperMerge() throws Exception {\n assertThat(mergeResult.hasConflicts(), equalTo(true));\n assertThat(mergeResult.buildConflicts().length, equalTo(4));\n ArrayList conflicts = new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()));\n- assertThat(\"mapper [shape] has different strategy\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different tree\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different tree_levels\", isIn(conflicts));\n- assertThat(\"mapper [shape] has different precision\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different [strategy]\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different [tree]\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different [tree_levels]\", isIn(conflicts));\n+ assertThat(\"mapper [shape] has different [precision]\", isIn(conflicts));\n \n // verify nothing changed\n FieldMapper fieldMapper = stage1.mappers().getMapper(\"shape\");", "filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapperTests.java", "status": "modified" }, { "diff": "@@ -21,31 +21,51 @@\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class GeoShapeFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n- GeoShapeFieldMapper.GeoShapeFieldType gft = new GeoShapeFieldMapper.GeoShapeFieldType();\n- gft.setNames(new MappedFieldType.Names(\"testgeoshape\"));\n- return gft;\n+ return new GeoShapeFieldMapper.GeoShapeFieldType();\n }\n \n- @Override\n- protected int numProperties() {\n- return 6 + super.numProperties();\n- }\n-\n- @Override\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- GeoShapeFieldMapper.GeoShapeFieldType gft = (GeoShapeFieldMapper.GeoShapeFieldType)ft;\n- switch (propNum) {\n- case 0: gft.setTree(\"quadtree\"); break;\n- case 1: gft.setStrategyName(\"term\"); break;\n- case 2: 
gft.setTreeLevels(10); break;\n- case 3: gft.setPrecisionInMeters(20); break;\n- case 4: gft.setDefaultDistanceErrorPct(0.5); break;\n- case 5: gft.setOrientation(ShapeBuilder.Orientation.LEFT); break;\n- default: super.modifyProperty(ft, propNum - 6);\n- }\n+ @Before\n+ public void setupProperties() {\n+ addModifier(new Modifier(\"tree\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setTree(\"quadtree\");\n+ }\n+ });\n+ addModifier(new Modifier(\"strategy\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setStrategyName(\"term\");\n+ }\n+ });\n+ addModifier(new Modifier(\"tree_levels\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setTreeLevels(10);\n+ }\n+ });\n+ addModifier(new Modifier(\"precision\", false, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setPrecisionInMeters(20);\n+ }\n+ });\n+ addModifier(new Modifier(\"distance_error_pct\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setDefaultDistanceErrorPct(0.5);\n+ }\n+ });\n+ addModifier(new Modifier(\"orientation\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ ((GeoShapeFieldMapper.GeoShapeFieldType)ft).setOrientation(ShapeBuilder.Orientation.LEFT);\n+ }\n+ });\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -20,24 +20,22 @@\n \n import org.elasticsearch.index.mapper.FieldTypeTestCase;\n import org.elasticsearch.index.mapper.MappedFieldType;\n+import org.junit.Before;\n \n public class FieldNamesFieldTypeTests extends FieldTypeTestCase {\n @Override\n protected MappedFieldType createDefaultFieldType() {\n return new FieldNamesFieldMapper.FieldNamesFieldType();\n }\n \n- @Override\n- protected int numProperties() {\n- return 1 + super.numProperties();\n- }\n-\n- @Override\n- protected void modifyProperty(MappedFieldType ft, int propNum) {\n- FieldNamesFieldMapper.FieldNamesFieldType fnft = (FieldNamesFieldMapper.FieldNamesFieldType)ft;\n- switch (propNum) {\n- case 0: fnft.setEnabled(!fnft.isEnabled()); break;\n- default: super.modifyProperty(ft, propNum - 1);\n- }\n+ @Before\n+ public void setupProperties() {\n+ addModifier(new Modifier(\"enabled\", true, true) {\n+ @Override\n+ public void modify(MappedFieldType ft) {\n+ FieldNamesFieldMapper.FieldNamesFieldType fnft = (FieldNamesFieldMapper.FieldNamesFieldType)ft;\n+ fnft.setEnabled(!fnft.isEnabled());\n+ }\n+ });\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldTypeTests.java", "status": "modified" }, { "diff": "@@ -173,15 +173,15 @@ public void testUpgradeFromMultiFieldTypeToMultiFields() throws Exception {\n DocumentMapper docMapper4 = parser.parse(mapping);\n mergeResult = docMapper.merge(docMapper4.mapping(), true, false);\n assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(true));\n- assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different index values\"));\n- assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different store values\"));\n+ assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different [index] values\"));\n+ 
assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different [store] values\"));\n \n mergeResult = docMapper.merge(docMapper4.mapping(), false, false);\n assertThat(Arrays.toString(mergeResult.buildConflicts()), mergeResult.hasConflicts(), equalTo(true));\n \n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());\n- assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different index values\"));\n- assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different store values\"));\n+ assertThat(mergeResult.buildConflicts()[0], equalTo(\"mapper [name] has different [index] values\"));\n+ assertThat(mergeResult.buildConflicts()[1], equalTo(\"mapper [name] has different [store] values\"));\n \n // There are conflicts, but the `name.not_indexed3` has been added, b/c that field has no conflicts\n assertNotSame(IndexOptions.NONE, docMapper.mappers().getMapper(\"name\").fieldType().indexOptions());", "filename": "core/src/test/java/org/elasticsearch/index/mapper/multifield/merge/JavaMultiFieldMergeTests.java", "status": "modified" }, { "diff": "@@ -515,7 +515,7 @@ public void testDisableNorms() throws Exception {\n mergeResult = defaultMapper.merge(parser.parse(updatedMapping).mapping(), true, false);\n assertTrue(mergeResult.hasConflicts());\n assertEquals(1, mergeResult.buildConflicts().length);\n- assertTrue(mergeResult.buildConflicts()[0].contains(\"cannot enable norms\"));\n+ assertTrue(mergeResult.buildConflicts()[0].contains(\"different [omit_norms]\"));\n }\n \n /**", "filename": "core/src/test/java/org/elasticsearch/index/mapper/string/SimpleStringMappingTests.java", "status": "modified" }, { "diff": "@@ -579,11 +579,10 @@ public void testMergingConflicts() throws Exception {\n \n MergeResult mergeResult = docMapper.merge(parser.parse(mapping).mapping(), true, false);\n List<String> expectedConflicts = new ArrayList<>(Arrays.asList(\n- \"mapper [_timestamp] has different index values\",\n- \"mapper [_timestamp] has different store values\",\n+ \"mapper [_timestamp] has different [index] values\",\n+ \"mapper [_timestamp] has different [store] values\",\n \"Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02\",\n- \"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\",\n- \"mapper [_timestamp] has different tokenize values\"));\n+ \"Cannot update path in _timestamp value. 
Value is foo path in merged mapping is bar\"));\n \n for (String conflict : mergeResult.buildConflicts()) {\n assertTrue(\"found unexpected conflict [\" + conflict + \"]\", expectedConflicts.remove(conflict));\n@@ -618,12 +617,12 @@ public void testBackcompatMergingConflictsForIndexValues() throws Exception {\n \n MergeResult mergeResult = docMapper.merge(parser.parse(mapping).mapping(), true, false);\n List<String> expectedConflicts = new ArrayList<>();\n- expectedConflicts.add(\"mapper [_timestamp] has different index values\");\n- expectedConflicts.add(\"mapper [_timestamp] has different tokenize values\");\n+ expectedConflicts.add(\"mapper [_timestamp] has different [index] values\");\n+ expectedConflicts.add(\"mapper [_timestamp] has different [tokenize] values\");\n if (indexValues.get(0).equals(\"not_analyzed\") == false) {\n // if the only index value left is not_analyzed, then the doc values setting will be the same, but in the\n // other two cases, it will change\n- expectedConflicts.add(\"mapper [_timestamp] has different doc_values values\");\n+ expectedConflicts.add(\"mapper [_timestamp] has different [doc_values] values\");\n }\n \n for (String conflict : mergeResult.buildConflicts()) {", "filename": "core/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" }, { "diff": "@@ -55,14 +55,14 @@ public void test_all_conflicts() throws Exception {\n String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/update/all_mapping_create_index.json\");\n String mappingUpdate = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/update/all_mapping_update_with_conflicts.json\");\n String[] errorMessage = {\"[_all] enabled is true now encountering false\",\n- \"[_all] cannot enable norms (`norms.enabled`)\",\n- \"[_all] has different store values\",\n- \"[_all] has different store_term_vector values\",\n- \"[_all] has different store_term_vector_offsets values\",\n- \"[_all] has different store_term_vector_positions values\",\n- \"[_all] has different store_term_vector_payloads values\",\n- \"[_all] has different analyzer\",\n- \"[_all] has different similarity\"};\n+ \"[_all] has different [omit_norms] values\",\n+ \"[_all] has different [store] values\",\n+ \"[_all] has different [store_term_vector] values\",\n+ \"[_all] has different [store_term_vector_offsets] values\",\n+ \"[_all] has different [store_term_vector_positions] values\",\n+ \"[_all] has different [store_term_vector_payloads] values\",\n+ \"[_all] has different [analyzer]\",\n+ \"[_all] has different [similarity]\"};\n // fielddata and search_analyzer should not report conflict\n testConflict(mapping, mappingUpdate, errorMessage);\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterIT.java", "status": "modified" } ] }
{ "body": "Tested in 1.7.1.\nFor the following test\n\n```\nPUT /test_qs\n{\n \"mappings\": {\n \"test_type\": {\n \"properties\": {\n \"text1\": {\n \"type\": \"string\"\n },\n \"text2\": {\n \"type\": \"string\"\n },\n \"text3\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPOST /test_qs/test_type/_bulk\n{\"index\":{}}\n{\"text1\":\"abc\",\"text2\":\"def\",\"text3\":\"ghi\"}\n\nGET /test_qs/test_type/_search\n{\n \"fielddata_fields\": [\"_all\"]\n}\n```\n\nthe result is\n\n```\n \"fields\": {\n \"_all\": \"abc\"\n }\n```\n\nwhere it should have been actually `\"_all\": [\"abc\", \"def\", \"ghi\"]`.\n", "comments": [ { "body": "The background to this bug: meta-fields typically contain a single value only, while other fields may contain zero or more values. So meta-fields are special cased to return just the first value (and in 2.0, to return their values at the \"top\" level, instead of within the `fields` element).\n\n`_all` is an exception to this rule and should probably be treated as an ordinary field. really, returning fielddata for the `_all` field is a really bad idea, fine for testing, but will blow up your heap in production.\n", "created_at": "2015-08-28T14:57:25Z" }, { "body": "Had a chat to @nik9000 about highlighting and the `_all` field. I think we should leave things as they are for now, and revisit this once we have a better story around highlighting.\n", "created_at": "2015-09-01T17:15:11Z" }, { "body": "Support for the `_all` field has been removed, closing.", "created_at": "2018-03-16T10:17:59Z" } ], "number": 13178, "title": "fielddata_fields for _all doesn't display all the terms" }
{ "body": "The '_all' meta field is not stored, but our fielddata field could work with not stored field. \n\nOur meta-fields typically contain a single value only, when `toXContent`for meta field, we only return the first value, but _all should be an exception there. This commit adds all values for `_all` field in the `toXContent` method\n\ncloses #13178\n", "number": 13194, "review_comments": [ { "body": "I think you could use the constant [Metadata.ALL](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/cluster/metadata/MetaData.java#L85) here.\n", "created_at": "2015-08-29T13:23:36Z" }, { "body": "@andrestc Thanks, agree \n", "created_at": "2015-08-29T18:01:48Z" } ], "title": "`_all` should be treated as an ordinary field when returning `fields` or `fielddata_fields`" }
{ "commits": [ { "message": "Meta-fields typically contain a single value only, `_all`\nis an exception. This commit adds all values in `_all`\nmeta field in the toXContent of the InternalSearchHit.\n\ncloses #13178" }, { "message": "chang test" } ], "files": [ { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.Explanation;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -470,7 +471,16 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(Fields._SCORE, score);\n }\n for (SearchHitField field : metaFields) {\n- builder.field(field.name(), field.value());\n+ // \"_all\" should be treated as an ordinary field, issue 13178\n+ if (field.name().equals(MetaData.ALL)){\n+ builder.startArray(field.name());\n+ for (Object value : field.getValues()) {\n+ builder.value(value);\n+ }\n+ builder.endArray();\n+ }else {\n+ builder.field(field.name(), field.value());\n+ }\n }\n if (source != null) {\n XContentHelper.writeRawField(\"_source\", source, builder, params);", "filename": "core/src/main/java/org/elasticsearch/search/internal/InternalSearchHit.java", "status": "modified" }, { "diff": "@@ -33,6 +33,8 @@\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n import org.elasticsearch.rest.RestStatus;\n@@ -670,4 +672,41 @@ public void testLoadMetadata() throws Exception {\n assertThat(fields.get(\"_parent\").isMetadataField(), equalTo(true));\n assertThat(fields.get(\"_parent\").getValue().toString(), equalTo(\"parent_1\"));\n }\n+\n+ // issue 13178\n+ public void testMetaAllFieldPulledFromFieldData() throws Exception {\n+ createIndex(\"test\");\n+ client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForYellowStatus().execute().actionGet();\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"properties\")\n+ .startObject(\"_all\").field(\"enabled\", true).endObject()\n+ .startObject(\"_source\").field(\"enabled\", false).endObject()\n+ .startObject(\"string_field_1\").field(\"type\", \"string\").endObject()\n+ .startObject(\"string_field_2\").field(\"type\", \"string\").endObject()\n+ .startObject(\"string_field_3\").field(\"type\", \"string\").endObject()\n+ .endObject().endObject().endObject().string();\n+\n+ client().admin().indices().preparePutMapping().setType(\"type1\").setSource(mapping).execute().actionGet();\n+\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"string_field1\", \"value1\")\n+ .field(\"string_field2\", \"value2\")\n+ .field(\"string_field3\", \"value3\")\n+ .endObject()).execute().actionGet();\n+\n+ client().admin().indices().prepareRefresh().execute().actionGet();\n+\n+ SearchRequestBuilder builder = client().prepareSearch().setQuery(matchAllQuery())\n+ .addFieldDataField(\"_all\");\n+ SearchResponse searchResponse = builder.execute().actionGet();\n+\n+ 
assertThat(searchResponse.getHits().getAt(0).fields().get(\"_all\").values().size(), equalTo(3));\n+\n+ XContentBuilder xContentBuilder = XContentFactory.jsonBuilder();\n+ xContentBuilder.startObject();\n+ searchResponse.getHits().toXContent(xContentBuilder, ToXContent.EMPTY_PARAMS);\n+ xContentBuilder.endObject();\n+ String expectedSubSequence = \"\\\"_all\\\":[\\\"value1\\\",\\\"value2\\\",\\\"value3\\\"]\";\n+ assertTrue(xContentBuilder.string().contains(expectedSubSequence));\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/search/fields/SearchFieldsIT.java", "status": "modified" } ] }
{ "body": "This change adds a simplistic heuristic to try to balance new shard allocations across multiple data paths on one node.\n\nIt very roughly predicts (guesses!) how much disk space a shard will eventually use, as the max of the current avg. size of shards across the cluster, and 5% of current free space across all path.data on the current node, and then reserves space by counting how many shards are now assigned to each path.data.\n\nPicking the best path.data for a new shard is using the same \"most free space\" logic, except it now deducts the reserved space.\n\nI tested this on an EC2 instance with 2 SSDs with nearly the same amount of free space and confirmed we now put 2 shards on one SSD and 3 shards on the other, vs all 5 shards on a single path with master today, but I'm not sure how to make a standalone unit test ... maybe I can use a MockFS to fake up N path.datas with different free space?\n\nThis is just a heuristic, and it easily has adversarial cases that will fill up one path.data while other path.data on the same node still have plenty of space, and unfortunately ES can't recover from that today. E.g., DiskThresholdDecider won't even detect any problem (since it sums up total free space across all path.data) ... I think we should separately think about fixing that, but at least this change improves the current situation.\n\nCloses #11122\n", "comments": [ { "body": "This looks ok to me, although I do hope we keep tweaking this heuristic. I've tried to think of a better one but can't for now..but I wish we could do something based on weighted shard count instead of the very hard to think about mixed/shifting logic of average/5%.\n", "created_at": "2015-05-15T18:02:07Z" }, { "body": "mike this looks good to me - can we somehow have a test for this?\n", "created_at": "2015-05-15T20:08:24Z" }, { "body": "> can we somehow have a test for this?\n\nI agree, but it's tricky: I think I need a new MockFS that fakes how much free space there is on each path. I can try ...\n", "created_at": "2015-05-17T09:46:06Z" }, { "body": "> This looks ok to me, although I do hope we keep tweaking this heuristic. I've tried to think of a better one but can't for now..but I wish we could do something based on weighted shard count instead of the very hard to think about mixed/shifting logic of average/5%.\n\nThe challenge with only using current shard count is then we don't take into account how much space the already allocated shards are already using? E.g maybe one path has only a few shards, and is nearly full, but another path has quite a few shards and has lots of free space.\n\nNo matter the heuristic here, there will be easy adversarial cases against it, so in the end this will be at best a \"starting guess\": we can't predict the future.\n\nTo fix this correctly we really need shard allocation to separately see / allocate across each path.data on a node, so we can move a shard off a path.data that is filling up even if other path.data on that same node have plenty of space.\n", "created_at": "2015-05-17T09:53:15Z" }, { "body": "> I agree, but it's tricky: I think I need a new MockFS that fakes how much free space there is on each path. I can try ...\n\nthis requires QUITE a few steps, and please note that ES's file management (especially tests) is simply not ready to juggle multiple nio filesystems at once (various copy/move routines are unsafe there). \n\nSeparately, an out-of-disk space mockfs would be great. 
But please don't add a complicated test, and please don't use multiple nio.2 filesystems when ES isn't ready for that yet.\n\nTest what is reasonable to test and then if we want to do better, some cleanups are needed. I have been working on these things but it is quite difficult without straight up banning methods, because more tests are added all the time.\n", "created_at": "2015-05-17T13:48:36Z" }, { "body": "> this requires QUITE a few steps, and please note that ES's file management (especially tests) is simply not ready to juggle multiple nio filesystems at once (various copy/move routines are unsafe there).\n\nOK this sounds hard :) I'll try to make a more direct test that just tests the heuristic logic w/o needing MockFS ...\n", "created_at": "2015-05-18T21:54:50Z" }, { "body": "I pushed a new commit addressing feedback (thanks!).\n\nHowever, I gave up making a direct unit test for the ShardPath.selectNewPath ... I tried to simplify the arguments passed to it (e.g. replacing Iterator<IndexShard> with Iterator<Path> and extracting the shard's paths \"up above\") so that it was more easily tested, but it just became too forced ...\n\nI did retest on EC2 and confirmed the 5 shards are split to 2 and 3 shards on each SSD.\n\nI'll open a follow-on issue to allow for shards to relocated to different path.data within a single node.\n", "created_at": "2015-05-20T20:50:10Z" }, { "body": "One problem with this change is it's making a \"blind guess\" about how big a shard will be, yet if it's a relocation of an existing shard, the source node clearly \"knows\" how large it is, so it's crazy not to use this.\n\n@dakrone is working on getting this information published via ClusterInfo or index meta data or something (thanks!), then I'll fix this change to use that, so we can make a more informed \"guess\".\n\nEven so, this is just the current size of the shard, and we still separately need to fix #11271 so shards on one path.data that's filling up will still be relocated even if other path.data on the same node have plenty of space.\n", "created_at": "2015-05-22T09:19:46Z" }, { "body": "I left some comments here again, I think we should move forward here without the infrastructure to make it perfect. It's a step in the right direction?\n", "created_at": "2015-06-12T12:28:30Z" }, { "body": "OK I folded in feedback here.\n\nI avoid this logic when a custom data path is set, and force the statePath to be NodePaths[0] in that case.\n\nAnd I removed ClusterInfo/Service and instead take avg. of all shards already on the node.\n", "created_at": "2015-06-12T17:57:34Z" }, { "body": "left a tiny comment otherwise LGTM\n", "created_at": "2015-06-16T18:18:52Z" }, { "body": "LGTM\n", "created_at": "2015-06-17T09:36:34Z" } ], "number": 11185, "title": "Balance new shard allocations more evenly on multiple path.data" }
{ "body": "In #11185 we fixed this method to be more careful to not allocate a bunch of sudden new shards to a single path on path.data when other paths have nearly the same usable space.\n\nThis change just adds a unit test for that method, by mocking the filesystem that `NodeEnvironment` sees, so we can easily fake how much usable space each path on path.data sees.\n\nThe hardest part here was the mocking infra (I spent quite a while battling `ProviderMismatchException`!!), and after that I simplified `selectNewPathForShard` to not take `IndexShard` to make unit testing possible, then I created a basic test case to show the original failure.\n\nAside/rant: the method really is one giant hack, making a silly guess at how large a shard will grow to be when in many cases (existing shards being replicated/reallocated) we know exactly how much disk space it will eventually use up, at least once it's done copying the segments over. For newly allocated shards we could probably make more informed guesses, e.g. if it's a Marvel shard it will be small, if it's another daily/hourly index we can predict, etc. And finally we separately need the disk allocator to be able to reallocate shards from one path on path.data to another, within a single node or across nodes ... but these are all separate from just adding this test case :)\n", "number": 13158, "review_comments": [ { "body": "you can just use `ShardPath#getRootStatePath()` here?\n", "created_at": "2015-08-27T18:58:36Z" } ], "title": "Add unit test for ShardPath.selectNewPathForShard" }
{ "commits": [ { "message": "initial mock filesystem setup for test case" }, { "message": "add asserts to make sure mocking 'took'" }, { "message": "simplify API for ShardPath.selectNewPathForShard to enable unit testing: don't pass IndexShard" }, { "message": "add basic unit test" }, { "message": "polish" }, { "message": "reference original issue" }, { "message": "use ShardPath.getRootStatePath; allow forbidden API" }, { "message": "try to work on Windows too" } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import com.google.common.base.Function;\n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Iterators;\n+\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -56,8 +57,10 @@\n \n import java.io.Closeable;\n import java.io.IOException;\n+import java.nio.file.Path;\n import java.util.HashMap;\n import java.util.Iterator;\n+import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n@@ -314,8 +317,23 @@ public synchronized IndexShard createShard(int sShardId, ShardRouting routing) {\n throw t;\n }\n }\n+\n if (path == null) {\n- path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings, routing.getExpectedShardSize() == ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE ? getAvgShardSizeInBytes() : routing.getExpectedShardSize(), this);\n+ // TODO: we should, instead, hold a \"bytes reserved\" of how large we anticipate this shard will be, e.g. for a shard\n+ // that's being relocated/replicated we know how large it will become once it's done copying:\n+\n+ // Count up how many shards are currently on each data path:\n+ Map<Path,Integer> dataPathToShardCount = new HashMap<>();\n+ for(IndexShard shard : this) {\n+ Path dataPath = shard.shardPath().getRootStatePath();\n+ Integer curCount = dataPathToShardCount.get(dataPath);\n+ if (curCount == null) {\n+ curCount = 0;\n+ }\n+ dataPathToShardCount.put(dataPath, curCount+1);\n+ }\n+ path = ShardPath.selectNewPathForShard(nodeEnv, shardId, indexSettings, routing.getExpectedShardSize() == ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE ? 
getAvgShardSizeInBytes() : routing.getExpectedShardSize(),\n+ dataPathToShardCount);\n logger.debug(\"{} creating using a new path [{}]\", shardId, path);\n } else {\n logger.debug(\"{} creating using an existing path [{}]\", shardId, path);", "filename": "core/src/main/java/org/elasticsearch/index/IndexService.java", "status": "modified" }, { "diff": "@@ -199,19 +199,27 @@ private static Map<Path,Long> getEstimatedReservedBytes(NodeEnvironment env, lon\n }\n \n public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shardId, @IndexSettings Settings indexSettings,\n- long avgShardSizeInBytes, Iterable<IndexShard> shards) throws IOException {\n+ long avgShardSizeInBytes, Map<Path,Integer> dataPathToShardCount) throws IOException {\n \n final Path dataPath;\n final Path statePath;\n \n- final String indexUUID = indexSettings.get(IndexMetaData.SETTING_INDEX_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n-\n if (NodeEnvironment.hasCustomDataPath(indexSettings)) {\n dataPath = env.resolveCustomLocation(indexSettings, shardId);\n statePath = env.nodePaths()[0].resolve(shardId);\n } else {\n \n- Map<Path,Long> estReservedBytes = getEstimatedReservedBytes(env, avgShardSizeInBytes, shards);\n+ long totFreeSpace = 0;\n+ for (NodeEnvironment.NodePath nodePath : env.nodePaths()) {\n+ totFreeSpace += nodePath.fileStore.getUsableSpace();\n+ }\n+\n+ // TODO: this is a hack!! We should instead keep track of incoming (relocated) shards since we know\n+ // how large they will be once they're done copying, instead of a silly guess for such cases:\n+\n+ // Very rough heurisic of how much disk space we expect the shard will use over its lifetime, the max of current average\n+ // shard size across the cluster and 5% of the total available free space on this node:\n+ long estShardSizeInBytes = Math.max(avgShardSizeInBytes, (long) (totFreeSpace/20.0));\n \n // TODO - do we need something more extensible? Yet, this does the job for now...\n final NodeEnvironment.NodePath[] paths = env.nodePaths();\n@@ -220,10 +228,11 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n for (NodeEnvironment.NodePath nodePath : paths) {\n FileStore fileStore = nodePath.fileStore;\n long usableBytes = fileStore.getUsableSpace();\n- Long reservedBytes = estReservedBytes.get(nodePath.path);\n- if (reservedBytes != null) {\n- // Deduct estimated reserved bytes from usable space:\n- usableBytes -= reservedBytes;\n+\n+ // Deduct estimated reserved bytes from usable space:\n+ Integer count = dataPathToShardCount.get(nodePath.path);\n+ if (count != null) {\n+ usableBytes -= estShardSizeInBytes * count;\n }\n if (usableBytes > maxUsableBytes) {\n maxUsableBytes = usableBytes;\n@@ -235,6 +244,8 @@ public static ShardPath selectNewPathForShard(NodeEnvironment env, ShardId shard\n dataPath = statePath;\n }\n \n+ final String indexUUID = indexSettings.get(IndexMetaData.SETTING_INDEX_UUID, IndexMetaData.INDEX_UUID_NA_VALUE);\n+\n return new ShardPath(NodeEnvironment.hasCustomDataPath(indexSettings), dataPath, statePath, indexUUID, shardId);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/index/shard/ShardPath.java", "status": "modified" }, { "diff": "@@ -0,0 +1,234 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.shard;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Repeat;\n+\n+import org.apache.lucene.mockfile.FilterFileSystem;\n+import org.apache.lucene.mockfile.FilterFileSystemProvider;\n+import org.apache.lucene.mockfile.FilterPath;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.SuppressForbidden;\n+import org.elasticsearch.common.io.PathUtils;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.env.NodeEnvironment.NodePath;\n+import org.elasticsearch.env.NodeEnvironment;\n+import org.elasticsearch.test.ESTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.io.File;\n+import java.io.IOException;\n+import java.lang.reflect.Field;\n+import java.nio.file.FileStore;\n+import java.nio.file.FileSystem;\n+import java.nio.file.FileSystems;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.attribute.FileAttributeView;\n+import java.nio.file.attribute.FileStoreAttributeView;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n+\n+import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n+\n+/** Separate test class from ShardPathTests because we need static (BeforeClass) setup to install mock filesystems... 
*/\n+@SuppressForbidden(reason = \"ProviderMismatchException if I try to use PathUtils.getDefault instead\")\n+public class NewPathForShardTest extends ESTestCase {\n+\n+ // Sneakiness to install mock file stores so we can pretend how much free space we have on each path.data:\n+ private static MockFileStore aFileStore = new MockFileStore(\"mocka\");\n+ private static MockFileStore bFileStore = new MockFileStore(\"mockb\");\n+ private static FileSystem origFileSystem;\n+ private static String aPathPart = File.separator + 'a' + File.separator;\n+ private static String bPathPart = File.separator + 'b' + File.separator;\n+\n+ @BeforeClass\n+ public static void installMockUsableSpaceFS() throws Exception {\n+ // Necessary so when Environment.clinit runs, to gather all FileStores, it sees ours:\n+ origFileSystem = FileSystems.getDefault();\n+\n+ Field field = PathUtils.class.getDeclaredField(\"DEFAULT\");\n+ field.setAccessible(true);\n+ FileSystem mock = new MockUsableSpaceFileSystemProvider().getFileSystem(getBaseTempDirForTestClass().toUri());\n+ field.set(null, mock);\n+ assertEquals(mock, PathUtils.getDefaultFileSystem());\n+ }\n+\n+ @AfterClass\n+ public static void removeMockUsableSpaceFS() throws Exception {\n+ Field field = PathUtils.class.getDeclaredField(\"DEFAULT\");\n+ field.setAccessible(true);\n+ field.set(null, origFileSystem);\n+ origFileSystem = null;\n+ aFileStore = null;\n+ bFileStore = null;\n+ }\n+\n+ /** Mock file system that fakes usable space for each FileStore */\n+ @SuppressForbidden(reason = \"ProviderMismatchException if I try to use PathUtils.getDefault instead\")\n+ static class MockUsableSpaceFileSystemProvider extends FilterFileSystemProvider {\n+ \n+ public MockUsableSpaceFileSystemProvider() {\n+ super(\"mockusablespace://\", FileSystems.getDefault());\n+ final List<FileStore> fileStores = new ArrayList<>();\n+ fileStores.add(aFileStore);\n+ fileStores.add(bFileStore);\n+ fileSystem = new FilterFileSystem(this, origFileSystem) {\n+ @Override\n+ public Iterable<FileStore> getFileStores() {\n+ return fileStores;\n+ }\n+ };\n+ }\n+\n+ @Override\n+ public FileStore getFileStore(Path path) throws IOException {\n+ if (path.toString().contains(aPathPart)) {\n+ return aFileStore;\n+ } else {\n+ return bFileStore;\n+ }\n+ }\n+ }\n+\n+ static class MockFileStore extends FileStore {\n+\n+ public long usableSpace;\n+\n+ private final String desc;\n+\n+ public MockFileStore(String desc) {\n+ this.desc = desc;\n+ }\n+ \n+ @Override\n+ public String type() {\n+ return \"mock\";\n+ }\n+\n+ @Override\n+ public String name() {\n+ return desc;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return desc;\n+ }\n+\n+ @Override\n+ public boolean isReadOnly() {\n+ return false;\n+ }\n+\n+ @Override\n+ public long getTotalSpace() throws IOException {\n+ return usableSpace*3;\n+ }\n+\n+ @Override\n+ public long getUsableSpace() throws IOException {\n+ return usableSpace;\n+ }\n+\n+ @Override\n+ public long getUnallocatedSpace() throws IOException {\n+ return usableSpace*2;\n+ }\n+\n+ @Override\n+ public boolean supportsFileAttributeView(Class<? 
extends FileAttributeView> type) {\n+ return false;\n+ }\n+\n+ @Override\n+ public boolean supportsFileAttributeView(String name) {\n+ return false;\n+ }\n+\n+ @Override\n+ public <V extends FileStoreAttributeView> V getFileStoreAttributeView(Class<V> type) {\n+ return null;\n+ }\n+\n+ @Override\n+ public Object getAttribute(String attribute) throws IOException {\n+ return null;\n+ }\n+ }\n+\n+ public void testSelectNewPathForShard() throws Exception {\n+ Path path = PathUtils.get(createTempDir().toString());\n+\n+ // Use 2 data paths:\n+ String[] paths = new String[] {path.resolve(\"a\").toString(),\n+ path.resolve(\"b\").toString()};\n+\n+ Settings settings = Settings.builder()\n+ .put(\"path.home\", path)\n+ .putArray(\"path.data\", paths).build();\n+ NodeEnvironment nodeEnv = new NodeEnvironment(settings, new Environment(settings));\n+\n+ // Make sure all our mocking above actually worked:\n+ NodePath[] nodePaths = nodeEnv.nodePaths();\n+ assertEquals(2, nodePaths.length);\n+\n+ assertEquals(\"mocka\", nodePaths[0].fileStore.name());\n+ assertEquals(\"mockb\", nodePaths[1].fileStore.name());\n+\n+ // Path a has lots of free space, but b has little, so new shard should go to a:\n+ aFileStore.usableSpace = 100000;\n+ bFileStore.usableSpace = 1000;\n+\n+ ShardId shardId = new ShardId(\"index\", 0);\n+ ShardPath result = ShardPath.selectNewPathForShard(nodeEnv, shardId, Settings.EMPTY, 100, Collections.<Path,Integer>emptyMap());\n+ assertTrue(result.getDataPath().toString().contains(aPathPart));\n+\n+ // Test the reverse: b has lots of free space, but a has little, so new shard should go to b:\n+ aFileStore.usableSpace = 1000;\n+ bFileStore.usableSpace = 100000;\n+\n+ shardId = new ShardId(\"index\", 0);\n+ result = ShardPath.selectNewPathForShard(nodeEnv, shardId, Settings.EMPTY, 100, Collections.<Path,Integer>emptyMap());\n+ assertTrue(result.getDataPath().toString().contains(bPathPart));\n+\n+ // Now a and be have equal usable space; we allocate two shards to the node, and each should go to different paths:\n+ aFileStore.usableSpace = 100000;\n+ bFileStore.usableSpace = 100000;\n+\n+ Map<Path,Integer> dataPathToShardCount = new HashMap<>();\n+ ShardPath result1 = ShardPath.selectNewPathForShard(nodeEnv, shardId, Settings.EMPTY, 100, dataPathToShardCount);\n+ dataPathToShardCount.put(NodeEnvironment.shardStatePathToDataPath(result1.getDataPath()), 1);\n+ ShardPath result2 = ShardPath.selectNewPathForShard(nodeEnv, shardId, Settings.EMPTY, 100, dataPathToShardCount);\n+\n+ // #11122: this was the original failure: on a node with 2 disks that have nearly equal\n+ // free space, we would always allocate all N incoming shards to the one path that\n+ // had the most free space, never using the other drive unless new shards arrive\n+ // after the first shards started using storage:\n+ assertNotEquals(result1.getDataPath(), result2.getDataPath());\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/index/shard/NewPathForShardTest.java", "status": "added" } ] }
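The diff above boils down to a simple placement heuristic: estimate how large a new shard will eventually get (the larger of the cluster-wide average shard size and 5% of the node's total free space), charge each data path for the shards already sitting on it, and pick the path with the most usable space left over. Below is a minimal standalone sketch of that heuristic; the class and method names are invented for illustration, and plain maps stand in for the real NodeEnvironment/ShardPath machinery.

```java
// Minimal standalone sketch of the placement heuristic above; names and types are invented
// for illustration (plain maps instead of the real NodeEnvironment/ShardPath machinery).
import java.util.HashMap;
import java.util.Map;

public class PathSelectionSketch {

    static String selectPath(Map<String, Long> usableBytesByPath,
                             Map<String, Integer> shardCountByPath,
                             long avgShardSizeInBytes) {
        long totalFree = usableBytesByPath.values().stream().mapToLong(Long::longValue).sum();
        // rough guess of the new shard's eventual size: the larger of the cluster-wide
        // average shard size and 5% of this node's total free space
        long estShardSize = Math.max(avgShardSizeInBytes, (long) (totalFree / 20.0));

        String bestPath = null;
        long maxUsable = Long.MIN_VALUE;
        for (Map.Entry<String, Long> entry : usableBytesByPath.entrySet()) {
            // charge the path for shards already allocated to it
            long usable = entry.getValue() - estShardSize * shardCountByPath.getOrDefault(entry.getKey(), 0);
            if (usable > maxUsable) {
                maxUsable = usable;
                bestPath = entry.getKey();
            }
        }
        return bestPath;
    }

    public static void main(String[] args) {
        Map<String, Long> usable = new HashMap<>();
        usable.put("/data/a", 100_000L);
        usable.put("/data/b", 100_000L);

        Map<String, Integer> counts = new HashMap<>();
        String first = selectPath(usable, counts, 100);   // either path; both are equally free
        counts.put(first, 1);
        String second = selectPath(usable, counts, 100);  // the other path wins once the first is charged
        System.out.println(first + " then " + second);
    }
}
```

With both paths equally free, the second call picks the other path, which is exactly the two-shard scenario the new NewPathForShardTest exercises.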
{ "body": "There seems to be no effect on score on changing boost value on match_phrase_prefix query. \n\nMy query is:\n<pre><code> \"query\": {\n \"bool\": {\n \"should\": [\n {\"match_phrase_prefix\": {\n \"firstName\": {\n \"query\": \"is\", \n \"boost\":2.0\n }\n }} ,\n {\"match_phrase_prefix\": {\n \"userName.raw\": {\n \"query\": \"Tiv\", \n \"boost\":4.0\n }\n }}\n ]\n }\n }</pre></code>\n\nI am trying this on Elasticsearch-1.7\n", "comments": [ { "body": "Boost does seem to be ignored by the match_phrase_prefix query:\n\n```\nPUT t/t/1\n{\n \"one\": \"foo\",\n \"two\": \"bar\"\n}\n\nGET t/_search?explain\n{\n \"query\": {\n \"bool\": {\n \"should\": [\n {\n \"match\": {\n \"one\": {\n \"query\": \"foo\",\n \"boost\": 5\n }\n }\n },\n {\n \"match_phrase_prefix\": {\n \"two\": {\n \"query\": \"bar\",\n \"boost\": 50\n }\n }\n }\n ]\n }\n }\n}\n```\n", "created_at": "2015-08-27T10:53:12Z" }, { "body": "This is also broken in master\n", "created_at": "2015-08-27T10:53:26Z" } ], "number": 13129, "title": "Match phrase prefix query seems to ignore boost value." }
{ "body": "The match_phrase_prefix query properly parses the boost etc. but it loses it in its rewrite method. Fixed that by setting the orginal boost to the rewritten query before returning it. Also cleaned up some warning in MultiPhrasePrefixQuery.\n\nCloses #13129\n", "number": 13142, "review_comments": [ { "body": "Can we make the test simpler and just call Query.rewrite and ensure the boost is preserved?\n", "created_at": "2015-09-01T07:37:38Z" }, { "body": "sure will do\n", "created_at": "2015-09-03T17:03:34Z" } ], "title": "Query DSL: match_phrase_prefix to take boost into account" }
{ "commits": [ { "message": "Query DSL: match_phrase_prefix to take boost into account\n\nThe match_phrase_prefix query properly parses the boost etc. but it loses it in its rewrite method. Fixed that by setting the orginal boost to the rewritten query before returning it. Also cleaned up some warning in MultiPhrasePrefixQuery.\n\nCloses #13129\nCloses #13142" } ], "files": [ { "diff": "@@ -20,12 +20,7 @@\n package org.elasticsearch.common.lucene.search;\n \n import com.carrotsearch.hppc.ObjectHashSet;\n-\n-import org.apache.lucene.index.IndexReader;\n-import org.apache.lucene.index.LeafReaderContext;\n-import org.apache.lucene.index.Term;\n-import org.apache.lucene.index.Terms;\n-import org.apache.lucene.index.TermsEnum;\n+import org.apache.lucene.index.*;\n import org.apache.lucene.search.MatchNoDocsQuery;\n import org.apache.lucene.search.MultiPhraseQuery;\n import org.apache.lucene.search.Query;\n@@ -34,12 +29,7 @@\n import org.apache.lucene.util.ToStringUtils;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Arrays;\n-import java.util.Collections;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.ListIterator;\n+import java.util.*;\n \n public class MultiPhrasePrefixQuery extends Query {\n \n@@ -90,16 +80,16 @@ public void add(Term term) {\n public void add(Term[] terms) {\n int position = 0;\n if (positions.size() > 0)\n- position = positions.get(positions.size() - 1).intValue() + 1;\n+ position = positions.get(positions.size() - 1) + 1;\n \n add(terms, position);\n }\n \n /**\n * Allows to specify the relative position of terms within the phrase.\n *\n- * @param terms\n- * @param position\n+ * @param terms the terms\n+ * @param position the position of the terms provided as argument\n * @see org.apache.lucene.search.PhraseQuery#add(Term, int)\n */\n public void add(Term[] terms, int position) {\n@@ -115,15 +105,7 @@ public void add(Term[] terms, int position) {\n }\n \n termArrays.add(terms);\n- positions.add(Integer.valueOf(position));\n- }\n-\n- /**\n- * Returns a List of the terms in the multiphrase.\n- * Do not modify the List or its contents.\n- */\n- public List<Term[]> getTermArrays() {\n- return Collections.unmodifiableList(termArrays);\n+ positions.add(position);\n }\n \n /**\n@@ -132,7 +114,7 @@ public List<Term[]> getTermArrays() {\n public int[] getPositions() {\n int[] result = new int[positions.size()];\n for (int i = 0; i < positions.size(); i++)\n- result[i] = positions.get(i).intValue();\n+ result[i] = positions.get(i);\n return result;\n }\n \n@@ -160,6 +142,7 @@ public Query rewrite(IndexReader reader) throws IOException {\n return Queries.newMatchNoDocsQuery();\n }\n query.add(terms.toArray(Term.class), position);\n+ query.setBoost(getBoost());\n return query.rewrite(reader);\n }\n ", "filename": "core/src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java", "status": "modified" }, { "diff": "@@ -23,13 +23,12 @@\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.TextField;\n import org.apache.lucene.index.*;\n-import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.store.RAMDirectory;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Test;\n \n-import static org.hamcrest.MatcherAssert.assertThat;\n import static org.hamcrest.Matchers.equalTo;\n \n public class MultiPhrasePrefixQueryTests extends ESTestCase {\n@@ -63,4 +62,21 @@ public void 
simpleTests() throws Exception {\n query.add(new Term(\"field\", \"xxx\"));\n assertThat(Lucene.count(searcher, query), equalTo(0l));\n }\n+\n+ @Test\n+ public void testBoost() throws Exception {\n+ IndexWriter writer = new IndexWriter(new RAMDirectory(), new IndexWriterConfig(Lucene.STANDARD_ANALYZER));\n+ Document doc = new Document();\n+ doc.add(new Field(\"field\", \"aaa bbb\", TextField.TYPE_NOT_STORED));\n+ writer.addDocument(doc);\n+ doc = new Document();\n+ doc.add(new Field(\"field\", \"ccc ddd\", TextField.TYPE_NOT_STORED));\n+ writer.addDocument(doc);\n+ IndexReader reader = DirectoryReader.open(writer, true);\n+ MultiPhrasePrefixQuery multiPhrasePrefixQuery = new MultiPhrasePrefixQuery();\n+ multiPhrasePrefixQuery.add(new Term[]{new Term(\"field\", \"aaa\"), new Term(\"field\", \"bb\")});\n+ multiPhrasePrefixQuery.setBoost(randomFloat());\n+ Query query = multiPhrasePrefixQuery.rewrite(reader);\n+ assertThat(query.getBoost(), equalTo(multiPhrasePrefixQuery.getBoost()));\n+ }\n }\n\\ No newline at end of file", "filename": "core/src/test/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQueryTests.java", "status": "modified" } ] }
{ "body": "In integration tests, using `enabled: false` on a root document type makes document indexing fails.\n\nHere is the mapping I use:\n\n```\n\"my_doc_type\": {\n \"enabled\": false\n}\n```\n\nBut then indexing a document throws a MapperParsingException with the following stack:\n\n``` java\nMapperParsingException[failed to parse]; nested: AssertionError;\n at __randomizedtesting.SeedInfo.seed([1B10B6999E316CD7:4AED1D00C8EC9444]:0)\n at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:155)\n at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:317)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:313)\n ...\n```\n\nThe assertion is located in the `DocumentParser` class:\n\n``` java\n // try to parse the next token, this should be null if the object is ended properly\n // but will throw a JSON exception if the extra tokens is not valid JSON (this will be handled by the catch)\n if (Version.indexCreated(indexSettings).onOrAfter(Version.V_2_0_0_beta1)\n && source.parser() == null && parser != null) {\n // only check for end of tokens if we created the parser here\n token = parser.nextToken();\n assert token == null; // double check, in tests, that we didn't end parsing early\n }\n```\n\nWhen assertion are disabled, everything is OK. I did a simple test to reproduce the issue here:\nhttps://github.com/tlrx/elasticsearch/commit/94712121dd524c48f2dc40c787efd1eb6375d799\n\nThis assertion has been added recently and I don't know much about this part of code, maybe @rjernst can help here?\n", "comments": [ { "body": "This seems like a bug maybe? why are we not consuming the token there?\n", "created_at": "2015-08-20T16:29:23Z" }, { "body": "Looks like it has been introduced by #11414\n", "created_at": "2015-08-20T16:39:42Z" }, { "body": "I think you misunderstood. The assertion code is correct, it means there was leftover stuff at the end of parsing. But _why_ is there leftover stuff in this case. what is the document you are sending, I only see the mapping in the original description? My hunch is, we need to fully consume the parser for mappings that are disabled.\n", "created_at": "2015-08-20T17:16:58Z" }, { "body": "What I don't get is why I can create this mapping and index documents in elasticsearch but cannot do exactly the same thing in integration tests.\n\nHere is the document I index: https://github.com/tlrx/elasticsearch/commit/94712121dd524c48f2dc40c787efd1eb6375d799#diff-82784da2187574da5398ae215c8a6795R276\n\nThis test fails on my computer and I don't get why.\n", "created_at": "2015-08-20T17:51:03Z" }, { "body": "Assertions are enabled in tests, but not when running with bin/elasticsearch.\n", "created_at": "2015-08-20T17:57:41Z" }, { "body": "Why isn't this a real check\n", "created_at": "2015-08-20T18:00:50Z" }, { "body": "@rjernst Yes I know, but the test I linked to seems good to me and I don't see why this assertion throws up. Can you please have a quick look to the code and/or try to reproduce?\n", "created_at": "2015-08-20T18:03:11Z" }, { "body": "this is a bug. The following fails with assertions enabled:\n\n```\nPUT my_index\n{\n \"mappings\": {\n \"my_type\": {\n \"enabled\": false\n }\n }\n}\n\nPUT my_index/my_type/1\n{\n \"foo\": \"bar\"\n}\n```\n\nThe whole type is disabled, which means the _source should be stored but nothing should be indexed. 
It looks like we're just skipping parsing completely instead of consuming the tokens.\n", "created_at": "2015-08-24T12:27:35Z" }, { "body": "I have a test reproducing the issue and am investigating.\n", "created_at": "2015-08-24T16:25:16Z" }, { "body": "@rjernst Thanks for the work you have done :) Unfortunately I reopen this bug because indexing a document (like @clinton suggested) now fails with a `NullPointerException` (I tested on latest snapshot - build e8834cc78c13a507e2851908f8d51489e7888570).\n\nAs far as I understand the code it seems that your fix skips the document parsing when the type is disabled. So field mappers like `UidFieldMapper` are not used, resulting in a null `uid` thrown [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java#L515). \n\nI'm not familiar with this part of the code but if you don't have time to look further please let me know and I'll try to have a look.\n", "created_at": "2015-08-26T08:11:25Z" } ], "number": 13017, "title": "Using enabled:false on document type throws exception in tests" }
{ "body": "I improved the tests more so we won't hit this problem again.\n\ncloses #13017\n", "number": 13137, "review_comments": [], "title": "Fix doc parser to still pre/post process metadata fields on disabled type" }
{ "commits": [ { "message": "Mappings: Fix doc parser to still pre/post process metadata fields on disabled type\n\ncloses #13017" } ], "files": [ { "diff": "@@ -104,32 +104,34 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n if (token != XContentParser.Token.START_OBJECT) {\n throw new MapperParsingException(\"Malformed content, must start with an object\");\n }\n+\n+ boolean emptyDoc = false;\n if (mapping.root.isEnabled()) {\n- boolean emptyDoc = false;\n token = parser.nextToken();\n if (token == XContentParser.Token.END_OBJECT) {\n // empty doc, we can handle it...\n emptyDoc = true;\n } else if (token != XContentParser.Token.FIELD_NAME) {\n throw new MapperParsingException(\"Malformed content, after first object, either the type field or the actual properties should exist\");\n }\n+ }\n \n- for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n- metadataMapper.preParse(context);\n- }\n- if (emptyDoc == false) {\n- Mapper update = parseObject(context, mapping.root);\n- if (update != null) {\n- context.addDynamicMappingsUpdate(update);\n- }\n- }\n- for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n- metadataMapper.postParse(context);\n- }\n+ for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n+ metadataMapper.preParse(context);\n+ }\n \n- } else {\n+ if (mapping.root.isEnabled() == false) {\n // entire type is disabled\n parser.skipChildren();\n+ } else if (emptyDoc == false) {\n+ Mapper update = parseObject(context, mapping.root);\n+ if (update != null) {\n+ context.addDynamicMappingsUpdate(update);\n+ }\n+ }\n+\n+ for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n+ metadataMapper.postParse(context);\n }\n \n // try to parse the next token, this should be null if the object is ended properly", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -19,12 +19,9 @@\n \n package org.elasticsearch.index.mapper;\n \n-import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.common.xcontent.json.JsonXContent;\n-import org.elasticsearch.common.xcontent.json.JsonXContentParser;\n+import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.test.ESSingleNodeTestCase;\n \n // TODO: make this a real unit test\n@@ -37,11 +34,12 @@ public void testTypeDisabled() throws Exception {\n DocumentMapper mapper = mapperParser.parse(mapping);\n \n BytesReference bytes = XContentFactory.jsonBuilder()\n- .startObject()\n+ .startObject().startObject(\"foo\")\n .field(\"field\", \"1234\")\n- .endObject().bytes();\n+ .endObject().endObject().bytes();\n ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n assertNull(doc.rootDoc().getField(\"field\"));\n+ assertNotNull(doc.rootDoc().getField(UidFieldMapper.NAME));\n }\n \n public void testFieldDisabled() throws Exception {\n@@ -60,5 +58,6 @@ public void testFieldDisabled() throws Exception {\n ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n assertNull(doc.rootDoc().getField(\"foo\"));\n assertNotNull(doc.rootDoc().getField(\"bar\"));\n+ assertNotNull(doc.rootDoc().getField(UidFieldMapper.NAME));\n }\n }", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java", "status": "modified" } ] }
{ "body": "Hi folks,\n\nI just encountered an issue with elasticsearch 1.7.1. I tried to upgrade from ES 1.4.4 to 1.7.1.\n\nES does run as unprivileged user.\n\nError output while starting:\n\n> bin/elasticsearch: HOSTNAME=: is not an identifier\n\nIt seems that in contrast to 1.4.4 there is line 151 of\n\n> elasticsearch-1.7.1/bin/elasticsearch\n\nwhich tries to export the environment variable \"HOSTNAME\". The value is assigned via \"hostname -s\" which is not available in Solaris.\n\nThis feature seems to have been introduced in #8470. Since I have defined my node name via \"node.name\" this is not quite an issue with exporting the HOSTNAME variable but rather with the switch that is not supported on Solaris.\n\nAnd yes. I apologize for still being forced to use Solaris, but that's the way it is ;)\n\nCheers\nThomas\n", "comments": [ { "body": "Hi @TwizzyDizzy \n\nAny ideas what we could use instead on Solaris?\n", "created_at": "2015-08-25T16:08:16Z" }, { "body": "Looks like we have a working patch so I removed feedback_needed. Then I thought better of it. @TwizzyDizzy, can you try `hostname | cut -d. -f1` on a solaris 10 box?\n", "created_at": "2015-08-25T18:03:33Z" }, { "body": "Hi @nik9000,\n\nthanks for getting back to me that fast!\n\n> hostname | cut -d. -f1\n\nThis should totally work, but let me check it at work tomorrow to be totally sure. I somehow remember some glitches with either the delimiter or the field number, when it comes to the exact syntax.\n\nWill get back to you tomorrow :)\n\nCheers\nThomas\n", "created_at": "2015-08-25T18:09:57Z" }, { "body": "@TwizzyDizzy I've tested Elasticsearch with `hostname | cut -d. -f1` on a Solaris VM. You should be able to get by with replacing `hostname -s` with `hostname | cut -d. -f1` in the `elasticsearch` and `plugin` scripts in the `bin` subfolder of the Elasticsearch home. Please see the pull request #13109 I've opened for the full details.\n\n```\n-bash-4.3$ uname -a\nSunOS shortname.domainname.tld 5.10 Generic_147148-26 i86pc i386 i86pc\n-bash-4.3$ ./bin/elasticsearch\n[2015-08-25 23:30:59,373][INFO ][org.elasticsearch.node ] [shortname] started\n```\n", "created_at": "2015-08-25T18:27:58Z" }, { "body": "Ah well, if you've tested it in a VM then I guess it's fine.\n\nCheers\nThomas\n", "created_at": "2015-08-25T18:32:13Z" }, { "body": "Just for the full picture: the\n\n> | cut -d. -f1\n\npart works on my machine, too.\n\nCheers\nThomas\n", "created_at": "2015-08-26T08:31:34Z" }, { "body": "> part works on my machine, too.\n\nThanks!\n", "created_at": "2015-08-26T13:38:20Z" } ], "number": 13107, "title": "\"hostname -s\" not available in at least Solaris 10" }
{ "body": "This commit increases the portability of extracting the short hostname\non a Unix-like system.\n\nCloses #13107\n", "number": 13109, "review_comments": [], "title": "More portable extraction of short hostname" }
{ "commits": [ { "message": "More portable extraction of short hostname\n\nThis commit increases the portability of extracting the short hostname\non a Unix-like system.\n\nCloses #13107" } ], "files": [ { "diff": "@@ -121,7 +121,10 @@ case `uname` in\n ;;\n esac\n \n-export HOSTNAME=`hostname -s`\n+# full hostname passed through cut for portability on systems that do not support hostname -s\n+# export on separate line for shells that do not support combining definition and export\n+HOSTNAME=`hostname | cut -d. -f1`\n+export HOSTNAME\n \n # manual parsing to find out, if process should be detached\n daemonized=`echo $* | grep -E -- '(^-d |-d$| -d |--daemonize$|--daemonize )'`", "filename": "distribution/src/main/resources/bin/elasticsearch", "status": "modified" }, { "diff": "@@ -103,6 +103,9 @@ if [ -e \"$CONF_FILE\" ]; then\n esac\n fi\n \n-export HOSTNAME=`hostname -s`\n+# full hostname passed through cut for portability on systems that do not support hostname -s\n+# export on separate line for shells that do not support combining definition and export\n+HOSTNAME=`hostname | cut -d. -f1`\n+export HOSTNAME\n \n eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home=\"\\\"$ES_HOME\\\"\" $properties -cp \"\\\"$ES_HOME/lib/*\\\"\" org.elasticsearch.plugins.PluginManagerCliParser $args", "filename": "distribution/src/main/resources/bin/plugin", "status": "modified" } ] }
{ "body": "I hit this while smoke testing the 2.0.0 beta RC.\n\nToday, if you have the same setting in `config/elasticsearch.yml` more than once, I think the last one silently \"wins\".\n\nBut I think this is too lenient/trappy? E.g. someone may open the file, insert what they think is a new setting, but then it doesn't \"take\" because that setting already appears later in the file.\n\nI think we should make this a hard error on startup instead.\n", "comments": [ { "body": "+1\n", "created_at": "2015-08-24T16:21:41Z" }, { "body": "+1\n\nNot only for different values, but the same setting twice should be an error too. We should fail early, before the user tries to change the setting (in just one place) and then hits the proposed error here.\n", "created_at": "2015-08-24T16:24:04Z" }, { "body": "> Not only for different values, but the same setting twice should be an error too. We should fail early, \n\n++\n", "created_at": "2015-08-24T16:29:58Z" } ], "number": 13079, "title": "Same setting with different values in elasticsearch.yml should be an error" }
{ "body": "This commit changes the startup behavior of Elasticsearch to throw an\nexception if duplicate settings keys are detected in the Elasticsearch\nconfiguration file.\n\nCloses #13079\n", "number": 13086, "review_comments": [ { "body": "Is this the default?\n", "created_at": "2015-08-24T18:45:58Z" }, { "body": "Drop the `@Test` because other tests are dropping it?\n", "created_at": "2015-08-24T18:46:21Z" }, { "body": "No, it has to be specified.\n", "created_at": "2015-08-24T23:58:27Z" }, { "body": "Can we just use a simple try/catch here? I don't see why we need to use hamcrest complicatedness...\n", "created_at": "2015-08-25T00:08:50Z" }, { "body": "or junit for that matter. try/catch is much more readable (and the way most other tests do this)\n", "created_at": "2015-08-25T00:10:51Z" }, { "body": "do we need this in a separate file? it could be inline in the test as a string stream?\n", "created_at": "2015-08-25T00:12:40Z" }, { "body": "Agree. I've pushed a commit to inline the test content.\n", "created_at": "2015-08-25T00:21:17Z" }, { "body": "ACK. I pushed a commit that I agree is simpler.\n", "created_at": "2015-08-25T00:40:49Z" }, { "body": "we can remove the test annotations here too?\n", "created_at": "2015-08-25T00:41:37Z" }, { "body": "Yes. Done.\n", "created_at": "2015-08-25T00:48:58Z" }, { "body": "Done.\n", "created_at": "2015-08-25T00:55:55Z" } ], "title": "Detect duplicate settings keys on startup" }
{ "commits": [ { "message": "Detect duplicate settings keys on startup\n\nThis commit changes the startup behavior of Elasticsearch to throw an\nexception if duplicate settings keys are detected in the Elasticsearch\nconfiguration file.\n\nCloses #13079" } ], "files": [ { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.common.settings.loader;\n \n import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.io.FastStringReader;\n import org.elasticsearch.common.io.stream.StreamInput;\n \n@@ -36,7 +37,7 @@ public class PropertiesSettingsLoader implements SettingsLoader {\n \n @Override\n public Map<String, String> load(String source) throws IOException {\n- Properties props = new Properties();\n+ Properties props = new NoDuplicatesProperties();\n FastStringReader reader = new FastStringReader(source);\n try {\n props.load(reader);\n@@ -52,7 +53,7 @@ public Map<String, String> load(String source) throws IOException {\n \n @Override\n public Map<String, String> load(byte[] source) throws IOException {\n- Properties props = new Properties();\n+ Properties props = new NoDuplicatesProperties();\n StreamInput stream = StreamInput.wrap(source);\n try {\n props.load(stream);\n@@ -65,4 +66,15 @@ public Map<String, String> load(byte[] source) throws IOException {\n IOUtils.closeWhileHandlingException(stream);\n }\n }\n+\n+ class NoDuplicatesProperties extends Properties {\n+ @Override\n+ public synchronized Object put(Object key, Object value) {\n+ Object previousValue = super.put(key, value);\n+ if (previousValue != null) {\n+ throw new ElasticsearchParseException(\"duplicate settings key [{}] found, previous value [{}], current value [{}]\", key, previousValue, value);\n+ }\n+ return previousValue;\n+ }\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/loader/PropertiesSettingsLoader.java", "status": "modified" }, { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.common.settings.loader;\n \n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.xcontent.XContent;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.XContentType;\n@@ -141,7 +140,18 @@ private void serializeValue(Map<String, String> settings, StringBuilder sb, List\n sb.append(pathEle).append('.');\n }\n sb.append(fieldName);\n- settings.put(sb.toString(), parser.text());\n+ String key = sb.toString();\n+ String currentValue = parser.text();\n+ String previousValue = settings.put(key, currentValue);\n+ if (previousValue != null) {\n+ throw new ElasticsearchParseException(\n+ \"duplicate settings key [{}] found at line number [{}], column number [{}], previous value [{}], current value [{}]\",\n+ key,\n+ parser.getTokenLocation().lineNumber,\n+ parser.getTokenLocation().columnNumber,\n+ previousValue,\n+ currentValue\n+ );\n+ }\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/common/settings/loader/XContentSettingsLoader.java", "status": "modified" }, { "diff": "@@ -19,19 +19,19 @@\n \n package org.elasticsearch.common.settings.loader;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.settings.SettingsException;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.Test;\n \n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n-import static 
org.hamcrest.MatcherAssert.assertThat;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n *\n */\n public class JsonSettingsLoaderTests extends ESTestCase {\n-\n @Test\n public void testSimpleJsonSettings() throws Exception {\n String json = \"/org/elasticsearch/common/settings/loader/test-settings.json\";\n@@ -50,4 +50,17 @@ public void testSimpleJsonSettings() throws Exception {\n assertThat(settings.getAsArray(\"test1.test3\")[0], equalTo(\"test3-1\"));\n assertThat(settings.getAsArray(\"test1.test3\")[1], equalTo(\"test3-2\"));\n }\n+\n+ public void testDuplicateKeysThrowsException() {\n+ String json = \"{\\\"foo\\\":\\\"bar\\\",\\\"foo\\\":\\\"baz\\\"}\";\n+ try {\n+ settingsBuilder()\n+ .loadFromSource(json)\n+ .build();\n+ fail(\"expected exception\");\n+ } catch (SettingsException e) {\n+ assertEquals(e.getCause().getClass(), ElasticsearchParseException.class);\n+ assertTrue(e.toString().contains(\"duplicate settings key [foo] found at line number [1], column number [13], previous value [bar], current value [baz]\"));\n+ }\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.settings.loader;\n+\n+import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.test.ESTestCase;\n+\n+import java.io.IOException;\n+import java.nio.charset.Charset;\n+\n+public class PropertiesSettingsLoaderTests extends ESTestCase {\n+ public void testDuplicateKeyFromStringThrowsException() throws IOException {\n+ PropertiesSettingsLoader loader = new PropertiesSettingsLoader();\n+ try {\n+ loader.load(\"foo=bar\\nfoo=baz\");\n+ fail(\"expected exception\");\n+ } catch (ElasticsearchParseException e) {\n+ assertEquals(e.getMessage(), \"duplicate settings key [foo] found, previous value [bar], current value [baz]\");\n+ }\n+ }\n+\n+ public void testDuplicateKeysFromBytesThrowsException() throws IOException {\n+ PropertiesSettingsLoader loader = new PropertiesSettingsLoader();\n+ try {\n+ loader.load(\"foo=bar\\nfoo=baz\".getBytes(Charset.defaultCharset()));\n+ } catch (ElasticsearchParseException e) {\n+ assertEquals(e.getMessage(), \"duplicate settings key [foo] found, previous value [bar], current value [baz]\");\n+ }\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/common/settings/loader/PropertiesSettingsLoaderTests.java", "status": "added" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.settings.loader;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.SettingsException;\n import org.elasticsearch.test.ESTestCase;\n@@ -31,7 +32,6 @@\n *\n */\n public class YamlSettingsLoaderTests extends ESTestCase {\n-\n @Test\n public void testSimpleYamlSettings() throws Exception {\n String yaml = \"/org/elasticsearch/common/settings/loader/test-settings.yml\";\n@@ -66,4 +66,17 @@ public void testIndentationWithExplicitDocumentStart() {\n .loadFromStream(yaml, getClass().getResourceAsStream(yaml))\n .build();\n }\n-}\n\\ No newline at end of file\n+\n+ public void testDuplicateKeysThrowsException() {\n+ String yaml = \"foo: bar\\nfoo: baz\";\n+ try {\n+ settingsBuilder()\n+ .loadFromSource(yaml)\n+ .build();\n+ fail(\"expected exception\");\n+ } catch (SettingsException e) {\n+ assertEquals(e.getCause().getClass(), ElasticsearchParseException.class);\n+ assertTrue(e.toString().contains(\"duplicate settings key [foo] found at line number [2], column number [6], previous value [bar], current value [baz]\"));\n+ }\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java", "status": "modified" } ] }
{ "body": "In integration tests, using `enabled: false` on a root document type makes document indexing fails.\n\nHere is the mapping I use:\n\n```\n\"my_doc_type\": {\n \"enabled\": false\n}\n```\n\nBut then indexing a document throws a MapperParsingException with the following stack:\n\n``` java\nMapperParsingException[failed to parse]; nested: AssertionError;\n at __randomizedtesting.SeedInfo.seed([1B10B6999E316CD7:4AED1D00C8EC9444]:0)\n at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:155)\n at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:317)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:313)\n ...\n```\n\nThe assertion is located in the `DocumentParser` class:\n\n``` java\n // try to parse the next token, this should be null if the object is ended properly\n // but will throw a JSON exception if the extra tokens is not valid JSON (this will be handled by the catch)\n if (Version.indexCreated(indexSettings).onOrAfter(Version.V_2_0_0_beta1)\n && source.parser() == null && parser != null) {\n // only check for end of tokens if we created the parser here\n token = parser.nextToken();\n assert token == null; // double check, in tests, that we didn't end parsing early\n }\n```\n\nWhen assertion are disabled, everything is OK. I did a simple test to reproduce the issue here:\nhttps://github.com/tlrx/elasticsearch/commit/94712121dd524c48f2dc40c787efd1eb6375d799\n\nThis assertion has been added recently and I don't know much about this part of code, maybe @rjernst can help here?\n", "comments": [ { "body": "This seems like a bug maybe? why are we not consuming the token there?\n", "created_at": "2015-08-20T16:29:23Z" }, { "body": "Looks like it has been introduced by #11414\n", "created_at": "2015-08-20T16:39:42Z" }, { "body": "I think you misunderstood. The assertion code is correct, it means there was leftover stuff at the end of parsing. But _why_ is there leftover stuff in this case. what is the document you are sending, I only see the mapping in the original description? My hunch is, we need to fully consume the parser for mappings that are disabled.\n", "created_at": "2015-08-20T17:16:58Z" }, { "body": "What I don't get is why I can create this mapping and index documents in elasticsearch but cannot do exactly the same thing in integration tests.\n\nHere is the document I index: https://github.com/tlrx/elasticsearch/commit/94712121dd524c48f2dc40c787efd1eb6375d799#diff-82784da2187574da5398ae215c8a6795R276\n\nThis test fails on my computer and I don't get why.\n", "created_at": "2015-08-20T17:51:03Z" }, { "body": "Assertions are enabled in tests, but not when running with bin/elasticsearch.\n", "created_at": "2015-08-20T17:57:41Z" }, { "body": "Why isn't this a real check\n", "created_at": "2015-08-20T18:00:50Z" }, { "body": "@rjernst Yes I know, but the test I linked to seems good to me and I don't see why this assertion throws up. Can you please have a quick look to the code and/or try to reproduce?\n", "created_at": "2015-08-20T18:03:11Z" }, { "body": "this is a bug. The following fails with assertions enabled:\n\n```\nPUT my_index\n{\n \"mappings\": {\n \"my_type\": {\n \"enabled\": false\n }\n }\n}\n\nPUT my_index/my_type/1\n{\n \"foo\": \"bar\"\n}\n```\n\nThe whole type is disabled, which means the _source should be stored but nothing should be indexed. 
It looks like we're just skipping parsing completely instead of consuming the tokens.\n", "created_at": "2015-08-24T12:27:35Z" }, { "body": "I have a test reproducing the issue and am investigating.\n", "created_at": "2015-08-24T16:25:16Z" }, { "body": "@rjernst Thanks for the work you have done :) Unfortunately I reopen this bug because indexing a document (like @clinton suggested) now fails with a `NullPointerException` (I tested on latest snapshot - build e8834cc78c13a507e2851908f8d51489e7888570).\n\nAs far as I understand the code it seems that your fix skips the document parsing when the type is disabled. So field mappers like `UidFieldMapper` are not used, resulting in a null `uid` thrown [here](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/index/shard/IndexShard.java#L515). \n\nI'm not familiar with this part of the code but if you don't have time to look further please let me know and I'll try to have a look.\n", "created_at": "2015-08-26T08:11:25Z" } ], "number": 13017, "title": "Using enabled:false on document type throws exception in tests" }
{ "body": "Currently when an entire type is disabled, our document parser will end\nparsing on the first field of the document. This blows up the recently\nadded check that parsing did not silently skip any tokens (ie whether\nthere was garbage leftover).\n\nThis change fixes the parser to correctly skip the entire document when\nthe type is disabled.\n\ncloses #13017\n", "number": 13085, "review_comments": [], "title": "Fix document parsing to properly ignore entire type when disabled" }
{ "commits": [ { "message": "Mappings: Fix document parsing to properly ignore entire type when disabled\n\nCurrently when an entire type is disabled, our document parser will end\nparsing on the first field of the document. This blows up the recently\nadded check that parsing did not silently skip any tokens (ie whether\nthere was garbage leftover).\n\nThis change fixes the parser to correctly skip the entire document when\nthe type is disabled.\n\ncloses #13017" } ], "files": [ { "diff": "@@ -100,33 +100,36 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n context.reset(parser, new ParseContext.Document(), source);\n \n // will result in START_OBJECT\n- int countDownTokens = 0;\n XContentParser.Token token = parser.nextToken();\n if (token != XContentParser.Token.START_OBJECT) {\n throw new MapperParsingException(\"Malformed content, must start with an object\");\n }\n- boolean emptyDoc = false;\n- token = parser.nextToken();\n- if (token == XContentParser.Token.END_OBJECT) {\n- // empty doc, we can handle it...\n- emptyDoc = true;\n- } else if (token != XContentParser.Token.FIELD_NAME) {\n- throw new MapperParsingException(\"Malformed content, after first object, either the type field or the actual properties should exist\");\n- }\n-\n- for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n- metadataMapper.preParse(context);\n- }\n+ if (mapping.root.isEnabled()) {\n+ boolean emptyDoc = false;\n+ token = parser.nextToken();\n+ if (token == XContentParser.Token.END_OBJECT) {\n+ // empty doc, we can handle it...\n+ emptyDoc = true;\n+ } else if (token != XContentParser.Token.FIELD_NAME) {\n+ throw new MapperParsingException(\"Malformed content, after first object, either the type field or the actual properties should exist\");\n+ }\n \n- if (!emptyDoc) {\n- Mapper update = parseObject(context, mapping.root);\n- if (update != null) {\n- context.addDynamicMappingsUpdate(update);\n+ for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n+ metadataMapper.preParse(context);\n+ }\n+ if (emptyDoc == false) {\n+ Mapper update = parseObject(context, mapping.root);\n+ if (update != null) {\n+ context.addDynamicMappingsUpdate(update);\n+ }\n+ }\n+ for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n+ metadataMapper.postParse(context);\n }\n- }\n \n- for (int i = 0; i < countDownTokens; i++) {\n- parser.nextToken();\n+ } else {\n+ // entire type is disabled\n+ parser.skipChildren();\n }\n \n // try to parse the next token, this should be null if the object is ended properly\n@@ -135,12 +138,11 @@ private ParsedDocument innerParseDocument(SourceToParse source) throws MapperPar\n && source.parser() == null && parser != null) {\n // only check for end of tokens if we created the parser here\n token = parser.nextToken();\n- assert token == null; // double check, in tests, that we didn't end parsing early\n+ if (token != null) {\n+ throw new IllegalArgumentException(\"Malformed content, found extra data after parsing: \" + token);\n+ }\n }\n \n- for (MetadataFieldMapper metadataMapper : mapping.metadataMappers) {\n- metadataMapper.postParse(context);\n- }\n } catch (Throwable e) {\n // if its already a mapper parsing exception, no need to wrap it...\n if (e instanceof MapperParsingException) {", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java", "status": "modified" }, { "diff": "@@ -0,0 +1,64 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper;\n+\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.common.xcontent.json.JsonXContentParser;\n+import org.elasticsearch.test.ESSingleNodeTestCase;\n+\n+// TODO: make this a real unit test\n+public class DocumentParserTests extends ESSingleNodeTestCase {\n+\n+ public void testTypeDisabled() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .field(\"enabled\", false).endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(mapping);\n+\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"field\", \"1234\")\n+ .endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNull(doc.rootDoc().getField(\"field\"));\n+ }\n+\n+ public void testFieldDisabled() throws Exception {\n+ DocumentMapperParser mapperParser = createIndex(\"test\").mapperService().documentMapperParser();\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"foo\").field(\"enabled\", false).endObject()\n+ .startObject(\"bar\").field(\"type\", \"integer\").endObject()\n+ .endObject().endObject().endObject().string();\n+ DocumentMapper mapper = mapperParser.parse(mapping);\n+\n+ BytesReference bytes = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"foo\", \"1234\")\n+ .field(\"bar\", 10)\n+ .endObject().bytes();\n+ ParsedDocument doc = mapper.parse(\"test\", \"type\", \"1\", bytes);\n+ assertNull(doc.rootDoc().getField(\"foo\"));\n+ assertNotNull(doc.rootDoc().getField(\"bar\"));\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/index/mapper/DocumentParserTests.java", "status": "added" } ] }
{ "body": "SimpleSortTests.testIssue8226 for example fails about once a week. Example failure:\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_1x_metal/3129/\n\nI can reproduce it locally (although very rarely) with some additional logging (action.search.type: TRACE). \n\nHere is a brief analysis of what happened. Would be great if someone could take a look and let me know if this makes sense.\n\nFailure:\n\n```\n1> REPRODUCE WITH : mvn clean test -Dtests.seed=774A2866F1B6042D -Dtests.class=org.elasticsearch.search.sort.SimpleSortTests -Dtests.method=\"testIssue8226 {#76 seed=[774A2866F1B6042D:ACB4FF9F8C8CA341]}\" -Des.logger.level=DEBUG -Des.node.mode=network -Dtests.security.manager=true -Dtests.nightly=false -Dtests.client.ratio=0.0 -Dtests.heap.size=512m -Dtests.jvm.argline=\"-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true\" -Dtests.locale=fi_FI -Dtests.timezone=Etc/GMT+9 -Dtests.processors=4\n 1> Throwable:\n 1> java.lang.AssertionError: One or more shards were not successful but didn't trigger a failure\n 1> Expected: <47>\n 1> but: was <46>\n```\n\nHere is an example failure in detail, the relevant parts of the logs are below:\n## State\n\nnode_0 is master.\n[test_5][0] is relocating from node_1 to node_0.\nCluster state 3673 has the shard as relocating, in cluster state 3674 it is started.\nnode_0 is the coordinating node for the search request.\n\nIn brief, the request fails for shard [test_5][0] because node_0 operates on an older cluster state 3673 when processing the search request, while node_1 is already on 3674.\n## Course of events:\n1. node_0 sends shard started, but the shard is still in state POST_RECOVERY and will remain so until it receives the new cluster state and applies it locally\n2. node_0(master) receives the shard started request and publishes the new cluster state 3674 to node_0 and node_1\n3. node_1 receives the cluster state 3674 and applies it locally\n4. node_0 sends search request for [test_5][0] to node_1 because according to cluster state 3673 the shard is there and relocating\n -> request fails with IndexShardMissingException because node_1 already applied cluster state 3674 and deleted the shard.\n5. node_0 then sends request for [test_5][0] to node_0 because the shard is there as well (according to cluster state 3673 it is and initializing)\n -> request fails with IllegalIndexShardStateException because node_0 has not yet processed cluster state 3674 and therefore the shard is in POST_RECOVERY instead of STARTED\n No shard failure is logged because IndexShardMissingException and IllegalIndexShardStateException are explicitly excluded from shard failures.\n6. node_0 finally also gets to process the new cluster state and moves the shard [test_5][0] to STARTED but it is too late\n\nThis is a very rare condition and maybe too bad on client side because the information that one shard did not deliver results is there although it is not explicitly listed as shard failure. 
We can probably make the test pass easily be just waiting for relocations before executing the search request but that seems wrong because any search request can fail this way.\n## Sample log\n\n```\n[....]\n\n 1> [2015-01-26 09:27:14,435][DEBUG][indices.recovery ] [node_0] [test_5][0] recovery completed from [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}], took [84ms]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] sending shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,435][DEBUG][cluster.action.shard ] [node_0] received shard started for [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: execute\n 1> [2015-01-26 09:27:14,436][DEBUG][cluster.action.shard ] [node_0] [test_5][0] will apply shard started [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING], indexUUID [E3T8J7CaRkyK533W0hMBPw], reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]\n\n\n[....]\n\n\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] cluster state updated, version [3674], source [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]\n 1> [2015-01-26 09:27:14,441][DEBUG][cluster.service ] [node_0] publishing cluster state version 3674\n 1> [2015-01-26 09:27:14,442][DEBUG][discovery.zen.publish ] [node_1] received cluster state version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] processing [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]: execute\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] cluster state updated, version [3674], source [zen-disco-receive(from master [[node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}])]\n 1> [2015-01-26 09:27:14,443][DEBUG][cluster.service ] [node_1] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,443][DEBUG][indices.cluster ] [node_1] [test_5][0] removing shard (not allocated)\n 1> [2015-01-26 09:27:14,443][DEBUG][index ] [node_1] [test_5] [0] closing... 
(reason: [removing shard (not allocated)])\n 1> [2015-01-26 09:27:14,443][INFO ][test.store ] [node_1] [test_5][0] Shard state before potentially flushing is STARTED\n 1> [2015-01-26 09:27:14,453][DEBUG][search.sort ] cluster state:\n 1> version: 3673\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3006):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (1):\n 1> 13638/URGENT/shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], 
relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]/13ms\n 1>\n\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_3][3]\n 1> [2015-01-26 09:27:14,460][TRACE][action.search.type ] [node_0] [test_5][0], node[G4AEDzbrRae5BC_UD9zItA], relocating [GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[RELOCATING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [false]\n 1> org.elasticsearch.transport.RemoteTransportException: [node_1][inet[/192.168.2.102:9401]][indices:data/read/search[phase/dfs]]\n 1> Caused by: org.elasticsearch.index.IndexShardMissingException: [test_5][0] missing\n 1> at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:203)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:539)\n 1> at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:757)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$SearchDfsTransportHandler.messageReceived(SearchServiceTransportAction.java:748)\n 1> at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:275)\n 1> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_6][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_5][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_7][1]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_3][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_8][2]\n 1> [2015-01-26 09:27:14,455][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_1][0]\n\n\n[...]\n\n\n 1> [2015-01-26 09:27:14,463][TRACE][action.search.type ] [node_0] [test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1469040a] lastShard [true]\n 1> org.elasticsearch.index.shard.IllegalIndexShardStateException: [test_5][0] CurrentState[POST_RECOVERY] operations only allowed when started/relocated\n 1> at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:839)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:651)\n 1> at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:647)\n 1> at org.elasticsearch.search.SearchService.createContext(SearchService.java:543)\n 1> at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:523)\n 1> at org.elasticsearch.search.SearchService.executeDfsPhase(SearchService.java:208)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:197)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$3.call(SearchServiceTransportAction.java:194)\n 1> at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 1> at java.lang.Thread.run(Thread.java:745)\n 1> [2015-01-26 09:27:14,459][TRACE][action.search.type ] [node_0] got first-phase result from [G4AEDzbrRae5BC_UD9zItA][test_2][4]\n\n\n\n[...]\n\n\n\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] set local cluster state to version 3674\n 1> [2015-01-26 09:27:14,493][DEBUG][index.shard ] [node_0] [test_5][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: execute\n 1> [2015-01-26 09:27:14,493][DEBUG][river.cluster ] [node_0] processing [reroute_rivers_node_changed]: no change in cluster_state\n 1> [2015-01-26 09:27:14,493][DEBUG][cluster.service ] [node_0] processing [shard-started ([test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], relocating [G4AEDzbrRae5BC_UD9zItA], [P], s[INITIALIZING]), reason [after recovery (replica) from node [[node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}]]]: done applying updated cluster_state (version: 3674)\n 1> [2015-01-26 09:27:14,456][TRACE][action.search.type ] [node_0] got first-phase result from [GQ6yYxmyRT-sfvT0cmuqQQ][test_2][3]\n\n\n[...]\n\n 1> [2015-01-26 09:27:14,527][DEBUG][search.sort ] cluster state:\n 1> version: 3674\n 1> meta data version: 2043\n 1> nodes:\n 1> [node_1][G4AEDzbrRae5BC_UD9zItA][schmusi][inet[/192.168.2.102:9401]]{mode=network, enable_custom_paths=true}\n 1> [node_0][GQ6yYxmyRT-sfvT0cmuqQQ][schmusi][inet[/192.168.2.102:9400]]{mode=network, enable_custom_paths=true}, local, master\n 1> routing_table (version 3007):\n 1> -- index [test_4]\n 1> ----shard_id [test_4][2]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][7]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][0]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][3]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][1]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][5]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_4][6]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][6], 
node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_4][4]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_3]\n 1> ----shard_id [test_3][4]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][0]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][3]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][1]\n 1> --------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][5]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_3][6]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_3][2]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_2]\n 1> ----shard_id [test_2][4]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][0]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_2][3]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][1]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][5]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_2][2]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_1]\n 1> ----shard_id [test_1][0]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_1][1]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_0]\n 1> ----shard_id [test_0][2]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][0]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][7]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][3]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][1]\n 1> --------[test_0][1], 
node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][5]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_0][6]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> ----shard_id [test_0][4]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_8]\n 1> ----shard_id [test_8][0]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_8][1]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_8][2]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> -- index [test_7]\n 1> ----shard_id [test_7][4]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][0]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_7][3]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][1]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_7][2]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1>\n 1> -- index [test_6]\n 1> ----shard_id [test_6][0]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][3]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][1]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_6][2]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1>\n 1> -- index [test_5]\n 1> ----shard_id [test_5][0]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> ----shard_id [test_5][3]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][1]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ----shard_id [test_5][2]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1>\n 1> routing_nodes:\n 1> -----node_id[GQ6yYxmyRT-sfvT0cmuqQQ][V]\n 1> --------[test_4][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_4][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_4][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> 
--------[test_3][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_3][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_3][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_2][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_2][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_1][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][7], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][5], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_0][6], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_0][4], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_8][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_7][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_6][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][3], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][1], node[GQ6yYxmyRT-sfvT0cmuqQQ], [R], s[STARTED]\n 1> --------[test_6][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][0], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> --------[test_5][2], node[GQ6yYxmyRT-sfvT0cmuqQQ], [P], s[STARTED]\n 1> -----node_id[G4AEDzbrRae5BC_UD9zItA][V]\n 1> --------[test_4][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][7], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][5], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_4][6], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_4][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_3][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_3][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_2][3], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][1], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_2][5], node[G4AEDzbrRae5BC_UD9zItA], 
[R], s[STARTED]\n 1> --------[test_2][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_1][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][0], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][7], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][5], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_0][6], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_0][4], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_8][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][4], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_7][2], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][0], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_6][2], node[G4AEDzbrRae5BC_UD9zItA], [R], s[STARTED]\n 1> --------[test_5][3], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> --------[test_5][1], node[G4AEDzbrRae5BC_UD9zItA], [P], s[STARTED]\n 1> ---- unassigned\n 1>\n 1> tasks: (0):\n 1>\n\n[...]\n```\n", "comments": [ { "body": "A similar test failure:\n\n`org.elasticsearch.deleteByQuery.DeleteByQueryTests.testDeleteAllOneIndex`\n\nhttp://build-us-00.elasticsearch.org/job/es_g1gc_master_metal/2579/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_centos/2640/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\nhttp://build-us-00.elasticsearch.org/job/es_core_master_regression/1263/testReport/junit/org.elasticsearch.deleteByQuery/DeleteByQueryTests/testDeleteAllOneIndex/\n\nIt fails on the:\n\n``` java\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n```\n\nWhich I believe relates to the relocation issue Britta mentioned.\n", "created_at": "2015-01-27T00:21:32Z" }, { "body": "I think this is unrelated. I actually fixed the DeleteByQueryTests yesterday (c3f1982f21150336f87b7b4def74e019e8bdac18) and this commit does not seem to be in the build you linked to.\n\nA brief explanation: DeleteByQuery is a write operation. The shard header returned and checked in DeleteByQueryTests is different from the one return for search requests. The reason why DeleteByQuery failed is because I added the check \n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.totalNumShards));\n\nbefore which was wrong because there was no ensureGreen() so some of the replicas might not have ben initialized yet. I fixed this in c3f1982f2 by instead checking\n\nassertThat(shardInfo.getSuccessful(), greaterThanOrEqualTo(numShards.numPrimaries));\n", "created_at": "2015-01-27T08:57:22Z" }, { "body": "I wonder if we should just allow reads in the POST_RECOVERY phase. At that point the shards is effectively ready to do everything it needs to do. 
@brwe this will solve the issue, right?\n", "created_at": "2015-01-27T10:36:17Z" }, { "body": "@brwe okay, does that mean I can unmute the `DeleteByQueryTests.testDeleteAllOneIndex`?\n", "created_at": "2015-01-27T16:42:41Z" }, { "body": "yes\n", "created_at": "2015-01-27T16:45:32Z" }, { "body": "Unmuted the `DeleteByQueryTests.testDeleteAllOneIndex` test\n", "created_at": "2015-01-27T17:42:04Z" }, { "body": "@bleskes I think that would fix it. However, before I push I want to try and write a test that reproduces reliably. Will not do before next week.\n", "created_at": "2015-01-28T15:25:51Z" }, { "body": "@brwe please ping before starting on this. I want to make sure that we capture the original issue which caused us to introduce POST_RECOVERY. I don't recall exactly what the problem was (it was refresh related) and I think it was solved by a more recent change to how refresh works (#6545) but it requires careful thought\n", "created_at": "2015-02-24T12:48:00Z" }, { "body": "@bleskes ping :)\nI finally came back to this and wrote a test that reproduces the failure reliably (#10194) but I did not quite get what you meant by \"capture the original issue\". Can you elaborate?\n", "created_at": "2015-03-20T21:11:20Z" }, { "body": "@kimchy do you recall why we can't read in that state?\n", "created_at": "2015-04-13T14:39:28Z" } ], "number": 9421, "title": "After relocation shards might temporarily not be searchable if still in POST_RECOVERY" }
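The comment thread above ends with an open question: why can a shard in POST_RECOVERY not serve reads, given that recovery has already finished? The stack trace earlier in the report shows the rejection coming from `IndexShard.readAllowed()`, which at the time only accepted the STARTED and RELOCATED states. The sketch below illustrates the relaxation being discussed; it is a self-contained toy with made-up names, not the actual `IndexShard` code.

```java
// Toy illustration of the idea floated above: treat POST_RECOVERY as readable,
// since the shard's engine is fully recovered at that point. All names here are
// hypothetical; the real check lives in IndexShard.readAllowed().
public class ReadAllowedSketch {

    enum ShardState { CREATED, RECOVERING, POST_RECOVERY, STARTED, RELOCATED, CLOSED }

    static boolean readAllowed(ShardState state) {
        // The failure in the log comes from a check that only accepts STARTED/RELOCATED;
        // the proposal is to also accept POST_RECOVERY.
        return state == ShardState.STARTED
                || state == ShardState.RELOCATED
                || state == ShardState.POST_RECOVERY;
    }

    public static void main(String[] args) {
        System.out.println(readAllowed(ShardState.POST_RECOVERY)); // true under the proposal
        System.out.println(readAllowed(ShardState.RECOVERING));    // still false
    }
}
```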
{ "body": "prerequisite to #9421\nsee also #12600\n", "number": 13068, "review_comments": [ { "body": "I know it's unrelated to this change but why don't we have defaults for these boolean they look the same in 90% fo the cases?\n", "created_at": "2015-08-25T09:12:11Z" }, { "body": "can you add a javadoc to this?\n", "created_at": "2015-08-25T09:15:29Z" }, { "body": "can we maybe have unittests for this - this looks pretty unittestable?\n", "created_at": "2015-08-25T09:16:17Z" }, { "body": "if you factor `replicatedBroadcastShardAction.execute` and `GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterService.state(), indexNameExpressionResolver.concreteIndices(clusterService.state(), request), null, null);\n` \nout of this you can get a nicely unit-testable class here having one method like `doInnerExecute(GroupSahrdsIterator iter, final ActionListener<Response> listener)` ? \n", "created_at": "2015-08-25T09:19:05Z" }, { "body": "I don't know. Change that in this pr or better in another one?\n", "created_at": "2015-08-28T12:47:53Z" }, { "body": "can we use the incoming state here?\n", "created_at": "2015-08-28T13:51:54Z" }, { "body": "also, why not use indexShards() now that we see this as a write op - then we don't need to change the visibility of the shards methond on Operation Routing\n", "created_at": "2015-08-28T13:55:31Z" }, { "body": "same comments as on flush... \n", "created_at": "2015-08-28T13:57:47Z" }, { "body": "we should reduce the timeout on the request to 0 (we don't wait to wait if things are not available) - also, can we make a test that make sure we get a quick response when there is no master/other block and or primary shard is not available?\n", "created_at": "2015-08-28T14:11:37Z" }, { "body": "same comment as the refresh action, can we lower the timeout to 0 and test it returns quickly if we have a block/primary not available?\n", "created_at": "2015-08-28T14:13:07Z" }, { "body": "can we make this an abstract method and also use a ShardIterator? (the grouping doesn't really makes sense IMHO). Also I think we can just iterate on all shards for the given indices, no? \n", "created_at": "2015-08-28T14:15:36Z" }, { "body": "can we add the action name here? useful in this base classes which may have multiple implementations.\n", "created_at": "2015-08-28T14:17:22Z" }, { "body": "also, we should log this out of the countdown no?\n", "created_at": "2015-08-28T14:18:36Z" }, { "body": "can we cache the cluster state we used when we started this operation? this way we know for sure the index is still in it.\n", "created_at": "2015-08-28T14:20:30Z" }, { "body": "can we please add the action here?\n", "created_at": "2015-08-28T14:22:19Z" }, { "body": "note that we don't see shard not available exceptions as a failure - we typically ignore those and just let the successful count be different than the total count. We have to check the type of failure and protect for it.\n", "created_at": "2015-08-28T14:24:59Z" }, { "body": "if we change the groupShardIterator to a ShardIterator - can this happen?\n", "created_at": "2015-08-28T14:25:27Z" }, { "body": "this can be protected now, right?\n", "created_at": "2015-08-28T14:30:44Z" }, { "body": "can we mark this as nullable (and doc when it's null)? also, can we move it next to the setter, and make the naming consistent with the rest of this class? 
(i.e., shardId)\n", "created_at": "2015-08-28T14:31:45Z" }, { "body": "I think it's OK to mark the shard as deleted - if there was some connectivity failure (i.e., the shard itself is not broken) _strictly_ speaking the shard is no longer a good copy as it doesn't maintain the \"all writes so far are visible to searches\" semantics. I can see that it's potentially a big deal for nothing (after 1s it will be OK) but I think we should for now do the right thing and fail the shard?\n", "created_at": "2015-08-28T14:34:20Z" }, { "body": "I think the comment is flipped - if the method returns true, shadow replicas will be skipped. I think we will be better off making this a positive - executeOnShadowReplicas? we can also make this method say \"shouldExecuteReplication\" and have the default implementation do `return IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) == false`. That will make the if where the method is being used simpler.\n\nAlso, we need to adapt the unit testing of this class for this (I don't think we have a test now for shadow replicas)\n", "created_at": "2015-08-28T14:38:20Z" }, { "body": "see comment above about this visibility change.\n", "created_at": "2015-08-28T14:39:12Z" }, { "body": "Skip my comment about ShardIterator, it inherits from ShardsIterator and thus needs ShardRouting instances (which are useless here). Instead, maybe just use a List<ShardId> as a return value for the new method. Also, thinking some more, maybe supply a default implementation that gives all shards for the indices back... \n", "created_at": "2015-08-28T18:45:40Z" }, { "body": "> can we use the incoming state here?\n\nyes\n\n> also, why not use indexShards() now that we see this as a write op\n\nyou mean `clusterService.operationRouting().indexShards()`? that one needs a type and id and we don't have that here. or is there another one that does not?\n", "created_at": "2015-08-31T07:55:04Z" }, { "body": "indeed the indexShards suggestion is bad - it is not the right construct here as it is tied to a single doc. Since we are after shard ids here (not grouping), I think we should simplify the API to return a list of shardIds (which will solve this too). See comments here: https://github.com/elastic/elasticsearch/pull/13068/files#r38203633\n", "created_at": "2015-08-31T08:11:34Z" }, { "body": "we have `ShardReplicationTests.testReplicationWithShadowIndex`. Is that enough or do we need another test?\n", "created_at": "2015-08-31T09:31:09Z" }, { "body": "I forgot about that test. I think it's good for now.\n", "created_at": "2015-08-31T09:58:11Z" }, { "body": "I added BroadcastReplicationTests. Let me know if this is what you meant.\n", "created_at": "2015-08-31T12:13:24Z" }, { "body": "I added BroadcastReplicationTests. Let me know if this is what you meant.\n", "created_at": "2015-08-31T12:13:29Z" }, { "body": "This is now a method that returns List<ShardId>: https://github.com/elastic/elasticsearch/pull/13068/files#diff-8ec8c1c769c4acb6f880e4e15d2b96f6R120 Is that what you meant?\n", "created_at": "2015-08-31T12:14:59Z" } ], "title": "Make refresh a replicated action" }
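The review comments above sketch the shape of the change: a broadcast-level action resolves one shard id per shard group, sends a replicated per-shard request to each with a 0ms timeout so unavailable shards answer immediately, and counts shard-not-available responses as missing copies rather than failures. The following is a rough, self-contained illustration of that fan-out pattern; class and method names are invented for the example and do not match the `TransportBroadcastReplicationAction` that the PR actually adds.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of broadcasting a refresh as per-shard replicated requests.
public class BroadcastReplicationSketch {

    interface ShardRefresher {
        /** Returns the number of copies refreshed, or -1 if the shard was unavailable. */
        int refresh(String index, int shardId, int copiesPerShard);
    }

    static void broadcast(String index, List<Integer> shardIds, int copiesPerShard,
                          ShardRefresher refresher) throws InterruptedException {
        AtomicInteger successful = new AtomicInteger();
        AtomicInteger failed = new AtomicInteger();
        int totalCopies = shardIds.size() * copiesPerShard;
        CountDownLatch latch = new CountDownLatch(shardIds.size());
        for (int shardId : shardIds) {
            // In the real action each per-shard request goes through the replication
            // machinery (primary first, then replicas) with a 0ms timeout so an
            // unassigned primary fails fast instead of stalling the whole broadcast.
            new Thread(() -> {
                int ok = refresher.refresh(index, shardId, copiesPerShard);
                if (ok >= 0) {
                    successful.addAndGet(ok);
                    failed.addAndGet(copiesPerShard - ok);
                }
                // An unavailable shard (ok < 0) is not counted as a failure; it only
                // makes the successful count smaller than the total, as a reviewer notes.
                latch.countDown();
            }).start();
        }
        latch.await();
        System.out.printf("total=%d successful=%d failed=%d%n",
                totalCopies, successful.get(), failed.get());
    }

    public static void main(String[] args) throws InterruptedException {
        // Shard 3 simulates a shard with no active copies.
        broadcast("test_5", Arrays.asList(0, 1, 2, 3), 2,
                (index, shard, copies) -> shard == 3 ? -1 : copies);
    }
}
```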
{ "commits": [ { "message": "Make refresh a replicated action\n\nprerequisite to #9421\nsee also #12600" } ], "files": [ { "diff": "@@ -39,7 +39,7 @@\n /**\n * Base class for write action responses.\n */\n-public abstract class ActionWriteResponse extends ActionResponse {\n+public class ActionWriteResponse extends ActionResponse {\n \n public final static ActionWriteResponse.ShardInfo.Failure[] EMPTY = new ActionWriteResponse.ShardInfo.Failure[0];\n ", "filename": "core/src/main/java/org/elasticsearch/action/ActionWriteResponse.java", "status": "modified" }, { "diff": "@@ -21,10 +21,7 @@\n \n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n \n-import java.io.IOException;\n import java.util.List;\n \n /**\n@@ -42,13 +39,4 @@ public class FlushResponse extends BroadcastResponse {\n super(totalShards, successfulShards, failedShards, shardFailures);\n }\n \n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- super.readFrom(in);\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- super.writeTo(out);\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/FlushResponse.java", "status": "modified" }, { "diff": "@@ -19,27 +19,27 @@\n \n package org.elasticsearch.action.admin.indices.flush;\n \n-import org.elasticsearch.action.support.broadcast.BroadcastShardRequest;\n+import org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.index.shard.ShardId;\n \n import java.io.IOException;\n \n-/**\n- *\n- */\n-class ShardFlushRequest extends BroadcastShardRequest {\n+public class ShardFlushRequest extends ReplicationRequest<ShardFlushRequest> {\n+\n private FlushRequest request = new FlushRequest();\n \n- ShardFlushRequest() {\n+ public ShardFlushRequest(FlushRequest request) {\n+ super(request);\n+ this.request = request;\n }\n \n- ShardFlushRequest(ShardId shardId, FlushRequest request) {\n- super(shardId, request);\n- this.request = request;\n+ public ShardFlushRequest() {\n }\n \n+ FlushRequest getRequest() {\n+ return request;\n+ }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n@@ -53,7 +53,5 @@ public void writeTo(StreamOutput out) throws IOException {\n request.writeTo(out);\n }\n \n- FlushRequest getRequest() {\n- return request;\n- }\n+\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java", "status": "modified" }, { "diff": "@@ -19,99 +19,45 @@\n \n package org.elasticsearch.action.admin.indices.flush;\n \n+import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.ActionFilters;\n-import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;\n+import org.elasticsearch.action.support.replication.TransportBroadcastReplicationAction;\n import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import 
org.elasticsearch.cluster.block.ClusterBlockException;\n-import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n-import org.elasticsearch.cluster.routing.GroupShardsIterator;\n-import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.ArrayList;\n import java.util.List;\n-import java.util.concurrent.atomic.AtomicReferenceArray;\n \n /**\n * Flush Action.\n */\n-public class TransportFlushAction extends TransportBroadcastAction<FlushRequest, FlushResponse, ShardFlushRequest, ShardFlushResponse> {\n-\n- private final IndicesService indicesService;\n+public class TransportFlushAction extends TransportBroadcastReplicationAction<FlushRequest, FlushResponse, ShardFlushRequest, ActionWriteResponse> {\n \n @Inject\n public TransportFlushAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- TransportService transportService, IndicesService indicesService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, FlushAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n- FlushRequest.class, ShardFlushRequest.class, ThreadPool.Names.FLUSH);\n- this.indicesService = indicesService;\n- }\n-\n- @Override\n- protected FlushResponse newResponse(FlushRequest request, AtomicReferenceArray shardsResponses, ClusterState clusterState) {\n- int successfulShards = 0;\n- int failedShards = 0;\n- List<ShardOperationFailedException> shardFailures = null;\n- for (int i = 0; i < shardsResponses.length(); i++) {\n- Object shardResponse = shardsResponses.get(i);\n- if (shardResponse == null) {\n- // a non active shard, ignore\n- } else if (shardResponse instanceof BroadcastShardOperationFailedException) {\n- failedShards++;\n- if (shardFailures == null) {\n- shardFailures = new ArrayList<>();\n- }\n- shardFailures.add(new DefaultShardOperationFailedException((BroadcastShardOperationFailedException) shardResponse));\n- } else {\n- successfulShards++;\n- }\n- }\n- return new FlushResponse(shardsResponses.length(), successfulShards, failedShards, shardFailures);\n- }\n-\n- @Override\n- protected ShardFlushRequest newShardRequest(int numShards, ShardRouting shard, FlushRequest request) {\n- return new ShardFlushRequest(shard.shardId(), request);\n- }\n-\n- @Override\n- protected ShardFlushResponse newShardResponse() {\n- return new ShardFlushResponse();\n- }\n-\n- @Override\n- protected ShardFlushResponse shardOperation(ShardFlushRequest request) {\n- IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n- indexShard.flush(request.getRequest());\n- return new ShardFlushResponse(request.shardId());\n+ TransportService transportService, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver,\n+ TransportShardFlushAction replicatedFlushAction) {\n+ super(FlushAction.NAME, FlushRequest.class, settings, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, replicatedFlushAction);\n }\n \n- /**\n- * The refresh request 
works against *all* shards.\n- */\n @Override\n- protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest request, String[] concreteIndices) {\n- return clusterState.routingTable().allActiveShardsGrouped(concreteIndices, true, true);\n+ protected ActionWriteResponse newShardResponse() {\n+ return new ActionWriteResponse();\n }\n \n @Override\n- protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ protected ShardFlushRequest newShardRequest(FlushRequest request, ShardId shardId) {\n+ return new ShardFlushRequest(request).setShardId(shardId).timeout(\"0ms\");\n }\n \n @Override\n- protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n+ protected FlushResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List<ShardOperationFailedException> shardFailures) {\n+ return new FlushResponse(totalNumCopies, successfulShards, failedShards, shardFailures);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,102 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.flush;\n+\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.replication.TransportReplicationAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+/**\n+ *\n+ */\n+public class TransportShardFlushAction extends TransportReplicationAction<ShardFlushRequest, ShardFlushRequest, ActionWriteResponse> {\n+\n+ public static final String NAME = \"indices:data/write/flush\";\n+\n+ @Inject\n+ public TransportShardFlushAction(Settings settings, TransportService transportService, ClusterService clusterService,\n+ IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction,\n+ MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, mappingUpdatedAction,\n+ actionFilters, indexNameExpressionResolver, ShardFlushRequest.class, ShardFlushRequest.class, ThreadPool.Names.FLUSH);\n+ }\n+\n+ @Override\n+ protected ActionWriteResponse newResponseInstance() {\n+ return new ActionWriteResponse();\n+ }\n+\n+ @Override\n+ protected Tuple<ActionWriteResponse, ShardFlushRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n+ IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId.getIndex()).shardSafe(shardRequest.shardId.id());\n+ indexShard.flush(shardRequest.request.getRequest());\n+ logger.trace(\"{} flush request executed on primary\", indexShard.shardId());\n+ return new Tuple<>(new ActionWriteResponse(), shardRequest.request);\n+ }\n+\n+ @Override\n+ protected void shardOperationOnReplica(ShardId shardId, ShardFlushRequest request) {\n+ IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n+ indexShard.flush(request.getRequest());\n+ logger.trace(\"{} flush request executed on replica\", indexShard.shardId());\n+ }\n+\n+ @Override\n+ protected boolean checkWriteConsistency() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected ShardIterator shards(ClusterState clusterState, InternalRequest request) {\n+ return clusterState.getRoutingTable().indicesRouting().get(request.concreteIndex()).getShards().get(request.request().shardId().getId()).shardsIt();\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkGlobalBlock(ClusterState state) {\n+ 
return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, new String[]{request.concreteIndex()});\n+ }\n+\n+ @Override\n+ protected boolean shouldExecuteReplication(Settings settings) {\n+ return true;\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java", "status": "added" }, { "diff": "@@ -22,8 +22,6 @@\n import org.elasticsearch.action.Action;\n import org.elasticsearch.client.ElasticsearchClient;\n \n-/**\n- */\n public class RefreshAction extends Action<RefreshRequest, RefreshResponse, RefreshRequestBuilder> {\n \n public static final RefreshAction INSTANCE = new RefreshAction();", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshAction.java", "status": "modified" }, { "diff": "@@ -33,7 +33,6 @@\n */\n public class RefreshRequest extends BroadcastRequest<RefreshRequest> {\n \n-\n RefreshRequest() {\n }\n \n@@ -48,5 +47,4 @@ public RefreshRequest(ActionRequest originalRequest) {\n public RefreshRequest(String... indices) {\n super(indices);\n }\n-\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshRequest.java", "status": "modified" }, { "diff": "@@ -21,34 +21,18 @@\n \n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n \n-import java.io.IOException;\n import java.util.List;\n \n /**\n * The response of a refresh action.\n- *\n- *\n */\n public class RefreshResponse extends BroadcastResponse {\n \n RefreshResponse() {\n-\n }\n \n RefreshResponse(int totalShards, int successfulShards, int failedShards, List<ShardOperationFailedException> shardFailures) {\n super(totalShards, successfulShards, failedShards, shardFailures);\n }\n-\n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- super.readFrom(in);\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- super.writeTo(out);\n- }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/RefreshResponse.java", "status": "modified" }, { "diff": "@@ -19,100 +19,46 @@\n \n package org.elasticsearch.action.admin.indices.refresh;\n \n+import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.support.ActionFilters;\n-import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n-import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;\n+import org.elasticsearch.action.support.replication.ReplicationRequest;\n+import org.elasticsearch.action.support.replication.TransportBroadcastReplicationAction;\n import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.block.ClusterBlockException;\n-import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n-import org.elasticsearch.cluster.routing.GroupShardsIterator;\n-import 
org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.index.shard.IndexShard;\n-import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.ArrayList;\n import java.util.List;\n-import java.util.concurrent.atomic.AtomicReferenceArray;\n \n /**\n * Refresh action.\n */\n-public class TransportRefreshAction extends TransportBroadcastAction<RefreshRequest, RefreshResponse, ShardRefreshRequest, ShardRefreshResponse> {\n-\n- private final IndicesService indicesService;\n+public class TransportRefreshAction extends TransportBroadcastReplicationAction<RefreshRequest, RefreshResponse, ReplicationRequest, ActionWriteResponse> {\n \n @Inject\n public TransportRefreshAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,\n- TransportService transportService, IndicesService indicesService,\n- ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {\n- super(settings, RefreshAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,\n- RefreshRequest.class, ShardRefreshRequest.class, ThreadPool.Names.REFRESH);\n- this.indicesService = indicesService;\n- }\n-\n- @Override\n- protected RefreshResponse newResponse(RefreshRequest request, AtomicReferenceArray shardsResponses, ClusterState clusterState) {\n- int successfulShards = 0;\n- int failedShards = 0;\n- List<ShardOperationFailedException> shardFailures = null;\n- for (int i = 0; i < shardsResponses.length(); i++) {\n- Object shardResponse = shardsResponses.get(i);\n- if (shardResponse == null) {\n- // non active shard, ignore\n- } else if (shardResponse instanceof BroadcastShardOperationFailedException) {\n- failedShards++;\n- if (shardFailures == null) {\n- shardFailures = new ArrayList<>();\n- }\n- shardFailures.add(new DefaultShardOperationFailedException((BroadcastShardOperationFailedException) shardResponse));\n- } else {\n- successfulShards++;\n- }\n- }\n- return new RefreshResponse(shardsResponses.length(), successfulShards, failedShards, shardFailures);\n- }\n-\n- @Override\n- protected ShardRefreshRequest newShardRequest(int numShards, ShardRouting shard, RefreshRequest request) {\n- return new ShardRefreshRequest(shard.shardId(), request);\n- }\n-\n- @Override\n- protected ShardRefreshResponse newShardResponse() {\n- return new ShardRefreshResponse();\n- }\n-\n- @Override\n- protected ShardRefreshResponse shardOperation(ShardRefreshRequest request) {\n- IndexShard indexShard = indicesService.indexServiceSafe(request.shardId().getIndex()).shardSafe(request.shardId().id());\n- indexShard.refresh(\"api\");\n- logger.trace(\"{} refresh request executed\", indexShard.shardId());\n- return new ShardRefreshResponse(request.shardId());\n+ TransportService transportService, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver,\n+ TransportShardRefreshAction shardRefreshAction) {\n+ super(RefreshAction.NAME, RefreshRequest.class, settings, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, shardRefreshAction);\n }\n \n- /**\n- * The refresh request works against *all* shards.\n- */\n @Override\n- protected GroupShardsIterator shards(ClusterState clusterState, RefreshRequest request, String[] 
concreteIndices) {\n- return clusterState.routingTable().allAssignedShardsGrouped(concreteIndices, true, true);\n+ protected ActionWriteResponse newShardResponse() {\n+ return new ActionWriteResponse();\n }\n \n @Override\n- protected ClusterBlockException checkGlobalBlock(ClusterState state, RefreshRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ protected ReplicationRequest newShardRequest(RefreshRequest request, ShardId shardId) {\n+ return new ReplicationRequest(request).setShardId(shardId).timeout(\"0ms\");\n }\n \n @Override\n- protected ClusterBlockException checkRequestBlock(ClusterState state, RefreshRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n+ protected RefreshResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List<ShardOperationFailedException> shardFailures) {\n+ return new RefreshResponse(totalNumCopies, successfulShards, failedShards, shardFailures);\n }\n }", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportRefreshAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,103 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.refresh;\n+\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.replication.ReplicationRequest;\n+import org.elasticsearch.action.support.replication.TransportReplicationAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n+import org.elasticsearch.cluster.action.shard.ShardStateAction;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.ShardIterator;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.shard.IndexShard;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+/**\n+ *\n+ */\n+public class TransportShardRefreshAction extends TransportReplicationAction<ReplicationRequest, ReplicationRequest, ActionWriteResponse> {\n+\n+ public static final String NAME = \"indices:data/write/refresh\";\n+\n+ @Inject\n+ public TransportShardRefreshAction(Settings settings, TransportService transportService, ClusterService clusterService,\n+ IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction,\n+ MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters,\n+ IndexNameExpressionResolver indexNameExpressionResolver) {\n+ super(settings, NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, mappingUpdatedAction,\n+ actionFilters, indexNameExpressionResolver, ReplicationRequest.class, ReplicationRequest.class, ThreadPool.Names.REFRESH);\n+ }\n+\n+ @Override\n+ protected ActionWriteResponse newResponseInstance() {\n+ return new ActionWriteResponse();\n+ }\n+\n+ @Override\n+ protected Tuple<ActionWriteResponse, ReplicationRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n+ IndexShard indexShard = indicesService.indexServiceSafe(shardRequest.shardId.getIndex()).shardSafe(shardRequest.shardId.id());\n+ indexShard.refresh(\"api\");\n+ logger.trace(\"{} refresh request executed on primary\", indexShard.shardId());\n+ return new Tuple<>(new ActionWriteResponse(), shardRequest.request);\n+ }\n+\n+ @Override\n+ protected void shardOperationOnReplica(ShardId shardId, ReplicationRequest request) {\n+ IndexShard indexShard = indicesService.indexServiceSafe(shardId.getIndex()).shardSafe(shardId.id());\n+ indexShard.refresh(\"api\");\n+ logger.trace(\"{} refresh request executed on replica\", indexShard.shardId());\n+ }\n+\n+ @Override\n+ protected boolean checkWriteConsistency() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected ShardIterator shards(ClusterState clusterState, InternalRequest request) {\n+ return clusterState.getRoutingTable().indicesRouting().get(request.concreteIndex()).getShards().get(request.request().shardId().getId()).shardsIt();\n+ }\n+\n+ @Override\n+ protected ClusterBlockException 
checkGlobalBlock(ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, new String[]{request.concreteIndex()});\n+ }\n+\n+ @Override\n+ protected boolean shouldExecuteReplication(Settings settings) {\n+ return true;\n+ }\n+}", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java", "status": "added" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.action.support.replication.ReplicationRequest;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.index.shard.ShardId;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -32,8 +33,6 @@\n */\n public class BulkShardRequest extends ReplicationRequest<BulkShardRequest> {\n \n- private int shardId;\n-\n private BulkItemRequest[] items;\n \n private boolean refresh;\n@@ -44,7 +43,7 @@ public class BulkShardRequest extends ReplicationRequest<BulkShardRequest> {\n BulkShardRequest(BulkRequest bulkRequest, String index, int shardId, boolean refresh, BulkItemRequest[] items) {\n super(bulkRequest);\n this.index = index;\n- this.shardId = shardId;\n+ this.setShardId(new ShardId(index, shardId));\n this.items = items;\n this.refresh = refresh;\n }\n@@ -53,10 +52,6 @@ boolean refresh() {\n return this.refresh;\n }\n \n- int shardId() {\n- return shardId;\n- }\n-\n BulkItemRequest[] items() {\n return items;\n }\n@@ -75,7 +70,6 @@ public String[] indices() {\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n- out.writeVInt(shardId);\n out.writeVInt(items.length);\n for (BulkItemRequest item : items) {\n if (item != null) {\n@@ -91,7 +85,6 @@ public void writeTo(StreamOutput out) throws IOException {\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n- shardId = in.readVInt();\n items = new BulkItemRequest[in.readVInt()];\n for (int i = 0; i < items.length; i++) {\n if (in.readBoolean()) {", "filename": "core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java", "status": "modified" }, { "diff": "@@ -109,7 +109,7 @@ protected boolean resolveIndex() {\n \n @Override\n protected ShardIterator shards(ClusterState clusterState, InternalRequest request) {\n- return clusterState.routingTable().index(request.concreteIndex()).shard(request.request().shardId()).shardsIt();\n+ return clusterState.routingTable().index(request.concreteIndex()).shard(request.request().shardId().id()).shardsIt();\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -25,18 +25,20 @@\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.unit.TimeValue;\n \n import java.io.IOException;\n+import java.util.concurrent.TimeUnit;\n \n /**\n *\n */\n-public abstract class BroadcastRequest<T extends BroadcastRequest> extends ActionRequest<T> implements IndicesRequest.Replaceable {\n+public class BroadcastRequest<T extends BroadcastRequest> extends ActionRequest<T> implements IndicesRequest.Replaceable {\n \n protected String[] 
indices;\n private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpenAndForbidClosed();\n \n- protected BroadcastRequest() {\n+ public BroadcastRequest() {\n \n }\n ", "filename": "core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastRequest.java", "status": "modified" }, { "diff": "@@ -32,17 +32,17 @@\n /**\n * Base class for all broadcast operation based responses.\n */\n-public abstract class BroadcastResponse extends ActionResponse {\n+public class BroadcastResponse extends ActionResponse {\n private static final ShardOperationFailedException[] EMPTY = new ShardOperationFailedException[0];\n private int totalShards;\n private int successfulShards;\n private int failedShards;\n private ShardOperationFailedException[] shardFailures = EMPTY;\n \n- protected BroadcastResponse() {\n+ public BroadcastResponse() {\n }\n \n- protected BroadcastResponse(int totalShards, int successfulShards, int failedShards, List<? extends ShardOperationFailedException> shardFailures) {\n+ public BroadcastResponse(int totalShards, int successfulShards, int failedShards, List<? extends ShardOperationFailedException> shardFailures) {\n this.totalShards = totalShards;\n this.successfulShards = successfulShards;\n this.failedShards = failedShards;", "filename": "core/src/main/java/org/elasticsearch/action/support/broadcast/BroadcastResponse.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.action.IndicesRequest;\n import org.elasticsearch.action.WriteConsistencyLevel;\n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -37,7 +38,7 @@\n /**\n *\n */\n-public abstract class ReplicationRequest<T extends ReplicationRequest> extends ActionRequest<T> implements IndicesRequest {\n+public class ReplicationRequest<T extends ReplicationRequest> extends ActionRequest<T> implements IndicesRequest {\n \n public static final TimeValue DEFAULT_TIMEOUT = new TimeValue(1, TimeUnit.MINUTES);\n \n@@ -49,14 +50,14 @@ public abstract class ReplicationRequest<T extends ReplicationRequest> extends A\n private WriteConsistencyLevel consistencyLevel = WriteConsistencyLevel.DEFAULT;\n private volatile boolean canHaveDuplicates = false;\n \n- protected ReplicationRequest() {\n+ public ReplicationRequest() {\n \n }\n \n /**\n * Creates a new request that inherits headers and context from the request provided as argument.\n */\n- protected ReplicationRequest(ActionRequest request) {\n+ public ReplicationRequest(ActionRequest request) {\n super(request);\n }\n \n@@ -133,6 +134,16 @@ public WriteConsistencyLevel consistencyLevel() {\n return this.consistencyLevel;\n }\n \n+ /**\n+ * @return the shardId of the shard where this operation should be executed on.\n+ * can be null in case the shardId is determined by a single document (index, type, id) for example for index or delete request.\n+ */\n+ public\n+ @Nullable\n+ ShardId shardId() {\n+ return internalShardId;\n+ }\n+\n /**\n * Sets the consistency level of write. 
Defaults to {@link org.elasticsearch.action.WriteConsistencyLevel#DEFAULT}\n */\n@@ -173,4 +184,10 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeString(index);\n out.writeBoolean(canHaveDuplicates);\n }\n+\n+ public T setShardId(ShardId shardId) {\n+ this.internalShardId = shardId;\n+ this.index = shardId.getIndex();\n+ return (T) this;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java", "status": "modified" }, { "diff": "@@ -0,0 +1,162 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.support.replication;\n+\n+import com.carrotsearch.hppc.cursors.IntObjectCursor;\n+import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.ShardOperationFailedException;\n+import org.elasticsearch.action.UnavailableShardsException;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.DefaultShardOperationFailedException;\n+import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n+import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n+import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.CountDown;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+\n+/**\n+ * Base class for requests that should be executed on all shards of an index or several indices.\n+ * This action sends shard requests to all primary shards of the indices and they are then replicated like write requests\n+ */\n+public abstract class TransportBroadcastReplicationAction<Request extends BroadcastRequest, Response extends BroadcastResponse, ShardRequest extends ReplicationRequest, ShardResponse extends ActionWriteResponse> extends HandledTransportAction<Request, Response> {\n+\n+ private final TransportReplicationAction replicatedBroadcastShardAction;\n+ private final ClusterService clusterService;\n+\n+ public 
TransportBroadcastReplicationAction(String name, Class<Request> request, Settings settings, ThreadPool threadPool, ClusterService clusterService,\n+ TransportService transportService,\n+ ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, TransportReplicationAction replicatedBroadcastShardAction) {\n+ super(settings, name, threadPool, transportService, actionFilters, indexNameExpressionResolver, request);\n+ this.replicatedBroadcastShardAction = replicatedBroadcastShardAction;\n+ this.clusterService = clusterService;\n+ }\n+\n+ @Override\n+ protected void doExecute(final Request request, final ActionListener<Response> listener) {\n+ final ClusterState clusterState = clusterService.state();\n+ List<ShardId> shards = shards(request, clusterState);\n+ final CopyOnWriteArrayList<ShardResponse> shardsResponses = new CopyOnWriteArrayList();\n+ if (shards.size() == 0) {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ final CountDown responsesCountDown = new CountDown(shards.size());\n+ for (final ShardId shardId : shards) {\n+ ActionListener<ShardResponse> shardActionListener = new ActionListener<ShardResponse>() {\n+ @Override\n+ public void onResponse(ShardResponse shardResponse) {\n+ shardsResponses.add(shardResponse);\n+ logger.trace(\"{}: got response from {}\", actionName, shardId);\n+ if (responsesCountDown.countDown()) {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ logger.trace(\"{}: got failure from {}\", actionName, shardId);\n+ int totalNumCopies = clusterState.getMetaData().index(shardId.index().getName()).getNumberOfReplicas() + 1;\n+ ShardResponse shardResponse = newShardResponse();\n+ ActionWriteResponse.ShardInfo.Failure[] failures;\n+ if (ExceptionsHelper.unwrap(e, UnavailableShardsException.class) != null) {\n+ failures = new ActionWriteResponse.ShardInfo.Failure[0];\n+ } else {\n+ ActionWriteResponse.ShardInfo.Failure failure = new ActionWriteResponse.ShardInfo.Failure(shardId.index().name(), shardId.id(), null, e, ExceptionsHelper.status(e), true);\n+ failures = new ActionWriteResponse.ShardInfo.Failure[totalNumCopies];\n+ Arrays.fill(failures, failure);\n+ }\n+ shardResponse.setShardInfo(new ActionWriteResponse.ShardInfo(totalNumCopies, 0, failures));\n+ shardsResponses.add(shardResponse);\n+ if (responsesCountDown.countDown()) {\n+ finishAndNotifyListener(listener, shardsResponses);\n+ }\n+ }\n+ };\n+ shardExecute(request, shardId, shardActionListener);\n+ }\n+ }\n+\n+ protected void shardExecute(Request request, ShardId shardId, ActionListener<ShardResponse> shardActionListener) {\n+ replicatedBroadcastShardAction.execute(newShardRequest(request, shardId), shardActionListener);\n+ }\n+\n+ /**\n+ * @return all shard ids the request should run on\n+ */\n+ protected List<ShardId> shards(Request request, ClusterState clusterState) {\n+ List<ShardId> shardIds = new ArrayList<>();\n+ String[] concreteIndices = indexNameExpressionResolver.concreteIndices(clusterState, request);\n+ for (String index : concreteIndices) {\n+ IndexMetaData indexMetaData = clusterState.metaData().getIndices().get(index);\n+ if (indexMetaData != null) {\n+ for (IntObjectCursor<IndexShardRoutingTable> shardRouting : clusterState.getRoutingTable().indicesRouting().get(index).getShards()) {\n+ shardIds.add(shardRouting.value.shardId());\n+ }\n+ }\n+ }\n+ return shardIds;\n+ }\n+\n+ protected abstract ShardResponse newShardResponse();\n+\n+ protected abstract ShardRequest 
newShardRequest(Request request, ShardId shardId);\n+\n+ private void finishAndNotifyListener(ActionListener listener, CopyOnWriteArrayList<ShardResponse> shardsResponses) {\n+ logger.trace(\"{}: got all shard responses\", actionName);\n+ int successfulShards = 0;\n+ int failedShards = 0;\n+ int totalNumCopies = 0;\n+ List<ShardOperationFailedException> shardFailures = null;\n+ for (int i = 0; i < shardsResponses.size(); i++) {\n+ ActionWriteResponse shardResponse = shardsResponses.get(i);\n+ if (shardResponse == null) {\n+ // non active shard, ignore\n+ } else {\n+ failedShards += shardResponse.getShardInfo().getFailed();\n+ successfulShards += shardResponse.getShardInfo().getSuccessful();\n+ totalNumCopies += shardResponse.getShardInfo().getTotal();\n+ if (shardFailures == null) {\n+ shardFailures = new ArrayList<>();\n+ }\n+ for (ActionWriteResponse.ShardInfo.Failure failure : shardResponse.getShardInfo().getFailures()) {\n+ shardFailures.add(new DefaultShardOperationFailedException(new BroadcastShardOperationFailedException(new ShardId(failure.index(), failure.shardId()), failure.getCause())));\n+ }\n+ }\n+ }\n+ listener.onResponse(newResponse(successfulShards, failedShards, totalNumCopies, shardFailures));\n+ }\n+\n+ protected abstract BroadcastResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List<ShardOperationFailedException> shardFailures);\n+}", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java", "status": "added" }, { "diff": "@@ -362,6 +362,7 @@ public void onFailure(Throwable e) {\n finishWithUnexpectedFailure(e);\n }\n \n+ @Override\n protected void doRun() {\n if (checkBlocks() == false) {\n return;\n@@ -727,7 +728,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n // new primary shard as well...\n ClusterState newState = clusterService.state();\n \n- int numberOfUnassignedOrShadowReplicas = 0;\n+ int numberOfUnassignedOrIgnoredReplicas = 0;\n int numberOfPendingShardInstances = 0;\n if (observer.observedState() != newState) {\n observer.reset(newState);\n@@ -741,7 +742,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n if (shard.relocating()) {\n numberOfPendingShardInstances++;\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings())) {\n+ } else if (shouldExecuteReplication(indexMetaData.settings()) == false) {\n // If the replicas use shadow replicas, there is no reason to\n // perform the action on the replica, so skip it and\n // immediately return\n@@ -750,9 +751,9 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n // to wait until they get the new mapping through the cluster\n // state, which is why we recommend pre-defined mappings for\n // indices using shadow replicas\n- numberOfUnassignedOrShadowReplicas++;\n+ numberOfUnassignedOrIgnoredReplicas++;\n } else if (shard.unassigned()) {\n- numberOfUnassignedOrShadowReplicas++;\n+ numberOfUnassignedOrIgnoredReplicas++;\n } else if (shard.relocating()) {\n // we need to send to two copies\n numberOfPendingShardInstances += 2;\n@@ -769,13 +770,13 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n replicaRequest.setCanHaveDuplicates();\n }\n if (shard.unassigned()) {\n- numberOfUnassignedOrShadowReplicas++;\n+ numberOfUnassignedOrIgnoredReplicas++;\n } else if (shard.primary()) {\n if (shard.relocating()) {\n // we have to replicate to the other copy\n 
numberOfPendingShardInstances += 1;\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings())) {\n+ } else if (shouldExecuteReplication(indexMetaData.settings()) == false) {\n // If the replicas use shadow replicas, there is no reason to\n // perform the action on the replica, so skip it and\n // immediately return\n@@ -784,7 +785,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n // to wait until they get the new mapping through the cluster\n // state, which is why we recommend pre-defined mappings for\n // indices using shadow replicas\n- numberOfUnassignedOrShadowReplicas++;\n+ numberOfUnassignedOrIgnoredReplicas++;\n } else if (shard.relocating()) {\n // we need to send to two copies\n numberOfPendingShardInstances += 2;\n@@ -795,7 +796,7 @@ public ReplicationPhase(ShardIterator originalShardIt, ReplicaRequest replicaReq\n }\n \n // one for the primary already done\n- this.totalShards = 1 + numberOfPendingShardInstances + numberOfUnassignedOrShadowReplicas;\n+ this.totalShards = 1 + numberOfPendingShardInstances + numberOfUnassignedOrIgnoredReplicas;\n this.pending = new AtomicInteger(numberOfPendingShardInstances);\n }\n \n@@ -854,7 +855,7 @@ protected void doRun() {\n if (shard.relocating()) {\n performOnReplica(shard, shard.relocatingNodeId());\n }\n- } else if (IndexMetaData.isIndexUsingShadowReplicas(indexMetaData.settings()) == false) {\n+ } else if (shouldExecuteReplication(indexMetaData.settings())) {\n performOnReplica(shard, shard.currentNodeId());\n if (shard.relocating()) {\n performOnReplica(shard, shard.relocatingNodeId());\n@@ -985,6 +986,14 @@ private void doFinish() {\n \n }\n \n+ /**\n+ * Indicated whether this operation should be replicated to shadow replicas or not. If this method returns true the replication phase will be skipped.\n+ * For example writes such as index and delete don't need to be replicated on shadow replicas but refresh and flush do.\n+ */\n+ protected boolean shouldExecuteReplication(Settings settings) {\n+ return IndexMetaData.isIndexUsingShadowReplicas(settings) == false;\n+ }\n+\n /**\n * Internal request class that gets built on each node. 
Holds the original request plus additional info.\n */", "filename": "core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java", "status": "modified" }, { "diff": "@@ -28,8 +28,8 @@\n import org.elasticsearch.action.admin.indices.close.CloseIndexRequest;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexAction;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;\n-import org.elasticsearch.action.admin.indices.flush.FlushAction;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n+import org.elasticsearch.action.admin.indices.flush.TransportShardFlushAction;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsAction;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsRequest;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsAction;\n@@ -42,8 +42,8 @@\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryAction;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryRequest;\n-import org.elasticsearch.action.admin.indices.refresh.RefreshAction;\n import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n+import org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction;\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentsAction;\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentsRequest;\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsAction;\n@@ -85,6 +85,7 @@\n import org.elasticsearch.action.update.UpdateAction;\n import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.action.update.UpdateResponse;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -95,35 +96,18 @@\n import org.elasticsearch.test.ESIntegTestCase;\n import org.elasticsearch.test.ESIntegTestCase.ClusterScope;\n import org.elasticsearch.test.ESIntegTestCase.Scope;\n-import org.elasticsearch.test.transport.MockTransportService;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.Transport;\n-import org.elasticsearch.transport.TransportChannel;\n-import org.elasticsearch.transport.TransportModule;\n-import org.elasticsearch.transport.TransportRequest;\n-import org.elasticsearch.transport.TransportRequestHandler;\n-import org.elasticsearch.transport.TransportService;\n+import org.elasticsearch.transport.*;\n import org.junit.After;\n import org.junit.Before;\n import org.junit.Test;\n \n-import java.util.ArrayList;\n-import java.util.Collection;\n-import java.util.Collections;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n import java.util.concurrent.Callable;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.emptyIterable;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n-import static org.hamcrest.Matchers.hasItem;\n-import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.*;\n \n @ClusterScope(scope 
= Scope.SUITE, numClientNodes = 1, minNumDataNodes = 2)\n public class IndicesRequestIT extends ESIntegTestCase {\n@@ -390,14 +374,15 @@ public void testExists() {\n \n @Test\n public void testFlush() {\n- String flushShardAction = FlushAction.NAME + \"[s]\";\n- interceptTransportActions(flushShardAction);\n+ String[] indexShardActions = new String[]{TransportShardFlushAction.NAME + \"[r]\", TransportShardFlushAction.NAME};\n+ interceptTransportActions(indexShardActions);\n \n FlushRequest flushRequest = new FlushRequest(randomIndicesOrAliases());\n internalCluster().clientNodeClient().admin().indices().flush(flushRequest).actionGet();\n \n clearInterceptedActions();\n- assertSameIndices(flushRequest, flushShardAction);\n+ String[] indices = new IndexNameExpressionResolver(Settings.EMPTY).concreteIndices(client().admin().cluster().prepareState().get().getState(), flushRequest);\n+ assertIndicesSubset(Arrays.asList(indices), indexShardActions);\n }\n \n @Test\n@@ -414,14 +399,15 @@ public void testOptimize() {\n \n @Test\n public void testRefresh() {\n- String refreshShardAction = RefreshAction.NAME + \"[s]\";\n- interceptTransportActions(refreshShardAction);\n+ String[] indexShardActions = new String[]{TransportShardRefreshAction.NAME + \"[r]\", TransportShardRefreshAction.NAME};\n+ interceptTransportActions(indexShardActions);\n \n RefreshRequest refreshRequest = new RefreshRequest(randomIndicesOrAliases());\n internalCluster().clientNodeClient().admin().indices().refresh(refreshRequest).actionGet();\n \n clearInterceptedActions();\n- assertSameIndices(refreshRequest, refreshShardAction);\n+ String[] indices = new IndexNameExpressionResolver(Settings.EMPTY).concreteIndices(client().admin().cluster().prepareState().get().getState(), refreshRequest);\n+ assertIndicesSubset(Arrays.asList(indices), indexShardActions);\n }\n \n @Test", "filename": "core/src/test/java/org/elasticsearch/action/IndicesRequestIT.java", "status": "modified" }, { "diff": "@@ -61,7 +61,8 @@ public void testFlushWithBlocks() {\n for (String blockSetting : Arrays.asList(SETTING_READ_ONLY, SETTING_BLOCKS_METADATA)) {\n try {\n enableIndexBlock(\"test\", blockSetting);\n- assertBlocked(client().admin().indices().prepareFlush(\"test\"));\n+ FlushResponse flushResponse = client().admin().indices().prepareFlush(\"test\").get();\n+ assertBlocked(flushResponse);\n } finally {\n disableIndexBlock(\"test\", blockSetting);\n }\n@@ -74,7 +75,7 @@ public void testFlushWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareFlush());\n+ assertBlocked(client().admin().indices().prepareFlush().get());\n } finally {\n setClusterReadOnly(false);\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/flush/FlushBlocksIT.java", "status": "modified" }, { "diff": "@@ -74,7 +74,7 @@ public void testOptimizeWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareFlush());\n+ assertBlocked(client().admin().indices().prepareOptimize());\n } finally {\n setClusterReadOnly(false);\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/optimize/OptimizeBlocksIT.java", "status": "modified" }, { "diff": "@@ -57,7 +57,7 @@ public void testRefreshWithBlocks() {\n for (String blockSetting : Arrays.asList(SETTING_READ_ONLY, SETTING_BLOCKS_METADATA)) {\n try {\n 
enableIndexBlock(\"test\", blockSetting);\n- assertBlocked(client().admin().indices().prepareRefresh(\"test\"));\n+ assertBlocked(client().admin().indices().prepareRefresh(\"test\").get());\n } finally {\n disableIndexBlock(\"test\", blockSetting);\n }\n@@ -70,7 +70,7 @@ public void testRefreshWithBlocks() {\n assertThat(response.getSuccessfulShards(), equalTo(numShards.totalNumShards));\n \n setClusterReadOnly(true);\n- assertBlocked(client().admin().indices().prepareRefresh());\n+ assertBlocked(client().admin().indices().prepareRefresh().get());\n } finally {\n setClusterReadOnly(false);\n }", "filename": "core/src/test/java/org/elasticsearch/action/admin/indices/refresh/RefreshBlocksIT.java", "status": "modified" }, { "diff": "@@ -0,0 +1,315 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.action.support.replication;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.ActionWriteResponse;\n+import org.elasticsearch.action.UnavailableShardsException;\n+import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n+import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n+import org.elasticsearch.action.admin.indices.flush.TransportFlushAction;\n+import org.elasticsearch.action.admin.indices.flush.TransportShardFlushAction;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\n+import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;\n+import org.elasticsearch.action.admin.indices.refresh.TransportRefreshAction;\n+import org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction;\n+import org.elasticsearch.action.support.ActionFilter;\n+import org.elasticsearch.action.support.ActionFilters;\n+import org.elasticsearch.action.support.broadcast.BroadcastRequest;\n+import org.elasticsearch.action.support.broadcast.BroadcastResponse;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.block.ClusterBlocks;\n+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.cluster.routing.ShardRoutingState;\n+import org.elasticsearch.common.collect.Tuple;\n+import org.elasticsearch.common.io.stream.NamedWriteableRegistry;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.rest.RestStatus;\n+import 
org.elasticsearch.test.ESTestCase;\n+import org.elasticsearch.test.cluster.TestClusterService;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+import org.elasticsearch.transport.local.LocalTransport;\n+import org.junit.AfterClass;\n+import org.junit.Before;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.Date;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.Future;\n+import java.util.concurrent.TimeUnit;\n+\n+import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.*;\n+import static org.hamcrest.Matchers.*;\n+\n+public class BroadcastReplicationTests extends ESTestCase {\n+\n+ private static ThreadPool threadPool;\n+ private TestClusterService clusterService;\n+ private TransportService transportService;\n+ private LocalTransport transport;\n+ private TestBroadcastReplicationAction broadcastReplicationAction;\n+\n+ @BeforeClass\n+ public static void beforeClass() {\n+ threadPool = new ThreadPool(\"BroadcastReplicationTests\");\n+ }\n+\n+ @Override\n+ @Before\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ transport = new LocalTransport(Settings.EMPTY, threadPool, Version.CURRENT, new NamedWriteableRegistry());\n+ clusterService = new TestClusterService(threadPool);\n+ transportService = new TransportService(transport, threadPool);\n+ transportService.start();\n+ broadcastReplicationAction = new TestBroadcastReplicationAction(Settings.EMPTY, threadPool, clusterService, transportService, new ActionFilters(new HashSet<ActionFilter>()), new IndexNameExpressionResolver(Settings.EMPTY), null);\n+ }\n+\n+ @AfterClass\n+ public static void afterClass() {\n+ ThreadPool.terminate(threadPool, 30, TimeUnit.SECONDS);\n+ threadPool = null;\n+ }\n+\n+ @Test\n+ public void testNotStartedPrimary() throws InterruptedException, ExecutionException, IOException {\n+ final String index = \"test\";\n+ final ShardId shardId = new ShardId(index, 0);\n+ clusterService.setState(state(index, randomBoolean(),\n+ randomBoolean() ? 
ShardRoutingState.INITIALIZING : ShardRoutingState.UNASSIGNED, ShardRoutingState.UNASSIGNED));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ Future<BroadcastResponse> response = (broadcastReplicationAction.execute(new BroadcastRequest().indices(index)));\n+ for (Tuple<ShardId, ActionListener<ActionWriteResponse>> shardRequests : broadcastReplicationAction.capturedShardRequests) {\n+ shardRequests.v2().onFailure(new UnavailableShardsException(shardId, \"test exception expected\"));\n+ }\n+ response.get();\n+ logger.info(\"total shards: {}, \", response.get().getTotalShards());\n+ // we expect no failures here because UnavailableShardsException does not count as failed\n+ assertBroadcastResponse(2, 0, 0, response.get(), null);\n+ }\n+\n+ @Test\n+ public void testStartedPrimary() throws InterruptedException, ExecutionException, IOException {\n+ final String index = \"test\";\n+ clusterService.setState(state(index, randomBoolean(),\n+ ShardRoutingState.STARTED));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ Future<BroadcastResponse> response = (broadcastReplicationAction.execute(new BroadcastRequest().indices(index)));\n+ for (Tuple<ShardId, ActionListener<ActionWriteResponse>> shardRequests : broadcastReplicationAction.capturedShardRequests) {\n+ ActionWriteResponse actionWriteResponse = new ActionWriteResponse();\n+ actionWriteResponse.setShardInfo(new ActionWriteResponse.ShardInfo(1, 1, new ActionWriteResponse.ShardInfo.Failure[0]));\n+ shardRequests.v2().onResponse(actionWriteResponse);\n+ }\n+ logger.info(\"total shards: {}, \", response.get().getTotalShards());\n+ assertBroadcastResponse(1, 1, 0, response.get(), null);\n+ }\n+\n+ @Test\n+ public void testResultCombine() throws InterruptedException, ExecutionException, IOException {\n+ final String index = \"test\";\n+ int numShards = randomInt(3);\n+ clusterService.setState(stateWithAssignedPrimariesAndOneReplica(index, numShards));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ Future<BroadcastResponse> response = (broadcastReplicationAction.execute(new BroadcastRequest().indices(index)));\n+ int succeeded = 0;\n+ int failed = 0;\n+ for (Tuple<ShardId, ActionListener<ActionWriteResponse>> shardRequests : broadcastReplicationAction.capturedShardRequests) {\n+ if (randomBoolean()) {\n+ ActionWriteResponse.ShardInfo.Failure[] failures = new ActionWriteResponse.ShardInfo.Failure[0];\n+ int shardsSucceeded = randomInt(1) + 1;\n+ succeeded += shardsSucceeded;\n+ ActionWriteResponse actionWriteResponse = new ActionWriteResponse();\n+ if (shardsSucceeded == 1 && randomBoolean()) {\n+ //sometimes add failure (no failure means shard unavailable)\n+ failures = new ActionWriteResponse.ShardInfo.Failure[1];\n+ failures[0] = new ActionWriteResponse.ShardInfo.Failure(index, shardRequests.v1().id(), null, new Exception(\"pretend shard failed\"), RestStatus.GATEWAY_TIMEOUT, false);\n+ failed++;\n+ }\n+ actionWriteResponse.setShardInfo(new ActionWriteResponse.ShardInfo(2, shardsSucceeded, failures));\n+ shardRequests.v2().onResponse(actionWriteResponse);\n+ } else {\n+ // sometimes fail\n+ failed += 2;\n+ // just add a general exception and see if failed shards will be incremented by 2\n+ shardRequests.v2().onFailure(new Exception(\"pretend shard failed\"));\n+ }\n+ }\n+ assertBroadcastResponse(2 * numShards, succeeded, failed, response.get(), Exception.class);\n+ }\n+\n+ @Test\n+ public void testNoShards() throws 
InterruptedException, ExecutionException, IOException {\n+ clusterService.setState(stateWithNoShard());\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ BroadcastResponse response = executeAndAssertImmediateResponse(broadcastReplicationAction, new BroadcastRequest());\n+ assertBroadcastResponse(0, 0, 0, response, null);\n+ }\n+\n+ @Test\n+ public void testShardsList() throws InterruptedException, ExecutionException {\n+ final String index = \"test\";\n+ final ShardId shardId = new ShardId(index, 0);\n+ ClusterState clusterState = state(index, randomBoolean(),\n+ randomBoolean() ? ShardRoutingState.INITIALIZING : ShardRoutingState.UNASSIGNED, ShardRoutingState.UNASSIGNED);\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ List<ShardId> shards = broadcastReplicationAction.shards(new BroadcastRequest().indices(shardId.index().name()), clusterState);\n+ assertThat(shards.size(), equalTo(1));\n+ assertThat(shards.get(0), equalTo(shardId));\n+ }\n+\n+ private class TestBroadcastReplicationAction extends TransportBroadcastReplicationAction<BroadcastRequest, BroadcastResponse, ReplicationRequest, ActionWriteResponse> {\n+ protected final Set<Tuple<ShardId, ActionListener<ActionWriteResponse>>> capturedShardRequests = ConcurrentCollections.newConcurrentSet();\n+\n+ public TestBroadcastReplicationAction(Settings settings, ThreadPool threadPool, ClusterService clusterService, TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, TransportReplicationAction replicatedBroadcastShardAction) {\n+ super(\"test-broadcast-replication-action\", BroadcastRequest.class, settings, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, replicatedBroadcastShardAction);\n+ }\n+\n+ @Override\n+ protected ActionWriteResponse newShardResponse() {\n+ return new ActionWriteResponse();\n+ }\n+\n+ @Override\n+ protected ReplicationRequest newShardRequest(BroadcastRequest request, ShardId shardId) {\n+ return new ReplicationRequest().setShardId(shardId);\n+ }\n+\n+ @Override\n+ protected BroadcastResponse newResponse(int successfulShards, int failedShards, int totalNumCopies, List shardFailures) {\n+ return new BroadcastResponse(totalNumCopies, successfulShards, failedShards, shardFailures);\n+ }\n+\n+ @Override\n+ protected void shardExecute(BroadcastRequest request, ShardId shardId, ActionListener<ActionWriteResponse> shardActionListener) {\n+ capturedShardRequests.add(new Tuple<>(shardId, shardActionListener));\n+ }\n+\n+ protected void clearCapturedRequests() {\n+ capturedShardRequests.clear();\n+ }\n+ }\n+\n+ public FlushResponse assertImmediateResponse(String index, TransportFlushAction flushAction) throws InterruptedException, ExecutionException {\n+ Date beginDate = new Date();\n+ FlushResponse flushResponse = flushAction.execute(new FlushRequest(index)).get();\n+ Date endDate = new Date();\n+ long maxTime = 500;\n+ assertThat(\"this should not take longer than \" + maxTime + \" ms. The request hangs somewhere\", endDate.getTime() - beginDate.getTime(), lessThanOrEqualTo(maxTime));\n+ return flushResponse;\n+ }\n+\n+ @Test\n+ public void testTimeoutFlush() throws ExecutionException, InterruptedException {\n+\n+ final String index = \"test\";\n+ clusterService.setState(state(index, randomBoolean(),\n+ randomBoolean() ? 
ShardRoutingState.INITIALIZING : ShardRoutingState.UNASSIGNED, ShardRoutingState.UNASSIGNED));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ TransportShardFlushAction shardFlushAction = new TransportShardFlushAction(Settings.EMPTY, transportService, clusterService,\n+ null, threadPool, null,\n+ null, new ActionFilters(new HashSet<ActionFilter>()), new IndexNameExpressionResolver(Settings.EMPTY));\n+ TransportFlushAction flushAction = new TransportFlushAction(Settings.EMPTY, threadPool, clusterService,\n+ transportService, new ActionFilters(new HashSet<ActionFilter>()), new IndexNameExpressionResolver(Settings.EMPTY),\n+ shardFlushAction);\n+ FlushResponse flushResponse = (FlushResponse) executeAndAssertImmediateResponse(flushAction, new FlushRequest(index));\n+ logger.info(\"total shards: {}, \", flushResponse.getTotalShards());\n+ assertBroadcastResponse(2, 0, 0, flushResponse, UnavailableShardsException.class);\n+\n+ ClusterBlocks.Builder block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"non retryable\", false, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(flushAction, new FlushRequest(index)), ClusterBlockException.class);\n+\n+ block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"retryable\", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(flushAction, new FlushRequest(index)), ClusterBlockException.class);\n+\n+ block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"non retryable\", false, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(flushAction, new FlushRequest(index)), ClusterBlockException.class);\n+ }\n+\n+ void assertFailure(String msg, BroadcastResponse broadcastResponse, Class<?> klass) throws InterruptedException {\n+ assertThat(broadcastResponse.getSuccessfulShards(), equalTo(0));\n+ assertThat(broadcastResponse.getTotalShards(), equalTo(broadcastResponse.getFailedShards()));\n+ for (int i = 0; i < broadcastResponse.getFailedShards(); i++) {\n+ assertThat(msg, broadcastResponse.getShardFailures()[i].getCause().getCause(), instanceOf(klass));\n+ }\n+ }\n+\n+ @Test\n+ public void testTimeoutRefresh() throws ExecutionException, InterruptedException {\n+\n+ final String index = \"test\";\n+ clusterService.setState(state(index, randomBoolean(),\n+ randomBoolean() ? 
ShardRoutingState.INITIALIZING : ShardRoutingState.UNASSIGNED, ShardRoutingState.UNASSIGNED));\n+ logger.debug(\"--> using initial state:\\n{}\", clusterService.state().prettyPrint());\n+ TransportShardRefreshAction shardrefreshAction = new TransportShardRefreshAction(Settings.EMPTY, transportService, clusterService,\n+ null, threadPool, null,\n+ null, new ActionFilters(new HashSet<ActionFilter>()), new IndexNameExpressionResolver(Settings.EMPTY));\n+ TransportRefreshAction refreshAction = new TransportRefreshAction(Settings.EMPTY, threadPool, clusterService,\n+ transportService, new ActionFilters(new HashSet<ActionFilter>()), new IndexNameExpressionResolver(Settings.EMPTY),\n+ shardrefreshAction);\n+ RefreshResponse refreshResponse = (RefreshResponse) executeAndAssertImmediateResponse(refreshAction, new RefreshRequest(index));\n+ assertBroadcastResponse(2, 0, 0, refreshResponse, UnavailableShardsException.class);\n+\n+ ClusterBlocks.Builder block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"non retryable\", false, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(refreshAction, new RefreshRequest(index)), ClusterBlockException.class);\n+\n+ block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"retryable\", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(refreshAction, new RefreshRequest(index)), ClusterBlockException.class);\n+\n+ block = ClusterBlocks.builder()\n+ .addGlobalBlock(new ClusterBlock(1, \"non retryable\", false, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));\n+ clusterService.setState(ClusterState.builder(clusterService.state()).blocks(block));\n+ assertFailure(\"all shards should fail with cluster block\", executeAndAssertImmediateResponse(refreshAction, new RefreshRequest(index)), ClusterBlockException.class);\n+ }\n+\n+ public BroadcastResponse executeAndAssertImmediateResponse(TransportBroadcastReplicationAction broadcastAction, BroadcastRequest request) throws InterruptedException, ExecutionException {\n+ return (BroadcastResponse) broadcastAction.execute(request).actionGet(\"5s\");\n+ }\n+\n+ private void assertBroadcastResponse(int total, int successful, int failed, BroadcastResponse response, Class exceptionClass) {\n+ assertThat(response.getSuccessfulShards(), equalTo(successful));\n+ assertThat(response.getTotalShards(), equalTo(total));\n+ assertThat(response.getFailedShards(), equalTo(failed));\n+ for (int i = 0; i < failed; i++) {\n+ assertThat(response.getShardFailures()[0].getCause().getCause(), instanceOf(exceptionClass));\n+ }\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,230 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+\n+package org.elasticsearch.action.support.replication;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.*;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.DummyTransportAddress;\n+import org.elasticsearch.index.shard.ShardId;\n+\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n+import static org.elasticsearch.test.ESTestCase.randomBoolean;\n+import static org.elasticsearch.test.ESTestCase.randomFrom;\n+import static org.elasticsearch.test.ESTestCase.randomIntBetween;\n+\n+/**\n+ * Helper methods for generating cluster states\n+ */\n+public class ClusterStateCreationUtils {\n+\n+\n+ /**\n+ * Creates cluster state with and index that has one shard and #(replicaStates) replicas\n+ *\n+ * @param index name of the index\n+ * @param primaryLocal if primary should coincide with the local node in the cluster state\n+ * @param primaryState state of primary\n+ * @param replicaStates states of the replicas. length of this array determines also the number of replicas\n+ */\n+ public static ClusterState state(String index, boolean primaryLocal, ShardRoutingState primaryState, ShardRoutingState... 
replicaStates) {\n+ final int numberOfReplicas = replicaStates.length;\n+\n+ int numberOfNodes = numberOfReplicas + 1;\n+ if (primaryState == ShardRoutingState.RELOCATING) {\n+ numberOfNodes++;\n+ }\n+ for (ShardRoutingState state : replicaStates) {\n+ if (state == ShardRoutingState.RELOCATING) {\n+ numberOfNodes++;\n+ }\n+ }\n+ numberOfNodes = Math.max(2, numberOfNodes); // we need a non-local master to test shard failures\n+ final ShardId shardId = new ShardId(index, 0);\n+ DiscoveryNodes.Builder discoBuilder = DiscoveryNodes.builder();\n+ Set<String> unassignedNodes = new HashSet<>();\n+ for (int i = 0; i < numberOfNodes + 1; i++) {\n+ final DiscoveryNode node = newNode(i);\n+ discoBuilder = discoBuilder.put(node);\n+ unassignedNodes.add(node.id());\n+ }\n+ discoBuilder.localNodeId(newNode(0).id());\n+ discoBuilder.masterNodeId(newNode(1).id()); // we need a non-local master to test shard failures\n+ IndexMetaData indexMetaData = IndexMetaData.builder(index).settings(Settings.builder()\n+ .put(SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, numberOfReplicas)\n+ .put(SETTING_CREATION_DATE, System.currentTimeMillis())).build();\n+\n+ RoutingTable.Builder routing = new RoutingTable.Builder();\n+ routing.addAsNew(indexMetaData);\n+ IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId);\n+\n+ String primaryNode = null;\n+ String relocatingNode = null;\n+ UnassignedInfo unassignedInfo = null;\n+ if (primaryState != ShardRoutingState.UNASSIGNED) {\n+ if (primaryLocal) {\n+ primaryNode = newNode(0).id();\n+ unassignedNodes.remove(primaryNode);\n+ } else {\n+ primaryNode = selectAndRemove(unassignedNodes);\n+ }\n+ if (primaryState == ShardRoutingState.RELOCATING) {\n+ relocatingNode = selectAndRemove(unassignedNodes);\n+ }\n+ } else {\n+ unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null);\n+ }\n+ indexShardRoutingBuilder.addShard(TestShardRouting.newShardRouting(index, 0, primaryNode, relocatingNode, null, true, primaryState, 0, unassignedInfo));\n+\n+ for (ShardRoutingState replicaState : replicaStates) {\n+ String replicaNode = null;\n+ relocatingNode = null;\n+ unassignedInfo = null;\n+ if (replicaState != ShardRoutingState.UNASSIGNED) {\n+ assert primaryNode != null : \"a replica is assigned but the primary isn't\";\n+ replicaNode = selectAndRemove(unassignedNodes);\n+ if (replicaState == ShardRoutingState.RELOCATING) {\n+ relocatingNode = selectAndRemove(unassignedNodes);\n+ }\n+ } else {\n+ unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null);\n+ }\n+ indexShardRoutingBuilder.addShard(\n+ TestShardRouting.newShardRouting(index, shardId.id(), replicaNode, relocatingNode, null, false, replicaState, 0, unassignedInfo));\n+ }\n+\n+ ClusterState.Builder state = ClusterState.builder(new ClusterName(\"test\"));\n+ state.nodes(discoBuilder);\n+ state.metaData(MetaData.builder().put(indexMetaData, false).generateClusterUuidIfNeeded());\n+ state.routingTable(RoutingTable.builder().add(IndexRoutingTable.builder(index).addIndexShard(indexShardRoutingBuilder.build())));\n+ return state.build();\n+ }\n+\n+ /**\n+ * Creates cluster state with several shards and one replica and all shards STARTED.\n+ */\n+ public static ClusterState stateWithAssignedPrimariesAndOneReplica(String index, int numberOfShards) {\n+\n+ int numberOfNodes = 2; // we need a non-local master to test shard failures\n+ DiscoveryNodes.Builder discoBuilder = 
DiscoveryNodes.builder();\n+ for (int i = 0; i < numberOfNodes + 1; i++) {\n+ final DiscoveryNode node = newNode(i);\n+ discoBuilder = discoBuilder.put(node);\n+ }\n+ discoBuilder.localNodeId(newNode(0).id());\n+ discoBuilder.masterNodeId(newNode(1).id()); // we need a non-local master to test shard failures\n+ IndexMetaData indexMetaData = IndexMetaData.builder(index).settings(Settings.builder()\n+ .put(SETTING_VERSION_CREATED, Version.CURRENT)\n+ .put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 1)\n+ .put(SETTING_CREATION_DATE, System.currentTimeMillis())).build();\n+ ClusterState.Builder state = ClusterState.builder(new ClusterName(\"test\"));\n+ state.nodes(discoBuilder);\n+ state.metaData(MetaData.builder().put(indexMetaData, false).generateClusterUuidIfNeeded());\n+ IndexRoutingTable.Builder indexRoutingTableBuilder = IndexRoutingTable.builder(index);\n+ for (int i = 0; i < numberOfShards; i++) {\n+ RoutingTable.Builder routing = new RoutingTable.Builder();\n+ routing.addAsNew(indexMetaData);\n+ final ShardId shardId = new ShardId(index, i);\n+ IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId);\n+ indexShardRoutingBuilder.addShard(TestShardRouting.newShardRouting(index, i, newNode(0).id(), null, null, true, ShardRoutingState.STARTED, 0, null));\n+ indexShardRoutingBuilder.addShard(TestShardRouting.newShardRouting(index, i, newNode(1).id(), null, null, false, ShardRoutingState.STARTED, 0, null));\n+ indexRoutingTableBuilder.addIndexShard(indexShardRoutingBuilder.build());\n+ }\n+ state.routingTable(RoutingTable.builder().add(indexRoutingTableBuilder));\n+ return state.build();\n+ }\n+\n+ /**\n+ * Creates cluster state with and index that has one shard and as many replicas as numberOfReplicas.\n+ * Primary will be STARTED in cluster state but replicas will be one of UNASSIGNED, INITIALIZING, STARTED or RELOCATING.\n+ *\n+ * @param index name of the index\n+ * @param primaryLocal if primary should coincide with the local node in the cluster state\n+ * @param numberOfReplicas number of replicas\n+ */\n+ public static ClusterState stateWithStartedPrimary(String index, boolean primaryLocal, int numberOfReplicas) {\n+ int assignedReplicas = randomIntBetween(0, numberOfReplicas);\n+ return stateWithStartedPrimary(index, primaryLocal, assignedReplicas, numberOfReplicas - assignedReplicas);\n+ }\n+\n+ /**\n+ * Creates cluster state with and index that has one shard and as many replicas as numberOfReplicas.\n+ * Primary will be STARTED in cluster state. 
Some (unassignedReplicas) will be UNASSIGNED and\n+ * some (assignedReplicas) will be one of INITIALIZING, STARTED or RELOCATING.\n+ *\n+ * @param index name of the index\n+ * @param primaryLocal if primary should coincide with the local node in the cluster state\n+ * @param assignedReplicas number of replicas that should have INITIALIZING, STARTED or RELOCATING state\n+ * @param unassignedReplicas number of replicas that should be unassigned\n+ */\n+ public static ClusterState stateWithStartedPrimary(String index, boolean primaryLocal, int assignedReplicas, int unassignedReplicas) {\n+ ShardRoutingState[] replicaStates = new ShardRoutingState[assignedReplicas + unassignedReplicas];\n+ // no point in randomizing - node assignment later on does it too.\n+ for (int i = 0; i < assignedReplicas; i++) {\n+ replicaStates[i] = randomFrom(ShardRoutingState.INITIALIZING, ShardRoutingState.STARTED, ShardRoutingState.RELOCATING);\n+ }\n+ for (int i = assignedReplicas; i < replicaStates.length; i++) {\n+ replicaStates[i] = ShardRoutingState.UNASSIGNED;\n+ }\n+ return state(index, primaryLocal, randomFrom(ShardRoutingState.STARTED, ShardRoutingState.RELOCATING), replicaStates);\n+ }\n+\n+ /**\n+ * Creates a cluster state with no index\n+ */\n+ public static ClusterState stateWithNoShard() {\n+ int numberOfNodes = 2;\n+ DiscoveryNodes.Builder discoBuilder = DiscoveryNodes.builder();\n+ Set<String> unassignedNodes = new HashSet<>();\n+ for (int i = 0; i < numberOfNodes + 1; i++) {\n+ final DiscoveryNode node = newNode(i);\n+ discoBuilder = discoBuilder.put(node);\n+ unassignedNodes.add(node.id());\n+ }\n+ discoBuilder.localNodeId(newNode(0).id());\n+ discoBuilder.masterNodeId(newNode(1).id());\n+ ClusterState.Builder state = ClusterState.builder(new ClusterName(\"test\"));\n+ state.nodes(discoBuilder);\n+ state.metaData(MetaData.builder().generateClusterUuidIfNeeded());\n+ state.routingTable(RoutingTable.builder());\n+ return state.build();\n+ }\n+\n+ private static DiscoveryNode newNode(int nodeId) {\n+ return new DiscoveryNode(\"node_\" + nodeId, DummyTransportAddress.INSTANCE, Version.CURRENT);\n+ }\n+\n+ static private String selectAndRemove(Set<String> strings) {\n+ String selection = randomFrom(strings.toArray(new String[strings.size()]));\n+ strings.remove(selection);\n+ return selection;\n+ }\n+}", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/ClusterStateCreationUtils.java", "status": "added" }, { "diff": "@@ -77,6 +77,8 @@\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n \n+import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.state;\n+import static org.elasticsearch.action.support.replication.ClusterStateCreationUtils.stateWithStartedPrimary;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n import static org.hamcrest.Matchers.*;\n \n@@ -98,6 +100,7 @@ public static void beforeClass() {\n threadPool = new ThreadPool(\"ShardReplicationTests\");\n }\n \n+ @Override\n @Before\n public void setUp() throws Exception {\n super.setUp();\n@@ -161,103 +164,6 @@ public void assertIndexShardUninitialized() {\n assertEquals(1, count.get());\n }\n \n- ClusterState stateWithStartedPrimary(String index, boolean primaryLocal, int numberOfReplicas) {\n- int assignedReplicas = randomIntBetween(0, numberOfReplicas);\n- return stateWithStartedPrimary(index, primaryLocal, assignedReplicas, numberOfReplicas - assignedReplicas);\n- }\n-\n- ClusterState 
stateWithStartedPrimary(String index, boolean primaryLocal, int assignedReplicas, int unassignedReplicas) {\n- ShardRoutingState[] replicaStates = new ShardRoutingState[assignedReplicas + unassignedReplicas];\n- // no point in randomizing - node assignment later on does it too.\n- for (int i = 0; i < assignedReplicas; i++) {\n- replicaStates[i] = randomFrom(ShardRoutingState.INITIALIZING, ShardRoutingState.STARTED, ShardRoutingState.RELOCATING);\n- }\n- for (int i = assignedReplicas; i < replicaStates.length; i++) {\n- replicaStates[i] = ShardRoutingState.UNASSIGNED;\n- }\n- return state(index, primaryLocal, randomFrom(ShardRoutingState.STARTED, ShardRoutingState.RELOCATING), replicaStates);\n- }\n-\n- ClusterState state(String index, boolean primaryLocal, ShardRoutingState primaryState, ShardRoutingState... replicaStates) {\n- final int numberOfReplicas = replicaStates.length;\n-\n- int numberOfNodes = numberOfReplicas + 1;\n- if (primaryState == ShardRoutingState.RELOCATING) {\n- numberOfNodes++;\n- }\n- for (ShardRoutingState state : replicaStates) {\n- if (state == ShardRoutingState.RELOCATING) {\n- numberOfNodes++;\n- }\n- }\n- numberOfNodes = Math.max(2, numberOfNodes); // we need a non-local master to test shard failures\n- final ShardId shardId = new ShardId(index, 0);\n- DiscoveryNodes.Builder discoBuilder = DiscoveryNodes.builder();\n- Set<String> unassignedNodes = new HashSet<>();\n- for (int i = 0; i < numberOfNodes + 1; i++) {\n- final DiscoveryNode node = newNode(i);\n- discoBuilder = discoBuilder.put(node);\n- unassignedNodes.add(node.id());\n- }\n- discoBuilder.localNodeId(newNode(0).id());\n- discoBuilder.masterNodeId(newNode(1).id()); // we need a non-local master to test shard failures\n- IndexMetaData indexMetaData = IndexMetaData.builder(index).settings(Settings.builder()\n- .put(SETTING_VERSION_CREATED, Version.CURRENT)\n- .put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, numberOfReplicas)\n- .put(SETTING_CREATION_DATE, System.currentTimeMillis())).build();\n-\n- RoutingTable.Builder routing = new RoutingTable.Builder();\n- routing.addAsNew(indexMetaData);\n- IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId);\n-\n- String primaryNode = null;\n- String relocatingNode = null;\n- UnassignedInfo unassignedInfo = null;\n- if (primaryState != ShardRoutingState.UNASSIGNED) {\n- if (primaryLocal) {\n- primaryNode = newNode(0).id();\n- unassignedNodes.remove(primaryNode);\n- } else {\n- primaryNode = selectAndRemove(unassignedNodes);\n- }\n- if (primaryState == ShardRoutingState.RELOCATING) {\n- relocatingNode = selectAndRemove(unassignedNodes);\n- }\n- } else {\n- unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null);\n- }\n- indexShardRoutingBuilder.addShard(TestShardRouting.newShardRouting(index, 0, primaryNode, relocatingNode, null, true, primaryState, 0, unassignedInfo));\n-\n- for (ShardRoutingState replicaState : replicaStates) {\n- String replicaNode = null;\n- relocatingNode = null;\n- unassignedInfo = null;\n- if (replicaState != ShardRoutingState.UNASSIGNED) {\n- assert primaryNode != null : \"a replica is assigned but the primary isn't\";\n- replicaNode = selectAndRemove(unassignedNodes);\n- if (replicaState == ShardRoutingState.RELOCATING) {\n- relocatingNode = selectAndRemove(unassignedNodes);\n- }\n- } else {\n- unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.INDEX_CREATED, null);\n- }\n- indexShardRoutingBuilder.addShard(\n- 
TestShardRouting.newShardRouting(index, shardId.id(), replicaNode, relocatingNode, null, false, replicaState, 0, unassignedInfo));\n- }\n-\n- ClusterState.Builder state = ClusterState.builder(new ClusterName(\"test\"));\n- state.nodes(discoBuilder);\n- state.metaData(MetaData.builder().put(indexMetaData, false).generateClusterUuidIfNeeded());\n- state.routingTable(RoutingTable.builder().add(IndexRoutingTable.builder(index).addIndexShard(indexShardRoutingBuilder.build())));\n- return state.build();\n- }\n-\n- private String selectAndRemove(Set<String> strings) {\n- String selection = randomFrom(strings.toArray(new String[strings.size()]));\n- strings.remove(selection);\n- return selection;\n- }\n-\n @Test\n public void testNotStartedPrimary() throws InterruptedException, ExecutionException {\n final String index = \"test\";\n@@ -527,6 +433,7 @@ public void testCounterOnPrimary() throws InterruptedException, ExecutionExcepti\n action = new ActionWithDelay(Settings.EMPTY, \"testActionWithExceptions\", transportService, clusterService, threadPool);\n final TransportReplicationAction<Request, Request, Response>.PrimaryPhase primaryPhase = action.new PrimaryPhase(request, listener);\n Thread t = new Thread() {\n+ @Override\n public void run() {\n primaryPhase.run();\n }\n@@ -587,6 +494,7 @@ public void testReplicasCounter() throws Exception {\n action = new ActionWithDelay(Settings.EMPTY, \"testActionWithExceptions\", transportService, clusterService, threadPool);\n final Action.ReplicaOperationTransportHandler replicaOperationTransportHandler = action.new ReplicaOperationTransportHandler();\n Thread t = new Thread() {\n+ @Override\n public void run() {\n try {\n replicaOperationTransportHandler.messageReceived(new Request(), createTransportChannel());\n@@ -746,10 +654,6 @@ protected boolean checkWriteConsistency() {\n }\n }\n \n- static DiscoveryNode newNode(int nodeId) {\n- return new DiscoveryNode(\"node_\" + nodeId, DummyTransportAddress.INSTANCE, Version.CURRENT);\n- }\n-\n /*\n * Throws exceptions when executed. 
Used for testing if the counter is correctly decremented in case an operation fails.\n * */", "filename": "core/src/test/java/org/elasticsearch/action/support/replication/ShardReplicationTests.java", "status": "modified" }, { "diff": "@@ -432,12 +432,12 @@ public boolean apply(Object o) {\n * state processing when a recover starts and only unblocking it shortly after the node receives\n * the ShardActiveRequest.\n */\n- static class ReclocationStartEndTracer extends MockTransportService.Tracer {\n+ public static class ReclocationStartEndTracer extends MockTransportService.Tracer {\n private final ESLogger logger;\n private final CountDownLatch beginRelocationLatch;\n private final CountDownLatch receivedShardExistsRequestLatch;\n \n- ReclocationStartEndTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch receivedShardExistsRequestLatch) {\n+ public ReclocationStartEndTracer(ESLogger logger, CountDownLatch beginRelocationLatch, CountDownLatch receivedShardExistsRequestLatch) {\n this.logger = logger;\n this.beginRelocationLatch = beginRelocationLatch;\n this.receivedShardExistsRequestLatch = receivedShardExistsRequestLatch;", "filename": "core/src/test/java/org/elasticsearch/indices/store/IndicesStoreIntegrationIT.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.OperationRouting;\n+import org.elasticsearch.cluster.routing.allocation.decider.AwarenessAllocationDecider;\n import org.elasticsearch.cluster.service.PendingClusterTask;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n@@ -55,6 +56,7 @@ public class TestClusterService implements ClusterService {\n private final Queue<NotifyTimeout> onGoingTimeouts = ConcurrentCollections.newQueue();\n private final ThreadPool threadPool;\n private final ESLogger logger = Loggers.getLogger(getClass(), Settings.EMPTY);\n+ private final OperationRouting operationRouting = new OperationRouting(Settings.Builder.EMPTY_SETTINGS, new AwarenessAllocationDecider());\n \n public TestClusterService() {\n this(ClusterState.builder(new ClusterName(\"test\")).build());\n@@ -129,7 +131,7 @@ public void removeInitialStateBlock(ClusterBlock block) throws IllegalStateExcep\n \n @Override\n public OperationRouting operationRouting() {\n- return null;\n+ return operationRouting;\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/test/cluster/TestClusterService.java", "status": "modified" }, { "diff": "@@ -67,6 +67,7 @@\n import org.elasticsearch.search.suggest.Suggest;\n import org.elasticsearch.test.VersionUtils;\n import org.elasticsearch.test.rest.client.http.HttpResponse;\n+import org.hamcrest.CoreMatchers;\n import org.hamcrest.Matcher;\n import org.hamcrest.Matchers;\n import org.junit.Assert;\n@@ -126,6 +127,22 @@ public static void assertBlocked(ActionRequestBuilder builder) {\n assertBlocked(builder, null);\n }\n \n+ /**\n+ * Checks that all shard requests of a replicated brodcast request failed due to a cluster block\n+ *\n+ * @param replicatedBroadcastResponse the response that should only contain failed shard responses\n+ *\n+ * */\n+ public static void assertBlocked(BroadcastResponse replicatedBroadcastResponse) {\n+ assertThat(\"all shard requests should have failed\", replicatedBroadcastResponse.getFailedShards(), Matchers.equalTo(replicatedBroadcastResponse.getTotalShards()));\n+ for 
(ShardOperationFailedException exception : replicatedBroadcastResponse.getShardFailures()) {\n+ ClusterBlockException clusterBlockException = (ClusterBlockException) ExceptionsHelper.unwrap(exception.getCause(), ClusterBlockException.class);\n+ assertNotNull(\"expected the cause of failure to be a ClusterBlockException but got \" + exception.getCause().getMessage(), clusterBlockException);\n+ assertThat(clusterBlockException.blocks().size(), greaterThan(0));\n+ assertThat(clusterBlockException.status(), CoreMatchers.equalTo(RestStatus.FORBIDDEN));\n+ }\n+ }\n+\n /**\n * Executes the request and fails if the request has not been blocked by a specific {@link ClusterBlock}.\n *", "filename": "core/src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java", "status": "modified" } ] }
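The cluster-state helpers in the diff above repeatedly draw node ids from a shrinking pool so that a primary, its relocation target, and every replica end up on distinct nodes. Below is a minimal, standalone sketch of that selection pattern; the class and method names are illustrative only and are not the actual Elasticsearch test classes.

``` java
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Sketch of the "selectAndRemove" idea: every shard copy gets a node id drawn from a
// shrinking pool, so no two copies of the same shard can land on the same node.
public final class NodePoolSketch {

    private static final Random RANDOM = new Random();

    static String selectAndRemove(Set<String> nodeIds) {
        String[] candidates = nodeIds.toArray(new String[0]);
        String selection = candidates[RANDOM.nextInt(candidates.length)];
        nodeIds.remove(selection);
        return selection;
    }

    public static void main(String[] args) {
        Set<String> unassigned = new HashSet<>(List.of("node_0", "node_1", "node_2", "node_3"));
        String primaryNode = selectAndRemove(unassigned);
        String replicaNode = selectAndRemove(unassigned);
        // primary and replica are guaranteed to differ because the pool shrinks on each pick
        System.out.println("primary on " + primaryNode + ", replica on " + replicaNode);
    }
}
```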
{ "body": "I noticed this when testing https://github.com/elastic/elasticsearch/pull/13039\n\nWe only use this exception today after we startup, but we should probably try to use it during startup too.\n", "comments": [ { "body": "Closed by #13041\n", "created_at": "2015-08-24T13:55:26Z" } ], "number": 13040, "title": "Use StartupException more consistently in Bootstrap for consistent console error formatting." }
{ "body": "Closes #13040\n", "number": 13041, "review_comments": [], "title": "Use StartupError to format all exceptions hitting the console" }
{ "commits": [ { "message": "Use StartupError to format all exceptions hitting the console" } ], "files": [ { "diff": "@@ -221,8 +221,17 @@ private void stop() {\n keepAliveLatch.countDown();\n }\n }\n+ \n+ /** Calls doMain(), but with special formatting of errors */\n+ public static void main(String[] args) throws StartupError {\n+ try {\n+ doMain(args);\n+ } catch (Throwable t) {\n+ throw new StartupError(t);\n+ }\n+ }\n \n- public static void main(String[] args) throws Throwable {\n+ public static void doMain(String[] args) throws Throwable {\n BootstrapCLIParser bootstrapCLIParser = new BootstrapCLIParser();\n CliTool.ExitStatus status = bootstrapCLIParser.execute(args);\n \n@@ -291,7 +300,7 @@ public static void main(String[] args) throws Throwable {\n Loggers.enableConsoleLogging();\n }\n \n- throw new StartupError(e);\n+ throw e;\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java", "status": "modified" } ] }
{ "body": "I added this:\n\n```\n...\n# action.destructive_requires_name: true\n\nSKDFLK@$#L%@KL#%L#@$#@L$ #L$@$ #L@K$#L $L $K#L#@L $#L\n!!@!@$(#%#)(@)% #(%)\n#(%#@)%@#)% (@#%()\n()#%@#% (@ )%@%(@#)% @( %)@ %(@)\n)(%)@()(%)()(#%)@#\nrmuir@beast:~/workspace/\n```\n\nNothing happens.\n", "comments": [ { "body": "it also causes all configuration to be silently ignored.\n\nHow could be done any worse?\n", "created_at": "2015-08-21T13:38:45Z" }, { "body": "There is a storm coming. I will break backwards compatibility, whatever it takes to fix this.\n", "created_at": "2015-08-21T13:42:42Z" }, { "body": "+1\n", "created_at": "2015-08-21T13:47:52Z" } ], "number": 13028, "title": "straight up garbage in elasticsearch.yml does not cause error or even warning." }
{ "body": "This commit fixes an issue that was causing Elasticsearch to silently\nignore settings files that contain garbage. The underlying issue was\nswallowing an `SettingsException` under the assumption that the only\nreason that an exception could be throw was due to the settings file\nnot existing (in this case the `IOException` would be the cause of the\nswallowed `SettingsException`). This assumption is mistaken as an\n`IOException` could also be thrown due to an access error or a read\nerror. Additionally, a `SettingsException` could be thrown exactly\nbecause garbage was found in the settings file. We should instead\nexplicitly check that the settings file exists, and bomb on an\nexception thrown for any reason.\n\nCloses #13028\n", "number": 13039, "review_comments": [ { "body": "I'm wondering if we should fail if we find several times the config file with different extensions?\n", "created_at": "2015-08-21T14:46:32Z" }, { "body": "I don't like Files.exists in general but personally I think this is the right decision for a good surgical fix!\n", "created_at": "2015-08-21T14:48:18Z" }, { "body": "I think you're right, but I don't know the history of allowing multiple settings formats and whether or not it is used in practice. If we are going to make a breaking change on this, now (pre 2.0) is most definitely the time.\n", "created_at": "2015-08-21T14:48:29Z" }, { "body": "FWIW I've used this in the past for production ES clusters to have a set of common settings (elasticsearch.yml) and node-specific settings (elasticsearch.json) to merge two files with settings.\n\nThat said, I still think it's safer/better to remove this feature and fail if more than one config file is found. It reduces the complexity for reasoning where a setting came from.\n", "created_at": "2015-08-21T14:55:43Z" }, { "body": "> still think it's safer/better to remove this feature and fail if more than one config file is found. It reduces the complexity for reasoning where a setting came from.\n\n+1\n", "created_at": "2015-08-21T14:56:33Z" }, { "body": "I'll open a separate issue for it.\n", "created_at": "2015-08-21T15:07:54Z" } ], "title": "Do not swallow exceptions thrown while parsing settings" }
{ "commits": [ { "message": "Do not swallow exceptions thrown while parsing settings\n\nThis commit fixes an issue that was causing Elasticsearch to silently\nignore settings files that contain garbage. The underlying issue was\nswallowing an SettingsException under the assumption that the only\nreason that an exception could be throw was due to the settings file\nnot existing (in this case the IOException would be the cause of the\nswallowed SettingsException). This assumption is mistaken as an\nIOException could also be thrown due to an access error or a read\nerror. Additionally, a SettingsException could be thrown exactly\nbecause garbage was found in the settings file. We should instead\nexplicitly check that the settings file exists, and bomb on an\nexception thrown for any reason.\n\nCloses #13028" } ], "files": [ { "diff": "@@ -114,10 +114,9 @@ public static Tuple<Settings, Environment> prepareSettings(Settings pSettings, b\n }\n if (loadFromEnv) {\n for (String allowedSuffix : ALLOWED_SUFFIXES) {\n- try {\n- settingsBuilder.loadFromPath(environment.configFile().resolve(\"elasticsearch\" + allowedSuffix));\n- } catch (SettingsException e) {\n- // ignore\n+ Path path = environment.configFile().resolve(\"elasticsearch\" + allowedSuffix);\n+ if (Files.exists(path)) {\n+ settingsBuilder.loadFromPath(path);\n }\n }\n }", "filename": "core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java", "status": "modified" }, { "diff": "@@ -23,15 +23,15 @@\n import org.elasticsearch.common.cli.Terminal;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.settings.SettingsException;\n import org.elasticsearch.env.Environment;\n import org.elasticsearch.test.ESTestCase;\n import org.junit.After;\n import org.junit.Before;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.io.InputStream;\n-import java.net.URL;\n-import java.net.URLClassLoader;\n import java.nio.file.Files;\n import java.nio.file.Path;\n import java.util.ArrayList;\n@@ -235,4 +235,17 @@ public String readText(String message, Object... args) {\n assertThat(settings.get(\"name\"), is(\"prompted name 0\"));\n assertThat(settings.get(\"node.name\"), is(\"prompted name 0\"));\n }\n+\n+ @Test(expected = SettingsException.class)\n+ public void testGarbageIsNotSwallowed() throws IOException {\n+ InputStream garbage = getClass().getResourceAsStream(\"/config/garbage/garbage.yml\");\n+ Path home = createTempDir();\n+ Path config = home.resolve(\"config\");\n+ Files.createDirectory(config);\n+ Files.copy(garbage, config.resolve(\"elasticsearch.yml\"));\n+ InternalSettingsPreparer.prepareSettings(settingsBuilder()\n+ .put(\"config.ignore_system_properties\", true)\n+ .put(\"path.home\", home)\n+ .build(), true);\n+ }\n }", "filename": "core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,7 @@\n+SKDFLK@$#L%@KL#%L#@$#@L$ #L$@$ #L@K$#L $L $K#L#@L $#L\n+!!@!@$(#%#)(@)% #(%)\n+#(%#@)%@#)% (@#%()\n+()#%@#% (@ )%@%(@#)% @( %)@ %(@)\n+)(%)@()(%)()(#%)@#\n+\n+node.name: \"Hiro Takachiho\"", "filename": "core/src/test/resources/config/garbage/garbage.yml", "status": "added" } ] }
{ "body": "Request:\n\n``` js\nGET _search\n{\n \"aggs\": {\n \"nested_geodistance\": {\n \"nested\": {\n \"path\": \"addresses\"\n },\n \"aggs\": {\n \"per_ring\": {\n \"geo_distance\": {\n \"field\": \"addresses.location\",\n \"unit\": \"km\",\n \"orgin\": {\n \"lat\": 56.78,\n \"lon\": 12.34\n },\n \"ranges\": [\n {\n \"from\": 100,\n \"to\": 200\n }\n ]\n }\n }\n }\n }\n }\n}\n```\n\nNote that `origin` field is mis-spelt as `orgin`. The error that comes back in the response is:\n\n```\n{\n \"error\": {\n \"root_cause\": [\n {\n \"type\": \"search_parse_exception\",\n \"reason\": \"Unexpected token START_OBJECT in [per_ring].\",\n \"line\": 11,\n \"col\": 28\n }\n ],\n \"type\": \"search_phase_execution_exception\",\n \"reason\": \"all shards failed\",\n \"phase\": \"query\",\n \"grouped\": true,\n \"failed_shards\": [\n {\n \"shard\": 0,\n \"index\": \".scripts\",\n \"node\": \"IzyWUpmzTbmgk1L8LDCRhQ\",\n \"reason\": {\n \"type\": \"search_parse_exception\",\n \"reason\": \"Unexpected token START_OBJECT in [per_ring].\",\n \"line\": 11,\n \"col\": 28\n }\n }\n ]\n },\n \"status\": 400\n}\n```\n\nThis is very confusing and doesn't actually point to the error. We should improve this\n", "comments": [ { "body": "FYI: Just now started working on this .... Will record my observations soon .\n", "created_at": "2015-08-18T15:18:51Z" }, { "body": "made a pull request : https://github.com/elastic/elasticsearch/pull/12971 .\nFeedback welcome.\n", "created_at": "2015-08-18T19:35:39Z" }, { "body": "I think we just need to print fieldname to keep consistency with other exceptions.\n", "created_at": "2015-08-21T08:19:05Z" }, { "body": "Hi Friend @xuzha , \n\nthat is what has been done ..\nmy PR takes care of \n1) what u said\n2) REMOVING the unnecessary CRYTIC parts in the message that distracts users.\n3) changing the message into a READABLE one\n4) changing all the places in that Geo Distance where such error can occur in future. and changing all into readable one. (with a NOTE: in the commit msg stating wherelse it can occur)\n5) also reusing some redundant code.\n\nBy the way,\ncomments or new opinions can be given as new review comments and not new PR's ....\nthis statement is made by me with the thinking that \"We are all working as a team . So we should not allow wasting time working on issues others are already working on. If N people work on a project and N people raise PR's on the same issue I feel that it is a waste of time\" .\n\nHappy Learning,\nThanks, it was just my opinion given to my friend.\n", "created_at": "2015-08-21T13:21:01Z" } ], "number": 12391, "title": "Cryptic error message when mis-spelling a field in geo-distance aggregation" }
{ "body": "closes #12391 \n", "number": 13033, "review_comments": [], "title": "GeoDistance Aggregation now prints field name when it finds an unexpected token." }
{ "commits": [ { "message": "Print field name when meet unexpected token.\n\ncloses #12391" } ], "files": [ { "diff": "@@ -145,8 +145,8 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n + currentFieldName + \"].\", parser.getTokenLocation());\n }\n } else {\n- throw new SearchParseException(context, \"Unexpected token \" + token + \" in [\" + aggregationName + \"].\",\n- parser.getTokenLocation());\n+ throw new SearchParseException(context, \"Unexpected token \" + token + \" in [\" + aggregationName + \"]: [\"\n+ + currentFieldName + \"].\", parser.getTokenLocation());\n }\n }\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/geodistance/GeoDistanceParser.java", "status": "modified" } ] }
{ "body": "On the REST API you can construct a request for a cardinality aggregation which has a sub-aggregation. This should not be allowed since the Cardinality aggregation is a metrics aggregation.\n\n``` js\nPOST test/doc/1\n{\n \"l\":1\n}\n\nPOST test/doc/2\n{\n \"l\":2\n}\n\nPOST test/doc/3\n{\n \"l\":3\n}\n\nPOST test/doc/4\n{\n \"l\":4\n}\n\nGET test/_search\n{\n \"size\": 0, \n \"aggs\": {\n \"card\": {\n \"cardinality\": {\n \"field\": \"l\"\n },\n \"aggs\": {\n \"terms\": {\n \"terms\": {\n \"field\": \"l\",\n \"size\": 10\n }\n }\n }\n }\n }\n}\n```\n\nThis returns \n\n```\n{\n \"took\": 5,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 4,\n \"max_score\": 0,\n \"hits\": []\n },\n \"aggregations\": {\n \"card\": {\n \"value\": 4\n }\n }\n}\n```\n\nIt should instead throw an error response back.\n", "comments": [], "number": 12988, "title": "Cardinality Agg wrongly accepts sub aggregation" }
{ "body": "The cardinality aggregation is a metric aggregation and therefore cannot accept sub-aggregations. It was previously possible to create a rest request with a cardinality aggregation that had sub-aggregations. Now such a request will throw an error in the response.\n\nClose #12988\n", "number": 12989, "review_comments": [], "title": "Throw error if cardinality aggregator has sub aggregations" }
{ "commits": [ { "message": "Aggregations: Throw error if cardinality aggregator has a sub aggregation\n\nThe cardinality aggregation is a metric aggregation and therefore cannot accept sub-aggregations. It was previously possible to create a rest request with a cardinality aggregation that had sub-aggregations. Now such a request will throw an error in the response.\n\nClose #12988" } ], "files": [ { "diff": "@@ -31,7 +31,7 @@\n import java.util.List;\n import java.util.Map;\n \n-final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory<ValuesSource> {\n+final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory.LeafOnly<ValuesSource> {\n \n private final long precisionThreshold;\n ", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java", "status": "modified" } ] }
{ "body": "In #11060 (issue #10979), the request headers and context of the originating request were copied to all sub-requests. In doing some testing on master, I noticed that in the `ScriptService#getScriptFromIndex` the headers and context were not being copied due to the lack of a `SearchContext`. To see what calls were affected, I added an assert to check if the context was null. When doing so the following tests trip the assertion:\n\n```\n- org.elasticsearch.search.aggregations.pipeline.BucketScriptIT.indexedScript\n- org.elasticsearch.search.aggregations.metrics.ScriptedMetricIT.testInitMapCombineReduce_withParams_Indexed\n- org.elasticsearch.search.aggregations.pipeline.BucketSelectorIT.indexedScript\n- org.elasticsearch.validate.RenderSearchTemplateIT.indexedTemplate\n```\n\nThere were also two rest test failures for the Render Search Template API. For the render search template api, we need a way to pass in the headers/context in a way that isn't a search context because it is not a search request.\n\nFor the other failures, it appears that the subsequent requests happen after the SearchContext has been cleared/released.\n", "comments": [ { "body": "@colings86 please could you take this.\n", "created_at": "2015-08-15T08:26:57Z" } ], "number": 12891, "title": "Requests do not always propagate headers/context when retrieving indexed scripts" }
{ "body": "At the moment if an index script is used in a request, the spawned request to get the indexed script from the `.scripts` index does not get the headers and context copied to it from the original request. This change makes the calls to the `ScriptService` pass in a `HasContextAndHeaders` object that can provide the headers and context. For the `search()` method the context and headers are retrieved from `SearchContext.current()`.\n\nCloses #12891\n", "number": 12982, "review_comments": [ { "body": "with this change in place, can you try to remove the `HasContextAndHeaders` delegation methods from `PercolateContext` and `FilteredSearchContext`?\n", "created_at": "2015-09-01T15:49:26Z" } ], "title": "Propagate Headers and Context through to ScriptService" }
{ "commits": [ { "message": "Scripting: Propagate Headers and Context through to ScriptService\n\nAt the moment if an index script is used in a request, the spawned request to get the indexed script from the `.scripts` index does not get the headers and context copied to it from the original request. This change makes the calls to the `ScriptService` pass in a `HasContextAndHeaders` object that can provide the headers and context. For the `search()` method the context and headers are retrieved from `SearchContext.current()`.\n\nCloses #12891" } ], "files": [ { "diff": "@@ -55,7 +55,7 @@ public void onFailure(Throwable t) {\n \n @Override\n protected void doRun() throws Exception {\n- ExecutableScript executable = scriptService.executable(request.template(), ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = scriptService.executable(request.template(), ScriptContext.Standard.SEARCH, request);\n BytesReference processedTemplate = (BytesReference) executable.run();\n RenderSearchTemplateResponse response = new RenderSearchTemplateResponse();\n response.source(processedTemplate);", "filename": "core/src/main/java/org/elasticsearch/action/admin/indices/validate/template/TransportRenderSearchTemplateAction.java", "status": "modified" }, { "diff": "@@ -146,7 +146,7 @@ public static PercolateResponse reduce(PercolateRequest request, AtomicReference\n PercolateResponse.Match[] matches = request.onlyCount() ? null : PercolateResponse.EMPTY;\n return new PercolateResponse(shardsResponses.length(), successfulShards, failedShards, shardFailures, tookInMillis, matches);\n } else {\n- PercolatorService.ReduceResult result = percolatorService.reduce(percolatorTypeId, shardResults);\n+ PercolatorService.ReduceResult result = percolatorService.reduce(percolatorTypeId, shardResults, request);\n long tookInMillis = Math.max(1, System.currentTimeMillis() - request.startTime);\n return new PercolateResponse(\n shardsResponses.length(), successfulShards, failedShards, shardFailures,", "filename": "core/src/main/java/org/elasticsearch/action/percolate/TransportPercolateAction.java", "status": "modified" }, { "diff": "@@ -75,7 +75,8 @@ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportReq\n @Override\n protected void moveToSecondPhase() throws Exception {\n // no need to sort, since we know we have no hits back\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(SearchPhaseController.EMPTY_DOCS, firstResults, (AtomicArray<? extends FetchSearchResultProvider>) AtomicArray.empty());\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(SearchPhaseController.EMPTY_DOCS, firstResults,\n+ (AtomicArray<? 
extends FetchSearchResultProvider>) AtomicArray.empty(), request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = buildScrollId(request.searchType(), firstResults, null);", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchCountAction.java", "status": "modified" }, { "diff": "@@ -134,7 +134,8 @@ private void finishHim() {\n @Override\n public void doRun() throws IOException {\n sortedShardList = searchPhaseController.sortDocs(true, queryFetchResults);\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults,\n+ queryFetchResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.search.type;\n \n import com.carrotsearch.hppc.IntArrayList;\n+\n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRunnable;\n@@ -39,8 +40,8 @@\n import org.elasticsearch.search.controller.SearchPhaseController;\n import org.elasticsearch.search.dfs.AggregatedDfs;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n-import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchSearchRequest;\n import org.elasticsearch.search.internal.InternalSearchResponse;\n import org.elasticsearch.search.internal.ShardSearchTransportRequest;\n import org.elasticsearch.search.query.QuerySearchRequest;\n@@ -210,7 +211,8 @@ private void finishHim() {\n threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n @Override\n public void doRun() throws IOException {\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults,\n+ fetchResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -81,7 +81,8 @@ protected void moveToSecondPhase() throws Exception {\n public void doRun() throws IOException {\n boolean useScroll = request.scroll() != null;\n sortedShardList = searchPhaseController.sortDocs(useScroll, firstResults);\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults, firstResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults,\n+ firstResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = buildScrollId(request.searchType(), firstResults, null);", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package 
org.elasticsearch.action.search.type;\n \n import com.carrotsearch.hppc.IntArrayList;\n+\n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRunnable;\n@@ -145,7 +146,8 @@ private void finishHim() {\n threadPool.executor(ThreadPool.Names.SEARCH).execute(new ActionRunnable<SearchResponse>(listener) {\n @Override\n public void doRun() throws IOException {\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults, fetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults,\n+ fetchResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.search.type;\n \n import com.google.common.collect.ImmutableMap;\n+\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -73,7 +74,8 @@ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportReq\n \n @Override\n protected void moveToSecondPhase() throws Exception {\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(SearchPhaseController.EMPTY_DOCS, firstResults, (AtomicArray<? extends FetchSearchResultProvider>) AtomicArray.empty());\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(SearchPhaseController.EMPTY_DOCS, firstResults,\n+ (AtomicArray<? 
extends FetchSearchResultProvider>) AtomicArray.empty(), request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = buildScrollId(request.searchType(), firstResults, ImmutableMap.of(\"total_hits\", Long.toString(internalResponse.hits().totalHits())));", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchScanAction.java", "status": "modified" }, { "diff": "@@ -21,7 +21,11 @@\n \n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.search.*;\n+import org.elasticsearch.action.search.ReduceSearchPhaseException;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchScrollRequest;\n+import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n@@ -188,7 +192,8 @@ private void finishHim() {\n \n private void innerFinishHim() throws Exception {\n ScoreDoc[] sortedShardList = searchPhaseController.sortDocs(true, queryFetchResults);\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults,\n+ queryFetchResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = request.scrollId();", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchScrollQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -20,9 +20,14 @@\n package org.elasticsearch.action.search.type;\n \n import com.carrotsearch.hppc.IntArrayList;\n+\n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.search.*;\n+import org.elasticsearch.action.search.ReduceSearchPhaseException;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchScrollRequest;\n+import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n@@ -239,7 +244,7 @@ private void finishHim() {\n }\n \n private void innerFinishHim() {\n- InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults);\n+ InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults, request);\n String scrollId = null;\n if (request.scroll() != null) {\n scrollId = request.scrollId();", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchScrollQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -212,7 +212,8 @@ private void innerFinishHim() throws IOException {\n docs.add(scoreDoc);\n }\n }\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(docs.toArray(new ScoreDoc[0]), queryFetchResults, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(docs.toArray(new ScoreDoc[0]), queryFetchResults,\n+ queryFetchResults, request);\n ((InternalSearchHits) 
internalResponse.hits()).totalHits = Long.parseLong(this.scrollId.getAttributes().get(\"total_hits\"));\n \n ", "filename": "core/src/main/java/org/elasticsearch/action/search/type/TransportSearchScrollScanAction.java", "status": "modified" }, { "diff": "@@ -143,7 +143,7 @@ protected ShardSuggestResponse shardOperation(ShardSuggestRequest request) {\n throw new IllegalArgumentException(\"suggest content missing\");\n }\n final SuggestionSearchContext context = suggestPhase.parseElement().parseInternal(parser, indexService.mapperService(),\n- indexService.queryParserService(), request.shardId().getIndex(), request.shardId().id());\n+ indexService.queryParserService(), request.shardId().getIndex(), request.shardId().id(), request);\n final Suggest result = suggestPhase.execute(context, searcher.searcher());\n return new ShardSuggestResponse(request.shardId(), result);\n }", "filename": "core/src/main/java/org/elasticsearch/action/suggest/TransportSuggestAction.java", "status": "modified" }, { "diff": "@@ -246,7 +246,7 @@ protected Result prepare(UpdateRequest request, final GetResult getResult) {\n private Map<String, Object> executeScript(UpdateRequest request, Map<String, Object> ctx) {\n try {\n if (scriptService != null) {\n- ExecutableScript script = scriptService.executable(request.script, ScriptContext.Standard.UPDATE);\n+ ExecutableScript script = scriptService.executable(request.script, ScriptContext.Standard.UPDATE, request);\n script.setNextVar(\"ctx\", ctx);\n script.run();\n // we need to unwrap the ctx...", "filename": "core/src/main/java/org/elasticsearch/action/update/UpdateHelper.java", "status": "modified" }, { "diff": "@@ -0,0 +1,112 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common;\n+\n+import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n+\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+\n+import java.util.Set;\n+\n+public class DelegatingHasContextAndHeaders implements HasContextAndHeaders {\n+\n+ private HasContextAndHeaders delegate;\n+\n+ public DelegatingHasContextAndHeaders(HasContextAndHeaders delegate) {\n+ this.delegate = delegate;\n+ }\n+\n+ @Override\n+ public <V> void putHeader(String key, V value) {\n+ delegate.putHeader(key, value);\n+ }\n+\n+ @Override\n+ public void copyContextAndHeadersFrom(HasContextAndHeaders other) {\n+ delegate.copyContextAndHeadersFrom(other);\n+ }\n+\n+ @Override\n+ public <V> V getHeader(String key) {\n+ return delegate.getHeader(key);\n+ }\n+\n+ @Override\n+ public boolean hasHeader(String key) {\n+ return delegate.hasHeader(key);\n+ }\n+\n+ @Override\n+ public <V> V putInContext(Object key, Object value) {\n+ return delegate.putInContext(key, value);\n+ }\n+\n+ @Override\n+ public Set<String> getHeaders() {\n+ return delegate.getHeaders();\n+ }\n+\n+ @Override\n+ public void copyHeadersFrom(HasHeaders from) {\n+ delegate.copyHeadersFrom(from);\n+ }\n+\n+ @Override\n+ public void putAllInContext(ObjectObjectAssociativeContainer<Object, Object> map) {\n+ delegate.putAllInContext(map);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key) {\n+ return delegate.getFromContext(key);\n+ }\n+\n+ @Override\n+ public <V> V getFromContext(Object key, V defaultValue) {\n+ return delegate.getFromContext(key, defaultValue);\n+ }\n+\n+ @Override\n+ public boolean hasInContext(Object key) {\n+ return delegate.hasInContext(key);\n+ }\n+\n+ @Override\n+ public int contextSize() {\n+ return delegate.contextSize();\n+ }\n+\n+ @Override\n+ public boolean isContextEmpty() {\n+ return delegate.isContextEmpty();\n+ }\n+\n+ @Override\n+ public ImmutableOpenMap<Object, Object> getContext() {\n+ return delegate.getContext();\n+ }\n+\n+ @Override\n+ public void copyContextFrom(HasContext other) {\n+ delegate.copyContextFrom(other);\n+ }\n+\n+\n+}", "filename": "core/src/main/java/org/elasticsearch/common/DelegatingHasContextAndHeaders.java", "status": "added" }, { "diff": "@@ -23,6 +23,7 @@\n import com.google.common.base.Preconditions;\n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Maps;\n+\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n@@ -199,7 +200,7 @@ public DocumentMapper(MapperService mapperService, @Nullable Settings indexSetti\n List<FieldMapper> newFieldMappers = new ArrayList<>();\n for (MetadataFieldMapper metadataMapper : this.mapping.metadataMappers) {\n if (metadataMapper instanceof FieldMapper) {\n- newFieldMappers.add((FieldMapper) metadataMapper);\n+ newFieldMappers.add(metadataMapper);\n }\n }\n MapperUtils.collect(this.mapping.root, newObjectMappers, newFieldMappers);\n@@ -452,7 +453,7 @@ public ScriptTransform(ScriptService scriptService, Script script) {\n public Map<String, Object> transformSourceAsMap(Map<String, Object> sourceAsMap) {\n try {\n // We use the ctx variable and the _source name to be consistent with the update api.\n- ExecutableScript executable = scriptService.executable(script, ScriptContext.Standard.MAPPING);\n+ ExecutableScript executable = scriptService.executable(script, 
ScriptContext.Standard.MAPPING, null);\n Map<String, Object> ctx = new HashMap<>(1);\n ctx.put(\"_source\", sourceAsMap);\n executable.setNextVar(\"ctx\", ctx);", "filename": "core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.script.ScriptContext;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.Template;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.HashMap;\n@@ -68,7 +69,7 @@ public String[] names() {\n * Parses the template query replacing template parameters with provided\n * values. Handles both submitting the template as part of the request as\n * well as referencing only the template name.\n- * \n+ *\n * @param parseContext\n * parse context containing the templated query.\n */\n@@ -77,7 +78,7 @@ public String[] names() {\n public Query parse(QueryParseContext parseContext) throws IOException {\n XContentParser parser = parseContext.parser();\n Template template = parse(parser, parseContext.parseFieldMatcher());\n- ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH, SearchContext.current());\n \n BytesReference querySource = (BytesReference) executable.run();\n ", "filename": "core/src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -32,7 +32,10 @@\n import org.elasticsearch.action.percolate.PercolateShardRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n-import org.elasticsearch.common.*;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n+import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.text.StringText;\n@@ -75,7 +78,11 @@\n import org.elasticsearch.search.scan.ScanContext;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n-import java.util.*;\n+import java.util.Collections;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.ConcurrentMap;\n \n /**\n@@ -121,7 +128,7 @@ public class PercolateContext extends SearchContext {\n public PercolateContext(PercolateShardRequest request, SearchShardTarget searchShardTarget, IndexShard indexShard,\n IndexService indexService, PageCacheRecycler pageCacheRecycler,\n BigArrays bigArrays, ScriptService scriptService, Query aliasFilter, ParseFieldMatcher parseFieldMatcher) {\n- super(parseFieldMatcher);\n+ super(parseFieldMatcher, request);\n this.indexShard = indexShard;\n this.indexService = indexService;\n this.fieldDataService = indexService.fieldData();", "filename": "core/src/main/java/org/elasticsearch/percolator/PercolateContext.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.percolator;\n \n import com.carrotsearch.hppc.IntObjectHashMap;\n+\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.ReaderUtil;\n import org.apache.lucene.index.memory.ExtendedMemoryIndex;\n@@ -40,6 +41,7 @@\n import org.elasticsearch.cluster.ClusterService;\n import 
org.elasticsearch.cluster.action.index.MappingUpdatedAction;\n import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;\n+import org.elasticsearch.common.HasContextAndHeaders;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -63,17 +65,22 @@\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n import org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.DocumentMapperForType;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.index.mapper.Mapping;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.percolator.stats.ShardPercolateService;\n import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n-import org.elasticsearch.percolator.QueryCollector.*;\n+import org.elasticsearch.percolator.QueryCollector.Count;\n+import org.elasticsearch.percolator.QueryCollector.Match;\n+import org.elasticsearch.percolator.QueryCollector.MatchAndScore;\n+import org.elasticsearch.percolator.QueryCollector.MatchAndSort;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.SearchShardTarget;\n@@ -95,7 +102,9 @@\n \n import static org.elasticsearch.common.util.CollectionUtils.eagerTransform;\n import static org.elasticsearch.index.mapper.SourceToParse.source;\n-import static org.elasticsearch.percolator.QueryCollector.*;\n+import static org.elasticsearch.percolator.QueryCollector.count;\n+import static org.elasticsearch.percolator.QueryCollector.match;\n+import static org.elasticsearch.percolator.QueryCollector.matchAndScore;\n \n public class PercolatorService extends AbstractComponent {\n \n@@ -162,9 +171,9 @@ protected MemoryIndex initialValue() {\n }\n \n \n- public ReduceResult reduce(byte percolatorTypeId, List<PercolateShardResponse> shardResults) {\n+ public ReduceResult reduce(byte percolatorTypeId, List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n PercolatorType percolatorType = percolatorTypes.get(percolatorTypeId);\n- return percolatorType.reduce(shardResults);\n+ return percolatorType.reduce(shardResults, headersContext);\n }\n \n public PercolateShardResponse percolate(PercolateShardRequest request) {\n@@ -423,7 +432,7 @@ interface PercolatorType {\n // 0x00 is reserved for empty type.\n byte id();\n \n- ReduceResult reduce(List<PercolateShardResponse> shardResults);\n+ ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext);\n \n PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested);\n \n@@ -437,14 +446,14 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n long finalCount = 0;\n for (PercolateShardResponse shardResponse : shardResults) {\n finalCount += shardResponse.count();\n }\n \n assert 
!shardResults.isEmpty();\n- InternalAggregations reducedAggregations = reduceAggregations(shardResults);\n+ InternalAggregations reducedAggregations = reduceAggregations(shardResults, headersContext);\n return new ReduceResult(finalCount, reducedAggregations);\n }\n \n@@ -481,8 +490,8 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n- return countPercolator.reduce(shardResults);\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n+ return countPercolator.reduce(shardResults, headersContext);\n }\n \n @Override\n@@ -511,7 +520,7 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n long foundMatches = 0;\n int numMatches = 0;\n for (PercolateShardResponse response : shardResults) {\n@@ -537,7 +546,7 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n assert !shardResults.isEmpty();\n- InternalAggregations reducedAggregations = reduceAggregations(shardResults);\n+ InternalAggregations reducedAggregations = reduceAggregations(shardResults, headersContext);\n return new ReduceResult(foundMatches, finalMatches.toArray(new PercolateResponse.Match[finalMatches.size()]), reducedAggregations);\n }\n \n@@ -589,8 +598,8 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n- return matchPercolator.reduce(shardResults);\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n+ return matchPercolator.reduce(shardResults, headersContext);\n }\n \n @Override\n@@ -622,8 +631,8 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n- return matchPercolator.reduce(shardResults);\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n+ return matchPercolator.reduce(shardResults, headersContext);\n }\n \n @Override\n@@ -656,7 +665,7 @@ public byte id() {\n }\n \n @Override\n- public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n+ public ReduceResult reduce(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n long foundMatches = 0;\n int nonEmptyResponses = 0;\n int firstNonEmptyIndex = 0;\n@@ -735,7 +744,7 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n assert !shardResults.isEmpty();\n- InternalAggregations reducedAggregations = reduceAggregations(shardResults);\n+ InternalAggregations reducedAggregations = reduceAggregations(shardResults, headersContext);\n return new ReduceResult(foundMatches, finalMatches.toArray(new PercolateResponse.Match[finalMatches.size()]), reducedAggregations);\n }\n \n@@ -843,7 +852,7 @@ public InternalAggregations reducedAggregations() {\n }\n }\n \n- private InternalAggregations reduceAggregations(List<PercolateShardResponse> shardResults) {\n+ private InternalAggregations reduceAggregations(List<PercolateShardResponse> shardResults, HasContextAndHeaders headersContext) {\n if (shardResults.get(0).aggregations() == null) {\n return null;\n }\n@@ -852,14 +861,15 @@ private InternalAggregations reduceAggregations(List<PercolateShardResponse> sha\n for (PercolateShardResponse shardResult : shardResults) {\n 
aggregationsList.add(shardResult.aggregations());\n }\n- InternalAggregations aggregations = InternalAggregations.reduce(aggregationsList, new ReduceContext(bigArrays, scriptService));\n+ InternalAggregations aggregations = InternalAggregations.reduce(aggregationsList, new ReduceContext(bigArrays, scriptService,\n+ headersContext));\n if (aggregations != null) {\n List<SiblingPipelineAggregator> pipelineAggregators = shardResults.get(0).pipelineAggregators();\n if (pipelineAggregators != null) {\n List<InternalAggregation> newAggs = new ArrayList<>(eagerTransform(aggregations.asList(), PipelineAggregator.AGGREGATION_TRANFORM_FUNCTION));\n for (SiblingPipelineAggregator pipelineAggregator : pipelineAggregators) {\n- InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), new ReduceContext(bigArrays,\n- scriptService));\n+ InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), new ReduceContext(\n+ bigArrays, scriptService, headersContext));\n newAggs.add(newAgg);\n }\n aggregations = new InternalAggregations(newAggs);", "filename": "core/src/main/java/org/elasticsearch/percolator/PercolatorService.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import com.google.common.cache.RemovalListener;\n import com.google.common.cache.RemovalNotification;\n import com.google.common.collect.ImmutableMap;\n+\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.delete.DeleteRequest;\n@@ -37,6 +38,7 @@\n import org.elasticsearch.action.indexedscripts.get.GetIndexedScriptRequest;\n import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptRequest;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.common.HasContextAndHeaders;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.Strings;\n@@ -114,21 +116,25 @@ public class ScriptService extends AbstractComponent implements Closeable {\n * @deprecated Use {@link org.elasticsearch.script.Script.ScriptField} instead. This should be removed in\n * 2.0\n */\n+ @Deprecated\n public static final ParseField SCRIPT_LANG = new ParseField(\"lang\",\"script_lang\");\n /**\n * @deprecated Use {@link ScriptType#getParseField()} instead. This should\n * be removed in 2.0\n */\n+ @Deprecated\n public static final ParseField SCRIPT_FILE = new ParseField(\"script_file\");\n /**\n * @deprecated Use {@link ScriptType#getParseField()} instead. This should\n * be removed in 2.0\n */\n+ @Deprecated\n public static final ParseField SCRIPT_ID = new ParseField(\"script_id\");\n /**\n * @deprecated Use {@link ScriptType#getParseField()} instead. 
This should\n * be removed in 2.0\n */\n+ @Deprecated\n public static final ParseField SCRIPT_INLINE = new ParseField(\"script\");\n \n @Inject\n@@ -220,7 +226,7 @@ private ScriptEngineService getScriptEngineServiceForFileExt(String fileExtensio\n /**\n * Checks if a script can be executed and compiles it if needed, or returns the previously compiled and cached script.\n */\n- public CompiledScript compile(Script script, ScriptContext scriptContext) {\n+ public CompiledScript compile(Script script, ScriptContext scriptContext, HasContextAndHeaders headersContext) {\n if (script == null) {\n throw new IllegalArgumentException(\"The parameter script (Script) must not be null.\");\n }\n@@ -248,14 +254,14 @@ public CompiledScript compile(Script script, ScriptContext scriptContext) {\n \" operation [\" + scriptContext.getKey() + \"] and lang [\" + lang + \"] are not supported\");\n }\n \n- return compileInternal(script);\n+ return compileInternal(script, headersContext);\n }\n \n /**\n * Compiles a script straight-away, or returns the previously compiled and cached script,\n * without checking if it can be executed based on settings.\n */\n- public CompiledScript compileInternal(Script script) {\n+ public CompiledScript compileInternal(Script script, HasContextAndHeaders context) {\n if (script == null) {\n throw new IllegalArgumentException(\"The parameter script (Script) must not be null.\");\n }\n@@ -292,7 +298,7 @@ public CompiledScript compileInternal(Script script) {\n //the script has been updated in the index since the last look up.\n final IndexedScript indexedScript = new IndexedScript(lang, name);\n name = indexedScript.id;\n- code = getScriptFromIndex(indexedScript.lang, indexedScript.id);\n+ code = getScriptFromIndex(indexedScript.lang, indexedScript.id, context);\n }\n \n String cacheKey = getCacheKey(scriptEngineService, type == ScriptType.INLINE ? 
null : name, code);\n@@ -333,13 +339,13 @@ private String validateScriptLanguage(String scriptLang) {\n return scriptLang;\n }\n \n- String getScriptFromIndex(String scriptLang, String id) {\n+ String getScriptFromIndex(String scriptLang, String id, HasContextAndHeaders context) {\n if (client == null) {\n throw new IllegalArgumentException(\"Got an indexed script with no Client registered.\");\n }\n scriptLang = validateScriptLanguage(scriptLang);\n GetRequest getRequest = new GetRequest(SCRIPT_INDEX, scriptLang, id);\n- getRequest.copyContextAndHeadersFrom(SearchContext.current());\n+ getRequest.copyContextAndHeadersFrom(context);\n GetResponse responseFields = client.get(getRequest).actionGet();\n if (responseFields.isExists()) {\n return getScriptFromResponse(responseFields);\n@@ -432,8 +438,8 @@ public static String getScriptFromResponse(GetResponse responseFields) {\n /**\n * Compiles (or retrieves from cache) and executes the provided script\n */\n- public ExecutableScript executable(Script script, ScriptContext scriptContext) {\n- return executable(compile(script, scriptContext), script.getParams());\n+ public ExecutableScript executable(Script script, ScriptContext scriptContext, HasContextAndHeaders headersContext) {\n+ return executable(compile(script, scriptContext, headersContext), script.getParams());\n }\n \n /**\n@@ -447,7 +453,7 @@ public ExecutableScript executable(CompiledScript compiledScript, Map<String, Ob\n * Compiles (or retrieves from cache) and executes the provided search script\n */\n public SearchScript search(SearchLookup lookup, Script script, ScriptContext scriptContext) {\n- CompiledScript compiledScript = compile(script, scriptContext);\n+ CompiledScript compiledScript = compile(script, scriptContext, SearchContext.current());\n return getScriptEngineServiceForLang(compiledScript.lang()).search(compiledScript, lookup, script.getParams());\n }\n ", "filename": "core/src/main/java/org/elasticsearch/script/ScriptService.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import com.carrotsearch.hppc.ObjectSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.google.common.collect.ImmutableMap;\n+\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n@@ -82,10 +83,23 @@\n import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n import org.elasticsearch.search.dfs.DfsPhase;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n-import org.elasticsearch.search.fetch.*;\n-import org.elasticsearch.search.internal.*;\n+import org.elasticsearch.search.fetch.FetchPhase;\n+import org.elasticsearch.search.fetch.FetchSearchResult;\n+import org.elasticsearch.search.fetch.QueryFetchSearchResult;\n+import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult;\n+import org.elasticsearch.search.fetch.ShardFetchRequest;\n+import org.elasticsearch.search.internal.DefaultSearchContext;\n+import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n+import org.elasticsearch.search.internal.ScrollContext;\n+import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.internal.SearchContext.Lifetime;\n-import org.elasticsearch.search.query.*;\n+import org.elasticsearch.search.internal.ShardSearchLocalRequest;\n+import org.elasticsearch.search.internal.ShardSearchRequest;\n+import org.elasticsearch.search.query.QueryPhase;\n+import org.elasticsearch.search.query.QuerySearchRequest;\n+import 
org.elasticsearch.search.query.QuerySearchResult;\n+import org.elasticsearch.search.query.QuerySearchResultProvider;\n+import org.elasticsearch.search.query.ScrollQuerySearchResult;\n import org.elasticsearch.search.warmer.IndexWarmersMetaData;\n import org.elasticsearch.threadpool.ThreadPool;\n \n@@ -736,7 +750,7 @@ private void parseTemplate(ShardSearchRequest request, SearchContext searchConte\n \n BytesReference processedQuery;\n if (request.template() != null) {\n- ExecutableScript executable = this.scriptService.executable(request.template(), ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(request.template(), ScriptContext.Standard.SEARCH, searchContext);\n processedQuery = (BytesReference) executable.run();\n } else {\n if (!hasLength(request.templateSource())) {\n@@ -753,15 +767,15 @@ private void parseTemplate(ShardSearchRequest request, SearchContext searchConte\n //Try to double parse for nested template id/file\n parser = null;\n try {\n- ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH, searchContext);\n processedQuery = (BytesReference) executable.run();\n parser = XContentFactory.xContent(processedQuery).createParser(processedQuery);\n } catch (ElasticsearchParseException epe) {\n //This was an non-nested template, the parse failure was due to this, it is safe to assume this refers to a file\n //for backwards compatibility and keep going\n template = new Template(template.getScript(), ScriptService.ScriptType.FILE, MustacheScriptEngineService.NAME,\n null, template.getParams());\n- ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH, searchContext);\n processedQuery = (BytesReference) executable.run();\n }\n if (parser != null) {\n@@ -771,15 +785,16 @@ private void parseTemplate(ShardSearchRequest request, SearchContext searchConte\n //An inner template referring to a filename or id\n template = new Template(innerTemplate.getScript(), innerTemplate.getType(),\n MustacheScriptEngineService.NAME, null, template.getParams());\n- ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH,\n+ searchContext);\n processedQuery = (BytesReference) executable.run();\n }\n } catch (ScriptParseException e) {\n // No inner template found, use original template from above\n }\n }\n } else {\n- ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH);\n+ ExecutableScript executable = this.scriptService.executable(template, ScriptContext.Standard.SEARCH, searchContext);\n processedQuery = (BytesReference) executable.run();\n }\n } catch (IOException e) {", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -18,6 +18,8 @@\n */\n package org.elasticsearch.search.aggregations;\n \n+import org.elasticsearch.common.DelegatingHasContextAndHeaders;\n+import org.elasticsearch.common.HasContextAndHeaders;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -90,12 +92,13 @@ public 
String toString() {\n }\n }\n \n- public static class ReduceContext {\n+ public static class ReduceContext extends DelegatingHasContextAndHeaders {\n \n private final BigArrays bigArrays;\n private ScriptService scriptService;\n \n- public ReduceContext(BigArrays bigArrays, ScriptService scriptService) {\n+ public ReduceContext(BigArrays bigArrays, ScriptService scriptService, HasContextAndHeaders headersContext) {\n+ super(headersContext);\n this.bigArrays = bigArrays;\n this.scriptService = scriptService;\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java", "status": "modified" }, { "diff": "@@ -60,11 +60,11 @@ public TermsAggregator.BucketCountThresholds getDefaultBucketCountThresholds() {\n \n @Override\n public void parseSpecial(String aggregationName, XContentParser parser, SearchContext context, XContentParser.Token token, String currentFieldName) throws IOException {\n- \n+\n if (token == XContentParser.Token.START_OBJECT) {\n SignificanceHeuristicParser significanceHeuristicParser = significanceHeuristicParserMapper.get(currentFieldName);\n if (significanceHeuristicParser != null) {\n- significanceHeuristic = significanceHeuristicParser.parse(parser, context.parseFieldMatcher());\n+ significanceHeuristic = significanceHeuristicParser.parse(parser, context.parseFieldMatcher(), context);\n } else if (context.parseFieldMatcher().match(currentFieldName, BACKGROUND_FILTER)) {\n filter = context.queryParserService().parseInnerFilter(parser).query();\n } else {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantTermsParametersParser.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -115,7 +116,8 @@ protected SignificanceHeuristic newHeuristic(boolean includeNegatives, boolean b\n }\n \n @Override\n- public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException {\n+ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context)\n+ throws IOException, QueryParsingException {\n String givenName = parser.currentName();\n boolean backgroundIsSuperset = true;\n XContentParser.Token token = parser.nextToken();", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/GND.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -108,7 +109,8 @@ public void writeTo(StreamOutput out) throws IOException {\n public static class JLHScoreParser implements SignificanceHeuristicParser {\n \n @Override\n- public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException {\n+ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context)\n+ throws IOException, QueryParsingException {\n // move to the closing bracket\n if 
(!parser.nextToken().equals(XContentParser.Token.END_OBJECT)) {\n throw new ElasticsearchParseException(\"failed to parse [jhl] significance heuristic. expected an empty object, but found [{}] instead\", parser.currentToken());", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/JLHScore.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -138,7 +139,8 @@ protected void checkFrequencies(long subsetFreq, long subsetSize, long supersetF\n public static abstract class NXYParser implements SignificanceHeuristicParser {\n \n @Override\n- public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException {\n+ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context)\n+ throws IOException, QueryParsingException {\n String givenName = parser.currentName();\n boolean includeNegatives = false;\n boolean backgroundIsSuperset = true;", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/NXYSignificanceHeuristic.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -57,15 +58,15 @@ public static SignificanceHeuristic readFrom(StreamInput in) throws IOException\n \n /**\n * Indicates the significance of a term in a sample by determining what percentage\n- * of all occurrences of a term are found in the sample. \n+ * of all occurrences of a term are found in the sample.\n */\n @Override\n public double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {\n checkFrequencyValidity(subsetFreq, subsetSize, supersetFreq, supersetSize, \"PercentageScore\");\n if (supersetFreq == 0) {\n // avoid a divide by zero issue\n return 0;\n- } \n+ }\n return (double) subsetFreq / (double) supersetFreq;\n }\n \n@@ -77,7 +78,8 @@ public void writeTo(StreamOutput out) throws IOException {\n public static class PercentageScoreParser implements SignificanceHeuristicParser {\n \n @Override\n- public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException {\n+ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context)\n+ throws IOException, QueryParsingException {\n // move to the closing bracket\n if (!parser.nextToken().equals(XContentParser.Token.END_OBJECT)) {\n throw new ElasticsearchParseException(\"failed to parse [percentage] significance heuristic. 
expected an empty object, but got [{}] instead\", parser.currentToken());", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/PercentageScore.java", "status": "modified" }, { "diff": "@@ -24,17 +24,21 @@\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.ParseFieldMatcher;\n-import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n-import org.elasticsearch.script.*;\n+import org.elasticsearch.script.ExecutableScript;\n+import org.elasticsearch.script.Script;\n import org.elasticsearch.script.Script.ScriptField;\n+import org.elasticsearch.script.ScriptContext;\n+import org.elasticsearch.script.ScriptParameterParser;\n import org.elasticsearch.script.ScriptParameterParser.ScriptParameterValue;\n+import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.Map;\n@@ -81,8 +85,9 @@ public ScriptHeuristic(ExecutableScript searchScript, Script script) {\n \n }\n \n+ @Override\n public void initialize(InternalAggregation.ReduceContext context) {\n- searchScript = context.scriptService().executable(script, ScriptContext.Standard.AGGS);\n+ searchScript = context.scriptService().executable(script, ScriptContext.Standard.AGGS, context);\n searchScript.setNextVar(\"_subset_freq\", subsetDfHolder);\n searchScript.setNextVar(\"_subset_size\", subsetSizeHolder);\n searchScript.setNextVar(\"_superset_freq\", supersetDfHolder);\n@@ -129,7 +134,8 @@ public ScriptHeuristicParser(ScriptService scriptService) {\n }\n \n @Override\n- public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException {\n+ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context)\n+ throws IOException, QueryParsingException {\n String heuristicName = parser.currentName();\n Script script = null;\n XContentParser.Token token;\n@@ -169,7 +175,7 @@ public SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher pars\n }\n ExecutableScript searchScript;\n try {\n- searchScript = scriptService.executable(script, ScriptContext.Standard.AGGS);\n+ searchScript = scriptService.executable(script, ScriptContext.Standard.AGGS, context);\n } catch (Exception e) {\n throw new ElasticsearchParseException(\"failed to parse [{}] significance heuristic. 
the script [{}] could not be loaded\", e, script, heuristicName);\n }\n@@ -204,21 +210,23 @@ public XContentBuilder toXContent(XContentBuilder builder, Params builderParams)\n \n public final class LongAccessor extends Number {\n public long value;\n+ @Override\n public int intValue() {\n return (int)value;\n }\n+ @Override\n public long longValue() {\n return value;\n }\n \n @Override\n public float floatValue() {\n- return (float)value;\n+ return value;\n }\n \n @Override\n public double doubleValue() {\n- return (double)value;\n+ return value;\n }\n \n @Override", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/ScriptHeuristic.java", "status": "modified" }, { "diff": "@@ -23,12 +23,14 @@\n import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.QueryParsingException;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n public interface SignificanceHeuristicParser {\n \n- SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher) throws IOException, QueryParsingException;\n+ SignificanceHeuristic parse(XContentParser parser, ParseFieldMatcher parseFieldMatcher, SearchContext context) throws IOException,\n+ QueryParsingException;\n \n String[] getNames();\n }", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificanceHeuristicParser.java", "status": "modified" }, { "diff": "@@ -91,7 +91,7 @@ public InternalAggregation doReduce(List<InternalAggregation> aggregations, Redu\n vars.putAll(firstAggregation.reduceScript.getParams());\n }\n CompiledScript compiledScript = reduceContext.scriptService().compile(firstAggregation.reduceScript,\n- ScriptContext.Standard.AGGS);\n+ ScriptContext.Standard.AGGS, reduceContext);\n ExecutableScript script = reduceContext.scriptService().executable(compiledScript, vars);\n aggregation = script.run();\n } else {", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/InternalScriptedMetric.java", "status": "modified" }, { "diff": "@@ -58,11 +58,11 @@ protected ScriptedMetricAggregator(String name, Script initScript, Script mapScr\n this.params = params;\n ScriptService scriptService = context.searchContext().scriptService();\n if (initScript != null) {\n- scriptService.executable(initScript, ScriptContext.Standard.AGGS).run();\n+ scriptService.executable(initScript, ScriptContext.Standard.AGGS, context.searchContext()).run();\n }\n this.mapScript = scriptService.search(context.searchContext().lookup(), mapScript, ScriptContext.Standard.AGGS);\n if (combineScript != null) {\n- this.combineScript = scriptService.executable(combineScript, ScriptContext.Standard.AGGS);\n+ this.combineScript = scriptService.executable(combineScript, ScriptContext.Standard.AGGS, context.searchContext());\n } else {\n this.combineScript = null;\n }\n@@ -159,7 +159,7 @@ private static Script deepCopyScript(Script script, SearchContext context) {\n return null;\n }\n }\n- \n+\n @SuppressWarnings({ \"unchecked\" })\n private static <T> T deepCopyParams(T original, SearchContext context) {\n T clone;", "filename": "core/src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java", "status": "modified" } ] }
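The diffs above all follow one pattern: any code path that may have to look up an indexed script or reduce aggregations now receives an explicit `HasContextAndHeaders` instead of reaching for `SearchContext.current()`. A minimal sketch of the resulting calling convention, assuming the Elasticsearch 2.x internal signatures shown in the diffs (the `ReducePhaseSketch` class itself is illustrative and not part of the change):

```java
import org.elasticsearch.common.HasContextAndHeaders;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.script.ExecutableScript;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptContext;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext;

// Illustrative helper showing the new calling convention: the caller supplies the
// headers context explicitly instead of the script service implicitly relying on
// SearchContext.current().
class ReducePhaseSketch {

    ReduceContext newReduceContext(BigArrays bigArrays, ScriptService scriptService,
                                   HasContextAndHeaders headersContext) {
        // ReduceContext now delegates context/header access to the supplied headersContext
        return new ReduceContext(bigArrays, scriptService, headersContext);
    }

    ExecutableScript compileForReduce(ScriptService scriptService, Script script,
                                      ReduceContext reduceContext) {
        // the reduce context itself implements HasContextAndHeaders, so it can be handed
        // straight to the script service, as ScriptHeuristic.initialize does in the diff
        return scriptService.executable(script, ScriptContext.Standard.AGGS, reduceContext);
    }
}
```

Because `ReduceContext` now extends `DelegatingHasContextAndHeaders`, reduce-time consumers such as the script-based significance heuristic can pass the reduce context itself wherever a headers context is required.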
{ "body": "When creating date fields dynamically:\n\n```\nPUT my_index/my_type/1\n{\n \"date_one\": \"2015-01-01\", \n \"date_two\": \"2015/01/01\" \n}\n```\n\nthe date matching `strict_date_optional_time` adds the `||epoch_millis` format as an alternative, but the date matching `yyyy/MM/ss` doesn't:\n\n```\n{\n \"my_index\": {\n \"mappings\": {\n \"my_type\": {\n \"properties\": {\n \"date_one\": {\n \"type\": \"date\",\n \"format\": \"strict_date_optional_time||epoch_millis\"\n },\n \"date_two\": {\n \"type\": \"date\",\n \"format\": \"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd\"\n }\n }\n }\n }\n }\n}\n```\n\nI think it should.\n", "comments": [], "number": 12873, "title": "Add epoch_ms to dynamic dates with format `yyyy/MM/dd`" }
{ "body": "Dynamic date fields mapped from dates of the form \"yyyy-MM-dd\"\nautomatically receive the millisecond paresr epoch_millis as an\nalternative parsing format. However, dynamic date fields mapped from\ndates of the form \"yyyy/MM/dd\" do not. This is a bug since the migration\ndocumentation currently specifies that a dynamically added date field,\nby default, includes the epoch_millis format. This commit adds\nepoch_millis as an alternative parser to dynamic date fields mapped from\ndates of the form \"yyyy/MM/dd\".\n\nCloses #12873\n", "number": 12977, "review_comments": [], "title": "Add millisecond parser for dynamic date fields mapped from \"yyyy/MM/dd\"" }
{ "commits": [ { "message": "Add millisecond parser for dynamic date fields mapped from \"yyyy/MM/dd\"\n\nDynamic date fields mapped from dates of the form \"yyyy-MM-dd\"\nautomatically receive the millisecond paresr epoch_millis as an\nalternative parsing format. However, dynamic date fields mapped from\ndates of the form \"yyyy/MM/dd\" do not. This is a bug since the migration\ndocumentation currently specifies that a dynamically added date field,\nby default, includes the epoch_millis format. This commit adds\nepoch_millis as an alternative parser to dynamic date fields mapped from\ndates of the form \"yyyy/MM/dd\".\n\nCloses #12873" } ], "files": [ { "diff": "@@ -275,9 +275,9 @@ public static FormatDateTimeFormatter getStrictStandardDateFormatter() {\n .toFormatter()\n .withZoneUTC();\n \n- DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder().append(longFormatter.withZone(DateTimeZone.UTC).getPrinter(), new DateTimeParser[] {longFormatter.getParser(), shortFormatter.getParser()});\n+ DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder().append(longFormatter.withZone(DateTimeZone.UTC).getPrinter(), new DateTimeParser[]{longFormatter.getParser(), shortFormatter.getParser(), new EpochTimeParser(true)});\n \n- return new FormatDateTimeFormatter(\"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd\", builder.toFormatter().withZone(DateTimeZone.UTC), Locale.ROOT);\n+ return new FormatDateTimeFormatter(\"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis\", builder.toFormatter().withZone(DateTimeZone.UTC), Locale.ROOT);\n }\n \n ", "filename": "core/src/main/java/org/elasticsearch/common/joda/Joda.java", "status": "modified" }, { "diff": "@@ -83,6 +83,9 @@ public void testAutomaticDateParser() throws Exception {\n \n FieldMapper fieldMapper = defaultMapper.mappers().smartNameFieldMapper(\"date_field1\");\n assertThat(fieldMapper, instanceOf(DateFieldMapper.class));\n+ DateFieldMapper dateFieldMapper = (DateFieldMapper)fieldMapper;\n+ assertEquals(\"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd||epoch_millis\", dateFieldMapper.fieldType().dateTimeFormatter().format());\n+ assertEquals(1265587200000L, dateFieldMapper.fieldType().dateTimeFormatter().parser().parseMillis(\"1265587200000\"));\n fieldMapper = defaultMapper.mappers().smartNameFieldMapper(\"date_field2\");\n assertThat(fieldMapper, instanceOf(DateFieldMapper.class));\n ", "filename": "core/src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java", "status": "modified" } ] }
{ "body": "and use the IndexSeacher directly from the engine searcher.\n", "comments": [ { "body": "Changed the version to `2.0.0`. The second commit is larger than I want to it be for beta1.\n", "created_at": "2015-08-14T10:18:52Z" }, { "body": "@jpountz I updated the PR based on the just pushed changes in ContextIndexSearcher.\n", "created_at": "2015-08-14T14:05:37Z" }, { "body": "Isn't the fact that ContextIndexSearcher calls `searchContext.clearReleasables(Lifetime.COLLECTION)` going to be an issue for p/c queries as these queries build recyclable data-structures in createWeight that they later reuse in their scorer?\n", "created_at": "2015-08-14T14:12:06Z" }, { "body": "I don't think that is an issue, because by the time the ContextIndexSearcher is going to clear the releasables the p/c query has already executed completely. (the search in #doCreateWeight() and the scores created by the weight have then already done their jobs)\n", "created_at": "2015-08-14T14:18:14Z" }, { "body": "I'm not sure I understand why it is safe yet. For instance, ParentQuery.createWeight calls IndexSearcher.search, so clearReleasables will be called both in ParentQuery.createWeight and after execution of the main search request terminates?\n", "created_at": "2015-08-17T13:09:48Z" }, { "body": "right now I see, it is weird that `mvn verify` didn't fail last friday...\n\nWhat I tried to get around this if add an extra search method to ContextIndexSearcher that search features which use releasable data structures and that run an extra search during the main search should use. This search method doesn't release the data structures, so that this can be done after the main search has been completed.\n", "created_at": "2015-08-17T13:35:29Z" }, { "body": "Ideally, I think the best option would be to push back the clearReleasables calls from ContextIndexSearcher to the callers. In case there are many callers, an ok-ish in-between might be to have the logic on Engine.Searcher?\n", "created_at": "2015-08-17T13:40:44Z" }, { "body": "@jpountz I moved the clearReleasables part from the ContexIndexSearcher to SearchContext#search(...), with the reason the on places the ContextIndexSearcher is used, the search context is always available. Whereas if the logic would be added to `Searcher.Engine` class the engine searcher impl isn't always as type `ContexIndexSearcher`.\n", "created_at": "2015-08-17T21:28:30Z" }, { "body": "@jpountz I updated the PR and removed all `clearReleasables()` calls from ContextIndexSearcher.\n\nI wasn't able to `s/sc.searcher()/searcher/` in p/c queries, because in the case of a dfs_\\* search the provided searcher can be of type CachedDfSource and that implementation can't be used for searching.\n\nMaybe as a followup PR we can merge CachedDfSource into ContextIndexSearcher? \n", "created_at": "2015-08-18T13:16:30Z" }, { "body": "@jpountz I updated the PR so that it works based on the changes in #12973\n (now the `s/sc.searcher()/searcher/` change is possible to make)\n", "created_at": "2015-08-19T20:56:38Z" }, { "body": "LGTM\n", "created_at": "2015-08-26T11:56:53Z" } ], "number": 12864, "title": "Remove unnecessary usage of extra index searchers" }
{ "body": "and move the dfs logic to the ContextIndexSearcher.\n\nThis PR relates to #12864\n", "number": 12973, "review_comments": [ { "body": "Wouldn't the 3 above lines be equivalent to just doing: `return super.createNormalizedWeight(query, needsScores);`?\n", "created_at": "2015-08-19T08:19:07Z" }, { "body": "During tests the wrapped `IndexSearcher` is an `AssertingIndexSearch` and if we wouldn't delegate to `in` we would miss assertions during tests?\n", "created_at": "2015-08-19T08:22:10Z" }, { "body": "ok, can you add a comment saying this?\n", "created_at": "2015-08-19T08:23:29Z" }, { "body": "yes, I can :)\n\nOn 19 August 2015 at 10:23, Adrien Grand notifications@github.com wrote:\n\n> In\n> core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java\n> https://github.com/elastic/elasticsearch/pull/12973#discussion_r37390259\n> :\n> \n> > @@ -77,8 +77,8 @@ public Query rewrite(Query original) throws IOException {\n> > public Weight createNormalizedWeight(Query query, boolean needsScores) throws IOException {\n> > try {\n> > // if scores are needed and we have dfs data then use it\n> > - if (dfSource != null && needsScores) {\n> > - return dfSource.createNormalizedWeight(query, needsScores);\n> > - if (aggregatedDfs != null && needsScores) {\n> > - return super.createNormalizedWeight(query, needsScores);\n> > }\n> > return in.createNormalizedWeight(query, needsScores);\n> \n> ok, can you add a comment saying this?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elastic/elasticsearch/pull/12973/files#r37390259.\n\n## \n\nMet vriendelijke groet,\n\nMartijn van Groningen\n", "created_at": "2015-08-19T08:28:48Z" } ], "title": "Remove CachedDfSource" }
{ "commits": [ { "message": "Removed CachedDfSource and move the dfs logic into the ContextIndexSearcher" } ], "files": [ { "diff": "@@ -26,7 +26,6 @@\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n-import org.apache.lucene.search.QueryCachingPolicy;\n import org.apache.lucene.search.TopDocs;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n@@ -54,7 +53,6 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.IndexService;\n-import org.elasticsearch.index.cache.IndexCache;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.fielddata.IndexFieldData;\n@@ -82,7 +80,6 @@\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.Template;\n import org.elasticsearch.script.mustache.MustacheScriptEngineService;\n-import org.elasticsearch.search.dfs.CachedDfSource;\n import org.elasticsearch.search.dfs.DfsPhase;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n import org.elasticsearch.search.fetch.*;\n@@ -412,17 +409,8 @@ public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest req\n public QuerySearchResult executeQueryPhase(QuerySearchRequest request) {\n final SearchContext context = findContext(request.id());\n contextProcessing(context);\n+ context.searcher().setAggregatedDfs(request.dfs());\n IndexShard indexShard = context.indexShard();\n- try {\n- final IndexCache indexCache = indexShard.indexService().cache();\n- final QueryCachingPolicy cachingPolicy = indexShard.getQueryCachingPolicy();\n- context.searcher().dfSource(new CachedDfSource(context.searcher().getIndexReader(), request.dfs(), context.similarityService().similarity(),\n- indexCache.query(), cachingPolicy));\n- } catch (Throwable e) {\n- processFailure(context, e);\n- cleanContext(context);\n- throw new QueryPhaseExecutionException(context, \"Failed to set aggregated df\", e);\n- }\n ShardSearchStats shardSearchStats = indexShard.searchService();\n try {\n shardSearchStats.onPreQueryPhase(context);\n@@ -488,17 +476,7 @@ public QueryFetchSearchResult executeFetchPhase(ShardSearchRequest request) {\n public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) {\n final SearchContext context = findContext(request.id());\n contextProcessing(context);\n- try {\n- final IndexShard indexShard = context.indexShard();\n- final IndexCache indexCache = indexShard.indexService().cache();\n- final QueryCachingPolicy cachingPolicy = indexShard.getQueryCachingPolicy();\n- context.searcher().dfSource(new CachedDfSource(context.searcher().getIndexReader(), request.dfs(), context.similarityService().similarity(),\n- indexCache.query(), cachingPolicy));\n- } catch (Throwable e) {\n- freeContext(context.id());\n- cleanContext(context);\n- throw new QueryPhaseExecutionException(context, \"Failed to set aggregated df\", e);\n- }\n+ context.searcher().setAggregatedDfs(request.dfs());\n try {\n ShardSearchStats shardSearchStats = context.indexShard().searchService();\n shardSearchStats.onPreQueryPhase(context);", "filename": "core/src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -20,15 +20,13 @@\n package org.elasticsearch.search.internal;\n \n import org.apache.lucene.index.LeafReaderContext;\n-import 
org.apache.lucene.search.Collector;\n-import org.apache.lucene.search.Explanation;\n-import org.apache.lucene.search.IndexSearcher;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.Weight;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.index.TermContext;\n+import org.apache.lucene.search.*;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.index.engine.Engine;\n-import org.elasticsearch.search.dfs.CachedDfSource;\n+import org.elasticsearch.search.dfs.AggregatedDfs;\n import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n@@ -46,21 +44,23 @@ public class ContextIndexSearcher extends IndexSearcher implements Releasable {\n \n private final SearchContext searchContext;\n \n- private CachedDfSource dfSource;\n+ private AggregatedDfs aggregatedDfs;\n \n public ContextIndexSearcher(SearchContext searchContext, Engine.Searcher searcher) {\n super(searcher.reader());\n in = searcher.searcher();\n this.searchContext = searchContext;\n setSimilarity(searcher.searcher().getSimilarity(true));\n+ setQueryCache(searchContext.indexShard().indexService().cache().query());\n+ setQueryCachingPolicy(searchContext.indexShard().getQueryCachingPolicy());\n }\n \n @Override\n public void close() {\n }\n \n- public void dfSource(CachedDfSource dfSource) {\n- this.dfSource = dfSource;\n+ public void setAggregatedDfs(AggregatedDfs aggregatedDfs) {\n+ this.aggregatedDfs = aggregatedDfs;\n }\n \n @Override\n@@ -75,10 +75,12 @@ public Query rewrite(Query original) throws IOException {\n \n @Override\n public Weight createNormalizedWeight(Query query, boolean needsScores) throws IOException {\n+ // During tests we prefer to use the wrapped IndexSearcher, because then we use the AssertingIndexSearcher\n+ // it is hacky, because if we perform a dfs search, we don't use the wrapped IndexSearcher...\n try {\n // if scores are needed and we have dfs data then use it\n- if (dfSource != null && needsScores) {\n- return dfSource.createNormalizedWeight(query, needsScores);\n+ if (aggregatedDfs != null && needsScores) {\n+ return super.createNormalizedWeight(query, needsScores);\n }\n return in.createNormalizedWeight(query, needsScores);\n } catch (Throwable t) {\n@@ -104,4 +106,32 @@ protected void search(List<LeafReaderContext> leaves, Weight weight, Collector c\n searchContext.clearReleasables(Lifetime.COLLECTION);\n }\n }\n+\n+ @Override\n+ public TermStatistics termStatistics(Term term, TermContext context) throws IOException {\n+ if (aggregatedDfs == null) {\n+ // we are either executing the dfs phase or the search_type doesn't include the dfs phase.\n+ return super.termStatistics(term, context);\n+ }\n+ TermStatistics termStatistics = aggregatedDfs.termStatistics().get(term);\n+ if (termStatistics == null) {\n+ // we don't have stats for this - this might be a must_not clauses etc. 
that doesn't allow extract terms on the query\n+ return super.termStatistics(term, context);\n+ }\n+ return termStatistics;\n+ }\n+\n+ @Override\n+ public CollectionStatistics collectionStatistics(String field) throws IOException {\n+ if (aggregatedDfs == null) {\n+ // we are either executing the dfs phase or the search_type doesn't include the dfs phase.\n+ return super.collectionStatistics(field);\n+ }\n+ CollectionStatistics collectionStatistics = aggregatedDfs.fieldStatistics().get(field);\n+ if (collectionStatistics == null) {\n+ // we don't have stats for this - this might be a must_not clauses etc. that doesn't allow extract terms on the query\n+ return super.collectionStatistics(field);\n+ }\n+ return collectionStatistics;\n+ }\n }", "filename": "core/src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java", "status": "modified" }, { "diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.node.Node;\n@@ -46,12 +45,9 @@\n import org.junit.After;\n import org.junit.AfterClass;\n import org.junit.BeforeClass;\n-import org.junit.Ignore;\n \n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.lessThanOrEqualTo;\n+import static org.hamcrest.Matchers.*;\n \n /**\n * A test that keep a singleton node started for all tests that can be used to get\n@@ -225,7 +221,7 @@ protected static SearchContext createSearchContext(IndexService indexService) {\n BigArrays bigArrays = indexService.injector().getInstance(BigArrays.class);\n ThreadPool threadPool = indexService.injector().getInstance(ThreadPool.class);\n PageCacheRecycler pageCacheRecycler = indexService.injector().getInstance(PageCacheRecycler.class);\n- return new TestSearchContext(threadPool, pageCacheRecycler, bigArrays, indexService, indexService.cache().query(), indexService.fieldData());\n+ return new TestSearchContext(threadPool, pageCacheRecycler, bigArrays, indexService);\n }\n \n /**", "filename": "core/src/test/java/org/elasticsearch/test/ESSingleNodeTestCase.java", "status": "modified" }, { "diff": "@@ -19,22 +19,19 @@\n package org.elasticsearch.test;\n \n import com.carrotsearch.hppc.ObjectObjectAssociativeContainer;\n-\n-import org.apache.lucene.search.Collector;\n-import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.Query;\n-import org.apache.lucene.search.ScoreDoc;\n-import org.apache.lucene.search.Sort;\n+import org.apache.lucene.search.*;\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n-import org.elasticsearch.common.*;\n+import org.elasticsearch.common.HasContext;\n+import org.elasticsearch.common.HasContextAndHeaders;\n+import org.elasticsearch.common.HasHeaders;\n+import org.elasticsearch.common.ParseFieldMatcher;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n-import 
org.elasticsearch.index.cache.query.QueryCache;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -76,6 +73,7 @@ public class TestSearchContext extends SearchContext {\n final BitsetFilterCache fixedBitSetFilterCache;\n final ThreadPool threadPool;\n final Map<Class<?>, Collector> queryCollectors = new HashMap<>();\n+ final IndexShard indexShard;\n \n ContextIndexSearcher searcher;\n int size;\n@@ -86,14 +84,15 @@ public class TestSearchContext extends SearchContext {\n private final long originNanoTime = System.nanoTime();\n private final Map<String, FetchSubPhaseContext> subPhaseContexts = new HashMap<>();\n \n- public TestSearchContext(ThreadPool threadPool,PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, IndexService indexService, QueryCache filterCache, IndexFieldDataService indexFieldDataService) {\n+ public TestSearchContext(ThreadPool threadPool,PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, IndexService indexService) {\n super(ParseFieldMatcher.STRICT);\n this.pageCacheRecycler = pageCacheRecycler;\n this.bigArrays = bigArrays.withCircuitBreaking();\n this.indexService = indexService;\n this.indexFieldDataService = indexService.fieldData();\n this.fixedBitSetFilterCache = indexService.bitsetFilterCache();\n this.threadPool = threadPool;\n+ this.indexShard = indexService.shard(0);\n }\n \n public TestSearchContext() {\n@@ -104,6 +103,7 @@ public TestSearchContext() {\n this.indexFieldDataService = null;\n this.threadPool = null;\n this.fixedBitSetFilterCache = null;\n+ this.indexShard = null;\n }\n \n public void setTypes(String... types) {\n@@ -282,7 +282,7 @@ public void setSearcher(ContextIndexSearcher searcher) {\n \n @Override\n public IndexShard indexShard() {\n- return null;\n+ return indexShard;\n }\n \n @Override", "filename": "core/src/test/java/org/elasticsearch/test/TestSearchContext.java", "status": "modified" } ] }